Have you recently driven through a city after a snowstorm?
If so, you’ve probably seen that LED traffic lights don’t melt snow very well. It seems obvious in hindsight. LEDs don’t use much energy, and so they produce far less heat than an incandescent bulb. You’d think that, in colder areas, this would have been part of the conversation around the choice to switch.
Unfortunately, this issue has been catching municipalities around the world by surprise as they convert their traffic signals to LEDs. Well over a decade after LEDs’ first widespread introduction in traffic lights, people are still trying to find cost-effective ways to solve the problem.
How Could This Happen?
So why wasn’t this issue addressed when municipalities were looking to switch?
Let’s think about the aspects of a traffic bulb that you might consider:
- What does it cost?
- How much power does it use?
- How long does it last?
- How bright is it?
- Is it compatible with my existing traffic lights?
Looking at these aspects of a bulb, you might conclude that switching over to LEDs had zero downsides – other than a higher upfront cost, which would be rapidly offset by their lower operating costs. That assumes the LEDs last as long as claimed, which is also a problem, but that’s a story for another time.
While incandescent bulbs have many unimportant aspects, they have one important emergent property: the heat they produce.
When municipalities looked at switching to LEDs, this property was never considered because it had never been part of the reason incandescent bulbs were chosen. In other words, the incandescent bulbs had an incidental feature – producing heat – that the customer depended on but never explicitly asked for.
Does this sound familiar?
If you’re a UX designer or a software engineer, you’ve probably run into this plenty of times before. Some people call them “emergent features” or “accidental features” or “undocumented features”. In any complex system these can manifest themselves in a few forms:
- A detail of the system that wasn’t explicit, but has come to be depended on – an example is the ordering of columns in a data export.
- Emergent behavior coming from the interaction of features within a system – common when search or navigation doesn’t match typical workflows.
- Patterns of user behavior forming around implementation details or bugs – bugs in a user interface often act as a signifier that can be meaningful to power users.
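To make the first form concrete, here’s a minimal sketch of how a consumer can come to depend on undocumented column ordering. The export format, field names, and consumer script are all invented for illustration:

```python
import csv
import io

def export_todos(todos):
    """Write todos to CSV. The column order was never documented,
    but it has been stable for years: name, assignee, due_date."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "assignee", "due_date"])
    for todo in todos:
        writer.writerow([todo["name"], todo["assignee"], todo["due_date"]])
    return buf.getvalue()

def customer_integration(csv_text):
    """A downstream consumer that indexes columns by position
    instead of by header name -- an accidental dependency."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    return [row[2] for row in rows[1:]]  # assumes due_date is column 2

todos = [{"name": "Ship release", "assignee": "Ana", "due_date": "2024-06-01"}]
print(customer_integration(export_todos(todos)))  # ['2024-06-01']
# Reordering the export's columns would silently break this consumer.
```

Nothing in the export promised that `due_date` would stay in the third column, but once a customer ships code like `row[2]`, that ordering has become part of your interface whether you like it or not.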
It can be hard to think about these in the abstract. So let’s imagine a simple todo list application.
It’s Not a Bug. It’s a Feature.
The main page has a list of todos with the following attributes:
- Each row has a name and due date, both required fields.
- If you hover over the due date, a tooltip shows the optional person assigned to the task.
- There’s a bug in the UI, though: the due date only shows up if a person is also assigned.
- It’s not a huge bug – just a bad assumption that renders invalid due-date HTML when no person is assigned.
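A hypothetical sketch of what that rendering bug might look like. The field names and markup are invented, and the real bug produced invalid HTML that the browser dropped, which we model here by rendering nothing:

```python
def render_due_date(todo):
    """Buggy version: the markup assumes a tooltip target exists,
    so the due date is only rendered when a person is assigned."""
    if todo.get("assignee"):
        return (f'<span title="Assigned to {todo["assignee"]}">'
                f'{todo["due_date"]}</span>')
    # Bad assumption: with no assignee, the template produced invalid
    # markup, so the due date silently isn't shown at all.
    return ""

def render_due_date_fixed(todo):
    """'Fixed' version: always show the due date."""
    if todo.get("assignee"):
        return (f'<span title="Assigned to {todo["assignee"]}">'
                f'{todo["due_date"]}</span>')
    return f'<span>{todo["due_date"]}</span>'
```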
As part of other work, you fix it. Now the due date always shows regardless of the person assigned. Huzzah, an improvement!
Oh no, users are complaining. What happened?
It turns out users noticed the pattern of the bug and derived meaning from it. They used the due date visibility as a quick way to scan the list of todos, looking for any that needed a person assigned. Always showing the due date broke this behavior for them.
This example also demonstrates the point that while emergent behavior always has something to teach you, it is not always desirable. The correct solution to the todo list problem above is probably not to revert to the buggy behavior. Instead, the goal should be to understand the users’ need and provide a better, more intuitive signifier.
Accidental features might not seem like a big deal, but they can cause significant issues for your team. Usually this problem rears its ugly head when you do a release and suddenly customers start complaining.
You start looking at the issues customers are reporting and you realize that Bill in engineering “fixed” the capitalization on the header in a data export and broke dozens of customer integrations.
Not only did you waste time researching and fixing a problem that didn’t need to happen, but you’ve also eroded your customers’ trust. Avoiding these types of problems is hugely beneficial, but identifying accidental features can be quite difficult.
You typically cannot just ask your users, because often they aren’t even aware of the dependency or that it might be accidental.
But there are a few things you can do to find and mitigate them:
- Get some experience – That sounds like a joke, but after you’ve worked in enough systems you start to get a feeling for where these pop up. This definitely won’t find all of them, but as they say, an ounce of prevention is worth a pound of cure.
- User shadowing – This is the best way of finding accidental features, but some people will say it isn’t realistic because it is so time-consuming. While I agree it is time-consuming, the goal of shadowing users isn’t to find accidental features, it is to better understand your users. Any time you invest in better understanding your users will pay for itself many times over, and finding accidental features is just a nice side effect!
- Feature flags – Feature flags in their most basic form allow you to deploy a feature and then quickly turn it off or rollback the behavior. In a slightly more complex form, you can use feature flags to deploy features to subsets of users so that you’re able to identify issues before you do a deploy to your entire user base.
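As a rough illustration of the “subsets of users” form, a percentage rollout flag can be as simple as hashing a user id into a stable bucket. The flag names and in-memory storage here are invented; a real system would back this with a flag service or database:

```python
import hashlib

# Invented example flags; real systems store these in a flag service.
FLAGS = {"always_show_due_date": {"enabled": True, "rollout_percent": 10}}

def flag_enabled(flag_name, user_id):
    """Minimal percentage rollout: hash the flag+user id so each user
    gets a stable yes/no answer as the percentage ramps up."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < flag["rollout_percent"]

# In the rendering code:
# if flag_enabled("always_show_due_date", user_id):
#     ...new behavior...
# else:
#     ...old behavior...
```

Because the bucket is derived from a hash rather than a random draw, a given user sees consistent behavior across requests, and raising `rollout_percent` only ever adds users to the new behavior.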
- Automated deployments – Automated deployments simply allow you to do deploys quickly in response to an issue that arises. If a feature can’t be put behind a feature flag for some reason, automated deployments will give you the ability to undo your changes quickly.
- Canary deployments – If you’re operating in a very sophisticated environment, canary deployments might be an option for you. They roll a release out to a small slice of customers automatically, and roll it back if significant changes in user behavior are detected.
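A toy sketch of the canary rollback decision. Real canary analysis uses proper statistical comparison between cohorts; this fixed-tolerance check on error rates is only illustrative, and the numbers are made up:

```python
def should_roll_back(baseline_errors, baseline_requests,
                     canary_errors, canary_requests,
                     tolerance=0.01):
    """Toy canary check: roll back if the canary cohort's error rate
    exceeds the baseline cohort's by more than `tolerance`."""
    baseline_rate = baseline_errors / baseline_requests
    canary_rate = canary_errors / canary_requests
    return canary_rate > baseline_rate + tolerance

print(should_roll_back(50, 10_000, 30, 1_000))  # error rate jumped: True
print(should_roll_back(50, 10_000, 6, 1_000))   # within tolerance: False
```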
Understanding Users and Their Environment
As with many problems in software development, this is fundamentally a matter of understanding your users. Understanding the people using your software and the contexts they inhabit is the best way to uncover accidental features.
Investing in shadowing users and understanding their real-world workflows will reveal more insights than you can imagine. If you’re not currently viewing your system through the eyes of a user, then you’re really only guessing at how your system is being used, which is a precarious situation to be in.
Think back to the traffic lights. The municipalities understood the technical aspects of the problem at hand, but what they couldn’t see was an important aspect of how the incandescent bulb operated in the context of the real world. A meaningful understanding of the problem is necessary, but it isn’t sufficient for understanding how a system operates in the wild.
Deep thinking and empathy are important, but nothing can replace seeing a complex system being used in the real world. Quality engineering and robust test processes are important, but nothing can replace getting feedback from real users.
Accidental features and emergent behavior will always exist in a system, but the earlier they can be identified, the sooner they can be incorporated into the design. Incorporating them intentionally into the design means you’re providing your users with an experience that can be confidently iterated on without breaking their important workflows. And since no product is ever done, this is a goal we can all get behind.