We’ve all interacted with unique interfaces. Whether it’s your DSLR, Nest, or blender, every device has a physical interface, the face you interact with. While good physical design is essential to help sell a product, other factors like user experience and the value it provides are equally important for a device’s long-term success.
For instance, adding a screen to a hardware product increases the number of features it can support, going far beyond what a few buttons and dials would allow. Screens let users access menus, sub-menus, and settings, unlocking a whole new way to interact with the device.
But screens are also expensive, which leads us to the balancing act of crafting great hardware products: cost versus customer experience. If you add a screen, customers would prefer a super-sharp OLED that works like the smartphone they already use, but that screen, the chips to drive it, and the software development would cost more than most consumers are willing to pay.
Part of my role in bringing the Wynd Halo environment sensor to market was crafting a user interface that struck that balance between usability and cost. This is the process I followed to design a usable interface for a hardware device.
The process
At its core, the Wynd Halo is a box full of sensors that can tell you everything from the number of particles in your air to the risk of mold growing at home. Mike Nuttall, an esteemed industrial designer and co-founder of the design firm IDEO, created Wynd Halo’s physical design. Once he was done, he handed it over to us to figure out user interaction and how to cram all our sensors into his incredible case.
The engineers started making prototypes while I started figuring out how users would use this thing.
Should we have a touch screen, twisting dial, buttons, capacitive touch, or no input at all with just the app to control everything?
I had no clue what our customers would want so I made a survey and sent it out to all the people who already bought our products.
The problem
- We were designing a new interface with new usability patterns.
- We weren’t sure how people would intuitively interact with the device.
- The house techie is our customer, but everyone in the home is our user.
A survey
I made a Google Form survey asking our existing customers how they use their internet-connected devices today and what features might interest them. I sent it out to our thousands of customers and received over 300 responses.
The key takeaways were:
- 56% male respondents
- 55% of customers have additional internet connected devices
- 52% said price was their biggest frustration with these internet devices
- 79% interact with them via the app, 70% via voice, and 38% via buttons.
- They wanted to know whether their air was good or bad, but most didn’t know what we were measuring in it
- 78% rated their concern about the air they or a loved one breathe at a 4 or 5 on a 1–5 importance scale, with 5 being vital
- People are very concerned about their air quality, but don’t necessarily understand it so well.
- Price is important to purchase decision
The next step was talking with customers. I reached out to some of our existing customers and asked them the survey questions to dive deeper into the biggest pain points and where we could improve our device.
The key takeaways from these meetings were:
- The devices draw attention to themselves either via unnecessary push notifications or chimes.
- Customers are excited to have a broader view of their home environment
- Customers with internet of things (IoT) devices want Halo to integrate with them
- Controlling remotely is vital
- Keeping the price down is vital
- Customers are wary of how difficult it is to set up some IoT devices
Competitive Analysis
The next piece of the puzzle was to look at what competitors were doing. This was pivotal in figuring out where customer needs remain unmet in the market.
I reviewed similar products like the Laser Egg, Nest, Awair, Awair Glow, Plume Flow, and Flare temperature sensor, which were the popular brands in consumer air quality products at the time.
After critical investigation of these brands, the main takeaways I found were:
- A screen is a better experience than an app or blinking lights since it requires less effort
- Lights are indicators to look at an app to figure out what the problem is
- When a device says the air is bad for a long time, it’s mildly stressful
- My girlfriend is less tech savvy than me, so she could only use the devices with screens, not the ones controlled through apps
- Screens make it more accessible to everyone.
Considering the five solid points I inferred from these investigations, the opportunities started to present themselves:
- Useful screen to inform and control
- Intuitive interface that anyone could read and immediately interact with
- Connecting additional devices to solve the problem of an always-red light
User personas
At this point, I had gathered substantial information about our target market: what they want, what was missing in existing technologies, and where we could improve usability and the overall user experience.
Based on everything gathered from the surveys and interviews, I created three user personas:
- Techie — People who digitize just about everything in their homes; air quality control is no exception.
- Family — Not everyone is a techie. Some people just want a better environment for their family, and this category encompasses them.
- Allergy sufferer — Last but certainly not least, people who suffer allergic reactions from allergens in their air.
Now that we had grouped our potential market into three categories, we had to design an interface that every one of them could operate easily, which brings us to the next step: information architecture.
Information Architecture
Information Architecture simply means how the information is laid out in an app, website, or — in our case — a hardware device. Since the interactions with the Wynd Halo would be short and quick, I opted to reduce how much could be done on the device for information and control.
The information aspect tells users exactly what is floating around in their indoor air, breaking it down in detail, such as the kind of allergen or particle present. The control aspect lets them push a button or make a command to fix any issues in their air. The focus here was organizing controls in easy-to-use ways so users could control their devices and set the environmental mood of the place.
With this knowledge at hand, I took an initial stab at the information architecture of the technology by framing a blueprint and then creating a wireframe around it. I did this with the help of one of my UX textbooks that referenced the usability case study of an airplane’s infotainment system.
The reason those infotainment systems use a long carousel, despite it taking more time to navigate, is that it’s immediately intuitive to the widest range of users, whereas nested menus are less intuitive to first-time users.
My assumption, based on the surveys and interviews, was that our buyer would rarely touch the device since they would mostly use the app. That means the rest of the household would primarily interact with the Halo through the physical device, and they are the less tech-savvy users. Focusing on them as the persona for the Halo’s user interface meant designing something immediately intuitive, so just about anybody could use it without any prior tech experience.
Outside of software, nested menus don’t really exist. A nested menu in hardware would be like pressing a button on your radio and the radio opens up to reveal additional buttons to press. It’s not very common. So, it’s often a better user experience for new users to have everything on one level instead of nesting things. It makes getting onboarded easier but means a bit more scrolling to get to everything. In this case that was seeing the air around you and choosing which setting you’d like your environment to be at.
Like the airplane user interface, books, and newspapers, we allow users to find additional features by navigating left and right. It meant more clicking, but it is also commonly accepted to be more intuitive.
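A single-level carousel like this can be modeled in a few lines. The sketch below is purely illustrative (Python rather than the device’s firmware, and the screen names are invented): left and right move between tiles and wrap at the ends, so there is nothing to get lost in.

```python
# Minimal sketch of a single-level carousel UI: one flat list of tiles,
# left/right navigation, wrapping at both ends. Screen names are invented.

SCREENS = ["Air Quality", "Temperature", "Humidity", "Purifier", "Settings"]

class Carousel:
    def __init__(self, screens):
        self.screens = screens
        self.index = 0  # start on the first tile

    def left(self):
        # Wrap around to the last tile when scrolling past the first
        self.index = (self.index - 1) % len(self.screens)
        return self.screens[self.index]

    def right(self):
        # Wrap around to the first tile when scrolling past the last
        self.index = (self.index + 1) % len(self.screens)
        return self.screens[self.index]
```

Because every screen lives on the same level, a first-time user only has to discover two gestures, which is what made the pattern test so well.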
I immediately created a mock up of this wireframe and ran a quick hallway usability test with five folks at the local Starbucks who tried out my little prototype.
I learned quite a few things from this first round of testing. The first was that, thankfully, everyone I tested with, from 21 to 63 years of age, could figure out the carousel UI within about 20 seconds. The catch was that users had no idea where to click on the little paper face of the device; they all assumed it was a touch screen.
When I showed my leadership the UI, they felt it was a bit simplistic and were looking for something people could navigate faster.
Understanding the approachability tradeoff, I opted to do a card sort: I drew possible screens and went back to Starbucks to see how people would sort them.
Card sorting is a method of figuring out the information architecture of software by populating cards (or sticky notes) with the names and/or wireframes of the different screens, then asking someone to sort those cards into categories.
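One common way to aggregate open card-sort sessions is a pair co-occurrence count: how often did two screens land in the same group? A minimal sketch, with invented session data (the group names and cards below are illustrative, not my actual results):

```python
from itertools import combinations
from collections import Counter

# Each session maps a participant's group names to the cards they put in it.
# This data is invented for illustration.
sessions = [
    {"Controls": ["Purifier", "Humidifier"], "Setup": ["Units", "Night mode"]},
    {"Devices": ["Purifier", "Humidifier", "Night mode"], "Prefs": ["Units"]},
]

# Count how often each pair of cards was grouped together across sessions.
pair_counts = Counter()
for groups in sessions:
    for cards in groups.values():
        for pair in combinations(sorted(cards), 2):
            pair_counts[pair] += 1
```

Pairs with high counts belong together in the final architecture; cards that split across groups (like night mode here) are the ones worth a judgment call.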
There weren’t too many surprises: settings, measurements, and environment controls were all categorized together. A few items, such as the night mode setting and the Fahrenheit/Celsius unit toggle, were placed differently by different people.
With this new information architecture, the user could scroll left or right across four tiles and click the center button to enter a section to look at their info, change a setting, or change their environment.
I made a clickable software prototype with this design and began the long iteration process of figuring out how to make this intuitive for everyone.
First, the prototype had three buttons that were invisible on the faceplate of the device.
Second, there was no back button, so users needed to figure out how to navigate back out of a screen once they had clicked into it.
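The two-level structure (a top carousel of sections, each opening into its own tiles) can be sketched as a tiny state machine. Everything here is an assumption for illustration: the section names, and especially the back gesture, which I model as scrolling to an “Exit” tile and clicking it. The shipped behavior came out of the testing rounds described below, not this sketch.

```python
# Hypothetical two-level menu model. Section names and the "Exit tile"
# back gesture are invented for illustration.
MENU = {
    "Measurements": ["PM2.5", "CO2", "Exit"],
    "Environment": ["Purifier", "Humidifier", "Exit"],
    "Settings": ["Units", "Night mode", "Exit"],
    "Info": ["About", "Exit"],
}

class HaloMenu:
    def __init__(self, menu):
        self.menu = menu
        self.top = list(menu)   # the four top-level tiles
        self.level = 0          # 0 = top carousel, 1 = inside a section
        self.i = 0              # tile index at the current level
        self.section = None

    def _tiles(self):
        return self.top if self.level == 0 else self.menu[self.section]

    def scroll(self, step):
        # step = -1 (left) or +1 (right), wrapping at the ends
        self.i = (self.i + step) % len(self._tiles())
        return self._tiles()[self.i]

    def click(self):
        # Center button: enter a section from the top level, or back out
        # via the section's Exit tile (an assumed gesture, see above).
        if self.level == 0:
            self.section, self.level, self.i = self.top[self.i], 1, 0
        elif self._tiles()[self.i] == "Exit":
            self.level, self.i, self.section = 0, 0, None
        return self._tiles()[self.i]
```

The point of modeling it this way is that the whole interface reduces to two inputs (scroll and click), which is what the invisible-button usability problem was really about.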
After four rounds of A/B testing and iteration with four users each, we reached a final design that anyone up to around thirty-five could use. People older than thirty-five sometimes figured it out, but it was quite frustrating and took them about twice as long as the younger users, who succeeded in navigating every design and even a broken prototype.
Not having 100% usability across all ages was not ideal, but leadership preferred the nested design and accepted that older adults had more trouble figuring it out.
With the bulk of the low-fidelity designs tested, it was time to finalize the design and present it to leadership.
Creating the hi-fidelity design
I built the user interface as a design system in Sketch. This meant I could instantly update the dozens of screens between each of the usability tests I ran. It also meant I could change the entire branding of the interface in a matter of seconds, from colors to icons to typography.
The core idea behind the hi-fidelity design was for users to understand the context of the numbers on the screen. Just showing a “73” to a user doesn’t provide a sense of scale. Putting that “73” in a partially filled circle helps provide context for how high the number can go.
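Mapping a reading onto a partially filled ring is just a clamp and a proportion. A minimal sketch (the 0–100 scale below is an assumption for the example; each metric’s real scale came from its sensor range):

```python
def ring_sweep_degrees(value, lo, hi):
    """How many degrees of a 360-degree ring this value should fill.

    Clamps out-of-range readings so the ring never under- or overdraws.
    """
    frac = (value - lo) / (hi - lo)
    frac = max(0.0, min(1.0, frac))  # clamp to [0, 1]
    return round(frac * 360)
```

So a “73” on a 0–100 scale fills most of the ring, instantly telling the user they are near the top of the range without them knowing what the number means.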
After getting leadership’s approval, it was time to implement.
Technical Implementation
When creating your own devices with buttons and screens, the design and technology need to go hand in hand. When I first started designing this UI, we had capacitive touch and e-ink displays. Nearly a year later, when the hardware was finalized, we had clicky buttons, a TFT screen, a small amount of memory, and a limited processor.
Unlike designing for an iPhone, where you’re pretty much guaranteed your colors, fonts, and designs will render flawlessly, IoT devices need the design and the hardware to evolve together to help keep costs down.
In our case, usability drove the decision to switch to clicking buttons and a more typical screen. Price drove the decision to reduce processor, storage, and screen quality.
Price is also part of the user experience: too high a price can mean making a phenomenal product that no one will buy, while too low a price can mean more adoption but a lower-quality user experience.
Limited computation, storage, and screen quality meant the original design needed updating. Colors still worked, but they had to be used sparingly to avoid alarming a user standing off-center. Animations couldn’t be supported without a bigger processor and higher cost. Instead, we used the screen’s refresh behavior to give users directional context as they moved through menus.
The embedded systems engineer and I were working together on the UI from the beginning to ensure the hi-fidelity final design could be used. Once we got approval, he began implementing aspects of the design.
One of the first things we noticed was that the device’s screen didn’t render color well. Depending on the viewing angle, an orange could look purple. This meant I needed to take most of the color out of the design and make it mostly black, white, and shades of grey.
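When hue can’t be trusted, a design has to survive on brightness contrast alone. One way to sanity-check a palette is to compare colors by their luma, the perceived brightness that remains once the hue is stripped out. A sketch using the standard Rec. 601 weights (the sample colors are invented; this was not a formal part of our process, just the kind of check the problem calls for):

```python
def luma(r, g, b):
    """Perceived brightness (0-255) of an RGB color, Rec. 601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two status colors that look distinct by hue can collapse to similar greys
# on a bad panel; comparing their luma catches that early.
orange = (255, 150, 0)
purple = (160, 80, 255)
```

Here the invented orange is noticeably brighter than the purple, so the two would remain distinguishable even if the panel mangles the hue.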
Because of the low pixel density, it also meant updating the font to be anti-aliased and reducing the number of font variants we supported, such as extra weights and non-Latin alphabets.
Conclusion
In the end, we crafted an intuitive interface that anyone in the house, no matter their understanding of air quality or tech, could figure out instantly. Crafting bespoke hardware interfaces is an incredibly interesting problem: striking the balance between a great customer experience and an affordable device.