Written as an assignment for DesignLab’s UX Academy, this article is a speculative look at where information architecture might be going with virtual assistants.
“Hi, How Can I Help?”
Last November, Google set up a Donut Shoppe pop-up in San Francisco’s Hayes Valley, and curious people lined up, stretching around the block. When it was your turn, you’d ask the displayed donut-shaped device a question from the “menu” of questions and commands, and it would respond accordingly. Then a Happy Meal-sized box would magically appear in front of you. Inside could be a Google Home Mini or, worst case scenario, a fresh donut. I could rave about how much the donut pop-up experience has shaped how I, as a consumer, now feel about Google’s hardware products and branding, but that’s for another post. The takeaway? I was one of the random lucky winners, and now I use my Google Home Mini at least 10 times a day.
“Hi, how can I earn your trust?”
New technologies almost always go hand-in-hand with public hesitation (consider drones or autonomous vehicles). To recap how a voice assistant works, the device listens for an activation phrase such as “Hey Google,” and executes the user’s instructions for tasks such as playing music, looking up information, setting timers, etc.
Consumers have questioned (and will continue to question) their safety, security, and privacy when it comes to risky new tech they don’t understand.
And they’re completely entitled to. Building trust is necessary when introducing new tech to the consumer market. Virtual assistants were no different, but we’ve come a long way from the days when voice user interfaces (VUIs) belonged to the artificial intelligence of science fiction. Now, they’re commonly trusted devices, deemed helpful in hands-free and eyes-free situations.
Current popular voice assistants on the market are:
- Apple’s Siri
- Amazon’s Alexa
- Google’s Google Assistant
- Microsoft’s Cortana
- Samsung’s Bixby
Considering the long-term future of voice assistants, we can expect the technology to keep improving and to be used more in our day-to-day lives. Over half of the information we take in is processed visually, so we shouldn’t expect VUIs to overtake the graphical user interface. Still, we have a long way to go in supporting users by cutting short wrong paths, vocal mistakes, and misunderstandings.
A New Design Frontier
With emerging technologies, people are often quick to declare that we need to reinvent design methods and principles to tailor them to the new tech. VUIs completely transform the user interaction experience by eliminating visual displays and tactile functions.
Humans and these devices understand each other through natural language, and the design of successful VUIs relies on semantics rather than keystrokes or point-and-click navigation. So does this mean that all the rules have changed?
Certainly not! Usability has more to do with user capabilities and limitations than with technology’s advancement, so we can still approach designing the information architecture of VUIs as we would approach any other facet of design: industrial, interaction, visual. The heuristics are different, yes, and that’s the challenge we as designers are tasked with.
The design rules don’t change because the people who are using it haven’t changed.
Consider Google’s search engine: you know what to type to reach your desired search results about how to cut a pineapple. If you misspell “pineapple” or didn’t mean “opening a pineapple,” Google will catch the error and suggest a likely correct search, demonstrating the usability principle of error prevention: rather than just helping users recover from errors, a well-designed experience anticipates them and suggests corrections. (Find more usability heuristics for voice interfaces here.)
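The “did you mean?” behavior above can be illustrated with a toy suggester that matches a misspelled word against a small vocabulary using Levenshtein edit distance. Real search engines use far richer signals (query logs, context, language models); this sketch, with an assumed vocabulary and threshold, only demonstrates the principle.

```python
# Toy "did you mean?" suggester illustrating error prevention:
# find the closest known word by Levenshtein edit distance.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def suggest(word: str, vocabulary: list[str], max_distance: int = 2):
    """Return the closest vocabulary word within max_distance, else None."""
    best = min(vocabulary, key=lambda v: edit_distance(word, v))
    return best if edit_distance(word, best) <= max_distance else None

print(suggest("pinapple", ["pineapple", "apple", "mango"]))  # -> pineapple
```

The threshold matters for the user experience: suggest too eagerly and you override what the user actually meant; too conservatively and you leave them stranded on a typo.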
Good user experiences respect the user’s time by not wasting it. We don’t expect screens to converse with us, but we place higher expectations on voice assistants because, by simple virtue of speaking, they’re more humanlike. This means VUI designers have the challenging goal of making the interface as easy and as pleasant as talking with a human.
We know that a frictionless experience is exactly what we expect from an assistant, human or virtual. I wonder whether there’s a distinct customer need to break down the natural language semantics we already use in order to develop VUIs even further, especially with the rise of the IoT market.
We may need more market research on the extent of that need, because even when Google pulls up the right search results, you’re still responsible for entertaining and serving your guests some freshly cut pineapple. Despite the obvious challenges of designing for a voice-command future, it is an eventuality that information architects must face.
Context Context Context
When you’re at a concert, no one thinks about the sound engineer until something feeds back or the singer’s voice cuts out.
The same goes for UX design in VUIs (or any product, to be honest). Ask Siri how much time is left on your timer and she’ll answer with an article she found on The Times. Ask Alexa and she’ll tell you 6 minutes and 27 seconds. Siri’s failure to consider the user’s context makes for a bad experience, at which point the user might abandon the product. VUIs, as a context-dependent technology, make contextual inquiries all the more important because user scenarios can change at any time.
This makes it important for designers to research users’ linguistic behavior and their contextual expectations of VUIs, determine what users actually need, and decide how the content will be organized.
To enable users to go directly to what they need, designers must rethink site navigation and information architecture completely. Personalization will play a key role in this, and context will determine how well the device interprets what the user commands and, therefore, how well it can execute the task. True virtual assistance by voice command has a long way to go, but teams of designers will be behind getting us to the ideal product.
Conclusion (on a Personal Note)
As a musician, I’m well-versed in writing poems, lyrics, clever sayings, and puns. Maybe the closest I’ll get to enjoy producing visual art is in hand lettering and designing typography. I think what draws me to designing VUIs is that, while visual designers think in shapes, forms, and animations, I usually think in words and meaning, or the semantics of conversation.
As a former preschool teacher, I also think about the technology’s potential to support children’s language development and to improve accessibility for those with special needs.
I’m excited by the opportunity to research and analyze linguistics to create a seamless experience for issuing voice commands through the most basic and human way we all interact with each other: our voices.
Article Prepared by Ollala Corp
