After decades of swiping, scrolling, and staring at screens, the way we interact with technology is set to change dramatically. As AI grows more conversational, brands face a new challenge in how they communicate: how they sound. What happens to brand identity when voice, not images, becomes the primary interface?
This topic emerged from a conversation I had with Rosh Singh, chief executive of the immersive studio Astral City, about Big Tech’s growing investment in spatial computing, and smart glasses more specifically.
“We've come through decades and decades of user interface design, iterations, and development of what UI looks like. But we're at the precipice now of a moment where voice is going to be the next UI, and understanding voice is going to be super important,” Rosh tells me.
The Hardest Interface to Get Right
If voice begins to replace the screen, then brands face a new creative frontier. What does Nike sound like when it speaks? How should H&M sound when it answers a shopper’s question?
“Voice,” Rosh says, “is hard to nail.” Crafting an AI system that people can talk to, and want to talk to again, is a delicate art. What’s required, he says, is “a voice that is true to the brand experience, that doesn't alienate people, that embodies enough humanity to give users comfort, but doesn't feel too human to feel eerie.”
Unlike other user interfaces, voice either works or it doesn’t. “It's not like a UI which you can make beautiful, even if the experience feels kind of clunky. With voice, if it doesn't sound right, it doesn't work,” Rosh says.
But get it right and voice interfaces can add a lot of value. “Let's be honest, a lot of us who work in this space are on the geekier side of things, and we like sci-fi, and you just look at all of the interfaces, whether it's K.I.T.T. in Knight Rider or J.A.R.V.I.S. in Iron Man, these voice interfaces feel magical, and when you nail them, they do feel like science fiction.”