Say what you want and see it!
It’s hard to believe now, but back in 2007, the most advanced mobile phones were "feature phones" like the Moto RAZR. There were strange devices such as the Treo, which combined a Palm Pilot with a phone; Windows Mobile devices that had you aim a stylus at a desktop-like Start button; and a very successful BlackBerry that offered a QWERTY keyboard, an overloaded application-selection roller, and little else.
Into this now-ancient world, before the dawn of Siri, Alexa, Cortana, and "OK Google," we launched Tellme Search.
The design challenge was twofold: communicate that speech was the preferred means of input while still supporting other, more familiar means of interaction, and teach users how to speak in a way that maximized their chance of a successful recognition. The constraint was to do all of this within the hardware and software capabilities of existing feature phones.
From a Java-based feature phone, Tellme Search let you
- hold down a button (walkie-talkie style),
- look up a business,
- call it,
- get directions, or
- see it on a map.
I was the lead designer, bringing together a team of interaction and visual designers, user researchers, developers, speech engineers, and executives. I iterated on an early prototype and usability tested it for interaction guidance (press and hold to speak, or press and record like voicemail? And if it isn't intuitive, how might we teach it?). I crafted wireframes, worked with engineering to uncover and push the boundaries of what was possible (remember feature phones?), created style guidelines as the design and implementation team grew, A/B tested ideas, and worked with partners like Sprint, Helio and, eventually, Microsoft.