Most of the technology around us assumes you can speak.
"Hey Siri." "Okay Google." "Alexa, turn off the lights." The voice-activated future has been confidently announced — and it quietly excludes anyone who can't, or shouldn't, talk to a machine.
Per the National Institute on Deafness and Other Communication Disorders, an estimated 7.5 million Americans have trouble using their voices. Many more navigate the world with laryngitis, anxiety, autism, language barriers, or environments too loud, too quiet, or too sensitive for speech to work reliably.
Meanwhile, the ASL alphabet and the numbers 0 through 9 can be learned by most people in an afternoon. After that, silent interaction with a well-designed surface becomes possible — ordering food without yelling at a speaker, opening a gate without grabbing a sticky handle, picking a floor without touching a shared button cluster.
The case isn't that voice is bad. The case is that voice alone is exclusionary, noisy, and unnecessary as a default. A surface that watches the hands is more universal, more private, and quieter. Sign-based interfaces are a complement to existing controls — never a replacement.
Touchless and voiceless. Both. For everyone.