The language of hands: the future of motion-sensing technology
We’ve taken a glimpse into the fascinating world of motion-sensing technology. How will this technology continue to develop to serve the needs of tens of millions of people who rely on sign language to communicate?
Motion sensing technology gets the green light
In 2011, Google released an April Fools’ video that pranked viewers into believing a new Gmail feature let users interact with their inboxes using gestures and body language. On December 31, 2018, the company’s motion-sensing technology secured approval from the Federal Communications Commission (FCC), paving the way for a new mode of human-device interaction – and that’s no joke.
Radar technology plays a major role in this type of communication. Human-computer interaction (HCI) with radar sensors is a relatively new development, but experimentation with radio waves dates back to the 1880s. In the case of Google’s ATAP Project Soli (the very same project that received FCC approval), a commercial outlet for radar-based motion sensing is now in the works. Thanks to this innovative technology, we can expect similar developments from Google’s competitors. Controlling phones, watches, and other IoT devices with swift hand gestures is now a reality.
Human-computer interaction with radar – what’s the big idea?
Radar works by emitting radio frequency waves and detecting their reflections. One of the benefits of radar technology is that light and atmospheric conditions do not negatively impact performance. Radar sensing is precise enough to recognize gestures with sub-millimeter accuracy, which means it can pick up slight hand movements and let you perform functions without physical contact. A radar sensor, embedded in a device such as a smartwatch, captures raw data and generates a signal. This signal is then interpreted by your device, allowing you to control it with the swipe of a hand.
Some current applications include:
- Vertical and horizontal swipes
- Two-finger pinch
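The pipeline described above – a sensor classifies a gesture, and the device translates it into a command – can be sketched in a few lines. This is a minimal, hypothetical illustration; the gesture labels, action names, and function below are invented for this example and are not taken from any real Soli API.

```python
# Hypothetical mapping from classified radar gestures to device commands.
# In a real system, the gesture label would come from a signal-processing
# and classification stage running on the raw radar data.
GESTURE_ACTIONS = {
    "swipe_up": "scroll_up",
    "swipe_down": "scroll_down",
    "swipe_left": "previous_item",
    "swipe_right": "next_item",
    "two_finger_pinch": "select",
}

def handle_gesture(gesture: str) -> str:
    """Return the device command for a classified gesture, or 'no_op'
    when the gesture is not recognized."""
    return GESTURE_ACTIONS.get(gesture, "no_op")
```

For example, `handle_gesture("two_finger_pinch")` returns `"select"`, while an unrecognized gesture falls through to `"no_op"` – a deliberately small, closed command set, which is part of why these gestures are better described as a control vocabulary than a language.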
Is this the birth of a language?
Not all communication can necessarily be classified as a language
While radar-based motion sensing is a win for users with mobility and speech impairments, the language nerds among us are left pondering the linguistic implications that will be ushered in by this type of human-computer interaction technology. Some argue that a new language is in the making. Others say that virtual swipes and sliders are but gestures and cannot technically be defined as a language. Many are also drawing a connection to sign language.
Like sign language, motion-sensing technology allows for the interpretation of meaning – in this case – through hand gestures. But it would be a mistake to assume that sign language can be boiled down to hand movements. Sign language uses complex combinations of hand motions, body movements, and facial expressions to express meaning.
It seems that every year, a new, innovative endeavor presents us with a solution for interpreting sign language. In the last 2 months, at least 7 patents were filed with the World Intellectual Property Organization (WIPO) for technology designed specifically to interpret sign language. But the truth is, technology has not yet reached a point of truly capturing the complexities of sign language.
Just how important is sign language as a means of communication? Well, when we consider the population of people with hearing loss in just these 8 countries alone, we get a pretty clear picture of the importance of alternate forms of communication:
Quick facts about sign languages:
- Every sign language is unique. American Sign Language (ASL) and British Sign Language (BSL), for example, developed separately and are mutually unintelligible languages.
- Each sign language has its own grammar and culture.
- Deaf communities are known to be tight-knit, and members often capitalize the word “Deaf” to signal cultural identity.
- The grammar of spoken languages is linear (one idea is expressed at a time). Sign languages are spatial and express multiple ideas simultaneously.
Language is a living, breathing phenomenon
Information is communicated in various ways, but not all means of communication are classified as a language. Ants communicate information chemically, some mammals use odor to mark territory, and bees dance. What makes human language unique is productivity. Human language, both spoken and signed, has the capacity to use limited structure to produce an unlimited number of sentences. With an estimated 170,000+ words in the English language, we can create sentences that no one has ever heard before, and they will still be understood.
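The productivity claim can be made concrete with a toy grammar. Even a tiny, invented vocabulary plus a single recursive embedding rule yields an unbounded set of grammatical sentences; all the words and function names below are illustrative, not drawn from any linguistic corpus.

```python
import itertools

# Toy vocabulary: 3 subjects x 3 verbs x 3 objects = 27 base sentences.
SUBJECTS = ["the dog", "the signer", "the robot"]
VERBS = ["sees", "follows", "greets"]

def base_sentences():
    """Generate every simple subject-verb-object sentence."""
    return [f"{s} {v} {o}"
            for s, v, o in itertools.product(SUBJECTS, VERBS, SUBJECTS)]

def embed(sentence, depth):
    """Recursively embed a sentence under 'the signer thinks that ...'.
    Each additional depth level yields a new, longer grammatical sentence,
    so the total set of sentences the grammar generates is unbounded."""
    for _ in range(depth):
        sentence = f"the signer thinks that {sentence}"
    return sentence
```

Here `embed("the dog sees the robot", 2)` produces “the signer thinks that the signer thinks that the dog sees the robot” – and nothing stops us at depth 2, which is exactly the property that fixed gesture vocabularies lack.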
In the case of human-computer interaction technology, a kind of translation process takes place: a device is programmed to interpret a limited set of commands given by a human. Natural language represents concepts through terms, definitions, and symbols, while artificial languages use codes and formulae to the same end.
Evolution and onward
If we know anything, it’s that language evolves. Natural language evolves with culture, geopolitics, historical events, and contact with other languages. The gesture vocabularies of radar-based human-computer interaction will evolve in their own right as well, because the range of functions we can perform with technology will continue to expand. It seems we really are entering a new paradigm.
Stay up to date as Nimdzi publishes new insights. We will keep you posted as each new report is published so that you are sure not to miss anything.