Gesture Recognition

Gesture recognition is an alternative user interface for providing real-time data to a computer. Instead of typing with keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input. Here is what happens between the moment a gesture is made and the moment the computer reacts:

  1. A camera feeds image data into a sensing device that is connected to a computer. The sensing device typically uses an infrared sensor or projector to calculate depth.
  2. Specially designed software maintains a predetermined gesture library in which each gesture is matched to a computer command.
  3. The software compares each real-time gesture it registers against that library and interprets it by identifying the closest matching entry (a minimal matching sketch follows this list).
  4. Once the gesture has been interpreted, the computer executes the command correlated to that specific gesture.
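The sketch below illustrates steps 2 through 4 in Python, assuming the sensing device has already reduced each captured gesture to a fixed-length feature vector (for example, a flattened hand trajectory). The gesture names, template values, commands, and matching threshold are all illustrative assumptions, not taken from any particular product or library.

```python
import math
from typing import Callable, Dict, List, Optional

# Step 2 (hypothetical): a predetermined gesture library. Each named
# gesture has a template feature vector and a correlated command.
GESTURE_TEMPLATES: Dict[str, List[float]] = {
    "swipe_left":  [0.9, 0.5, 0.6, 0.5, 0.3, 0.5, 0.1, 0.5],
    "swipe_right": [0.1, 0.5, 0.3, 0.5, 0.6, 0.5, 0.9, 0.5],
    "raise_hand":  [0.5, 0.9, 0.5, 0.6, 0.5, 0.3, 0.5, 0.1],
}

GESTURE_COMMANDS: Dict[str, Callable[[], None]] = {
    "swipe_left":  lambda: print("command: previous track"),
    "swipe_right": lambda: print("command: next track"),
    "raise_hand":  lambda: print("command: answer call"),
}


def euclidean(a: List[float], b: List[float]) -> float:
    """Distance between a captured feature vector and a template."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def match_gesture(features: List[float], threshold: float = 0.5) -> Optional[str]:
    """Step 3: compare the real-time gesture against every library entry
    and return the closest match, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        dist = euclidean(features, template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None


def handle_gesture(features: List[float]) -> None:
    """Step 4: execute the command correlated to the matched gesture."""
    name = match_gesture(features)
    if name is None:
        return  # unrecognized motion: ignore rather than guess
    GESTURE_COMMANDS[name]()


if __name__ == "__main__":
    # A captured trajectory close to the "swipe_right" template.
    handle_gesture([0.12, 0.5, 0.31, 0.48, 0.58, 0.52, 0.88, 0.5])
```

Real systems replace the toy nearest-template comparison with more robust matching (for example, dynamic time warping or a trained classifier), but the pipeline shape stays the same: capture, compare against a library, then dispatch the correlated command.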

LG’s G8 ThinQ smartphone includes touchless gesture features. They’re called Hand ID and Air Motion, and they’re enabled by a time-of-flight camera and infrared sensor built into the front of the phone. You can unlock your phone, or control music, videos, phone calls, and alarms, all by waving your hand. Google’s Project Soli, which will be deployed in its upcoming Pixel 4 smartphone, may be the most technically impressive of the limited bunch. It’s a custom chip that incorporates miniature radars and sensors that track “sub-millimeter motion, at high speeds, with great accuracy.”
