The team at the Sensing, Interaction and Perception Lab at ETH Zurich, Switzerland has come up with TapType, an interesting text input method that relies entirely on a pair of wrist-worn devices, which sense acceleration as the wearer types on any old surface. By feeding the acceleration values from a pair of sensors on each wrist into a Bayesian-inference classification neural network, which in turn feeds a conventional probabilistic language model (predictive text, to you and me), the resulting text can be input at up to 19 WPM with an average error of 0.6%. Expert TapTypers report speeds of up to 25 WPM, which could be quite usable.
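To get a feel for how a classifier and a language model can be fused like this, here is a minimal, purely illustrative sketch: per-tap character probabilities from a (hypothetical) acceleration classifier are weighted by a toy character bigram model, so context can override an ambiguous tap. All probabilities and names below are made up for illustration, not taken from the paper.

```python
# Hypothetical sketch: fuse a tap classifier's per-character likelihoods
# (derived from wrist acceleration features) with a simple character
# bigram language model, in the spirit of the pipeline described above.

def decode_tap(classifier_probs, bigram_probs, prev_char):
    """Pick the most likely character for a single tap.

    classifier_probs: dict char -> P(accel features | char), illustrative.
    bigram_probs:     dict (prev, char) -> P(char | prev), illustrative.
    """
    scores = {
        ch: p * bigram_probs.get((prev_char, ch), 1e-6)
        for ch, p in classifier_probs.items()
    }
    return max(scores, key=scores.get)

# Toy example: the classifier alone slightly prefers 'r', but after a
# 't' the language model makes 'h' (as in "th") win out.
classifier = {"h": 0.40, "r": 0.45, "x": 0.15}
bigram = {("t", "h"): 0.5, ("t", "r"): 0.2, ("t", "x"): 0.01}
print(decode_tap(classifier, bigram, "t"))  # -> h
```

A real decoder would work over whole words or sentences (e.g. with beam search), but the same product-of-probabilities idea is at the core.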
The details are a bit sparse (this is a research project, after all), but the actual hardware seems simple enough, based around the Dialog DA14695, a nice Cortex-M33-based Bluetooth Low Energy SoC. It is an attractive device in its own right, with a “sensor node controller” block, independent of the main CPU, capable of managing sensor devices connected to its interfaces. The sensor used is the Bosch BMA456 3-axis accelerometer, which is notable for its low power consumption of a mere 150 μA.
The wristband units themselves appear to be a combination of a main PCB hosting the BLE chip and supporting circuitry, connected to a flex PCB with a pair of the accelerometer devices at each end. The assembly was then slipped into a flexible wristband, probably made from 3D-printed TPU, but we’re only really guessing, as the progression from the first embedded platform to the wearable prototype is unclear.
What is clear is that the wristband itself is just a dumb data-streaming device, and all the clever processing takes place on the connected device. Training of the system (and subsequent selection of the most accurate classifier architecture) was performed by recording volunteers “typing” on an A3-sized keyboard image, tracking finger movements with a motion-tracking camera while recording acceleration data streams from both wrists. There are a few more details in the published research paper for those interested in digging a little deeper into this study.
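The labeling step described above can be sketched in a few lines: the tracking camera yields (timestamp, key) events, and each wrist streams timestamped acceleration samples, so cutting a short window around each tap gives labeled training pairs. The sample rate, window length, and function names here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of building a labeled dataset from camera-tracked
# taps plus a wrist acceleration stream. Constants are guesses.
SAMPLE_RATE_HZ = 200   # assumed accelerometer sample rate
WINDOW = 0.1           # assumed seconds of data kept around each tap

def label_windows(accel_t, accel_xyz, tap_events):
    """accel_t: (N,) sample timestamps; accel_xyz: (N, 3) samples;
    tap_events: list of (timestamp, key) from the tracking camera.
    Returns a list of (window_samples, key_label) training pairs."""
    dataset = []
    half = WINDOW / 2
    for t_tap, key in tap_events:
        # Keep all samples within +/- half a window of the tap time.
        mask = (accel_t >= t_tap - half) & (accel_t <= t_tap + half)
        if mask.any():
            dataset.append((accel_xyz[mask], key))
    return dataset

# Toy data: one second of zeroed samples and two camera-labeled taps.
t = np.arange(0, 1, 1 / SAMPLE_RATE_HZ)
xyz = np.zeros((t.size, 3))
pairs = label_windows(t, xyz, [(0.25, "a"), (0.75, "s")])
print([k for _, k in pairs])  # -> ['a', 's']
```

From pairs like these, candidate classifier architectures could then be trained and compared, which matches the selection process the team describes.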
Eagle-eyed readers might remember something similar from last year, from the same team, which combined bone-conduction sensing with VR-type hand tracking to generate input events inside a VR environment.