Smart Glove for Prostheses Identifies Objects in Hand
One thing that prosthetic device users hope for is the ability to feel what they are touching with their hands. There have been some developments in this space (see flashbacks below), but they remain relatively rudimentary: not very sensitive, and not very good at helping users identify what they are touching.
Engineers at MIT have now developed a glove that can be worn over a prosthetic hand to sense the object being touched and identify it with good accuracy. Moreover, it can estimate the weight of the object being held, providing additional information about it. The researchers believe the technology will be useful in robotics, but also as a smart way for prostheses to help their users interact with everyday objects.
The so-called “scalable tactile glove” (STAG) has around 550 pressure sensors spread across its surface. Together, these generate a map of the object and of the pressure it exerts on the glove. A neural network, trained on how different objects feel to the glove, analyzes these data to identify what is being held.
This technology has interesting implications for manufacturers of prostheses, allowing them to identify how the devices are being used and how to improve them.
An announcement from MIT explains how the technology works:
STAG is laminated with an electrically conductive polymer whose resistance changes with applied pressure. The researchers sewed conductive threads through holes in the conductive polymer film, from the fingertips to the base of the palm. The threads overlap in a way that turns them into pressure sensors. When someone wearing the glove feels, lifts, holds, and drops an object, the sensors record the pressure at each point.
The threads connect from the glove to an external circuit that translates the pressure data into “tactile maps,” which are essentially brief videos of dots growing and shrinking across a graphic of a hand. The dots represent the location of pressure points, and their size represents the force — the bigger the dot, the greater the pressure.
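The translation from raw sensor readings to a tactile map can be sketched in a few lines. The snippet below is purely illustrative: the 32×32 grid, the resistance range, and the log-scale mapping are assumptions for the example, not details from the MIT work (STAG's ~550 sensors follow the glove's seams rather than a uniform grid).

```python
import numpy as np

# Assumed resistance range (ohms) for a piezoresistive film; lower
# resistance corresponds to higher pressure.
R_MIN, R_MAX = 1e3, 1e6

def tactile_frame(resistances: np.ndarray) -> np.ndarray:
    """Map raw resistances to normalized pressures in [0, 1].

    Inverts the readings on a log scale, so an untouched sensor
    (near R_MAX) reads ~0 and a firm press (near R_MIN) reads ~1.
    """
    r = np.clip(resistances, R_MIN, R_MAX)
    return 1.0 - (np.log(r) - np.log(R_MIN)) / (np.log(R_MAX) - np.log(R_MIN))

# Example: a mostly idle hypothetical 32x32 grid with one firm contact.
grid = np.full((32, 32), R_MAX)
grid[10, 12] = 1.5e3  # low resistance -> strong press
frame = tactile_frame(grid)
print(frame.shape)          # (32, 32)
print(frame[10, 12] > 0.9)  # the contact point reads near full pressure
```

A sequence of such frames over time is exactly the “brief video of dots growing and shrinking” described above, with each cell's value setting the size of its dot.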
From those maps, the researchers compiled a dataset of about 135,000 video frames from interactions with 26 objects. Those frames can be used by a neural network to predict the identity and weight of objects, and provide insights about the human grasp.
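To make the frame-to-label step concrete, here is a minimal stand-in for that classifier: a nearest-centroid model over flattened tactile-map frames. The actual work trained a convolutional network on roughly 135,000 frames of 26 objects; the tiny synthetic dataset, grid size, and model choice below are assumptions made only to show the shape of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBJECTS, FRAMES_PER_OBJECT, GRID = 3, 50, (8, 8)

# Synthetic data: each "object" yields frames clustered around its own
# characteristic pressure pattern (a random prototype plus noise).
prototypes = rng.random((N_OBJECTS, *GRID))
frames = np.stack([
    p + 0.05 * rng.standard_normal((FRAMES_PER_OBJECT, *GRID))
    for p in prototypes
])  # shape: (objects, frames, H, W)

X = frames.reshape(N_OBJECTS * FRAMES_PER_OBJECT, -1)
y = np.repeat(np.arange(N_OBJECTS), FRAMES_PER_OBJECT)

# "Train": compute one centroid per object class.
centroids = np.stack([X[y == k].mean(axis=0) for k in range(N_OBJECTS)])

def predict(frame: np.ndarray) -> int:
    """Return the index of the nearest class centroid."""
    d = np.linalg.norm(centroids - frame.ravel(), axis=1)
    return int(np.argmin(d))

preds = np.array([predict(f) for f in X])
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The real system replaces the centroids with learned convolutional features, which is what lets it generalize across grasps and also regress a weight estimate from the same maps.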