Summary:
This paper describes a system for recognizing the 26 letters of the ASL alphabet, along with two other signs (space and enter), and presents it as an affordable alternative to other data gloves. They call their device the AcceleGlove; it uses a microcontroller and dual-axis accelerometers. They placed the sensors on the middle joints of the fingers and the distal joint of the thumb specifically to eliminate ambiguity in the ASL alphabet. Because each accelerometer is suspended by springs, it can track joint flexion as well as hand roll or yaw, or individual finger abduction.
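As a rough illustration of how a dual-axis accelerometer can double as a joint-angle sensor, here is a minimal Python sketch. It is not code from the paper; it just shows the standard static-tilt computation, assuming the hand is still enough that gravity dominates the reading.

```python
import math

def flexion_angle_deg(a_axis_g):
    """Estimate a joint's tilt from one accelerometer axis.

    a_axis_g: acceleration measured along that axis, in g's. When the hand
    is roughly still, the reading is mostly the gravity component, so the
    tilt is asin(a). Clamping guards against noise pushing |a| past 1 g.
    """
    a = max(-1.0, min(1.0, a_axis_g))
    return math.degrees(math.asin(a))

# Example: a reading of 0.5 g along the axis corresponds to about 30 degrees.
print(flexion_angle_deg(0.5))  # ~30.0
```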
They acquired data from five people, each signing all 26 letters ten times, with J and Z captured only at their final positions. For classification, they divide gestures into three subclasses: vertical, horizontal, or closed, defined by partitioning 3D space with planes whose locations are based on the position of the index finger. They use a decision tree whose first split is based on these subclasses, with further splits based on features of the gestures, like "flat" vs. "rolled" postures. They found that 21 of the 26 letters reached 100% recognition, and that R, U, and V could not be distinguished with their system.
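To make that two-stage scheme concrete, here is a rough Python sketch of the classifier's shape. The plane thresholds, the roll cutoff, and the letters at the leaves are placeholders I made up, not values from the paper; only the structure (subclass first, posture features second) mirrors their description.

```python
def subclass(index_y, index_z, y_plane=0.5, z_plane=-0.3):
    """First split: place the hand in one of three regions of 3D space,
    using the index finger's position relative to two planes."""
    if index_y > y_plane:
        return "vertical"
    if index_z < z_plane:
        return "closed"
    return "horizontal"

def classify(index_y, index_z, roll_deg):
    """Second split: within each subclass, branch on posture features,
    here just a single 'flat' vs. 'rolled' test on hand roll."""
    region = subclass(index_y, index_z)
    flat = abs(roll_deg) < 20  # placeholder cutoff
    if region == "vertical":
        return "B" if flat else "U"
    if region == "closed":
        return "A" if flat else "S"
    return "G" if flat else "H"

print(classify(0.8, 0.0, 5.0))  # -> "B" with these placeholder thresholds
```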
Discussion:
This seems like a fairly reasonable method, and it could probably be tweaked to recognize the letters it currently misses. Of course, there's always the question of whether it extends to other kinds of gestures without more and more tweaking.
At one point I misread their discussion of a voice synthesizer and thought they meant they were taking in voice data as well, which seemed like an interesting idea on its own. Capturing voice while a person makes gestures during a user study could be useful later when we are trying to figure out which gestures they intended to make. On the other hand, it might have a negative effect if the person talks more, or more slowly, than she would while ordinarily making the accompanying gestures, or if the act of talking distracts her from gesturing, sort of like how people may feel awkward describing what they are doing aloud in studies that test product usability.
2 comments:
Okay - totally off topic - but your write up got me thinking. Since they created gestures for 'space' and 'enter', I suddenly went the complete other direction from natural gestures to defined gestures and started thinking about adding other gestures. In particular, I was thinking about gestures that people - or rather kids - would make up, as kids tend to make up their own spoken and gestural language already. They might be the ones who would define new symbols and their meaning. I also wouldn't mind a whole slew of hand gestures that could perform actions in my house. Remember the clapper? Well, we would no longer be limited to the noises that we can make with our hands.
Yeah, verbal cues alongside gesture recognition can be an issue if the user signs too fast. I have a feeling that accelerometers force the user to gesture more slowly, since the acceleration values can get thrown off by rapid, direction-changing gestures (depending on their sensitivity).