Group 2: Gesture Assistant Field Work

Primary Research Approaches

  • Survey
  • Interviews (10 participants)

Secondary Research Links

  1. A glove that translates sign language to speech
  2. A project in China to recognize and translate sign language using a Microsoft Kinect
  3. A survey on sign language recognition from the IEEE Xplore database
  4. An approach to minimizing the processing of recorded sign language into a sentence
  5. Low cost approach for real time sign language recognition
  6. Multilayer architecture in sign language recognition system
  7. The ‘Not’ Face: A Universally Recognized Facial Expression
  8. Why We Gesture (p. 5 shows a way of annotating text to suggest when to express emotions and gestures)
  9. Real-Time American Sign Language recognition using desk and wearable computer-based video
  10. A comparison of sign language and spoken language
  11. An introduction to hidden Markov models
  12. Human-Human-Interface
  13. Experiment: Record Electricity from Your Muscles!
  14. Ted Talk: How to Control Someone Else’s Arm with your Brain

Analysis of or inspiration from prior work

  • Have the system use audio to prompt you when to make facial gestures, e.g., while you are signing you would hear “smile now” (from the earpiece) if that expression was a necessary part of the gesture.
  • The biggest problem with current video-based sign recognition is that processing time is too high.
  • A Markov model could greatly help decrease processing time for video-to-speech analysis.
  • Hu moments used in tandem with a Markov chain prediction model could also reduce cost while keeping accuracy up.
  • Exoskeletal structures are typically difficult to build, clunky, and expensive.
  • An easier way to control a person’s arm, rather than an exoskeleton, would be to use electric pulses to stimulate the nerves, sending the correct signals in sequence to reproduce sign language.

Research Results

Setting: You are living in a world where it is not environmentally safe (e.g., due to smog or other pollution) to communicate verbally while exposed to the outside air.

System: A gesture assistance system which will be worn on the arms, hands, and head. This will monitor your brainwaves to detect your thoughts, and suggest the needed upper extremity movements (via the arm portion) and emit spoken instructions for facial expressions (via the head portion) in order to produce the corresponding sign language. Additionally, a camera will be present on the device to detect the gestures of others, and emit the spoken translation from an earpiece.
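
To make the data flow concrete, here is a rough Python sketch of how the pieces could fit together. All of the class, method, and component names (GestureAssistant, text_from_brainwaves, arm_unit, and so on) are hypothetical placeholders for illustration, not an actual implementation.

```python
# Hypothetical data-flow sketch of the gesture assistant. None of these
# components exist yet; the names are placeholders for illustration only.

class GestureAssistant:
    def __init__(self, eeg_sensor, arm_unit, earpiece, camera, translator):
        self.eeg_sensor = eeg_sensor  # head unit: reads brainwave features
        self.arm_unit = arm_unit      # arm/hand wearable: suggests movements
        self.earpiece = earpiece      # speaks prompts and translations
        self.camera = camera          # outward-facing camera on the device
        self.translator = translator  # thought/sign/text translation model

    def outgoing_loop(self):
        """Wearer's thoughts -> suggested signs and facial-expression prompts."""
        intent = self.translator.text_from_brainwaves(self.eeg_sensor.read())
        for sign in self.translator.signs_from_text(intent):
            self.arm_unit.suggest(sign.movement)           # gentle movement cue
            if sign.facial_expression:
                self.earpiece.say(sign.facial_expression)  # e.g. "smile now"

    def incoming_loop(self):
        """Other people's signs -> spoken translation in the earpiece."""
        frame = self.camera.capture()
        text = self.translator.text_from_video(frame)
        if text:
            self.earpiece.say(text)
```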

Survey questions gauged users’ opinions of the effectiveness of this solution, whether they would use it, whether they had any concerns, and whether they already spoke sign language.

From the survey, we learned:

  • The majority of people misunderstood what the system would be doing
    • It wasn’t clear that the camera would capture and translate the gestures of others
    • They thought they would need to learn sign language to use it
  • Respondents were neutral on the system
  • No respondents spoke sign language

Interview questions

  • Would you use such a system?
  • Do you see any challenges of such a system?
  • What are your thoughts about the system?
  • On a scale from 1-10 (10 being best), do you think this system will solve the problem?
  • Do you feel comfortable having your brainwaves monitored to detect your thoughts?

From our interviews, we learned:

A word cloud of the raw interview data.


  • The majority of people said they wouldn’t want to use this system, or had reservations about it. Reasons include:
    • people were afraid of accidentally expressing private thoughts
    • glitchy movements/translation errors
    • privacy issues regarding if thoughts could be “hacked”
    • proximity of communicating
    • lingering effects on the body from detecting brainwaves (e.g. brain cell growth)
  • Reasons people wanted to use the system include:
    • personal health
    • being able to express themselves (presuming there are no kinks)
  • Diverse range of challenges
    • inability to properly express emotions
    • doubts about efficiently reading thoughts
    • implementation difficulties (e.g. error in translation and component synchronization)
    • potentially high price

“I’d be scared that my thoughts would be translated over if I was thinking something private.”

Lowest success prediction: 2
Average: 5.6
Highest success prediction: 9

“…looks like the logical next step to what Stephen Hawking uses to talk.”

Secondary Research

There are methods and technologies currently being developed that could be applied to our system. For example, there is a lot of research on using video to translate gestures to speech. The biggest problem with this method is that processing is currently very slow, but I believe a Markov model could be applied to the recognition system. A Markov chain is defined as “a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.” Using a Markov model in parallel with semantic analysis methods could greatly decrease the processing time required, getting closer to real-time translation of sign language.
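
As a minimal sketch of the idea (not taken from any of the linked papers), a first-order Markov chain over sign tokens can rank which signs are likely to come next, so the expensive video matcher only has to score a handful of candidates per segment. The training sentences below are toy data.

```python
from collections import defaultdict

# Minimal first-order Markov chain over sign tokens. Instead of scoring every
# possible sign for each video segment, rank candidates by how likely they are
# to follow the previously recognized sign, and only run the expensive video
# matcher on the top few.

class SignMarkovChain:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sentences):
        """sentences: iterable of sign-token lists, e.g. [["I", "GO", "STORE"], ...]"""
        for tokens in sentences:
            for prev, nxt in zip(tokens, tokens[1:]):
                self.counts[prev][nxt] += 1

    def candidates(self, prev_sign, top_k=5):
        """Most probable next signs given only the previous sign."""
        nxt = self.counts[prev_sign]
        total = sum(nxt.values()) or 1
        ranked = sorted(nxt.items(), key=lambda kv: kv[1], reverse=True)
        return [(sign, count / total) for sign, count in ranked[:top_k]]

# Toy usage: the video recognizer would score only these candidates first.
chain = SignMarkovChain()
chain.train([["I", "GO", "STORE"], ["I", "GO", "HOME"], ["I", "WANT", "EAT"]])
print(chain.candidates("GO"))  # [('STORE', 0.5), ('HOME', 0.5)]
```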

Another lower-cost alternative for the video-to-speech translation could be the use of Hu moments, a set of invariant features computed from image moments. An image moment is a particular weighted average of the image pixels’ intensities (or a function of such moments), usually chosen to have some attractive property or interpretation. Hu moments have been shown to reach 76% accuracy using a very low-cost webcam in an uncontrolled background setting. Combining this research with a Markov technique, using a single mounted camera, could yield a quick, low-cost, and accurate method for interpreting sign language in real time.
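
For reference, OpenCV exposes both image moments and Hu moments directly, so a rough feature extractor for a webcam frame might look like the sketch below. The Otsu threshold here is only a stand-in for proper hand segmentation, and the template-matching step is left as a comment; this is an illustrative sketch, not the method from the cited research.

```python
import cv2
import numpy as np

# Sketch of Hu-moment features from a webcam frame. Assumes OpenCV is
# available and that the hand can be roughly isolated by simple thresholding;
# a real system would need proper hand segmentation.

def hu_features(gray_frame):
    # Otsu thresholding as a stand-in for real skin/hand segmentation.
    _, mask = cv2.threshold(gray_frame, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    moments = cv2.moments(mask)
    hu = cv2.HuMoments(moments).flatten()
    # Log-scale the seven moments so they are comparable in magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# A recognizer could compare these 7-dimensional feature vectors against
# stored sign templates (e.g. nearest neighbour), with the Markov chain above
# narrowing which templates to compare first.

cap = cv2.VideoCapture(0)  # low-cost webcam
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(hu_features(gray))
cap.release()
```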

Exoskeleton structures are typically expensive and clunky, making them fairly ineffective at controlling a person’s arms at a comfortable level. However, everyone sends neural signals from their brain to their extremities every second of every day. One example of neural control is placing electrodes on the skin just above the ulnar nerve, near the outside of the elbow. The electrodes can then send electric signals that control parts of a person’s arm, wrist, and hand. Used in conjunction with other key nerves, this makes it possible to influence the movements of a person’s extremities while still leaving them in control.
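
Purely as an illustration of the sequencing idea, a sign could be encoded as an ordered list of low-intensity pulses. The send_pulse function, electrode names, and all timing and intensity values below are hypothetical placeholders; there is no real stimulator driver behind this sketch.

```python
import time

# Hypothetical sketch of sequencing surface-electrode pulses to suggest a
# hand shape. send_pulse() stands in for whatever hardware interface the arm
# unit would expose; electrode names and values are illustrative only.

def send_pulse(electrode_id, intensity_ma, duration_ms):
    """Placeholder for the hardware call that drives one electrode."""
    print(f"electrode {electrode_id}: {intensity_ma} mA for {duration_ms} ms")
    time.sleep(duration_ms / 1000.0)

# A "sign" becomes an ordered list of (electrode, intensity, duration) steps,
# kept at low intensity so the wearer feels a suggestion rather than being
# forced through the motion.
SUGGEST_WAVE = [
    ("ulnar_elbow", 2.0, 150),   # curl the ring/little fingers slightly
    ("wrist_flexor", 1.5, 100),  # tip the wrist forward
    ("wrist_flexor", 1.5, 100),  # repeat to hint a waving motion
]

def suggest(sign_steps):
    for electrode, intensity, duration in sign_steps:
        send_pulse(electrode, intensity, duration)

suggest(SUGGEST_WAVE)
```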

Overall Points for System Changes and Inspiration

  • Privacy is a huge concern, both around hacking and around accidentally expressing private thoughts
  • Decided to mimic neural signals to provoke movement in users
  • Decided that a single camera could work when used in tandem with Markov chains and image moments
  • Decided to have a suggestion setting where the system doesn’t force the user to move the arms entirely, but instead gives gentle, tick-like neural suggestions
  • Use components and methods that don’t have harmful body effects

 
