Gesture Assistant: Sketches/Paper Prototyping

Overall

We decided to present our information on a poster board. This includes:

  • Current photos of smog-polluted cities from around the world (Beijing, Dubai, Canterbury, New Delhi, and Los Angeles) to put our smog-polluted world into perspective.
  • Hardware List
  • Prototype sketches
  • Story board detailing the most common scenario a user of this system would encounter
  • Interview questions and results, including common responses/reservations, predicted challenges, and quotes, along with our responses and inspiration.
  • Survey questions and results, including common responses/reservations and quotes, along with our inspiration.
  • Research and inspiration from past work or ongoing projects including pictures and quotes

 

Current Photos of Smog-Polluted Cities

  • Beijing, China

    A tourist and her daughter, wearing masks, visit Tiananmen Square amid dangerous levels of air pollution on January 23, 2013, in Beijing, China. The air quality in Beijing hit serious levels again as smog blanketed the city. (Photo by Feng Li/Getty Images)

  • Dubai, UAE

    Dubai Smog. Image: Faris Algosaibi

     

  • Canterbury, New Zealand

    Factory chimneys emit smoke and winter smog makes it hard to see houses between the trees in Christchurch, Canterbury, New Zealand in June 2009. Image: Colin Monteath/Hedgehog House/Minden Pictures/Corbis

  • New Delhi, India

    Smog behind the Taj Mahal in Agra, India, in March 2011. Image: Eric Meola/Corbis

     

  • Los Angeles, California, USA

    Birds fly across the sky at daybreak over downtown Los Angeles, one of the 10 most polluted cities in the U.S. according to the Washington Post, in 2011. Image: Frederic J. Brown/AFP/Getty Images

Hardware List

  1. Camera
  2. Speakers
  3. Accelerometer
  4. Electrodes
  5. CPU
  6. Wearable head unit

 

Prototype Sketches

A wearable head device for users. This device hosts all of the components except the wireless electrode sensors worn on the upper extremities. (A rough code sketch of how these components might interact follows the drawings below.)

[Sketch: full prototype]

[Sketch: head unit detail]
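
To make the division of labor between these parts concrete, here is a minimal sketch of the two data paths the design implies: an outgoing path (decoded thought → electrode pulses that suggest a gesture) and an incoming path (recognized gesture → text for the speakers). Everything in it — class names, the toy lookup table, the intensity value — is our own invention for illustration, not a real API.

```python
# Hypothetical component sketch; every name here is ours, not a real API.
from dataclasses import dataclass


@dataclass
class Electrode:
    """A wireless electrode sensor worn on an upper extremity."""
    limb: str

    def pulse(self, intensity: float) -> None:
        # Hardware would transmit a stimulation command; we just log it.
        print(f"pulse {self.limb} at intensity {intensity:.2f}")


class HeadUnit:
    """Wearable head device: camera, speakers, accelerometer, and CPU."""

    def __init__(self, electrodes: list[Electrode]) -> None:
        self.electrodes = electrodes

    def express(self, decoded_thought: str) -> None:
        """Outgoing path: a decoded thought becomes suggested movement."""
        # A real system would map words to specific limb movements; here
        # we pulse every electrode gently to *suggest* rather than force.
        print(f"suggesting gestures for: {decoded_thought!r}")
        for electrode in self.electrodes:
            electrode.pulse(0.3)

    def observe(self, gesture: str) -> str:
        """Incoming path: a recognized gesture becomes speaker output."""
        translations = {"wave": "good morning"}  # toy lookup table
        return translations.get(gesture, "<unrecognized gesture>")


unit = HeadUnit([Electrode("left_forearm"), Electrode("right_forearm")])
unit.express("good morning")   # pulses both electrodes
print(unit.observe("wave"))    # -> "good morning"
```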

 

Story Board

This details the most common scenario a user of this system would encounter: wanting to say good morning to someone while outside in polluted air.

[Storyboard images]

Interview Questions and Results

  • 10 participants
  • Overall positive (5.7/10.0) average rating
  • Questions
    1. Would you use such a system?
    2. Do you see any challenges of such a system?
    3. What are your thoughts about the system?
    4. On a scale from 1 (worst) to 10 (best), do you think this system will solve the problem?
    5. Do you feel comfortable having your brainwaves monitored to detect your thoughts? Explain.

 

  • Reservations
    1. Fear of accidentally expressing private thoughts
    2. Movements glitches causing translation errors
    3. Privacy: can thoughts be “hacked”?
    4. Lingering effects on the body (e.g. brain cell growth)
  • Our responses to these reservations, respectively:
    1. Functionality to only “suggest” movements
    2. Functionality to only “suggest” movements, allowing the user to smooth out movement glitches
    3. Adding security features
    4. Studies to see if the health effects from our world outweigh the (potential) health effects from our system
  • Challenges
    1. Inability to properly express emotions
    2. Doubts about efficiently reading thoughts
    3. Implementation difficulties (e.g. component synchronization)
    4. Potentially high price
  • Our inspiration/responses to these challenges, respectively:
    1. Emotional cues (e.g. “smile now” from earpiece)
    2. Lots of testing
    3. Minimized hardware components and centralized data processing, translation, and distribution in a single CPU
    4. Minimized hardware components and planned a “minimal” functionality version/prototype
  • Quotes:

“I’d be scared that my thoughts would be translated over if I was thinking something private.”

“…looks like the logical next step to what Stephen Hawking uses to talk.”

“It’s hard for a system to recognize human emotions.”

“I would use the system but with reservation. I would have worries about someone reading my thoughts.”

 

Survey Questions and Results

  • 5 participants
  • Overall neutral (3/5) rating
  • Questions
    1. How would you rate this system, on a scale from 1-5 (1-very bad, 3-neutral, 5-very good), as a solution to the problem presented in the setting?
    2. If you were living in such a setting, would you use this system? Explain
    3. Do you have any concerns about this system?
    4. Do you know sign language?
  • Responses
    • Widespread confusion about what the system would actually be doing
      • the camera captures and translates the gestures of others
      • users do not need to learn sign language to use the system
    • Neutral on whether they would use the system
    • None of the participants knew sign language
  • Quotes

“How does this solve the problem of pollution and smog?”

“I am not sure the camera would accurately capture my hand gestures”

  • Inspiration
    • Clarify the world and the system
      • Make it known we are not solving health issues. We are solving communication issues.
      • Make the function of each hardware component explicitly known.
      • Be clear that users do not have to learn sign language.

 

Research and Inspiration From Past Work/Ongoing Projects

  1. An Approach for Minimizing the Time Taken by Video Processing for Translating Sign Language to Simple Sentence in English
    This research explains an approach to minimizing the time involved in video processing, which inspired us to use this approach in tandem with a prediction model for real-time video translation. This would help ensure that our camera recognizes gestures quickly enough to keep up with real-time communication.
  2. Utilizing Markov Chains for Speech and Gesture Predictions
    This research explains a prediction-model approach to translation, which inspired us to incorporate a Markov chain prediction model in our system to help with real-time visual translation, improving both the speed and accuracy of our translation system. (A minimal sketch of such a predictor appears after this list.)
  3. Kinect Sign Language Translator expands communication possibilities
    This project eliminates the need for a human translator, instead using a Kinect sensor to translate speech and gestures. Our project takes this idea, but instead of spoken speech it translates actual thoughts.
  4. Record Electricity from Your Muscles!
    This research gave us a better idea of how electrical impulses spread throughout the body, which inspired us to use an electrode transfer approach to simulate gestures with electrical impulses.
  5. TED: How to Control Someone Else’s Arm with Your Brain
    Scenario: the user moves their arm, and electrodes make someone else’s arm move as well. Our system is similar in that it would use brainwaves to move limbs, but they would be your own limbs, and you wouldn’t think about where you want to move them: you would think a regular sentence, and your limbs would feel appropriate sensations to suggest (or force) movement in that direction. We could incorporate similar electrodes into our system. (See the electrode-mapping sketch after this list.)

  6. Smart glove can turn sign language into text and speech
    This idea could be reversed to simulate/suggest the correct finger positions for a user to achieve their desired gesture once their thoughts (read as text) are detected.
  7. Watch Microsoft’s Seeing AI help a blind person navigate life
    This technology could be used to identify human targets for communication (as opposed to a poster of a person or some other non-human object of similar appearance).
  8. The ‘Not Face’ is a universal part of language
    If facial expressions like these, which are crucial in determining the meaning or implication of certain gestures, can be recognized on a wide scale, then this will offer a higher level of accuracy for our system.
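
As referenced in item 2, here is a minimal sketch of what a first-order Markov chain predictor could look like, assuming recognized gestures arrive as discrete word-like tokens. The corpus is invented; a real model would be trained on actual signing data.

```python
# First-order Markov chain over gesture tokens: count bigrams from a
# toy corpus, then suggest the most likely next gesture so the video
# pipeline can check likely candidates first.
from collections import Counter, defaultdict
from typing import Optional

corpus = [  # invented example "gesture sentences"
    ["hello", "how", "are", "you"],
    ["hello", "good", "morning"],
    ["good", "morning", "how", "are", "you"],
]

transitions = defaultdict(Counter)
for sentence in corpus:
    for current, nxt in zip(sentence, sentence[1:]):
        transitions[current][nxt] += 1

def predict_next(gesture: str) -> Optional[str]:
    """Return the most frequent successor of `gesture`, or None."""
    counts = transitions.get(gesture)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("good"))   # -> "morning"
print(predict_next("hello"))  # -> "how" (ties broken by first-seen order)
```

Even this bigram table shows how prediction narrows the camera’s search: the recognizer can test the predicted gesture’s template first and fall back to a full search only on a miss.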
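And as referenced in item 5 (and implied by the reversed glove of item 6), here is a sketch of how decoded text might drive electrode pulses that suggest, rather than force, a gesture. The channel names, intensity levels, and word-to-motion table are all hypothetical.

```python
# Hypothetical mapping from a decoded word to electrode channels that
# *suggest* (low intensity) or *force* (high intensity) a motion.
SUGGEST, FORCE = 0.3, 0.9  # invented intensity levels

GESTURE_MAP = {
    # word -> list of (electrode_channel, intensity)
    "hello": [("right_forearm", SUGGEST), ("right_hand", SUGGEST)],
    "yes":   [("right_hand", SUGGEST)],
}

def stimulate(word: str) -> None:
    """Send (simulated) stimulation commands for one decoded word."""
    for channel, intensity in GESTURE_MAP.get(word, []):
        print(f"{channel}: pulse at {intensity:.1f}")

stimulate("hello")
```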

*      *      *

[Photo: working diligently]

Critiques

Margaret Ellis (CS):

  • What is the purpose of the sign language route?
  • Why can’t we just text each other instead of sign language?
  • Promotes social interaction
  • Are there any technologies for gesture to sound currently?
  • Is video capture enough to read others’ gestures?

Response:

Her questions made us realize we are going for a more educational prototype. Gestures are universal and would stimulate communication in a way texting would not (texting might not even exist in our world, since people are quickly abandoning anything that requires reading). Additionally, texting would require dependence on a device for communication, and we’re hoping our device will only be needed until the user learns the gestures for themselves.

Zach Duer (ICAT):

  • Just pollution as a problem is not enough!
    • There are so many other problems this system can solve. Think bigger.
    • Problem and solution don’t link up.
  • Clarify the concept of electrodes.
  • Why gestures and not a different mode of communication?
    • What if multiple people converse simultaneously?
      • Eye tracking!
    • Devices can’t communicate directly? (Re: no texting)

Response:

His feedback was eye-opening. We realized our system could be utilized for so much more than what we were asking of it. However, given the prompt, we don’t think we should change our “problem,” because the other things it could solve are present in this world.

He brought up the possibility of eye tracking (as opposed to our closest-person method) for conversation linking, which we thought was appropriate since gesture communication requires you to be looking at someone anyway.
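
To illustrate the trade-off, here is a toy comparison of the two linking strategies — picking the nearest person versus picking whoever lies along the user’s gaze. The positions, gaze vector, and cosine-alignment rule are all invented for illustration.

```python
# Toy comparison of "closest person" vs. gaze-based conversation
# linking. Positions are 2-D and invented for illustration.
import math

people = {"Alice": (1.0, 4.0), "Bob": (0.5, 0.5)}  # name -> (x, y)
user = (0.0, 0.0)
gaze = (0.2, 0.98)  # roughly where the user is looking

def closest_person() -> str:
    return min(people, key=lambda p: math.dist(user, people[p]))

def gazed_at_person() -> str:
    # Pick whoever lies most directly along the gaze direction.
    def alignment(p: str) -> float:
        dx, dy = people[p][0] - user[0], people[p][1] - user[1]
        norm = math.hypot(dx, dy)
        return (dx * gaze[0] + dy * gaze[1]) / norm  # cosine similarity
    return max(people, key=alignment)

print(closest_person())   # -> "Bob"   (nearest)
print(gazed_at_person())  # -> "Alice" (in the line of sight)
```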

Matt Wisnioski (STS):

  • What’s your time horizon?
  • Problem and solution don’t match!
  • This problem might have broad-scale environmental solutions, which may not require your system!
  • What’s the core of the system? Gesture?
  • The time horizon (of 50-100 years) is too far. If your system can be achieved in 10 years, why target for 50 years?
  • Pollution is too small a problem.
  • Strong research, good drawings.
  • What have you learned from this?

Response:

He was very adamant that our system shouldn’t be as far in the future as we planned; however, we disagree, because brain-wave reading and translation is not, in our opinion, close to feasible in the next ten years (his estimate).

He also suggested we might not need this system if someone else solves the problem environmentally, which isn’t part of our world, so we didn’t think this was relevant (i.e., our world is one in which no one has solved the problem environmentally).

Also, pollution isn’t the problem by itself; it’s the lack of communication resulting from pollution’s negative health effects. We covered this, but maybe we need to make it more apparent.

Liesl Baum (ICAT):

  • Any control over what thoughts get expressed?
  • People adapt to sign language – the system will slowly become unnecessary.
  • How does the camera detect who is talking?
  • What about private conversations that need to be censored from people around?
  • Any thoughts about transmitting one person’s gestures to multiple people?
    • For public speaking, for instance.
  • Does the smog interfere with the gesture detecting camera?

Response:

She was the first person to bring up the idea of “gossiping” and its effects on our system. She had a very interesting feedback approach. This was helpful because we hadn’t considered how we could handle gossip (i.e., conversations that you are signing but don’t want other people to be able to see or translate). We don’t think this is possible to avoid, but we also don’t think it will be an issue, since you could turn your back to other people if really needed.

We realized here that we need to make it clear when presenting that multiple people can focus on one person, but one person can only focus on one other person.

Additionally, she thought it was a good idea for the system to slowly phase itself out, and agreed that, given current research, the camera should be able to see even in the smog.
