Gesture Assistant Timeline Calibration and Prototype Design

Calibration

  • drop eye-tracking, as suggested during critiques (too flashy/complicated, and not enough positive results)
  • advertise as an “adult system” to avoid issues with adolescents
  • move the timeline forward to a “2030” plan
    • originally we didn’t agree with this idea, but now that recent articles report that detecting and interpreting thoughts is possible, we think our system is more achievable
  • will address “broader impacts,” since our system is flexible enough to do more than just solve this pollution problem:
    • teaching sign language
    • assisting those with limited motor functions
    • universal communication (e.g. across cultures, deaf community, etc.)

Overall, we have decided to advertise our system as a preventative solution to severe pollution. The pollution will eradicate healthy speech within the next 20-50 years, so our system will preemptively solve this problem by revolutionizing communication, ultimately removing reliance on the environment, on verbal speech, and on trusting environmentalists and the government to solve this problem.

Timeline

Week Five: System refinement/polish/documentation of design process

  • April 19-20
    • develop video
      • scenes/script
      • everyone
    • develop presentation
      • Somn/Danny
    • formally write up refinements
      • Brittany
    • pick up/construct/test prototype
      • everyone
    • start producing the video
      • Sam with materials and editing
      • everyone else with acting
  • April 21
    • present evolution plan
      • TBD based on setup

Week Six: Stress testing/evaluation/videos

  • April 26-27
    • stress test the prototype
      • everyone
    • evaluation study among users
      • Brittany/Danny develop script
      • Somn/Sam conducting study
    • producing video
      • use “scare tactics” to convey impending doom of pollution and need for this system
      • show how this system is educational and will not produce dependent users
      • Danny/Brittany acting
      • Sam/Somn filming
  • April 28
    • system presentation
      • TBD based on setup

Week Seven: Final presentations

  • May 2: ICAT day
    • Brittany: behind-the-scenes translator, acting as the CPU
    • Danny demoing the TENS unit
    • Danny/Sam conversing with audience member
    • Somn evaluating audience
    • **Roles subject to change due to time availability on this day
  • May 3: final class exhibit and presentation
    • Same setup as ICAT day

Prototype Setup/Design

We want to go with a Wizard of Oz approach. The materials we ordered include:

    1. Headbands
      • to be used as the wearable head unit
    2. Bluetooth Headphones x 2 ($8.96)
      • user will place these in their ears and they will function as the speakers
      • will link a phone call so the “behind the scenes” CPU “unit” (actually a person) can instruct users on how to sign the gestures
    3. Electrodes x 8
      • these will be placed on the user’s upper extremities
      • won’t actually be connected or functional
    4. Teddy Bear Eyes
      • these will be embedded into the headband to simulate the camera
      • no, these won’t be functional teddy bear eyes
    5. TENS Unit
      • this won’t be a part of the “prototype” but will be used to demonstrate how it would feel to actually have the electrodes move your upper extremities

We will have one person act as the “CPU” unit and remain behind the scenes. The “camera” and wireless earphones will be attached to the headband and function as the wearable unit. We will also fashion and attach a non-functional placeholder CPU.

Once the user is wearing this, we will place placebo electrodes on their upper extremities.

The user will “think” what they want to say by choosing from a list of phrases that we will have displayed, saying it aloud so the “CPU” can hear. The “CPU” will then “translate” this and describe to the user what to do with their arms, hands, and/or fingers (simulating the electrode stimulation and control). Once this is complete, the other person’s “camera” will detect these gestures and the “CPU” will translate them by emitting the translation into their ear through the wireless headphones. This process can then be repeated.

The connections between the “CPU” and the users will be established through a phone call.

Additionally, there will be a board of phrases to choose from, and a drawing of upper extremities with labeled positions. This will cut down on confusion when directing users how to position their upper extremities.
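
Since the hidden “CPU” is essentially doing a table lookup in both directions (phrase → gesture instructions, observed gestures → phrase), the whole loop can be summarized in a few lines. Below is a minimal Python sketch of that loop; the phrase-board entries, gesture labels, and function names are illustrative placeholders, not part of the actual prototype, where a person performs both lookups over the phone call.

```python
# Minimal sketch of the Wizard-of-Oz translation loop. PHRASE_BOARD and the
# gesture labels below are hypothetical examples; in the prototype a hidden
# person performs both lookups over a phone call.

# Each phrase on the board maps to a sequence of labeled positions taken from
# the drawing of upper extremities.
PHRASE_BOARD = {
    "hello": ["right arm: position 2", "right hand: wave"],
    "thank you": ["right hand: chin", "right arm: extend to position 1"],
}

def translate_outgoing(phrase):
    """The 'CPU' hears the chosen phrase and returns step-by-step gesture
    instructions (spoken to the user, simulating the electrode control)."""
    return PHRASE_BOARD[phrase]

def translate_incoming(observed_gestures):
    """The other user's 'camera' reports the gestures, and the 'CPU' maps
    them back to a phrase, emitted through the wireless headphones."""
    for phrase, gestures in PHRASE_BOARD.items():
        if gestures == observed_gestures:
            return phrase
    return None  # gestures not on the board; ask the user to repeat

# One round trip of the conversation:
instructions = translate_outgoing("hello")   # the speaker is told how to sign
heard = translate_incoming(instructions)     # the listener hears "hello"
print(instructions, "->", heard)
```

The 2030 system would replace the first lookup with thought detection plus electrode actuation, and the second with camera-based gesture recognition, but the information flow is the same.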

 
