Final Reflection – Jonathan Downs

This class was definitely a venture down a new avenue for me. It didn’t seem focused on being a coding class so much as a creative class, and I thought that worked really well. For my group’s final project, 80% of the work didn’t even go toward coding. The majority of the work centered on the ideation process leading up to the final product. The work toward the final deliverable was mainly art-related, with group members focusing on the design and application of tattoos and using Innovation Space to make a creative final video. The “computing” part only came with the Red Herring app. At the end of it all, our final product really did seem like an embodiment of the name of the course, “Creative Computing.”

The final project was probably the most fun I have had working on a CS project in college. The group would get together on weekdays and over the weekend to work on the film. My photography skills undoubtedly tripled over that week, which basically means I now know how to open the case and put a camera on a tripod. Messing around with online facial recognition was really exciting as well. Seeing how simple designs on the face could work so well was eye-opening to me.

One of the best parts of the structure of the final project was how much time was devoted to the ideation process. In most classes, maybe one or two days would be given to the idea itself and then software development would promptly begin. Having multiple weeks to figure the idea out, pitch it, gather feedback, and tweak the idea made the idea work a lot better in the end.

Looking back on the classes themselves, the most memorable classes were the ones that were interactive. These include the classes where we “did stuff” such as littleBits and the classes where intense discussion occurred. One of the most memorable classes in my mind was where we came to class, sat in a circle, and discussed the reading for the whole class. The talking points quickly changed from the original focus, but all the topics that arose provided good discussion and a high level of thought from us.

I thought all of the projects provided insight and were fun to work on, although there was a bit of confusion sometimes. For instance, in the first project, I was unsure of which design process we were meant to use. We had gone over the AEIOU one, though, and that is what everyone was doing, so I guess that is the one, right? Well, not really. We could use any design process we wanted, I see now. But in an age where all our classes teach us a specific process and then dictate that we do our homework using that exact process, the openness surrounding the projects could be confusing. That seems like more of a problem with the other classes, though, so maybe a little bit of confusion can be a good thing.

Overall, I thought the class was very successful in promoting the creative process and the ideation of a product over the final product itself. There was a lot more “thinking” and a lot less “doing” than most classes. In my college experience, a project is something that a professor creates with a known input and output. There will be a set input, and the output better be the one that the professor expects. This leaves little or no space for the ideation portion. In this class, however, the input and output were not set. The results from each project varied wildly, giving the class a really unique feel. This uniqueness is what really stands out to me and will make me remember this class for years to come.

Group 7 – Design Process

Design Process:

  • World design

Early on we had a pretty good idea of the kind of world we wanted to craft. We chose a world where the citizens are constantly under heavy surveillance, taking inspiration from movies and shows like Minority Report and Person of Interest. We took this blatant invasion of privacy a step further, though: in our world, not only are people constantly watched by cameras with facial recognition software, they also have a GPS tracking chip implanted at birth. The government can then use the chip to track their activities at any time.

  • Rebellious Brainstorming

To rebel against this world, we decided to develop a way to obfuscate the data the government was collecting, with the goal of making the invasive technology useless enough to be dropped. The first idea was to stop the chip from transmitting a signal, thereby denying the system the data it needs to work. So we decided on creating a jamming device to block the signal. This was unfeasible, though, as it is illegal to make or own such a device in the US.

  • Idea refinement

We also realized that the absence of data is itself data, and that we would need a different method to shut down the system. So, rather than block the signal, we chose to spoof it: sending false GPS data to feed the system the wrong location for the person. Popping up 10 miles away in a moment is just as flag-raising, though, so our solution would need to move the person’s apparent position to the desired destination at a reasonable pace. During this time, a way to fool the facial recognition in the cameras was suggested to us: CV Dazzle. The idea is that by applying dark makeup in certain spots and designs on the face, the software can be fooled into thinking there is no face. This is a subject already being researched, and for our world we imagine this sort of design being marketed as a kind of high fashion to proliferate it through the populace. We would create temporary tattoos that apply these designs with a single stamp. A person using the aforementioned device and the tattoos would then be able to vanish from the government’s eye whenever they wished.
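The pacing idea above can be sketched in a few lines: instead of teleporting the spoofed coordinates, step them toward the destination at walking speed each tick. Everything here (the names, the speed constant, the flat-earth distance approximation) is an illustrative assumption, not a real spoofing implementation.

```python
import math

WALK_SPEED_MPS = 1.4        # ~5 km/h, a plausible walking pace (assumed)
METERS_PER_DEG = 111_320    # rough meters per degree (flat-earth approximation)

def next_fake_fix(current, destination, dt=1.0):
    """Step the spoofed (lat, lon) toward destination at walking speed."""
    dlat = destination[0] - current[0]
    dlon = destination[1] - current[1]
    dist_m = math.hypot(dlat, dlon) * METERS_PER_DEG
    step_m = WALK_SPEED_MPS * dt
    if dist_m <= step_m:
        return destination  # close enough: arrive this tick
    frac = step_m / dist_m
    return (current[0] + dlat * frac, current[1] + dlon * frac)
```

Broadcasting one fix per second from something like this would make the fake track look like an ordinary pedestrian rather than a teleporting anomaly.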

  • Prototyping

The prototyping design process is documented below.

Red Herring (The App)

We wanted the name to be something that fit the function of our app. In our original idea, the watch only blocked the signal and “made you invisible to the government.” We came up with our first name, RabbitHole, to fit that function. However, we got a lot of feedback that the government would easily notice random GPS signals disappearing from their tracking, so we changed the design to fit the feedback. Our new idea was to jam the signal but also create a fake signal that sends false location data to the government. At this point, we looked for names that fit the new function of the device. Since the new functionality was mainly to mislead the government, we came up with the name Red Herring.

Wireframe and Ideation:


Our final Red Herring product looked like the following:

CobraCover (The Tattoos)

The name for this was inspired by designs on a cobra that are meant to trick its enemies. Similarly, the function of our product is to let people trick the government by making its facial recognition unreliable, ultimately bringing the system down.

We made some designs that were inspired by existing designs that trick facial recognition.

Sample Designs:

Our final product, applied, looks like:

We tested these designs through a site called Face++. This site lets you upload a picture, and it will identify faces and facial features (race, sex, head tilt, etc.). The results of our testing are below. The blue dots are meant to match up to the eyes, nose, and corners of the mouth.

As you can see from our testing, the facial recognition never identified the location of all major features on the face with 100% accuracy, although it did identify a face with decent accuracy 2 out of the 3 times. One time, however, the facial detection did not detect a face at all, showing that these types of designs do have some merit in breaking facial recognition.
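The tally we did by eye can be expressed as a tiny script. The detection results below are hypothetical stand-ins for the JSON a service like Face++ returns (the `faces` field name and confidence values are assumptions for illustration), matching our 2-of-3 outcome.

```python
def tally_detections(trials):
    """Count how many trial photos had at least one face detected."""
    detected = sum(1 for t in trials if t["faces"])
    return detected, len(trials)

# Stand-ins for our three CobraCover test photos:
trials = [
    {"faces": [{"confidence": 0.72}]},  # face found, landmarks slightly off
    {"faces": [{"confidence": 0.65}]},  # face found
    {"faces": []},                      # design broke detection entirely
]

detected, total = tally_detections(trials)
print(f"face detected in {detected} of {total} photos")
```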

Logos

CobraCover Logo Design:

cc-logo

Red Herring Logo Design:

red-herring-01

Video

Early Video Concept:

  • Early evening, man walking on crowded (if possible) street (8 seconds)
  • Man stops and touches back of head, thinking (4-5 seconds)
  • Camera can cut to a dark room where people are watching a giant map that tracks people (5 seconds) (not necessary)
  • Looks around and sees camera, keeps looking more and more frantically, camera keeps cutting to different security cams that are watching between shots of person looking around (10 seconds)
  • Takes out tattoo from pocket, applies to face discreetly (8 seconds)
  • Show tattoo up close (5 seconds)
  • Man looks at watch (5 seconds)
  • Camera gets images of watch interface (8 seconds)
  • Back to man walking off (disappearing into crowd if crowd == true)

Storyboard:

story board

Our final video did not get all the scenes that we originally wanted, but most scenes from our storyboard did make the final cut. Our final video can be found here.

SPOT Survey

Please fill out the SPOT survey at the start of class: https://spot.tlos.vt.edu

D-Dome (Group 8) Design Documentation

logowithcolor

The Pitch

We initially described our D-Dome system as a “deployable self-defense mechanism for a near-future, highly dangerous environment.” For our original exposition, we crafted an anarchical world that had formed after a severe political revolution. Inhabitants of this world were nothing short of savage towards one another due to the limited supply of natural resources and people’s inability to gather everything they need to survive. The crime-ridden nature of this world led us to D-Dome. In short, we wanted to develop a system that would provide people with instantaneous safety and security.

Essentially, we envisioned D-Dome as a self-deploying shield that would activate as soon as the wearer’s heart rate rose past a specific threshold, denoting that the wearer was under high pressure or stress. Once the system is in motion, the user is enveloped by a 360-degree protective barrier. Once the user’s stress level has returned to a relaxed state, the shield folds back into itself, allowing the user to continue on their way.
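The trigger described above amounts to a threshold with hysteresis: deploy above one heart rate, retract only once well below it, so the shield doesn’t flap open and shut near a single cutoff. The BPM values here are illustrative assumptions; a real device would need per-wearer calibration.

```python
DEPLOY_BPM = 120   # above this, assume high stress: deploy (assumed value)
RETRACT_BPM = 90   # below this, assume relaxed: retract (assumed value)

def update_shield(deployed, heart_rate_bpm):
    """Return the new shield state for the latest heart-rate reading."""
    if not deployed and heart_rate_bpm > DEPLOY_BPM:
        return True    # stress spike: deploy the barrier
    if deployed and heart_rate_bpm < RETRACT_BPM:
        return False   # wearer has calmed down: fold back in
    return deployed    # in the hysteresis band: hold current state
```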

https://creativecomputing.wordpress.com/2016/03/24/group-8-final-project-pitch/

Re-Draft of System:

After our radical initial pitch, we realized that there were too many hypothetical scenarios that we would not be able to adequately address given the time and scope of this class. Many members of the class raised concerns about the feasibility of the immediate deployment of the device, the strength of the material, and whether people would actually feel “protected” by the device in long-term situations.

We still wanted to keep our government-devoid world with a collapsed economy, but we withdrew our emphasis on an always dangerous, hostile environment (though raids are still a possibility). Our new idea still involved a wearable, but this time it was a suit, named D-Dome, that could automatically inflate into a tent to serve as a temporary shelter. This would be vital in our world, where resources are rapidly depleting and frequent migration is necessary for an individual to locate himself near pools of resources. Additionally, the tent would feature extra fabric to link other D-Domes, enabling nomads to establish communities. This is an act of resistance/revolution against the current system of self-reliance and hostility towards one another.

IMG_0510.JPG

IMG_0511.JPG

Research:

Primary

We created a survey to gather feedback on how people perceive camping and socializing with strangers while camping. We asked for general information about who the respondents are, as well as their camping knowledge. This information allowed us to scope who would use this product today. We mainly found that people are willing to socialize while camping but have a hard time trusting someone within their campsite. The other primary research we performed was interviewing a mechanical engineer and a materials engineer, which helped build our idea. They pointed us toward what material we would use and how it would transform from wearable suit to shelter.

Secondary

Once we had developed our idea of what our system would solve within the world we created, we conducted research on various topics, including materials and technologies, as well as research that helped bring depth to our world. For the D-Dome product, we needed to know what materials it would be made of and how it would be held together. We found similar ‘survival suits/kits’ currently on the market, which led us to a suit that transforms into a shelter. The next big issue we needed to address was a power source to deploy the dome as well as the security system. We found clothing with solar panels stitched into it, which feeds a battery that holds the power. Because our world is focused on a future where society is collapsing, we gathered research on how the homeless community lives. The mistreatment of and lack of care for this population helped guide our world as well as our product. By including the connector between D-Domes, we hoped to build a sense of community that changes people’s view of others, thereby changing the way they are treated.

Critiques & Refinement of System:

We already responded to the feedback we received in our guest critiques here: https://creativecomputing.wordpress.com/2016/04/12/group-8-critique-responses/

Essentially, the refinement of our system involved redefining our world’s time period to justify the demand for such a product in desperate times. Here’s a description of our updated world:

Our company created a suit that conveniently and effortlessly transforms into a temporary shelter, with the ability to link tents together to metaphorically and physically establish communities. It was originally marketed towards groups of enthusiast campers and homeless communities who would appreciate the convenience of quickly establishing a shelter as needed. For campers, the primary motivator of the linking feature was to ward off bear attacks. Statistically, bears are less likely to attack groups in the wilderness than individuals traveling on their own. For the homeless communities, if several homeless individuals were linked up, the authorities of an area would be less likely to disband these large groups and address this problem in a more benevolent way.

However, research has shown that the world is on the verge of economic collapse. Our company decided this was a good opportunity to market the product for a “post-apocalyptic” world where resources are scarce and survival depends on migrating frequently. Communities that band together for mutual survival will last much longer than individuals attempting to subsist on their own (like the bear analogy). Therefore, our current product was the perfect fit for a future world where resources are rapidly depleting, forcing the population to migrate often to locate themselves near pools of resources. Additionally, the linking feature would lend itself well to individuals establishing communities for mutual survival.

Prototype Design and Implementation:

20160426_234604.jpg

20160426_223550.jpg

Since the time remaining after we confirmed our idea was somewhat limited, we knew that creating one full-scale prototype that could actually demonstrate the suit-to-tent transformation just wasn’t realistic. Instead, we settled on making two separate doll-scale prototypes to showcase the aesthetic and functionality of both the suit and tent forms.

The suit prototype was simply a representation of what the physical suit would look like when worn. To implement this, we first cut out a large piece of canvas cloth comparable to the amount of cloth used for the walls of the tent prototype. Then we fitted the cloth to our doll by folding back and pinning the excess cloth with safety pins. Next, we measured and cut out the pieces of cloth for the sleeves and hood and attached them to the suit with a hot glue gun. Finally, to represent the straps that would let a D-Dome user transform the suit into its backpack form, we glued two adjustable velcro straps to the back of the suit prototype.

IMG_0517.JPG

20160427_002117.jpg

Additionally, we wanted to represent our Intrusion Detection System within our abstract prototype. When D-Domes are linked, a circuit is formed between the shelters, not dissimilar to the “wagon wheel” formations of the American frontier. Once the D-Dome circuit has been created, any pressure applied from outside the community onto a tent wall (i.e., an outsider attempting to rip open a D-Dome’s walls) sounds an alarm and flashes lights so that people living within the D-Dome community are alerted to the intrusion. To demonstrate this idea within our prototype, we decided to utilize the littleBits kit. We wired LEDs, a pressure sensor, and a harsh buzzer throughout the hidden wall of the tent. The pressure sensor is located directly behind the wall that “links” the two D-Domes together. When someone presses against this wall, LEDs inside the D-Domes light up and the buzzer sounds continually to demonstrate the overall alarm system.
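The littleBits circuit is wiring, not code, but its logic is simple enough to sketch in Python: one pressure reading on the linking wall drives both outputs. The normalized threshold value is an illustrative assumption.

```python
PRESSURE_THRESHOLD = 0.3   # normalized sensor reading, 0.0 (idle) to 1.0 (max)

def alarm_outputs(pressure):
    """Map a pressure reading on the linking wall to (leds_on, buzzer_on)."""
    triggered = pressure >= PRESSURE_THRESHOLD
    return triggered, triggered
```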


Video:

IMAG0605.jpg

In lieu of a Kickstarter-esque trailer for our product, we decided to craft a small story concerning the characters used in our various prototypes: Ken and Barbie. Essentially, we wanted to convey the idea that in our imagined reality, people do not last long by themselves. However, the technology behind D-Dome enables people to commune around one another, promoting an overall sense of camaraderie. The initial storyboard called for a small story that takes place over two days. On day one, Ken is wandering alone through a hostile environment. The D-Dome strips away any concerns of shelter he might have on the first night, though he is still left alone to fend for himself. During the course of the following day, Ken meets Barbie and the two “link” up their D-Domes with the knowledge that they will be safer together.

 

Evaluation:

For our evaluation, we talked to other engineering majors whose expertise applied to our prototype and deliverable. We had them fill out a survey comprising a series of design questions: for example, the ideal materials for D-Dome, how the shelter should deploy automatically, and what the primary power source should be. This was to surface other possibilities we might not have thought of, as well as to reinforce the design process our group had already gone through. For the second part of the evaluation, participants watched the short film created about D-Dome, followed by a few questions checking that the message came across and that the problem would be solved by our prototype. The responses backed up the ideas and thoughts that went into the making of D-Dome, for instance being able to carry your shelter wherever you go and being able to create a sense of community.

Group 1 – PubliCam

logo_003

Design Process

  • World
    • For our world, we brainstormed and ended up with the idea of a future where a mesh network protects users from the dangers associated with data mining and hacking. It also stops corporations from gathering users’ personal data for free. This stemmed from the thought of Facebook collecting information while the user is elsewhere on the Internet in order to show advertisements once said user is on Facebook. This idea did not quite pan out because our world was not well defined, and we decided that a prototype would not be feasible given the constraints. We restarted the brainstorming process and came up with a world where, although surveillance is high, video streams are hidden from public view and sometimes altered to push a certain point of view. We decided to come up with a system that would provide a way for any user to access video streams around the world. Our system provides inexpensive, high-quality cameras that record at all times. These cameras send a live stream to a simple web interface.
  • Project
    • For the actual project/prototype, we started out brainstorming ideas for a website layout and came up with a storyboard, which we used to create a wireframe. The wireframe and idea were presented to various people who critiqued them and gave us feedback. We used that feedback to refine our system and come up with a well-designed web interface that provides a live stream when the selected camera node is active. For the prototype, we use a third-party service called Bambuser that allows our smartphones to stream live video that embeds into our interface.

Research

  • For our prototype, there were various things we had to research to get the final prototype designed and functioning. For one, some of us had to brush up on web design skills. We also conducted a market analysis of current similar products such as Periscope and Google Maps, to see how users react to the interface and for Periscope especially, the concept itself.

Prototype/Web Interface


This slideshow depicts the flow of navigation for the web interface. It starts out zoomed out to a map of the United States and allows the user to scroll around as well as zoom in and out. The slideshow shows a zoomed in view of Blacksburg next and then a detailed view of one of the markers. Once the link in the detailed view of a marker is clicked, a camera feed page is shown and the user can click play to view a live stream of that certain camera node.
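Behind the marker-click flow is essentially a lookup from a camera node to its embedded stream. This sketch uses made-up node data and a placeholder URL rather than a real Bambuser embed.

```python
# Hypothetical camera-node registry; coordinates and URL are placeholders.
CAMERA_NODES = {
    "blacksburg-main-st": {
        "lat": 37.2296,
        "lon": -80.4139,
        "stream": "https://example.com/stream/blacksburg-main-st",
    },
}

def stream_for(marker_id):
    """Return the live-stream URL to embed when a marker is clicked."""
    return CAMERA_NODES[marker_id]["stream"]
```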

Video


Evaluation Plan

  • Focus Group/Individual Interview
    • Set up the web page and allow the user to essentially “play” around with the user interface
    • After about 5 minutes, record any initial comments/inquiries that the user may have
    • Once all notes are recorded, have the user go through a series of specific tasks to see what/where they click
    • Zoom in closer to a specific area on the map
    • Navigate to the camera view of a desired point on the map
      • Points are represented by markers
    • Once at the camera screen, navigate home
    • Use each of the filters appropriately
      • Most Viewers
      • Closest
      • Most Recently Added
    • Throughout each of these tasks, take note of any discrepancies/errors and record them appropriately
    • If there is any confusion along the way, ask the user to describe why and try to guide them along in the right direction but not necessarily give them the answer right away
    • Gather notes and thank the user for their participation
  • Analysis
    • Put together all notes and look for any correlations among the notes
    • If needed, make changes to the interface to make it more user friendly and more intuitive depending on the user feedback
    • If satisfied, move on to the next step in development process
    • If unsatisfied, go back and repeat with a different group/individual
  • Results
    • Although we did not get to speak with a lot of people about our layout, those who did check it out responded positively. Thanks to our simple layout and the few pages to click through, users were able to correctly navigate to the camera stream page. One suggestion that came up was adding a home button; however, this arose when the user did not realize that the logo in the bottom left was also a button linking to the home page.
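The three filters the tasks exercise (Most Viewers, Closest, Most Recently Added) each boil down to sorting the camera-node list on a different key. The node fields and sample data below are illustrative assumptions.

```python
import math

# Hypothetical camera nodes; "added_day" is a stand-in timestamp.
nodes = [
    {"id": "a", "viewers": 12, "lat": 37.23, "lon": -80.41, "added_day": 3},
    {"id": "b", "viewers": 40, "lat": 37.25, "lon": -80.40, "added_day": 1},
]

def most_viewers(nodes):
    return sorted(nodes, key=lambda n: n["viewers"], reverse=True)

def closest(nodes, lat, lon):
    # straight-line distance in degrees is enough to order nearby nodes
    return sorted(nodes, key=lambda n: math.hypot(n["lat"] - lat, n["lon"] - lon))

def most_recently_added(nodes):
    return sorted(nodes, key=lambda n: n["added_day"], reverse=True)
```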

Group 2: UniTongue Design Process

 

unitongue

 

Design Process

World → Resistance/Revolution → Primary/Secondary Research → System → Refinement

We first designed a world where pollution has become so severe that it has almost completely eradicated safe, healthy verbal communication outdoors. Environmentalists and governments have not made any efforts toward solving this problem, leaving people who want to communicate openly (no face masks) exposed to nasty health effects.

Once we had this world, we had to narrow down our resistance and revolution. This wasn’t an easy task for us, and it developed and matured along the way, especially with the help of guest critics. Overall, we decided that our system should be a preventive solution that revolutionizes communication (with an educational approach), ultimately resisting the environment, verbal speech, and trust in environmentalists and the government to solve our world’s problem.

During primary research we conducted ten interviews and sent out a survey that received five responses. From the interviews we received an overall positive/neutral rating and some common reservations. These included the fear of accidentally expressing private thoughts, movement glitches, and the inability to properly express emotions. These reservations inspired additional system functionality, including the “suggestion only” feature (as opposed to forcing users into making a gesture) and “emotional cues” (emitted into users’ ears to cue them when to make facial expressions as well). From the survey we received an overall negative/neutral rating and generally just mass confusion about what the system would actually be doing. From here we realized that we really needed to narrow down our functionality and components, since the system was not coming across well on its own. We needed to make it known that we are not solving health issues; we are solving communication issues.

Secondary research helped us realize that if we broke our system down into smaller components, the functionality of those components was either currently achievable or really close to being so. Reading brainwaves was the furthest away by far (we’re estimating 10 years). Additionally, we looked at articles on current smog pollution levels in cities around the world and predictions for rising pollution levels (examples for six major cities can be seen below). We determined that pollution will eradicate healthy speech within the next 20-50 years, which helped us establish our world timeline, and eventually our system timeline. Because we wanted this system to be preventative, and our world is 20-50 years in the future, we are shooting for a 2030 plan.

Finally, for our prototype we designed a gesture assistant system. Along the way we considered multiple hardware units with varying functionality, but in the end we knew we had to narrow down the components and functionality to make it more desirable and less overwhelming to potential users while not distracting them from the actual purpose. We finalized our ideal system to consist of the following six components:

  1. Camera
    • detects the gestures of other people
  2. Speakers
    • emits the gesture’s translation into the wearer’s ear
  3. Electrodes
    • controls/stimulates the user’s upper extremities to create the desired gestures
  4. CPU
    • the main unit controlling all functionality including translation and proximity detection
  5. Wearable head unit
    • this will host all hardware components besides the electrodes
  6. EEGs
    • these will be used to detect brainwaves/thoughts
    • we won’t showcase these on our prototype since they would be built into the wearable head unit

With this design and approach, users would not be dependent on our system for the rest of their lives. They would eventually become proficient enough in this form of sign language to sign the appropriate gestures themselves without system stimulation.

overall

Smog examples, prototype sketches, storyboards, and primary/secondary research for our system, as presented to critics.

Further, with prompting from critics, we made system refinements and thought about some of the broader impacts of our system, since it is flexible enough to solve more issues. These include:

  • teaching sign language
  • assisting those with limited motor functions
  • universal communication (e.g. across cultures, deaf community, etc.)

Keeping all of this in mind, we went on to design how we wanted to prototype this in today’s world (not all functionality can be present). Additionally, we wanted to make sure we had a memorable name and video to promote our system; these were the last three design steps, all detailed below.

Prototype Design → Branding → Video

Prototype Setup/Design

We first created a higher-fidelity prototype design, shown below, as opposed to the sketches we had been using. Then we worked out how we wanted to prototype this design.

prototype-fullbody-2-logo

Medium-fidelity design of our system prototype, officially known as UniTongue. This is the design we based our actual prototype on.

We want to go with a Wizard of Oz approach to achieve full “functionality” of the system. This allows us to showcase the desired functionality in an affordable manner. The materials we will be using for the prototype include:

materials-ordered

From left to right: wireless earbuds, googly eyes, a TENS unit, extra electrodes, and headbands. These were all the materials we ordered, used to create two system prototypes. Not pictured: two more earbuds and two more googly eyes, used to create two more system prototypes (for a total of four).

  1. Headband
    • to be used as the wearable head unit
  2. Bluetooth Headphone
    • user will place it in their ear and it will function as the speaker
    • will serve to link phone calls so the “behind the scenes” CPU “unit” (actually a person) can instruct users how to sign the gestures
  3. Electrodes
    • these will be placed on the user’s upper extremities
    • won’t actually be connected or functional
  4. Googly Eyes
    • these will be attached into the headband to simulate the camera
    • no, these won’t be functional eyes

      camera-lens

      “Camera lens” made from googly eyes. We made four total.

  5. TENS Unit
    • this won’t be part of the “prototype” itself but will be used to demonstrate how it would feel to actually have the electrodes move your upper extremities
  6. Cardboard CPU and power supply
    • this will not be functional, but will just be attached to the head unit to showcase what they might look like

      cpu

      The outline for our “CPU” and “power supply” before folding. These were made from the headband packaging. Additional ones were made from card stock. We made eight total.

We will have one person acting as the “CPU” unit and remaining behind the scenes. The “camera” and wireless earphones will be attached to the headband and function as the wearable unit. We will also fashion and attach a non-functional, placeholder CPU.

Once the user is wearing this, we will place placebo electrodes on their upper extremities.

The user will “think” what they want to say by choosing from the listed phrases we will have displayed and saying it aloud so the “CPU” can hear. The “CPU” will then “translate” this and describe what to do with their arms, hands, and/or fingers (simulating the electrode stimulation and control). Once this is complete, the other person’s “camera” will detect these gestures and the “CPU” will translate by emitting the translation into their ear from the wireless headphones. This process can then be repeated.

The connections between the “CPU” and the users will be established through a phone call.

Additionally, there will be a board of phrases to choose from, and a drawing of upper extremities with labeled positions. This will cut down on confusion when directing users how to position their upper extremities.
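The “CPU” person’s job reduces to two table lookups, one per direction. The phrases and gesture descriptions here are tiny hypothetical stand-ins for the board of phrases.

```python
# Hypothetical phrase board and gesture instructions (illustrative only).
PHRASE_TO_GESTURE = {
    "hello": "raise right hand, palm out",
    "thank you": "flat right hand moves from chin outward",
}
GESTURE_TO_PHRASE = {g: p for p, g in PHRASE_TO_GESTURE.items()}

def instruct(phrase):
    """What the 'CPU' tells the signer to do with their upper extremities."""
    return PHRASE_TO_GESTURE[phrase]

def translate(gesture):
    """What the listener's earbud plays back once the 'camera' sees it."""
    return GESTURE_TO_PHRASE[gesture]
```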

And specifically for presenting, we will all be wearing the prototype (we made four versions), as shown below.

 

IMG_7695.JPG

Each group member is wearing a UniTongue prototype.

Branding

We came up with branding ideas by brainstorming. Ideas included:

  • RosettaTongue, UniTongue, SoloTongue, or GestureTongue
  • Result: UniTongue

Next we made a logo:

unitongue

Branding logo for our system. Each “u” is meant to look like a tongue.

We went one step further and coined a slogan:

“UniTongue: expression without speaking. Tongue not included.”

Video

UniTongue-Video

We began designing our video by constructing storyboards of how we could showcase the prototype, deciding on a particular video style, working out a script and its scene timings, and then filming and editing. Each step is detailed further below.

Storyboard

  • without the system to demonstrate negative health effects

    storyboard1

Storyboard idea for demonstrating the negative health effects associated with not using our system. In the end, we decided not to use this in our video.

  • with the system to demonstrate functionality


Storyboard idea for demonstrating the functionality of our system. In the end, we decided to use this idea for our video.

     

Originally we thought about using both storyboards: one to demonstrate the negative health effects of not using our system, and one to demonstrate the positive results of using it. However, we decided it would be best to focus solely on demonstrating the functionality of the system, so as not to distract viewers by dwelling on health effects.

Furthermore, to make the video more relatable, we decided to photograph the latter scenario (prototype functionality), as shown below.


We later edited the images with thought bubbles and descriptions to convey the “functionality” before adding them to the video.

Style

We first thought about making a stop-motion video, then quickly shifted to a live-action video that we would frequently pause to narrate the inner workings of our prototype with “speech bubbles.” Once we started looking up examples and resources for that style, another idea came to us: a “Because A then B. Because B then C. … Because Y then Z.” type of video, which played into the dramatic effect we wanted. However, we quickly realized this wouldn’t be informative enough.

So, we took a step back and decided to design the script first. Once we did this the styling fell into place. We decided to have pictures matching the subject of the script. These consisted of polluted cities, sketches and designs of our prototype, actual pictures of the finished prototype, a storyboard to demonstrate functionality, and a clip of our prototype being worn. The style of the video will be mostly educational: it presents a problem in our world and showcases our system as the solution.

Script/Design

The full script with planned narrations and transitions

  • 5-10 seconds: Cover our world and present the pollution problem and timeline
  • 5-10 seconds: Cover system components and functionality (at a very high level)
  • 10-20 seconds: Demonstrate functionality with storyboard and prototype showcase
  • 3-5 seconds: Tagline
  • 5-10 seconds: Credits and Video Acknowledgments 

Filming/editing

  • Photoshop was used as needed for particular images
  • All video was filmed using an iPhone
  • Audio was recorded separately (also using an iPhone)
  • The entire video was created with iMovie
  • Once completed it was uploaded using Vimeo

We decided to film a 360-degree view of our prototype. To do this we rotated the camera around someone wearing the prototype as they stood still. Additionally, we started from afar and zoomed in closer on the user for another shot. This allowed us to focus on the wearable head unit (where the majority of the components are) while still displaying the entire prototype, including the electrodes on the arms.

Further, we took pictures of each transition for the storyboard. Later, the thought bubbles and descriptions were edited in before adding them to the video.

Lastly, we borrowed a seven-second clip from Guardian News (0:00–0:07) to showcase actual smog.

Evaluation Plan

Full Evaluation Survey

We started our evaluation plan by working out the main areas we wanted to learn more about: usability, aesthetics/design, and plausibility of functionality. We created a survey with 13, 6, and 6 questions in each category, respectively.

We really wanted to learn whether people would use the system, whether they would want to, how they would use it, and why. For aesthetics/design, we wanted to gauge how well we did at combining the functionality into one unit. Does the design affect whether users would want this system? Would users use it if it came in a different form? Lastly, for plausibility of functionality, we wanted to verify whether people believed we picked a feasible timeline and which features might have been too ambitious or not ambitious enough.

The plan is to evaluate our system with previous interviewees and more individuals outside of their majors and/or age ranges.

We also did some impromptu evaluation of peer reactions to the electrodes moving their fingers, as pictured below.

electrodes1

Using the TENS unit to evaluate peer reactions to having electrodes influence the behavior of their fingers. This is what we will use to demonstrate the “suggestive” movement simulated by our UniTongue system to create the appropriate gestures.

Results

Raw Data from Interviews

We were able to evaluate eight participants. The prototype was showcased to each of them and they were able to wear it or examine it for as long as they liked. 

Analysis

Evaluation-Word-Cloud

Word cloud of the raw data (without the questions) from the evaluations.
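A word cloud like this is built from word frequencies in the response text. A minimal sketch of that counting step (the stopword list is an assumption; real word-cloud tools use a larger one):

```python
from collections import Counter
import re

# Assumed small stopword list; generators like wordcloud ship a fuller set.
STOPWORDS = frozenset({"the", "a", "and", "of", "to", "that", "it"})

def word_frequencies(text):
    """Count word occurrences, ignoring case, punctuation, and stopwords,
    producing the frequency table a word-cloud generator would render."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

# Example with placeholder response text (not actual evaluation data):
freqs = word_frequencies("Encrypted data made me feel better. Encrypted is good.")
```

Stripping the survey questions first, as we did, keeps the cloud weighted toward the participants’ own vocabulary rather than ours.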

Initially, a large number of participants distrusted the system, fearing their “brains being hacked,” but many said that knowing the data transfer would be encrypted made them feel much better about it. Also, while almost all participants believed the device accomplishes its purpose, many also believed it could serve other purposes that would be a better use of the system. Many participants thought the device might be a bit uncomfortable at first, but almost all believed that after a few uses they would become accustomed to it and much more comfortable using it. A few people also expressed a fair amount of interest in different or aesthetically modifiable versions of the headset (such as a glasses version, or a changed look for the electrodes).

Overall, users predominantly said they would use our system as a solution to this problem, and especially as a solution to some of the aforementioned broader impacts.

ICAT Day 

ICATDAY-1

Professor Aisling Kelliher experiencing our electrode simulation.

ICATDAY-3

An ICAT Day attendee experiencing our electrode simulation. 


Group 6 – Timeline & Parts

TheOneRing

Parts List:

Flora Power guide:  https://learn.adafruit.com/getting-started-with-flora/power-your-flora

3D Print Ring-holder for Accelerometer – Dominic

 

Timeline:

  • Tues 19:
    • Have most of app core functionality done by class (UI and back-end)
      • Music [Nick] – Functionalities to play, pause, next, and previous songs
      • Text Reading [Peter] – Acquire text messages from phone and read them out
      • Weather [Dominic] – Acquire weather information from online source and read them out
      • Bluetooth [Kelvin] – To connect hardware to app
    • Video storyboards/script
      • Make sure world is complete
      • Draw out storyboards for video
  • Thur 21
    • Putting together Flora-Accelerometer-Ring-Bluetooth and 3D print ring
    • Test output data of hardware and begin to connect to app
    • Complete video plan
    • Begin filming the video, then editing and adding any needed special effects
  • Tues 26
    • Finish app testing and complete the prototype
    • Complete video 
  • Thur 28
    • Finalize prototype
  • Mon 2
    • Presenting
  • Tues 3
    • Presenting
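The Flora–accelerometer–app link above could be exercised with a small parser on the app side. Assuming, hypothetically, that the Flora streams each reading as a comma-separated `x,y,z` line over the Bluetooth serial link (this format is an assumption, not the group’s actual protocol), the parsing might look like:

```python
def parse_accel_line(line):
    """Parse one 'x,y,z' accelerometer reading assumed to arrive from the
    Flora over the Bluetooth serial link. Returns an (x, y, z) tuple of
    floats, or None if the line is malformed."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None  # wrong number of fields
    try:
        return tuple(float(p) for p in parts)
    except ValueError:
        return None  # non-numeric field
```

Rejecting malformed lines up front keeps noisy or truncated Bluetooth reads from crashing the app during testing.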