Category Archives: final project

OpenNews 3D Prints


After several weeks of waiting, we finally got 3D prints in for our enclosures! We had hoped to have these ready for ICAT day; however, the prints ended up taking much longer than estimated by the library’s Design Studio. The Raspberry Pi enclosure ended up needing to be printed twice after the first 3D printer malfunctioned and printed an unusable hunk of plastic. As you can see in the photos below, the prints were still imperfect, but fit together well and would have been ideal for a quick prototype demonstration.

We modeled an enclosure for our buttons with space for LEDs and wires. We also downloaded a 3D model for our Raspberry Pi enclosure from Thingiverse.


Ultimately, we were able to make do with our quick cardboard version of the enclosure, which was more than sufficient for our purposes, especially considering the technical issues we encountered during our presentation that made the buttons less than functional.

In the future, it would be a better idea to allow several weeks for printing to account for potential 3D printer delays and malfunctions.


Group 1 – PubliCam


Design Process

  • World
    • For our world, we initially brainstormed a future with a mesh network that protects users from the dangers of hacking and data mining, and stops corporations from harvesting users’ personal data for free. This stemmed from the thought of Facebook collecting information while a user browses the Internet in order to target advertisements back on Facebook. The idea did not pan out: our world was not well defined, and we decided a prototype would not be feasible under the given constraints. We restarted the brainstorming process and arrived at a world where surveillance is pervasive, but video streams are hidden from public view and sometimes altered to push a particular point of view. In response, we designed a system that gives any user a way to access video streams around the world: inexpensive, high-quality cameras that record at all times and send a live stream to a simple web interface.
  • Project
    • For the actual project/prototype, we started by brainstorming ideas for a website layout and came up with a storyboard. We used this storyboard to develop a wireframe. The wireframe and idea were presented to various people, who critiqued them and gave us feedback. We used that feedback to refine our system into a well-designed web interface that provides a live stream when the selected camera node is active. For the prototype, we use a third-party service called Bambuser, which lets our smartphones stream live video that embeds into our interface.


  • For our prototype, there were various things we had to research before the final version was designed and functioning. For one, some of us had to brush up on web design skills. We also conducted a market analysis of similar current products, such as Periscope and Google Maps, to see how users react to their interfaces and, for Periscope especially, to the concept itself.

Prototype/Web Interface


This slideshow depicts the flow of navigation for the web interface. It starts out zoomed out to a map of the United States and allows the user to scroll around as well as zoom in and out. The slideshow shows a zoomed in view of Blacksburg next and then a detailed view of one of the markers. Once the link in the detailed view of a marker is clicked, a camera feed page is shown and the user can click play to view a live stream of that certain camera node.



Evaluation Plan

  • Focus Group/Individual Interview
    • Set up the web page and allow the user to essentially “play” around with the user interface
    • After about 5 minutes, record any initial comments/inquiries that the user may have
    • Once all notes are recorded, have the user go through a series of specific tasks to see what/where they click
    • Zoom in closer to a specific area on the map
    • Navigate to the camera view of a desired point on the map
      • Points are represented by markers
    • Once at the camera screen, navigate home
    • Use each of the filters appropriately
      • Most Viewers
      • Closest
      • Most Recently Added
    • Throughout each of these tasks, take note of any discrepancies/errors and record them appropriately
    • If there is any confusion along the way, ask the user to describe why, and try to guide them in the right direction without necessarily giving them the answer right away
    • Gather notes and thank the user for their participation
  • Analysis
    • Put together all notes and look for any correlations among the notes
    • If needed, make changes to the interface to make it more user friendly and more intuitive depending on the user feedback
    • If satisfied, move on to the next step in development process
    • If unsatisfied, go back and repeat with a different group/individual
  • Results
    • Although we did not get to speak with many people about our layout, those who did check it out responded positively. Thanks to our simple layout and the small number of pages to click through, users were able to correctly navigate to the camera stream page. One suggestion that came up was a home button; however, this arose when the user did not realize that the logo in the bottom left was also a button linking to the home page.
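The three filters exercised in the evaluation tasks above (Most Viewers, Closest, Most Recently Added) amount to simple sort orders over camera-node metadata. A minimal sketch follows; the field names and the haversine choice for "closest" are assumptions, since the actual implementation was not published:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class CameraNode:
    name: str
    lat: float
    lon: float
    viewers: int   # current viewer count
    added: int     # Unix timestamp when the node came online

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def apply_filter(nodes, mode, user_lat=None, user_lon=None):
    """Order camera nodes by one of the three interface filters."""
    if mode == "most_viewers":
        return sorted(nodes, key=lambda n: n.viewers, reverse=True)
    if mode == "closest":
        return sorted(nodes, key=lambda n: haversine_km(user_lat, user_lon, n.lat, n.lon))
    if mode == "most_recent":
        return sorted(nodes, key=lambda n: n.added, reverse=True)
    raise ValueError(f"unknown filter: {mode}")
```

In the real interface this ordering would run in the web client as markers are drawn on the map.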

Group 2: UniTongue Design Process




Design Process

World → Resistance/Revolution → Primary/Secondary Research → System → Refinement

We first designed a world where pollution has become so severe that it has almost completely eradicated safe, healthy verbal communication outdoors. Environmentalists and governments have not made any efforts toward solving this problem, leaving people who want to communicate openly (no face masks) exposed to nasty health effects.

Once we had this world, we had to narrow down our resistance and revolution. This wasn’t an easy task for us, and it developed and matured along the way, especially with the help of guest critics. Overall, we decided that our system should be a preventive solution that revolutionizes communication (with an educational approach), ultimately resisting the environment, verbal speech, and trust in environmentalists and the government to solve our world’s problem.

During primary research we conducted ten interviews and sent out a survey that received five responses. From the interviews we received an overall positive/neutral rating along with some common reservations: the fear of accidentally expressing private thoughts, movement glitches, and the inability to properly express emotions. These reservations inspired additional system functionality, including the “suggestion only” feature (as opposed to forcing users into making a gesture) and “emotional cues” (emitted into users’ ears to cue them when to make facial expressions as well). From the survey we received an overall negative/neutral rating and general confusion about what the system would actually do. From here we realized that we needed to narrow down our functionality and components, since the idea was not coming across well on its own. We needed to make it clear that we are not solving health issues; we are solving communication issues.

Secondary research helped us realize that if we broke our system down into smaller components, the functionality of those components was either currently achievable or close to it. Reading brainwaves was by far the furthest away (we estimate 10 years). Additionally, we looked at articles on current smog pollution levels in cities around the world and predictions for rising pollution levels (examples for six major cities can be seen below). We determined that pollution will eradicate healthy speech within the next 20-50 years, which helped us establish our world timeline, and eventually our system timeline. Because we want this system to be preventative, and our world is 20-50 years in the future, we are shooting for a 2030 plan.

Finally, for our prototype we designed a gesture assistant system. Along the way we considered multiple hardware units with varying functionality, but in the end we knew we had to narrow down the components and functionality to make it more desirable and less overwhelming to potential users while not distracting them from the actual purpose. We finalized our ideal system to consist of the following six components:

  1. Camera
    • detects the gestures of other people
  2. Speakers
    • emits the gesture’s translation into the wearer’s ear
  3. Electrodes
    • controls/stimulates the user’s upper extremities to create the desired gestures
  4. CPU
    • the main unit controlling all functionality including translation and proximity detection
  5. Wearable head unit
    • this will host all hardware components besides the electrodes
  6. EEGs
    • these will be used to detect brainwaves/thoughts
    • we won’t showcase these on our prototype since they would be built into the wearable head unit

With this design and approach, users would not be dependent on our system for the rest of their life. They would eventually become proficient enough in this form of sign language that they could sign the appropriate gestures themselves without system stimulation.


Smog examples, prototype sketches, storyboards, and primary/secondary research for our system, as presented to critics.

Further, with prompting from critics, we made system refinements and thought about some of the broader impacts of our system, since it is flexible enough to solve more issues. These include:

  • teaching sign language
  • assisting those with limited motor functions
  • universal communication (e.g. across cultures, deaf community, etc.)

Keeping all of this in mind, we went on to design how we wanted to prototype this in today’s world (not all functionality can be present). Additionally, we wanted to make sure we had a memorable name and video to promote our system; these were the last three design steps, all detailed below.

Prototype Design → Branding → Video

Prototype Setup/Design

We first created a higher fidelity prototype design, shown below, as opposed to the sketches we had been using. Then we worked out how we wanted to prototype this design.


Medium fidelity design of our system prototype, officially known as UniTongue. This is the design we based our actual prototype on.

We want to go with a Wizard of Oz approach to achieve full “functionality” of the system. This allows us to showcase the desired functionality in an affordable manner. The materials we will be using for the prototype include:


From left to right, wireless earbuds, googly eyes, tens unit, extra electrodes, and headbands. These were all the materials we ordered and were used to create two system prototypes. Not pictured: two more earbuds and two more googly eyes, used to create two more system prototypes (for a total of four).

  1. Headband
    • to be used as the wearable head unit
  2. Bluetooth Headphone
    • user will place it in their ear and it will function as the speaker
    • will serve to link phone calls so the “behind the scenes” CPU “unit” (actually a person) can instruct users how to sign the gestures
  3. Electrodes
    • these will be placed on the user’s upper extremities
    • won’t actually be connected or functional
  4. Googly Eyes
    • these will be attached to the headband to simulate the camera
    • no, these won’t be functional eyes


      “Camera lens” made from googly eyes. We made four total.

  5. Tens Unit
    • this won’t be a part of the “prototype” but will be used to demonstrate how it would feel to actually have the electrodes move your upper extremities
  6. Cardboard CPU and power supply
    • this will not be functional, but will just be attached to the head unit to showcase what they might look like


      The outline for our “CPU” and “power supply” before folding. These were made from the headband packaging. Additional ones were made from card stock. We made eight total.

We will have one person acting as the “CPU” unit and remaining behind the scenes. The “camera” and wireless earphones will be attached to the headband and function as the wearable unit. We will also fashion and attach a non-functional, placeholder CPU.

Once the user is wearing this, we will place placebo electrodes on their upper extremities.

The user will “think” what they want to say by choosing from a listed phrase that we will have displayed and saying it so the “CPU” can hear. The “CPU” will then “translate” this and describe to them what to do with their arms, hands, and/or fingers (simulating the electrode stimulation and control). Once this is complete the other person’s “camera” will detect these gestures and the “CPU” will translate by emitting the translation into their ear from the wireless headphones. This process can then be repeated.

The connections between the “CPU” and the users will be established through a phone call.

Additionally, there will be a board of phrases to choose from, and a drawing of upper extremities with labeled positions. This will cut down on confusion when directing users how to position their upper extremities.

And specifically for presenting, we will all be wearing the prototype (we made four versions), as shown below.



Each group member is wearing a UniTongue prototype.


Branding

We came up with branding ideas by brainstorming. Ideas included:

  • RosettaTongue, UniTongue, SoloTongue, or GestureTongue
  • Result: UniTongue

Next we made a logo:


Branding logo for our system. Each “u” is meant to look like a tongue.

We went one step further and coined a slogan:

“UniTongue: expression without speaking. Tongue not included.”



Video

We began designing our video by constructing storyboards of how we could showcase the prototype, deciding on a particular video style, working out a script and its scene timings, and then filming and editing. Each step is detailed below.

Storyboard

  • without the system to demonstrate negative health effects


    Storyboard idea for demonstrating the negative health effects associated with not using our system. In the end, we decided not to use this in our video.

  • with the system to demonstrate functionality


    Storyboard idea for demonstrating the functionality of our system. In the end, we decided to use this idea for our video.


Originally we thought about using both storyboards: one to demonstrate the negative health effects of not using our system, and one to demonstrate the positive results of using it. However, we decided it would be best to focus solely on demonstrating the functionality of the system, so as not to distract viewers by dwelling on health effects.

Furthermore, to make the video more relatable, we decided to photograph the latter scenario (prototype functionality), as shown below.


We later edited the images with thought bubbles and descriptions to convey the “functionality” before adding them to the video.


We first considered a stop-motion video, then quickly pivoted to a live-action video that we would frequently pause to narrate the inner workings of our prototype with “speech bubbles.” Once we started looking up examples and resources for this style, another idea came to us: a “Because A, then B. Because B, then C. … Because Y, then Z.” type of video, which played into the dramatic effect we wanted. However, we quickly realized this wouldn’t be informative enough.

So, we took a step back and decided to design the script first. Once we did this the styling fell into place. We decided to have pictures matching the subject of the script. These consisted of polluted cities, sketches and designs of our prototype, actual pictures of the finished prototype, a storyboard to demonstrate functionality, and a clip of our prototype being worn. The style of the video will be mostly educational: it presents a problem in our world and showcases our system as the solution.


The full script with planned narrations and transitions

  • 5-10 seconds: Cover our world and present the pollution problem and timeline
  • 5-10 seconds: Cover system components and functionality (at a very high level)
  • 10-20 seconds: Demonstrate functionality with storyboard and prototype showcase
  • 3-5 seconds: Tagline
  • 5-10 seconds: Credits and Video Acknowledgments 


  • Photoshop was used as needed for particular images
  • All video was filmed using an iPhone
  • Audio was recorded separately (also using an iPhone)
  • The entire video was created with iMovie
  • Once completed it was uploaded using Vimeo

We decided to film a 360 degree view of our prototype. To do this we rotated the camera around someone wearing the prototype as they stood still. Additionally, we started from afar and zoomed in the view closer on the user for another shot. This allowed us to focus on the wearable head unit (where the majority of the components are), but still display the entire prototype including the electrodes on the arms.

Further, we took pictures of each transition for the storyboard. Later, the thought bubbles and descriptions were edited in before the images were added to the video.

Lastly, we borrowed a seven second scene from Guardian News (0:00-0:07 seconds) to showcase actual smog.

Evaluation Plan

Full Evaluation Survey

We started our evaluation plan by working out the main sections we wanted to learn more about. These include usability, aesthetics/design, and the plausibility of functionality. We created a survey with 13, 6, and 6 questions in each category, respectively.

We really wanted to learn whether people would use the system, whether they would want to, how they would use it, and why. For aesthetics/design, we wanted to gauge how well we did at combining the functionality into one unit. Does the design affect whether users would want this system? Would users potentially use it if it came in a different form? And lastly, for plausibility of functionality, we wanted to verify whether people believed we picked a feasible timeline and which features might have been too ambitious, or not ambitious enough.

The plan is to evaluate our system with previous interviewees and more individuals outside of their majors and/or age ranges.

We also did some impromptu evaluating of peer reactions to the electrodes moving your fingers, as pictured below.


Using the tens unit to evaluate peer reactions to having electrodes influence the behavior of their fingers. This will be what we use to demonstrate the “suggestive” movement that will be simulated by our UniTongue system in order to create the appropriate gestures.


Raw Data from Interviews

We were able to evaluate eight participants. The prototype was showcased to each of them and they were able to wear it or examine it for as long as they liked. 



Word cloud of the raw data (without the questions) from the evaluations.

Initially, a large number of participants distrusted the system, fearing their “brains being hacked,” but many said that knowing the data transference would be encrypted made them feel much better about the subject. Also, while almost all participants believed the device accomplishes its purpose, many also believed it could serve other purposes that might be an even better use of the system. Many participants thought the device might be uncomfortable at first, but almost all believed that after a few uses they would become accustomed to it and be much more comfortable using it. A few people also expressed a fair amount of interest in different or aesthetically modifiable versions of the headset (such as a glasses version, or changing the look of the electrodes).

Overall, users predominantly said they would use our system as a solution to this problem, and especially as a solution to some of the aforementioned broader impacts.



Professor Aisling Kelliher experiencing our electrode simulation.


An ICAT Day attendee experiencing our electrode simulation. 


Group 5 – Progress Update


This week, we’ve obtained and repurposed an open-source academic NLP objective/subjective classification project. We intend to use this project as a proof-of-concept for our final presentation. At this point, we have engineered it to accept input data on the fly and generate a readable HTML file. We intend to have a basic prototype for our physical implementation running by the 28th of April. Due to the late arrival of our hardware components, we were unable to meet our projected deadline for having the NLP python application running on our Raspberry Pi; however, we intend to make up lost time this weekend by meeting and working on both the Pi and the video presentation.
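The classify-then-render pipeline described above can be illustrated with a toy stand-in. The real project repurposed a trained academic NLP model; the cue-word lexicon and function names below are purely illustrative assumptions:

```python
import html

# Tiny stand-in lexicon; the real project uses a trained academic classifier.
SUBJECTIVE_CUES = {"amazing", "terrible", "believe", "awful", "wonderful",
                   "unfortunately", "clearly", "best", "worst", "should"}

def classify_sentence(sentence):
    """Label a sentence 'subjective' if it contains any cue word."""
    words = {w.strip(".,!?;:").lower() for w in sentence.split()}
    return "subjective" if words & SUBJECTIVE_CUES else "objective"

def render_html(sentences):
    """Wrap each classified sentence in a labeled span, producing a readable page."""
    spans = [
        f'<span class="{classify_sentence(s)}">{html.escape(s)}</span>'
        for s in sentences
    ]
    return "<html><body>" + " ".join(spans) + "</body></html>"
```

The generated spans can then be styled (e.g. color-coded) so a reader sees at a glance which sentences the classifier flagged.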

For our final presentation, we’ve begun designing some mockups for an interactive natural language classification game where users analyze segments of news articles and compare their opinion against our algorithm. Those mockups are attached below:

We intend to build this game as a JavaScript application that parses the pre-rendered HTML sentiment analysis files. We pre-render in order to avoid long loading screens while the algorithm runs on the relatively slow Pi.
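The pre-rendering step can be sketched as a small batch script run once, ahead of time, on a faster machine. Here `analyse_to_html` is a placeholder for the real (slow) NLP pipeline, and the one-file-per-article layout is an assumption:

```python
from pathlib import Path

def analyse_to_html(text):
    """Placeholder for the real (slow) NLP pipeline run offline."""
    return f"<html><body><p>{text}</p></body></html>"

def prerender(articles, out_dir):
    """Classify every article once, offline, and cache the results as static
    HTML files; the in-browser game then only fetches pre-built pages, so
    nothing heavy ever runs on the Pi during play."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for slug, text in articles.items():
        (out / f"{slug}.html").write_text(analyse_to_html(text))
```

The JavaScript game would then fetch `<slug>.html` directly, with no classifier in the request path.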

Group 5 – Parts & Timeline


Final Parts List:


  • Interactive classification demo
    • Display monitor
    • Enclosure (3D print)
    • Raspberry Pi with 2 buttons
  • Website mockups
    • Create a slideshow for our demo
    • Flesh out more areas of the website
      • Home page
      • Article view
      • Article editor
      • Report page
      • Moderation/administration page
    • Write mock articles
    • Elaborate on opposing viewpoints feature
  • Depiction of the future world
    • Exaggerate existing articles
    • Design fictional future news sources
    • Create a slideshow for demo as well
  • Video trailer
    • 1m 30s (?) – need to confirm details


  • On April 14th
    • Aisling: Order parts
  • By April 19th
    • Victoria: Write future OpenNews articles for demo
      • (can grab & change articles)
      • 2 articles
      • 5 headlines
    • Ransom: Work website mockups
    • Matt: Create python interface for NLP demo
  • On April 19th
    • Get NLP demo working on Raspberry Pi
    • Design skeleton of slideshows
    • Ransom’s birthday
  • By April 21st
    • Victoria: Obtain/write current and projected future articles for presentation
    • Matt: Create images for articles
  • On April 21st
    • Plan out video trailer to film over weekend
  • By April 26th
    • Complete video trailer
    • Finalize mockups
  • On April 26th
    • Plan out final build
  • By April 28th
    • 90% done
    • Button demo working
    • 3D print enclosure for demo
  • By April 30th
    • Aiming for 100% done by this date
    • Set up slideshows
    • Test with monitors
  • By May 2nd
    • Final presentation ready to go
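The two-button demo in the timeline above might separate the game logic from the GPIO wiring so the logic can be tested off the Pi. A sketch follows; the gpiozero wiring is shown only in comments, and the BCM pin numbers and agree/disagree semantics are assumptions:

```python
class VoteTally:
    """Counts agree/disagree button presses for the classification game."""

    def __init__(self):
        self.agree = 0
        self.disagree = 0

    def press_agree(self):
        self.agree += 1

    def press_disagree(self):
        self.disagree += 1

# On the Pi itself, the tally could be wired to the two physical buttons
# with gpiozero (Raspberry Pi-only; pins 17 and 27 are assumptions):
#
#   from gpiozero import Button
#   from signal import pause
#   tally = VoteTally()
#   Button(17).when_pressed = tally.press_agree
#   Button(27).when_pressed = tally.press_disagree
#   pause()
```

Keeping the tally as a plain class means the same code drives both the on-Pi demo and any desktop testing.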

Group 5 – OpenNews: Critical Response


In this update, we will discuss and address some of the concerns that were brought to our attention by various critics of our system. We have chosen to omit generally positive comments and focus on areas that need improvement and better definition.

Notable Responses

Reviewer: Kari

Process and Methods: 5 / 5. “Lots of supporting documentation/research. I wish they approached them with a more [unknown word] eye”.
Response: That is true; during our research we may have been looking for evidence that supports trends rather than evidence that does not.

Quality of Proposed System: 3 / 5. “I’m concerned about the desire to eliminate bias from news, as we discussed during the feedback session. The aims are [unknown], just I’m not sure that the prototype will accomplish them”. Response: I think the reviewer is pointing out that we may have issues eliminating bias in our prototype, or that our prototype will not be robust enough to demonstrate. I believe this is something we will address more closely in the coming weeks as we begin to flesh out our presentation ideas. Ultimately, we realize that our idea is too ambitious to fully implement; however, we believe we can implement aspects (including a rudimentary classification system) that will create a compelling argument for the existence of our entire system.

Reviewer: Ellis

Process and Methods: 4 / 5. “I was curious about related work. I saw a lot of what was wrong but not of anything with similar solution”.
Response: This comment indicates the importance of understanding our system and where it gets its roots. That is to say, we should have done, and should do, more research into and examination of similar existing systems such as Wikinews and Politifact, in order to clearly demonstrate how we will address their weak points and problems. Generally, we believe our system gets its strength and distinction from an intuitive and clean user experience, a robust natural language processing backend, and a strong crowd-driven experience with defined checks and balances.

Reviewer: Wisnioski

Presentation and Communication: 3 / 5. “Strong desire to tackle key issue. Is objectivity possible in a media environment?”
Response: We need to research and analyze whether objectivity really is possible in a media environment and present that in a clear manner. We believe this really comes down to better defining how we quantify the quality of news and how we prevent the introduction of slow-moving, hard-to-see algorithmic biases. We will begin to address this more closely in the coming weeks as we think about our prototype more.

Process and Methods: 3.5 / 5. “Lots of exciting literature on this. What systems currently exist? (re: politifact)”. Response: This question was addressed two comments up. The fact that it came up again indicates that we should prioritize this discussion.

Quality of Proposed System: 3.5 / 5. “Important domain space, I suggest focusing on an element of “news” that especially fits your model”.
Response: We should identify and then focus on different elements of news. We likely do not know enough about news itself.

Reviewer: Zach Duer

Process and Methods: 4 / 5. Also commented next to the bullet point of “is there an appropriate review of related work and existing projects?” with “not enough”, then commented “I’m deeply concerned about the idea that NLP can be trained on unbiased vs biased articles, and that it wouldn’t understand bias-by-omission for example, and would reflect the bias of the people labeling articles as biased/unbiased for training”.
Response: We need to more clearly present our solution for avoiding bias in labeling, which is crowdsourcing to people from all demographics and requiring each article to reach a certain percent agreement on whether or not it is biased. Bias by omission is a strong concern, but we hope that the open-source aspect of OpenNews will encourage users to add omitted details and refine the algorithms.

Quality of Proposed System: 5 / 5. “Yes, great idea… is an AI for automatic first-layer WikiNews editing, makes total sense”.
Response: Let’s ensure we continue to focus on our AI and continue defining it. This is after all what makes our system unique.

Key Notes and Details

  • We need to explore and elaborate on our NLP ideas more – how do we quantify the quality of news exactly? We don’t want to focus on eliminating bias but rather making it clear when bias exists.
    • How do we keep our NLP from getting trained improperly such that biases are introduced through less obvious avenues (bias by omission)?
    • How bad is bias? Is purely factual/unbiased news worth anything? We should mock up examples of what we consider to be ideal articles and unideal articles
  • There were two comments regarding pre-existing systems, noting that we should explore and understand exactly what these systems did wrong and how we’re improving on them with ours. Notably, WikiNews and Politifact
  • We need to better define what “news” is to OpenNews

Progress Update

We’ve started mocking up some designs for OpenNews; these were shown to critics as part of the review process this post covers. The mockups help us understand what news looks like and what information is important to a reader.


Article View Mockup


Homepage View Mockup

This class, we’ve also determined the materials needed for our final project. So far, this includes:

We also expect to bring some monitors and computers to display our presentation, website, and possibly our movie trailer or some slideshow.


Wandr (Group 4) – Research

Modern Research:

-report-precrime-test-claims-success-seeks-expansion-worldwide.html

Precrime is already being implemented around the US, and the technology used is evolving at a rapid rate. Several states are already using these systems to predict the threat posed by specific people. They have also developed algorithms that calculate the chances of a crime, such as burglary, occurring in different areas of a city. While very few states actually utilize these systems at this point in time, there is a lot of talk about rolling them out across the nation. The government is willing to provide federal grants to help install and improve these precrime systems.

-03-03/china-tries-its-hand-at-pre-crime

China is taking precrime to the next level. It is currently contracting software companies to build software that will collect an assortment of data about its citizens: what they consume, their hobbies, what they do for a living, the medications they take, and nearly everything else a person may do. It will then use an algorithm to determine who might be a terrorist within the country.

While these systems are being put in place, there still isn’t much data that can be used as a model to determine who could possibly be a terrorist.

-tank/pre-crime-detection-system-now-being-tested-in-the-us

The U.S. Department of Homeland Security is working on a system called FAST, which will use sensors to collect data on people. Without ever touching the person, these sensors will be capable of measuring a person’s heartbeat, temperature, and even their eye movement. Like most other systems, it will run all of this data in real-time through its algorithms and ultimately determine whether a specific person is likely to commit a crime.

-08-05/pennsylvania-becomes-first-state-use-pre-crime-statistics-criminal-sentencing

Article stating that Pennsylvania is the first state to allow precrime statistics to be used in sentencing, and discussing the possible benefits and issues with this.

-policy/2015/12/pre-crime-arrives-in-the-uk-better-make-

Article discussing precrime in the UK that includes a crowdsourced watchlist.

This is the basic principle behind our world, pushed to an extreme.

Website for the IAA who wrote an app that allowed people in New York City to travel around and avoid security cameras.



Watched an episode of Psycho-Pass.


History Research:

A major use of precrime by the US government was against Japanese Americans after the attack on Pearl Harbor.



