OpenNews 3D Prints


After several weeks of waiting, we finally got 3D prints in for our enclosures! We had hoped to have these ready for ICAT day; however, the prints ended up taking much longer than estimated by the library’s Design Studio. The Raspberry Pi enclosure ended up needing to be printed twice after the first 3D printer malfunctioned and printed an unusable hunk of plastic. As you can see in the photos below, the prints were still imperfect, but fit together well and would have been ideal for a quick prototype demonstration.

We modeled an enclosure for our buttons with space for the LEDs and wires, and we downloaded a 3D model for our Raspberry Pi enclosure from Thingiverse.


Ultimately, we were able to make do with our quick cardboard version of the enclosure, which was more than sufficient for our purposes, especially considering the technical issues we encountered during our presentation that left the buttons less than functional.

In the future, it would be wise to budget several weeks of printing time to account for potential 3D printer delays and malfunctions.


Final Reflection – Jonathan Downs

This class was definitely a venture down a new avenue for me. The class didn’t seem too focused on being a coding class but rather a creative class, and I thought that worked really well. For my group’s final project, 80% of the work didn’t even go towards CS coding-related work. The majority of the work was centered on the ideation process leading up to the final product. The work towards the final deliverable was mainly art-related with group members focusing on design and application of tattoos and using Innovation Space to make a creative final video. The “computing” part only came with the Red Herring app. At the end of it all, our final product really did seem like an embodiment of the name of the course, “Creative Computing.”

The final project was probably the most fun I have had working on a CS project in college. The group would get together on weekdays and over the weekend to work on the film. My photography skills undoubtedly tripled over that week which basically means I know how to open the case and put a camera on a tripod now. Messing around with online facial recognition was really exciting as well. Seeing how simple designs on the face could work so well was eye-opening to me.

One of the best parts of the structure of the final project was how much time was devoted to the ideation process. In most classes, maybe one or two days would be given to the idea itself and then software development would promptly begin. Having multiple weeks to figure the idea out, pitch it, gather feedback, and tweak the idea made the idea work a lot better in the end.

Looking back on the classes themselves, the most memorable classes were the ones that were interactive. These include the classes where we “did stuff” such as littleBits and the classes where intense discussion occurred. One of the most memorable classes in my mind was where we came to class, sat in a circle, and discussed the reading for the whole class. The talking points quickly changed from the original focus, but all the topics that arose provided good discussion and a high level of thought from us.

I thought all of the projects provided insight and were fun to work on, although there was a bit of confusion sometimes. For instance, in the first project, I was unsure of which design process we were meant to use. We had gone over the AEIOU one, though, and that is what everyone was doing, so I guessed that was the one. Well, not really: we could use any design process we wanted, I see now. But in an age where all our classes teach us a specific process and then dictate that our homework follow that exact process, the openness surrounding the projects could be confusing. That seems like more of a problem with the other classes, though, so maybe a little bit of confusion can be a good thing.

Overall, I thought the class was very successful in promoting the creative process and the ideation of a product over the final product itself. There was a lot more “thinking” and a lot less “doing” than most classes. In my college experience, a project is something that a professor creates with a known input and output. There will be a set input, and the output better be the one that the professor expects. This leaves little or no space for the ideation portion. In this class, however, the input and output were not set. The results from each project varied wildly, giving the class a really unique feel. This uniqueness is what really stands out to me and will make me remember this class for years to come.

Group 7 – Design Process

Design Process:

  • World design

Early on we had a pretty good idea of the kind of world we wanted to craft. We chose a world where the citizens are constantly under heavy surveillance, taking inspiration from movies and shows like Minority Report and Person of Interest. We took this blatant invasion of privacy a step further, though: in our world, not only are people constantly watched by cameras with facial recognition software, but they also have a GPS tracking chip implanted at birth. The government can then use the chip to track anyone's activities at any time.

  • Rebellious Brainstorming

To rebel against this world, we decided to develop a way to obfuscate the data the government was collecting, with the goal of forcing it to drop this invasive technology. The first idea was to stop the chip from transmitting its signal, thereby denying the system the data it needs to work. So we decided to create a jamming device to block the signal. This turned out to be infeasible, though, as it is illegal to make or own such a device in the US.

  • Idea refinement

We also realized that the absence of data is itself data, so we would need a different method to shut down the system. Rather than block the signal, we chose to spoof it: sending false GPS data to feed the system the wrong location for the person. Popping up 10 miles away in an instant is just as flag-raising, though, so our solution would need to move the person's spoofed position to the desired destination at a reasonable pace. Around this time, a way to fool the facial recognition in the cameras was suggested to us: CV Dazzle. The idea is that by applying dark makeup in certain spots and designs on the face, the software can be fooled into thinking there is no face there. This is a subject already being researched, and in our world we imagine this sort of design being marketed as high fashion to proliferate it through the populace. We would create temporary tattoos that apply these designs with a single stamp. A person using the aforementioned device along with the tattoos would then be able to vanish from the government's eye whenever they wished.
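The "reasonable pace" idea above can be sketched in code. This is a hypothetical illustration, not part of our actual prototype: it generates spoofed GPS fixes that drift from the real position toward a decoy destination at an assumed walking speed, so the tracking system never sees an implausible jump. The speed constant and the flat-earth degree conversion are both simplifying assumptions.

```python
# Hypothetical sketch: emit spoofed (lat, lon) fixes that walk toward a
# decoy destination instead of teleporting there, avoiding the red flag
# of a sudden 10-mile jump.

WALKING_SPEED_MPS = 1.4          # assumed average walking pace, m/s
DEG_PER_METER = 1 / 111_320      # rough lat/lon degrees per meter

def spoofed_path(start, decoy, interval_s=1.0):
    """Yield (lat, lon) fixes stepping from start toward decoy, one per interval."""
    lat, lon = start
    dest_lat, dest_lon = decoy
    step_deg = WALKING_SPEED_MPS * interval_s * DEG_PER_METER
    while True:
        d_lat, d_lon = dest_lat - lat, dest_lon - lon
        dist = (d_lat ** 2 + d_lon ** 2) ** 0.5
        if dist <= step_deg:          # close enough: snap to the decoy
            yield (dest_lat, dest_lon)
            return
        lat += step_deg * d_lat / dist
        lon += step_deg * d_lon / dist
        yield (lat, lon)
```

Each intermediate fix is only a step or so apart, so to the watcher the spoofed person appears to stroll to the decoy location rather than materialize there.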

  • Prototyping

The prototyping design process is documented below.

Red Herring (The App)

We wanted the name to be something that fit the function of our app. In our original idea, the watch only blocked the signal and “made you invisible to the government.” We came up with our first name for this device, RabbitHole, to fit the function of this device. However, we got a lot of feedback that the government would easily notice random GPS signals disappearing from their tracking. So we changed this to fit the feedback. Our new idea was to jam the signal but also create a fake signal that sends false location data to the government. At this point, we looked for names that fit the new function of this device. Since the new functionality was mainly to mislead the government, we came up with the name Red Herring.

Wireframe and Ideation:


Our final Red Herring product looked like the following:

CobraCover (The Tattoos)

The name was inspired by the designs on a cobra, which are meant to trick its enemies. Similarly, the function of our product is to let people trick the government by making its facial recognition unreliable, ultimately bringing the system down.

We made some designs that were inspired by existing designs that trick facial recognition.

Sample Designs:

Our final product, applied, looks like:

We tested these designs through a site called Face++. The site lets you upload a picture, and it will identify faces and their features (race, sex, head tilt, etc.). The results of our testing are below. The blue dots are meant to match up with the eyes, nose, and corners of the mouth.

As you can see from our testing, the facial recognition never identified the locations of all the major facial features with 100% accuracy, although it did identify a face with decent accuracy 2 out of 3 times. In one case, however, the software did not detect a face at all, showing that these types of designs do have some merit in breaking facial recognition.
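Tallying results like ours can be automated with a small response parser. The JSON layout sketched here (a top-level "faces" list with per-face "landmark" coordinates) follows the general shape of Face++'s v3 detect API, but treat the exact field names as assumptions rather than a verified schema.

```python
# Sketch of summarizing one Face++-style detection response.
# Field names ("faces", "landmark", "x", "y") are assumptions based on
# the shape of Face++'s v3 detect responses.

def summarize_detection(response):
    """Return (face_found, landmark_points) for one detect response."""
    faces = response.get("faces", [])
    if not faces:
        return False, []            # the tattoo fully broke detection
    landmark = faces[0].get("landmark", {})
    points = [(p["x"], p["y"]) for p in landmark.values()]
    return True, points

# Example response: one face with two landmark points located.
sample = {"faces": [{"landmark": {
    "left_eye":  {"x": 120, "y": 95},
    "right_eye": {"x": 180, "y": 96},
}}]}
```

Running this over each of our three test photos would reproduce the tally above: two responses with a face (and partial landmarks) and one with an empty `faces` list.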


CobraCover Logo Design:


Red Herring Logo Design:



Early Video Concept:

  • Early evening, man walking on crowded (if possible) street (8 seconds)
  • Man stops and touches back of head, thinking (4-5 seconds)
  • Camera can cut to a dark room where people are watching a giant map that tracks people (5 seconds) (not necessary)
  • Looks around and sees camera, keeps looking more and more frantically, camera keeps cutting to different security cams that are watching between shots of person looking around (10 seconds)
  • Takes out tattoo from pocket, applies to face discreetly (8 seconds)
  • Show tattoo up close (5 seconds)
  • Man looks at watch (5 seconds)
  • Camera gets images of watch interface (8 seconds)
  • Back to man walking off (disappearing into crowd if crowd == true)

Storyboard:

Our final video did not get all the scenes that we originally wanted, but most scenes from our storyboard did make the final cut. Our final video can be found here.

Group 6: TheOne Ring (Smart Ring)


Design Documentation


Throughout our brainstorming and development of our system, we made many changes. Starting from the very first week of idea development, our first concept was TreeBot, an autonomous robot that would plant trees and maximize their growth in a world where deforestation has begun to cause serious environmental effects. After further discussion among ourselves, we decided that this idea would be very difficult to implement, and we had to brainstorm new ideas. We had a thorough list of cool futuristic concepts, ranging from super-bugs to artificial intelligence. In the end, we came to a consensus to work with an imagined reality where advanced genetic modification was commonplace among the wealthy, and our system would be a physical augmentation device that could be afforded by the non-elite.

The next step was to decide on an actual device and system that we could prototype for our project. We looked at possible genetic modifications that could be mimicked in a simpler form. The idea was that in the future, people would have cybernetic implants allowing "telepathic" communication between the wealthy who could afford them. This idea stuck with us, because designing a system to mimic that sort of technology would be a fun project to pursue, and we had good ideas for our prototype. At first we thought of creating a braille glove that would send messages to other glove users through a braille language system. We also added to our world the idea that phone use was frowned upon by the upper class, and therefore using your phone to communicate was not an option for the lower class, who could not afford the implants. Our next step was to do field work on our system.

After online research and interviews, we received a lot of constructive criticism of our idea. For example: the Braille language would be hard to learn, our glove wasn't discreet (people would know when you were using the device), the world was a little hard to understand, and it was unclear how the receiving end of our messages would work. Our group then narrowed our idea to a gesture ring with similar functions, but leaning more toward the functions of an actual phone. This led to the current version of our prototype.

World Description

In a world where the use of cellphones while in motion has been outlawed, the upper class can afford cybernetic implants that let them use phone functions through their minds. The less fortunate, however, cannot afford these implants and have no way to access their phones in public while moving. To resist this world, we have designed a small, affordable gesture ring system that allows users to access the functions on their phones without breaking the law.



After deciding conceptually what our final project would look like and how it would function, we brainstormed what hardware components we could physically use to represent this device in our world as closely as possible.

We considered various mini computers and hobbyist microcontrollers like the Raspberry Pi and Arduino. Ultimately, we decided the most important goal in bringing this device to life was to make the pieces as small as possible. So we dug deeper and decided the Flora line of wearable circuit boards would be excellent for this project, since the emphasis of their design is minimalism. We ended up using these three Flora components to demonstrate the device:

Flora v2 – The "brain" of the ring. We used it to collect data from the accelerometer and pass it to the Bluetooth module, which then relays it wirelessly to our Android app (more on that below).

Flora Bluetooth – This is how the ring sends data to other devices. For the purposes of this prototype, it sends gesture signals to an Android app.

Flora Accelerometer – The accelerometer is the most essential hardware component of the ring, since it detects motion.

The following lists everything we purchased to assemble our prototype:

Parts List:

Prototype Sketch


The above image shows our plan: an armband holds the battery, Flora, and Bluetooth module securely in one place, while the accelerometer is attached to a ring base with wires connecting it to the Flora, which processes the accelerometer values.

Below are some photos of the prototype as we put the pieces together.

The images above show the Flora board we used: the left image shows the top side, the right the underside. Notice that the Bluetooth module is soldered directly onto the backside of the Flora. We decided this would save as much space as possible in our final prototype design.

This next set of images shows the accelerometer. The first two images show how the wires were organized. We wanted to keep the lengths of the individual wires uniform, so we applied dots of hot glue to the backside of the wiring to hold them together. In the third photo, we wrapped the wires in a long strip of white electrical tape. This helped keep the final prototype design as clean as possible.


We also decided to wrap the accelerometer in red electrical tape to conceal its appearance and then hot-glue it to the base of a Ring Pop ring. We believed this would keep wearers of the device from actively thinking about the circuit boards they are wearing.

These images show the final design. The first image gives an overview of the prototype: a ring attached by wire to a box with an arm strap. The second image reveals the contents of the box: the Flora, the Bluetooth module (soldered behind the Flora), and the battery pack powering the device.



Sightless UI (Blind user inspiration): During the decision process for our prototype, we discussed similar real-world applications for a gesture ring. What came up during our research was the lack of technology for the blind. A few articles illustrated how modern smartphones are designed as visual tools, with touch screens usable only by people who can actually see them. These smartphones are usually adapted for blind users through screen-reading software. While this works, it's not entirely ideal for blind users. We found that technology for the blind generally relies on touch or sound.

We also thought of some non-screen-based technology and UIs that perhaps weren't intentionally designed for blind individuals but would work very well for them regardless. One great example is the iPod shuffle, or any iPod with a UI designed for navigation via click wheel. These devices were, and still are, much easier to navigate without a screen than today's touch-based smartphones, because the UI feedback was either haptic (pressing physical buttons) or auditory (the clicking sound while dragging a finger around the click wheel). We used this as inspiration to design an audio-feedback-based user interface that responds to finger/ring gestures.

Android: For the software component of our prototype, our group decided to develop an Android app holding a few phone functions we wanted to control with the ring. We chose Android because we all had experience writing Java and some experience with Android UI development.

We decided that the main features we wanted to implement were weather, music, and text messages. The first step was to set up the UI so we knew which specific "button" functions we wanted. There was one main UI with buttons for playing, pausing, and skipping to the next or previous song, reading the weather, and reading text messages. We wanted to pull from the phone's actual music library, text messages, and a weather application's data, but decided that static fields for each function were viable for the purposes of our prototype.

To make the UI of our prototype more appropriate for how we intend the system to be used, we switched the layout to match the ring's actual gesture controls. We converted the layout into four arrows representing the cardinal directions our ring gestures would understand, then implemented what each direction would do according to our previous methods. Here is a look at the gestures of our ring:

  • Up:
    • Cycle through the menus of our app. Music -> Weather -> Text Messages
  • Right:
    • Music: Play next song
    • Weather: Read tomorrow’s weather
    • Texts: Read next text message
  • Down:
    • Music: Play/Pause song
    • Weather: Read today’s weather
  • Left:
    • Music: Play previous song

The three images below show how our app design changed over time and demonstrate the differences between traditional mobile UI design and our auditory-feedback UI. The two images on the left show what the UI would look like if it were button-based and intended for sighted users with a touch screen. The image on the right removes the buttons entirely for sightless navigation via our smart ring. The two UI types function in essentially the same way, but one requires a screen while the other (ours) does not.

Hardware-Software Integration

These are the resources we used from the Adafruit website:

Bluetooth Module:

Bluetooth Android App:


Adafruit has a fully fledged Android app for the Bluetooth modules it sells, called Bluefruit LE Connect. It is essentially a Bluetooth Low Energy scanner app that lets you connect to various Bluetooth devices and interact with their services and characteristics. It also allows more advanced interactions, like updating firmware on Adafruit-specific Bluetooth modules.

The specific Flora Bluetooth module we used advertises a UART service by default. Connecting to that service allows the devices to communicate with each other, and the Adafruit BLE app has specific options for this type of connection.

Although we had prior experience writing Bluetooth apps, we decided that rather than starting from scratch, it would be best to take the source of Adafruit's Android app, strip away all the non-essential functions, and add our ring-gesture auditory-feedback UI code. This let us focus on the functional components of our project rather than testing Bluetooth code of our own that might not be compatible with Adafruit's Flora Bluetooth module. After two weeks of reading, running, and testing the source of Adafruit's app, we were able to strip or comment out the unnecessary functionality while keeping the connection and UART communication code. Our app ended up looking similar to Adafruit's, but with many options removed; instead of opening the UART activity upon selecting UART as the communication option, it went straight to our blank activity that waited for commands.

The accelerometer uses the SPI communication protocol to send data to the Flora, and Adafruit had example code for this as well. We added a basic algorithm to detect gestures from the X, Y, and Z values on a single plane (hand/fingers pointing toward the ground). The algorithm determines whether an Up, Left, Down, or Right gesture was just performed and passes that result to the Bluetooth UART, which forwards it to our Android app. The app waits for these Bluetooth signals to come in, activates a function or changes the menu, and announces the action taken via auditory feedback.
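The classification step can be sketched as follows. The real version ran on the Flora in Arduino code; this is a hedged Python sketch, and the threshold value, axis conventions, and one-letter UART codes are all assumptions for illustration.

```python
# Hypothetical sketch of the gesture-detection step. With the hand flat and
# fingers pointing toward the ground, a flick shows up as a spike on one of
# the two in-plane axes; we classify whichever axis dominates once it
# exceeds a threshold. The one-letter codes stand in for whatever bytes the
# Flora actually writes to the Bluetooth UART.

THRESHOLD = 4.0  # assumed flick magnitude, in m/s^2 above rest

def classify_gesture(x, y, z):
    """Map one accelerometer sample to 'U', 'D', 'L', 'R', or None."""
    if abs(x) < THRESHOLD and abs(y) < THRESHOLD:
        return None                 # no flick strong enough to count
    if abs(y) >= abs(x):            # vertical flick dominates
        return "U" if y > 0 else "D"
    return "R" if x > 0 else "L"    # otherwise horizontal
```

A real implementation would also debounce (ignore samples for a short window after a detection) so one flick isn't reported as several gestures.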

Final Prototype

The images above show one of us (Dominic) wearing the device, with the connected Android phone to the left.

Here is a link to the video of it working: TheOne Ring Video Demonstration

Concept Video

We made the video primarily for presentation purposes on ICAT day.  We learned very quickly after feedback from initial presentations that our world was a bit confusing.  We needed a video to clearly show and explain the quirks of our world, and how our gesture control device supports a less economically prosperous class of people.

We shot the video with a digital camcorder device, and used Adobe Premiere to sequence the footage.

Evaluation Plan Discussion

Once we had the device and video we were ready for ICAT day. At first not many people came to our exhibit, but before long we had a slow but steady stream of people. Most people grasped our world quickly, and understood both the need that our project was filling, and how we went about filling it. When we moved on to actually demonstrating the device, most people struggled to get the hang of it at first, but once they got the first gesture or two to work, the others followed much more easily. Once visitors became comfortable with how to use the Ring they typically wanted to talk some more about potential uses, difficulties, or what the project would look like if we were to continue.

Our project could be improved in many ways. The main struggle people had at first was not understanding what to do, but performing the gestures precisely enough for the device to read them correctly. This often meant accidentally doing a downward flick when meaning to do an upward one, though that could be because we always started visitors with the upward flick to demonstrate cycling through the modes. One thing we did not try was having someone who knew nothing about our world, or what our device was supposed to do, use it to see how easily they could figure out what it does and how to use it. While this is an unlikely situation, since anyone purchasing our device should know it is a motion controller for their phone, we could have tested just how intuitive the device actually is.

While we received valuable feedback from visitors to our table, in a more formal setting we would have tabulated the questions they asked, how easy they thought our device was to use, how comfortable they felt while using it, and how they would rate their overall experience. This would let us determine how effective our device was and whether it would be marketable. Overall, we are quite satisfied with our project and the prototype we created, and grateful for the learning opportunity it provided.


SPOT Survey

Please fill out the SPOT survey at the start of class:

D-Dome (Group 8) Design Documentation


The Pitch

We initially described our D-Dome system as a "deployable self-defense mechanism for a near-future, highly dangerous environment." For our original exposition, we crafted an anarchical world formed after a severe political revolution. Inhabitants of this world were nothing short of savage toward one another due to the limited supply of natural resources and people's inability to gather everything they needed to survive. The criminalized nature of this world led us to D-Dome. In short, we wanted to develop a system that would provide people with instantaneous safety and security.

Essentially, we envisioned D-Dome as a self-deploying shield that activates as soon as the wearer's heart rate rises past a specific threshold, indicating that the wearer is under high pressure or stress. Once the system is in motion, the user is enveloped by a 360-degree protective barrier. Once the user's stress level returns to a relaxed state, the shield folds back into itself, allowing the user to continue on their way.
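The deploy/retract decision just described can be sketched as a simple controller with hysteresis: the shield opens past a high heart-rate threshold and only closes again once the rate falls below a lower "relaxed" threshold, so it doesn't flap open and shut around a single value. Both thresholds here are assumptions; a real device would calibrate them per wearer.

```python
# Sketch of D-Dome's trigger logic with hysteresis. Threshold values are
# assumed for illustration, not taken from any real design spec.

DEPLOY_BPM = 120   # assumed "high stress" heart rate
RETRACT_BPM = 90   # assumed "relaxed" heart rate

class DomeController:
    def __init__(self):
        self.deployed = False

    def update(self, bpm):
        """Feed one heart-rate reading; return whether the shield is deployed."""
        if not self.deployed and bpm > DEPLOY_BPM:
            self.deployed = True      # envelop the wearer
        elif self.deployed and bpm < RETRACT_BPM:
            self.deployed = False     # fold the shield back in
        return self.deployed
```

The gap between the two thresholds is the design choice that keeps a wearer whose heart rate hovers near 120 bpm from being repeatedly sealed in and released.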

Re-Draft of System:

After our radical initial pitch, we realized that there were too many hypothetical scenarios that we would not be able to adequately address given the time and scope of this class. Many members of the class brought up concerns about the feasibility of the immediate deployment of the device, the strength of the material, and how people would feel “protected” by the device in long-term situations.
We still wanted to keep our government-devoid world with its collapsed economy, but we withdrew our emphasis on an always dangerous, hostile environment (though raids are still a possibility). Our new idea still involved a wearable, but this time it was a suit, named D-DOME, that could automatically inflate into a tent serving as temporary shelter. This would be vital in our world, where resources are rapidly depleting and frequent migration is necessary for an individual to locate himself near pools of resources. Additionally, the tent would feature extra fabric to link to other D-DOMEs, enabling nomads to establish communities. This is an act of resistance/revolution against the current system of self-reliance and hostility toward one another.





We created a survey that gathered feedback on how people perceive camping and socializing with strangers while camping. We asked for general information about who they are as well as their camping knowledge, which let us scope who would use this product today. We mainly found that people are willing to socialize while camping but have a hard time trusting someone inside their own campsite. Our other primary research was interviewing a mechanical engineer and a materials engineer, which helped build our idea. They pointed us toward what material we would use and how it would transform from wearable suit to shelter.


Once we had settled on what our system would solve within the world we created, we conducted research on various topics, including materials, technologies, and background that brought depth to our world. For the D-Dome product, we needed to know what materials it would be made of and how it would be held together. We found similar "survival suits/kits" currently on the market, which led us toward a suit that transforms into a shelter. The next big issue was a power source to deploy the dome and run the security system. We found clothing with solar panels stitched into it that feeds a battery to hold the power. Since our world focuses on a future where society is collapsing, we gathered research on how the homeless community lives. The poor treatment of and lack of care for this population helped guide both our world and our product. By including the connector between D-Domes, we hoped to build a sense of community that changes people's view of others, and therefore the way they are treated.

Critiques & Refinement of System:

We already responded to the feedback we received in our guest critiques here:

Essentially, the refinement of our system involved redefining our world’s time period to justify the demand for such a product in desperate times. Here’s a description of our updated world:

Our company created a suit that conveniently and effortlessly transforms into a temporary shelter, with the ability to link tents together to metaphorically and physically establish communities. It was originally marketed towards groups of enthusiast campers and homeless communities who would appreciate the convenience of quickly establishing a shelter as needed. For campers, the primary motivator of the linking feature was to ward off bear attacks. Statistically, bears are less likely to attack groups in the wilderness than individuals traveling on their own. For the homeless communities, if several homeless individuals were linked up, the authorities of an area would be less likely to disband these large groups and address this problem in a more benevolent way.

However, research has shown that the world is on the verge of economic collapse. Our company decides that this is a good opportunity to market our product for a “post-apocalyptic” world where resources are scarce and survival is dependent on migrating frequently. Communities that band together for mutual survival will last much longer than individuals attempting to subsist on their own (like the bear analogy). Therefore, our current product was the perfect fit for a future world where resources are rapidly depleting, forcing the population to migrate often to locate themselves near pools of resources. Additionally, the linking feature would lend itself well to individuals establishing communities for mutual survival.  

Prototype Design and Implementation:


Since the time remaining after we confirmed our idea was somewhat limited, we knew that creating one full-scale prototype that could actually demonstrate the suit-to-tent transformation just wasn’t realistic. Instead, we settled on making two separate doll-scale prototypes to showcase the aesthetic and functionality of both the suit and tent forms.

The suit prototype was simply a representation of what the physical suit would look like when worn. To implement this, we first cut out a large piece of canvas cloth comparable to the amount used for the walls of the tent prototype. Then we fitted the cloth to our doll by folding back and pinning the excess cloth with safety pins. Next, we measured and cut out the pieces of cloth for the sleeves and hood, attaching them to the suit with a hot glue gun. Finally, to represent the straps that would let a D-Dome user transform the suit into its backpack form, we glued two adjustable velcro straps to the back of the suit prototype.



Additionally, we wanted to represent our Intrusion Detection System within our abstract prototype. When D-Domes are linked, a circuit is formed between the shelters, not unlike the "wagon wheel" formations of the American frontier. Once the D-Dome circuit has been created, any pressure applied from outside the community onto a tent wall (e.g., an outsider attempting to rip open a D-Dome's walls) sounds an alarm and flashes lights so that people living within the D-Dome community are alerted to the intrusion. To demonstrate this idea in our prototype, we used the littleBits kit. We wired LEDs, a pressure sensor, and a harsh buzzer through the hidden wall of the tent. The pressure sensor sits directly behind the wall that "links" the two D-Domes together. When someone presses against this wall, the LEDs inside the D-Domes light up and the buzzer sounds continually, demonstrating the overall alarm system.
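The littleBits circuit itself has no code, but its behavior can be described as a tiny threshold rule. This is a software stand-in for illustration only; the trigger level is an assumption (on the real kit it is set by the sensor's sensitivity dial).

```python
# Software stand-in for the littleBits intrusion circuit: when the pressure
# sensor behind the shared wall reads above a trigger level, both the LEDs
# and the buzzer turn on together. TRIGGER_LEVEL is an assumed value.

TRIGGER_LEVEL = 0.5   # normalized 0..1 pressure reading

def alarm_state(pressure):
    """Return (leds_on, buzzer_on) for one pressure-sensor reading."""
    intruding = pressure > TRIGGER_LEVEL
    return intruding, intruding
```

Feeding the function a stream of readings mimics someone pressing on the linked wall: the alarm outputs track the pressure and fall silent as soon as the intruder lets go.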




In lieu of a Kickstarter-esque trailer for our product, we decided to craft a short story around the characters used in our various prototypes: Ken and Barbie. Essentially, we wanted to convey that in our imagined reality, people do not last long by themselves; the technology behind D-Dome enables people to band together, promoting an overall sense of camaraderie. The initial storyboard called for a small story taking place over two days. On day one, Ken is wandering alone through a hostile environment. The D-Dome strips away any concerns about shelter he might have on the first night, though he is still left alone to fend for himself. Over the course of the following day, Ken meets Barbie, and the two “link” their D-Domes with the knowledge that they will be safer together.



For our evaluation, we talked to other engineering majors about our prototype and deliverable. We had them fill out a survey comprising a series of design questions: for example, the ideal materials for the D-Dome, how the shelter should deploy automatically, and what the primary power source should be. This let us check possibilities we might not have thought of, as well as reinforce the design decisions our group had already made. For the second part of the evaluation, participants watched the short film about D-Dome and then answered a few questions confirming that its message came across and that our prototype addressed the problem. The responses backed up the ideas and thought that went into D-Dome, for instance being able to carry your shelter wherever you go and being able to create a sense of community.

Group 1 – PubliCam


Design Process

  • World
    • For our world, we brainstormed and initially landed on a future in which a mesh network protects users from the dangers associated with hacking and data mining, and stops corporations from gathering users’ personal data for free. This stemmed from the thought of Facebook collecting information while a user browses the Internet in order to show advertisements when that user returns to Facebook. The idea did not pan out: our world was not well defined, and we decided a prototype would not be feasible within the given constraints. We restarted the brainstorming process and came up with a world where, although surveillance is high, video streams are hidden from public view and sometimes altered to push a certain point of view. In response, we designed a system that lets any user access video streams around the world: inexpensive, high-quality cameras that record at all times and send a live stream to a simple web interface.
  • Project
    • For the actual project/prototype, we started by brainstorming layout ideas for the website and produced a storyboard, which we turned into a wireframe. We presented the wireframe and the idea to various people, who critiqued them and gave us feedback. We used that feedback to refine our system into a well-designed web interface that provides a live stream whenever the selected camera node is active. For the prototype, we use a third-party service called Bambuser that lets our smartphones stream live video, which we embed into our interface.
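The core data behind the interface is a set of camera nodes, each with a map position and a stream to embed. The sketch below shows one way that mapping could be modeled; the `CameraNode` fields, the offline notice, and the iframe snippet are illustrative assumptions, not the actual Bambuser embed code the prototype used.

```python
from dataclasses import dataclass


@dataclass
class CameraNode:
    """One camera marker on the map (fields are illustrative)."""
    name: str
    lat: float
    lon: float
    stream_url: str  # URL provided by the streaming service
    active: bool = True


def embed_html(node: CameraNode) -> str:
    """Return an iframe snippet for the camera-feed page, or a notice if offline."""
    if not node.active:
        return "<p>This camera is currently offline.</p>"
    return f'<iframe src="{node.stream_url}" width="640" height="360"></iframe>'
```

In practice the marker's detail popup would link to a feed page that renders this snippet only when the node is active, matching the "live stream when the selected camera node is active" behavior described above.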


  • For our prototype, there were various things we had to research to get the final design functioning. For one, some of us had to brush up on web design skills. We also conducted a market analysis of similar current products, such as Periscope and Google Maps, to see how users react to the interface and, for Periscope especially, to the concept itself.

Prototype/Web Interface

This slideshow requires JavaScript.

This slideshow depicts the flow of navigation for the web interface. It starts zoomed out to a map of the United States and allows the user to scroll around as well as zoom in and out. The slideshow then shows a zoomed-in view of Blacksburg, followed by a detailed view of one of the markers. Once the link in a marker’s detailed view is clicked, a camera feed page is shown, and the user can click play to view a live stream from that camera node.



Evaluation Plan

  • Focus Group/Individual Interview
    • Set up the web page and allow the user to essentially “play” around with the user interface
    • After about 5 minutes, record any initial comments/inquiries that the user may have
    • Once all notes are recorded, have the user go through a series of specific tasks to see what/where they click
    • Zoom in closer to a specific area on the map
    • Navigate to the camera view of a desired point on the map
      • Points are represented by markers
    • Once at the camera screen, navigate home
    • Use each of the filters appropriately
      • Most Viewers
      • Closest
      • Most Recently Added
    • Throughout each of these tasks, take note of any discrepancies/errors and record them appropriately
    • If there is any confusion along the way, ask the user to describe it and guide them in the right direction without immediately giving them the answer
    • Gather notes and thank the user for their participation
  • Analysis
    • Put together all notes and look for any correlations among the notes
    • If needed, make changes to the interface to make it more user friendly and more intuitive depending on the user feedback
    • If satisfied, move on to the next step in development process
    • If unsatisfied, go back and repeat with a different group/individual
  • Results
    • Although we did not speak with many people about our layout, those who did check it out responded positively. Thanks to our simple layout and the small number of pages to click through, users were able to navigate correctly to the camera stream page. One suggestion that came up was a home button; however, this arose when a user did not realize that the logo in the bottom left is also a button linking to the home page.
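The three filters exercised in the tasks above (Most Viewers, Closest, Most Recently Added) amount to different sort orders over the camera nodes. This is a minimal sketch of that idea; the field names and sample data are illustrative, and "closest" uses a straight-line distance, which is adequate for ordering markers at city scale.

```python
import math

# Illustrative camera-node records; "added" is a monotonically
# increasing counter, so a larger value means more recently added.
nodes = [
    {"name": "A", "lat": 37.23, "lon": -80.42, "viewers": 12, "added": 3},
    {"name": "B", "lat": 37.25, "lon": -80.40, "viewers": 40, "added": 1},
    {"name": "C", "lat": 37.20, "lon": -80.45, "viewers": 5,  "added": 2},
]


def most_viewers(nodes):
    """'Most Viewers' filter: busiest streams first."""
    return sorted(nodes, key=lambda n: n["viewers"], reverse=True)


def most_recent(nodes):
    """'Most Recently Added' filter: newest cameras first."""
    return sorted(nodes, key=lambda n: n["added"], reverse=True)


def closest(nodes, lat, lon):
    """'Closest' filter: nearest markers to the user's position first."""
    return sorted(nodes, key=lambda n: math.hypot(n["lat"] - lat, n["lon"] - lon))
```

Keeping each filter as a pure sort over the same node list keeps the interface simple: switching filters just re-orders the markers without fetching anything new.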