Monthly Archives: May 2014

The Digital Prophet

The Digital Prophet is a rendition of Gibran Khalil Gibran’s “The Prophet” as told by the Internet, composed of tweets, images, wiki snippets, and Mechanical Turk input.

The PDF is available at: http://secure-tundra-7963.herokuapp.com/

A new version can be generated at: http://secure-tundra-7963.herokuapp.com/generate

The Digital Prophet is a generative essay collection and a study of internet anthropology, the relationship between humans and the Internet. Is it possible, using random access to various parts of the Internet, to gain a sufficient understanding of a body of knowledge? To explore this question, random blurbs of data related to facets of life such as love, death, and children are collected via stochastic processes from various corners of the Internet. The book is augmented with images and drawings collected from internet communities. In a way, those communities, and by extension the Internet itself, become the book’s autobiographical author. The process and content of this work are a tribute to the philosopher Gibran Khalil Gibran’s The Prophet.

Generative processes

The Digital Prophet is a story told by The Internet, an autobiographical author composed of various generative stochastic processes that pull data from different parts of the Internet. The author is an amalgam of content from digital communities such as Twitter, Wikipedia, Flickr and Mechanical Turk.

As it is not (yet) possible to ask The Internet a question directly, each community played two roles: first as a fountain of new data, and second as a filter of the Internet’s raw form through the categorization and annotation of that data. In effect, by peering through the lens of an internet community we can extract answers to questions about all facets of life.

By asking personal questions of Mechanical Turk, a community of ephemeral worker processes, we obtain deep and meaningful answers. Prompts such as `What do you think death is?`, `Write a letter to your ex-lover`, and `Draw what you think pain is` bring out stories of love, cancer, family, religion, molestation, and a myriad of other topics.

Twitter, on the other hand, is a firehose of raw information. Posts had to be parsed and filtered using natural language processing to surface the tweets with the most meaningful content, and sentiment analysis was used to select those carrying the strongest emotional charge.
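
The filtering step is simple in spirit. The project used Node.js with the npm ‘natural’ module; the Python sketch below is only an illustration of the idea, with a tiny hand-rolled lexicon standing in for a real sentiment model.

```python
# Illustrative sketch: keep only the most emotionally charged tweets.
# The real pipeline used the npm 'natural' module; this lexicon is a
# hypothetical, minimal stand-in to show the filtering idea.

LEXICON = {  # word -> valence (AFINN-style; made-up subset)
    "love": 3, "beautiful": 3, "happy": 2,
    "death": -2, "pain": -3, "hate": -3,
}

def sentiment_score(tweet):
    """Sum the valence of every known word in the tweet."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0)
               for word in tweet.split())

def most_charged(tweets, threshold=3):
    """Return tweets whose absolute sentiment exceeds the threshold."""
    return [t for t in tweets if abs(sentiment_score(t)) >= threshold]

if __name__ == "__main__":
    sample = ["I love this beautiful morning!",
              "Bus was late.",
              "The pain of losing you, I hate it."]
    print(most_charged(sample))   # keeps the first and last tweet
```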

Flickr data corresponds to a random stream of user-contributed images. In many cases, drawing a random image related to a certain topic and juxtaposing it with other content serendipitously creates entirely different stories.

Wikipedia articles have an air of authority: they are narrated by thousands of different authors and voices, converging into a single (though temporary) opinion. Compared and mixed with the other content, they lend a cohesive, descriptive tone to the narrative.

The system

The author is generated through API calls that gather, sort, and sift through the Internet to obtain the newest, most complete answer to the question `What is life?`. When accessing the project’s website, a user sets this process in motion and generates both a unique author and a unique book each time. The book is timestamped with the date it was generated and dedicated to a random user (a random part) of The Internet. Hitting refresh makes this author vanish, as no book can ever be generated the same way again.
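
As a rough illustration of how such a pipeline can be wired together, the generation step amounts to pulling blurbs, stamping and dedicating the book, and handing the assembled HTML to wkhtmltopdf. The actual project is a Node.js app hosted on Heroku; in the Python sketch below the fetch functions are hypothetical placeholders.

```python
# Minimal sketch of the book-generation pipeline (assumes wkhtmltopdf is
# installed; fetch_tweets / fetch_wiki_summary are hypothetical stand-ins
# for the real Twitter Search and Wikipedia REST API calls).
import datetime
import random
import subprocess

def fetch_tweets(topic):
    """Placeholder for a Twitter Search API call."""
    return ["a tweet about " + topic]

def fetch_wiki_summary(topic):
    """Placeholder for a Wikipedia REST API call."""
    return "an encyclopedic blurb about " + topic

def build_chapter(topic):
    """Mix the stochastic sources into one HTML chapter."""
    parts = fetch_tweets(topic) + [fetch_wiki_summary(topic)]
    random.shuffle(parts)
    return "<h2>On {}</h2>".format(topic) + "".join(
        "<p>{}</p>".format(p) for p in parts)

def generate_book(topics, out_path="prophet.pdf"):
    stamp = datetime.date.today().isoformat()
    dedication = "Dedicated to user #{} of The Internet".format(random.randint(1, 10**9))
    html = "<html><body><h1>The Digital Prophet</h1><p>{} | {}</p>{}</body></html>".format(
        stamp, dedication, "".join(build_chapter(t) for t in topics))
    with open("book.html", "w") as f:
        f.write(html)
    # wkhtmltopdf renders the assembled HTML into the final PDF
    subprocess.run(["wkhtmltopdf", "book.html", out_path], check=True)

generate_book(["love", "death", "children"])
```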

Technology used

  • Flickr API
  • Twitter Search API
  • Mechanical Turk (Java command line tools)
  • Wikipedia REST API
  • Node.JS
  • npm ‘natural’ (NLP) module
  • wkhtmltopdf
  • Hosted on Heroku

Kevan Loney

13 May 2014

Tweet:

Witness THE ENCOUNTER of two complex creatures. Teaching and learning causes a relationship to bloom.

Abstract:

“The Encounter” tells the tale of two complex creatures. A human and an industrial robotic arm meet for the first time and engage in an unexpected, playful interaction, teaching and learning from each other. This story represents how we, as a society, have grown to learn from and challenge technology as we move forward in time. Using TouchDesigner, RAPID, and the Microsoft Kinect, we derived a simple theatrical narrative through experimentation and collaboration.

Narrative:

During the planning stages of the capstone project, Golan Levin introduced us, the authors of the piece, to one another, saying that we could benefit from working with each other. Mauricio, from Tangible Interaction Design, wanted to explore the use of the robotic arm combined with mobile projections, while Kevan, from the School of Drama VMD, wanted to explore interactive theatre. This pairing ended with the two of us exploring and devising a performative piece from the ground up. In the beginning, we had no idea what story we wanted to tell or what we wanted it to look like; we just knew a few things we wanted to explore. We ultimately decided to let our tests and experiments drive the piece and let the story form naturally from the play between us, the robot, the tools, and the actress, Olivia Brown.

We tried many different things and built various testing tools, such as a way to send Kinect data from a custom TouchDesigner script over a TCP/IP connection so that RAPID could read it and let the robot move in sync with an actor. This experiment proved to be a valuable tool and research experience. A Processing sketch was also used during the testing phases to evaluate some of the robot’s movement capabilities and responsiveness. Although we dropped this from the final performance in favor of cue-based triggering, the idea of the robot moving and responding to the human’s movement ultimately drove the narrative of the final piece.
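
The exact data format of that Kinect-to-robot link isn’t spelled out above, but the general pattern of streaming joint positions over TCP looks something like the following Python sketch; the host, port, and “x y z” line format are assumptions for illustration only (the real pipeline ran inside TouchDesigner and spoke to RAPID).

```python
# Hedged sketch of streaming a tracked joint position over TCP.
import socket
import time

# Hypothetical controller address and port.
ROBOT_HOST, ROBOT_PORT = "192.168.125.1", 1025

def stream_hand(get_hand_position, rate_hz=30):
    """Send the actor's tracked hand position roughly 30 times per second."""
    with socket.create_connection((ROBOT_HOST, ROBOT_PORT)) as sock:
        while True:
            x, y, z = get_hand_position()  # e.g. one joint from a Kinect wrapper
            sock.sendall("{:.1f} {:.1f} {:.1f}\n".format(x, y, z).encode())
            time.sleep(1.0 / rate_hz)
```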

We had planned many different things, such as using the entire dFab (dfabserver.arc.cmu.edu) lab space as the setting for the piece. This would have required multiple projector outputs and rigging, further planning, extra safety precautions for our actress, and so on. Little did we realize that this would have been trouble for us given the quickly shortening deadline. Four days before we were supposed to have our performance, an unexpected visitor arrived with a few words of wisdom for us: “Can a robot make shadow puppets?” Yes, Golan’s own son’s curiosity was the jumping-off point for our project to come together, not only visually but also realistically for time’s sake. From that moment on we set out to create a theatrical, interactive robotic shadow performance. The four days were spent researching the best way to set up a makeshift stage in the small confines of dFAB and finishing the performance as strongly as we could. To make the stage, we rigged unistrut (lined with Velcro) along the pipes of the ceiling. From there we took a Velcro-lined RP screen from the VMD department and attached it to the unistrut and two Autopoles for support. This created a sturdy, clean-looking projection/shadow surface for the space.

Conceptually, we used two different light sources for the narrative: an ellipsoidal stage light and a 6000-lumen Panasonic standard-lens projector. At the beginning of the piece, the stage light was used to cast the shadows. Through experimenting with light sources, we found that this type of light gave the “classic” shadow-puppet atmosphere, a vintage vibe of the years before technology. As the story progresses and the robot turns on, the stage light flickers off while the digital projector takes over as the main source of light. This transition is meant to show the evolution of technology being introduced to our society while expressing the contrast between analog and digital.

The concept for the digital world was that it would react to both the robot’s and the human’s inputs. The human, for example, brings an organic life to the world, represented by fluctuations of the color spectrum brought upon the cold, muted world of the robot. Since this is the robot’s world, as the arm moves around the space and begins to make more intricate maneuvers, the space responds like a machine: boxes push in and out of the wall, as if the environment were alive and part of the machine. The colors of the world are completely controlled by the actress during the performance, while the boxes were cue-based, using noise patterns to control the speed of their movements. If there were one thing to expand on, it would be sending the robot’s data back to TouchDesigner so that it could control the boxes’ movements live instead of having them cued by an operator.

For the robot’s movements, we ended up making them cue-based for the performance. This was done directly in RAPID code, cued from an ABB controller “teach pendant”, a touch- and joystick-based interface for controlling ABB industrial arms. We gave the robot pre-planned coordinates and movement operations based on rehearsals with the actress. One thing we would love to expand on further is recording the actress’ movements so that we can play them back live instead of pre-defining them in a staged setting. In general we would love to make the whole play between the robot and the human more improvised rather than staged. Yet robotic motion proved to be the main challenge to overcome, since the multitude of axes (hence possible motions) of the machine we used (6 axes) makes it very easy to inadvertently reach positions where the robot is “tangled” and will throw an error and stop. It was interesting to learn, especially while working on this narrative, that these robotic arms and their programming workflows provide no notion of “natural motion” (as in the embodied intelligence humans have when moving their limbs in graceful and efficient ways, without becoming blocked by themselves), and are definitely not targeted towards real-time input. These robots are not meant to interact with humans, which was our challenge both technically and narratively.

In the end we created a narrative theatrical performance that we feel proud of. One that was created through much discussion and experimenting/play in the dFAB lab. There is much potential for more in this concept, and we hope to maybe explore it further one day!

Github:

https://github.com/maurothesandman/IACD-Robot-performance

https://github.com/kdloney/TouchDesigner_RoboticPerformance_IACD

***The TouchDesigner GitHub link currently contains the old file used during rehearsals. The final file will be uploaded at a later date due to travel and incomplete file transfers.

 SPECIAL THANK YOU:

Golan Levin, Larry Shea, Buzz Miller, CMU dFAB, CMU VMD, Mike Jeffers, Almeda Beynon, Anthony Stultz, Olivia Brown, and everyone that was involved with Spring 2014 IACD!

Best,

Kevan Loney

Mauricio Contreras

Collin Burger

12 May 2014

Banner design by Aderinsola Akintilo

Video:

Loop Findr from Collin Burger on Vimeo.

Tweet:
Loop Findr is a tool that automatically finds loops in videos so you can turn them into seamless gifs.

Blurb:
Since the format’s creation in 1987, animated GIFs have become one of the most popular means of expression on the Internet. They have evolved into their own artistic medium thanks to their ability to capture a particular feeling and the format’s portable nature. Loop Findr seeks to usher in a new era of seamless GIFs created from loops found in the videos that populate the Internet. Loop Findr is a tool that automatically finds these loops so users can turn them into GIFs that can then be shared all over the Web.

Narrative:
Inception:
The idea for Loop Findr came about during a conversation with Professor Golan Levin about research into pornographic video detection, in which the researchers analyzed the optical flow of videos in order to detect repetitive reciprocal motion. During this conversation, the idea emerged of using optical flow to detect and extract repetitive motion in videos, along with its potential for automatically retrieving nicely looped, seamless GIFs.

Research:
Professor Levin and I devised an algorithm for detecting loops based on finding periodicity in a sparse sampling of the optical flow of pixels in videos. After doing some research, I was inspired by the pixel difference compression method employed by the GIF file format specification. It became clear to me that for a GIF to appear to loop without any discontinuity, the pixel difference between the first and final frames must be relatively small.

Algorithm:
After performing the research, I decided to implement the loop detection by analyzing the percent pixel difference between video frames. This is enacted by keeping a ring buffer filled with video frames that are resized and converted to sixty-four by sixty-four greyscale images. For each potential start of a loop, the percent pixel difference of all the frames within the acceptable loop length range is calculated. This metric is calculated with the mean intensity value of the starting frame subtracted from both the starting frame and each of the potential ending frames. If the percent pixel difference is below the accuracy threshold specified by the user, then those frames constitute the beginning and end of a loop. If the percent pixel difference between the first frame of a new loop and the first frame of the previously found loop is within the accuracy threshold, then the one with the greater percent pixel difference is discarded. Additionally, minimum and maximum movement thresholds can be activated and adjusted to disregard video sequences without movement, such as title screens, or parts of the video with discontinuities such as cuts or flashes, respectively. The metric used to estimate the amount of movement is similar to the one used to detect loops, but in the case of calculating movement, the cumulative percent pixel difference is summed over all frames in the potential loop.
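
A minimal Python/numpy sketch of that core metric follows. The actual tool is written in openFrameworks; frame decoding, the ring buffer, the movement thresholds, and the duplicate-loop suppression are omitted here, and frames are assumed to be 64×64 greyscale arrays.

```python
# Sketch of the loop-detection metric described above (illustration only).
import numpy as np

def percent_pixel_difference(start, end):
    """Mean absolute difference after subtracting the start frame's mean
    intensity from both frames, as a fraction of the full 0-255 range."""
    mean = start.mean()
    return np.abs((start - mean) - (end - mean)).mean() / 255.0

def find_loops(frames, min_len=15, max_len=120, accuracy=0.02):
    """Return (start, end) index pairs whose end frame nearly matches the start."""
    loops = []
    for i in range(len(frames)):
        for j in range(i + min_len, min(i + max_len, len(frames))):
            if percent_pixel_difference(frames[i], frames[j]) < accuracy:
                loops.append((i, j))
                break   # keep the shortest loop starting at frame i
    return loops
```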

Development:
There was approximately a forty-eight hour span between deciding to take on the project and having a functioning prototype with the basic loop detection algorithm in place. Therefore, the vast majority of the time spent on development was dedicated to optimization and creating a fully-featured user interface. The galleries below show the progression of the user interface.

This first version of Loop Findr simply displayed the current frame that was being considered for the start of a loop. Any loops found were simply appended to the grid at the bottom right of the screen. Most of the major features were in place, including exporting GIFs.

The next iteration came with the addition of ofxTimeline and the ability to easily navigate to different parts of the video with the graphical interface. The other major addition was the ability to refine the loops found by moving the ends of the loops forward or backwards frame by frame.

In the latest version, the biggest change came with moving the processing of the video frames to an additional thread. The advantage of this was that it kept the user interface responsive at all times. This version also cleaned up the display of the found loops by creating a paginated grid.

Future Work:
Rather than focus on improving this openFrameworks implementation of Loop Findr, I will investigate the potential of implementing a web-based version so that it might reach as many people as possible. I envision a website where users could simply supply a YouTube link and have any potential loops extracted and given back to them. Additionally, I would like to combine the algorithm with some web crawling to find loops in video streams on the Internet, or perhaps just scrape popular video-hosting websites for loops.

 

Andrew Russell

12 May 2014

Beats Tree is an online, collaborative, musical beat creation tool.

Abstract

The goal of creating Beats Tree was to adapt the idea of an exquisite corpse to musical loops. The first user creates a tree with four empty bars and can record any audio they want in those four bars. Subsequent users then add layers on top of the first track. More and more layers can be added; however, only the previous four layers are played back at any time. These are called “trees” because users can create a new branch at any point. If a user does not like how a certain layer sounds, they can easily create their own layer at that point, ignoring the existing one.

Documentation

Beats Tree is a collaborative website that allows multiple users to create beats together. Users are restricted to just four bars of audio that, when played back, loop infinitely. More layers can then be added on top to have multiple instruments playing at the same time. However, only four layers can be played back at once. When more than four layers exist, the playback will browse through different combinations of the layers to give a unique and constantly changing musical experience.

Beats Tree - Annotated Beat Tree

When a tree has enough layers, playback will randomly browse through the tree. When the active layer finishes playing, the playback will randomly perform one of four actions: it may repeat the active layer; it may randomly choose one of its child layers to play; it may play its parent’s layer; or it may play a random layer from anywhere in the tree. When a layer is played back, its three parents’ layers, if they exist, are also played back.
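
A small Python sketch of that playback walk, purely as an illustration: the real site runs in the browser, and the Layer structure below is hypothetical.

```python
# Illustrative sketch of the Beats Tree playback walk.
import random

class Layer:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []

def next_layer(active, all_layers):
    """Pick the next layer: repeat, child, parent, or anywhere in the tree."""
    choice = random.choice(["repeat", "child", "parent", "anywhere"])
    if choice == "child" and active.children:
        return random.choice(active.children)
    if choice == "parent" and active.parent:
        return active.parent
    if choice == "anywhere":
        return random.choice(all_layers)
    return active   # repeat (also the fallback when no child/parent exists)

def layers_to_mix(active):
    """A layer plays together with up to three of its ancestors."""
    mix, node = [active], active.parent
    while node and len(mix) < 4:
        mix.append(node)
        node = node.parent
    return mix
```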

Beats Tree - View Mode

Users can also view and playback a single layer. Instead of randomly moving to a different layer after completion, it will simply loop that single layer again and again, with its parents’ layers also playing.  At this point, if the user likes what this layer sounds like, they can record their own layer on top.  If they choose to do so, they can record directly from the browser on top of the old layer.  The old layer will be played back while the new layer is recorded.

Beats Tree - Record Mode

The inspiration for this project came from the idea of an exquisite corpse. In an exquisite corpse, the first member either draws or writes something and then passes it to the next member. This continues until all members are done and you have the final piece of art. The main inspiration was the Exquisite Forest, a branching exquisite corpse built around animation. Beats Tree is like the Exquisite Forest, but with musical beats layered on top of each other instead of animations displayed over time.

Github

https://github.com/DeadHeadRussell/beats_tree

Sketches

Here are some sketches / rough code done while developing this application.

Beats Tree - Sketch 1

Beats Tree - Sketch 2

Beats Tree - Sketch 3

Beats Tree - Sketch 4

Nastassia Barber

12 May 2014

A caricature of your ridiculous interpretive dances!

This is an interactive piece which gives participants a set of strange prompts (e.g. “virus” or “your best friend”) to interpret into dance. At the end, the participant sees a stick figure performing a slightly exaggerated interpretation of their movements. This gives participants a chance to laugh with/at their friends, and also to see their movements as an anonymized figure that removes any sense of embarrassment and often allows people to say “wow, I’m a pretty good dancer!” or at least have a good laugh at their own expense.

Some people dancing to the prompts.

Some screenshots of caricatures in action.

For this project, I really wanted to make people re-examine the way they move and maybe make fun of them a little. I started with the idea of gait analysis/caricature, but the Kinect was relatively glitchy when recording people turned sideways (the only really good way to record a walk) and had too small a range for a reasonable number of walk cycles to fit in the frame. I eventually switched to dancing, which I still think achieves my objectives because it forces people to move in a way they might normally be too shy to move in public. Then, after they finish, they see an anonymous stick figure dancing and can see the way they move separated from the appearance of their body, which is an interesting new perspective. The anonymous stick-figure dance is kept for the next few dancers, who see previous participants as a type of “back-up” dancer to their own dance. All participants get the same prompts, so it can be interesting to compare everyone’s interpretations of the same words. I purposefully chose weird prompts to make people think and be spontaneous: “mountain,” “virus,” “yesterday’s breakfast,” “your best friend,” “fireworks,” “water bottle,” and “alarm clock.” It has been really fun to laugh with friends and strangers who play with my piece, and to see the similarities and differences between different people’s interpretations of the same prompts.
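
The piece’s exact caricature method isn’t described above, but one simple way to exaggerate a recorded pose is to push every Kinect joint away from the body’s centroid, as in this illustrative Python sketch.

```python
# Hedged sketch of a pose "caricature": scale each joint's offset from the
# body's centroid by an exaggeration factor. Illustration only; not the
# project's actual algorithm.
import numpy as np

def exaggerate_pose(joints, factor=1.3):
    """joints: (N, 2) array of skeleton joint positions for one frame."""
    center = joints.mean(axis=0)
    return center + factor * (joints - center)

def exaggerate_dance(frames, factor=1.3):
    """Apply the caricature to every recorded frame of the dance."""
    return [exaggerate_pose(f, factor) for f in frames]
```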

Dance Caricature! from Nastassia Barber on Vimeo.

Spencer Barton

12 May 2014

I recently used Amazon’s Mechanical Turk for the quantified selfie project. Mechanical Turk is a crowdsourced marketplace where you submit small tasks for hundreds of people to complete. It is used to tag images, transcribe text, analyze sentiment, and perform other tasks. A requester puts up a HIT (Human Intelligence Task) and offers a small reward for completion. People from all over the world then complete the task (if you priced it right). The result is that large, hard-to-compute tasks are completed quickly for far less than minimum wage. Turkers choose which tasks to work on, so your HIT has to be worth their while.

Turking is a bit magical. You put HITs (Human Intelligence Tasks) up and a few hours later a mass of humanity has completed your task, unless you screw up.

I screwed up a bit and learned a few lessons. First, it is essential to keep it simple: my first HIT had directions about including newlines, and I got a few emails from Turkers because the newlines were a bit confusing. I also learned that task completion is completely dependent on the price paid. Make sure to pay enough, and look at similar projects that are currently running.

Spencer Barton

12 May 2014

Young readers bring storybook characters to life through the Looking Glass.

Looking Glass explores augmented storytelling. The reader guides the Looking Glass over the pages in a picture book and animations appear on the display at set points on the page. These whimsical animations bring characters to life and enable writers to add interactive content.

I was inspired to create this project after seeing the OLED display for the first time. I saw the display as a looking glass through which I could create and uncover hidden stories. Storybooks were an ideal starting point because of a younger readership that is, these days, very eager to use technology like tablets and smartphones. However, unlike a tablet, Looking Glass requires the book and, more importantly, requires the reader to engage with the book.

For more technical details please see this prior post.

MacKenzie Bates

12 May 2014

Finger Launchpad

Tweet:

Launch your fingertips at your opponent. Think Multi-Touch Air Hockey. A game for MacBooks.

________________________________________________________________________________

Blurb:

Launch your fingertips at your opponent. Think multi-touch air hockey. Using the MacBook’s touchpad (which supports the same kind of multi-touch input as an iPad), use up to 11 fingers to try to lower your opponent’s health to 0. Hold a finger on the touchpad for a second; once you lift it, it will be launched in the direction it was pointing. When fingertips collide, the bigger one wins. Large fingertips do more damage than small ones, and skinny fingertips travel faster than wide ones. Engage in the ultimate one-on-one multi-touch battle.
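
As a rough sketch of those rules (the game itself is built on the MacBook trackpad feed via SendMultiTouches; the class, fields, and constants below are hypothetical):

```python
# Illustrative sketch of the fingertip launch/collision rules.
from dataclasses import dataclass

@dataclass
class Fingertip:
    size: float    # contact area: bigger wins collisions and deals more damage
    width: float   # skinnier fingertips travel faster
    angle: float   # direction the finger was pointing when lifted (radians)

def launch_speed(tip, base=400.0):
    """Skinnier fingertips launch faster (made-up scaling)."""
    return base / tip.width

def resolve_collision(a, b):
    """Bigger fingertip survives; damage scales with the winner's size."""
    winner = a if a.size >= b.size else b
    return winner, winner.size
```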

________________________________________________________________________________

Gameplay Video:

________________________________________________________________________________

Photos:

________________________________________________________________________________

Narrative:

In Golan’s IACD studio, he told me all semester that I would get to make a game; then the final project came around, and it was time to make one. But what game to make? I was paralyzed by the possibilities and by the fear that, after a semester of anticipation, I wouldn’t make a game that lived up to my or Golan’s expectations.

After we talked about what I should base a game on, Golan gave me a tool made by a previous IACD student that makes it easy to capture the multi-touch interactions that occur on a MacBook trackpad, which meant that I could make a multi-touch game without having to jump through hoops to build it for a mobile device.

So I sat there pondering what game to make with this technology, and the ideas instilled in me by Paolo’s Experimental Game Design – Alternative Interfaces course popped into my mind. If a game is multi-touch, then it should truly be multi-touch at its core (using multiple fingers at once should be central to gameplay). The visuals should be simple and minimalist (there is no need for a random theme that masks the game). This is the game I came up with, and it serves as a combination of what I have learned from Paolo and Golan. I think this might be the best-designed game I have made yet, and so far it is certainly the one I am most proud of.

________________________________________________________________________________

Links:

View/Download Code @: GitHub
Download Game @: MacKenzie Bates’ Website
Download SendMultiTouches @: Duncan Boehle’s Website
Read More About Game @: MacKenzie Bates’ Website

Austin McCasland

12 May 2014

Abstract:

Genetically Modified Tree of Life is an interactive display for the Center for Postnatural History in Pittsburgh.  “The PostNatural  refers to living organisms that have been altered through processes such as selective breeding or genetic engineering.” [www.postnatural.org]

Model organisms are the building blocks of these altered organisms, also known as Genetically Modified Organisms (GMOs).

This app shows the tree of life ending in every model organism used to make these GMOs, and it allows people to select organisms to read the story behind them.

 

Description:

History museums are a fun and interesting avenue for people to experience things which existed long ago. If people want to experience things which have happened more recently, however, there is one outlet – the Center for Postnatural History. “The PostNatural refers to living organisms that have been altered through processes such as selective breeding or genetic engineering.” [www.postnatural.org] Children’s imaginations light up at the prospect of mammoths walking the earth thousands of years ago, or terrifyingly large dinosaurs millions of years before that, but today is no less exciting. Mutants roam the earth, large and small, some ordinary and some fantastic.

 

Take, for example, the BioSteel Goat. These goats have been genetically modified with spider genes so that spider-silk proteins are produced in their milk. They are milked, and that milk is processed, yielding large amounts of an incredibly strong fiber, stronger than steel.

The Genetically Modified Tree of Life is an interactive display which I created for the Center for Postnatural History under the advisement of Richard Pell. In its final form, the app will exist as an interactive installation on a touch screen that allows visitors to come up and learn more about particular genetically modified organisms in a fun and informative way. The app visualizes the tree of life as seen through the perspective of genetically modified organisms by showing the genetic path of every model organism, from the root of all life to the modern day, in the form of a tree. These model organisms’ genes are what scientists use to create all genetically modified organisms, as they are representative of a wide array of genetic diversity. Visitors to the exhibit will be able to drag the tree around, mixing up the branches of the model organisms, as well as select individual genetically modified organisms from the lower portion of the screen to learn more about them. These entries are pulled from the Center for Postnatural History’s database. The objective of this piece is to be educational and fun in its active state, as well as visually attractive in its passive state.

 

Tweet:

Visualization of the tree of life as seen by GMOs.

 

Ticha Sethapakdi

12 May 2014

Tweet
Euphony: A pair of sound sculptures which explore audio-based memories.

Overview
“Euphony” is a pair of telephones found in an antique shop in Pittsburgh. Through the use of an Audio Recording / Playback breakout board, a simple circuit, and an Arduino, the phones were transformed into peculiar sculptures which investigate memories in the form of sound. The red phone is exclusively an audio playback device, which plays sound files based on phone number combinations, while the black phone is a playback and recording device. Together, these ‘sound sculptures’ house echoes of the remarkable, the mundane, the absurd, and sometimes even the sublime.

A Longer Narrative
One day I was sifting through old pictures on my phone, and as I looked through them I had a strange feeling of disconnect between myself and the photos. While the photos evoked a sense of nostalgia, I was disappointed that I was unable to re-immerse myself in those points in time. It was then that I realized how a photograph may be a nice way to preserve a certain moment of your life, but it does not allow you to actually ‘relive’ that moment. Afterwards, I tried to think of a medium for memory capture that would be simple yet effective, and also immersive. Then I thought, “if visuals fail, why not try sound?”, which prompted me to browse my small collection of audio recordings. As I listened to a particular recording of me and my friends discussing the meaning of humor in a noisy cafeteria, I noticed how close I felt to that memory; it seemed as if I were in 2012 again, a freshman trying to be philosophical with her friends in Resnik cafe and giggling half the time. Thus, I was motivated to make something that allowed people to access aural memories, but in a less conventional way than a dictation machine.

I chose the telephone because it is traditionally a device for accessing whatever is happening in the present. I was interested in breaking that function and transforming the phone into something that echoes back the past. As a result, I made it play recordings that could only be accessed by dialing certain phone numbers, and wrote down the ‘available’ phone numbers in a small phone book for people to flip through. This notion of ‘echoing the past’ was incorporated in the second phone, but in a slightly different way: while the first phone (the red one) held the more distant past, the second phone (the black one) kept the immediate past. With the black phone, I wanted to explore the idea of communication and the indirect interaction between people. I made the black phone into an audio recording and playback device, which first plays the previous recording made and then records until the user hangs up the telephone. All the recordings have the same structure: a person answers whatever question the previous person asked and then asks a different question. I really liked the idea of making a chain of people communicating disjointly with each other, and since the Arduino would keep each recording, I was curious to see whether the compiled audio would be not so much a chain as a trainwreck.
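
As an illustration of the red phone’s dial-to-playback behavior: the sculpture actually runs on an Arduino with a VS1053 board, so the hardware helpers in this Python sketch are stand-ins, and the numbers and file names are made up.

```python
# Hedged sketch of mapping dialed numbers to recordings (illustration only).
import random

RECORDINGS = {                 # dialed number -> audio clip (examples only)
    "5551234": "family_dinner.mp3",
    "5550188": "bangkok_market.mp3",
}

def read_digit():
    """Stand-in for reading one dialed digit from the phone hardware."""
    return str(random.randint(0, 9))

def play_file(name):
    """Stand-in for playback through the VS1053 breakout board."""
    print("playing", name)

def red_phone_loop():
    dialed = ""
    while True:
        dialed += read_digit()
        if len(dialed) == 7:   # a full (local) number has been dialed
            play_file(RECORDINGS.get(dialed, "wrong_number.mp3"))
            dialed = ""
```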

People responded very positively to the telephones, especially the second one. To my surprise, there were actually people outside my circle of friends who were interested in the red phone despite it being a more personal project that only had recordings made by me and my family in Thailand. I am also glad that the black telephone was a success and people responded to it in very interesting ways. My only regret was that I was unable to place the phones in their “ideal” environment–a small, quiet room where people can listen to and record audio without any disruptions.

Some feedback:

  • Slightly modify the physical appearance of the phones in a way that succinctly conveys their functions.
  • Golan also suggested that I look into the Asterisk system if I want to explore telephony. I was unable to use it for this project because the phones were so old that, in order to be used as regular phones, they needed to be plugged into special jacks that you can’t find anymore.
  • Provide some feedback to the user to indicate that the device is working. The first phone might have caused confusion for some people because, while they expected to hear a dial tone when they picked up the receiver, they instead heard silence. It also would have been nice to play DTMF frequencies as the user is dialing.
  • Too much thinking power was needed for the black phone because the user had to both answer a question and conceive a question in such a short amount of time. While this may be true, I initially did it that way because I wanted people to feel as if they were in a real conversational situation; conversations can get very awkward and may induce pressure or discomfort in people. When having a conversation, you have to think on your feet in order to come up with something to say in a reasonable amount of time.

Pictures!

Each phone was made with an Arduino Uno and a VS1053 Breakout Board from Adafruit.

Also many thanks to Andre for taking pictures at the exhibition. :)

Github code here.