Daily Archives: 13 May 2014

Emily Danchik

13 May 2014

Finding emilyisms in my online interactions.

This post is long overdue, and exemplifies the time-honored MHCI mantra of “done is better than perfect.”

I downloaded my entire Facebook and Google Hangouts history, hoping to find examples of “emilyisms.” By that, I mean key words or phrases that I repeat commonly enough for someone to associate them with me.

Once I isolated the text itself, I read it into NLTK and used it to find n-grams between two and seven words long. Then I put the data into a bubble cloud using D3, hoping to visually pick out phrases that identify my speech.
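The counting step can be sketched roughly like this (an illustration only, not my actual pipeline; the file name is a stand-in for the exported chat text):

    # Count 2- to 7-word n-grams from the exported chat text with NLTK.
    from collections import Counter
    import nltk
    from nltk.util import ngrams

    # nltk.download("punkt") is needed once for word_tokenize.
    with open("chats.txt", encoding="utf-8") as f:
        tokens = nltk.word_tokenize(f.read().lower())

    counts = Counter()
    for n in range(2, 8):  # combinations 2-7 words long
        counts.update(" ".join(gram) for gram in ngrams(tokens, n))

    # The most frequent phrases feed the D3 bubble cloud.
    for phrase, freq in counts.most_common(50):
        print(freq, phrase)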

Here is the result (you can see the full version here):

[Bubble cloud of the 2-7 word phrases]

 

My original intent was for phrases with fewer words to be lighter colors, and phrases with more words to be darker. This way, I hoped to easily point out phrases which were uniquely mine. Many of the larger circles represent two-word combinations that I use frequently, but are not particularly Emily-like.


I mean, of course I say “and I” a lot

Through exploring data in the visualization, I did find some interesting patterns. For example, during my in-class critique, it was pointed out that I say “can you” twice as often as I say “I can.” That realization actually helped me shape the rest of my semester here, as silly as it sounds.

There are some definite emilyisms mixed in, but they are not highlighted:

[Three screenshots of emilyism phrases from the visualization]

The last screenshot also shows a feature / quirk of NLTK: its tokenizer splits contractions into two separate words. This may have affected my emilyism search.

Once I figure out CoffeeScript, I hope to highlight the phrases with fewer words, so that the majority of the bubbles will be light green and the ones with more words will be darker.
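As a rough sketch of what that could look like on the data side (the color ramp, file name, and frequencies below are illustrative, not the project's), the JSON fed to D3 can carry a shade keyed to phrase length:

    import json

    # Light-to-dark green ramp keyed to n-gram length (colors are made up).
    SHADES = {2: "#d9f2d9", 3: "#b3e6b3", 4: "#8cd98c",
              5: "#66cc66", 6: "#40bf40", 7: "#269926"}

    def bubble_data(counts):
        # counts maps each phrase to its frequency, as produced by the n-gram step.
        return [{"name": phrase,
                 "value": freq,
                 "color": SHADES[len(phrase.split())]}
                for phrase, freq in counts.items()]

    # Made-up example frequencies, just to show the shape D3 would consume.
    with open("bubbles.json", "w") as f:
        json.dump(bubble_data({"and i": 40, "can you": 20,
                               "i was thinking about": 3}), f, indent=2)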

 

Emily Danchik

13 May 2014


Computationally generating raps out of TED talks.

About

TEDraps is a project by Andrew Sweet and Emily Danchik.
We have developed a system for creating computationally-generated, human-assisted raps from TED talks. Sentences from a 100GB corpus of talks are analyzed for syllables and rhyme, and are paired with similar sentences. The database can also be queried for sentences with certain keywords, generating a rap with a consistent theme. After the sentences for the rap are chosen, we manually clip the video segments, then have the system squash or stretch them to fit the beat of the backing track.

Text generation

We scraped the TED website to generate a database of over 100GB of TED talk videos and transcripts. We chose to focus on TED talks because most of them have an annotated transcript with approximate start and end points for each phrase spoken.

Once the phrases were in the database, we could query for phrases that included keywords. For example, here is the result of a query for swear words:
Here you can see that even on the elevated stage, we have many swearers.

Here is another example of a query, instead looking for all phrases that start with “I”:
Here we can see what TEDTalkers are.
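A minimal sketch of this kind of keyword query, assuming the phrases sit in a SQLite table (the schema, file name, and example patterns are assumptions, not the project's actual code):

    import sqlite3

    def phrases_matching(db_path, pattern):
        # A `phrases` table with (talk_id, start_sec, end_sec, text) is assumed.
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT talk_id, start_sec, end_sec, text FROM phrases "
            "WHERE text LIKE ?", (pattern,)).fetchall()
        conn.close()
        return rows

    # Phrases containing a keyword, and phrases starting with "I":
    keyword_hits = phrases_matching("ted.db", "% damn %")
    i_phrases = phrases_matching("ted.db", "I %")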

Using NLTK, we were able to analyze the corpus based on the number of syllables per phrase and the rhymability of phrases. For example, here is a result of several phrases that rhyme with “bet”:

to the budget
to addressing this segment
of the market
in order to pay back the debt
that the two parties never met
I was asked that I speak a little bit
Then the question is what do you get
And so this is one way to bet
in order to pay back the debt

Later, we modified the algorithm to match up phrases which rhymed and had a similar number of syllables, effectively generating verses for the rap. We then removed the sentences that we felt didn’t make the cut, and proceeded to the audio generation step.
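A minimal sketch of how rhyme and syllable matching can be done with NLTK's CMU Pronouncing Dictionary (an illustration of the technique, not the project's actual matching code, and stricter than some of the loose matches shown above, such as "budget" / "bet"):

    from nltk.corpus import cmudict  # needs nltk.download("cmudict") once

    PRON = cmudict.dict()

    def syllable_count(phrase):
        # Vowel phones end with a stress digit (0/1/2); count them per word.
        total = 0
        for word in phrase.lower().split():
            phones = PRON.get(word, [["AH0"]])[0]  # crude fallback for unknown words
            total += sum(ph[-1].isdigit() for ph in phones)
        return total

    def rhyme_key(phrase):
        # Phones of the last word, from its final stressed vowel onward.
        phones = PRON.get(phrase.lower().split()[-1])
        if not phones:
            return None
        word = phones[0]
        for i in range(len(word) - 1, -1, -1):
            if word[i][-1].isdigit():
                return tuple(word[i:])
        return tuple(word)

    def rhyming_pairs(phrases, max_syllable_gap=2):
        # Pair up phrases that share a rhyme key and have similar syllable counts.
        pairs = []
        for i, a in enumerate(phrases):
            for b in phrases[i + 1:]:
                if rhyme_key(a) and rhyme_key(a) == rhyme_key(b) and \
                   abs(syllable_count(a) - syllable_count(b)) <= max_syllable_gap:
                    pairs.append((a, b))
        return pairs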

Audio generation

Once we identified the phrases that would create the rap, we manually isolated the part of each video that represented each phrase. This had to be done by hand, because the TED timestamps were not perfectly accurate, and because computational linguistics research has yet to develop a completely accurate method for segmenting spoken words.

Once we had each phrase on its own, we applied an algorithm to squash or stretch the segment to the beats per minute of the backing track. For example, here is a segment from Canadian astronaut Chris Hadfield’s TED talk:
First, the original clip:

Second, the clip again, stretched slightly to fit the backing track we wanted:

Finally, we placed the phrase on top of the backing track:

We did not need to perform additional editing for the speech track, because people tend to speak in a rhythmic pattern on their own. By adding ~15 rhyming couplets together over the backing track, we ended up with a believable rap.
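As a hedged illustration of this kind of tempo adjustment (not necessarily the tool used for this project), ffmpeg's atempo filter can squash or stretch a clip to a target BPM without changing its pitch:

    import subprocess

    def stretch_to_bpm(in_path, out_path, clip_bpm, target_bpm):
        # atempo changes tempo without changing pitch; it accepts factors 0.5-2.0.
        factor = target_bpm / clip_bpm
        subprocess.run(["ffmpeg", "-y", "-i", in_path,
                        "-filter:a", f"atempo={factor:.3f}", out_path],
                       check=True)

    # Hypothetical clip: speed a phrase up slightly to sit on a 90 BPM backing track.
    stretch_to_bpm("phrase.wav", "phrase_90bpm.wav", clip_bpm=84, target_bpm=90)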

The Digital Prophet

The Digital Prophet is a rendition of Gibran Khalil Gibran’s “The Prophet” as told by the Internet, composed of tweets, images, wiki snippets, and Mechanical Turk input.


The PDF is available at: http://secure-tundra-7963.herokuapp.com/

A new version can be generated at: http://secure-tundra-7963.herokuapp.com/generate


The Digital Prophet is a generative essay collection and a study of internet anthropology: the relationship between humans and the Internet. Using random access to various parts of the Internet, is it possible to gain sufficient understanding of a body of knowledge? To explore this question, random blurbs of data related to facets of life such as love, death, and children are collected via stochastic processes from various corners of the Internet. The book is augmented with images and drawings collected from internet communities. In a way, the internet communities, and by extension the Internet itself, become the book’s autobiographical author. The process and content of this work are a tribute to the philosopher Gibran Khalil Gibran’s The Prophet.

Generative processes

The Digital Prophet is a story told by The Internet, an autobiographical author composed of various generative stochastic processes that pull data from different parts of the Internet. The author is an amalgam of content from digital communities such as Twitter, Wikipedia, Flickr and Mechanical Turk.

As it is not (yet) possible to ask The Internet a question directly, each community played two roles: first as a fountain of new data, and second as a filter on the raw form of the internet, through the categorization and annotation of that data. In effect, by peering through the lens of an internet community, we can extract answers to questions about all facets of life.

By asking Mechanical Turk, a community of ephemeral worker processes, personal questions, we obtain deep and meaningful answers. Prompts such as `What do you think death is?`, `Write a letter to your ex-lover`, and `Draw what you think pain is` bring out stories of love, cancer, family, religion, molestation, and a myriad of other topics.

Twitter, on the other hand, is a firehose of raw information. Posts had to be parsed and filtered using natural language processing to find the tweets with the most meaningful content, and sentiment analysis was used to keep the tweets that were most emotionally charged.
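The technology list below includes the npm ‘natural’ module for this kind of processing; purely to illustrate the idea of keeping only strongly charged tweets, here is a stand-in sketch using NLTK’s VADER analyzer instead (not the project’s code):

    from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

    def charged_tweets(tweets, threshold=0.6):
        sia = SentimentIntensityAnalyzer()
        # VADER's compound score runs from -1 (very negative) to +1 (very positive);
        # keep tweets that are strongly charged in either direction.
        return [t for t in tweets
                if abs(sia.polarity_scores(t)["compound"]) >= threshold]

    print(charged_tweets(["I miss you so much it hurts.",
                          "Lunch was okay, I guess."]))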

[Sample page generated from Twitter content]

Flickr data corresponds to a random stream of user-contributed images. In many cases, drawing a random image related to a certain topic and juxtaposing it with other content serendipitously creates entirely different stories.


Wikipedia articles have an air of authority: each is narrated by thousands of different authors and different voices, converging into a single (although temporary) opinion. Compared and mixed with the other content, it provides a cohesive, descriptive tone for the narrative.

The system

The author is generated through API calls that gather, sort, and sift through the internet in order to get the newest, most complete answer to the question `What is life?`. When accessing the project’s website, a user sets this process into motion and generates both a unique author and a unique book each time. The book is timestamped with the date it was generated and dedicated to a random user (a random part) of The Internet. Hitting refresh makes this author disappear, as no book can be generated in the same way again.
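The technology list below includes wkhtmltopdf, which presumably handles the final HTML-to-PDF step; a minimal sketch of that step, with hypothetical file names:

    import subprocess

    def render_book(html_path="book.html", pdf_path="book.pdf"):
        # wkhtmltopdf converts the assembled HTML into the downloadable PDF.
        subprocess.run(["wkhtmltopdf", html_path, pdf_path], check=True)

    render_book()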


Technology used

  • Flickr API
  • Twitter Search API
  • Mechanical Turk (Java command line tools)
  • Wikipedia REST API
  • Node.JS
  • npm ‘natural’ (NLP) module
  • wkhtmltopdf
  • Hosted on Heroku

Kevan Loney

13 May 2014


Tweet:

Witness THE ENCOUNTER of two complex creatures. Teaching and learning causes a relationship to bloom.

Abstract:

“The Encounter” tells the tale of two complex creatures. A human and an industrial robotic arm meet for the first time and engage in an unexpected, playful interaction, teaching and learning from each other. This story represents how we, as a society, have grown to learn from and challenge technology as we move forward in time. Using TouchDesigner, RAPID, and the Microsoft Kinect, we derived a simple theatrical narrative through experimentation and collaboration.

Narrative:

During the planning stages of the capstone project, Golan Levin introduced us, the authors of the piece, to one another, saying that we could benefit from working with each other. Mauricio, from Tangible Interaction Design, wanted to explore the use of the robotic arm combined with mobile projections, while Kevan, from the School of Drama VMD, wanted to explore interactive theatre. This pairing ended with the two of us exploring and devising a performative piece from the ground up. In the beginning, we had no idea what story we wanted to tell or what we wanted it to look like; we just knew a few things we wanted to explore. We ultimately decided to let our tests and experiments drive the piece and let the story form naturally from the play between us, the robot, the tools, and the actress, Olivia Brown.

We tried out many different things and built several testing tools, such as a way to send Kinect data from a custom TouchDesigner script over a TCP/IP connection so that RAPID could read it and let the robot move in sync with an actor. This experiment proved to be a valuable tool and research experience. A Processing sketch was also used during the testing phases to evaluate some of the robot’s movement capabilities and responsiveness. Although we ended up dropping live tracking from the final performance in favor of cue-based triggering, the idea of the robot moving and responding to the human’s movement ultimately drove the narrative of the final piece.
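A minimal sketch of the shape of that Kinect-to-RAPID bridge (a generic illustration, not the actual TouchDesigner script; the host, port, and message format are assumptions):

    import socket

    ROBOT_HOST, ROBOT_PORT = "192.168.125.1", 1025  # hypothetical controller address

    def send_position(sock, x, y, z):
        # RAPID on the other end would parse a simple delimited line into a move target.
        sock.sendall(f"{x:.1f};{y:.1f};{z:.1f}\n".encode("ascii"))

    with socket.create_connection((ROBOT_HOST, ROBOT_PORT)) as sock:
        # In TouchDesigner this would be driven by a CHOP callback carrying the
        # tracked joint position; here a single hard-coded sample is sent.
        send_position(sock, 350.0, -120.5, 600.0)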

We had planned many different things, such as using the entire dFAB (dfabserver.arc.cmu.edu) lab space as the setting for the piece. This would have included multiple projector outputs and rigging, further planning, extra safety precautions for our actress, and so on. Little did we realize that this would have been trouble for us given the quickly shortening deadline. Four days before we were supposed to have our performance, an unexpected visitor arrived with a few words of wisdom for us: “Can a robot make shadow puppets?” Yes, Golan’s own son’s curiosity was the jumping-off point for our project to come together, not only visually but also realistically given the time we had left. From that moment on we set out to create a theatrical, interactive robotic shadow performance. The four days were spent researching the best way to set up the makeshift stage in the small confines of dFAB and finishing the performance as strongly as we could. To make the stage, we rigged Unistrut (lined with Velcro) along the pipes of the ceiling. From there we used a Velcro-lined RP screen from the VMD department and attached it to the Unistrut and two Autopoles for support. This created a sturdy, clean-looking projection and shadow surface for the space.

Conceptually, we used two different light sources for the narrative: an ellipsoidal stage light and a 6000-lumen Panasonic standard-lens projector. At the beginning of the piece, the stage light was used to cast the shadows. Through experimenting with light sources, we found that this type of light gave the “classic” shadow-puppet atmosphere, a vintage vibe of the yesteryears before technology. As the story progresses and the robot turns on, the stage light flickers off while the digital projector takes over as the main source of light. This transition is meant to show the evolution of technology being introduced to our society while expressing the contrast between analog and digital.

The concept for the digital world was that it would react to both the robot’s and the human’s input. For example, the human brings an organic life to the world, represented through fluctuations of the color spectrum brought to the robot’s cold, muted world. Because this is the robot’s world, as it moves around the space and begins to make more intricate maneuvers, the space responds like a machine: boxes push in and out of the wall, as if the environment is alive and part of the machine. The colors of the world are completely controlled by the actress during the performance, while the boxes were cue-based, using noise patterns to control the speed of their movements. If there were one thing we could expand on, it would be sending the robot’s data back to TouchDesigner so that it could control the boxes’ movement live instead of relying on cues from an operator.

For the robot’s movements, we ended up making them cue-based for the performance. This was done directly in RAPID code, cued from an ABB controller “teach pendant,” a touch- and joystick-based interface for controlling ABB industrial arms. We gave the robot pre-planned coordinates and movement operations based on rehearsals with the actress. One thing we would love to expand on further is recording the actress’s movements so that we could play them back live instead of pre-defining them in a staged setting. In general, we would love to make the whole play between the robot and the human more improvised rather than staged. Yet robotic motion proved to be the main challenge to overcome: the number of axes (hence possible motions) of the machine we used (six axes) makes it very easy to inadvertently reach positions where the robot is “tangled” and will throw an error and stop. It was interesting to learn, especially while working on this narrative, that these robotic arms and their programming workflows do not provide any notion of “natural motion” (the embodied intelligence humans have when moving their limbs gracefully and efficiently, without becoming blocked by themselves) and are definitely not targeted at real-time input. These robots are not meant to interact with humans, which was our challenge both technically and narratively.

In the end we created a narrative theatrical performance that we feel proud of. One that was created through much discussion and experimenting/play in the dFAB lab. There is much potential for more in this concept, and we hope to maybe explore it further one day!


Github:

https://github.com/maurothesandman/IACD-Robot-performance

https://github.com/kdloney/TouchDesigner_RoboticPerformance_IACD

***The TouchDesigner GitHub link currently points to the old file used during rehearsals. The final file will be uploaded at a later date due to travel and incomplete file transfers.

 SPECIAL THANK YOU:

Golan Levin, Larry Shea, Buzz Miller, CMU dFAB, CMU VMD, Mike Jeffers, Almeda Beynon, Anthony Stultz, Olivia Brown, and everyone that was involved with Spring 2014 IACD!

Best,

Kevan Loney

Mauricio Contreras