Category Archives: Final Project

Afnan Fahim

26 May 2014


This post documents the final project carried out by Sama Kanbour and Afnan Fahim.

Couplets
For our final project, we built a way for people to interact with artworks using their face as a puppeteering tool. Visitors to the project use their face to distort and puppeteer images of male/female “couples”. Each half of the user’s face controls one member of the couple.

We found the couples by exhaustively searching the online image archives made available by the Metropolitan Museum of Art. We edited these images in Photoshop to make the edges of the couples clear, used computer vision techniques to find contours of each couple's bodies and/or faces, triangulated those contours, and built puppets from the resulting meshes. Finally, we used a face tracker to detect the user's facial expressions, and used that data to control the couples' puppets.

The project was built with the openFrameworks arts-engineering toolkit, and various addons including ofxCV, ofxDelaunay, ofxPuppet, and ofxFaceTracker. Source code for the project is available here: https://github.com/afahim/Mimes
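As a rough illustration of how these pieces could fit together (this is not the project's actual source, which is linked above), the minimal openFrameworks sketch below assumes the addon APIs as they appear in their bundled examples; the placeholder contour, the two control-point indices, and the mouth-gesture mapping are all hypothetical stand-ins.

    // Hypothetical sketch: one deformable "couple" mesh driven by the viewer's face.
    #include "ofMain.h"
    #include "ofxCv.h"
    #include "ofxFaceTracker.h"
    #include "ofxDelaunay.h"
    #include "ofxPuppet.h"

    class ofApp : public ofBaseApp {
    public:
        ofVideoGrabber cam;
        ofxFaceTracker tracker;
        ofxDelaunay triangulation;
        ofxPuppet puppet;
        ofImage couple;                              // pre-edited Met image
        int maleIdx = 0, femaleIdx = 1;              // hypothetical mesh vertices, one per figure

        void setup() {
            cam.setup(640, 480);
            tracker.setup();
            couple.load("couple.png");

            // Placeholder contour; the real one would come from ofxCv contour finding.
            vector<ofPoint> contour = { {100,100}, {300,100}, {300,400}, {100,400} };
            for (auto& p : contour) triangulation.addPoint(p);
            triangulation.triangulate();

            puppet.setup(triangulation.triangleMesh);
            puppet.setControlPoint(maleIdx);
            puppet.setControlPoint(femaleIdx);
        }

        void update() {
            cam.update();
            if (!cam.isFrameNew()) return;
            tracker.update(ofxCv::toCv(cam));
            if (!tracker.getFound()) return;

            // Map a face gesture onto each member of the couple (toy mapping).
            float open = tracker.getGesture(ofxFaceTracker::MOUTH_HEIGHT);
            puppet.setControlPoint(maleIdx,   ofVec2f(150, 250 + open * 5));
            puppet.setControlPoint(femaleIdx, ofVec2f(250, 250 - open * 5));
            puppet.update();
        }

        void draw() {
            couple.draw(0, 0);
            puppet.drawFaces();   // in the real project the mesh would be textured with the couple image
        }
    };

    int main() {
        ofSetupOpenGL(640, 480, OF_WINDOW);
        ofRunApp(new ofApp());
    }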

Tweet:
We brought couples from the Met museum back to life.

Abstract:
This interactive art piece brings fourteen couples from the Metropolitan Museum back to life. It allows people to discover the stories of different couples in history. Viewers animate the couples' facial expressions through puppeteering: the left half of the viewer's face puppeteers the male partner, while the right half puppeteers the female partner. By using the two halves of their face, viewers can simulate the conversation that might once have taken place between these couples.

Narrative:
This interactive art piece brings fourteen carefully selected couples from the Metropolitan Museum of Art back to life. It allows people to discover the stories of different couples in history. Viewers animate the couples' facial expressions through puppeteering: the left half of the viewer's face puppeteers the male partner, while the right half puppeteers the female partner. By using the two halves of their face, viewers can simulate the conversation that might once have taken place between these couples.

We want to make historical art more tangible and accessible to today's audiences. We hope to foster meaningful interactions between viewers and the selected artworks, and perhaps to spark their interest in learning more about the pieces presented.

The piece was built using openFrameworks. We used Kyle McDonald's ofxFaceTracker to detect the viewer's facial features, ofxCV and ofxDelaunay to create a mesh from each couple, and ofxPuppet to animate their expressions. Oriol Messia's prototypes helped kickstart the project. The artworks were handpicked from the Metropolitan Museum of Art's online Collections pages. The project was carried out under the supervision of Professor Golan Levin, with additional direction from Professor Ali Momeni.

Video: 
Couplets :  https://vimeo.com/96155271


Wanfang Diao

16 May 2014


Tweet:

Motion Sound: Transferring kinematic models to sound.

Overview

Motion Sound is an installation that makes sound based on its own motion. Kinematic models (gravity, inertia, momentum) create a continuation of our gestures: a duplicate of a person's gesture that carries on into the future. My idea is to make that continuation audible. For this project, I made a wooden bell-shaped pendulum; when people touch it, different sounds are triggered by its motion.

A Longer Narrative:

When I studied kinematics, I learned a lot of motion patterns, and I have always felt that there is a tight relationship between physics and sound. This idea is also inspired by the BitBall from Mitchel Resnick's work on "digital manipulatives."

On the other hand, kinematic models (gravity, inertia, momentum) create a continuation of our gestures, a duplicate of a person's gesture that carries on into the future. What if it could be heard?

My first prototype used the touchOSC app on an iPhone connected to Max/MSP to read the iPhone's accelerometer data and map it to sound. I 3D-printed a tumbler-style holder for the iPhone. Here is the video:

In the final version, I chose a pendulum as the kinematic model. For the hardware, I used an Arduino Nano as the controller, with a wind sensor and an accelerometer to collect data. For the software, I used Mozzi, a sound-synthesis library for Arduino, to design the sound. After some experiments, my final prototype sounds like an electronic wind chime.
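A minimal Mozzi sketch in the spirit of this setup (not the exact code running in the piece) might look like the following; the pin assignments and the motion-to-sound mapping are assumptions for illustration.

    // Hypothetical sketch: a sine oscillator whose pitch follows the pendulum's
    // swing and whose level is "struck" by airflow, then decays like a chime.
    #define CONTROL_RATE 64                      // control updates per second
    #include <MozziGuts.h>
    #include <Oscil.h>
    #include <tables/sin2048_int8.h>

    Oscil<SIN2048_NUM_CELLS, AUDIO_RATE> chime(SIN2048_DATA);
    byte level = 0;                              // 0-255 amplitude envelope

    void setup() {
      startMozzi(CONTROL_RATE);
    }

    void updateControl() {
      int accel = mozziAnalogRead(A0);           // accelerometer axis (assumed on A0)
      int wind  = mozziAnalogRead(A1);           // wind sensor (assumed on A1)

      chime.setFreq(220 + accel / 2);            // more swing -> higher pitch
      level = max(level - 2, 0);                 // slow decay
      if (wind > 300) level = 255;               // airflow "strikes" the chime
    }

    int updateAudio() {
      return (chime.next() * level) >> 8;        // scale 8-bit sample by envelope
    }

    void loop() {
      audioHook();                               // required by Mozzi
    }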

I lathed a wooden bell-shaped shell to house the Arduino, the sensors, and the speaker.


Joel Simon

15 May 2014

I made this last week and forgot to post it here: http://www.joelsimon.net/fb-graffiti.html

Tweet : “A Chrome Extension that allows any post or photo on Facebook to be publicly drawn over.”

Blurb:

FB Graffiti is a Chrome extension that exposes every wall post and photo on Facebook to graffiti.

All drawings are:

  • Public (for everyone else with the extension).
  • Anonymous (completely).
  • Permanent (no undo or erase options). 

The purpose of FB Graffiti is twofold. First, to enable a second layer of conversation on top of Facebook. The highly controlled and curated nature of conversations on Facebook is not conducive to many forms of conversation and also not analogous to real ‘walls.’ For better or worse, anonymous writings allow this.
Second, it allows any image to be a space for collaborative art that is deeply connected to its context (the page it is on and the content it is on). Wandering Facebook can now be a process of discovery, coming across old artworks and conversations scattered across the site.

Narrative:

I went through a lot of ideas for this project and a lot of uncertainty about whether I would find a project I liked. I wanted something that would be an online tool, shareable, and involving Facebook. This was, of course, after spending two weeks on the Facebook phrenology idea and two weeks before that on an online collaborative sculpture program. Each of those ideas actually made decent progress, including full n-gram generation from all the Facebook messages for the phrenology idea. I then had the idea to create a full 3D living creature built out of your Facebook div elements using WebGL CSS3D rendering. Once I got complete control of Facebook in 3D I got really excited, because I knew that had not been done before and I had just stumbled into a lot of potential. After working through some 3D ideas, such as a museum generator or games, I realized that 3D was actually holding me back, since it added a lot of complexity for not much gain (the internet is in two dimensions for many reasons). I realized I had been distracted by the technicalities of implementation and had to go back to the meaning of what I was doing. I gave myself the restriction of still using Facebook; otherwise I was back at step zero.

I decided to look at the basic analogies of Facebook and try to build from there. That's when I began to think about the 'wall' analogy and how to expand it. I thought about poster-covered walls and how they differ from their virtual counterpart. I had also recently watched a documentary about graffiti and its history in New York, which was a good way to ground my thinking in the history of graffiti.

Our walls on Facebook are very curated, polished, and non-anonymous. All of these descriptors are polar opposites of 'real' walls, which are exposed, unprotected, and anonymous places. I wanted to bring that vulnerability of the real world to Facebook. Obviously the quality of the content is going to be mostly poor (penises). However, by giving it to members of this class on the first day, I was able to see a lot of really great content come out of it. I am totally okay if only a minority of the pieces are creative collaborative works, as long as the rest of them are still fun and non-destructive.

I have been working hard the last two weeks to improve it. I redid all the logging yesterday to use a dedicated database, and I have been working to add the ability to share the drawings directly from Facebook; there are a lot of technical challenges there. I look forward to improving FB Graffiti all summer.

Jeff Crossman

15 May 2014


Tagline
Industrial Robot + an LED + Some Code = Painting in the physical world in all 3 dimensions

About the Project
Light painting is a photographic technique in which a light is moved in front of a camera taking a long exposure. The result is a streaking effect that resembles a stroke on a canvas. This is usually accomplished with a free-moving handheld light source, which creates paintings full of arcs and random patterns. While some artists can achieve recognizable shapes and figures in their paintings, these usually lack proper proportions and appear abstracted due to the lack of real-time visual feedback while painting. Unlike traditional painting, the lines the artist makes do not persist in physical space and are only visible through the camera. Recently, arrays of computer-controlled LEDs placed on a rigid rod have allowed for highly precise paintings, but only on a single plane.

Industrial Light Painting is a project that, for the first time, aims to merge the three-dimensional flexibility of a free-moving light with the precision of a computer-controlled light source. Together, these two methods allow for the creation of light paintings that are highly accurate in both structure and color, in full three-dimensional space. As in a manufacturing environment, an industrial robot replaces the fluid, less precise movements of a human with the highly accurate and controlled motions of a machine. The automated motions of the industrial robot solve the artist's lack of visual feedback while painting in light, by allowing him or her to compose the painting virtually within the software used to instruct the robot and the light attached to it.

How it Works
Industrial Light Painting creates full-color three-dimensional point clouds in real space using an ABB IRB 6640 industrial robot. The point clouds are captured and stored using a Processing script and a Microsoft Kinect camera. The stored depth and RGB color values for each point are then fed into Grasshopper and HAL, plugins for the 3D modeler Rhino. Within Rhino, toolpath commands are created that instruct the robot arm how to move to each location in the point cloud. Custom-written instructions are also added to make use of the robot's built-in low-power digital and analog lines, which run to the end of the arm. This allows precise control of a BlinkM smart LED mounted at the end of the arm along with a Teensy microcontroller.
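To make the last step concrete, here is an illustrative Teensy sketch (not the project's actual firmware): it watches one of the robot's low-power digital lines and sets the BlinkM's color over I2C. The pin numbers and the idea of carrying the point's color on the analog lines are assumptions; the BlinkM I2C commands are from its public datasheet.

    // Illustrative only: gate a BlinkM from a robot I/O line via a Teensy.
    #include <Wire.h>

    const int BLINKM_ADDR = 0x09;   // BlinkM factory-default I2C address
    const int TRIGGER_PIN = 2;      // robot digital line (assumed wiring)

    void setBlinkM(byte r, byte g, byte b) {
      Wire.beginTransmission(BLINKM_ADDR);
      Wire.write('n');              // "go to RGB color now" command
      Wire.write(r);
      Wire.write(g);
      Wire.write(b);
      Wire.endTransmission();
    }

    void setup() {
      pinMode(TRIGGER_PIN, INPUT);
      Wire.begin();

      // Stop any script stored on the BlinkM so it only shows commanded colors.
      Wire.beginTransmission(BLINKM_ADDR);
      Wire.write('o');
      Wire.endTransmission();
      setBlinkM(0, 0, 0);           // start dark
    }

    void loop() {
      // Assumed protocol: the robot raises the line at each point of the cloud,
      // and analog lines A0-A2 carry that point's color scaled to 0-1023.
      if (digitalRead(TRIGGER_PIN) == HIGH) {
        setBlinkM(analogRead(A0) >> 2, analogRead(A1) >> 2, analogRead(A2) >> 2);
      } else {
        setBlinkM(0, 0, 0);         // LED off while moving between points
      }
    }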

Using DSLR cameras set for long exposures, the commanded robot movements, combined with precise control of the LED, recreate colored point clouds of approximately 5,000 points within about 25 minutes.

Result Photos

GIFS!

Process Photos

About the Creators
Jeff Crossman is a master’s student at Carnegie Mellon University studying human-computer interaction. He is a software engineer turned designer who is interested in moving computing out of the confines of a screen and into the physical world.
www.jeffcrossman.com

Kevyn McPhail is an undergraduate student at Carnegie Mellon University studying architecture. He concentrates heavily on fabrication, crafting objects in a variety of media and pushing the limits of the latest CNC machines, laser cutters, 3D printers, and industrial robots.
www.kevynmc.com

Special Thanks To
Golan Levin for concept development support, equipment, and software.
Carnegie Mellon Digital Fabrication Lab for providing access to its industrial robots.
Carnegie Mellon Art Fabrication Studio for microcontroller and other electronic components.
ThingM for providing BlinkM ultra-bright LEDs.

Additionally the creators would like to thank the following people for their help and support during the making of this project: Mike Jeffers, Tony Zhang, Clara Lee, Feyisope Quadri, Chris Ball, Samuel Sanders, Lauren Krupsaw

Haris Usmani

14 May 2014


Tagline
ofxCorrectPerspective: Makes parallel lines parallel, an OF addon for automatic 2D rectification

Abstract
ofxCorrectPerspective is an openFrameworks add-on that performs automatic 2D rectification of images. It is based on the work described in "Shape from Angle Regularity" by Zaheer et al., ECCV 2012. Unlike previous methods of perspective correction, it does not require any user input (provided the image has EXIF data). Instead, it relies on the geometric constraint of 'angle regularity', leveraging the fact that man-made designs are dominated by 90-degree angles. It solves for the camera tilt and pan that maximize the number of right angles, yielding the fronto-parallel view of the most dominant plane in the image.


2D image rectification involves finding the homography that maps the current view of an image to its fronto-parallel view. It is usually required as an intermediate step for a number of applications: for example, to create disparity maps from stereo camera images, or to correct projections onto planes that are not orthogonal to the projector. Current techniques for 2D image rectification require the user either to manually input corresponding points between stereo images, or to adjust tilt and pan until the desired image is obtained. ofxCorrectPerspective aims to change all this.

How it Works
ofxCorrectPerspective automatically solves for the fronto-parallel view without requiring any user input (as long as EXIF data is available for the focal length and camera model). Based on the work by Zaheer et al., ofxCorrectPerspective uses angle regularity to rectify images. Angle regularity is a geometric constraint that relies on the fact that in the structures around us (buildings, floors, furniture, etc.), straight lines meet at particular angles, predominantly 90 degrees. If we know which pairs of lines meet at this angle, we can use the distortion of this angle under projection as a constraint and solve for the camera tilt and pan that yield the fronto-parallel view of the image.
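For concreteness, once a tilt and pan are known, the rectifying homography is the camera intrinsics conjugating the corresponding rotation, H = K R K⁻¹, and warping by H gives the fronto-parallel view. The OpenCV sketch below is not the addon's code; the focal length and principal point are placeholders for values derived from EXIF.

    // Sketch of the rectification step, assuming tilt and pan are already known.
    #include <opencv2/opencv.hpp>
    #include <cmath>

    cv::Mat rectifyingHomography(double tiltRad, double panRad,
                                 double f, double cx, double cy) {
        cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, cx,
                                               0, f, cy,
                                               0, 0, 1);
        // Rotation about the x axis (tilt) followed by the y axis (pan).
        cv::Mat Rx = (cv::Mat_<double>(3, 3) <<
            1, 0, 0,
            0, std::cos(tiltRad), -std::sin(tiltRad),
            0, std::sin(tiltRad),  std::cos(tiltRad));
        cv::Mat Ry = (cv::Mat_<double>(3, 3) <<
            std::cos(panRad), 0, std::sin(panRad),
            0, 1, 0,
           -std::sin(panRad), 0, std::cos(panRad));
        return K * Ry * Rx * K.inv();   // H = K * R * K^-1
    }

    int main() {
        cv::Mat img = cv::imread("input.jpg");
        double f = 0.8 * img.cols;      // placeholder focal length, in pixels
        cv::Mat H = rectifyingHomography(0.15, -0.05, f, img.cols / 2.0, img.rows / 2.0);
        cv::Mat out;
        cv::warpPerspective(img, out, H, img.size());   // fronto-parallel view
        cv::imwrite("rectified.jpg", out);
        return 0;
    }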

To find these pairs of lines, ofxCorrectPerspective starts by detecting lines using LSD (the Line Segment Detector of R. G. von Gioi et al.). It then extends these lines, for robustness against noise, and computes an adjacency matrix that tells us which pairs of lines are probably orthogonal to each other. After finding these candidate pairs, ofxCorrectPerspective uses RANSAC to separate inlying from outlying pairs, where an inlier pair is one that minimizes the distortion of right angles across all candidate pairs. Finally, the best RANSAC solution gives the tilt and pan required for rectification.
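The sketch below makes the "distortion of right angles" measure concrete: it warps the endpoints of each candidate line pair by a hypothesis homography and counts how many pairs land close to 90 degrees. It is not the addon's implementation; a coarse grid search over tilt and pan stands in for the RANSAC-based solve described above, and rectifyingHomography() is the helper from the previous sketch.

    // Scoring tilt/pan hypotheses by angle regularity (illustrative stand-in).
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    cv::Mat rectifyingHomography(double, double, double, double, double); // from previous sketch

    struct LinePair { cv::Point2f a0, a1, b0, b1; };   // two segments, by endpoints

    // |cos(angle)| between the two warped segments: 0 when they meet at 90 degrees.
    double rightAngleError(const LinePair& p, const cv::Mat& H) {
        std::vector<cv::Point2f> in = {p.a0, p.a1, p.b0, p.b1}, out;
        cv::perspectiveTransform(in, out, H);
        cv::Point2f da = out[1] - out[0], db = out[3] - out[2];
        double denom = cv::norm(da) * cv::norm(db) + 1e-9;
        return std::abs(da.dot(db)) / denom;
    }

    // Count "inliers": pairs whose warped angle is within ~5 degrees of a right angle.
    int countInliers(const std::vector<LinePair>& pairs, const cv::Mat& H) {
        int n = 0;
        for (const auto& p : pairs)
            if (rightAngleError(p, H) < std::cos(85.0 * CV_PI / 180.0)) n++;
        return n;
    }

    // Search for the tilt/pan that maximizes the number of right angles.
    cv::Mat bestRectification(const std::vector<LinePair>& pairs,
                              double f, double cx, double cy) {
        cv::Mat bestH = cv::Mat::eye(3, 3, CV_64F);
        int best = -1;
        for (double tilt = -0.6; tilt <= 0.6; tilt += 0.02) {
            for (double pan = -0.6; pan <= 0.6; pan += 0.02) {
                cv::Mat H = rectifyingHomography(tilt, pan, f, cx, cy);
                int inliers = countInliers(pairs, H);
                if (inliers > best) { best = inliers; bestH = H; }
            }
        }
        return bestH;
    }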


Possible Applications
ofxCorrectPerspective can be used on photos much as you would use a tilt-shift lens. It can compute the rectifying homography for a stereo image, speeding up the computation of disparity maps. This homography can also be used to correct an image projected by a projector that is not orthogonal to the screen. ofxCorrectPerspective can very robustly remove perspective from planar images, such as a paper document photographed with a phone camera. It also produces some interesting artifacts: for example, it renders a camera tilt or pan as a zoom (as shown in the demo video).


Limitations & Future Work
ofxCorrectPerspective works best on images that have a dominant plane with a set of lines or patterns on it. It also works on multi-planar images, but usually ends up rectifying only one of the visible planes, since angle regularity is a local constraint. One way to customize this would be to apply some form of segmentation to the image before running it through the add-on (as done by Zaheer et al.). Another would be to let the user select a particular area of the image as the plane to be rectified.

About the Author
Haris Usmani is a grad student in the M.S. Music & Technology program at Carnegie Mellon University. He did his undergraduate degree in Electrical Engineering at LUMS, Pakistan. In his senior year at LUMS, he worked at the CV Lab, where he came across this publication.
www.harisusmani.com

Special Thanks To
Golan Levin
Aamer Zaheer
Muhammad Ahmed Riaz

Chanamon Ratanalert

14 May 2014


Tweet: Tilt, shake, and discover environments with PlayPlanet, an interactive app for the iPad

Overview:
PlayPlanet is a web (Chrome) application made for the iPad, designed for users to interact with it in ways other than the traditional touch method. PlayPlanet offers a variety of interactive environments for users to choose from and explore. Users tilt and shake the iPad to discover reactions in each biome. The app was created so that users must trigger events in each biome themselves, unfolding the world around them through their own actions.

Go to PlayPlanet
PlayPlanet Github

My initial idea had been to create an interactive children's book. Now, you may think that idea is pretty different from my final product. And you're right. But PlayPlanet is much more faithful to the sprout of an idea that first led to the children's book concept. What I ultimately wanted to create was an experience that users unfold for themselves: nothing triggered by a computer system, just pure user input directed into the iPad via shakes and tilts to create a response.

After many helpful critiques and consultations with Golan and peers (muchas gracias to Kevyn, Andrew, Joel, Jeff, and Celine in particular), I landed on the idea of interactive environments. What could be more direct than a shakable world that flips and turns at every move of the iPad? With this idea in hand, I had to make sure it grew into a better project than the book had been shaping up to be.

The issue with my book was that it had been too static, too humdrum. Nothing was surprising, or too interesting, for that matter. I needed the biomes to be exploratory, discoverable, and all-in-all fun to play with. That is where the balance between what was already moving on the screen and what could be moved came into play. The environments had to be interesting on their own while the iPad was still, yet just mundane enough that the user would want to explore more and uncover what else the biome contained. This curiosity leads the user to unleash those secrets through physical movement of the iPad.

After many hours behind a sketchbook, Illustrator, and code, this is my final result. I’m playing it pretty fast and loose with the word “final” here, because while it is what I am submitting as a final product for the IACD capstone project, this project has a much greater potential. I hope to continue to expand PlayPlanet, creating more biomes, features, and interactions that the user can explore. Nevertheless, I am proud of the result I’ve achieved and am thankful to have had the experience with this project and this studio.

Emily Danchik

13 May 2014


Computationally generating raps out of TED talks.

About

TEDraps is a project by Andrew Sweet and Emily Danchik.
We have developed a system which allows for the creation of computationally-generated, human-assisted raps from TED talks. Sentences from a 100GB corpus of talks are analyzed for syllables and rhyme, and are paired with similar sentences. The database can also be queried for sentences with certain keywords, generating a rap with a consistent theme. After the sentences for the rap are chosen, we manually clip the video segments, then have the system squash or stretch them into the beat of the backing track.

Text generation

We scraped the TED website to generate a database of over 100GB of TED talk videos and transcripts. We chose to focus on TED talks because most of them have an annotated transcript with approximate start and end points for each phrase spoken.

Once the phrases were in the database, we could query for phrases that included keywords. For example, here is the result of a query for swear words:
Here you can see that even on the elevated stage, we have many swearers.

Here is another example of a query, instead looking for all phrases that start with “I”:
Here we can see what TEDTalkers are.

Using NLTK, we were able to analyze the corpus based on the number of syllables per phrase and the rhymability of phrases. For example, here is a result of several phrases that rhyme with “bet”:

to the budget
to addressing this segment
of the market
in order to pay back the debt
that the two parties never met
I was asked that I speak a little bit
Then the question is what do you get
And so this is one way to bet
in order to pay back the debt

Later, we modified the algorithm to match up phrases which rhymed and had a similar number of syllables, effectively generating verses for the rap. We then removed the sentences that we felt didn’t make the cut, and proceeded to the audio generation step.
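The pairing step can be outlined roughly as: bucket phrases by how they end, then pair phrases in the same bucket whose syllable counts are close. The project did this with NLTK's pronunciation data; the toy sketch below substitutes a crude letter-suffix rhyme key and a vowel-run syllable estimate, so it only illustrates the shape of the logic.

    // Toy outline of verse pairing (naive heuristics stand in for NLTK).
    #include <algorithm>
    #include <cctype>
    #include <cstdlib>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Rough syllable count: number of vowel runs in the phrase.
    int roughSyllables(const std::string& phrase) {
        int count = 0;
        bool inVowelRun = false;
        for (char c : phrase) {
            bool vowel = std::string("aeiouy").find(
                std::tolower(static_cast<unsigned char>(c))) != std::string::npos;
            if (vowel && !inVowelRun) count++;
            inVowelRun = vowel;
        }
        return std::max(count, 1);
    }

    // Crude rhyme key: the last two letters of the phrase.
    std::string rhymeKey(const std::string& phrase) {
        std::string letters;
        for (char c : phrase)
            if (std::isalpha(static_cast<unsigned char>(c)))
                letters += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        return letters.size() > 2 ? letters.substr(letters.size() - 2) : letters;
    }

    int main() {
        std::vector<std::string> phrases = {
            "in order to pay back the debt",
            "that the two parties never met",
            "And so this is one way to bet",
            "to addressing this segment"};

        std::map<std::string, std::vector<std::string>> buckets;
        for (const auto& p : phrases) buckets[rhymeKey(p)].push_back(p);

        // Pair phrases sharing a rhyme key whose syllable counts differ by <= 2.
        for (const auto& [key, group] : buckets)
            for (size_t i = 0; i + 1 < group.size(); i += 2)
                if (std::abs(roughSyllables(group[i]) - roughSyllables(group[i + 1])) <= 2)
                    std::cout << group[i] << " /\n" << group[i + 1] << "\n\n";
        return 0;
    }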

Audio generation

Once we identified the phrases that would make up the rap, we manually isolated the part of each video corresponding to each phrase. This had to be done by hand because the TED timestamps were not perfectly accurate, and because computational linguistics research has yet to produce a fully accurate automated method for segmenting continuous speech.

Once we had each phrase on its own, we applied an algorithm to squash or stretch the segment to the beats per minute of the backing track. For example, here is a segment from Canadian astronaut Chris Hadfield’s TED talk:
First, the original clip:

Second, the clip again, stretched slightly to fit the backing track we wanted:

Finally, we placed the phrase on top of the backing track:

We did not need to perform additional editing for the speech track, because people tend to speak in a rhythmic pattern on their own. By adding ~15 rhyming couplets together over the backing track, we ended up with a believable rap.

The Digital Prophet

The Digital Prophet is a rendition of Gibran Khalil Gibran's “The Prophet” as told by the Internet, composed of tweets, images, wiki snippets, and Mechanical Turk input.


The PDF is available at : http://secure-tundra-7963.herokuapp.com/

A new version can be generated at :  http://secure-tundra-7963.herokuapp.com/generate


The Digital Prophet is a generative essay collection and a study of internet anthropology: the relationship between humans and the internet. Using random access to various parts of the Internet, is it possible to gain a sufficient understanding of a body of knowledge? To explore this question, random blurbs of data related to facets of life such as love, death, and children are collected via stochastic processes from various corners of the internet. The book is augmented with images and drawings collected from internet communities. In a way, the internet communities, and by extension the Internet itself, become the book's autobiographical author. The process and content of this work are a tribute to the philosopher Gibran Khalil Gibran's The Prophet.

Generative processes

The Digital Prophet is a story told by The Internet, an autobiographical author composed of various generative stochastic processes that pull data from different parts of the Internet. The author is an amalgam of content from digital communities such as Twitter, Wikipedia, Flickr and Mechanical Turk.

As it is not (yet) possible to ask The Internet a question directly, each community played a role in two parts: first as a fountain of new data, and second as a filter of the internet's raw form through the categorization and annotation of that data. In effect, by peering through the lens of an internet community, we can extract answers to questions about all facets of life.

By asking personal questions of Mechanical Turk, a community of ephemeral worker processes, we obtain deep and meaningful answers. Prompts like `What do you think death is?`, `Write a letter to your ex-lover`, and `Draw what you think pain is` bring out stories of love, cancer, family, religion, molestation, and a myriad of other topics.

Twitter, on the other hand, is a firehose of raw information. Posts had to be parsed and filtered using natural language processing to find the tweets with the most meaningful content, and sentiment analysis was used to select those carrying the strongest emotional charge.


Flickr data corresponds to a random stream of user contributed images. In many cases, drawing a random image related to a certain topic and juxtaposing it with other content serendipitously creates entirely different stories.


Wikipedia articles have an air of authority: they are narrated by thousands of different authors and voices, converging into a single (although temporary) opinion. Compared with and mixed into the other content, they provide a cohesive, descriptive tone to the narrative.

The system

The author is generated through API calls that gather, sort, and sift through the internet to get the newest, most complete answer to the question `What is life?`. When accessing the project's website, a user sets this process in motion and generates both a unique author and a unique book each time. The book is timestamped with the date it was generated and dedicated to a random user (a random part) of The Internet. Hitting refresh disappears this author, as no book can be generated in the same way again.

Screen Shot 2014-05-11 at 1.50.54 PM

Technology used

  • Flickr API
  • Twitter Search API
  • Mechanical Turk (Java command line tools)
  • Wikipedia REST API
  • Node.JS
  • npm ‘natural’ (NLP) module
  • wkhtmltopdf
  • Hosted on Heroku

Kevan Loney

13 May 2014


Tweet:

Witness THE ENCOUNTER of two complex creatures. Teaching and learning causes a relationship to bloom.

Abstract:

“The Encounter” tells the tale of two complex creatures. A human and an industrial robotic arm meet for the first time and engage in an unexpected, playful interaction, teaching and learning from each other. This story represents how we, as a society, have grown to learn from and challenge technology as we move forward in time. Using TouchDesigner, RAPID, and the Microsoft Kinect, we devised a simple theatrical narrative through experimentation and collaboration.

Narrative:

During the planning stages of the capstone project, Golan Levin introduced us, the authors of the piece, to one another, saying that we could benefit from working together. Mauricio, from Tangible Interaction Design, wanted to explore the use of the robotic arm combined with mobile projections, while Kevan, from the School of Drama VMD program, wanted to explore interactive theatre. This pairing ended with the two of us exploring and devising a performative piece from the ground up. In the beginning, we had no idea what story we wanted to tell or what we wanted it to look like; we just knew a few things we wanted to explore. We ultimately decided to let our tests and experiments drive the piece and let the story form naturally from the play between us, the robot, the tools, and the actress, Olivia Brown.

We tried out many different things and built various testing tools, such as a way to send Kinect data from a custom TouchDesigner script through a TCP/IP connection so that RAPID could read it and move the robot in sync with an actor. This experiment proved to be a valuable tool and research experience. A Processing sketch was also used during the testing phases to evaluate some of the robot's movement capabilities and responsiveness. Although we ended up dropping live tracking from the final performance in favor of cue-based triggering, the idea of the robot moving and responding to the human's movement ultimately drove the narrative of the final piece.
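As a standalone illustration of the bridge idea only (the real version was a TouchDesigner script talking to a RAPID socket, and is not reproduced here), a sender could stream one tracked position per frame as a plain text line over TCP; the port number, the controller address, and the "x y z" message format below are assumptions.

    // Illustrative TCP sender: stream a tracked point to a robot controller.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in robot{};
        robot.sin_family = AF_INET;
        robot.sin_port = htons(1025);                          // assumed listening port on the RAPID side
        inet_pton(AF_INET, "192.168.125.1", &robot.sin_addr);  // assumed controller address
        if (connect(sock, reinterpret_cast<sockaddr*>(&robot), sizeof(robot)) != 0) {
            perror("connect");
            return 1;
        }

        // In the installation this would be the Kinect hand joint, sent every frame;
        // here a fixed point stands in for that data (millimetres, robot frame).
        float x = 800.0f, y = 0.0f, z = 1100.0f;
        char msg[64];
        int len = std::snprintf(msg, sizeof(msg), "%.1f %.1f %.1f\n", x, y, z);
        send(sock, msg, len, 0);

        close(sock);
        return 0;
    }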

We had planned many different things, such as using the entire dFAB (dfabserver.arc.cmu.edu) lab space as the setting for the piece. This would have required multiple projector outputs and rigging, further planning, extra safety precautions for our actress, and so on. Little did we realize that this would have been trouble for us given the quickly shortening deadline. Four days before we were supposed to give our performance, an unexpected visitor arrived with a few words of wisdom for us: "Can a robot make shadow puppets?" Yes, Golan's own son's curiosity was the jumping-off point for our project to come together, not only visually but also realistically for the time we had. From that moment on we set out to create a theatrical, interactive robotic shadow performance. The four days were spent researching the best way to set up a makeshift stage in the small confines of dFAB and finishing the performance as strongly as we could. To build the stage, we rigged Unistrut (lined with Velcro) along the pipes of the ceiling. From there we took a Velcro-lined RP screen from the VMD department and attached it to the Unistrut and two Autopoles for support. This created a sturdy, clean-looking projection and shadow surface in the space.

Conceptually, we used two different light sources for the narrative: an ellipsoidal stage light and a 6,000-lumen Panasonic standard-lens projector. In the beginning of the piece, the stage light casts the shadows. Through experimenting with light sources, we found that this type of light gave the "classic" shadow-puppet atmosphere, a vintage vibe of the yesteryears before technology. As the story progresses and the robot turns on, the stage light flickers off while the digital projector takes over as the main source of light. This transition is meant to show the evolution of technology being introduced to our society while expressing the contrast between analog and digital.

The concept for the digital world was that it would react to both the robot's and the human's inputs. For example, the human brings an organic life to the world; this is represented by fluctuations of the color spectrum brought upon the cold, muted world of the robot. Since this is the robot's world, as it moves around the space and begins to make more intricate maneuvers, the space responds like a machine: boxes push in and out of the wall, as if the environment is alive and part of the machine. The colors of the world are controlled entirely by the actress during the performance, while the boxes were cue-based, using noise patterns to control the speed of their movements. If there is one thing that would be great to expand on, it would be sending the robot's data back to TouchDesigner so that it could control the boxes' movements live, instead of their being cued by an operator.

For the robot's movements, we ended up making them cue-based for the performance. This was done directly in RAPID code, cued from an ABB controller "teach pendant," a touch- and joystick-based interface for controlling ABB industrial arms. We gave the robot pre-planned coordinates and movement operations based on rehearsals with the actress. One thing we would love to expand further is recording the actress's movements so that we can play them back live instead of predefining them in a staged setting. In general, we would love to make the whole play between the robot and the human more improvised rather than staged. Yet robotic motion proved to be the main challenge to overcome, since the multitude of axes (hence possible motions) of the machine we used (6 axes) makes it very easy to inadvertently reach positions where the robot is "tangled" and will throw an error and stop. It was interesting to learn, especially while working on this narrative, that these robotic arms and their programming workflows do not provide any notion of "natural motion" (the embodied intelligence humans have when moving their limbs in graceful and efficient ways, without becoming blocked by themselves, for example), and are definitely not targeted toward real-time input. These robots are not meant to interact with humans, which was our challenge both technically and narratively.

In the end we created a narrative theatrical performance that we feel proud of. One that was created through much discussion and experimenting/play in the dFAB lab. There is much potential for more in this concept, and we hope to maybe explore it further one day!


Github:

https://github.com/maurothesandman/IACD-Robot-performance

https://github.com/kdloney/TouchDesigner_RoboticPerformance_IACD

***The TouchDesigner GitHub link currently contains the old file used during rehearsals. The final file will be uploaded at a later date due to travel and incomplete file transfers.

 SPECIAL THANK YOU:

Golan Levin, Larry Shea, Buzz Miller, CMU dFAB, CMU VMD, Mike Jeffers, Almeda Beynon, Anthony Stultz, Olivia Brown, and everyone that was involved with Spring 2014 IACD!

Best,

Kevan Loney

Mauricio Contreras

Spencer Barton

12 May 2014


Young readers bring storybook characters to life through the Looking Glass.

Looking Glass explores augmented storytelling. The reader guides the Looking Glass over the pages in a picture book and animations appear on the display at set points on the page. These whimsical animations bring characters to life and enable writers to add interactive content.

I was inspired to create this project after seeing the OLED display for the first time. I saw the display as a looking glass through which I could create and uncover hidden stories. Storybooks were an ideal starting point because young readers today are very eager to use technology like tablets and smartphones. Unlike a tablet, however, Looking Glass requires the book, and more importantly requires the reader to engage with the book.

For more technical details please see this prior post.