Daily Archives: 12 May 2014

Collin Burger

12 May 2014

Banner Design by Aderinsola Akintilo


Loop Findr from Collin Burger on Vimeo.

Loop Findr is a tool that automatically finds loops in videos so you can turn them into seamless gifs.

Since their creation in 1987, animated GIFs have become one of the most popular means of expression on the Internet. They have evolved into their own artistic medium due to their ability to capture a particular feeling and the format’s portable nature. Loop Findr seeks to usher in a new era of seamless GIFs created from loops found in the videos that populate the Internet. Loop Findr is a tool that automatically finds these loops so users can turn them into GIFs that can then be shared all over the Web.

The idea for Loop Findr came about during a conversation with Professor Golan Levin about research into pornographic video detection, in which the researchers analyzed the optical flow of videos in order to detect repetitive reciprocal motion. From this conversation emerged the idea of using optical flow to detect and extract repetitive motion in videos, along with its potential for automatically retrieving seamless, nicely looped GIFs.

Professor Levin and I devised an algorithm for detecting loops based on finding periodicity in a sparse sampling of the optical flow of pixels in videos. After doing some research, I was inspired by the pixel difference compression method employed by the GIF file format specification. It became clear to me that for a GIF to appear to loop without any discontinuity, the pixel difference between the first and final frames must be relatively small.

After performing this research, I decided to implement the loop detection by analyzing the percent pixel difference between video frames. This is done by keeping a ring buffer filled with video frames that are resized to sixty-four by sixty-four pixels and converted to greyscale. For each potential start of a loop, the percent pixel difference is calculated for all frames within the acceptable loop-length range; the mean intensity value of the starting frame is subtracted from both the starting frame and each potential ending frame before the comparison. If the percent pixel difference is below the accuracy threshold specified by the user, those frames constitute the beginning and end of a loop. If the percent pixel difference between the first frame of a new loop and the first frame of the previously found loop is within the accuracy threshold, then the one with the greater percent pixel difference is discarded. Additionally, minimum and maximum movement thresholds can be activated and adjusted to disregard video sequences without movement, such as title screens, or parts of the video with discontinuities such as cuts or flashes. The metric used to estimate the amount of movement is similar to the one used to detect loops, but in this case the percent pixel differences are accumulated over all frames in the potential loop.
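The core of the description above can be sketched in Python. The actual tool is written in openFrameworks/C++; the function names here are illustrative, frames are assumed to be flattened greyscale intensity lists, and the mean-intensity normalization step is omitted for brevity.

```python
# Illustrative sketch of Loop Findr's loop test (hypothetical names; the
# real implementation is openFrameworks/C++). A "frame" is a flattened
# 64x64 greyscale image: a list of 0-255 intensity values.

def percent_pixel_diff(frame_a, frame_b):
    """Mean absolute intensity difference, as a percentage of full scale."""
    total = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    return 100.0 * total / (255.0 * len(frame_a))

def find_loop_end(frames, start, min_len, max_len, accuracy):
    """Return the index of the first frame within [min_len, max_len) of
    `start` whose percent pixel difference from the start frame falls
    below the user-specified `accuracy` threshold, or None."""
    for end in range(start + min_len, min(start + max_len, len(frames))):
        if percent_pixel_diff(frames[start], frames[end]) < accuracy:
            return end  # frames[start:end + 1] is a candidate loop
    return None
```

A nearly identical accumulation of `percent_pixel_diff` over every frame in the candidate loop would give the movement metric used for the minimum/maximum movement thresholds.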

There was approximately a forty-eight hour span between deciding to take on the project and having a functioning prototype with the basic loop detection algorithm in place. Therefore, the vast majority of the time spent on development was dedicated to optimization and creating a fully-featured user interface. The galleries below show the progression of the user interface.

This first version of Loop Findr simply displayed the current frame being considered for the start of a loop. Any loops found were appended to the grid at the bottom right of the screen. Most of the major features were in place, including exporting GIFs.

The next iteration added ofxTimeline and the ability to easily navigate to different parts of the video through the graphical interface. The other major addition was the ability to refine found loops by moving either end of a loop forward or backward frame by frame.

In the latest version, the biggest change came with moving the processing of the video frames to an additional thread. The advantage of this was that it kept the user interface responsive at all times. This version also cleaned up the display of the found loops by creating a paginated grid.
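The pattern described, doing the heavy frame processing on a worker thread and handing results back to the UI, can be sketched in Python (the actual app uses an openFrameworks thread; the doubling "work" below is a stand-in for loop detection):

```python
import queue
import threading

# Sketch of the producer/consumer split described above (hypothetical;
# the real app is C++/openFrameworks). The worker pushes results onto a
# thread-safe queue that the UI loop can poll without ever blocking.

results = queue.Queue()

def process_frames(frames):
    """Runs on the worker thread; the UI thread stays responsive."""
    for frame in frames:
        results.put(frame * 2)  # stand-in for per-frame loop detection

worker = threading.Thread(target=process_frames, args=([1, 2, 3],))
worker.start()
worker.join()  # a real UI would poll `results` each frame instead
```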

Future Work:
Rather than focus on improving this openFrameworks implementation of Loop Findr, I will investigate the potential of a web-based version so that it might reach as many people as possible. I envision a website where users could simply supply a YouTube link and have any potential loops extracted and returned to them. Additionally, I would like to combine the algorithm with some web crawling to find loops in video streams on the Internet, or perhaps just scrape popular video-hosting websites for loops.


Andrew Russell

12 May 2014

Beats Tree is an online, collaborative, musical beat creation tool.


The goal of creating Beats Tree was to adapt the idea of an exquisite corpse to musical loops. The first user creates a tree with four empty bars and can record any audio they want in those four bars. Subsequent users then add layers on top of the first track. More and more layers can be added; however, only the previous four layers are played back at any time. These are called “trees” because users can create a new branch at any time: if a user does not like how a certain layer sounds, they can easily record their own layer at that point, ignoring the existing one.


Beats Tree is a collaborative website that allows multiple users to create beats together. Users are restricted to four bars of audio that, when played back, are looped indefinitely. More layers can then be added on top to have multiple instruments playing at the same time. However, only four layers can be played back at once. When more than four layers exist, the playback will browse through different combinations of the layers to give a unique and constantly changing musical experience.

Beats Tree - Annotated Beat Tree

When a tree has enough layers, playback will randomly browse through the tree. When the active layer finishes playing, the playback randomly performs one of four actions: it may repeat the active layer; it may choose one of the layer’s children to play; it may play the layer’s parent; or it may play a random layer from anywhere in the tree. When a layer is played back, its three parent layers, if they exist, are also played back.
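The four-way random walk can be sketched in Python. The tree structure and function names below are assumptions for illustration, not the site's actual code:

```python
import random

# Illustrative sketch of the Beats Tree playback walk. Each layer is a
# four-bar recording that knows its parent and children (hypothetical
# structure; the real site's implementation is not shown in the post).

class Layer:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

def next_layer(current, all_layers):
    """Pick the next layer: repeat, a child, the parent, or any layer."""
    choices = [current]                              # repeat the active layer
    if current.children:
        choices.append(random.choice(current.children))  # descend
    if current.parent:
        choices.append(current.parent)               # ascend
    choices.append(random.choice(all_layers))        # jump anywhere
    return random.choice(choices)
```

During playback, the chosen layer would be mixed with up to three of its ancestors so four layers always sound at once.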

Beats Tree - View Mode

Users can also view and play back a single layer. Instead of randomly moving to a different layer after completion, playback simply loops that single layer again and again, with its parent layers also playing. At this point, if the user likes how the layer sounds, they can record their own layer on top, directly from the browser. The old layer is played back while the new layer is recorded.

Beats Tree - Record Mode

The inspiration for this project came from the idea of an exquisite corpse, in which the first member draws or writes something and then passes it to the next member; this continues until all members are done and the final piece of art emerges. The main inspiration was the Exquisite Forest, a branching exquisite corpse based around animation. Beats Tree is like the Exquisite Forest, but with musical beats layered on top of each other instead of animations displayed over time.




Here are some sketches / rough code done while developing this application.

Beats Tree - Sketch 1

Beats Tree - Sketch 2

Beats Tree - Sketch 3

Beats Tree - Sketch 4

Nastassia Barber

12 May 2014

dancing men

A caricature of your ridiculous interpretive dances!

This is an interactive piece which gives participants a set of strange prompts (e.g. “virus” or “your best friend”) to interpret into dance. At the end, the participant sees a stick figure performing a slightly exaggerated interpretation of their movements. This gives participants a chance to laugh with (and at) their friends, and also to see their movements as an anonymized figure, which removes any sense of embarrassment and often allows people to say “wow, I’m a pretty good dancer!” or at least have a good laugh at their own expense.


Some people dancing to the prompts.



Some screenshots of caricatures in action.

For this project, I really wanted to make people re-examine the way they move, and maybe make fun of them a little. I started with the idea of gait analysis and caricature, but the Kinect was relatively glitchy when recording people turned sideways (the only really good way to record a walk) and has too small a range for a reasonable number of walk cycles to fit in the frame. I eventually switched to dancing, which I still think achieves my objectives because it forces people to move in a way they might normally be too shy to move in public.

After they finish, participants see an anonymous stick figure dancing, and can see the way they move separated from the appearance of their body, which is an interesting new perspective. Each stick-figure dance is kept for the next few dancers, who see previous participants as “back-up” dancers to their own dance. All participants get the same prompts, so it can be interesting to compare everyone’s interpretations of the same words. I purposely chose weird prompts to make people think and be spontaneous: “mountain,” “virus,” “yesterday’s breakfast,” “your best friend,” “fireworks,” “water bottle,” and “alarm clock.” It has been really fun to laugh with friends and strangers who play with my piece, and to see the similarities and differences between different people’s interpretations of the same prompts.
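The post does not say how the exaggeration is computed; one simple approach, shown purely as an illustration, is to push each tracked joint away from the dancer's average pose by a gain greater than one:

```python
# Purely illustrative guess at a "caricature" transform (not the piece's
# actual code): amplify each joint's deviation from the mean pose.

def exaggerate(pose, mean_pose, gain=1.5):
    """Scale each joint's offset from the average pose by `gain` > 1.
    `pose` and `mean_pose` are lists of (x, y) joint positions."""
    return [
        (mx + gain * (x - mx), my + gain * (y - my))
        for (x, y), (mx, my) in zip(pose, mean_pose)
    ]
```

With `gain` slightly above 1, small gestures stay recognizable while big ones become comically bigger.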

Dance Caricature! from Nastassia Barber on Vimeo.

Spencer Barton

12 May 2014

I recently used Amazon’s Mechanical Turk for the quantified selfie project. Mechanical Turk is a crowdsourced marketplace where you submit small tasks for hundreds of people to complete. It is used to tag images, transcribe text, analyze sentiment, and perform other tasks. A requester puts up a HIT (Human Intelligence Task) and offers a small reward for completion. People from all over the world then complete the task (if you priced it right). The result is that large, hard-to-compute tasks are completed quickly, often for far less than minimum wage; Turkers, after all, are choosing to work on your tasks.

Turking is a bit magical. You put HITs up and, a few hours later, a mass of humanity has completed your task. Unless you screw up.

I screwed up a bit, and I learned a few lessons. First, it is essential to keep the task simple: my first HIT had directions asking workers to include newlines, and I got a few emails from Turkers; it appears the newlines were a bit confusing. I also learned that task completion is completely dependent on the price paid. Make sure to pay enough, and look at similar HITs that are currently running to gauge the going rate.
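For readers who want to try this, posting a HIT roughly looks like the sketch below. The question payload uses MTurk's standard ExternalQuestion XML schema; the URL, reward, and task text are placeholders, and the actual `create_hit` call (via boto3's MTurk client) is left commented out since it needs AWS credentials:

```python
# Sketch of posting a HIT. The ExternalQuestion XML schema is MTurk's
# real format; the URL and all HIT parameters below are placeholders.

EXTERNAL_QUESTION = """\
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>{url}</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

def build_question(url):
    """Wrap a task page URL in MTurk's ExternalQuestion XML."""
    return EXTERNAL_QUESTION.format(url=url)

# import boto3
# client = boto3.client("mturk", region_name="us-east-1")
# client.create_hit(
#     Title="Describe this image in one sentence",
#     Description="Short image-description task",
#     Reward="0.05",                    # priced against similar live HITs
#     MaxAssignments=100,
#     AssignmentDurationInSeconds=300,
#     LifetimeInSeconds=86400,
#     Question=build_question("https://example.com/task"),
# )
```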

Spencer Barton

12 May 2014


Young readers bring storybook characters to life through the Looking Glass.

Looking Glass explores augmented storytelling. The reader guides the Looking Glass over the pages in a picture book and animations appear on the display at set points on the page. These whimsical animations bring characters to life and enable writers to add interactive content.

I was inspired to create this project after seeing the OLED display for the first time. I saw the display as a looking glass through which I could create and uncover hidden stories. Storybooks were an ideal starting point because young readers these days are eager to use technology like tablets and smartphones. Unlike a tablet, however, Looking Glass requires the book, and more importantly requires the reader to engage with the book.

For more technical details please see this prior post.

MacKenzie Bates

12 May 2014

Finger Launchpad



Launch your fingertips at your opponent. Think Multi-Touch Air Hockey. A game for MacBooks.



Launch your fingertips at your opponent. Think multi-touch air hockey. Using the MacBook’s touchpad (which is the same as that of an iPad), use up to 11 fingers to try to lower your opponent’s health to 0. Hold a finger on the touchpad for a second; once you lift it, a fingertip is launched in the direction your finger was pointing. When fingertips collide, the bigger one wins. Large fingertips do more damage than small ones; skinny fingertips go faster than wide ones. Engage in the ultimate one-on-one multi-touch battle.
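The three rules above can be sketched as code. This is an illustrative Python sketch of the stated mechanics, not the game's actual implementation; the constants are placeholders:

```python
# Illustrative sketch of Finger Launchpad's stated rules (hypothetical
# names and constants; the real game is a native Mac app).

def launch_speed(width, base=300.0):
    """Skinny fingertips travel faster than wide ones."""
    return base / width

def resolve_collision(size_a, size_b):
    """When two fingertips collide, the bigger one survives."""
    return "a" if size_a > size_b else "b"

def damage(size, scale=2.0):
    """Larger fingertips deal more damage when they land a hit."""
    return scale * size
```

Tying speed and damage to fingertip size creates the core trade-off: a big thumb hits hard but drifts in slowly, while a pinky tip zips across but barely dents the opponent's health.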


Gameplay Video:





In Golan’s IACD studio, he told me all semester that I would get to make a game; then the final project came around and it was time to make one. But what game to make? I was paralyzed by the possibilities, and by the fear that, after a semester of anticipation, I wouldn’t make a game that lived up to my expectations or Golan’s.

After we talked about what I should base the game on, Golan gave me a tool made by a previous IACD student that makes it easy to get the multi-touch interactions that occur on a MacBook trackpad, which meant that I could make a multi-touch game without having to jump through hoops to build it on a mobile device.

So I sat there pondering what game to make with this technology, and the ideas instilled in me by Paolo’s Experimental Game Design: Alternative Interfaces course popped into my mind. If a game is multi-touch, then it should be truly multi-touch at its core: using multiple fingers at once should be central to gameplay. The visuals should be simple and minimalist; there is no need for some random theme that masks the game. The game I came up with serves as a combination of what I have learned from Paolo and Golan. I think it might be the best-designed game I have made yet, and so far it is certainly the one I am most proud of having made.



View/Download Code @: GitHub
Download Game @: MacKenzie Bates’ Website
Download SendMultiTouches @: Duncan Boehle’s Website
Read More About Game @: MacKenzie Bates’ Website

Austin McCasland

12 May 2014


Genetically Modified Tree of Life is an interactive display for the Center for Postnatural History in Pittsburgh. “The PostNatural refers to living organisms that have been altered through processes such as selective breeding or genetic engineering.” [www.postnatural.org]

Model organisms are the building blocks for these genetically modified organisms (GMOs).

This app shows the tree of life ending in every model organism used to make these GMOs, and allows people to select organisms to read the story behind them.



History museums are a fun and interesting avenue for people to experience things which existed long ago. If people want to experience things which have happened more recently, however, there is one outlet: the Center for Postnatural History. “The PostNatural refers to living organisms that have been altered through processes such as selective breeding or genetic engineering.” [www.postnatural.org] Children’s imaginations light up at the prospect of mammoths walking the earth thousands of years ago, or terrifyingly large dinosaurs millions of years before that, but today is no less exciting. Mutants roam the earth, large and small, some ordinary and some fantastic.


Take, for example, the BioSteel goat. These goats have been genetically modified with spider genes so that spider-silk proteins are produced in their milk. The goats are milked, and that milk is processed, yielding large amounts of an incredibly strong fiber, stronger than steel.

The Genetically Modified Tree of Life is an interactive display which I created for the Center for Postnatural History under the advisement of Richard Pell. In its final form, the app will be an interactive touch-screen installation that lets visitors learn more about particular genetically modified organisms in a fun and informative way. It visualizes the tree of life as seen through the perspective of genetically modified organisms, showing the genetic path of every model organism from the root of all life to the modern day in the form of a tree. These model organisms’ genes are what scientists use to create genetically modified organisms, as they are representative of a wide array of genetic diversity. Visitors to the exhibit can drag the tree around, mixing up the branches of the model organisms, and can select individual genetically modified organisms from the lower portion of the screen to learn more about them. The organism records are pulled from the Center for Postnatural History’s database. The objective of this piece is to be educational and fun in its active state, and visually attractive in its passive state.



Visualization of the tree of life as seen by GMOs.



Ticha Sethapakdi

12 May 2014


Euphony: A pair of sound sculptures which explore audio-based memories.

“Euphony” is a pair of telephones found in an antique shop in Pittsburgh. Through the use of an Audio Recording / Playback breakout board, a simple circuit, and an Arduino, the phones were transformed into peculiar sculptures which investigate memories in the form of sound. The red phone is exclusively an audio playback device, which plays sound files based on phone number combinations, while the black phone is a playback and recording device. Together, these ‘sound sculptures’ house echoes of the remarkable, the mundane, the absurd, and sometimes even the sublime.

A Longer Narrative
One day I was sifting through old pictures on my phone, and as I looked through them I had a strange feeling of disconnect between myself and the photos. While the photos evoked a sense of nostalgia, I was disappointed that I was unable to re-immerse myself in those points in time. It was then that I realized that a photograph may be a nice way to preserve a certain moment of your life, but it does not allow you to actually ‘relive’ that moment. Afterwards, I tried to think about what medium would be simple yet effective for memory capture while also being immersive. Then I thought, “if visuals fail, why not try sound?”, which prompted me to browse my small collection of audio recordings. As I listened to a particular recording of me and my friends discussing the meaning of humor in a noisy cafeteria, I noticed how close I felt to that memory; it seemed as if I were in 2012 again, a freshman trying to be philosophical with her friends in Resnik cafe and giggling half the time. Thus, I was motivated to make something that allowed people to access audial memories, but in a less conventional way than a dictation machine.

I chose the telephone because it is traditionally a device for accessing whatever is happening in the present. I was interested in breaking that function and transforming the phone into something that echoes back the past. As a result, I made the red phone play recordings that can only be accessed by dialing certain phone numbers, and wrote the ‘available’ numbers in a small phone book for people to flip through. This notion of ‘echoing the past’ was incorporated into the second phone as well, but in a slightly different way: while the first phone (the red one) held the more distant past, the second phone (the black one) kept the more recent past. With the black phone, I wanted to explore the idea of communication and indirect interaction between people. I made it into an audio recording and playback device, which first plays the previous recording made and then records until the user hangs up the telephone. All the recordings have the same structure: a person answers whatever question the previous person asked, and then asks a different question of their own. I really liked the idea of making a chain of people communicating disjointly with each other, and since the Arduino keeps every recording, I was curious to see whether the compiled audio would be not so much a chain as a trainwreck.
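The black phone's play-then-record chain can be sketched as a small state machine. This is an illustrative Python sketch of the behavior only; the actual device is an Arduino driving a VS1053 playback/recording board:

```python
# Illustrative sketch of the black phone's behavior (hypothetical class;
# the real device is an Arduino with a VS1053 breakout). Each pickup
# plays the previous clip, then keeps the caller's new clip in the chain.

class BlackPhone:
    def __init__(self):
        self.recordings = []  # every clip is kept, forming the chain

    def off_hook(self, new_clip):
        """Return the previous clip to play (None if the chain is empty),
        then append the clip recorded until the caller hangs up."""
        previous = self.recordings[-1] if self.recordings else None
        self.recordings.append(new_clip)
        return previous
```

Each caller hears only the immediately previous recording, which is what produces the question-and-answer chain described above.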

People responded very positively to the telephones, especially the second one. To my surprise, there were actually people outside my circle of friends who were interested in the red phone despite it being a more personal project that only had recordings made by me and my family in Thailand. I am also glad that the black telephone was a success and people responded to it in very interesting ways. My only regret was that I was unable to place the phones in their “ideal” environment–a small, quiet room where people can listen to and record audio without any disruptions.

Some feedback:

  • Slightly modify the physical appearance of the phones in a way that succinctly conveys their functions.
  • Golan also suggested that I look into the Asterisk system if I want to explore telephony. I was unable to use it for this project because the phones were so old that, in order to be used as regular phones, they needed to be plugged into special jacks that you can’t find anymore.
  • Provide some feedback to the user to indicate that the device is working. The first phone might have caused confusion for some people because, while they expected to hear a dial tone when they picked up the receiver, they instead heard silence. It also would have been nice to play DTMF frequencies as the user is dialing.
  • Too much thinking power was needed for the black phone because the user had to both answer a question and conceive a question in such a short amount of time. While this may be true, I initially did it that way because I wanted people to feel as if they were in a real conversational situation; conversations can get very awkward and may induce pressure or discomfort in people. When having a conversation, you have to think on your feet in order to come up with something to say in a reasonable amount of time.


Each phone was made with an Arduino Uno and a VS1053 Breakout Board from Adafruit.

Also many thanks to Andre for taking pictures at the exhibition. :)

Github code here.

Kevyn McPhail

12 May 2014


“3D portraits drawn by a light-painting industrial robotic arm.”


My partner Jeff Crossman and I are working on getting an industrial robotic arm to paint a 3D digital scan of a person or object, in full color. The project uses a Kinect to capture a scan of a person, and uses Processing to output the points and their associated pixel colors. From there we use the Grasshopper and HAL plugins for Rhino to generate the point cloud and, subsequently, the robot code. The plugins also let us control the robot’s built-in ability to send digital and analog signals, which we use to pulse a BlinkM LED at the end of the arm at precisely the right time, drawing the point cloud in light.
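The hand-off from the scan to the Rhino pipeline amounts to pairing each scanned point with its color. As a rough illustration (the actual export is a Processing sketch, and the "x y z r g b" line format is an assumption):

```python
# Illustrative sketch of the export step: one "x y z r g b" line per
# scanned point, for the Grasshopper/HAL pipeline to consume. The real
# exporter is written in Processing; this format is an assumption.

def export_points(points, colors):
    """Pair (x, y, z) points with (r, g, b) colors, one line per point."""
    lines = []
    for (x, y, z), (r, g, b) in zip(points, colors):
        lines.append(f"{x} {y} {z} {r} {g} {b}")
    return "\n".join(lines)
```

Each line then becomes one robot target plus the color to flash on the BlinkM when the arm reaches it.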


Here are some of our process photos as well.