Alex Wolfe | 2012 | Looking Outwards 1

by Alex Wolfe @ 7:06 am 24 January 2012


Doggleganger is an app developed by the Pedigree Adoption Drive and NEC. It uses a simple face-recognition system to match dogs in need of a home with the humans who use it.

The Artist is Present

Developed by pippinbar games, The Artist is Present is an old-school Sierra-style recreation of the famous performance piece of the same name by Marina Abramovic. The user can relive the experience, including the incredibly long and frustrating line into the museum.

“Have the experience only a lucky few have ever had! Stare into Marina Abramovic’s eyes! Make of it what you will! Just like art!”


Nando Costa | The New America

Nando Costa is currently producing a “crowd sourced” animation. Each frame of the finished production is being laser-engraved on a separate block of wood, which those who would like to contribute can purchase to help fund the project.


Alex Wolfe | 2012 | Gauntlet

by Alex Wolfe @ 6:53 am

import processing.opengl.*;

/* Alex Wolfe | Gauntlet */

//Globals, adjust to edit nitpicky params
int numCirc = 4;
int circMax = 100;
int circMin = 10;
int minRowSpace = 50;
int maxRowSpace = 100;
int minColSpace = 30;
int maxColSpace;
int minCols = 8;
int maxCols = 20;
int numrows, numcols;
int linewaver = 8;
int[] colSpace;
boolean[][] gridCheck;

int minLines = 10;
int maxLines = 20;

RowLine[] grid;

void setup() {
  size(600, 600, P2D);

  //init colSpace grid
  //(the loop body below is reconstructed; the archived post truncated
  //every "for" loop at its "<" comparison)
  numcols = int(random(minCols, maxCols));
  maxColSpace = width/numcols;
  colSpace = new int[numcols];
  for (int i = 0; i < numcols; i++) {
    colSpace[i] = int(random(minColSpace, maxColSpace));
  }
}

class RowLine {
  PVector[] segPoints;
  int rowStart;
  // RowLine prev, next;

  public RowLine(int rowStart1) {
    rowStart = rowStart1;
    segPoints = new PVector[numcols];
    //the original special-cased rows on the canvas edges here:
    //if ((rowStart == 0) || (rowStart == height)) { ... }
    //but the body was lost when the post was archived
    initLine();
  }

  void initLine() {
    //space the segment points across the row; the loop body is a
    //reconstruction consistent with the colSpace grid built in setup()
    int x, y;
    y = rowStart;
    x = 0;
    for (int i = 0; i < numcols; i++) {
      x += colSpace[i];
      segPoints[i] = new PVector(x, y + random(-linewaver, linewaver));
    }
  }
}

//(the rest of the sketch, including draw(), was truncated in the archive)

Quick Links to our Spring 2011 Kinect Projects

by Golan Levin @ 1:56 pm 26 May 2011

This page presents a quick link index to all of the interactive projects created in our semester course which made use of the new Microsoft Kinect depth-camera. These included a number of 7-week final projects, as well as sixteen small projects from a 2-week Kinect-specific unit.

Kinect-based final projects (May 2011):

Kinect-based small projects (February 2011):


Alex Wolfe | Final Project | Reaction Diffusion Textiles

by Alex Wolfe @ 4:17 pm 12 May 2011


Knitting is often dismissed as a way to pass time for the excessively hip, or the excessively old, and its potential for digital fabrication ignored. But really, at its most basic level, knitting is an extremely robust method of transforming one continuous line into a three-dimensional flexible form, incorporating color pattern, texture, and volume seamlessly.

For this project I wanted to explore knitting as a way to fabricate textiles based off of generative systems, choosing to focus on Reaction Diffusion.



Although digitally controlled generative patterns don’t really exist yet, hand knitters have for ages been applying mathematical principles to produce not only interesting color patterns, but also form and texture.

(from left to right, machine knit generative fair-isle by Becky Stern, hand knit from Sandra Backlund's pool collection, and Cables with Whisky sweater by Lucy Neatbody)

My inspiration for this project largely came from Sandra Backlund, one of my absolute favorite knitwear designers. By hand, and without any particular pattern, she creates beautiful volumes and textures. However, due to the tedious nature of the work, she is only able to produce a handful of pieces each season, each entirely unique and unreplicable. I started wondering if this process of creating knitwear could be digitized and algorithmically generated, like the work I’d done before with flocking.

Becky Stern has also been doing some amazing work hacking knitting machines to print out patterns in fair-isle, and I was delighted to find out that similar texture-producing stitches could be created with the same methodology. However, as anyone who has knitted before knows, just randomly generating stitches, crossing your fingers, and hoping for the best often produces horrible, ill-fitting results. Fortunately, there is a large community of knitters who have been applying mathematics, such as Fibonacci sequences, cellular automata, and Möbius strips, to the craft for ages. Lucy Neatbody was actually one of the first to apply such methodology to texture, using probability to pseudo-randomly cable a sweater.
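As a toy illustration of the hand-knitters’ math mentioned above, stripe widths drawn from the Fibonacci sequence give a band that reads as organic rather than mechanical. This is a plain-Java sketch, not code from the project; the six-stripe count is an arbitrary example.

```java
import java.util.Arrays;

class FibStripes {
    // rows-per-stripe for n stripes, following the Fibonacci sequence
    static int[] stripeWidths(int n) {
        int[] w = new int[n];
        int a = 1, b = 1;
        for (int i = 0; i < n; i++) {
            w[i] = a;
            int next = a + b;
            a = b;
            b = next;
        }
        return w;
    }

    public static void main(String[] args) {
        // rows per stripe for a 6-stripe band
        System.out.println(Arrays.toString(stripeWidths(6))); // [1, 1, 2, 3, 5, 8]
    }
}
```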


Since even the most intricate patterns can be reduced to a very simple set of rules, knitting lends itself well to combining with more traditional generative-system algorithms. After experimenting with several, including Voronoi diagrams and diffusion-limited aggregation, I decided to focus on reaction diffusion, using the Gray-Scott equations.

Reaction diffusion is a chemical reaction that produces a large range of emergent patterns in nature, from fish scales to leopard spots and zebra stripes, whose hosts are often killed so their pelts can be used for textiles. By using it as the basis for a pattern-making application, I liked the potential for creating, and then fabricating, patterns of our own.
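One Gray-Scott update step is small enough to sketch directly. The plain-Java version below runs the standard two-chemical update on a wrap-around grid; the grid size and the diffusion/feed/kill constants are common illustrative values, not the parameters used in the actual Processing or Cinder applications.

```java
class GrayScott {
    static final int N = 64;          // grid side length (illustrative)
    static final double dU = 0.16;    // diffusion rate of chemical U
    static final double dV = 0.08;    // diffusion rate of chemical V
    static final double feed = 0.035; // feed rate F (illustrative)
    static final double kill = 0.065; // kill rate k (illustrative)

    double[][] u = new double[N][N];
    double[][] v = new double[N][N];

    GrayScott() {
        // start with U everywhere and a small square seed of V in the center
        for (int y = 0; y < N; y++)
            for (int x = 0; x < N; x++)
                u[y][x] = 1.0;
        for (int y = N/2 - 3; y < N/2 + 3; y++)
            for (int x = N/2 - 3; x < N/2 + 3; x++)
                v[y][x] = 0.5;
    }

    // discrete 5-point Laplacian with wrap-around edges
    double lap(double[][] g, int x, int y) {
        int xm = (x + N - 1) % N, xp = (x + 1) % N;
        int ym = (y + N - 1) % N, yp = (y + 1) % N;
        return g[y][xm] + g[y][xp] + g[ym][x] + g[yp][x] - 4 * g[y][x];
    }

    void step() {
        double[][] nu = new double[N][N];
        double[][] nv = new double[N][N];
        for (int y = 0; y < N; y++) {
            for (int x = 0; x < N; x++) {
                double uvv = u[y][x] * v[y][x] * v[y][x];
                nu[y][x] = u[y][x] + dU * lap(u, x, y) - uvv + feed * (1 - u[y][x]);
                nv[y][x] = v[y][x] + dV * lap(v, x, y) + uvv - (feed + kill) * v[y][x];
            }
        }
        u = nu;
        v = nv;
    }

    public static void main(String[] args) {
        GrayScott gs = new GrayScott();
        for (int i = 0; i < 100; i++) gs.step();
        // far from the seed, U stays near 1; near the seed, V persists
        System.out.println("corner u after 100 steps: " + gs.u[0][0]);
    }
}
```

Iterating `step()` and rendering `v` as brightness is essentially what the “growing” experiments above do each frame.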

I began experimenting in Processing, using toxiclibs’ excellent simutils library, “growing” reaction diffusion to fit various patterns and forms, and then focusing on creating multiple layers of it in order to produce variety in the pattern.

However, Processing became frustratingly slow and unwieldy, especially when it came to seeding the diffusion off an underlying layer of the reaction. So, following a tutorial by Robert Hodgin and rdex-fluxus on how to recreate reaction diffusion with shaders in Cinder, I developed my final application.

Producing “swatches” of pattern by adjusting the parameters of the equation and through user interaction, I then translated them into knitting patterns by analyzing the colors of the pixels in each row and converting them into a series of cables and eyelets.
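That row-by-row translation can be sketched as a simple thresholding pass: scan one pixel row of the rendered swatch and emit one stitch per pixel. The three-symbol alphabet and the brightness thresholds here are assumptions for illustration, not the exact mapping used for the final swatch.

```java
class StitchRow {
    // map one row of pixel brightnesses (0-255) to stitch codes:
    // dark -> cable 'C', light -> eyelet 'O', otherwise plain knit 'K'
    static char[] toStitches(int[] brightness) {
        char[] row = new char[brightness.length];
        for (int i = 0; i < brightness.length; i++) {
            if (brightness[i] < 85)       row[i] = 'C'; // cable
            else if (brightness[i] > 170) row[i] = 'O'; // eyelet
            else                          row[i] = 'K'; // plain knit
        }
        return row;
    }

    public static void main(String[] args) {
        int[] rowPixels = {20, 120, 200, 90, 250};
        System.out.println(new String(toStitches(rowPixels))); // CKOKO
    }
}
```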

Since plans for using a computer controlled knitting machine fell through (which ended up being a much larger part of the project than I had ever intended), I then hand produced the above pattern on my own machine.

Doing it by hand instead of on a computer-controlled machine took significantly longer (for something rather small), but it allowed me to incorporate certain textural stitches, such as cables, that would have been impossible otherwise. I really like the final product; the places where the pattern produced 3×3 cables actually gave the best results, but those are really difficult to produce on the machine (you have to knit those rows manually since the stitches are pulled too tightly). The 1×1 cables, of which there was an abundance in that middle section, are much more subtle.


Although I am extremely pleased with the final swatch, ideally I wanted this project to operate on a much larger scale. Hopefully, one could have the application and machine set up so that once users finish creating their own swatch and entering their measurements, garments incorporating their unique texture could be printed out immediately, in a few hours rather than the 28-30 it took to produce the swatch. Working with more advanced shaders in Cinder, and learning more than I ever thought possible about knitting machines and how they operate (and also how to fix terrible broken ones purchased from Craigslist!), were also excellent things I learned from this project. I’m very excited to pursue this project further, and am working on making a prototype dress using the algorithm, and modding a machine to print it, over the summer.


TracEMAIL: Final Project

by nkurani @ 3:00 am 10 May 2011

TracEMAIL attempts to better understand the relationships between time, people, and emails. The user can control a slider to view a snapshot of their email traffic on a selected date. This is achieved by using Processing to read Gmail messages via Thunderbird.

I was inspired by projects created by Ben Fry and Fernanda Viegas. Here are some screenshots of visualizations that impacted my final project:

Here are the links of papers and visualization descriptions that I read through to learn more about how others have parsed similar types of data: (5 different email projects)

For my final project, I decided to revisit data visualizations. I felt as if this is an area where I can grow and really benefit from a second attempt. I decided to explore the realm of email data.

At first, I started by using applescript to extract information from my emails. I created a text file aggregating all the information into a single tab delimited table. I also used it to convert the emails into text files.

Since this process took far too long, Golan Levin advised me to instead use Processing to parse the emails faster. He got me started with scraping the .emlx files stored by the Mail application on my MacBook. It was tricky because the format of the information in the .emlx files isn’t consistent from email to email. This slowed down the parsing process; however, I overcame this obstacle by revising my code to handle various formats for the same kind of data, e.g., the date.
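The handle-several-formats fix can be sketched by trying a list of candidate date layouts until one parses. The format strings below are plausible examples of email date headers, not the exact variants found in the .emlx files.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

class DateSniffer {
    // candidate layouts for the same "date" field (illustrative)
    static final String[] FORMATS = {
        "EEE, d MMM yyyy HH:mm:ss Z",
        "d MMM yyyy HH:mm:ss Z",
        "EEE, d MMM yyyy HH:mm:ss"
    };

    // try each format in turn; null means no candidate matched
    static Date parseDate(String raw) {
        for (String f : FORMATS) {
            try {
                return new SimpleDateFormat(f, Locale.US).parse(raw.trim());
            } catch (ParseException e) {
                // fall through to the next candidate format
            }
        }
        return null; // caller can log and skip this email
    }

    public static void main(String[] args) {
        System.out.println(parseDate("Tue, 10 May 2011 03:00:00 -0400") != null); // true
        System.out.println(parseDate("not a date"));                              // null
    }
}
```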

Once I was able to parse the files, I experimented with the visualization. I began by simply drawing a square for each email, making the squares darker where the email frequency was higher, so that emails from frequent senders are highlighted. When you select an email, it is displayed on the screen.
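The darker-for-frequent-senders rule amounts to a linear mapping from a sender’s email count to a grayscale value; the 0-255 range matches Processing’s default grayscale fill. This plain-Java sketch is an assumed reconstruction of that mapping, not the project’s actual code.

```java
class SenderShade {
    // gray value for a square: 255 (white) for count 0, 0 (black) for the
    // most frequent sender (count == maxCount)
    static int shade(int count, int maxCount) {
        if (maxCount == 0) return 255; // no emails yet: draw white
        return 255 - (255 * count) / maxCount;
    }

    public static void main(String[] args) {
        System.out.println(shade(10, 10)); // 0   -> darkest square
        System.out.println(shade(5, 10));  // 128 (integer division)
        System.out.println(shade(0, 10));  // 255 -> white
    }
}
```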

I realized very quickly that this wasn’t the best visualization. I had tons of data, and I was now at the point where I wanted to present it in a meaningful way. I experimented with different ways to present the information and sketched out several ideas to help explore multiple ways to visualize this data. I also read papers for inspiration: the links provided under “BACKGROUND REFERENCES” point to various PDF papers I read to learn how people have handled email data in the past.

In the end, I was most inspired by Ben Fry’s baseball example, the email mountain, and the frequency of emails. I really liked the clean way Ben Fry was able to compare salary vs. performance across baseball teams, and the way Fernanda Viegas was able to visualize a new phase in her life through the creation of email mountains, while also using her email data to see how many emails she was sending to and receiving from specific individuals. These really inspired me to create my final project.

Through the use of tracEMAIL, a user can view how many emails are being sent to and received from a specific individual. You can track this information over time to see how your relationships have changed. This really combines some of the concepts I learned about while searching for inspiration.

Here is an image of my work at the final show:

Here is a screenshot of the visualization:

Here is a short clip filmed at the final show:
[Coming soon! Vimeo/YouTube currently blocked by hotel!]

I feel that this was definitely a successful exploration of email data. The data was very difficult to parse, especially because the .emlx files were so hairy and my computer is so slow! However, I was able to work past that and create a meaningful, interesting visualization. There is definitely still room for growth; in my next iteration, I would like to change the thickness of the connecting lines based on the average length of the emails being sent back and forth. I also need to add an indicator to point out that the names at the top of the list are the most frequent senders/receivers of my email; there was definitely some confusion over that. I’d also like to make the scrolling of the date slider a bit smoother. Overall, I am proud of my attempt and final project.

Here is a link to the code:

Le Wei – Final Project Final Update

by Le Wei @ 7:57 am 25 April 2011

I had a hard time coming up with a concrete concept for my project, so what I have so far is a bit of a hodge-podge of little exercises I did. I wanted to achieve the effect of finger painting with sound, with different paints representing different sounds. However, I’m having a really hard time using the Maximilian library to make sounds that actually sound good and mix well together. So, as proof to myself that some reasonable music can be made, I implemented a little keyboard and stuck it in as well. I think the project would be immensely better with the wireless trackpad, since it’s bigger and you can hold it in your hand, but I haven’t gotten it to work with my program on my computer (although it might on another computer without a built-in trackpad).

So what I did get done was this:

  • Multi-touch, so different sounds can play at the same time. But the finger tracker is kind of imperfect.
  • Picking up different sounds by dipping your finger in a paint bucket.
  • One octave keyboard

And what I desperately need to get done for Thursday:

  • Nicer sounds
  • Nicer looks
  • Getting the magic trackpad working
  • A paper(?) overlay on the trackpad so that it’s easier to see where to touch.


Special Thanks

Nisha Kurani

Ben Gotow

Maya Irvine – Hard Part Solved – Final Project

by mirvine @ 6:56 am

So, there have been some changes to my project, and it’s been a bit of a learning curve to try to touch on everything I want to do, but I feel like I made good progress in the past week and I am fairly confident with how I will progress from here.

After our discussions in class I decided that in order to make the generative work I was envisioning, I really needed to work with a data set. I also liked everyone’s input that it would be interesting to work with the transactional aspect of this project, while keeping in mind the idea of “customizable album art for mass production.”

After a bunch of searching through song databases and looking at a lot of web apps, I landed on Last.FM.

Last.FM is a site that allows people to track what music they are listening to. It also works as an “online personal radio,” suggesting songs based on the user’s past listening. After checking out the API, it seemed like exactly what I needed: a customizable dataset relating to a user’s musical taste. The only problem is that I don’t know anything about XML.

After looking into using XML and getting scared, I found this handy-dandy Java binder that someone made, allowing you to use the Last.FM API from Java. Hooray!

It is rather badly documented, so I spent a lot of time trying to figure out what kinds of classes had been written before my friend Paul showed me the trick of unzipping the jar file and opening it in Xcode.

So far I have been able to retrieve my top artists and a playcount for each of them. I can theoretically retrieve the tags as well, but they seem to be all gobbledygook for some reason, so I need to figure that out. My next step will be to put all this information into a multi-dimensional array so it can be retrieved individually.
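That next step — a multi-dimensional array of artist data — can be sketched as a 2D String array of {artist, playcount} rows with a small lookup helper. The artist names and counts below are placeholders, not real Last.FM results.

```java
class TopArtists {
    // one row per artist: {name, playcount} (placeholder data)
    static String[][] table = {
        {"Artist A", "1203"},
        {"Artist B", "877"},
        {"Artist C", "412"}
    };

    // look up an artist's playcount by name; -1 means not found
    static int playcountOf(String name) {
        for (String[] row : table)
            if (row[0].equals(name)) return Integer.parseInt(row[1]);
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(playcountOf("Artist B")); // 877
        System.out.println(playcountOf("Nobody"));   // -1
    }
}
```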

Next, I got a bit caught up in the idea of keeping this application on the web. That would allow it to be used by anyone and solve the problem of gaining access to someone’s Last.FM account. SO! I asked Max how you do this, and was introduced to the fab world of RUBY! He helped me mock up the login link below. So cool!

Right now, this doesn’t integrate with my processing sketch at all but I hope I will be able to figure that out.
I would really like to take a stab at a web app. I think I could learn a lot from it.

This is a sketch of my plan right now.
The final product will be a design of simple elements generated by each user’s history. The final output will be a PDF that could be applied to many applications, such as making screen-printed shirts.

madMeshMaker :: final update

by Madeline Gannon @ 5:45 am


closed mesh for 3dPrinting

trouble with face normals

flip milling!

Checkpoint 04/25

by Chong Han Chua @ 2:57 am

The previous checkpoint was a reality check, and I scrapped the computer vision project for a continuation of my twingring project.

A short list of things I have to do and where I am:
1. Put on the web
2. Fix bugs
3. Modulate sound to create different voices
4. Do dictionary swaps and replacements of text
5. Switch to real time API and increase filtering options
6. Design and multiple parties

Instead of doing a search, the new option will revolve around looking for hashtags or using a starting message ID. With this, we can prototype a play with multiple actors as well as real time. This would enable twingring to act as a real-life Twitter play of some sort, which should be fun to watch.

On the user interface side, there’ll be some work required to improve the display of messages, the display of users, as well as a way to visualize who is talking and who isn’t. Some other work includes making it robust and possibly port it for iPad (probably not).

To check out the current progress, visit

Three Red Blobs

by ppm @ 2:34 am

I have a Pure Data patch supplying pitch detection to a Java application, which will be drawing and animating fish in a fish tank based on the sounds people make with their voices. These red blobs are the precursors to the fish, where vertical width corresponds to pitch over a one-second recording. I plan to add colors, smooth contours, fins, and googly eyes.
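The pitch-to-shape mapping described here can be sketched as a normalization step: a one-second pitch track (one value per analysis frame) is scaled into vertical widths for the blob’s outline. The vocal range and maximum width below are assumed values, not constants from the actual Pure Data patch.

```java
class BlobWidths {
    // map a pitch track (Hz) into blob widths in pixels, clamping
    // out-of-range pitches to the [minHz, maxHz] window
    static double[] widths(double[] pitchHz, double minHz, double maxHz, double maxWidth) {
        double[] w = new double[pitchHz.length];
        for (int i = 0; i < pitchHz.length; i++) {
            double t = (pitchHz[i] - minHz) / (maxHz - minHz);
            t = Math.max(0, Math.min(1, t)); // clamp to [0, 1]
            w[i] = t * maxWidth;
        }
        return w;
    }

    public static void main(String[] args) {
        double[] track = {100, 200, 300}; // three frames of pitch, in Hz
        double[] w = widths(track, 100, 300, 50);
        System.out.println(w[0] + " " + w[1] + " " + w[2]); // 0.0 25.0 50.0
    }
}
```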

Here is the patch:

I may end up ditching the cell phones. The point of the phone integration was so that many people could interact simultaneously, but now that I’m using Pure Data, which does real-time processing (not exactly what I wanted in the first place), it would be inconvenient to process more than one voice at a time.

Emily Schwartzman – Final Project Update

by ecschwar @ 1:41 am

Final Project Update

Since my last update I was able to get my data finalized and prepped to start working on the visualization. Thanks to Mauricio for helping me out with a PHP script to load all of the lyrics files onto the LIWC site. Below is a snapshot of the final data that I am working with.

I did some initial tests to see how the data mapped out, and then added in the artist names to see where each artist fell on the spectrum.

I also created a full set of comparisons for each possible combination of variables to see where the most significant correlations were. Surprisingly, the metrics seem fairly well correlated across the board. (PDF of Charts)




I was hoping to create one 2D plot of artists that would look at similarity based on lyrics by reducing all of these metrics down to two dimensions, per Golan’s suggestion, but was unable to figure this out. Instead, I decided to build on one of the visualizations I had by adding some interactivity. I reduced the opacity of the artist names so that only the selected artist is highlighted across the board. There are still some issues with speed, though, which might pertain to loading the text/font. Below is a screenshot of what this looks like:

I’m still working on adding another layer of information to the visualization. I’ve collected genre information by accessing the API for the top tag for each artist. I would like to allow the user to select a genre and see whether there are any patterns in where the artists fall based on the genre of music they are classified as. I am also considering integrating an image of the artist, and perhaps other secondary information, when you roll over an artist’s name. If I have any more time, I would like to refine this to the point where I can create a supporting print piece to work with the interactive component. Before creating the final visualization, I want to go through and clean up some of the data further to make sure it is as accurate as possible (some of the lyrics have secondary information that was pulled in when scraped, which could be throwing off the LIWC metrics).


Any suggestions for other ways to visualize this information that would be interesting or take this to the next level?



Are there other layers of information that you think would work well or communicate something interesting about this data (besides genre)?



Any suggestions for how to improve the performance/speed of the interactive visualization?


Charles Doomany: Final Project Concept: UPDATE

by cdoomany @ 10:22 am 20 April 2011


Hard Part Over

by nkurani @ 1:29 am 15 April 2011

Dealing with AppleScript was one of the hardest parts! It really held me back because many times I would be guessing why my code was not pulling the data I wanted or was throwing certain errors. In the end, I realized it wasn’t working because I was using “message i” instead of “item i”. That’s just one example of a problem I had to resolve on my own. Things are finally coming together, so I’m really excited to see what transpires!

Next step? I’ll read my text table of emails with Processing to spit out an initial visualization. Then, when an “email” is clicked, I’ll have the email pop up on screen, since I have created individual text files for each email. Once that is done, I will be able to tweak the visualization to make it fun and insightful at a glance.


by Samia @ 6:11 am 13 April 2011

Thus far, I’ve been building the pieces of my project, and have a somewhat working page generator.
Roadmap sketching:

Half one: generating PDFs! It’s happening! Currently it generates single pages; this may be ideal, however, because I’m going to have to make an action in Photoshop to automatically cut up spreads to be printed (hooray double-sided printing and perfect binding!).

Half two: building components of “visualizations,” using my personal schedule data as a jumping-off point to generate small viz that will be recombined on spreads to create the book. Right now I only have two, and they mostly suck.

So right now, I can generate a rather simple, currently boring book with a user-specified page count.

Now that the framework functions, I need to build out all of the different visualization pieces, as well as rules for combining them, and start to deal with things such as variable page size.

Updates Not So Galore

by Asa Foster @ 12:01 am

So there were speedbumps of many shapes and sizes: large, small, technical, and personal; production just kind of ground to a halt. This prompted us to basically bunker down and focus on our two largest challenges: a) recording a file of Kinect data to use during debugging, and b) nailing down the angle-tracking algorithm for our baseline skeleton-tracking stick-figure program. Caitlin took the reins on the data recorder; I worked on the maff.

The Kinect data recording bit is pretty straightforward and rather uninteresting, so I’ll just summarize: instead of having to get off our ass every time we press compile, we want the program to just play a dummy recording so we can code a LOT more efficiently. We currently have a small snippet of data to access any time we need it.

The math bit is a hair more complicated, but can be summarized by saying that there are quite a few sets of quite a few points, which all need to run through an algorithm quite a few times. The goal is to create a baseline stick-figure program onto which we can build our puppets (or any other future programs, for that matter), and we needed angle data for each of the joints on the skeleton, in three-dimensional space. The initial equation worked to calculate one angle at a time, but calculating all eight simultaneously became a data-structure puzzle. My first instinct was to make arrays to hold these points, and multi-dimensional arrays to hold the sets of points, and then do math with waaaay too many of these suckers. With Golan’s help explaining an object-oriented approach vs. the convoluted arrays I had been using, we are now well on our way to finishing the integration into the stick-figure program.
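The per-joint calculation itself reduces to one small routine: given three tracked 3D points (parent joint, the joint itself, child joint), the angle at the middle point comes from the dot product of the two limb vectors. Computing all eight joints is then just a loop over (parent, joint, child) triples. This plain-Java sketch shows the geometry only; the joint names in the example are illustrative.

```java
class JointAngle {
    // angle in radians at point b, formed by segments b->a and b->c
    static double angleAt(double[] a, double[] b, double[] c) {
        double[] u = {a[0]-b[0], a[1]-b[1], a[2]-b[2]};
        double[] v = {c[0]-b[0], c[1]-b[1], c[2]-b[2]};
        double dot = u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
        double lu = Math.sqrt(u[0]*u[0] + u[1]*u[1] + u[2]*u[2]);
        double lv = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return Math.acos(dot / (lu * lv));
    }

    public static void main(String[] args) {
        // e.g. the elbow angle from shoulder/elbow/wrist positions
        double[] shoulder = {0, 0, 0}, elbow = {1, 0, 0}, wrist = {1, 1, 0};
        System.out.println(Math.toDegrees(angleAt(shoulder, elbow, wrist))); // ~90 degrees
    }
}
```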

In other news, this whole thing is somewhat changed by the fact that we did not get the Tough Art residency at the Pittsburgh Children’s Museum, meaning that we now do not have some floating due date in the future to worry about and thus just want to make this thing WORK. And well. More updates to follow.

Charles Doomany: Final Project Concept- Experimental Musical Instrument

by cdoomany @ 12:48 am 4 April 2011


“Musical instruments come in a wide variety of shapes, sizes, and forms. Most have a long history, sometimes thousands of years, and their basic structure derives in part from the accidental discoveries of early musicians, in part from the properties of the physics of vibrating strings, columns of air, membranes, and reeds. Very little attention has been paid to the ergonomics of the instruments. As a result, they often require awkward body positions, such as the contortion of the left hand required to play the violin and related stringed instruments, and sometimes exert great strain: look at the bulging cheeks of brass players, or the calluses on the fingertips of string players.

I am convinced that if the instruments were introduced today and forced to undergo ergonomic review for health and safety, they would fail.

The piano, for example, is relatively straightforward to understand, but incredibly difficult to master. The learning time is measured in years. Note that there are two parts to learning an instrument. One is the physical mastering of the mechanics itself: how to hold the hands, posture, and breathing. Many instruments require demanding physical exertion or special blowing techniques. Some require different rhythms in each hand simultaneously, and some require use of both hands and feet simultaneously (harp, piano, organ, percussion).”

– Don Norman, Living With Complexity


Design Goal:

Create an experimental digital instrument – the design of the instrument will not be constrained by the acoustic properties of its physical form or by traditional design paradigms, but will be centered on providing an ergonomic and intuitive interface for musical expression.



• Arduino + Processing + MIDI


• Possibly incorporate gesture recognition/ haptic feedback/ parallel screen-based feedback with physical interaction

• Synesthetic associations for notation and aiding performance; utilize multiple modalities



Are there any projects that you have seen that address a similar goal?

Final Project Thoughts

by Max Hawkins @ 9:21 am 30 March 2011


Marynel Vázquez – Final Project Ideas

by Marynel Vázquez @ 8:01 am

Interactive 3D Drawing Generator

The motivation for this project is the same I had for Project 4 ((r)evolve).

Graffiti Analysis v3.0

The idea is that a person holds a paper with a drawing made out of lines and this drawing is then tracked as the person moves the paper in 3D space. The drawing plus the motion generates a 3D model that can be visualized in the computer.



Personal Space Competition

Have you thought about how your personal space changes as your day goes along? This idea involves a small installation driven by transit data (or maybe another type of data if new ideas come to mind).

The installation is composed of a constrained space (a box, for example) and two balloons that compete for this space while inflating. Here’s an example of an inflating balloon controlled with an Arduino:

The amount of space they take depends on the available space in a particular bus in Pittsburgh. Ideally, the project will run with online data (collected by the RERC-APT), but the iPhone app for collecting data was just pushed to the App Store. Most probably this will run with data collected from an experiment done in the last months.

Is there any source of data about how personal space changes between cultures?

Maya Irvine – mini-presentation – Final Project

by mirvine @ 7:55 am


We Be Monsters: An Update

by Asa Foster @ 7:44 am

So because everyone is already familiar with the ins and outs of our project, we’re basically just telling people that a) we have extensive plans for future updates, and b) we need help from all of you with Processing chops to give this a read-through and throw in your two cents on how to improve the mechanics.

For some reason the Prezi embed is failing on me, so here’s the permalink:

We Be Monsters Update


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2019 Interactive Art & Computational Design / Spring 2011 | powered by WordPress with Barecity