Tag Archives: project-0

Project 0

My name is Daniel Vu; I normally go by Daniel or Dan. I am a super senior in the BFA Art program with a concentration in ETB (Electronic and Time-Based work). For the most part I do 3D modeling, with some branching into many other areas of art through coursework. I enjoy playing games, and making them as part of CMU’s Game Creation Society is another thing I like doing.

In regard to this class, I don’t actually have that much confidence in my programming ability, but I will give it my best. I have some minor experience here and there, mainly from classes, so I am familiar with digital art, using code as an art form, and several of the topics in the course. I hope to improve my skills and make something really cool by the end of the semester. As for what I want from this class, I would like to see more interesting interactive work, Kinect or other computer-vision projects, and video-game-related work.

A tiny prototype information-visualization project I did for another class, Environmental Hackfest, is something I call ‘The Top Ten’, written in Processing.

The Top Ten

It is a basic interactive visualization of the ten countries that produce the most CO2 emissions. Each implemented country is highlighted when moused over; clicking one presents additional information and charts.
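
In Processing, mouse-over and click behavior like this boils down to simple hit-testing against each country’s bar. Here is a rough sketch of that logic, written in JavaScript for brevity; the country names, numbers, and layout values are invented for illustration and are not taken from the actual sketch.

```javascript
// Illustrative data: each country gets a clickable bar on the canvas.
// Emission values and coordinates here are placeholders.
const countries = [
  { name: "China", emissions: 10000, x: 20, y: 40, w: 300, h: 30 },
  { name: "USA",   emissions: 5400,  x: 20, y: 80, w: 160, h: 30 },
  // ...the remaining eight countries
];

// Return the country whose bar contains the mouse position, or null.
function countryAt(mx, my) {
  return countries.find(c =>
    mx >= c.x && mx <= c.x + c.w && my >= c.y && my <= c.y + c.h
  ) || null;
}

// On mouse-move, the sketch would highlight countryAt(mouseX, mouseY);
// on click, it would render that country's detail charts.
```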


I haven’t used them yet but:
Twitter: @flock_of_sheep
GitHub: SheepWolf


Afnan Fahim

21 Jan 2014

After graduating with a degree in computer science from Carnegie Mellon’s campus in Qatar (CMUQ), I had the fortune of becoming a fifth year scholar. This meant I was allowed to take a year of courses at Carnegie Mellon’s campus in Pittsburgh while contributing to the Carnegie Mellon community in a meaningful way.

I’m excited about hacker culture and thus spent my senior year organizing the first hackathon held at CMUQ.

Most of my projects and previous experience have been in the domain of mobile web development. I love HTML5 because it works cross-platform and allows me to reach many more people than any other platform would. Recently I’ve been interested in both the performance and the user-experience aspects of detecting touch gestures in mobile web browsers.

This semester I hope to do three things: (1) delve into newer ways of both representing and interacting with information, (2) explore developing technology that serves the purpose of music, not medicine, and (3) have lots of fun developing on new and exciting frameworks!

My twitter account is @AfnanFahim.

You can find my GitHub profile here. If you’re interested, do check out my implementation of Text Rain. It’s built using Processing and was one of the assignments for last year’s IACD course.

As a project for a web-apps course, a friend and I created Historify – a media collection and visualization tool. Historify is a map-based web application that allows you to document the transformation of a city using text, pictures, audio, and video.

The application features a time line that allows you to go “back in time” and click on different parts of the map to see what the area looked like in that year. You can navigate through media for a particular area using page flips.

Adding media to the application is simple – just navigate to a date using the timeline, and drag’n’drop the media onto the area you want it to be associated with. Here’s what it looks like:

Historify Home Screen

Historify Media Browser

Collin Burger

16 Jan 2014

A Piece That I Admire:
Drei Klavierstücke by Cory Arcangel

I enjoy the hacker mentality of combining things that at first glance should not go together and just making it work. I also enjoy works that mimic, mock, and experiment with ’90s pop culture and early Internet culture (see Arcangel’s other work, CLICKISTAN, or Jacob Bakkila’s @Horse_ebooks). Even though this project is not computationally impressive on Arcangel’s part, I think the premise and execution certainly make up for it.

One Work That Surprised Me:
PHYSIS by Fabrica

PHYSIS from Fabrica on Vimeo.
I found PHYSIS to be upsettingly simple and admirable. I wish that I had made this.  I like how the simple combination of actuators and plants can conjure the small animal running through the brush that exists in the common animal and human consciousness.

And One That Disappointed Me:
Botanicus Interactus by Disney Research
This is the beginning of a very interesting project that, unfortunately, falls short. The video alludes to evolving, interactive audiovisual displays attached to plants, but these are not discussed in detail. It really is a novel mode of interacting with an object not known for its interactivity; however, I would like to see it used to control something other than a few sounds or visual displays.

Sama Kanbour

16 Jan 2014

Passionate about giving meaning to data with sexy and elegant visualizations.

Twitter @SamaKanbour
Github samakanbour

For fun, a fellow classmate and I created a web application that gives an overview of people’s emotions on a particular day in Doha, Qatar. These emotions vary among happiness, sorrow, anger, love, and fear. The application is composed of a graph that shows the ratio of each of these feelings. The graph updates itself every 25 seconds by fetching data from Twitter. A musical note is played every time a person posts his/her emotion.
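
The pipeline described above (fetch tweets, classify each into one of the five emotions, recompute the ratios) can be sketched in JavaScript; the keyword lists and function names below are assumptions for illustration, not the actual implementation.

```javascript
// Assumed keyword lists; the real classifier may work differently.
const KEYWORDS = {
  happiness: ["happy", "joy", ":)"],
  sorrow:    ["sad", "miss", ":("],
  anger:     ["angry", "hate"],
  love:      ["love", "<3"],
  fear:      ["afraid", "scared"],
};

// Classify one tweet by the first matching keyword list, or null.
function classify(text) {
  const t = text.toLowerCase();
  for (const [emotion, words] of Object.entries(KEYWORDS)) {
    if (words.some(w => t.includes(w))) return emotion;
  }
  return null;
}

// Turn a batch of tweets into the ratios the graph displays.
function emotionRatios(tweets) {
  const counts = {};
  let total = 0;
  for (const tweet of tweets) {
    const e = classify(tweet);
    if (e) { counts[e] = (counts[e] || 0) + 1; total++; }
  }
  if (!total) return {};
  const ratios = {};
  for (const e in counts) ratios[e] = counts[e] / total;
  return ratios;
}

// In the app, something like setInterval(fetchAndRedraw, 25000)
// would refresh the graph every 25 seconds.
```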


My name is Collin Burger and I am an electrical and computer engineer trying to escape my fate of dwelling in a cubicle.  I am currently pursuing a master’s degree in Electrical and Computer Engineering  at CMU in order to delay my aforementioned fate and perhaps make some entertaining things in the process.  Thematically, I am interested in cultural analytics, people’s relationships with technology, and humorous artworks.
Find me on the Twitter @cyburgee
Find me on the Github at https://github.com/cyburgee

Feelers

Feelers is an interactive installation with game elements in which two participants control the environment with skin-to-skin contact. Participants are attached to the installation and instructed to match colored lights and the frequency of two sine waves by varying the area and pressure of skin contact. Spectators are treated to a voyeuristic display of the players’ actions that possesses the quality of a foreign ritual. Feelers invites participants and spectators to explore each other’s bodies and investigate notions of personal space.

Feelers is funded in part by the Frank-Ratchye Fund for Art at the Frontier.

Feelers on Github

Project 0

Hello! I’m Emily.

I started programming by going to C-MITES weekend workshops at Carnegie Mellon to learn HTML. Fast forward a decade, and I’m a master’s student in human-computer interaction at the same institution.

I’m a recent graduate of the University of Rochester, where I studied computer science, linguistics, and music. In this course, I hope to merge the creativity and lightheartedness of my humanities background with the tech savviness and forward thinking of my science background to create some beautiful, useful things. Maybe some not-so-useful things, too.

A recent project of mine is GestureCam, an Android app that I developed for the Software Structures for User Interfaces course last semester. I take a lot of pictures with my phone, and I get frustrated when I have to navigate through menus to find the setting or filter that I want, especially when I’m trying to capture an image quickly. To solve the problem, I added gesture recognition on top of a custom Android camera application, and set it up to recognize a few gestures. Now, instead of fiddling around to find the flash button, I can simply draw a lightning bolt shape, and instead of searching for a black-and-white option, I can draw a capital B on the screen. Below is a list of the gestures that my app accepts:

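The gesture-to-setting mapping behaves like a small dispatch table: a recognizer names the drawn stroke, and the name triggers a camera action. Here is a speculative sketch of that idea in JavaScript (the real app is an Android application, and the gesture names and camera fields below are hypothetical).

```javascript
// Hypothetical gesture names mapped to camera actions.
const GESTURES = {
  lightning: (camera) => { camera.flash = !camera.flash; },          // lightning bolt toggles flash
  letterB:   (camera) => { camera.filter = "blackAndWhite"; },       // capital B sets B&W
};

// A recognizer would map the drawn stroke to one of the names above;
// dispatch then applies the matching setting, ignoring unknown gestures.
function dispatch(gestureName, camera) {
  const action = GESTURES[gestureName];
  if (action) action(camera);
  return camera;
}
```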
The following is a screenshot of my app running. The majority of the screen is taken up by the camera, not buttons or menus obscuring the image. To change settings, the user draws shapes on the screen. If the user doesn’t know or forgets the available settings, pressing the “help” button will create a popup with instructions. At the time, my camera was facing my laptop, and I took a picture of the presentation I was about to give.

The UI is lacking in style; creating a custom camera app for Android took me way longer than I expected. It’s hard. Someday, I’ll write a screenplay about my struggles.

On a positive note, it worked! Below are some examples of pictures that GestureCam took:

The entire SSUI class, looking surprisingly photogenic.

Our classmate, preparing for his presentation.

Does this guy look familiar?

Paul Peng

15 Jan 2014

My name is Paul Peng and I am a sophomore in the Fine Arts program at Carnegie Mellon University. Next year I plan on being a junior in the Computer Science and Arts program at Carnegie Mellon University. I like to draw and program things. For this class I plan to draw and program things.

Last semester I made a depressing chatterbot for one of my studio classes. It prints out vaguely melancholy statements in its chat window every 3-8 seconds. To the right is another chat window for the viewer to respond to the bot, but this chat window is greyed out, leaving the chatterbot to endlessly talk alone, unsure of whether there is anyone who cares or is listening at all. It doesn’t actually feel these things because it is a chatterbot.
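
Hand-rolled generative text like this usually means filling templates from word lists on a timer. Below is a hypothetical JavaScript sketch of that approach; the phrases and names are invented for illustration, not taken from the piece.

```javascript
// Invented word lists; the actual bot's vocabulary differs.
const openers  = ["i guess", "sometimes", "maybe"];
const feelings = [
  "nobody is listening",
  "it doesn't matter",
  "i'm just talking to myself",
];

function randomFrom(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Compose one vaguely melancholy statement from the templates.
function melancholyLine() {
  return `${randomFrom(openers)} ${randomFrom(feelings)}`;
}

// The bot posts a line at a random interval between 3 and 8 seconds.
function nextDelayMs() {
  return 3000 + Math.random() * 5000;
}
```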

Chatterbot screenshots

I didn’t use any generative text/literature toolkits for this, which I should have, because coding it would have been much less annoying and would have let me create a greater variety of sentence structures for the chatterbot to spit out. It’s still pretty nice, though.

twit git

Andrew Russell

15 Jan 2014

Welcome to my post.

I am a master’s student in Music and Technology, which is a half music, half CS, and half ECE degree. As such, I am very interested in computers, both hardware and software, as well as music. I started programming over ten years ago and cannot even remember when I played my first song. I also compose my own music and like to tinker with guitar pedals.

My interests don’t stop with music and computers, though. I love sports (it doesn’t matter which), craft beers (the hoppier the better!), and gaming (of both the video and the board variety).

Second Screen

All engineering students at the University of Waterloo are required to complete an upper-year design project during their last year and a half at school, in groups of three to five members. This project is supposed to be an actual product that the students could theoretically start a company around after they graduate (and quite a number do). My team worked on Second Screen.

Second Screen is a TV-buddy application designed to enhance your experience of watching TV shows. Upon opening, it listens through your phone’s microphone for up to 30 seconds and, using acoustic fingerprinting, figures out which TV show and episode you are watching, as well as your current position in the episode. It then displays information in real time as the show goes on, such as relevant plot points, show trivia, first appearances by actors, and friends’ comments. There is also a list of dialogue shown as it is spoken.
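
Once the fingerprint match pins down the episode and offset, keeping the companion info in sync is mostly clock arithmetic. This is a speculative sketch of that step, not Second Screen’s actual code; the function names and data shapes are assumptions.

```javascript
// Fingerprinting yields the playback position at the moment of the match;
// afterward the phone just keeps a clock running.
function positionNow(matchedOffsetSec, matchedAtMs, nowMs) {
  return matchedOffsetSec + (nowMs - matchedAtMs) / 1000;
}

// Reveal annotations (trivia, plot points, dialogue lines) whose
// timestamps have already passed the current playback position.
function dueAnnotations(annotations, positionSec) {
  // annotations: [{ atSec, text }]
  return annotations.filter(a => a.atSec <= positionSec);
}
```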

Second Screen Workflow

Andrew Munn, Fravic Fernando, Noah Sugarman, Will Hughes


ajrussell.ca (redesign coming soon)
My latest video:

p0 : Joel Simon

  1. Hello everyone, I am Joel. I already know a handful of you and look forward to getting to know the rest. I make an eclectic group of things in my free time, including lamps, video games, robotics, and figurative sculpture. I think a lot of my software projects have lacked a certain amount of visual polish; in IACD I hope to take my ideas to the next level of finesse and maybe even create some lasting open-source projects.
  2. https://twitter.com/JoelSSimon
  3. https://github.com/Sloth6

The SmallTalk robot is something I made in the summer of 2012 for the FE Gallery in Pittsburgh, in response to a call for submissions. I copied the following summary from my website…

The SmallTalk robot connects to the internet and makes small talk about the weather, the news, the day of the week and more. It has onboard text-to-speech capabilities as well as text display on a bicolor LED array.

As seen in the video below, the robot was part of the “Robots of Unusual Sizes” exhibit at Pittsburgh’s FE gallery. Here’s an excerpt from a review by art critic Kurt Shaw in The Pittsburgh Tribune-Review:

Like Upchurch’s pieces, Carnegie Mellon University art and computer science student Joel Simon’s “Small Talk Robot” engages visitors with direct communication. However, instead of just sound, it uses text and data culled from the Internet and puts it in the form of questions. As if engaging in small talk, an LED screen flickers real-time text punctuated with typical small-talk questions and phrases like “How about that?” and “I hate Mondays.”

Simon says motivation for the piece came from a desire to have viewers explore their relationship with robots and their everyday use of small talk. “If a robot is capable of small talk, and small talk is often the majority of a relationship, then that says something.”

There also is humor in the piece, as it makes fun of how silly and ridiculous small talk is. Another element is that, while the piece is openly absurd in itself, robot companionship is becoming increasingly practical and useful.

“If the elderly can have robotic animals to keep them company, then why can’t the rest of us have robots to fill in at cocktail parties and art-gallery openings for us?” Simon asks.

Chanamon Ratanalert

14 Jan 2014

A new year, a new semester, and possibly a new outlook on life for some. It seems like a new semester really brings about starting anew by having us present ourselves to the rest of the class. This is even more daunting to me because I had to pathetically select only myself in a survey of “people you know in this class.” Anyway, my name is Chanamon Ratanalert. I’m a junior in Information Systems and Human-Computer Interaction. I absolutely love design but was discouraged from pursuing it for the entire time I spent legally under my parents’ control. I find it pretty funny that I ended up steering my career as an IS major onto the path of design anyway (take that, mother).

So far, my classes have either been entirely computing or entirely design, never together. It’s something that I’ve dabbled with on the side but haven’t fully experienced. I have high hopes for this class (IACD) to not only mesh together the two sides of my life but to teach me numerous methods and mediums for creativity.

My Twitter: @jaguarspeaks
My Github: https://github.com/chanamonster

A project I worked on last semester was bringing the board game Battleship, popularized by Milton Bradley, to the computer screen as a version you could play with someone without needing to be in the same room. I do not currently have it online because I ran over the trial time for its previous host; I’ll hopefully get it up shortly. This month-long project was the first that I created entirely from scratch, from idea conception to project presentation. It was also a fair collaboration between my coding side and my design side, since I wanted to present it with multiple graphics (see images below) and interactions. It was written using JavaScript and Socket.io. I’m pretty proud of how it turned out (unlike how I felt about my janky 15112 project). I was actually excited to present it despite my general hesitation toward public speaking, and it ended up winning two awards: Best User Experience and Best Visual Design.
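
The core turn logic behind a networked Battleship is small; Socket.io mostly just relays each shot’s result between the two players’ browsers. Here is a rough JavaScript sketch of that shot-resolution logic, with hypothetical names that are not from the actual project.

```javascript
// A board is a set of ships plus the cells fired at so far.
function makeBoard(ships) {
  // ships: [{ name, cells: ["A1", "A2", ...] }]
  return { ships, hits: new Set() };
}

// Resolve one shot: miss, hit, or sunk (all of a ship's cells hit).
function fireAt(board, cell) {
  board.hits.add(cell);
  const ship = board.ships.find(s => s.cells.includes(cell));
  if (!ship) return { result: "miss" };
  const sunk = ship.cells.every(c => board.hits.has(c));
  return { result: sunk ? "sunk" : "hit", ship: ship.name };
}

// With Socket.io, the server would emit each result to both clients,
// e.g. io.emit("shot", { cell, ...fireAt(board, cell) });
```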

Battleship gameplay screenshots