Category Archives: 0-Introduction

Project 0

My name is Daniel Vu. I normally go by Daniel or Dan. I am a super senior in the BFA Art program with a concentration in ETB (Electronic and Time-Based work). For the most part I do 3D modeling, with some branching into many other fields of art through coursework. I enjoy playing games, and making them as part of CMU's Game Creation Society is another thing I like doing.

Regarding this class, I don't actually have that much confidence in my programming ability, but I will give it my best. I have some minor experience here and there, mainly from classes, so I am familiar with digital art, using code as an art form, and several of the topics in the course. I hope to improve my skills and make something really cool at the end of the semester. As for what I want from this class, I would like to see more interesting interactive work, Kinect or other computer-vision stuff, and video game related projects.

A tiny prototype information-visualization project I did for another class, Environmental Hackfest, is something I call 'The Top Ten,' written in Processing.

The Top Ten

It is a basic interactive visualization of the ten countries that produce the most CO2 emissions. Each implemented country is highlighted when moused over; clicking it brings up additional information and charts.
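Roughly, the interaction is a hover test plus a click handler. Below is a minimal Processing sketch of that pattern; the country names and emission figures are placeholders made up for illustration, not the project's data.

    // Hover-highlight and click-to-select pattern, similar to The Top Ten.
    // Country names and emission values are placeholders, not the project's data.
    String[] countries = { "China", "United States", "India", "Russia", "Japan" };
    float[] emissions  = { 10.0, 5.4, 2.3, 1.8, 1.3 };
    int selected = -1;              // index of the clicked country, -1 for none
    int barH = 40;

    void setup() {
      size(500, 300);
      textSize(14);
    }

    void draw() {
      background(255);
      for (int i = 0; i < countries.length; i++) {
        int y = i * barH + 20;
        boolean hovered = mouseY > y && mouseY < y + barH - 5;
        if (i == selected)  fill(200, 80, 80);    // clicked country
        else if (hovered)   fill(120, 160, 220);  // highlighted on mouse-over
        else                fill(180);
        rect(20, y, emissions[i] * 40, barH - 5);
        fill(0);
        text(countries[i], 25, y + 25);
      }
      if (selected >= 0) {          // show extra information for the selection
        fill(0);
        text(countries[selected] + ": " + emissions[selected] + " Gt CO2 (placeholder)", 20, height - 15);
      }
    }

    void mousePressed() {
      for (int i = 0; i < countries.length; i++) {
        int y = i * barH + 20;
        if (mouseY > y && mouseY < y + barH - 5) selected = i;
      }
    }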

 

I haven’t used them yet but:
Twitter: @flock_of_sheep
GitHub: SheepWolf

 

Sama Kanbour

16 Jan 2014

Passionate about giving meaning to data with sexy and elegant visualizations.

Twitter @SamaKanbour
Github samakanbour

For fun, a fellow classmate and I created a web application that gives an overview of people's emotions on a particular day in Doha, Qatar. These emotions vary between happiness, sorrow, anger, love and fear. The application is composed of a graph that shows the ratios of each of these feelings. The graph updates itself every 25 seconds by fetching data from Twitter. Music notes are played every time a person posts his/her emotion.
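The real project is a web application, but the polling pattern translates directly: check a timer, re-fetch the counts, redraw the ratios. Below is a minimal Processing sketch of that loop; the fetch is faked with random placeholder counts rather than real Twitter calls.

    // Sketch of the polling pattern described above: refresh the emotion counts
    // every 25 seconds and redraw the ratios. The real project is a web app
    // talking to the Twitter API; here the "fetch" is faked with random counts.
    String[] emotions = { "happiness", "sorrow", "anger", "love", "fear" };
    int[] counts = new int[emotions.length];
    int lastFetch = 0;
    int interval = 25 * 1000;       // 25 seconds, in milliseconds

    void setup() {
      size(500, 300);
      fetchCounts();
    }

    void draw() {
      if (millis() - lastFetch > interval) {
        fetchCounts();              // periodic refresh, as in the project
      }
      background(255);
      int total = 0;
      for (int c : counts) total += c;
      float x = 0;
      for (int i = 0; i < emotions.length; i++) {
        float w = (total > 0) ? width * counts[i] / (float) total : 0;
        fill(map(i, 0, emotions.length, 0, 255), 150, 200);
        rect(x, 100, w, 100);       // one segment per emotion, sized by its ratio
        fill(0);
        text(emotions[i], x + 4, 90);
        x += w;
      }
    }

    // Stand-in for the Twitter fetch; in the real app a new post would also
    // trigger a music note here.
    void fetchCounts() {
      for (int i = 0; i < counts.length; i++) counts[i] = (int) random(1, 20);
      lastFetch = millis();
    }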

 

Sharemotion1

Sharemotion2

Andre Le

15 Jan 2014

I’m Andre Le, a master’s student in Human-Computer Interaction. I have a BS in Visual Design and worked for about 5 years in the tech and advertising industries before coming here. I started programming at 10 years old, when my 4th grade teacher introduced me to BASIC, Logo, and HyperCard. Today, I am a total geek and like to dabble in everything from hardware/software hacking to video production.

A few other interests are martial arts (jiujitsu and muay thai), robotics, hip hop dance choreography, and food. :)

Mirror Mirror

[Photo: Mirror Mirror]

Mirror Mirror, a game I developed in Fall 2013 in Paolo Pedercini’s Experimental Game Design class, is a collaborative multi-screen, multiplayer puzzle game spanning 3 computers located next to each other. Each level contains one or more lasers, mirrors, and circular targets in a puzzle-like configuration. The three players are able to move and rotate the mirrors to bounce lasers across multiple screens. The goal of the game is to pass the laser(s) through all of the targets. In many situations, players are dependent on one another and must collaborate in order to effectively use the provided mirrors and lasers.
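The core of the laser-bounce mechanic is reflecting a laser's direction across a mirror's surface normal. Here is a small Processing sketch of that math (an illustration of the standard reflection formula, not the game's actual code):

    // The bounce math only, not the game's code: reflect a laser direction d
    // across a mirror's unit normal n, using r = d - 2 (d . n) n.
    PVector reflect(PVector d, PVector n) {
      PVector normal = new PVector(n.x, n.y);
      normal.normalize();
      float dot = d.dot(normal);
      return PVector.sub(d, PVector.mult(normal, 2 * dot));
    }

    void setup() {
      PVector laser  = new PVector(1, 1);    // travelling right and "down" the screen
      PVector mirror = new PVector(0, -1);   // normal of a horizontal mirror
      println(reflect(laser, mirror));       // prints (1, -1): the laser bounces back up
    }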

 

[youtube=http://www.youtube.com/watch?v=Ey1tZTFXIdY&feature=youtu.be]

 

Download and Instructions

Links:

Website: andrele.com
Github: andrele
Twitter: @andre_le

 

 

Paul Peng

15 Jan 2014

My name is Paul Peng and I am a sophomore in the Fine Arts program at Carnegie Mellon University. Next year I plan on being a junior in the Computer Science and Arts program at Carnegie Mellon University. I like to draw and program things. For this class I plan to draw and program things.

Last semester I made a depressing chatterbot for one of my studio classes. It prints out vaguely melancholy statements in its chat window every 3-8 seconds. To the right is another chat window for the viewer to respond to the bot, but this chat window is greyed out, leaving the chatterbot to endlessly talk alone, unsure of whether there is anyone who cares or is listening at all. It doesn’t actually feel these things because it is a chatterbot.

[Screenshots of the chatterbot]

I didn’t use any toolkits for generative text / literature for this, which I should have because coding this would have been much less annoying and it would have allowed me to create a greater variety of sentence structures for the chatterbot to spit out. It’s still pretty nice, though.
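The hand-rolled approach presumably amounts to something like picking from sentence templates on a random 3-8 second timer. Here is a minimal Processing sketch of that idea, with placeholder phrases rather than the bot's actual lines:

    // A minimal template-based generator on a random 3-8 second timer, in the
    // spirit of the chatterbot above. The phrases are placeholders, not the
    // bot's actual lines.
    String[] openers = { "i guess", "sometimes i think", "it feels like", "maybe" };
    String[] endings = { "nobody is reading this.", "the window is empty again.",
                         "i am talking to myself.", "it doesn't matter anyway." };
    int nextTime = 0;

    void setup() {
      size(400, 200);
      scheduleNext();
    }

    void draw() {
      if (millis() > nextTime) {
        String line = openers[(int) random(openers.length)] + " "
                    + endings[(int) random(endings.length)];
        println(line);              // the real bot prints into its own chat window
        scheduleNext();
      }
    }

    void scheduleNext() {
      nextTime = millis() + (int) random(3000, 8000);   // wait 3-8 seconds
    }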

twit git

Kevan Loney

15 Jan 2014

Howdy y’all!

My name is Kevan Loney. As an introduction, I’ll give you folks a little background on who I am and where I came from. I’m an MFA Video & Media Design candidate (’16) in the School of Drama. A little while ago, I graduated with a B.S. in Visualization from Texas A&M University (TAMU). The Visualization program is a fairly new program under the College of Architecture at TAMU. It exposes its students to a broad range of topics such as art, animation, graphic design, and interactive visual programming, and toward the end you can begin to choose the track you want. I fell somewhere between interactive art and animation. During my education at TAMU I began to have withdrawal symptoms from my other love in life, theatre, so I began to gear most of my projects at the university to be performance-based or performance-inspired. The summer before I graduated, my animation was accepted for presentation at the SIGGRAPH “Dailies!” 2012, where I ran into several Carnegie Mellon students who found out that I was interested in theatre and tech. They told me about the brand new VMD program at the School of Drama, and I knew right then it was to be the next step in my journey!

One semester down and the second one is just beginning! All I can say is that it has been the best decision of my life. I could not be happier to be at this school learning from and with some of the brightest, most creative people around. I look forward to getting to know everyone in this class and getting my feet wet in the life of coding. I’ve done some coding in the past but have recently been focusing on platforms such as Touch Designer and Max/MSP/Jitter, so returning to pure coding after a year or so will be an interesting task that I’m excited to take on and see what comes of it. I’m hoping this class will allow me to make more customized interactivity for the stage and live performances, to enhance how the viewer experiences live entertainment.

Below is a bit of my work.

This was my first full production, THE NINA VARIATIONS! It just recently wrapped up in November. In this show, 90 percent of the visuals were generated live through Touch Designer and output through Millumin. We had live cameras embedded in the set on stage, along with cameras backstage in a makeshift live film shoot within a small hallway behind the theatre. These feeds were output and manipulated live through TD on a PC and sent to a separate Mac Pro, which handled the projections for mapping and masking along the set. In addition, preview monitors were put backstage so that the actors could reposition themselves relative to the set. It was quite the task, and a crazy first production, but I loved every minute of it!

Loney_NinaVariations_WebEdit

This is my Demo Reel with short segments of my work at Texas A&M before coming to CMU. I think it may be time to update soon. ;)

 

Some handy links:

https://twitter.com/kdloney

https://vimeo.com/kevanloney

https://github.com/kdloney

www.KevanLoney.com

 

Cheers,

Kevan

 

 

Hi! I am Yingri Guan, a first-year candidate in the MTID (Master of Tangible Interaction Design) program. With a background in graphic design and mathematics, I love to visualize everyday phenomena through mathematical theories and equations.

I hope to sharpen my programming skills, experiment with new ways to visualize massive data sets, and develop a new perspective on understanding the phenomena around us.

My portfolio website is yingriguan.com, and I will be using Twitter and GitHub too.

Work

One of my works is “Ice Core.”

“Ice Core” brings together the aesthetics of layers of ice with the immense amount of weather information core samples contain. It features eight acrylic tubes filled with layered gel wax, with each layer denoting the average data contained within an eight-thousand-year sample. The grayscale tubes indicate the different carbon dioxide levels in the ice core layers: the darker the layer of wax, the more carbon dioxide was produced during a particular thousand years. The colored tubes show temperature variation using the colors associated with a standard weather map; for example, red denotes the highest temperature and blue the lowest. Shown side by side, these tubes provide an insightful comparison of carbon dioxide levels in relation to global temperature changes over the last four hundred thousand years.

IceCoreCloseUp2

IceCoreCloseUp1

IceCoreDetail4

IceCoreDetail1

IceCoreInstallationView

 

 

 

Wanfang Diao

15 Jan 2014

Hello everyone! My name is Wanfang. My background is Electronic Engineering with a minor in Industrial Design. Now I’m in the Master of Tangible Interaction Design program. I enjoy creating playful, interactive experiences. Most of my work involves building tangible smart things; I have coding experience, but not much in visualization. The funny thing is that my initial motivation for teaching myself Processing was to build an application for visualizing the Smith chart, to help me learn electromagnetic theory :p. I guess IACD is now a great playground for me to use digital media to explore and create fun experiences!!!

Twitter: @Wanfangd

Github: https://github.com/wfdiao

“Note Cubes” is my solo project from the Hybrid Instrument Building course.

Note Cubes from Wanfang Diao on Vimeo.

 

 

 

[Photos of the Note Cubes]

Note Cubes is a set of tangible cubes designed for children to explore sound, notes and rhythm. By putting them in a line or stacking them (just like playing with toy bricks), you can let cubes trigger their neighbor cubes (left/right and up/down) to play notes. Kids get a piece of sound or melody after a few trials.

In this project, I wanted to build a very straightforward mapping between space (of the cubes) and time (of the musical notes), and give kids a playful experience of exploring rhythm and melody through a kind of “space sense.”
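To make the space-to-time mapping concrete, here is a minimal Processing sketch of the neighbor-triggering idea for a single row of cubes; the note numbers, delay, and layout are placeholder assumptions, not the project's actual cube firmware.

    // Sketch of the space-to-time idea for a single row of cubes: triggering a
    // cube plays its note and, after a short delay, triggers its right-hand
    // neighbor. Notes, delay, and layout are placeholders, not the cube firmware.
    int[] notes = { 60, 62, 64, 67, 69 };   // placeholder MIDI-style note numbers
    int current = -1;                       // index of the cube currently playing
    int nextTrigger = 0;
    int gap = 400;                          // delay between neighbors, in ms

    void setup() {
      size(500, 120);
    }

    void draw() {
      background(255);
      // pass the trigger along to the neighbor cube after the delay
      if (current >= 0 && current < notes.length && millis() > nextTrigger) {
        println("cube " + current + " plays note " + notes[current]);
        current++;
        nextTrigger = millis() + gap;
      }
      // draw the row of cubes, highlighting the one that just played
      for (int i = 0; i < notes.length; i++) {
        fill(i == current - 1 ? color(255, 180, 60) : color(220));
        rect(20 + i * 90, 30, 70, 70);
      }
    }

    void mousePressed() {
      current = 0;                          // "tap" the first cube to start the chain
      nextTrigger = millis();
    }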

Ticha-Introduction

Hello everyone! My name’s Ticha and I am currently a sophomore in the School of Art, though I also plan to major in Computer Science. This semester in IACD I hope to become comfortable with a variety of media-arts programming environments so that I can create lots of cool projects in the future. :)

https://twitter.com/creativethumbs

https://github.com/creativethumbs

Augmented Projection: Musical Stairs

For Golan’s EMS II class last semester, I created a simple augmented projection on one of the stairwell signs in Gates. (Below is the description of the project I wrote up last semester.)

‘Musical Stairs’ is a project that examines the role of music in our mundane lives. It was partly inspired by ‘Casse’ by Andreas Gysin and Sidi Vanetti, which effectively employs sound to add appeal to their simple projection.

An old habit of mine is tapping my fingers on a desk or flat surface in a manner that imitates playing the piano. It is a habit I thought I had grown out of, but it recently resurfaced due to my frustration with not having easy access to a piano. Clearly, I am not the only one who enjoys this ‘musical instrument surrogacy’ when idle hands strike – I have seen enough people drumming on chairs and playing table keyboards to know this for certain. My projection attempts to manifest this concept of using an everyday object as a surrogate for a specific musical instrument. In addition, it redefines the image of a staircase by recontextualizing it into a musical situation. The way the program works is simple: when the user clicks on a point on the screen, a white ball is spawned, which changes color and plays a note pertaining to the step it makes contact with.
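A minimal Processing sketch of that mechanic follows; the step layout and the note call are placeholders, not the original projection code.

    // Minimal sketch of the mechanic: click to spawn a white ball that falls;
    // when it reaches a step it takes that step's color and a note would play.
    // The step layout and the note call are placeholders, not the original code.
    int numSteps = 6;
    ArrayList<PVector> balls = new ArrayList<PVector>();
    ArrayList<Integer> landedOn = new ArrayList<Integer>();   // -1 while falling

    void setup() {
      size(600, 400);
    }

    void draw() {
      background(30);
      float stepW = width / (float) numSteps;
      for (int i = 0; i < numSteps; i++) {      // staircase rising left to right
        fill(80);
        rect(i * stepW, stepY(i), stepW, height - stepY(i));
      }
      for (int b = 0; b < balls.size(); b++) {
        PVector p = balls.get(b);
        if (landedOn.get(b) < 0) {
          p.y += 4;                             // fall
          int step = constrain((int) (p.x / stepW), 0, numSteps - 1);
          if (p.y >= stepY(step)) {
            landedOn.set(b, step);              // landed on this step
            println("play note for step " + step);   // placeholder for the sound
          }
        }
        if (landedOn.get(b) < 0) fill(255);               // white while falling
        else fill(50 + landedOn.get(b) * 35, 150, 200);   // colored once landed
        ellipse(p.x, p.y, 16, 16);
      }
    }

    float stepY(int i) {                        // top edge of step i
      return height - (i + 1) * (height / (float) (numSteps + 1));
    }

    void mousePressed() {
      balls.add(new PVector(mouseX, 0));
      landedOn.add(-1);
    }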

 

p0 : Joel Simon

Hello everyone, I am Joel. I already know a handful of you and look forward to getting to know the rest. I have an eclectic group of things I like to make in my free time, including lamps, video games, robotics and figurative sculpture. I think a lot of my software projects have lacked a certain amount of visual polish; in IACD I hope to take my ideas to the next level of finesse and maybe even create some lasting open-source projects.

https://twitter.com/JoelSSimon

https://github.com/Sloth6

The SmallTalk robot is something I made in the summer of 2012 for the FE gallery in Pittsburgh, in response to a call for submissions. I copied the following summary from my website…

The SmallTalk robot connects to the internet and makes small talk about the weather, the news, the day of the week and more. It has onboard text-to-speech capabilities as well as text display on a bicolor LED array.

As seen in the video below, the robot was part of the “Robots of Unusual Sizes” exhibit at Pittsburgh’s FE gallery. Here’s an excerpt from a review by art critic Kurt Shaw in The Pittsburgh Tribune-Review:

Like Upchurch’s pieces, Carnegie Mellon University art and computer science student Joel Simon’s “Small Talk Robot” engages visitors with direct communication. However, instead of just sound, it uses text and data culled from the Internet and puts it in the form of questions. As if engaging in small talk, an LED screen flickers real-time text punctuated with typical small-talk questions and phrases like “How about that?” and “I hate Mondays.”

Simon says motivation for the piece came from a desire to have viewers explore their relationship with robots and their everyday use of small talk. “If a robot is capable of small talk, and small talk is often the majority of a relationship, then that says something.”

There also is humor in the piece, as it makes fun of how silly and ridiculous small talk is. Another element is that, while the piece is openly absurd in itself, robot companionship is becoming increasingly practical and useful.

“If the elderly can have robotic animals to keep them company, then why can’t the rest of us have robots to fill in at cocktail parties and art-gallery openings for us?” Simon asks.

Project 0 – Spencer Barton

Spencer

I enjoy building things and that is why I am an engineer. Here at CMU I am a junior in Electrical & Computer Engineering. My focus is control systems, with a bent towards robotics. On campus I channel much of my energy through the Robotics Club and fencing.

As much as I enjoy my major, engineering can be dull. I want to be exposed to new ideas, new people and a new way of thinking. I am looking forward to being challenged in this course to create interesting stuff.

Past Project: Footprints

footprints1975216

This project explored how people move through space. We set up a camera to record the effects of subtle manipulations of the space that caused large changes in people’s paths. Our final product was an “eye” created in the UC. We performed a number of small manipulations, like holding conversations in key areas and moving chairs slightly.

The camera watches from above, recording motion with a tracking algorithm while those below walk unaware. We record motion by looking at frame differences, which then translate to activity. As a space becomes more active, it shifts from blue to red.
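A minimal Processing sketch of the frame-differencing idea follows (using the video library's Capture class); the decay rate and color scaling are placeholder choices, not the code from the actual repo.

    // Sketch of the frame-differencing idea: compare each webcam frame to the
    // previous one and accumulate the change into an activity map that shades
    // from blue (calm) to red (busy). Not the project's repo code; the decay
    // and scaling constants are placeholder choices. Needs the video library.
    import processing.video.*;

    Capture cam;
    PImage prev;
    float[] activity;

    void setup() {
      size(640, 480);
      cam = new Capture(this, width, height);
      cam.start();
      activity = new float[width * height];
    }

    void draw() {
      if (cam.available()) {
        cam.read();
        cam.loadPixels();
        if (prev != null) {
          prev.loadPixels();
          loadPixels();
          for (int i = 0; i < pixels.length; i++) {
            // per-pixel brightness difference between this frame and the last
            float diff = abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
            // accumulate activity with a slow decay so old motion fades out
            activity[i] = activity[i] * 0.98 + diff * 0.02;
            // map activity onto a blue-to-red scale
            float t = constrain(activity[i] / 30.0, 0, 1);
            pixels[i] = color(255 * t, 0, 255 * (1 - t));
          }
          updatePixels();
        }
        prev = cam.get();           // keep a copy of this frame for next time
      }
    }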

This project was created last semester in Ali Momeni’s Hybrid Instruments with Rob Kotcher. We used Processing for the capture and analysis (repo).

On the web

Github
Twitter