
The Big Knob

[ Project with Ziyun ]

In music production studios, artists tend to get overly excited and assume the engineer can fix even the largest musical mistakes.

So every single time this happens (and it happens a lot, in every studio), engineers deliver the same line with passion: “Don’t worry, it’ll sound better in mastering.”

In mastering studios, when musicians are presented with multiple mastered options, they tend to pick the loudest and ugliest master, which in reality would have clipped peaks and awful pops and crackles, so mastering engineers feel the need to say: “Don’t worry, it’ll sound better when printed to CD / converted to MP3.”

In TV and cinema, the same issue comes up with color. Colorists calm their clients down with the standardized sentence “Don’t worry, it’ll look better after color,” or sometimes even add, “Don’t worry, it’ll look better in broadcast/projection.”

So essentially, it’s all about artist psychology. Almost everyone in the music/videography industry jokes about having a big red button that fixes the mix/master/color/final. There are even products named after this. (This one is called The Big Knob, by Mackie.)

The Big Knob

And sadly, for the most part, mastering or color is done using presets. In these cases, it really is a matter of a couple of buttons being pushed to make the artist feel better. And we think we can fix this by actually making that magic button.


One fader, one knob, and one button, that fixes everything.

It’ll be a big music-tech / videography joke that partly works. The fader and the pot will control most of the parameters (at the same time), and the button will switch it on/off.


08 Apr 2013

I have some papers that I will scan and put here, but here is some text as well. And here they are!

This project will build upon my previous work with RGBD and Augmented Reality, with a clear vision of a final product and some extra ambitious pieces to justify two people working together.

Our premise is to use RGBD and Augmented Reality to create a window into a fantasy world, or in the words of Golan, “a whimsical augmentation of a physical space.” Whimsical and personal graffiti sans vandalism? This piece that Golan showed me is definitely inspiration:

Another possible source of inspiration from artist Mark Jenkins:

We both discussed what forms this augmentation might take, and realized that we both saw actors performing with a sort of unreal physicality as part of our vision. In order to make that augmentation the best it can be (and not let the Kinect depth mesh distract at times with stretched areas formed from lack of data), we are considering whether we can get RGBD Toolkit to work with three Kinects, creating one mesh from data captured at three angles.
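The merging step can be sketched independently of RGBD Toolkit: each Kinect’s points get transformed into one shared coordinate frame before meshing. The 4x4 matrices here are placeholders; in practice they would come from calibrating the three cameras against each other.

```python
# A minimal sketch of merging point clouds from several Kinects into one
# coordinate frame. The calibration matrices are made up for illustration;
# real ones would have to be measured.

def transform(point, m):
    """Apply a 4x4 rigid transform (row-major nested lists) to an (x, y, z) point."""
    x, y, z = point
    return tuple(m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3]
                 for r in range(3))

def merge_clouds(clouds):
    """clouds: list of (points, transform) pairs; returns one merged point list."""
    merged = []
    for points, m in clouds:
        merged.extend(transform(p, m) for p in points)
    return merged

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
shift_x  = [[1, 0, 0, 2.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
cloud = merge_clouds([([(0.0, 0.0, 1.0)], identity),
                      ([(0.0, 0.0, 1.0)], shift_x)])
```

The hard part the post mentions, fitting the DSLR data onto the merged mesh, happens after this step and is not sketched here.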

An example of a project which uses this is here:

The major challenge is fitting DSLR data onto this newly created mesh. If we can get this to work, our augmented actors would fit better with their space.

So, we take the meshes of a 1-4 second video, load them into Qualcomm’s Vuforia library for augmented reality on a mobile phone, and strategically place them around the space which we are augmenting, using either custom markers or (maybe) pictures of the architecture itself. We’ll see how far we can push the library. It should be cool!


08 Apr 2013



Capstone Checkin 1

Your avatar is on a boat floating through the air. The boat is slowly heading for a lighthouse light spinning far in the distance. If you look above you see clouds and sky; if you look below you see lightning. Schools of fish swim next to the boat and flit in and out of the clouds. You are free to walk around your ship, though there is not much to explore. However, if you leap over the sides of the boat you yourself become a fish. You can swim and get back to the boat as long as you control the character. Left to its own devices, though, your character will join a school of fish and begin to travel far away from your ship. You may encounter other ships, but you may never find your own again.

-Fish state change
-Main character

-Fish swimming
-Other fish activities
-Main character walking
-Main character swimming
-Main character state change

-Main character state change
-Main character movement
-Move and spawn clouds
-Clouds avoid ship
-Fish on ship
-Fish flocking
-Compelled flocking


08 Apr 2013

Screen Shot 2013-04-08 at 8.41.55 AM

Increase Interactivity

figure from

Kinect to detect point of view
Arduino to add sensors – e.g. a Hall effect sensor
Connect user data to Cosm

Another application for 3D map
A more meaningful/interesting story
Maybe – keep working on Upfolding map

Screen Shot 2013-04-08 at 9.04.29 AM

Some Maybes:
RGBD – version of average face
Point Cloud Library


08 Apr 2013

## Background ##

In spring this year, people in northern China experienced severe weather, with extremely dangerous sandstorms and air pollution. One of the most dangerous factors in air pollution is PM2.5, which at that time was 400 times over the most dangerous level defined by the WHO.

However, PM2.5 data was not open to the public in China until recently. A studio in Guangzhou, China collected and verified data from official sources, and opened an API to the public. Therefore I decided to build a website with visualization to help people easily find the danger level in their own cities.

## Design 1 ##

Screen Shot 2013-04-08 at 9.30.40 AM

Map visualization:

I got PM2.5 data for 74 cities in China from 496 air detection stations. Since the data includes not only the PM2.5 value but also other pollutants (SO2, NO2, PM10, O3), the website will allow users to choose which air pollutant they want to view.

Visualization Tool: TileMill, D3.js, Processing.js
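The selection logic can be sketched before the map work starts: station readings kept as simple records and filtered by the chosen pollutant. The field names below are assumptions, since the studio’s actual API format is not shown here.

```python
# Hypothetical record layout; the real API from the Guangzhou studio
# almost certainly uses different field names and units.
stations = [
    {"city": "Beijing",   "pollutant": "PM2.5", "value": 412},
    {"city": "Beijing",   "pollutant": "SO2",   "value": 18},
    {"city": "Guangzhou", "pollutant": "PM2.5", "value": 75},
]

def readings_for(stations, pollutant):
    """Return (city, value) pairs for the pollutant the user picked."""
    return [(s["city"], s["value"]) for s in stations
            if s["pollutant"] == pollutant]
```

On the site, the front end (D3.js or Processing.js) would request exactly such a filtered list from the Node server whenever the user switches pollutants.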

## Design 2 ##

PM2.5 history data for each city in China. When users click a label on the map, they not only get the current PM2.5 value but are also able to browse the PM2.5 history for that city.

## Tech ##

Server: Node.js, Express and MongoDB


Github Repository:

Notes from comments:

  1. Cosm / Pachube open data for PM2.5
  2. whether a map should be the main (or ONLY) entrance to the data
  3. include some basic information about these pollutants and how to protect yourself (if possible) on the website. Accessible public service information would definitely be helpful

  4. how you visualize over wide areas
  5. Will you be able to zoom in/out, and thereby get a greater/lesser resolution of PM2.5 distribution
  6. What is the important part of the data? How do people want to see this kind of visualization?

  7. Immediate concerns include interpolating data across geographic features that aren’t well-categorized, which is a proven hard problem.  There’s also the interesting ethical question of providing people with data that indicates imminent or constant danger without also providing them a means of acting on it.


08 Apr 2013

My plan for my capstone project is to construct a working, tangible version of the interactive novel concept I prototyped for my Interactivity Project. That said, my sketch looks an awful lot like (read: the same as) the sketches I posted in my Project 3 deliverable post. For an overview of my intentions, please visit the IMISPHYX IV project page, and for a continually updating list of prior art that’s inspired me, try this link.

I’ve iterated slightly on my goals for the project, based upon feedback from the class and also on my own daydreams, of which I tend to have a ton. Moving forward, I hope to accomplish the following things:

1. implement the reactivision prototype.
2. ditch the objects/props concept from the last iteration, and focus instead on illuminating the contrast that exists between the dialogue that occurs among characters and thoughts characters hold within them.
3. really push the way I display text upon the table to make it as engaging as possible.

I’m also still torn between using my own personal story, Imisphyx, for this project, or proceeding with a story that has already been told, and thereby at least allow people to compare the interactive version to the original, static version. This would also eliminate my need to worry about perfecting the story at the same time I’m perfecting my interaction — although it *is* arguable that evolving both story and presentation simultaneously is the best way to go.

If I were to shy away from using ‘Imisphyx’, I would like to revisit the Alfred Bester novel I was toying with in my original Project 3 sketch. What’s nice about The Demolished Man is that it deals heavily with the exact themes I’m trying to explore in my interactive piece: the tension between what’s happening inside somebody’s head and what they’re actually saying out loud. I think this story could allow me to play with interesting visualizations of text in the characters’ ‘first person’, ‘internal’ mode.

For example, below is a passage from the book where the murderer, Ben Reich, is trying to get a very annoying song stuck in his head, so that the telepathic cop, Linc Powell, can’t pry beyond it into Reich’s mind to discover his guilt.

A tune of utter monotony filled the room with agonizing, unforgettable banality. It was the quintessence of every melodic cliche Reich had ever heard. No matter what melody you tried to remember, it invariably led down the path of familiarity to “Tenser, Said The Tensor.” Then Duffy began to sing:

Eight, sir; seven, sir;
Six, sir; five, sir;
Four, sir; three, sir;
Two, sir; one!
Tenser, said the Tensor.
Tenser, said the Tensor.
Tension, apprehension,
And dissension have begun.

“Oh my God!” Reich exclaimed.

“I’ve got some real gone tricks in that tune,” Duffy said, still playing. “Notice the beat after `one’? That’s a semicadence. Then you get another beat after `begun.’ That turns the end of the song into a semicadence, too, so you can’t ever end it. The beat keeps you running in circles, like: Tension, apprehension, and dissension have begun. RIFF. Tension, apprehension, and dissension have begun. RIFF. Tension, appre—”

“You little devil!” Reich started to his feet, pounding his palms on his ears. “I’m accursed. How long is this affliction going to last?”

“Not more than a month.”

The description Bester provides of the nature of the song, the patterns it possesses, and its cyclical nature lends itself to some really awesome interactive portrayals. On the table, one could envision Reich’s character object set to ‘internal’ mode, suddenly emitting endless spirals of the annoying, mindworm tune. Perhaps, every time Linc tries to pry into his head, his thoughts are physically deflected on the screen by the facade of Reich’s textual whirlpool. See below:



07 Apr 2013

My goal is to integrate hardware and software for this project. I recently came back from Beijing, China, and the trip left a footprint on my heart. I really had a great time, and I really want to document it in one way or another. This new idea has not deviated from my original idea of creating a map of Pittsburgh with a tracking system of where a person has been in the city. My original idea was to make a map of Pittsburgh, install tons of LED lights behind it, and light them up one by one (or in small groups) as a user visits more places. So the more places a user visits, the brighter the map gets. However, I need to be realistic about the time frame and the skills that I have. Getting the first prototype of the hardware done will take me a lot of time, and I am not entirely sure it will exemplify the concept I have in mind. For this project, I don’t want to only create a demo: it really has to be a poem of some sort.

I want to create an interactive experience between a user and a map. I am planning on laser cutting a map of Beijing. The goal is for a user to be able to learn more about a specific place in the city, where he can touch (or interact with the map in other ways), and photos or information of the places would show up. I have a collection of photos I took while in Beijing, and I think it will be very memorable to be able to document the trip this way. I have not exactly decided on what types of interaction I would like to do – right now, I am thinking either the Kinect or light sensors. I want to utilize the tools I have learned from the past projects and use them to create something beautiful and meaningful.

Here are some photos from the trip!!





07 Apr 2013


photo (3)

For my final project I want to use rough posture recognition to create a system that triggers a photograph as soon as a pose enters a certain set of parameters. In the above photographs I attempted a rough approximation of this system. The photographs on the top were taken when faceOSC detected that eyebrow position and mouth width both equaled two; the second row is when the system detected both parameters equaled zero; and the bottom row shows a few samples of the mess-ups.
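The trigger logic itself is small. Here is a sketch of it with made-up parameter names and tolerances, independent of how the faceOSC values actually arrive; the re-arming step keeps one held pose from firing the camera repeatedly.

```python
# A minimal sketch of pose-triggered capture, assuming faceOSC-style
# parameters arrive as plain numbers. Parameter names, target values,
# and the tolerance are all assumptions.

class PoseTrigger:
    """Fires once when every tracked parameter enters its target range."""

    def __init__(self, targets, tolerance=0.25):
        self.targets = targets        # e.g. {"eyebrows": 2.0, "mouth_width": 2.0}
        self.tolerance = tolerance
        self.armed = True             # prevents re-firing while the pose is held

    def update(self, params):
        in_pose = all(abs(params[k] - v) <= self.tolerance
                      for k, v in self.targets.items())
        if in_pose and self.armed:
            self.armed = False
            return True               # caller triggers the DSLR here
        if not in_pose:
            self.armed = True         # re-arm once the pose is left
        return False
```

The same shape of code would work for the pitch/volume idea, with pitch and volume as the tracked parameters instead of face features.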

I have a couple ideas of how I might implement this project at varying levels of complexity:

  • trigger a DSLR camera whenever face or body is in a particular position. Make a large format print of faces in a grid in their various positions. 
  • Record rough face tracking data of a face making a certain gesture. Capture that gesture frame by frame, and then capture photographs that imitate that gesture frame by frame.
  • Trigger photographs to be taken when people reach certain pitch volume combinations. Create an interactive installation that you sing to and it brings up people’s faces that were singing the closest pitch volume combination.

All of these ideas involve figuring out how to trigger a DSLR photograph from the computer and storing a database of images based on their various properties. Here are some resources I have come up with to help me figure out how to trigger a DSLR:

In terms of databasing photographs based on their various properties, Golan recommended looking into principal component analysis, which allows you to reduce many axes of similarity into a manageable number. He drew me a beautiful picture of how it works:

photo (2)
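The idea can be illustrated in two dimensions with nothing but the standard library. This is a sketch of plain PCA, not of any photo-specific code; extending it to many axes of similarity means the same eigen-decomposition on a larger covariance matrix.

```python
import math

def principal_axis(points):
    """Unit vector along the largest-variance direction of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # entries of the 2x2 covariance matrix
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]] (closed form in 2-D)
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # a matching eigenvector; the second form covers the degenerate case
    vx, vy = sxy, lam - sxx
    if vx == 0 and vy == 0:
        vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm
```

Projecting each face’s parameter vector onto the first few such axes would give the low-dimensional coordinates to database the photographs by.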


I also found an openFrameworks thread that pretty much describes this project. Here are some of the influences I pulled out of it:

Stop Motion by Ole Kristensen


Cheese by Christian Moeller

Ocean_v1. by Wolf Nkole Helzle


07 Apr 2013

In order to start playing around with branching structures, I implemented diffusion-limited aggregation in Rhino Python (Rhino is a 3D modeling program). As long as the generating algorithm doesn’t require some insane speed, I think it makes sense to do stuff with rhinoPython, since everything is already in CAD format and therefore much easier to get into the manufacturing process. The following is a 3D DLA with particles fired from one random point on a circle to another random point. If a particle comes within “stick range” of a node of the existing structure, it stops and is added to the structure. All of the branches from the root to that node get a little thicker. Clearly something more involved than little tubes would be required to make something manufacturable. I am not really sure of the best way to do this. Maybe some sort of volume sweeping with spheres? Or maybe make really coarse meshes and run smoothing algorithms.
2013-04-08 09.30.55-3
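For readers without Rhino, the same stick-range logic can be sketched in plain Python in 2-D. This is the textbook random-walk variant of DLA rather than the point-to-point firing described above, and all the parameters are arbitrary.

```python
import math
import random

def dla(n_particles=30, stick_range=1.0, launch_radius=5.0, step=0.5, seed=1):
    """2-D diffusion-limited aggregation: random walkers stick to the cluster."""
    random.seed(seed)
    cluster = [(0.0, 0.0)]                       # seed node at the origin
    for _ in range(n_particles):
        a = random.uniform(0, 2 * math.pi)
        x, y = launch_radius * math.cos(a), launch_radius * math.sin(a)
        while True:
            # stick if within range of any existing node
            if any(math.hypot(x - cx, y - cy) <= stick_range
                   for cx, cy in cluster):
                cluster.append((x, y))
                break
            # otherwise take a random step
            a = random.uniform(0, 2 * math.pi)
            x += step * math.cos(a)
            y += step * math.sin(a)
            # respawn walkers that drift too far away
            if math.hypot(x, y) > 2 * launch_radius:
                a = random.uniform(0, 2 * math.pi)
                x, y = launch_radius * math.cos(a), launch_radius * math.sin(a)
    return cluster
```

In the Rhino version, each sticking event would also walk back up the branch to the root, thickening every tube along the way.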
Really I would rather do something with simulating heat conduction and fluids (with heat transfer) over 3D meshes, and have them grow so that nodes on the mesh are greedy for cold areas and release inhibiting agents to keep their neighbors from stealing their coolth. Like corals, sort of. But I don’t think I can do that in a few weeks. I know very little about heat transfer, and I am just beginning to learn the Navier-Stokes equations and related concepts used for computational fluid mechanics. Maybe I can sort of approximate this with the particle DLA?

We also face several difficulties in getting this thing made in aluminum and anodized black in time for the final crit.
Between lost-wax slurry casting, 3D printing, anodization, and extrusion, we are going to make a functional sculpture of lightning that will be big and heavy.

2013-04-08 09.36.12 copy