Alan

08 Apr 2013

## Background ##

In the spring of this year, people in northern China experienced severe weather, with extremely dangerous sandstorms and air pollution. One of the dangerous factors in air pollution is PM2.5, which at that time was 400 times over the most dangerous level defined by the WHO.

However, PM2.5 data was not open to the public in China until recently. The studio BestApp.us in Guangzhou, China collected and verified data from official sources, and opened an API to the public. I therefore decided to build a website with visualizations to help people easily find the danger level in their own cities.

## Design 1 ##

Screen Shot 2013-04-08 at 9.30.40 AM

Map visualization:

I got PM2.5 data for 74 cities in China, covering 496 air detection stations. Since the data includes not only PM2.5 values but also SO2, NO2, PM10, and O3 levels, the website will allow users to choose which air pollutant they want to view.
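As a sketch of how the site might bucket readings into danger levels, here is a minimal classifier. The breakpoints below are illustrative assumptions loosely based on the WHO 24-hour PM2.5 guideline of 25 µg/m³, not PM25.in's actual scale:

```python
# Hypothetical sketch: classify a PM2.5 reading against illustrative
# breakpoints. The thresholds are assumptions for demonstration only.

def pm25_level(value_ugm3):
    """Map a PM2.5 concentration (ug/m3) to a coarse danger label."""
    breakpoints = [
        (25, "within WHO guideline"),
        (75, "unhealthy for sensitive groups"),
        (150, "unhealthy"),
        (250, "very unhealthy"),
    ]
    for limit, label in breakpoints:
        if value_ugm3 <= limit:
            return label
    return "hazardous"

print(pm25_level(18))    # within WHO guideline
print(pm25_level(400))   # hazardous
```

The same lookup could drive both the map's color scale and per-pollutant labels, with a separate breakpoint table per pollutant.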

Visualization Tool: TileMill, D3.js, Processing.js

## Design 2 ##

PM2.5 history data for each city in China. When users click a label on the map, they not only get the current PM2.5 value but can also browse the PM2.5 history for that city.

## Tech ##

Server: Node.js, Express and MongoDB

Data: PM25.in

Github Repository: https://github.com/hhua/PM2.5

Notes from comments:

  1. Cosm / Pachube open data for PM2.5
  2. whether a map should be the main (or ONLY) entrance to the data
  3. include some basic information about these pollutants and how to protect yourself (if possible) on the website. Accessible public service information would definitely be helpful

  4. how you visualize over wide areas
  5. Will you be able to zoom in/out, and thereby get a greater/lesser resolution of PM2.5 distribution
  6. What is the important part of the data? How do people want to see this kind of visualization?

  7. Immediate concerns include interpolating data across geographic features that aren’t well-categorized, which is a proven hard problem.  There’s also the interesting ethical question of providing people with data that indicates imminent or constant danger without also providing them a means of acting on it.

Elwin

08 Apr 2013

I’ve decided to take my “shy mirror” idea from project 3 to the next level for my capstone project. The comments that I received from fellow students really helped me to think a bit deeper about the concept and how far I could take this.

Development & Improvements


– Embed the camera behind the mirror, in the center. This way the camera’s viewing angle will always rotate with the mirror and won’t be restricted, compared to the fixed camera with a fixed viewing angle in my current design. Golan mentioned this in the comments, and I had the same idea earlier, but it got lost during the building process. This time I definitely want to try this method, and will probably purchase some acrylic mirror instead of the mirror I bought from RiteAid.

– Golan also mentioned using the standard OpenCV face tracker. I wasn’t aware that the standard library had a face tracking option. This is definitely something I will try out, since the ofxFaceTracker was lagging for some reason.

– Trajectory planning for smoother movement. At the moment I’m just sending out a rotational angle to the servo, hence the quick motion to a specific location.
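One simple way to sketch that trajectory planning: instead of sending the target angle directly, generate intermediate setpoints along an ease-in/ease-out profile. The function and parameter names below are my own, not the project's code:

```python
import math

# Sketch of servo trajectory planning: generate intermediate setpoints
# along a cosine ease-in/ease-out curve instead of jumping to the target.

def ease_trajectory(start_deg, end_deg, steps):
    """Return servo angles from start to end with smooth
    acceleration and deceleration (cosine easing)."""
    angles = []
    for i in range(steps + 1):
        t = i / steps                        # normalized time, 0..1
        s = (1 - math.cos(math.pi * t)) / 2  # eased progress, 0..1
        angles.append(start_deg + (end_deg - start_deg) * s)
    return angles

path = ease_trajectory(0, 90, 10)  # 11 setpoints, 0 deg -> 90 deg
```

Each setpoint would then be sent to the servo on a fixed timer; varying the step count or swapping the easing curve is also one knob for the "personality" idea below.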


– I always had the idea that this would be a wall piece. I think for the capstone project, I would be able to pull it off if I plan it in advance and arrange a space and materials to actually construct a wall for it. Also, the current mount is pretty ghetto and built last minute. For the capstone version, I would try to hide the electronics and spend more time creating and polishing a casing for the piece. Probably going to do some more sketches and then model it in Rhino, and then perhaps 3D print the shell?

Personality

This would be the major attraction. Apart from further developing the points above, I’ve received a lot of feedback about creating more personality for the mirror. I think this is a very interesting idea and something I would like to pursue for the capstone version.

In the realm of the “shy mirror”, I could create and showcase several personalities based on motion, speed and timing. For example:
– Slow and smooth motion to create a shy and innocent character
– Quicker and smooth motion for scared (?)
– Quick and jerky to purposely neglect your presence like giving you the cold shoulder
– Quick and slow to ignore
These are just quick ideas for now, and I would need to define them more in depth. To do this, I’ve been diving into academic literature on expressing emotion through motion, Laban movement analysis, and robotics.

Also, Dev mentioned looking at Disney for inspiration which is an awesome idea.

Someone also mentioned adding a behavior where the mirror roams around slowly in the absence of a face and becomes startled when it finds one. I think that’s a great idea, and it would really help in creating a character.

Anna

08 Apr 2013

My plan for my capstone project is to construct a working, tangible version of the interactive novel concept I prototyped for my Interactivity Project. That said, my sketch looks an awful lot like (read: the same as) the sketches I posted in my Project 3 deliverable post. For an overview of my intentions, please visit the IMISPHYX IV project page, and for a continually updating list of prior art that’s inspired me, try this link.

I’ve iterated slightly on my goals for the project, based upon feedback from the class and also on my own daydreams, of which I tend to have a ton. Moving forward, I hope to accomplish the following things:

1. Implement the reacTIVision prototype.
2. Ditch the objects/props concept from the last iteration, and focus instead on illuminating the contrast between the dialogue that occurs among characters and the thoughts characters hold within them.
3. Really push the way I display text on the table to make it as engaging as possible.

I’m also still torn between using my own personal story, Imisphyx, for this project and proceeding with a story that has already been told, which would at least allow people to compare the interactive version to the original, static version. It would also eliminate my need to worry about perfecting the story at the same time I’m perfecting the interaction, although it *is* arguable that evolving both story and presentation simultaneously is the best way to go.

If I were to shy away from using ‘Imisphyx’, I would like to revisit the Alfred Bester novel I was toying with in my original Project 3 sketch. What’s nice about The Demolished Man is that it deals heavily with the exact themes I’m trying to explore in my interactive piece : the tension between what’s happening inside somebody’s head and what they’re actually saying out loud. I think this story could allow me to play with interesting visualizations of text in the characters’ ‘first person’, ‘internal’ mode.

For example, below is a passage from the book where the murderer, Ben Reich, is trying to get a very annoying song stuck in his head, so that the telepathic cop, Linc Powell, can’t pry beyond it into Reich’s mind to discover his guilt.

A tune of utter monotony filled the room with agonizing, unforgettable banality. It was the quintessence of every melodic cliche Reich had ever heard. No matter what melody you tried to remember, it invariably led down the path of familiarity to “Tenser, Said The Tensor.” Then Duffy began to sing:

Eight, sir; seven, sir;
Six, sir; five, sir;
Four, sir; three, sir;
Two, sir; one!
Tenser, said the Tensor.
Tenser, said the Tensor.
Tension, apprehension,
And dissension have begun.

“Oh my God!” Reich exclaimed.

“I’ve got some real gone tricks in that tune,” Duffy said, still playing. “Notice the beat after `one’? That’s a semicadence. Then you get another beat after `begun.’ That turns the end of the song into a semicadence, too, so you can’t ever end it. The beat keeps you running in circles, like: Tension, apprehension, and dissension have begun. RIFF. Tension, apprehension, and dissension have begun. RIFF. Tension, appre—”

“You little devil!” Reich started to his feet, pounding his palms on his ears. “I’m accursed. How long is this affliction going to last?”

“Not more than a month.”

The description Bester provides of the nature of the song, the patterns it possesses, and its cyclical nature lends itself to some really awesome interactive portrayals. On the table, one could envision Reich’s character object set to ‘internal’ mode, suddenly emitting endless spirals of the annoying, mindworm tune. Perhaps, every time Linc tries to pry into his head, his thoughts are physically deflected on the screen by the facade of Reich’s textual whirlpool. See below:

imisphyx_reactable-05
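The whirlpool could be sketched as text laid along an Archimedean spiral around the character object; all names and constants below are illustrative, not from the project:

```python
import math

# Sketch of the "textual whirlpool": place each character of the earworm
# line along an Archimedean spiral (r = a + b * theta) centered on the
# character object. Constants a, b, and step are illustrative.

def spiral_layout(text, cx, cy, a=2.0, b=1.5, step=0.35):
    """Return (char, x, y, theta) placements along the spiral."""
    placements = []
    theta = 0.0
    for ch in text:
        r = a + b * theta
        x = cx + r * math.cos(theta)
        y = cy + r * math.sin(theta)
        placements.append((ch, x, y, theta))
        theta += step
    return placements

layout = spiral_layout("Tension, apprehension, and dissension have begun. ", 0, 0)
```

Repeating the line and animating `theta` over time would make the spiral churn endlessly, matching the song's semicadence trick of never resolving.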

John

08 Apr 2013

For my capstone project, I’m continuing to build on the Kinect-based drawing system I built for p3. My previous project was, for all intents and purposes, a technical demo that helped me better understand several technologies, including the Kinect’s depth camera, OSC, and openFrameworks. While I definitely got a lot out of the project with respect to the general structure of these systems, my final piece lacked an artistic angle. Further, as Golan pointed out in class, I didn’t make particularly robust use of gestural controls in determining the context of my drawing environment. In the intervening week, I’ve been trying to better understand the relation between the 3D meshes I’ve been able to pull off the Kinect using Synapse and the flow/feel of the space of the application window. Two projects have served as particular inspiration.

 

Bloom, by Brian Eno, is a REALLY early iOS app. What’s compelling here is the looping system, which reperforms simple touch/gestural operations. This sort of looping playback affords a really nice method of storing and recontextualizing previous actions within a series.
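That looping playback can be modeled minimally as a recorder that replays timestamped events modulo the loop length. This is a toy model, not Bloom's actual implementation:

```python
# Toy model of Bloom-style looping playback: record timestamped gesture
# events during one pass, then replay them each time the loop wraps.

class GestureLoop:
    def __init__(self, loop_len):
        self.loop_len = loop_len       # loop duration in seconds
        self.events = []               # (time_in_loop, x, y)

    def record(self, t, x, y):
        self.events.append((t % self.loop_len, x, y))

    def events_at(self, t, window=0.05):
        """Events that should replay near loop-time t."""
        lt = t % self.loop_len
        return [(x, y) for (et, x, y) in self.events if abs(et - lt) <= window]

loop = GestureLoop(loop_len=8.0)
loop.record(1.0, 100, 200)      # a touch during the first pass
replayed = loop.events_at(9.0)  # one loop later, the same event recurs
```

Layering several `GestureLoop` instances with different lengths would give the phasing, slowly recontextualizing repetition that makes Bloom compelling.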

Inkscapes is a recent project out of ITP that uses OF and iPads to create large-scale drawings. Relevant here is the framing of the drawn elements within a generative system. The interplay between user- and system-generated elements provides both depth and serendipity to the piece.

 

layer_stack

 gesturez

gests

Dev

07 Apr 2013

A couple of months ago I came across this amusing and interesting video on Italian hand gestures:

I really loved the fact that gestures can be so elaborate. The man in the video made it seem as if certain cultures turned gestures into a language of their own. Since I liked some of the gestures so much, I looked around and found a book on the topic:

Screen Shot 2013-04-08 at 12.38.28 AM

 

Inside the book there were still pictures and a brief description of how to perform the gestures in them. Although the pictures were very clear, I felt there had to be a better way to express these gestures.

munari_gestures11

The goal of my project is to revisualize some of the pages from Munari’s book using an animatronic hand. I will develop a Processing app that can cycle through pages from the book (the description and the picture). The app will trigger the animatronic hand to perform the gesture displayed on the screen. Overall, this is an interactive reading experience.
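One way to sketch the page-to-gesture link: each page maps to a named gesture, and each gesture is a list of per-finger servo angles. All gesture names and angle values below are made up for illustration, not from the project:

```python
# Hypothetical page-to-gesture mapping for the animatronic hand.
# Gesture names and servo angles are invented for illustration.

GESTURES = {
    "che_vuoi": [170, 160, 160, 160, 150],  # placeholder finger poses
    "ok":       [90, 20, 170, 170, 170],
    "horns":    [40, 170, 30, 30, 170],
}

PAGE_TO_GESTURE = {1: "che_vuoi", 2: "ok", 3: "horns"}

def servo_angles_for_page(page):
    """Look up the servo pose to trigger when a page is shown."""
    name = PAGE_TO_GESTURE.get(page)
    return GESTURES.get(name)

pose = servo_angles_for_page(2)  # angles to send when page 2 is shown
```

The Processing app would run a table like this and stream the angles to the hand's servo controller whenever the reader turns a page.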

For some insight on how the animatronic hand will function see this video:

(Ignore the glove that controls the hand in the video. My project will automatically trigger gestures when the pages are cycled)

 

Patt

07 Apr 2013

My goal is to integrate hardware and software for this project. I recently came back from Beijing, China, and the trip left a footprint on my heart. I had a great time, and I really want to document it in one way or another. This new idea does not deviate from my original idea of creating a map of Pittsburgh with a tracking system of where a person has been in the city. My original idea was to make a map of Pittsburgh, install tons of LED lights behind it, and light them up one by one (or in small groups) as a user visits more places, so the more places a user visits, the brighter the map gets. However, I need to be realistic about the time frame and the skills that I have. Getting the first prototype of the hardware done will take me a lot of time, and I am not entirely sure it will exemplify the concept I have in mind. For this project, I don’t want to create only a demo; it really has to be a poem of some sort.

I want to create an interactive experience between a user and a map. I am planning to laser-cut a map of Beijing. The goal is for a user to be able to learn more about a specific place in the city: when he touches the map (or interacts with it in other ways), photos or information about that place will show up. I have a collection of photos I took while in Beijing, and I think it would be very memorable to document the trip this way. I have not decided exactly what type of interaction I would like to use; right now, I am thinking either the Kinect or light sensors. I want to use the tools I have learned from past projects to create something beautiful and meaningful.
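The touch interaction could be sketched as a nearest-landmark lookup: a touch point on the map resolves to the closest landmark's photos. Landmark names, coordinates, and filenames below are placeholders, not real map data:

```python
import math

# Sketch of the map interaction: given a (x, y) touch point on the
# laser-cut map, return photos of the nearest landmark within range.
# All landmark data here is a placeholder.

LANDMARKS = {
    "Forbidden City": ((50.0, 40.0), ["beijing1.jpg"]),
    "Summer Palace":  ((10.0, 70.0), ["beijing2.jpg"]),
    "Great Wall":     ((80.0, 90.0), ["beijing3.jpg"]),
}

def photos_near(x, y, max_dist=30.0):
    """Return photos for the closest landmark within max_dist."""
    best, best_d = None, max_dist
    for name, ((lx, ly), photos) in LANDMARKS.items():
        d = math.hypot(lx - x, ly - y)
        if d < best_d:
            best, best_d = photos, d
    return best or []

hits = photos_near(48.0, 42.0)  # a touch near the Forbidden City marker
```

Whether the (x, y) comes from a Kinect or from light sensors under the map, the lookup step stays the same, which keeps the interaction choice open.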

Here are some photos from the trip!!

beijing1

beijing2

beijing3

Sam

07 Apr 2013

For my capstone project I will be expanding upon my last one, GraphLambda, a visualization of Lambda Calculus programs.

screenshot_mar31

The current state of the project is more a proof of concept than a usable tool. To achieve my original goal of providing an interactive visual programming environment for Lambda Calculus, I need to:

  1. Improve the layout of the nodes (topological sorting to clarify the order of operations)
  2. Create tools for inserting, manipulating, and deleting elements of Lambda expressions
  3. Provide a clearer link between the text and pictorial representations (perhaps using some sort of synchronized highlighting)
  4. Provide a way to see the evaluation of the program in steps
  5. Enable collapsing of expression groups for readability
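The stepwise evaluation goal can be sketched with lambda terms as nested tuples and a single leftmost beta-reduction step per call. Variable capture is ignored here for brevity; a real tool needs alpha-renaming:

```python
# Minimal stepwise lambda evaluator sketch. Terms are tuples:
#   ("var", name), ("lam", name, body), ("app", func, arg).
# Capture-avoiding substitution is omitted for brevity.

def subst(term, name, value):
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        if term[1] == name:            # bound variable shadows name
            return term
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One leftmost-outermost beta step, or None if term is normal."""
    if term[0] == "app":
        f, a = term[1], term[2]
        if f[0] == "lam":              # redex: (\x. body) a
            return subst(f[2], f[1], a)
        rf = step(f)
        if rf is not None:
            return ("app", rf, a)
        ra = step(a)
        if ra is not None:
            return ("app", f, ra)
    if term[0] == "lam":
        rb = step(term[2])
        if rb is not None:
            return ("lam", term[1], rb)
    return None

# (\x. x) y  reduces in one step to  y
identity_app = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
result = step(identity_app)
```

Calling `step` repeatedly until it returns `None` yields exactly the frame-by-frame sequence the visualizer would animate.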

After achieving these goals, I plan to use the visualizer to record a video introduction to Lambda Calculus in this form. This paper from a researcher at FU Berlin gives a good overview of introductory concepts, and will probably form an outline for my video. I hope in the end to be able to offer this tool to the Computer Science department as a way to help their students understand this confusing topic.

Secondary features to be added as time permits:

  1. Investigate ways of coloring the objects to clarify relationships in the program structure
  2. Smooth animations for transitioning between views of the program (insertion/deletion, evaluation)
  3. Built-ins for certain common elements (numbers, logical operations, named functions)

Caroline

07 Apr 2013

FaceGraphic

photo (3)

For my final project I want to use rough posture recognition to create a system that triggers a photograph as soon as a pose enters a certain set of parameters. In the photographs above, I attempted a rough approximation of this system. The photographs in the top row were taken when faceOSC detected that eyebrow position and mouth width both equaled two; the second row is when the system detected that both parameters equaled zero; and the bottom row shows a few samples of the mess-ups.
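The trigger condition can be sketched as a tolerance window over the tracked parameters. The parameter names, target values, and tolerance below are assumptions for illustration, not the actual faceOSC output:

```python
# Sketch of the capture trigger: fire the camera when every tracked
# face parameter sits within a tolerance of its target. Parameter
# names and the tolerance value are illustrative assumptions.

def should_capture(params, target, tol=0.25):
    """True when every tracked parameter is within tol of its target."""
    return all(abs(params[k] - target[k]) <= tol for k in target)

target_pose = {"eyebrows": 2.0, "mouth_width": 2.0}

fires = should_capture({"eyebrows": 2.1, "mouth_width": 1.9}, target_pose)
holds = should_capture({"eyebrows": 0.0, "mouth_width": 2.0}, target_pose)
```

A small debounce (require the condition to hold for a few consecutive frames) would cut down the mess-up row considerably.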

I have a couple ideas of how I might implement this project at varying levels of complexity:

  • Trigger a DSLR camera whenever the face or body is in a particular position, and make a large-format print of the faces in a grid in their various positions.
  • Record rough face-tracking data of a face making a certain gesture, capture that gesture frame by frame, and then capture photographs that imitate the gesture frame by frame.
  • Trigger photographs when people reach certain pitch/volume combinations, and create an interactive installation that you sing to, which brings up the faces of the people who were singing the closest pitch/volume combination.

All of these ideas involve figuring out how to trigger a DSLR photograph from the computer and storing a database of images based on their various properties. Here are some resources I have come up with to help me figure out how to trigger a DSLR:

In terms of databasing photographs based on their various properties, Golan recommended looking into principal component analysis, which allows you to reduce many axes of similarity to a manageable number. He drew me a beautiful picture of how it works:

photo (2)

 

I also found an openFrameworks thread that pretty much describes this project. Here are some of the influences I pulled out of it:

Stop Motion by Ole Kristensen

 

Cheese by Christian Moeller

Ocean_v1. by Wolf Nkole Helzle

Joshua

07 Apr 2013

To start playing around with branching structures, I implemented diffusion-limited aggregation (DLA) in Rhino Python (Rhino is a 3D modeling program). As long as the generating algorithm doesn’t require some insane speed, I think it makes sense to work in Rhino Python, since everything is already in CAD format and therefore much easier to move into the manufacturing process. The following is a 3D DLA with particles fired from one random point on a circle to another random point. If a particle comes within “stick range” of a node of the existing structure, it stops and is added to the structure. All of the branches from the root to that node then get a little thicker. Clearly something more involved than little tubes would be required to make something manufacturable. I am not really sure of the best way to do this. Maybe some sort of volume sweeping with spheres? Or maybe make really coarse meshes and run smoothing algorithms.
2013-04-08 09.30.55-3
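For the flavor of the algorithm, here is a tiny 2D grid version of the DLA described above. The actual project is 3D and lives in Rhino Python; this toy is only an illustration:

```python
import random

# Tiny 2D DLA sketch: particles walk randomly from the top edge of a
# grid and stick when they come within one cell of the structure.
# The real project does this in 3D with particles fired across a circle.

def dla(n_particles, size=21, seed=1):
    random.seed(seed)
    c = size // 2
    stuck = {(c, c)}                          # seed node in the center
    for _ in range(n_particles):
        x, y = random.randint(0, size - 1), 0  # release at the top edge
        for _ in range(10000):                 # cap the walk length
            if any((x + dx, y + dy) in stuck
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)):
                stuck.add((x, y))              # within stick range
                break
            x = min(size - 1, max(0, x + random.choice((-1, 0, 1))))
            y = min(size - 1, max(0, y + random.choice((-1, 0, 1))))
    return stuck

structure = dla(30)
```

The thickening rule from the post would add one more step: walk from each newly stuck node back to the root, incrementing a per-branch radius.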
Really, I would rather do something that simulates heat conduction and fluids (with heat transfer) over 3D meshes, and have them grow so that nodes on the mesh are greedy for cold areas and release inhibiting agents to keep their neighbors from stealing their coolth. Like corals, sort of. But I don’t think I can do that in a few weeks: I know very little about heat transfer, and I am just beginning to learn the Navier-Stokes equations and related concepts used in computational fluid dynamics. Maybe I can approximate this with the particle DLA?


We also face several difficulties in getting this thing made in aluminum and anodized black in time for the final crit: lost-wax slurry casting, 3D printing, anodization, extrusion. We are going to make a functional sculpture of lightning that will be big and heavy.


2013-04-08 09.36.12 copy

Michael

07 Apr 2013

First, follow this link to Gigapan Time Machine.

SDO

I have been entranced by this interactive timelapse of the sun for years, and there is enough detail in each of the layers that I still haven’t found all of the interesting events. For my capstone project, I would like to augment the interaction in a way that allows users to better view multiple frequency bands simultaneously. Many of the events can be understood and appreciated more fully by watching them unfold across multiple bands. The current method of selecting them is somewhat clumsy, though, and only one video can be viewed at a time.

I’m still trying to come up with a good way for users to manipulate and blend the frequency visualizations in a way that decreases confusion rather than increasing it. My current plan is to implement a sort of magnifying glass that can either zoom in locally or reveal a different frequency than the background layer, so that the user can examine points of origin and dissipation while the video plays. A still from the video might look something like this:

sunstack2
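The lens compositing rule itself is simple to sketch: inside the circle, sample the alternate frequency band; outside it, the background band. Images here are plain 2D lists of values, for illustration only:

```python
# Sketch of the "magnifying glass" compositing rule: pixels inside a
# circular lens come from the alternate frequency band, the rest from
# the background band. Frames are toy 2D lists, not real SDO data.

def lens_blend(base, alt, cx, cy, radius):
    """Compose base and alt layers, showing alt inside the lens circle."""
    out = []
    for y, row in enumerate(base):
        out_row = []
        for x, px in enumerate(row):
            inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
            out_row.append(alt[y][x] if inside else px)
        out.append(out_row)
    return out

base = [[0] * 5 for _ in range(5)]   # background band (all zeros)
alt  = [[9] * 5 for _ in range(5)]   # alternate band (all nines)
mixed = lens_blend(base, alt, cx=2, cy=2, radius=1)
```

Since the rule is applied per frame, the lens follows the cursor while the video keeps playing; a soft alpha falloff at the circle's edge would make the reveal less abrupt.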

Another option might be to allow the user to paint his or her own viewing windows in different frequency bands, as shown below.

sunstack

 

I’m currently leaning away from this path, though, because I worry that painting static image windows isn’t a good idea for a video that is constantly in motion, especially since the sun is constantly turning to the right.

Any feedback is welcome!