Daily Archives: 07 Apr 2013

Dev

07 Apr 2013

A couple of months ago I came across this amusing and interesting video on Italian hand gestures:

I really loved the fact that gestures can be so elaborate. The man in the video made it seem as if certain cultures had turned gestures into a language of their own. Since I liked some of the gestures so much, I looked around and found a book on the topic:

[Image: the book’s cover]


Inside the book were still pictures and brief descriptions of how to perform the gestures shown. Although the pictures were very clear, I felt that there had to be a better way to express these gestures.

[Image: a page of gesture photos from Munari’s book]

The goal of my project is to revisualize some of the pages from Munari’s book using an animatronic hand. I will develop a Processing app that can cycle through pages from the book (the description and the picture). The app will trigger the animatronic hand to perform the gesture displayed on the screen. Overall, this is an interactive reading experience.

For some insight on how the animatronic hand will function, see this video:

(Ignore the glove that controls the hand in the video. My project will trigger gestures automatically as the pages are cycled.)
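
A minimal sketch of that page-to-gesture triggering, in Python for illustration (the final app will be in Processing); the serial port, baud rate, one-byte protocol, and filenames below are all assumptions:

```python
# A sketch, not the final Processing app: cycle through book pages and send
# a one-byte gesture ID to the animatronic hand's microcontroller.
# The serial port, baud rate, protocol, and filenames are all assumptions.
import time
import serial  # pyserial

PAGES = [
    ("page_che_vuoi.jpg",  "What do you want?", 0),
    ("page_perfetto.jpg",  "Perfect!",          1),
    ("page_cosi_cosi.jpg", "So-so",             2),
]

hand = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def show_page(image_path, caption, gesture_id):
    # The real app would draw the scanned page; here we just log it
    # and fire the matching gesture.
    print(f"Displaying {image_path}: {caption}")
    hand.write(bytes([gesture_id]))  # firmware maps the ID to servo poses

for page in PAGES:
    show_page(*page)
    time.sleep(5)  # give the reader time before cycling to the next page
```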


Patt

07 Apr 2013

My goal is to integrate hardware and software for this project. I recently came back from Beijing, China, and the trip left a footprint on my heart. I had a great time, and I really want to document it in one way or another. This new idea does not deviate from my original idea of creating a map of Pittsburgh with a system for tracking where a person has been in the city. My original idea was to make a map of Pittsburgh, install tons of LED lights behind it, and light them up one by one (or in small groups) as a user visits more places, so the more places a user visits, the brighter the map gets. However, I need to be realistic about the time frame and the skills that I have. Getting the first prototype of the hardware done will take me a lot of time, and I am not entirely sure it will exemplify the concept I have in mind. For this project, I don’t want to create only a demo; it really has to be a poem of some sort.

I want to create an interactive experience between a user and a map. I am planning on laser cutting a map of Beijing. The goal is for a user to learn more about a specific place in the city: touch the map (or interact with it in other ways), and photos or information about that place will appear. I have a collection of photos I took while in Beijing, and I think it will be very memorable to document the trip this way. I have not decided exactly which type of interaction I would like to use; right now, I am thinking either the Kinect or light sensors. I want to use the tools I have learned in past projects to create something beautiful and meaningful.
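
A rough sketch of the light-sensor route, in Python for illustration; the serial message format, port name, and photo filenames are placeholders, not decisions:

```python
# A rough sketch of the light-sensor route. Assumes an Arduino prints the
# index of the covered sensor as a line like "2" over serial; the port,
# message format, and photo filenames are placeholders, not decisions.
import serial  # pyserial

PHOTOS_BY_REGION = {
    0: ["forbidden_city_1.jpg", "forbidden_city_2.jpg"],
    1: ["great_wall.jpg"],
    2: ["hutong_alley.jpg"],
}

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

while True:
    line = port.readline().decode("ascii", "ignore").strip()
    if not line.isdigit():
        continue  # ignore noise and read timeouts
    region = int(line)
    for photo in PHOTOS_BY_REGION.get(region, []):
        print("show", photo)  # a display app would render these full-screen
```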

Here are some photos from the trip!!

[Photos: beijing1, beijing2, beijing3]

Sam

07 Apr 2013

For my capstone project I will be expanding upon my last project, GraphLambda, a visualization of Lambda Calculus programs.

[Image: GraphLambda screenshot]

The current state of the project is more a proof of concept than a usable tool. To move it toward my original goal of an interactive visual programming environment for Lambda Calculus, I need to:

  1. Improve the layout of the nodes (topological sorting to clarify the order of operations)
  2. Create tools for inserting, manipulating, and deleting elements of Lambda expressions
  3. Provide a clearer link between the textual and pictorial representations (perhaps using some sort of synchronized highlighting)
  4. Provide a way to see the evaluation of the program in steps (see the sketch after this list)
  5. Enable collapsing of expression groups for readability
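
To make goal 4 concrete, here is a minimal step-wise evaluator in plain Python, not GraphLambda’s actual code; GraphLambda would render each printed step as a diagram and highlight the active redex:

```python
# A minimal step-wise evaluator: terms are nested tuples
# ('var', name) / ('lam', name, body) / ('app', f, x), and substitution is
# naive (no capture-avoiding renaming), which is fine for illustration only.
def subst(term, name, value):
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        return term if term[1] == name else ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """Return the term after one normal-order beta step, or None if done."""
    kind = term[0]
    if kind == 'app':
        f, x = term[1], term[2]
        if f[0] == 'lam':                # leftmost-outermost redex
            return subst(f[2], f[1], x)
        r = step(f)
        if r is not None:
            return ('app', r, x)
        r = step(x)
        if r is not None:
            return ('app', f, r)
    if kind == 'lam':
        r = step(term[2])
        if r is not None:
            return ('lam', term[1], r)
    return None

def show(t):
    if t[0] == 'var':
        return t[1]
    if t[0] == 'lam':
        return "(\\%s. %s)" % (t[1], show(t[2]))
    return "(%s %s)" % (show(t[1]), show(t[2]))

# (\x. x x) (\y. y)  ->  (\y. y) (\y. y)  ->  (\y. y)
t = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
            ('lam', 'y', ('var', 'y')))
while t is not None:
    print(show(t))       # each printed term is one frame of the evaluation
    t = step(t)
```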

After achieving these goals, I plan to use the visualizer to record a video introduction to Lambda Calculus in this form. This paper from a researcher at FU Berlin gives a good overview of introductory concepts and will probably form an outline for my video. In the end, I hope to offer this tool to the Computer Science department as a way to help their students understand this confusing topic.

Secondary features to be added as time permits:

  1. Investigate ways of coloring the objects to clarify relationships in the program structure
  2. Smooth animations for transitioning between views of the program (insertion/deletion, evaluation)
  3. Built-in versions of certain common elements (numbers, logical operations, named functions)

Caroline

07 Apr 2013

FaceGraphic

[Image: grid of test photographs]

For my final project I want to use rough posture recognition to create a system that triggers a photograph as soon as a pose enters a certain set of parameters. The photographs above are a rough approximation of this system: the top row was taken when faceOSC detected that eyebrow position and mouth width both equaled two, the second row when the system detected that both parameters equaled zero, and the bottom row shows a few samples of the mess-ups.
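
A rough sketch of that trigger logic in Python, assuming faceOSC’s default port (8338) and these message addresses, which should be checked against the actual app; the print call stands in for a real DSLR trigger:

```python
# Assumes faceOSC's default port and address paths; a real version would
# also debounce so one held pose doesn't fire dozens of frames.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

state = {"eyebrow": 0.0, "mouth": 0.0}

def check():
    # fire when both parameters sit in the target window ("equaled two")
    if abs(state["eyebrow"] - 2.0) < 0.1 and abs(state["mouth"] - 2.0) < 0.1:
        print("trigger DSLR here")

def on_eyebrow(address, value):
    state["eyebrow"] = value
    check()

def on_mouth(address, value):
    state["mouth"] = value
    check()

dispatcher = Dispatcher()
dispatcher.map("/gesture/eyebrow/left", on_eyebrow)
dispatcher.map("/gesture/mouth/width", on_mouth)
BlockingOSCUDPServer(("127.0.0.1", 8338), dispatcher).serve_forever()
```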

I have a couple ideas of how I might implement this project at varying levels of complexity:

  • Trigger a DSLR camera whenever the face or body is in a particular position, then make a large-format print of the faces in a grid in their various positions.
  • Record rough face-tracking data of a face making a certain gesture, capture that gesture frame by frame, and then capture photographs that imitate the gesture frame by frame.
  • Trigger photographs when people reach certain pitch/volume combinations, then create an interactive installation that you sing to, which brings up the faces of the people who sang the closest pitch/volume combination.

All of these ideas involve figuring out how to trigger a DSLR photograph from the computer and storing a database of images based on their various properties. I have gathered some resources to help me figure out how to trigger a DSLR.

In terms of organizing a database of photographs by their various properties, Golan recommended looking into principal component analysis, which allows you to reduce many axes of similarity to a manageable number. He drew me a beautiful picture of how it works:

[Image: Golan’s drawing of how principal component analysis works]
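
A tiny numpy illustration of the idea, assuming each photograph has already been flattened into a fixed-length feature vector (the data here is a random stand-in):

```python
# PCA via SVD: project each photo's feature vector onto the two strongest
# axes of variation so similar photos land near each other.
import numpy as np

features = np.random.rand(100, 64)   # stand-in: 100 photos, 64 features each
centered = features - features.mean(axis=0)

# the principal axes are the right singular vectors of the centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T         # project onto the top 2 components

# photos near each other in `coords` are similar along the strongest axes
# of variation, which makes grid layout and lookup manageable
print(coords.shape)                  # (100, 2)
```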


I also found an openFrameworks forum thread that pretty much describes this project. Here are some of the influences I pulled out of it:

Stop Motion by Ole Kristensen


Cheese by Christian Moeller

Ocean_v1. by Wolf Nkole Helzle

Joshua

07 Apr 2013

In order to start playing around with branching structures, I implemented diffusion-limited aggregation in Rhino Python (Rhino is a 3D modeling program). As long as the generating algorithm doesn’t require some insane speed, I think it makes sense to work in RhinoPython, since everything is already in CAD format and therefore much easier to move into the manufacturing process. The following is a 3D DLA with particles fired from one random point on a circle to another random point. If a particle comes within “stick range” of a node of the existing structure, it stops and is added to the structure, and all of the branches from the root to that node get a little thicker. Clearly something more involved than little tubes would be required to make something manufacturable, and I am not sure of the best way to do this. Maybe some sort of volume sweeping with spheres? Or maybe make really coarse meshes and run smoothing algorithms.
[Image: 3D DLA structure]
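
For reference, a stripped-down 2D version of the growth loop in plain Python (no Rhino calls, no branch thickening, constants invented for illustration):

```python
# Each particle flies in a straight line between two random points on a
# bounding circle and sticks to the structure the first time it passes
# within STICK range of an existing node.
import math
import random

STICK, R = 2.0, 50.0
nodes = [(0.0, 0.0)]                     # seed the structure at the center

def random_on_circle():
    a = random.uniform(0, 2 * math.pi)
    return (R * math.cos(a), R * math.sin(a))

for _ in range(500):
    (x0, y0), (x1, y1) = random_on_circle(), random_on_circle()
    steps = int(2 * R)                   # sample points along the chord
    for i in range(steps + 1):
        t = i / steps
        p = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        if any(math.dist(p, n) < STICK for n in nodes):
            nodes.append(p)              # the particle sticks and joins
            break

print(len(nodes), "nodes grown")
```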
Really, I would rather simulate heat conduction and fluids (with heat transfer) over 3D meshes and have them grow so that nodes on the mesh are greedy for cold areas and release inhibiting agents to keep their neighbors from stealing their coolth. Like corals, sort of. But I don’t think I can do that in a few weeks: I know very little about heat transfer, and I am just beginning to learn the Navier-Stokes equations and related concepts used in computational fluid dynamics. Maybe I can approximate this with the particle DLA?


We also face several difficulties in getting this thing made in aluminum and anodized black in time for the final crit. Between lost-wax slurry casting, 3D printing, anodization, and extrusion, we are going to make a functional sculpture of lightning that will be big and heavy.



Michael

07 Apr 2013

First, follow this link to Gigapan Time Machine.

[Image: still from the SDO solar timelapse]

I have been entranced by this interactive timelapse of the sun for years, and there is enough detail in each of the layers that I still haven’t found all of the interesting events. For my capstone project, I would like to augment the interaction in some way that allows users to better view multiple frequency bands simultaneously. Many of the events can be understood and appreciated more fully by watching them unfold across multiple frequency bands, but the current method of selecting bands is somewhat clumsy, and only one video can be viewed at a time.

I’m still trying to come up with a good way for users to manipulate and blend the frequency visualizations in a way that decreases confusion rather than increasing it. My current plan is to implement a sort of magnifying glass that can either zoom in locally or reveal a different frequency band than the background layer, so that the user can examine points of origin and dissipation while the video plays. A still from the video might look something like this:

[Image: mock-up of the magnifying-glass overlay]
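
A standalone Python/OpenCV sketch of the lens compositing (the actual viewer runs in the browser; the two filenames are placeholders for aligned frames of the same moment in different bands):

```python
# Reveal a second frequency band inside a filled circle around the cursor.
# Assumes both images exist and are the same size.
import cv2
import numpy as np

background = cv2.imread("sun_171A.png")  # base band
overlay    = cv2.imread("sun_304A.png")  # band revealed inside the lens

def composite(cursor_xy, radius=80):
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    cv2.circle(mask, cursor_xy, radius, 255, thickness=-1)  # filled lens
    out = background.copy()
    out[mask > 0] = overlay[mask > 0]    # swap pixels inside the lens
    cv2.circle(out, cursor_xy, radius, (255, 255, 255), 2)  # lens rim
    return out

cv2.imshow("lens", composite((400, 300)))
cv2.waitKey(0)
```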

Another option might be to allow the user to paint his or her own viewing windows in different frequency bands, as shown below.

[Image: mock-up of painted viewing windows in different bands]


I’m currently leaning away from this path, though, because I worry that painting static image windows isn’t a good idea for a video that is always in motion, especially since the sun is constantly turning to the right.

Any feedback is welcome!


Nathan

07 Apr 2013

I am currently making work based on the ideas of Apprehension, Impact, Aggressiveness, and Gentleness. Through my own video performance work, I have developed a sculptural language that deals with these thoughts and the actions that relate to them. My current interpretation is about purposefully being in the potential danger zone of an active simple machine throwing things at me.

First, I propose constructing 3 separate objects that throw things at me:
1. A Catapult Mechanism
2. A Pitching Mechanism
3. A Simple Lever Drop Mechanism
[Images: sketches of the throwing mechanisms]
These mechanisms will throw one of the following:
1. Lightbulbs
2. Hot Wheels
3. Wine Glasses (maybe… potentially sugar glass if I can get my hands on it. I tested out wine glasses and they fucking hurt).

Second, I want to do multiple video performances of myself interacting with these mechanisms. The Pitching Mechanism will be shown at my exhibition, Uncontrol, on April 19th, 2013 at the Frame Gallery.

Third, I am proposing that all 3 objects be displayed together, with the videos of the performance work next to them, at the final exhibition of the IACD class.

Examples of other work in Context

Red Apple [Impact] from Nathan Trevino on Vimeo.

Bulb II [Impact] from Nathan Trevino on Vimeo.

Egg [Impact] from Nathan Trevino on Vimeo.

Sex Machine I from Nathan Trevino on Vimeo.

Yvonne

07 Apr 2013

My project idea is “Sketch a Level” (name pending): a rig where you can take a piece of paper, sketch a drawing on it (say, a maze), and then have the computer read the drawing and project characters onto the paper. You would control your character with the movement of your finger on the paper or the movement of a pencil; I haven’t decided yet.

A quick concept drawing.

The first item on my list is the game rig. Once I get that done, the rest is programming. Based on my last project, I think it would be a lot easier on me if I could write my program using the actual setup. Last time I wrote my program using a mock-up at CMU, which wasn’t the same as my setup at home, and that ultimately just made things annoying and time-consuming.

Sketch for the game rig; I’m probably going to use our old glass office table.

Another sketch, with some ideas on what I need to do.

Shape recognition. Portals, death traps, and other special symbols.

I’m thinking of using one of our office tables at home and a handy old projector to project the characters from the bottom up (onto the glass and paper), then a simple rig on the table to hold the camera, kind of like a lamp, but not.

Inspiration… Mm.

SketchSynth is a project done last year in IACD. My project will probably be along the same lines as that one; in fact, they’re practically identical excluding the content. He did it to create a GUI; I’m doing it for a game.

Notes to self:

  • Set up the table rig (glass top, camera holder, and projection holder). The camera looks down; the projector projects up through the glass. Line up the camera image with the projection. The frosted film on the glass should work.
  • Mark off an area to sketch in. It needs to be consistent, otherwise I will have to calibrate for every session. The sketch needs a consistent black border due to the way the collision maps are generated. A piece of black acrylic, lasercut and fixed to the table, should do.
  • Re-program the AI to be more intelligent. The best suggestion I got was to implement individual personalities, similar to how the Pac-Man game does it.
  • Re-configure the preexisting game setup. Basically, fix the GUI for this application.
  • Work on shape recognition (hard) or color recognition (easy). Shape recognition could turn out to be a pain for me, especially since my programming experience is… well… 2 semesters, not even. I’ve done some reading and it’s not promising. Color recognition is easy; I have dabbled with it before. I could have it so certain colors mean different things: a portal, a death trap, a power pill, etc.
  • Methods of control will vary as time goes on. I will start with a keyboard, the easiest means of interaction. Eventually I hope to do one of the following: finger recognition, where the character traces the path of your finger (this has been done with a webcam and can also be done with a Kinect; I haven’t done it personally, though I have done hand tracking on the Kinect before), or, the easier route, color tracking. I could use a pencil in a particular color not present on the paper or setup, and the character could follow the pencil (a quick sketch of this follows these notes).
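
A minimal OpenCV color-tracking sketch; the HSV bounds are invented placeholders that would need tuning to the actual pencil color and lighting:

```python
# Threshold a hue range in HSV and follow the blob's centroid.
import cv2
import numpy as np

LOWER = np.array([100, 120, 70])     # assumed "blue pencil" range
UPPER = np.array([130, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] > 0:                 # blob found: its centroid is the target
        x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (x, y), 8, (0, 255, 0), 2)  # character follows this
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:         # Esc quits
        break
cap.release()
```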

Questions answered:

  • Are there an unlimited number of portals, death traps, and other special symbols? No, I will probably set it up so the computer recognizes a maximum of, say… 3.
  • If you lift your finger off the paper and then place it on another portion of the paper, will the character teleport? No, I will probably set it up so the character moves through the maze to the position of your finger. There will be no instant teleportation, except through the drawn portals.
  • If the character enters one portal and there are, say, 5, which one will the character pop out of? Are the portals linked? It will be random: the character will enter one portal and randomly pop out of another. It’s a game of chance.
  • Any size paper? No, probably not. I’m thinking standard letter size or 11×17.
  • Scale of symbols and maze, how does that affect the characters? I’m not sure. It would be difficult for me to program something with variable size… At least, well… I don’t know. I guess I could try to measure the smallest gap and then base the character size off the measured gap. Then the characters would re-size according to the map. I’ve never programmed something like that before, so I’m not sure if what I am thinking would work.