Monthly Archives: April 2013

Yvonne

07 Apr 2013

My project idea is “Sketch a Level” (name pending) – a rig where you can take a piece of paper, sketch a drawing on it (say, a maze), and then have the computer read the drawing and project characters onto the paper. You would control your character with the movement of your finger on the paper or the movement of a pencil; I haven’t decided which yet.

concept-sketch
A quick concept drawing.

The first item on my list is the game rig. Once I get that done, the rest is programming. Based on my last project, I think it would be a lot easier on me if I could write my program using the actual setup. Last time I wrote my program using a mock-up at CMU, which wasn’t the same as my setup at home… which ultimately just made things annoying and time-consuming.

sketch1
Sketch for the game rig; I’m probably going to use our old glass office table.

sketch2
Another sketch, with some ideas on what I need to do.

sketch3
Shape recognition. Portals, death traps, and other special symbols.

I’m thinking of using one of our office tables at home, with a handy old projector projecting the characters from the bottom up (onto the glass and paper). Then a simple rig on the table will hold the camera, kind of like a lamp, but not.

Inspiration… Mm.

SketchSynth is a project done last year in IACD. My project will probably be along the same lines as this one; in fact, they’re practically identical except for the content. He did it to create a GUI; I’m doing it for a game.

Notes to self:

  • Setup table rig (glass top, camera holder, and projection holder). Camera looks down, projector projects up through glass. Line up camera image with projection. The frosted film on the glass should work.
  • Mark off area to sketch, needs to be consistent, otherwise I will have to calibrate for every session. Sketch needs to have a consistent black border due to the way the collision maps are generated. A piece of black acrylic lasercut and fixed to the table should do.
  • I need to re-program the AI to be more intelligent. The best suggestion I got was to implement individual personalities, similar to how the Pac-Man game does it.
  • Re-configure preexisting game setup. Basically, fix the GUI for this application.
  • Work on shape recognition (hard) or color recognition (easy). Shape recognition could turn out to be a pain for me, especially since my programming experience is… well… 2 semesters, not even. I’ve done some reading and it’s not promising. Color recognition is easy; I have dabbled with it before. I could have it so certain colors mean different things: a portal, a death trap, a power pill, etc.
  • Methods of control will vary as time goes on. I will start with a keyboard, the easiest means of interaction. Eventually I hope to do one of the following: finger recognition, where the character traces the path of your finger. This has been done with a webcam and can also be done with a Kinect; I haven’t done it personally, though I have done hand tracking on the Kinect before. Another, easier route is color tracking: I could use a pencil in a particular color not present on the paper or setup, and the character could follow the pencil.
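As a rough feasibility check of the color-tracking route in the last note, here is a minimal sketch (plain Python; the marker color, tolerance, and tiny test frame are all hypothetical): threshold each frame for a marker color that never appears in the sketch itself, then steer the character toward the centroid of the matching pixels. On the real rig this same thresholding would run on webcam frames, e.g. through OpenCV.

```python
# Hedged sketch of color-based pencil tracking. The marker color and
# tolerance below are made-up values for illustration.

def is_marker(pixel, target=(0, 200, 0), tol=60):
    """True if an (r, g, b) pixel is within `tol` of the marker color."""
    return all(abs(c - t) <= tol for c, t in zip(pixel, target))

def track_marker(frame):
    """Return the (x, y) centroid of marker-colored pixels, or None."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if is_marker(pixel):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A tiny 3x3 "frame": one greenish pixel at (2, 1).
frame = [
    [(255, 255, 255), (0, 0, 0),       (255, 255, 255)],
    [(255, 255, 255), (255, 255, 255), (0, 210, 10)],
    [(0, 0, 0),       (255, 255, 255), (255, 255, 255)],
]
print(track_marker(frame))  # -> (2.0, 1.0)
```

The character then just moves toward the returned centroid each frame, which also gives the “no teleporting” behavior for free.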

Questions answered:

  • Is there an unlimited number of portals, death traps, and other special symbols? No, I will probably set it up so the computer recognizes a maximum of, say… 3.
  • If you lift your finger off the paper and then place it on another portion of the paper, will the character teleport? No, I will probably have it set up so the character moves through the maze to the position of your finger. There will be no instant teleportation, except through the drawn portals.
  • If the character enters one portal, and there are say 5, which one will the character pop out of? Are the portals linked? It will be random: the character will enter one portal and randomly pop out of another. It’s a game of chance.
  • Any size paper? No, probably not. I’m thinking standard letter size or 11×17.
  • Scale of symbols and maze: how does that affect the characters? I’m not sure. It would be difficult for me to program something with variable size… At least, well… I don’t know. I guess I could try to measure the smallest gap and then base the character size off the measured gap. Then the characters would re-size according to the map. I’ve never programmed something like that before, so I’m not sure whether what I’m thinking would work.
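The gap-measurement idea in that last answer is simpler than it sounds. A hedged sketch (assuming the sketch has already been thresholded into a binary grid of wall/free cells, with the 0.8 margin factor being arbitrary): scan every row and column for the shortest run of free cells bounded by walls on both sides, and size the character from that.

```python
# Hedged sketch: estimate the narrowest gap in a binary maze grid
# (True = wall) and size the character relative to it.

def bounded_runs(line):
    """Yield lengths of free-cell runs with a wall on both sides."""
    length = 0
    saw_wall = False
    for cell in line:
        if cell:  # wall
            if saw_wall and length > 0:
                yield length
            length = 0
            saw_wall = True
        else:
            length += 1
    # runs touching the grid edge are not bounded, so they are ignored

def smallest_gap(grid):
    gaps = [n for row in grid for n in bounded_runs(row)]
    gaps += [n for col in zip(*grid) for n in bounded_runs(col)]
    return min(gaps) if gaps else None

W, O = True, False  # wall, open
maze = [
    [W, W, W, W, W],
    [W, O, O, O, W],
    [W, W, W, O, W],
    [W, O, O, O, W],
    [W, W, W, W, W],
]
gap = smallest_gap(maze)       # narrowest corridor, in cells
character_size = 0.8 * gap     # leave some margin so the sprite fits
print(gap)  # -> 1
```

This is also where the consistent black border from the notes pays off: with walls guaranteed on every side, no corridor run is left unbounded.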

Caroline

03 Apr 2013

For my final project I want to create a gesture recognition driven photo booth animation tool. So here is my research for that:

 

The Robot Readable World by Timo Arnall

This piece is a non-expert reflection on the nature of the computer’s gaze into the world. It is composed of appropriated footage from research on computer vision.

This piece is one of the most beautiful works I have ever seen that engages with the aesthetics of technology. It expresses the insect-like movement of the computational gaze, which is sometimes terribly precise and sometimes wandering and almost whimsical. It also presents a pleasing juxtaposition of the mundane footage being analyzed and the powerful math being pointed at it.

My final project will use gesture and facial recognition to trigger photo taking and I hope that there will be the same hint of wandering computational gaze.

Content Retargeting Using Parameter-Parallel Facial Layers by Prof. Yaser

Professor Yaser has led an amazing series of projects that deal with reconstructing human motion from images of all fidelities. In this project, Yaser uses face tracking to puppet the movements of various avatars.

droppedImage

The tracking data is stored and expressed in three different layers: emotion, mouth movement, and blinking. They use a series of extreme stored images in each category to serve as references. To get the face-tracking data, they used motion capture to create a mesh, and then paid attention to the mouth and eyes only when changes occurred.

Although this project is way out of my technical scope, it’s a good reference to see what has been done before.

Venus Webcam by Addie Wagenknecht & Pablo Garcia

This project asks internet posers to pose in positions depicted in famous paintings. These images are then sent off to China to be painted.


This project describes itself as hacking a community of people rather than a code environment. It creates an unexpected link between internet culture and high culture. I read it as a commentary on how we assign value to images, which is especially evident in the decision to have them fabricated in paint.

Dev

02 Apr 2013

For the capstone I finally decided to do a hardware project. I was inspired by this blog post, which I had read some months back. Please watch the videos in it; they are quite entertaining:
http://www.andrespagella.com/important-gestures-public-speaking

After reading that article I sorta fell in love with Italian hand gestures. They are so expressive, and I love the fact that the gestures themselves form a language. For this project I want to create a piece that will make learning these beautiful gestures easy and interactive.

As a source I plan on using Speak Italian: The Fine Art of Gesture (http://www.amazon.com/Speak-Italian-The-Fine-Gesture/dp/0811847748/?tag=braipick-20).

My goal is to translate gestures into animatronic hands. There are several instructables on how to make these. (http://www.instructables.com/id/Simple-Animatronics-robotic-hand/)

At the end of the day I want users to speak some word or words in English and have them spoken back with an Italian accent and the appropriate hand gestures.

Robotic hands are nothing new, and neither is the concept of having them gesture (see video below). The interesting part of this project will be the fact that I am reinterpreting Bruno Munari’s book in a very physical way.

Alan

02 Apr 2013

Abstraction: Browser and iPhone Interaction Experiment

Background: Many interactions rely on sensors like the Kinect, Leap Motion, or Wii. All of these require additional hardware, which limits context and adds cost: such systems are either scalable but expensive, or built from cheap devices but limited in size. An approach that uses ubiquitous devices like smartphones or iPads as the sensors is therefore on the right track. Using the browser as the interaction medium also frees the system from the heterogeneity of different mobile operating systems.

Inspired by Boundary Functions (1998) by Scott Snibbe, in which every person in the real world has a physical relationship with others, I am thinking about the relations between people in virtual space. This interactive installation will be set in public spaces like airports and train stations. People get onto the Internet through their devices, so a person’s identity is represented by the identity of their digital device. Through interactions between their mobile devices, people get a sense of their relations to others (anonymous or not) in the same public space.


Verification & Communication

I haven’t implemented verification for the system yet. Users access the system by visiting the server’s IP address over a WebSocket connection. Once a device is connected, the server generates a new ball for the user, and it can read the device ID to identify the user.


Interaction

The browser on the mobile device reads the device’s gravity and accelerometer data and sends it to the server. The server normalizes the x- and y-axis velocity and orientation values and maps them to a position on the browser page.
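The server-side mapping amounts to clamp-and-rescale. A minimal sketch of that math (the accelerometer range and canvas size here are assumptions for illustration, not the project’s actual values):

```python
# Hedged sketch: map a phone's x/y accelerometer readings to a pixel
# position on a canvas. Tilting a phone gives roughly -10..10 m/s^2
# per axis; both constants below are assumed, not measured.

ACCEL_RANGE = (-10.0, 10.0)   # assumed usable tilt range per axis
CANVAS = (800, 600)           # assumed browser canvas size in pixels

def normalize(value, lo, hi):
    """Clamp `value` into [lo, hi] and scale it to [0, 1]."""
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo)

def accel_to_position(ax, ay):
    """Map x/y accelerometer readings to a canvas position."""
    lo, hi = ACCEL_RANGE
    x = normalize(ax, lo, hi) * CANVAS[0]
    y = normalize(ay, lo, hi) * CANVAS[1]
    return (x, y)

print(accel_to_position(0.0, 0.0))  # device held flat -> canvas center
```

In the actual system this would run in the Node.js server on each incoming WebSocket message, but the arithmetic is the same.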

context


Tech

I use Node.js as the server, with now.js and socket.io for live interaction. The Apple accelerometer is the sensor on the client device.

Github repo: https://github.com/hhua/xBall

Keqin

01 Apr 2013

I just found that current CV or Kinect systems cannot give the user any real, physical feedback. So I want to make something physical that gives feedback to users when they interact with a CV or Kinect system.

This is a robot hand actuated by shape-memory alloy (SMA) wires. I think I will use something like this to give people feedback on their hands.

 

And here is one more project along these lines.

And this is one haptic way to give feedback to users when they interact with a CV or Kinect system.

Meng

01 Apr 2013

Capstone Brief
I want to design a 3D tangible display for map data visualization usage.

See Through 3D Desktop
by Jinha Lee MSR

Trains of Data
by MIT Senseable City Lab
http://senseable.mit.edu/trainsofdata/

Geographical Visualization: Where America Lives
by Feilding Cage
http://www.time.com/time/interactive/0,31813,1549966,00.html

Nathan

01 Apr 2013

I am aiming to build a ‘throwing’ machine that will proceed to launch light bulbs at a wall and/or me. If you have seen my main body of work, you will understand that I am talking about apprehension, gentleness, aggressiveness, and semi-uncontrollable circumstances. I have been skimming the web for designs of machines that might inspire the design and application of my own machine.

I’m looking for a machine that has a sense of ‘crude’ making and a machine that has a ‘fluid’ action.
I’m looking to build a machine that talks about more than the sum of its parts and actions.
I’m looking to do a performance with or a video of this machine working.
I’m looking to put this in my upcoming show as a physical installation with accompanying video.

Oscar Peters

So KANNO yang02

SENSELESS DRAWING BOT #2 from yang02 on Vimeo.

Robb

01 Apr 2013

Joshua Lopez-Binder and I plan on making some gorgeous and outrageously efficient heat sinks.
What is a heatsink, you may ask? A heat sink is an object, typically metal, that is designed to absorb and dissipate heat. They are primarily used to cool hot electrical components.

My vested interest in making a super-efficient and highly beautiful heatsink is quite related to my continued, yet slow, pursuit of making a new Cryoscope. I find its current design noisy (due to the fan) and a little static aesthetically. The device needs a large heatsink in order for the solid-state heat pump (Peltier element) to refrigerate the contact surface.
The applications of such a component are not at all limited to my old project.
If I can get it to provoke imagery of a lightning storm, I think it would be pretty neat.
Josh and I have some theories. We think that naturally inspired fractal geometries will make very nice heat dumpsters indeed.

I am taken with Lichtenberg figures, the patterns left behind by high-intensity electrical discharges. Here is an example of one on the back of a human who survived a lightning strike.
This looks like it will shape up to be the most formal thing I have pursued since enrolling in art school. I feel that the physical manifestation of waste heat is an important aspect of my earlier thermal work. I had tried in the earliest Cryoscope to hide the byproduct heat using aesthetics that were too close to Apple for my comfort.

Lichtenberg ‘Art’

A group of scientists, dubbing themselves the Lightning Whisperers, started a company that embeds Lichtenberg figures in acrylic (Plexiglas) blocks using a multi-million-volt electron beam and a hammer and nail. The website is a great way to kill an hour looking at these beautiful little desk toys. They also shrink coins.

Josh outlined some very nice works by Nervous System. They make very pretty generative jewelry, among other things. I just spent an hour scrolling through their blog. I always look too far outwards and end up with a post that is too short.

Lichtenberg Figure in Processing!
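One cheap way to grow Lichtenberg-like branching in code is diffusion-limited aggregation (DLA): walkers wander randomly until they touch the existing figure and stick. A Processing version would do the same walk and draw each stuck cell; here is a hedged Python sketch of the core loop (grid size and particle count are arbitrary choices, and DLA is only an approximation of true dielectric breakdown):

```python
import random

# Hedged sketch: grow a Lichtenberg-like figure with diffusion-limited
# aggregation. Walkers start at the grid border and random-walk until
# they are next to the figure, then stick.

SIZE = 21
N_PARTICLES = 30

def grow(rng):
    center = (SIZE // 2, SIZE // 2)
    stuck = {center}
    for _ in range(N_PARTICLES):
        # release a walker from a random cell on the grid border
        x, y = rng.choice([
            (rng.randrange(SIZE), 0), (rng.randrange(SIZE), SIZE - 1),
            (0, rng.randrange(SIZE)), (SIZE - 1, rng.randrange(SIZE)),
        ])
        while True:
            # stick when any 4-neighbour already belongs to the figure
            if any((x + dx, y + dy) in stuck
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                stuck.add((x, y))
                break
            dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x = min(SIZE - 1, max(0, x + dx))  # clamp to the grid
            y = min(SIZE - 1, max(0, y + dy))
    return stuck

figure = grow(random.Random(42))
print(len(figure))  # number of cells in the grown figure
```

Mapped onto a heatsink, the stuck cells would become the branch geometry to extrude, which is roughly how these fractal forms could feed a fabrication pipeline.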

Alan

01 Apr 2013

### Hydrophobic Material as Art?

I got the inspiration from this TED talk that we may use any material to make artwork. This can be extended to hydrophobic material, fire, water, electricity, magnetics, etc.

 

### basil.js – Computational and generative design using Adobe InDesign

basil.js is a scripting library that has been developed at the Visual Communication Institute at The Basel School of Design during the last nine months and is now made public as open-source. Based on the principles of “Processing”, basil.js allows designers and artists to individually expand the possibilities of Adobe InDesign in order to create complex projects in data visualization and generative design.

This inspired me because I may generate art around certain texts and images. However, the art is limited by Adobe software; I may expand it into a browser-based application, which is much more scalable.

### Turn Your Favorite Website Into A Playable 3D Maze With World Wide Maze

A rad new game from Google Chrome Experiments syncs up your computer’s Chrome browser with your smartphone to create a multi-platform coordinated 3D maze. The game, called World Wide Maze, turns any website into a playable game where you navigate a ball around a series of courses.

Browser interactions are always my favorite kind of project. Dissecting websites into pieces and reframing them is a nice idea. However, the game itself is still lame; there is still a chance to improve the game flow and overall design.

### PM2.5 in China – Data Visualization

A tech team in China opened a PM2.5 API to the public in China for the first time. I may create the first visualization artwork of PM2.5 data in China.

Looking outwards – Final Project

1. A new AR platform is desperately needed.

So I will likely be using Vuforia – Qualcomm’s AR platform. After talking with the lead developer at BigPlayAR, it seems like Vuforia is the clear winner, allowing me to work in Unity or in their own environment.

2. I will also be wrapping up some loose ends with the Processing implementation.

community

As Golan helped me discover, getting RGBD to work in Processing is a project that the community is just now tackling, so I will likely be forking this guy’s repo and pull-requesting to create one dynamite implementation!

3. INSPIRATION

Geography-specific AR:

AR at MOMA:

AR Card Game:
This video autoplays so I made a link to it instead

Whimsical augmentation of a physical space:

Reinterpreting architecture from the perspective of the fantastic:

Another geo-augmentation