Monthly Archives: February 2013

Marlena

26 Feb 2013

I love the idea of computer vision–it’s an excellent form of sensing that lets people interact with machines in a much more natural and intuitive way than typing or other standard mechanical inputs.

Of course, the most ubiquitous form of consumer computer vision has been made possible by cheap depth-sensing hardware, namely the Kinect:

There are, of course, plenty of games, both those approved by Microsoft and those made by developers and enthusiasts all over the web (see http://www.xbox.com/en-US/kinect/games for some examples), but there are also plenty of cool applications in tools, robotics, and augmented reality.

Here’s a great example of an augmented reality application that uses the Kinect: it tracks the placement of different-sized blocks on a table to build a city. It’s a neat project in its ability to translate real objects into a continuous digital model.
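For a sense of how the vision side of something like this might work, here’s a minimal Python/OpenCV sketch, assuming you already have a Kinect depth frame as a NumPy array (via libfreenect or similar) and a known distance to the empty tabletop. The table depth, thresholds, and function name are all my own placeholders, not the project’s actual code:

```python
import cv2
import numpy as np

TABLE_DEPTH_MM = 1200   # hypothetical distance from sensor to tabletop

def find_blocks(depth_mm: np.ndarray, min_height_mm: int = 20):
    """Return (bounding box, estimated height) for each block on the table."""
    # Anything closer to the camera than the tabletop is a candidate block.
    height_map = np.clip(TABLE_DEPTH_MM - depth_mm.astype(np.int32), 0, None)
    mask = (height_map > min_height_mm).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blocks = []
    for c in contours:
        if cv2.contourArea(c) < 200:   # ignore specks of depth noise
            continue
        footprint = cv2.boundingRect(c)
        region = np.zeros(mask.shape, np.uint8)
        cv2.drawContours(region, [c], -1, 255, -1)
        # robust height estimate over the block's pixels
        height = int(np.percentile(height_map[region > 0], 90))
        blocks.append((footprint, height))
    return blocks  # feed these footprints/heights into the digital city model
```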

Similarly, there is a Kinect hack that allows the user to manipulate Grasshopper files using gestures (see http://www.grasshopper3d.com/video/kinect-grasshopper). It is a great prototype for what is probably the next level of interaction: direct tactile feedback between user and device. This particular example lacks a little polish–its feedback isn’t immediate and there are other minor experience details that could be improved–but for an early tactile interface it does a pretty good job. There are plenty of other good projects at http://www.kinecthacks.com/top-10-best-kinect-hacks/

Computer vision is also incredibly important to many forms of semi- or completely autonomous navigation. For example, the CoBot project at CMU uses a combination of mapping and computer vision to navigate the Gates-Hillman Center (see http://www.cs.cmu.edu/~coral/projects/cobot/). There are a lot of cool things that can be done with autonomous motion, but the implementation is difficult because of the amount of prediction needed to navigate a busy area.

Another great application of computer vision is augmented reality. The projects at http://www.t-immersion.com/projects give a good idea of just how much has been done with augmented reality: everything from face manipulation to driving tiny virtual cars to putting an interface on a blank wall has been implemented in some form. Unfortunately, it is difficult to make augmented reality feel completely immersive because there is always a disconnect between the screen and the surrounding environment. A good challenge to undertake, then, is how to design the experience so that the flow from screen to environment doesn’t break the illusion for the user. Food for thought.

Keqin

26 Feb 2013

SeeStorm

SeeStorm produces synthetic video with 3D talking avatars using computer vision. Instead of sending plain real video, users can choose their own look for each call, with user-generated content (UGC) created from a photo and a voice. It’s a new mode of fun, personalized visual communication: a voice-to-video transcoder that converts voice into video, and a platform for next-level revenue-generating services.

 

Obvious Engineering

Obvious Engineering is a computer vision research, development and content creation company with a focus on surface and object recognition and tracking.

Our first product is the Obvious Engine, a vision-based augmented reality engine for games companies, retailers, developers and brands. The engine can track the natural features of a surface, which means you no longer have to use traditional markers and glyphs to position content and interaction within physical space.
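To make the “natural features instead of markers” idea concrete, here is a rough Python/OpenCV sketch of the standard approach (ORB keypoints plus a RANSAC homography). This is a generic approximation, not the Obvious Engine’s actual pipeline; “poster.jpg” and the thresholds are placeholders:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# the surface we want to track, e.g. a product poster
reference = cv2.imread("poster.jpg", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_des = orb.detectAndCompute(reference, None)

def locate_surface(frame_gray):
    """Return the 3x3 homography mapping the reference image into the frame."""
    kp, des = orb.detectAndCompute(frame_gray, None)
    if des is None:
        return None
    matches = matcher.match(ref_des, des)
    if len(matches) < 15:   # not enough evidence the surface is visible
        return None
    src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # warp virtual content with H to pin it to the surface
```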

The engine now works with a selection of 3D objects. It’s perfect for creating engaging, interactive experiences that blur the line between real and unreal. And there’s no need to modify existing objects – the object is the trigger.

 

MultiTouch

MultiTouch’s technology identifies and responds to the movement of whole hands, while other multitouch techniques merely see points of contact. It’s a good way to put computer vision into the multitouch screen.
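As a toy illustration of the difference: given a binary mask of the hand (however it was segmented), a vision-based system can reason about the hand’s whole shape, for example counting extended fingers via convexity defects, rather than just reporting contact points. A hedged Python/OpenCV sketch, with thresholds that are pure guesses:

```python
import cv2
import numpy as np

def count_fingers(hand_mask: np.ndarray) -> int:
    """Count extended fingers in an 8-bit binary mask of a hand."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for start, end, far, depth in defects[:, 0]:
        # deep valleys between hull points correspond to gaps between fingers
        if depth > 10000:   # depth is in 1/256-pixel units; tuned by eye
            fingers += 1
    return fingers + 1 if fingers else 0
```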

Yvonne

26 Feb 2013

Kinects! Kinects everywhere! Sorry! But they’re so cool :o

 

Kiss Controller
Okay… I don’t know if this counts as computer vision… It is interaction though! I think it’s awesome and different, especially since most games are controlled using hands, arms, legs, or bodies as a whole.

 

Virtual Dressing Room
This amused me and I thought the idea, though mentioned a lot in future scenarios and visions, is pretty fun and useful. Or maybe it’s the guy… and the music… and the skirts.

 

Make the Line Dance
I just thought this was really beautiful. It’s basically Kinect skeletal tracking with a series of lines projected onto the human body.

 

Other fun things
More for my personal reference than yours! :P

Kinect Titty Tracker

Fat Cat

Bueno

26 Feb 2013

Ah, computer vision. In retrospect I should have put the Kyle McDonald work I mentioned in my previous looking outwards here. No matter – just an excuse to go out and dig up more work.


Please Smile by Hye Yeon Nam is a fairly simple installation piece. To be honest, the tech side of it doesn’t seem that complex: it detects the presence of humans and whether or not they are smiling, so it’s not exactly the most thrilling set of interactions. What I do like is the use of the skeletal hands, which seem to point accusingly at you as their default reaction to your presence. It’s like they are punishing you for failing to be more welcoming to them.
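The detection side really isn’t complex: the stock Haar cascades that ship with OpenCV can do face-plus-smile detection in a few lines. A minimal sketch, assuming the opencv-python build (which bundles the cascade files) and the usual rough parameters; the installation’s actual implementation is unknown to me:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        # look for a smile only in the lower half of each detected face
        roi = gray[y + h // 2 : y + h, x : x + w]
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        smiling = len(smiles) > 0
        # here the installation would pick its reaction: accuse or relent
        color = (0, 255, 0) if smiling else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    cv2.imshow("please smile", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
```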

Link: http://www.hynam.org/HY/ple.html

Dancing to the Sound of Shadows comes to us from the design group Sembler in collaboration with the Feral Theatre. The project takes the movements from the latter collaborator’s shadow puppet production of The Sound Catcher and uses them to generate real-time music that reflects the live performance. The music itself is inspired by native Indonesian music. It’s a real treat.

Link: http://www.thecreatorsproject.com/blog/dancing-to-the-sound-of-shadows

Lastly is another work from our homeboy James George, in collaboration with Karolina Sobecka. It’s pretty amazing, I think. A dog is projected onto a storefront window and reacts aggressively, defensively, indifferently, or affectionately based on the viewer’s gestures. Unlike the previous skeleton-hand piece, I think here the choice of the dog as the central figure encourages more sustained interest and engagement with the piece. It was done using Unity3d in communication with openFrameworks.

Link: http://jamesgeorge.org/works/sniff.html

Andy

26 Feb 2013

1. Flutter

Flutter is a company that I interviewed with back in September. They use computer vision algorithms to let users control their music programs via gestures recognized by webcams, so when iTunes is minimized to the tray you don’t need to open it up to pause or skip to the next song. And it’s free!
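I don’t know what Flutter actually uses under the hood, but a crude approximation of webcam gesture control is easy to sketch: frame differencing, then treating large horizontal motion of the moving region’s centroid as a swipe. The thresholds and the print stand-ins for media-key presses are mine:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
prev = None
last_cx = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is not None:
        motion = cv2.threshold(cv2.absdiff(prev, gray), 25, 255,
                               cv2.THRESH_BINARY)[1]
        m = cv2.moments(motion)
        if m["m00"] > 50000:             # enough moving pixels to care about
            cx = m["m10"] / m["m00"]     # x-centroid of the motion blob
            if last_cx is not None:
                if cx - last_cx > 40:
                    print("swipe right -> next song")      # stand-in for a media-key call
                elif last_cx - cx > 40:
                    print("swipe left  -> previous song")
            last_cx = cx
        else:
            last_cx = None
    prev = gray
    if cv2.waitKey(30) == 27:   # Esc to quit
        break
cap.release()
```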

2. DepthJS

With many of the same goals in mind as Flutter, DepthJS is a software application that uses the Kinect to let users navigate web browsers with gestures. This project raises the question: just because we can use gestures to control something, does that mean we should? It seems to me that the point-and-click interface is far superior to the DepthJS interface in terms of convenience and usability. Gestures will only succeed when they demonstrate that they are better than the status quo, and all I see here is a swipey, touch-screen-like mentality that doesn’t utilize the depth of the Kinect sensor.

3. Kinect Lightsaber

I’m all about this project. Track a stick, overlay it with a lightsaber. I could see myself doing something like this to create an augmented reality game or something like that. Maybe Fruit Ninja, except you have to actually slash with a sword to get the fruit. EDIT: Kinect Fruit Ninja definitely already exists. Dang.
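In the spirit of “track a stick, overlay it with a lightsaber,” here’s a guess at the simplest possible version in Python/OpenCV: color-key the stick (assuming it’s wrapped in bright green tape, which is my assumption, not how the original hack works), fit a line through it, and draw a glowing blade:

```python
import cv2
import numpy as np

def overlay_saber(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (45, 120, 80), (75, 255, 255))  # green-tape range
    pts = cv2.findNonZero(mask)
    if pts is None or len(pts) < 50:
        return frame_bgr   # no stick visible
    # fit a line through the stick pixels, then extend it to blade length
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    p1 = (int(x0 - 400 * vx), int(y0 - 400 * vy))
    p2 = (int(x0 + 400 * vx), int(y0 + 400 * vy))
    glow = frame_bgr.copy()
    cv2.line(glow, p1, p2, (255, 80, 80), 25)   # soft outer glow
    cv2.line(glow, p1, p2, (255, 255, 255), 8)  # hot white core
    return cv2.addWeighted(glow, 0.7, frame_bgr, 0.3, 0)
```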

Kyna

26 Feb 2013

Blow-Up

LINK


Blow-Up is an interactive piece in which a camera is aimed at the viewer, whose image is then broken up and displayed in a seemingly semi-random, fluid arrangement of smaller squares. The overall effect is that of an insect’s compound eye.
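A quick guess at how the compound-eye effect could be reproduced in Python/OpenCV-style NumPy, with each output cell sampling the camera frame from a slightly offset spot; the cell size and jitter amount are arbitrary:

```python
import numpy as np

def compound_eye(frame, cell=32, jitter=10):
    """Rebuild the frame from small squares, each sampled at a random offset."""
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    for y in range(0, h - cell, cell):
        for x in range(0, w - cell, cell):
            dx = np.random.randint(-jitter, jitter + 1)
            dy = np.random.randint(-jitter, jitter + 1)
            sy = np.clip(y + dy, 0, h - cell)
            sx = np.clip(x + dx, 0, w - cell)
            # each output cell pulls from a slightly displaced source cell
            out[y:y + cell, x:x + cell] = frame[sy:sy + cell, sx:sx + cell]
    return out
```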

The Telegarden

LINK


The Telegarden is a piece wherein the audience can participate in the management of a garden via a robot connected to the internet. Users can plant, water and monitor the remote garden through control of an industrial robotic arm. The cooperation of the collective audience is what manages and maintains the garden, which I find very interesting.

Close-Up

LINK


‘”Close-up” is the third piece of the ShadowBox series of interactive displays with a built-in computerized tracking system. This piece shows the viewer’s shadow revealing hundreds of tiny videos of other people who have recently looked at the work. When a viewer approaches the piece, the system automatically starts recording and makes a video of him or her. Simultaneously, inside the viewer’s silhouette videos are triggered that show up to 800 recent recordings. This piece presents a schizoid experience where our presence triggers a massive array of surveillance videos.’

Ziyun

26 Feb 2013

{It’s you – Karolina Sobecka}

I like that it’s shown in a mirror, which is a much more intuitive way to present it than a regular screen, even though the result, seeing yourself in it, is the same.

The “non-verbal communication” concept is another thing that makes this project interesting. If you look carefully, when you’re “morphed” into an animal, your ears tend to be more expressive!

 

{Image Cloning Library – Kevin Atkinson}

This is an openFrameworks addon that allows you to turn one face into another.. it runs in real time and the result is, I would say, quite seamless.
ahh.. technology.. I want to do this with voices!
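For reference, OpenCV exposes the same family of technique (Poisson blending) as cv2.seamlessClone, so a bare-bones face transplant can look like the sketch below. Real face swapping also needs landmark detection and alignment, which I’m skipping, and the file names and ellipse mask are placeholders:

```python
import cv2
import numpy as np

src = cv2.imread("face_a.jpg")   # face to copy
dst = cv2.imread("face_b.jpg")   # image to paste the face onto

# assume the source face region is roughly a centered ellipse (placeholder;
# a real pipeline would get this region from facial landmarks)
mask = np.zeros(src.shape[:2], np.uint8)
cv2.ellipse(mask, (src.shape[1] // 2, src.shape[0] // 2),
            (90, 120), 0, 0, 360, 255, -1)

center = (dst.shape[1] // 2, dst.shape[0] // 2)   # where to place the face
blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", blended)
```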

hey..sad lady..

 

{Squeal – Henry Chu}

a cute app; I love the face-changing swap..

 

Caroline

25 Feb 2013

Pygmies by Pors and Rao (2006-09)

In this playful piece, Rao and Pors create a multitude of personified little creatures. The creatures live around the periphery of the frame, then pop out when they sense that the environment is safe.


http://www.ted.com/talks/aparna_rao_high_tech_art_with_a_sense_of_humor.html

This piece creates a system of little creatures that are extremely simple in form, but animated in their movement and interaction with their environment. They retreat whenever they are faced with noise, but they ignore background noise. I think this installation succeeds in creating an environment for play, but it might have been more compelling from a formal standpoint.

Scratch and Tickle by George Roland (1996)

In Scratch you are faced with an image of a woman’s back and a voice requesting that you scratch it with your mouse. She then instructs you on how she would like to be scratched, but as time goes on she becomes increasingly insistent and abusive.

SFCI Archive: SCRATCH and TICKLE (1996) from STUDIO for Creative Inquiry on Vimeo.

This is a classic piece, where a very simple interaction is used as a framework to create a relationship and tell a story. I think it is a good example of how the simplest interaction, like a mouse click and drag, can create a very compelling piece. I think it is also successful because it requires minimal effort on the part of the user; most of the piece happens in the application itself.

 Street View Stereographic by Ryan Alexander 

Alexander uses the Google APIs to remap Street View into a stereographic, or circular, view.


This isn’t really an art piece as presented here; I am more interested in it because I want to learn more about how he coded it. (All his code is on git!!) It is an interesting visual effect and creates quite a humorous form. I wish they could be globes I could circle around.
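While digging into how it might be coded: the “little planet” look is essentially a stereographic remap of an equirectangular panorama. Here is my own back-of-the-envelope Python/OpenCV version, not Alexander’s code; the zoom factor and file names are made up:

```python
import cv2
import numpy as np

def little_planet(pano, size=800, zoom=0.4):
    """Stereographic 'little planet' remap of an equirectangular panorama."""
    ph, pw = pano.shape[:2]
    ys, xs = np.indices((size, size), dtype=np.float32)
    x = xs - size / 2
    y = ys - size / 2
    r = np.sqrt(x * x + y * y) / (size / 2)   # 0 at center, 1 at the edge
    theta = np.arctan2(y, x)                  # angle around the planet

    lat = 2.0 * np.arctan(r / zoom)           # stereographic unrolling
    map_x = ((theta + np.pi) / (2 * np.pi)) * (pw - 1)
    # center of the planet samples the bottom of the panorama (the ground)
    map_y = np.clip((1.0 - lat / np.pi) * (ph - 1), 0, ph - 1)
    return cv2.remap(pano, map_x.astype(np.float32),
                     map_y.astype(np.float32), cv2.INTER_LINEAR)

planet = little_planet(cv2.imread("streetview_pano.jpg"))
cv2.imwrite("planet.jpg", planet)
```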

Nathan

25 Feb 2013

I’m using this as a kind of sounding board for all of the projects that I consider decent: at least a little elegant, conceptually interesting, and beautiful. I will slowly fill this in with more text, but for now it is the culmination of my search for inspiration.

.fluid – A reactive surface from Hannes Kalk on Vimeo.

Rain Room at the Barbican from rAndom International on Vimeo.

YCAM InterLab + Yoko Ando “Reactor for Awareness in Motion” promotional video from YCAM on Vimeo.

WOODS from Nocte on Vimeo.

Kentucky Route Zero trailer from Cardboard Computer on Vimeo.

One Hundred and Eight – Interactive Installation from Nils Völker on Vimeo.

Alan

25 Feb 2013

#Google Glass

Google Glass is a wearable that extends human sensation, integrating Internet services and a host of sensors into one small device. For the first time, it will make strong augmented reality possible for ordinary people at a large scale.

 

#Johnny Cash Project

Again, this is the most impressive crowdsourced art project made for the purpose of memorializing Johnny Cash. The project divides the music video for the song “Ain’t No Grave” into individual frames and presents them on the Internet. Anyone who is interested in a certain frame can re-draw it by whatever means they like. The music video is then regenerated by people from all over the world.

The Johnny Cash Project from Chris Milk on Vimeo.

 

#Bicycle Built for Two Thousand by Aaron Koblin

Bicycle Built For 2,000 is composed of 2,088 voice recordings collected via Amazon’s Mechanical Turk web service. Workers were prompted to listen to a short sound clip, then record themselves imitating what they heard.


#Swarm Robots

Swarm robots individually have limited ability, but through coordination they can collectively achieve things that a single powerful robot cannot. This is interesting since it provides us with a different view of intelligence.
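A tiny simulation makes the point: give each robot only simple local rules (move toward nearby neighbors, back away when too close) and the swarm still coheres into a group that no individual robot “knows about.” All the constants below are arbitrary, a minimal sketch rather than any real swarm controller:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(50, 2))   # 50 robots on a 100x100 field

for step in range(500):
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (d > 0) & (d < 20)    # each robot senses only 20 units out
        if not neighbors.any():
            continue
        center = pos[neighbors].mean(axis=0)
        too_close = (d > 0) & (d < 5)
        push = (pos[i] - pos[too_close].mean(axis=0)) if too_close.any() else 0
        # cohesion toward local centroid plus separation from crowding
        pos[i] += 0.05 * (center - pos[i]) + 0.1 * push

print("spread after:", pos.std(axis=0))   # shrinks as the swarm coalesces
```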