Daily Archives: 26 Feb 2013

Marlena

26 Feb 2013

I love the idea of computer vision: it's an excellent form of sensing that lets people interact with machines in a much more natural and intuitive way than typing or other standard mechanical inputs.

Of course, the most ubiquitous form of consumer computer vision has been made possible by one piece of cheap hardware: the Kinect.

Of course, there are plenty of games, both those approved by Microsoft and those made by developers and enthusiasts all over the web [see http://www.xbox.com/en-US/kinect/games for some examples], but there are also plenty of cool applications in tools, robotics, and augmented reality.

Here's a great example of an augmented reality application that uses the Kinect: it tracks the placement of different-sized blocks on a table to build a city. It's a neat project in its ability to translate real objects into a continuous digital model.
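The project's own code isn't public, but a very rough sketch of the underlying idea, reading block heights off a top-down depth image and turning each blob into a building, might look like this in Python/OpenCV. The file name, table distance, and thresholds are all my own guesses, not anything from the actual piece:

```python
import cv2
import numpy as np

# Hypothetical depth frame from a Kinect-style sensor, in millimetres,
# assuming the camera looks straight down at a table roughly 1000 mm away.
depth = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

TABLE_DEPTH_MM = 1000.0        # assumed distance from sensor to the empty table
MIN_BLOCK_HEIGHT_MM = 20.0     # ignore noise shallower than this

# Anything sufficiently closer to the camera than the table is treated as a block.
height_above_table = TABLE_DEPTH_MM - depth
mask = (height_above_table > MIN_BLOCK_HEIGHT_MM).astype(np.uint8) * 255

# Each connected blob in the mask becomes one "building" in the digital city.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
buildings = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    blob = height_above_table[y:y + h, x:x + w]
    buildings.append({
        "footprint": (x, y, w, h),
        "height_mm": float(np.median(blob[blob > MIN_BLOCK_HEIGHT_MM])),
    })

print(buildings)  # these footprints/heights would feed the 3D city model
```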

Similarly, there is a Kinect hack that allows the user to manipulate Grasshopper files using gestures [see http://www.grasshopper3d.com/video/kinect-grasshopper ]. It is a great prototype for what is probably the next level of interaction: direct tactile feedback between user and device. This particular example lacks a little polish: its feedback isn't immediate, and there are other minor experience details that could be improved. For an early tactile interface, though, it does a pretty good job. There are plenty of other good projects at http://www.kinecthacks.com/top-10-best-kinect-hacks/

Computer vision is also incredibly important to many forms of semi- or completely autonomous navigation. For example, the CoBot project at CMU uses a combination of mapping and computer vision to navigate the Gates-Hillman Center. [See http://www.cs.cmu.edu/~coral/projects/cobot/ ]. There are a lot of cool things that can be done with autonomous motion, but the implementation is difficult because of the amount of prediction needed to navigate a busy area.

Another great application of computer vision is augmented reality. The projects at http://www.t-immersion.com/projects give a good idea of how much augmented reality work already exists: everything from face manipulation to driving tiny virtual cars to applying an interface to a blank wall has been implemented in some form. Unfortunately, it is difficult to make augmented reality feel like a completely immersive experience because there is always a disconnect between the screen and the surrounding environment. A good challenge to undertake, perhaps, is how to design the experience so that the flow from screen to environment doesn't break the illusion for the user. Food for thought.

Keqin

26 Feb 2013

SeeStorm

SeeStorm produces synthetic video with 3D talking avatars using computer vision. Rather than plain recorded video, users can choose the look of their avatar for each conversation, and user-generated content can be created from just a photo and a voice recording. It's a new mode of fun, personalization, and visual communication: a voice-to-video transcoder that converts voice into video, pitched as a platform for next-level revenue-generating services.

 

Obvious Engineering

Obvious Engineering is a computer vision research, development and content creation company with a focus on surface and object recognition and tracking.

Our first product is the Obvious Engine, a vision-based augmented reality engine for games companies, retailers, developers and brands. The engine can track the natural features of a surface, which means you no longer have to use traditional markers and glyphs to position content and interaction within physical space.
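The Obvious Engine itself is closed, but the general idea of markerless tracking, matching the natural features of a known surface against each camera frame and recovering where the surface sits, can be sketched with off-the-shelf OpenCV pieces. This is a generic illustration, not the Obvious Engine's API, and the image file names are placeholders:

```python
import cv2
import numpy as np

# Reference photo of the surface we want to track (e.g. a magazine cover)
# and one live camera frame; both file names are placeholders.
reference = cv2.imread("surface_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe natural features instead of printed markers or glyphs.
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:100]

src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# The homography maps the reference surface into the camera frame, which is
# what lets you pin virtual content to it without any markers.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = reference.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
print(cv2.perspectiveTransform(corners, H))  # surface outline in the frame
```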

The engine now works with a selection of 3D objects. It’s perfect for creating engaging, interactive experiences that blur the line between real and unreal. And there’s no need to modify existing objects – the object is the trigger.

 

MultiTouch

MultiTouch technology identifies and responds to the movement of hands, while other multitouch techniques merely see points of contact. It's a good way of putting computer vision into the multitouch screen.
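Roughly, the difference is between reporting blob centroids and reasoning about the whole hand shape. A toy version of the latter, assuming a camera behind a diffuse surface that produces a bright hand silhouette (the threshold values are made up), could look like this:

```python
import cv2

# Placeholder: an infrared/diffuse-illumination image of a hand on the surface.
img = cv2.imread("touch_surface_ir.png", cv2.IMREAD_GRAYSCALE)

# A plain multitouch pipeline would stop at thresholding and blob centroids;
# here we keep the full contour so we can reason about the hand itself.
_, mask = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 2000:      # skip small specks
        continue
    hull = cv2.convexHull(c, returnPoints=False)
    defects = cv2.convexityDefects(c, hull)
    # Deep convexity defects are the valleys between fingers, so their count
    # is a crude finger/gesture cue that plain contact points can't give you.
    fingers = 0
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth > 10000:          # depth is in 1/256-pixel units
                fingers += 1
    print("hand blob area:", cv2.contourArea(c), "finger valleys:", fingers)
```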

Yvonne

26 Feb 2013

Kinects! Kinects everywhere! Sorry! But they’re so cool :o

 

Kiss Controller
Okay… I don’t know if this counts as computer vision… It is interaction though! I think it’s awesome and different, especially since most games are controlled using hands, arms, legs, or bodies as a whole.

 

Virtual Dressing Room
This amused me and I thought the idea, though mentioned a lot in future scenarios and visions, is pretty fun and useful. Or maybe it’s the guy… and the music… and the skirts.

 

Make the Line Dance
I just thought this was really beautiful. It's basically Kinect skeletal tracking with a series of lines projected onto the human body.

 

Other fun things
More for my personal reference than yours! :P

Kinect Titty Tracker

Fat Cat

Bueno

26 Feb 2013

Ah, computer vision. In retrospect I should have put the Kyle McDonald work I mentioned in my previous looking outwards here. No matter – just an excuse to go out and dig up more work.


Please Smile by Hye Yeon Nam is a fairly simple installation piece. To be honest, the tech side of it doesn't seem that complex. It can detect the presence of humans and whether or not they are smiling, so it's not exactly the most thrilling set of interactions. What I do like is the use of the skeletal hands, which seem to point accusingly at you as their default reaction to your presence. It's like they are punishing you for failing to be more welcoming to them.

Link: http://www.hynam.org/HY/ple.html
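Presence-plus-smile detection of this sort can be approximated in a few lines with OpenCV's stock Haar cascades. This is just a generic sketch of the technique, not Nam's actual implementation:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)          # webcam standing in for the installation camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        state = "smiling" if len(smiles) > 0 else "not smiling"
        print(state)               # the installation would pick a hand pose here
    cv2.imshow("please smile", frame)
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
cap.release()
```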

Dancing to the Sound of Shadows comes to us from the design group Sembler in collaboration with the Feral Theatre. The project takes the movements from the latter collaborator's shadow puppet production of The Sound Catcher and uses them to generate real-time music that reflects the live performance. The music itself is inspired by native Indonesian music. It's a real treat.

Link: http://www.thecreatorsproject.com/blog/dancing-to-the-sound-of-shadows

Last is another work from our homeboy James George, in collaboration with Karolina Sobecka. It's pretty amazing, I think. A dog is projected onto a storefront window and reacts aggressively, defensively, indifferently, or affectionately based on the viewer's gestures. Unlike the previous skeleton-hand piece, I think the choice of the dog as the central figure here encourages more sustained interest and engagement with the piece. It was done using Unity3d in communication with openFrameworks.

Link: http://jamesgeorge.org/works/sniff.html

Andy

26 Feb 2013

1. Flutter

Flutter is a company that I interviewed with back in September. They use computer vision algorithms to let users control their music programs via gestures recognized by a webcam, so when iTunes is minimized to the tray you don't need to open it up to pause or skip to the next song. And it's free!
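Flutter's actual recognizer is proprietary, but the cheapest possible "wave at the webcam to pause" sketch, plain frame differencing with a motion threshold, gives a feel for the pipeline. The threshold is arbitrary, and actually sending the media-key event to iTunes is left out:

```python
import cv2

cap = cv2.VideoCapture(0)
prev = None
playing = True
cooldown = 0   # frames to wait before allowing another toggle

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is not None and cooldown == 0:
        # Total pixel change between consecutive frames; a big spike is
        # (very naively) treated as a hand wave in front of the camera.
        motion = cv2.absdiff(prev, gray).sum()
        if motion > 5_000_000:
            playing = not playing
            print("play" if playing else "pause")   # send a media-key event here
            cooldown = 30
    cooldown = max(0, cooldown - 1)
    prev = gray
    cv2.imshow("wave to pause", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```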

2. DepthJS

With many of the same goals in mind as Flutter, DepthJS is a software application that uses the Kinect to let users navigate web browsers with gestures. This project raises the question: just because we can use gestures to control something, does that mean we should? It seems to me that the point-and-click interface is far superior to the DepthJS interface in terms of convenience and usability. Gestures will only succeed when they demonstrate that they are better than the status quo, and all I see here is a swipey, touch-screen-like mentality that doesn't utilize the depth of the Kinect sensor.

3. Kinect Lightsaber

I’m all about this project. Track a stick, overlay it with a lightsaber. I could see myself doing something like this to create an augmented reality game or something like that. Maybe fruit ninja except you have to actually slash with a sword to get the fruit. EDIT: Kinect fruit ninja definitely already exists. Dang.
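"Track a stick, overlay it with a lightsaber" really is most of the recipe. A rough non-Kinect version, just an HSV color threshold on a brightly colored stick plus a line fit and a fat glowing overlay, might look like this (the color range is an assumption):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed: the stick is painted a saturated green; tune the range to taste.
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    pts = cv2.findNonZero(mask)
    if pts is not None and len(pts) > 200:
        # Fit a line through the stick pixels and draw a fat glowing "blade".
        vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).flatten()
        p1 = (int(x0 - vx * 400), int(y0 - vy * 400))
        p2 = (int(x0 + vx * 400), int(y0 + vy * 400))
        cv2.line(frame, p1, p2, (255, 80, 80), 18)     # blue-ish glow first
        cv2.line(frame, p1, p2, (255, 255, 255), 6)    # hot white core on top
    cv2.imshow("saber", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```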

Kyna

26 Feb 2013

Blow-Up

LINK


Blow-Up is an interactive piece wherein a camera is aimed at the viewer, whose image is then broken up and displayed as a seemingly semi-random, fluid arrangement of smaller squares. The overall effect is that of an insect's compound eye.
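A crude approximation of the effect, chopping the camera image into tiles and letting each tile sample from a slightly displaced spot, is only a few lines of NumPy/OpenCV. The tile size and jitter are made up, and the real piece animates far more fluidly than this static-offset sketch:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
TILE = 40          # assumed tile size in pixels
JITTER = 15        # how far each "facet" may wander

ok, frame = cap.read()
h, w = frame.shape[:2]
h, w = h - h % TILE, w - w % TILE            # crop to a whole number of tiles
offsets = np.random.randint(-JITTER, JITTER + 1, size=(h // TILE, w // TILE, 2))

while ok:
    frame = frame[:h, :w]
    out = np.zeros_like(frame)
    for i in range(h // TILE):
        for j in range(w // TILE):
            dy, dx = offsets[i, j]
            # Sample each square from a slightly displaced spot in the source,
            # which gives the fractured, compound-eye look.
            y = np.clip(i * TILE + dy, 0, h - TILE)
            x = np.clip(j * TILE + dx, 0, w - TILE)
            out[i * TILE:(i + 1) * TILE, j * TILE:(j + 1) * TILE] = \
                frame[y:y + TILE, x:x + TILE]
    cv2.imshow("compound eye", out)
    if cv2.waitKey(1) == 27:
        break
    ok, frame = cap.read()
cap.release()
```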

The Telegarden

LINK


The Telegarden is a piece wherein the audience can participate in the management of a garden via a robot connected to the internet. Users can plant, water and monitor the remote garden through control of an industrial robotic arm. The cooperation of the collective audience is what manages and maintains the garden, which I find very interesting.

Close-Up

LINK


‘”Close-up” is the third piece of the ShadowBox series of interactive displays with a built-in computerized tracking system. This piece shows the viewer’s shadow revealing hundreds of tiny videos of other people who have recently looked at the work. When a viewer approaches the piece, the system automatically starts recording and makes a video of him or her. Simultaneously, inside the viewer’s silhouette videos are triggered that show up to 800 recent recordings. This piece presents a schizoid experience where our presence triggers a massive array of surveillance videos.’
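The core trick, using the viewer's silhouette as a mask that reveals other footage, can be sketched with a background subtractor and a second video. Both video sources here are placeholders, and the real piece juggles hundreds of recorded clips rather than one file:

```python
import cv2

cam = cv2.VideoCapture(0)                       # live camera, stands in for the tracker
archive = cv2.VideoCapture("recordings.mp4")    # placeholder for the past recordings
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok_cam, frame = cam.read()
    ok_arc, clip = archive.read()
    if not ok_cam:
        break
    if not ok_arc:                               # loop the archive footage
        archive.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue
    clip = cv2.resize(clip, (frame.shape[1], frame.shape[0]))

    # The viewer's silhouette becomes a mask; inside it we show the archive
    # footage, outside it stays black, roughly the ShadowBox idea.
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 9)
    out = cv2.bitwise_and(clip, clip, mask=mask)
    cv2.imshow("silhouette", out)
    if cv2.waitKey(1) == 27:
        break
cam.release()
```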

Ziyun

26 Feb 2013

{It’s you – Karolina Sobecka}

I like the way it's shown in a mirror, which is much more intuitive than showing it on a regular screen, even though the result, seeing yourself in it, is the same.

The "non-verbal communication" concept is another aspect that makes this project interesting. If you look carefully, when you're "morphed" into an animal, your ears tend to be more expressive!

 

{Image Cloning Library – Kevin Atkinson}

This is an openFrameworks addon that allows you to turn one face into another. It runs in real time, and the result is, I would say, quite seamless.
ahh.. technology.. I want to do this with voices!
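I don't know exactly what the addon does under the hood, but OpenCV's Poisson-blending function gives the same kind of seamless result when pasting one face onto another. The file names and face rectangle below are placeholders; a real pipeline would get the region from a face detector or landmark tracker:

```python
import cv2
import numpy as np

donor = cv2.imread("face_a.jpg")      # face to copy from (placeholder file)
target = cv2.imread("face_b.jpg")     # face to paste onto (placeholder file)

# Pretend we already know where the donor face is; these numbers are made up.
x, y, w, h = 100, 80, 200, 240
patch = donor[y:y + h, x:x + w]
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)

# Poisson blending matches colours and lighting along the seam, which is
# what makes the swap look continuous rather than pasted on.
center = (target.shape[1] // 2, target.shape[0] // 2)
blended = cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("face_swap.jpg", blended)
```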

hey..sad lady..

 

{Squeal – Henry Chu}

A cute app; I love the face-changing swap.