Oliver Daids

14 Jan 2016

http://universe.convivialproject.com/

Probable Universe is a robotic arm that projects an overlay of computer-generated and pre-recorded video onto the surrounding room while an audio track introducing ideas from quantum physics plays in the background.  The projection system is aware of the room’s geometry, so it can pin the computer-generated video to the corresponding objects in the surroundings.
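The project’s actual pipeline isn’t documented, but the core of pinning projected imagery to known geometry is a standard projective transform: with a scanned room and a calibrated projector modeled as a pinhole camera, each world-space point maps to one projector pixel.  A minimal sketch, where the intrinsics and pose values are my own placeholder assumptions:

    // Sketch of the projection-mapping math implied above; a pinhole model
    // with assumed calibration, not the project's actual implementation.
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Projector pose: rotation (row-major 3x3) and translation, world -> projector.
    struct Pose { double R[9]; Vec3 t; };

    // Map a world-space point on a room surface to the projector pixel that
    // lights it; fx, fy, cx, cy are the projector's pinhole intrinsics.
    static bool worldToPixel(const Pose& p, Vec3 w, double fx, double fy,
                             double cx, double cy, double& u, double& v) {
        Vec3 c = {  // transform into the projector's coordinate frame
            p.R[0]*w.x + p.R[1]*w.y + p.R[2]*w.z + p.t.x,
            p.R[3]*w.x + p.R[4]*w.y + p.R[5]*w.z + p.t.y,
            p.R[6]*w.x + p.R[7]*w.y + p.R[8]*w.z + p.t.z,
        };
        if (c.z <= 0.0) return false;  // point is behind the projector
        u = fx * (c.x / c.z) + cx;     // perspective divide + intrinsics
        v = fy * (c.y / c.z) + cy;
        return true;
    }

    int main() {
        Pose pose = {{1,0,0, 0,1,0, 0,0,1}, {0, 0, 2}};  // projector 2m from wall
        double u, v;
        if (worldToPixel(pose, {0.5, 0.25, 0.0}, 800, 800, 640, 360, u, v))
            std::printf("draw at pixel (%.1f, %.1f)\n", u, v);
        return 0;
    }

Drawing the video content at that pixel for every visible surface point is what keeps the imagery registered to the objects even as the arm sweeps the projection window around the room.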

The robotic arm, the projected video, and the audio are individually uninteresting, but the limited bounds of the projection create a window onto something out of the ordinary in the otherwise ordinary room.  When the window, which is outside the viewer’s control, inevitably moves on, the viewer is left thinking about the now-ordinary objects in the context of the view they last saw them through.  The robot arm helps viewers see that the ordinary world around them is more interesting than they initially gave it credit for.  I admire this symbiotic relationship with the surrounding environment, which I don’t normally see in other work.

On the whole, I respect what this project tries to do, but I don’t think it achieves it elegantly.  The projection of clearly computer-generated imagery onto the world, with visual elements like the tessellation of the surrounding point cloud into triangles and tetrahedra, evokes the digitization-of-the-analog-world theme popular in 80s and 90s science fiction more than the theme of multiple realities it intends.  It’s probably for this reason that the background audio track, which explicitly states the theme of multiple realities, has to be so heavy-handed.  The intention seems to be that the viewer will stay long enough to watch the projection move across the surroundings while the reality it projects changes and the audio explains what’s going on.  This makes the visuals less interesting, since the viewer must first focus on the audio to understand the theme and then force themselves to reinterpret the visuals in that new context.

While I could not find any concrete examples of possible influences, the project seems to draw on street art and murals that exploit the position of the viewer and the objects they decorate to achieve their effect.


I’m interested in ofxSkeleton (https://github.com/tgfrerer/ofxSkeleton), an openFrameworks addon that handles joint-based animation and inverse-kinematics solving.

I’m interested in creating an automated system for animating otherwise inanimate objects based on data about those objects; ofxSkeleton would be useful for partially deriving and driving those animations, as sketched below.
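To illustrate the kind of inverse-kinematics solving an addon like ofxSkeleton performs, here is a minimal cyclic coordinate descent (CCD) solver in plain C++.  This is not ofxSkeleton’s actual API, just the underlying idea: given a chain of bones, repeatedly rotate each joint so the chain’s end effector swings toward a target.

    // Minimal 2D CCD inverse-kinematics sketch (not ofxSkeleton's API).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec2 { double x, y; };

    struct Bone {
        double length;  // fixed bone length
        double angle;   // joint angle, relative to parent bone (radians)
    };

    // Forward kinematics: world-space position of each joint, root at origin.
    static std::vector<Vec2> jointPositions(const std::vector<Bone>& chain) {
        std::vector<Vec2> pts{{0.0, 0.0}};
        double worldAngle = 0.0;
        for (const Bone& b : chain) {
            worldAngle += b.angle;
            Vec2 prev = pts.back();
            pts.push_back({prev.x + b.length * std::cos(worldAngle),
                           prev.y + b.length * std::sin(worldAngle)});
        }
        return pts;
    }

    // CCD: walk from the tip toward the root, rotating each joint so the
    // end effector moves toward the target; repeat until close enough.
    static void ccdSolve(std::vector<Bone>& chain, Vec2 target, int iterations) {
        for (int it = 0; it < iterations; ++it) {
            for (int i = (int)chain.size() - 1; i >= 0; --i) {
                std::vector<Vec2> pts = jointPositions(chain);
                Vec2 joint = pts[i];
                Vec2 tip = pts.back();
                double toTip = std::atan2(tip.y - joint.y, tip.x - joint.x);
                double toTarget = std::atan2(target.y - joint.y, target.x - joint.x);
                chain[i].angle += toTarget - toTip;  // rotate joint toward target
            }
        }
    }

    int main() {
        std::vector<Bone> arm = {{1.0, 0.3}, {1.0, 0.3}, {1.0, 0.3}};
        ccdSolve(arm, {1.5, 1.5}, 20);  // reach for a point within arm's length
        Vec2 tip = jointPositions(arm).back();
        std::printf("end effector: (%.3f, %.3f)\n", tip.x, tip.y);
        return 0;
    }

For animating inanimate objects, the appeal of this setup is that only the target point needs to be derived from the object’s data; the solver fills in plausible joint motion for the rest of the chain.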