Daily Archives: 20 Jan 2013

Caroline

20 Jan 2013

Important Contribution: OFX Timeline by James George

This is an add-on that lets you modify code parameters with a timeline- and keyframe-based system. It takes inspiration from big, expensive packages like After Effects, Premiere, and Final Cut, but it is reusable and lightweight.


I have a feeling that there is way more to this add-on than what is demoed in the video, but if I am right in thinking you can code it into whatever project you are working on, it sounds like it could save a lot of time and possibly give Adobe a run for their money. I would really like to see an altered timeline that allows you to stack layers and access them through an input variable.

Quick Sketch: Screen Capture to Sound by Satoru Higa

This app captures the appearance of the window behind it and turns it into a texture. The color of the texture then determines the pitch and volume of the sound.
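A mapping like the one described could be sketched as follows: take the captured pixels, reduce them to an average brightness, and map that to a frequency and amplitude. The ranges here are my own illustrative choices, not Higa's actual mapping.

```cpp
#include <cassert>
#include <vector>

struct Tone { float frequencyHz; float volume; };

// Map a grayscale capture (0-255 pixels) to a pitch and volume:
// brighter captures produce higher, louder tones.
Tone toneFromPixels(const std::vector<unsigned char>& pixels) {
    if (pixels.empty()) return {0.0f, 0.0f};
    double sum = 0.0;
    for (unsigned char p : pixels) sum += p;
    double mean = sum / pixels.size() / 255.0;          // 0..1 brightness
    float freq = 110.0f + static_cast<float>(mean) * (880.0f - 110.0f);
    float vol  = static_cast<float>(mean);              // 0..1 amplitude
    return {freq, vol};
}
```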

http://www.creativeapplications.net/sound/capture-to-sound-by-satoru-higa/


This is a bit of a non sequitur project. What is the relationship between the appearance of the desktop and the action of making sound? I think this is an example of a project that makes correlations for no conceptual reason. This project might be interesting if the window functioned as a lens that somehow revealed or transformed the space beneath it.

Artistic Prowess: Forth by Karolina Sobecka 

Forth is a public installation that simulates an endless ocean and groups of people traversing that ocean in lifeboats. The weather, sound, and amplitude of the waves are influenced by the current weather in its place of installation.


This piece was commissioned for an academic space where they used simulation. I think Sobecka succeeded in creating an allegorical aesthetic in this piece. I think it is particularly successful as a commissioned piece where the patrons had very specific criteria. I found the process of modifying a game engine interesting. This piece relates to Brody Conrad’s Elvis piece.

SamGruber-LookingOutwards-2

TocattaTouch (Fabrica)

TocattaTouch allows the user to manipulate a simulated sheet of fabric, applying forces to drag it through the space in which it resides, producing audio feedback to the user’s actions at the same time. I find this project interesting because it enables interaction in an imprecise manner rather than the typical precision involved in computing. The use of audio feedback continues in this vein, providing only a sense of what is happening to the cloth rather than precise measures. In its present form, TocattaTouch seems to operate in a specific range of parameters on a single cloth object. As a result, this project feels more like a conceptual demo than an idea-generating tool. I could see a more fleshed-out iteration gaining traction as a conceptual design tool.

Photon (Jayson Haebich)

Photon explores the transformation of 2D information into an ephemeral 3D object which is nonetheless responsive to real objects in its vicinity. This project doubtless drew inspiration from Anthony McCall’s 1973 Light Describing a Cone, in which a film projector was used to project a cone into a foggy room. Unfortunately, Photon does not make a more significant departure from the earlier piece, which is disappointing given the strides made in the underlying technology since then. The cone’s only response to objects intersecting its surface is to split away from them, reinforcing its untouchable nature. It would be interesting to see Haebich explore more complex responses to intrusion in the work, which could add personality to a light object which at present seems rather rigid.

Painting with a Digital Brush (Teehan+Lax)

Painting with a Digital Brush is an extension of a longstanding field in computer art: text-mode. A painter using white paint on a black canvas is replicated in real time by the software to produce the ASCII-art rendering of the painting, which is then overlaid onto the original by a projector. This blurring of the distinction between working in the real world with traditional materials and producing a work digitally is intriguing, as is the notion that only through the (comparatively) vast computing resources of today, have we become able to live-generate works in a simple art medium that hasn’t been in widespread use for decades.
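The heart of any text-mode renderer like this is a brightness-to-character mapping. The sketch below shows one common way to do it; the character ramp is a widely used convention, not necessarily Teehan+Lax's exact choice.

```cpp
#include <cassert>
#include <string>

// Map a brightness value (0 = black canvas, 255 = white paint) to a
// character from a density ramp: brighter pixels get denser glyphs.
char asciiForBrightness(unsigned char b) {
    static const std::string ramp = " .:-=+*#%@";
    size_t idx = static_cast<size_t>(b) * (ramp.size() - 1) / 255;
    return ramp[idx];
}
```

Run over every cell of a downsampled camera frame, this produces the grid of characters that gets projected back onto the canvas.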

Daily Demo: Lick Weed (Brainstorm)

While not written in openFrameworks, Lick Weed does give a good introduction to the modern text-mode demo. All of the objects and effects in the video are generated in real time as the demo is running, and then rendered into terminal-printable characters for display. Text-mode demos began on the earliest personal computers, some of which had no graphic capabilities except for writing text to the screen. However, due to the requirements of converting to text-mode rendering and lower resolution, these demos lack the complexity typical of modern demos. Nonetheless, the complex reflections and distortions seen in Lick Weed represent a significant step forward in this field.

Michael

20 Jan 2013

I Spy – Neil Mendoza

This servo-articulated mobile tracks users with its four Android tablets using two Kinects.  As with The Stranger, which I discussed in my last post, this installation is meant to raise questions about what it means to be watched by our devices.  It makes me think about the idea that any window that one can gaze out of may also allow others to gaze in.  I consider this installation to be more compelling than The Stranger, in part because the kinetic nature of the installation makes it seem like much more of an organic creature.  I wish I knew more about what the Processing app on each tablet accomplishes.  It may simply be animation, with Open Frameworks managing everything else off-board.  If this is the case, I would be somewhat surprised that simple screens weren’t used instead, except the use of consumer devices seems to be a major theme in other projects displayed by Mendoza on his website.

 

Apple Sudden Motion Sensor – Luke Sturgeon

This Open Frameworks demo takes advantage of the accelerometer built into MacBooks and many other laptops, which exists to halt spinning hard drives in case of a sudden drop or impact.  The demo is fairly simple and just allows spheres to roll about the screen as the laptop is tilted.  Still, this is a fascinating demo to me because I didn’t realize that the accelerometer could be accessed easily.  I feel a little silly after finding this because my faceOSC project essentially accomplishes the same thing, but by detecting faces and assuming (possibly incorrectly) that the user is upright and has good posture.  I wish there were a bit more documentation on this project.  I think there’s a lot of potential to do some rather cool tablet-style projects on ordinary laptops, and I also wonder what other sensors can be tapped.

 

Minecraft AR Viewer – imakethin.gs

This demo uses Open Frameworks to render a section of a 3D Minecraft world in augmented reality.  It uses the Bukkit server plugin to export map chunks.  As an avid Minecraft player and server curator, this is one of the more creative uses of Bukkit that I’ve seen, and certainly the first time I’ve seen anything related to both Minecraft and augmented reality.  It certainly has a few shortcomings, but it’s a good start.  It would be good to see it incorporate texture files instead of just drawing large colored cubes, and as the poster mentions, the render does not currently update.  If these improvements were made, however, this could be a very useful tool for demonstrating structures and mechanisms in tutorials and webcasts.

Dev

20 Jan 2013

openFrameworks

1. Receipt Racer

Receipt Racer takes a simple “dodge the obstacle” game and adds a twist. The entire game level is printed out on a long sheet of paper, and the rate at which the game progresses is limited by the speed at which it can be printed. The player controls a light (projected onto the paper) which is sensed by a camera.  If the camera sees the light intersecting an obstacle, the game is over.
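The game-over check could be as simple as a point-in-rectangle test between the tracked light and each printed obstacle. This is a guess at the logic, not the authors' actual code.

```cpp
#include <cassert>

// An axis-aligned rectangle standing in for a printed obstacle.
struct Rect { float x, y, w, h; };

// True if the camera-tracked light position falls inside the obstacle;
// in the game, a hit like this would end the round.
bool lightHitsObstacle(float lx, float ly, const Rect& r) {
    return lx >= r.x && lx <= r.x + r.w &&
           ly >= r.y && ly <= r.y + r.h;
}
```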

I think this project is awesome because the printing makes the game more physical. Also, the entire journey the player took is printed. I think this recording is very deep, since people never remember the past when playing games like the one demonstrated; it’s always about the present. That said, if the author did want to show destruction of the past, he could make the paper drop into a fire. This might add more anxiety for our player.

The author (http://www.undef.ch/cat/projects) seems to have done a number of print-related projects in the past. His passion for this medium probably inspired him the most.

2. Particles

Processing is neat, and there are a lot of cool effects you can make within the confines of your computer screen. This project was particularly neat because it brought something I would expect on a screen into real life. Seeing a display like this physically makes the experience really magical.

The artist essentially created a large “roller coaster” for glowing orbs. These orbs can transmit positional and glow information with the technology inside of them. With this, an elaborate effect can be constructed. I kind of wish they had added speakers to the orbs though. This would open up so much more potential, something like a marching band effect.

I was not surprised to find that the creator of this was an architect. When I first saw the image of the project, I thought that it was another light up building. I think the construction of the project mirrors what one might see in modern architectural buildings.

3. Precious

A bike, outfitted with sensors to measure speed, direction, location, temperature, humidity, and more, can communicate with the world through a website. The goal here was to create something that was informative with a human-like touch.

Humanizing objects is always cool, but I feel like with the bike it is even more appropriate. Since the bike was to go on a journey across the US, it made the process of presenting the journey a little more fun. Something interesting the creator could do to go one step further would be to add sensors to see if the bike was damaged or on its side. This may indicate that the user is hurt, and could be used to notify 911.

Joshua

20 Jan 2013

BIORYTHM

This little application is a beautiful visualization of noise/PPG data. PPG stands for photoplethysmograph, a device that provides data about the volume of blood pulsing into tissue or an organ. The data is used to distort various vertices of a sphere. The entire thing is smoothed and rendered beautifully, apparently with some help from stuff on this site: http://machinesdontcare.wordpress.com/, which is super awesome.  In addition, the user interface allows for the manipulation of the shaders and of how quickly and at what magnitude the sphere distorts.  I find this project interesting for two reasons: 1) it is a visualization of data (or noise as a test) that is not necessarily informative in a quantitative sense, but is still interesting aesthetically, and perhaps could be useful in a more qualitative fashion; 2) it’s making some beautiful blobs. I don’t know anything about this method of rendering, but it looks interesting and I want to learn.
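The vertex distortion itself can be pictured as pushing each point on the sphere outward along its normal by an amount driven by the incoming signal. This is a minimal sketch of that idea with illustrative magnitudes, not the project's actual shader code.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Displace a point on a unit sphere outward along its normal by a
// signal value (e.g. a PPG sample) scaled by a distortion magnitude.
Vec3 displaceVertex(const Vec3& unitNormal, float signal, float magnitude) {
    float d = 1.0f + signal * magnitude;   // radius after distortion
    return { unitNormal.x * d, unitNormal.y * d, unitNormal.z * d };
}
```

In a real app this would run per-vertex each frame (most likely in a vertex shader) with the PPG stream feeding `signal`.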

 

SHADOW PLAY

 

This is an experiment using the Projector Camera Toolkit, an add-on for openFrameworks.  This library allows precise projector calibration.  Shadow Play involves two projectors, precisely aligned, displaying inverse images.  The result is a white screen that only displays something interesting in the shadow cast by blocking light from one of the projectors.  Thus an animation, image, or whatever, is sort of encased in the shadow. I can imagine kids enjoying this.  Especially if the setup detected what sort of shadow was being cast and updated the image accordingly.

All The Universe is Full of the Lives of Perfect Creatures

 

All the Universe is Full of the Lives of Perfect Creatures is a screen-based piece where a 3D model of an animal head is overlaid on the reflection of a person’s face. I suspect this is done using an LCD screen and a two-way mirror.  Using faceOSC, the animal head mirrors the motion and expression of the person.  Apparently the animal sometimes makes its own expressions.  The animals are supposed to range from the highly domesticated, like a dog, to the extremely feral, like a wolf.  I am not entirely sure what the universe being full of the lives of perfect creatures has to do with overlaying those creatures on a person’s face, but I think this is an interesting project because the interaction seems entertaining and well-mapped, and it is an elegant way of fiddling with the idea of identity and species.  It would be awesome to do this with bizarre monsters, or animals like nematodes and naked mole rats, or even other people’s faces.

Keqin

20 Jan 2013

Second Surface – a multi-user spatial collaboration system

The system provides an alternate-reality space that generates playful and natural interaction in an everyday setup for multiple users. The goal is to create a second surface on top of reality, invisible to the naked eye, that generates a real-time spatial canvas on which everyone can express themselves. The system creates an interesting new collaborative user experience, encourages playful content generation, and can provide new ways of communicating within everyday environments such as cities, schools, and households.

I think this one makes a great contribution to the field. It combines the virtual world and the real world through a screen. Traditionally, researchers widen the virtual world by projection or build an entirely virtual world, but this project does the opposite. It is a really good way to help people make something in the real world collaboratively.

Here’s a link about this: http://www.creativeapplications.net/openframeworks/second-surface-multi-user-spatial-collaboration-system/

Eyewriter

ALS is a disease that leaves people almost completely physically paralyzed except for their eyes. This international team is working together to create a low-cost, open-source eye-tracking system that will allow ALS patients to draw using just their eyes. The long-term goal is to create a professional/social network of software developers, hardware hackers, urban projection artists, and ALS patients from around the world who are using local materials and open-source research to creatively connect and make eye art.

I think this is not just a tech project based on openFrameworks; it’s a very meaningful activity for the patients. It helps them write or draw again even though they don’t have the ability to use their arms to do so. It also has a long-term goal of gathering a group to help these ALS patients. And this makes me think that tech should be like this: it should help us make the world more beautiful and help people’s lives, letting them live more easily on this planet.

Here is a link about this: http://eyewriter.org/

Smile Recognition and Finger Tracking

I think these two are very interesting little tests. Though they are small, they are important building blocks for making very complicated projects. With them we can tell whether the person sitting in front of the computer is smiling, or where their hands are. So I think they are very helpful and interesting.

This is the smile recognition: http://vimeo.com/53657022

Finger tracking: http://vimeo.com/1252553

Marlena

20 Jan 2013

Auto Chasing Turtle – HirotakaSter

The Auto Chasing Turtle is a small autonomous robot that has the ability to recognize peoples’ faces using a Kinect, OpenFrameworks, Android, and other assorted software. Upon recognizing a person, the turtle perks up and scurries in their direction. The user can also see what it’s seeing by connecting the turtle wirelessly to an iPad and using the Kinect as a video camera.

This project’s a lot of fun for a couple reasons: first of all, it’s a home-made hack. Though it definitely shows that this robot was cobbled together out of various bits of hardware, it is in an impressive and admirable way. It’s always really cool to see what can be made out of relatively commonplace tech. It also incorporates a couple of preexisting software hacks; this video is actually a response to a Kinect/Android hack from which it almost certainly acquired some of the code. Not only is this a cool project from the community standpoint, but from a more personal perspective I find it incredibly cute–as it waddles towards the user or jumps in recognition I can’t help but want to cheer it along.

It’s not a polished project, but what’s excellent about hacks like this is that they leave a lot of room for improvement. It could go in a lot of different directions in terms of both function and use. Right now it’s just wandering around the hacker’s house; it would be cool to see it perhaps interacting with strangers in a public space or as part of a performance piece. It would also be interesting to add other animal characteristics and allow for autonomous movement beyond following people. It seems like a work in progress, though, so hopefully the hacker will continue improving the design and posting updates on his channel.

Funky Forest – design-io.com/projects/FunkyForestSAM/

Funky Forest is an installation piece that was built for the Singapore Art Museum where visitors can become part of a virtual forest–the site describes it as “an interactive ecosystem where children create trees with their body and then divert the water flowing from the waterfall to the trees to keep them alive”.

It’s a great application for kids–it’s an interactive piece that gives them a goal to work toward rather than just offering them a camera to flail their arms at. It also has a great environmental tone without being preachy; it and the other exhibits at this museum all offer a playful interaction with nature.

It might have been interesting to somehow combine this project with a physical interface, especially one that was made of real plants. If it could be done in such a way that prevented the plants from coming to any harm it would certainly drive home the idea that your actions directly influence your surrounding natural environment. As it stands, though, it’s a fun contribution to the museum and a good use of the technology that OpenFrameworks has to offer.

Dérive – http://www.francois-quevillon.com/html/en

Dérive is an interactive piece that shows the user, through both audio and visual feedback, the real-time environmental changes to the displayed urban areas. It attempts to show the blending of the digital and corporeal sectors on a city-sized scale.

I love how this piece looks and the concept of combining urban realities on and off of the web. I’m frustrated, however, by how unclear it is from the videos to what extent the user can interact with the piece. I’d like to see how easy it is for the user to maneuver around the city and how much information they see fluctuating as they explore. I’d also like to see more about the actual information that is processed in the generation of this piece.

I do appreciate this artist’s style, though. Other pieces of his like Ciels Variables also sample huge amounts of data for relatively simple and elegant simulations. Though I would have preferred to have more of the data information in the visualization I still appreciate these works for their attempts at blending the real world with the digital.

Patt

20 Jan 2013

Monde Binaire “Hello World !!!” by Media Design HEAD

Monde Binaire “Hello World !!!” is a 36-page interactive comic book, with 22 hidden animations that can be accessed through a mobile application. This openFrameworks application really attracts me because it takes a simple object and modifies it into something more engaging. When reading a comic book, you are usually engaged with the content of the story and the beautifully drawn graphics. This application goes beyond the norm by adding a new and interesting interactive experience to the common act of reading cartoons.

Fun Race Machine by Gatorade and Thomas Inc, Japan

Gatorade, with Thomas Inc, Japan, developed this ‘Fun Race Machine’ — a unique treadmill machine with an arrayed 3D LED display and sound that synchronize with the runner’s steps. I really like it simply because it just looks really fun. It definitely changes my attitude towards this not-so-amazing workout. It makes you believe that you are not actually doing what you are doing, and are not where you think you are (as in you are not actually running on a treadmill, but are dancing in a nightclub). I can only think of what a cool experience it would be to exercise on a machine like this. I would work out every day if I had this. I really would.

A Journey – Dublin by Kimchi and Chips

I don’t think ‘A Journey – Dublin’ is an important contribution to the field, but it definitely is an interesting experiment. Visitors draw images of their private journeys and arrange them in the box, where the images are scanned. The light traces the drawings, representing the trace of a unique memory. I like the fact that it is not only interactive, but also emphasizes personality and individuality.

~Maria

20 Jan 2013

Hi,

I’m an Interaction Designer, interested in information visualization and interactive art.

twitter: @mariame

Anna

20 Jan 2013

Audience – rAndom International from Chris O’Shea on Vimeo.

A few weeks back, Mike sent me a link to this webcomic about the process of coming up with new and off-the-wall ideas. It made me pretty happy — as did the protagonist’s almost manic enthusiasm about the possibility of letting the stars see us.

This project doesn’t quite make it to the stars, but it’s a powerful, whimsical and ‘reversive’ installation that makes us consider the purpose of objects and the purpose of ourselves.

The idea of having mirrors turn their faces to follow a person isn’t all that extreme — we see similar types of motion with solar panels following the sun. In my opinion the success of this installation is all in the details: the decision to give each mirror a set of ‘feet’ instead of a tripod or a stalk, or the fact that each mirror has an ‘ambient’ state as well as a reactive state (see video). They are capable of seeing you, but they weren’t made to see you — they seem to pay attention to you because they decide they want to, and so their purpose transcends their task, in a way.

There is a strange subtlety in their positioning too — clumped, but random, like commuters in a train station or traders on Wall Street. Everything comes together to give the mirrors an eerie humanlike quality, and makes the participant want to engage — because maybe something really is looking back.

The Treachery of Sanctuary by Chris Milk

I’m probably being really obvious about my tastes, posting about this installation right after gushing about how much I loved the spider dress. Even though at face-value the idea of giving someone’s silhouette a pair of wings seems—I don’t know, adolescent and cliche, maybe?—there’s something elegant, bleak and haunting about this piece. Think Hitchcock, or Poe. I’m less drawn to the final panel (the one where the participant gets wings) than I am to the first two. I really enjoy Milk’s commentary (see video HERE) about how inspiration can feel like disintegrating and taking flight. And there’s something powerful about watching (what appears to be) your own shadow—something constant and predictable, if not immutable—fragment and disappear before your eyes. The fact that Milk has created the exhibit to fool the audience into thinking they are under bright light, rather than under scrutiny from digital imaging technology, lends the trick this power, I think.

All in all, the story Milk tells about the creative process works, and puts the ‘wing-granting’ in the final panel into a context where it makes poetic sense, instead of just turning people into archangels because ‘it looks cool’. (It does.)

Sentence Tree from Andy Wallace on Vimeo.

This is a quirky little experiment that organizes sentences you type into trees, based on punctuation and basic grammar structures. The creator, Andy Wallace, described the piece as ‘a grammar exercise gone wrong’, but I wonder if the opposite isn’t true. Even as a lover of words, it’s hard to think of something more boring than diagraming sentences the traditional way: teacher at a whiteboard drawing chicken-scratch while students sleep. I like the potential of this program to inject some life into language and linguistics. Think of the possibilities: color code subject, object, verb, participle, gerund. Make subordinate clauses into subordinate branches. Structure paragraphs by transitional phrases, evidence, quotations, counterarguments. Brainstorm entire novels or essays instead of single sentences! This feels like the tip of an iceberg.