Category Archives: openframeworks

Meng

28 Jan 2013

Dudes make my nose bleed!!!!!!!!
A bleeding nose, actually: a simple sketch made with ofxOsc + ofxBox2d.
I tried to use other addons, such as a combination of ofxCv and ofxVector, but could not get them to build. So I decided to do it the easy way with these two addons. Both are popular and well documented, which makes things less frustrating for a coding newbie. From this experience I learned the importance of good documentation, such as code comments and readme files.
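The repository linked below has the real code; as a rough standalone illustration of the idea (hypothetical message names and numbers, no openFrameworks dependency), the core loop amounts to: spawn a circle when an OSC message arrives, then let gravity pull it down each frame. ofxBox2d would do the integration itself.

```cpp
#include <string>
#include <vector>

// Standalone sketch of the spawn-and-fall idea behind the ofxOsc +
// ofxBox2d combo. Names and constants are illustrative, not the
// author's actual code.
struct Circle {
    float x, y;   // position in pixels
    float vy;     // vertical velocity in px/s
};

struct Sketch {
    std::vector<Circle> circles;
    float gravity = 9.8f;  // px/s^2, made-up scale

    // In the real app this would be triggered by an incoming OSC message.
    void onOscMessage(const std::string& address, float x) {
        if (address == "/spawn") {
            circles.push_back({x, 0.0f, 0.0f});
        }
    }

    // One physics step (dt in seconds); Box2D does this internally.
    void update(float dt) {
        for (auto& c : circles) {
            c.vy += gravity * dt;
            c.y  += c.vy * dt;
        }
    }
};
```

Spawning one circle and stepping 60 times at dt = 1/60 s leaves it roughly 4.98 px down, the semi-implicit Euler result for one second of fall.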

The current technical problem is that I cannot update the circles behind the dude’s face…


code is here: https://github.com/mengs/ofxoscOfxBox2d

Keqin

28 Jan 2013

I use two addons in this project. One is FaceTracker, which tracks people’s faces, and the other is Box2d, a physics engine that simulates real-world motion. I use the face tracker to detect changes in a person’s expression. If the mouth opens wider, the sketch produces many circles, which fall just as they would in the real world. And if an eye moves, rectangles are produced and fall too. Eventually the window fills with differently colored shapes, which can make for some beautiful pictures.
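I am not the author, but the trigger logic described above can be sketched without either addon: compare the current mouth opening and eye position against the previous frame’s values and decide which shape to drop. The thresholds and names here are made up; ofxFaceTracker would supply the real measurements each frame.

```cpp
#include <cmath>

// Standalone sketch of the expression-to-shape trigger: mouth opening
// spawns circles, eye movement spawns rectangles. Thresholds are
// illustrative, not from the actual project.
enum class Shape { None, Circle, Rect };

Shape pickShape(float mouthOpen, float prevMouthOpen,
                float eyeX, float prevEyeX) {
    const float mouthThreshold = 2.0f;  // how much wider the mouth must get
    const float eyeThreshold   = 1.0f;  // how far the eye must move

    if (mouthOpen - prevMouthOpen > mouthThreshold) return Shape::Circle;
    if (std::fabs(eyeX - prevEyeX) > eyeThreshold)  return Shape::Rect;
    return Shape::None;
}
```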

Here’s the code link: https://github.com/doukooo/textrain

 


Michael

27 Jan 2013

This was my attempt at combining two openFrameworks addons: ofxEliza and ofxSpeech.  The goal was to create an implementation of the historic keyword-based Eliza chatbot that could utilize ofxSpeech to both recognize audible keywords and respond using synthesized speech.  Both addons successfully compiled together, but the Eliza module seems to have some issues, as demonstrated in the video.  Namely, the chatbot is great at detecting edge cases like repetitions and short responses, but doesn’t actually pick up any keywords, even when typed into the console.  This doesn’t make for a great therapist.  I spent time trying to debug the input parser for Eliza, but didn’t make much progress and as a result I didn’t dive deep into speech recognition.  An alternative to ofxSpeech is ofxGSTT, which uses Google’s speech to text engine but is more complicated and requires the integration of additional addons.  Eliza’s keyword-based responses should match well with ofxSpeech’s dictionary-based recognition.
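The keyword lookup that seems to be failing is conceptually simple. A minimal standalone version (my own sketch, not the ofxEliza parser, with made-up rules) just lowercases the input, scans for known keywords, and falls back to a stock reply when nothing matches:

```cpp
#include <algorithm>
#include <cctype>
#include <map>
#include <string>

// Minimal Eliza-style keyword matcher; a sketch of the idea, not the
// ofxEliza implementation. Lowercase the input, then return the reply
// for the first keyword found, or a generic fallback.
std::string elizaReply(std::string input) {
    static const std::map<std::string, std::string> rules = {
        {"dream",  "What does that dream suggest to you?"},
        {"mother", "Tell me more about your family."},
        {"sad",    "Why do you feel sad?"},
    };
    std::transform(input.begin(), input.end(), input.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    for (const auto& [keyword, reply] : rules) {
        if (input.find(keyword) != std::string::npos) return reply;
    }
    return "Please, go on.";
}
```

In a real Eliza the rules also capture and reflect parts of the input, but even this stripped-down lookup shows what “not picking up keywords” means: either the table never matches, or the input never reaches it intact.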

The OF code can be found here.


Yvonne

27 Jan 2013


This is my simple compilation of two ofxAddons:
ofxBeatTracking by zenwerk (http://ofxaddons.com/repos/63)
ofxOscilloscope by mazbox (http://ofxaddons.com/repos/295)

The whole thing basically consists of a background image and different sound graphs I rotated and translated onto the screen of the TVs. I skinned the graphs to my taste… nothing particularly special. Kind of looks cool though.
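Not the project’s code, but the placement trick amounts to composing a rotation and a translation (what ofRotate and ofTranslate do in openFrameworks before you draw). Applied to a single point, with illustrative values, it looks like:

```cpp
#include <cmath>

// Rotate a point about the origin, then translate it: the same kind of
// transform used to place a graph onto a TV screen in the background
// image. Angle in radians; numbers are illustrative.
struct Point { float x, y; };

Point rotateThenTranslate(Point p, float angle, float tx, float ty) {
    float c = std::cos(angle), s = std::sin(angle);
    return {p.x * c - p.y * s + tx,
            p.x * s + p.y * c + ty};
}
```

Note that the order matters: rotating after translating would swing the graph around the origin instead of spinning it in place.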

I selected these two addons because I wanted to do something with sound and graphing. I don’t typically work with sound, so I figured something new would be good. In addition, I wanted something fairly easy code-wise because I’ve never worked with openFrameworks or C++ before.

Github Repository: https://github.com/yvonnehidle/beatTVs
Original Blog Post @ Arealess: http://www.arealess.com/compiling-ofxaddons/

Robb

26 Jan 2013

I combined the AudioOutput Example and the Billboard Example.
One thousand, one hundred eleven asses fill the screen as a horrible screech takes over your mind.
This is very conceptual. You might not understand.
I spent some serious time tweaking the numbers on the skin tone of those butts.
The artifacts of noise and blurriness are intentional. Easy enough to remove.
Enjoy.

Butt Image Credit: High-school Robb

Robb

23 Jan 2013

ofxVoronoi Graphics by vanderlin


I love Voronoi.
I am hardly alone in this.
I could use it to interpret dartboard hits or my typical click history. I’m sure it would be pretty.
Maybe it can even do 3d.


ofxOscilloscope by produceconsumerobot


I do enjoy a nice signal, especially when it is well visualized. The combination of this and a fancy ADC would allow me to capture the proprietary waveform of a commercial electronic transcutaneous muscle stimulator I am trying to reverse engineer. I plan on using my own body as an actuator, much like Daito Manabe, but with my arms.


ofxNetworkArduino Hardware Interface by egradman

Firmata ain’t a bad way to control your world with less fuss. This could come in handy for my moon-laser project, wherein I need to read accelerometer and compass values from an Arduino and push servo positions based on web info on astronomy. I was surprised to see that, unbeknownst to me, this add-on was written by a close friend from LA. Small world of new media folks, I guess.
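For the servo half of that plan, the bytes Firmata puts on the wire are simple, per the Firmata protocol: SET_PIN_MODE (0xF4) with servo mode (0x04), then an analog message (0xE0 | pin) carrying the angle as two 7-bit bytes, LSB first. This is a sketch of the framing only; in practice an ofArduino/ofxNetworkArduino call would build these for you.

```cpp
#include <cstdint>
#include <vector>

// Raw Firmata framing for driving a servo, per the Firmata protocol.
// Helper names are mine; the byte values come from the protocol spec.

// Put a pin into servo mode: SET_PIN_MODE (0xF4), pin, mode 0x04.
std::vector<uint8_t> servoSetup(uint8_t pin) {
    return {0xF4, pin, 0x04};
}

// Send an angle as an analog message: 0xE0 | pin, then the 14-bit
// value split into low and high 7-bit bytes.
std::vector<uint8_t> servoWrite(uint8_t pin, int angle) {
    return {static_cast<uint8_t>(0xE0 | (pin & 0x0F)),
            static_cast<uint8_t>(angle & 0x7F),          // low 7 bits
            static_cast<uint8_t>((angle >> 7) & 0x7F)};  // high 7 bits
}
```

The 7-bit split exists because MIDI-style framing reserves the high bit of every data byte, which is also why angles above 127 need the second byte.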

Anna

20 Jan 2013

Audience – rAndom International from Chris O’Shea on Vimeo.

A few weeks back, Mike sent me a link to this webcomic about the process of coming up with new and off-the-wall ideas. It made me pretty happy — as did the protagonist’s almost manic enthusiasm about the possibility of letting the stars see us.

This project doesn’t quite make it to the stars, but it’s a powerful, whimsical and ‘reversive’ installation that makes us consider the purpose of objects and the purpose of ourselves.

The idea of having mirrors turn their faces to follow a person isn’t all that extreme — we see similar types of motion with solar panels following the sun. In my opinion the success of this installation is all in the details: the decision to give each mirror a set of ‘feet’ instead of a tripod or a stalk, or the fact that each mirror has an ‘ambient’ state as well as a reactive state (see video). They are capable of seeing you, but they weren’t made to see you — they seem to pay attention to you because they decide they want to, and so their purpose transcends their task, in a way.

There is a strange subtlety in their positioning, too: clumped, but random, like commuters in a train station or traders on Wall Street. Everything comes together to give the mirrors an eerie, humanlike quality, and makes the participant want to engage, because maybe something really is looking back.

The Treachery of Sanctuary by Chris Milk

I’m probably being really obvious about my tastes, posting about this installation right after gushing about how much I loved the spider dress. Even though at face value the idea of giving someone’s silhouette a pair of wings seems (I don’t know, adolescent and cliché, maybe?) there’s something elegant, bleak and haunting about this piece. Think Hitchcock, or Poe. I’m less drawn to the final panel (the one where the participant gets wings) than I am to the first two. I really enjoy Milk’s commentary (see video HERE) about how inspiration can feel like disintegrating and taking flight. And there’s something powerful about watching (what appears to be) your own shadow, something constant and predictable, if not immutable, fragment and disappear before your eyes. The fact that Milk has created the exhibit to fool the audience into thinking they are under bright light, rather than under scrutiny from digital imaging technology, lends the trick this power, I think.

All in all, the story Milk tells about the creative process works, and puts the ‘wing-granting’ in the final panel into a context where it makes poetic sense, instead of just turning people into arch-angels because ‘it looks cool’. (It does.)

Sentence Tree from Andy Wallace on Vimeo.

This is a quirky little experiment that organizes sentences you type into trees, based on punctuation and basic grammar structures. The creator, Andy Wallace, described the piece as ‘a grammar exercise gone wrong’, but I wonder if the opposite isn’t true. Even as a lover of words, it’s hard to think of something more boring than diagramming sentences the traditional way: teacher at a whiteboard drawing chicken-scratch while students sleep. I like the potential of this program to inject some life into language and linguistics. Think of the possibilities: color-code subject, object, verb, participle, gerund. Make subordinate clauses into subordinate branches. Structure paragraphs by transitional phrases, evidence, quotations, counterarguments. Brainstorm entire novels or essays instead of single sentences! This feels like the tip of the iceberg.

Kyna

20 Jan 2013

MSA Fluid by Memo Akten


 

A really, really impressive simulation of fluid dynamics! The artist uses the iPhone as a control panel for the currents in the simulation, and OSC for communication over wifi to run the software in real-time. Using the touchscreen capabilities of the iPhone, the user can drag, poke, and twist to introduce new forces into the simulation. The user can also use more than one finger as a controller simultaneously.

MSAFluid for processing (Controlled by iPhone) from Memo Akten on Vimeo.

 

OpenFrameworks 3D Flocking by MultiRutele

This project isn’t particularly unique or groundbreaking, but it does illustrate a very graceful execution of 3D flocking, which I have always found to be very visually engaging.

 

Bloom Skin by Wow Inc. Tokyo

I think this is a really elegant example of an installation. The use of the flowing fabric really does instill a sense of organic flowing motion reminiscent of some sort of deep sea organism. My only complaint is that the music in the video appears to hide the noise of the fans, which I feel would likely take away from the natural, ethereal feel of the piece.

John

20 Jan 2013

Okay, three projects:

Forms by Memo Akten:

This is just really cool. Akten is using footage of athletes from the Commonwealth games to produce large scale interactive video pieces. I really like how abstracted the final products are while maintaining the dynamism of the original footage/human performance. While it’s hard to discern the exact nature of interactive control from the video below, I like how simple the interaction appears to be.

Pennant by Steve Varga (purchased by Topps)

Pennant was originally a Masters project at SVA’s Design and Technology Program. I really like it because it feels designed to meet needs as opposed to being just a pure technology demo. At some point, Topps (the baseball card company) bought Pennant and now maintains it on the App Store.

Fabricate Yourself

This project is not massively impressive from a technical standpoint, but very cool considering the potential end user applications of chaining together depth cameras with 3D printers. In the video above, such a system is used to create 3D photo booth style relief prints. The prints are cleverly designed to act as puzzle pieces, encouraging users to create several prints. In a few years, it’d be easy to imagine such a setup encapsulated at a bar or party or wherever you might find a more traditional photo booth today.