Project 3: Makes You Dance and Sing

by jsinclai @ 10:46 am 3 March 2010

I had given up on my first idea for this project when I just didn’t feel it was interesting (or appropriate) anymore. I was looking around for some motivation and looked at my older projects. I saw my Nees sketch and wondered what it would be like to throw a webcam feed on it. Of course, though, I couldn’t have the same interaction with the mouse position. I wanted this project to deal with people on their feet!

And so, this project started as an installation that responded to the audience. When there is movement in the audience, the display would shake around and go crazy. I played around with a bunch of different forms, sizes, and representations.

STIA Project 3 – Submission from Jordan Sinclair on Vimeo.

I was fairly pleased with how these worked, but felt that it still lacked some “Jordan.”

Then, someone walked in my room and started singing and dancing to some music I was listening to. I got up as well, and we goofed around for a bit. The video started going crazy and we could see ourselves having lots of fun. That's when it clicked! I want to encourage people to be in this elated, excited state. Instead of rewarding people for being sedentary, I want to reward people for being active and having fun. This kind of ties back to my first project about Happy Hardcore, the music that I love so dearly. It's dance music! It's music that screams "Get up on your feet! Dance and sing along!"

And so I flipped the interaction paradigm around. When you’re having fun (dancing and singing) you can see yourself having fun. When you’re still, you don’t really see anything.

STIA Project 3 – Submission from Jordan Sinclair on Vimeo.

Some implementation details:
-I use frame differencing to detect movement.
-Every frame “fades” out. This creates a background that is something other than the plain grey. When the background is just grey, there is usually not enough data on screen to make an interesting display. You see a few crazy blocks and that’s it.
-Movement alone cannot bring the display into focus. Movement is the bigger motivator (more people dance than sing), so it accounts for about 70% of the "focus" (i.e., with "maximum" movement, the display is 70% in focus). But if you want to achieve full focus, you need to sing along as well (or at least make some noise)! See the sketch below.
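
A minimal Processing sketch of this focus calculation (an illustrative reconstruction, not the project's actual code; audioLevel() is a hypothetical stand-in for whatever microphone analysis the piece uses):

import processing.video.*;

Capture cam;
PImage prevFrame;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prevFrame = createImage(width, height, RGB);
}

float audioLevel() {
  return 0.0;  // placeholder: replace with a mic level in the range 0..1
}

void draw() {
  // every frame the canvas fades slightly, as described above
  noStroke();
  fill(128, 10);
  rect(0, 0, width, height);

  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  prevFrame.loadPixels();

  // frame differencing: sum brightness changes between this frame and the last one
  float movement = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    movement += abs(brightness(cam.pixels[i]) - brightness(prevFrame.pixels[i]));
  }
  movement = constrain(movement / (cam.pixels.length * 64.0f), 0, 1);  // rough normalization, tune to taste
  prevFrame.copy(cam, 0, 0, width, height, 0, 0, width, height);

  // movement contributes up to 70% of the focus; sound supplies the remaining 30%
  float focus = 0.7 * movement + 0.3 * audioLevel();

  // here "focus" is simply shown as the opacity of the live image
  tint(255, focus * 255);
  image(cam, 0, 0);
}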

TODO:
-Make a full screen version and hook it up to a projector!
-The frame-differencing values are mapped linearly to "focus" values. I need to rescale this mapping around the differencing values that actually occur most often.
-The audio detection isn’t as solid as it could be. It should certainly be more inviting so that users know what the audio does. I also would like to implement some sort of “fade in/out focus offset.” Currently, the audio only creates focus when you make a noise, but you lose focus in between every word you sing.
-The colors are a little dulled out. Maybe it’s just the lighting, or maybe I can do something to help.

Jon Miller – Project 3

by Jon Miller @ 10:45 am

Project 3

A zip file containing the source code/executable: link
If you wish to get it working in a playable fashion, please contact me. Thanks.

Concept
I chose the input device fairly early on – I figured, given my limited options (microphone, webcam, mouse, keyboard), that webcam would be the most interesting for me, and new territory.

Having seen several projects where the user puts himself into awkward/interesting positions to propel the exhibit, I wanted to create something that forced the user (perhaps against his will) to enter in some embarrassing positions. I decided that a game would be best, to use a person’s natural inclination to win in order to force them to compromise themselves in front of the rest of the class.

I wanted to create a game where a person, using their finger, hand, or entire body, would propel themselves through water by creating snakelike or swimming motions. The idea would be that there would be a viscous fluid that the onscreen representation of one’s hand would push against, and if done right, would propel the user in a direction, similar to the way a snake uses frictional forces against the ground to travel.

Execution
Tracking hands/bodies in a convenient, consistent way across variable lighting conditions and backgrounds proved too daunting a challenge for me, especially since there would be no consistent position for them. So I decided to use brightly colored stuffed fish as the game controllers: their colors were such that it was unlikely anything else in the room would have the same hue, allowing me to parse their shapes and locations relatively easily.
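
An illustrative Processing sketch of this kind of hue keying (the target hue and thresholds here are assumptions, not values from the actual game):

import processing.video.*;

Capture cam;
float targetHue = 25;      // assumed orange-ish hue on a 0-255 scale
float hueTolerance = 15;

void setup() {
  size(640, 480);
  colorMode(HSB, 255);     // work in hue/saturation/brightness
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  loadPixels();
  for (int i = 0; i < cam.pixels.length; i++) {
    color c = cam.pixels[i];
    // a pixel "belongs" to the fish if its hue is close to the target and it is
    // saturated and bright enough to rule out greyish background pixels
    boolean isFish = abs(hue(c) - targetHue) < hueTolerance
                  && saturation(c) > 80
                  && brightness(c) > 80;
    pixels[i] = isFish ? c : color(0);
  }
  updatePixels();
}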

Secondly, implementing the viscous fluid for them to swim through was also too ambitious, and I settled on the onscreen fish simply moving around the screen based on the physical fish's location.

In the end, it was a rush to get something that would maximize fun and increase my chances of delivering a workable product to the table by Wednesday. I changed the game to have the two fish (represented onscreen by the amorphous blobs of whatever the webcam was able to detect) shoot at each other, with sound effects and explosions to maximize fun/silliness.

I was able to (eventually) implement an algorithm that calculated the fish’s “pointiest” area, allowing the user to shoot lasers from that point – this meant that if the users pinched the fish in just the right way, they could achieve a certain degree of aiming, and by moving the fish around, they could dodge incoming lasers to a degree.
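
One simple way to find such a "pointiest" spot (an assumed approach, shown here as a Processing-style helper, not necessarily the algorithm used in the game) is to take the blob pixel farthest from the blob's centroid and fire from there, aiming away from the centroid:

// blobPixels: the (non-empty) list of pixel coordinates belonging to one fish blob,
// e.g. the pixels kept by a hue-keying step
PVector pointiest(ArrayList<PVector> blobPixels) {
  PVector centroid = new PVector();
  for (PVector p : blobPixels) centroid.add(p);
  centroid.div(blobPixels.size());

  PVector tip = blobPixels.get(0);
  float best = 0;
  for (PVector p : blobPixels) {
    float d = PVector.dist(p, centroid);
    if (d > best) { best = d; tip = p; }
  }
  return tip;  // firing point; the aim direction is tip minus centroid
}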

Conclusions (post presentation)
Although it was not what I expected, the class seemed to enjoy watching the battle, and the participants were sweating with exertion, so I feel I was able to at least capture the attention of people. I liked that the user was given a certain degree of control that was novel to a computer game (if it can even be called that) – I felt this provided a gameplay mechanic and level of complexity to an otherwise simple game.

Project 3: Musical Typing

by jedmund @ 10:44 am

placeholder

Project 3 – Trace Modeler

by Karl DD @ 10:29 am

Concept

Trace Modeler is an experiment with using realtime video to create three-dimensional geometry. The silhouette of a foreground object is subtracted from the background and used as a two-dimensional slice. At user-defined intervals new slices are captured and displaced along the depth axis. My motivation is to create new interfaces for digital fabrication. The geometry created can be exported as an STL mesh ready for 3d printing.
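
A rough Processing sketch of the slicing idea (the tool itself is built with openFrameworks and ofxSTL; this illustrative reconstruction just treats pixels that differ from a reference frame as the silhouette and pushes each captured slice back along the depth axis):

import processing.video.*;

Capture cam;
PImage bgFrame;                               // reference background, grabbed on the first frame
ArrayList<boolean[]> slices = new ArrayList<boolean[]>();
int interval = 30;                            // frames between captured slices (user-defined)
float diffThreshold = 40;                     // brightness difference marking foreground

void setup() {
  size(640, 480, P3D);
  cam = new Capture(this, 160, 120);
  cam.start();
}

void draw() {
  background(0);
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    if (bgFrame == null) {
      bgFrame = cam.get();                    // the first frame becomes the background
      bgFrame.loadPixels();
    } else if (frameCount % interval == 0) {
      // background subtraction: mark pixels that differ enough from the reference
      boolean[] mask = new boolean[cam.pixels.length];
      for (int i = 0; i < mask.length; i++) {
        mask[i] = abs(brightness(cam.pixels[i]) - brightness(bgFrame.pixels[i])) > diffThreshold;
      }
      slices.add(mask);                       // one new two-dimensional slice
    }
  }
  // draw each slice as a plane of points displaced along the depth axis
  translate(width / 2, height / 2, -200);
  stroke(255);
  for (int s = 0; s < slices.size(); s++) {
    boolean[] mask = slices.get(s);
    for (int i = 0; i < mask.length; i++) {
      if (mask[i]) point(i % 160 - 80, i / 160 - 60, -s * 10);
    }
  }
}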


Related Work

Trace Modeler is related to slit-scan photography in that slices of information (in this case a silhouette) are used and built up over time. In particular, Tamás Waliczky & Anna Szepesi's Sculptures uses silhouettes of performers to create 3d forms.

I have also been considering approaches for how the forms created can be fabricated. One approach is to use laser-cut planar materials such as the work of John Sharp. Below is one example using radial slices to form the outline of a head.

The ‘Hand-Made’ project by Nadeem Haidary uses video frames of hands to create laser-cutter ready patterns for fabrication using slice-forms.


Trace Modeler

The videos below show how the geometry can be created using anything from shapes drawn on paper, to physical objects, to parts of the body such as hands.


Output

Geometry can be exported as an .STL file (thanks to the ofxSTL library!). This opens up a number of possibilities for working with the mesh in other software as well as for fabrication using a 3d printer. Here is a screen-shot of an exported mesh in a 3d viewer.


Reflection

Processing the geometry was a little more complicated than I thought it would be. There are still problems with the normals that need to be fixed. The mesh resolution is also based on the first slice, with all subsequent slices resampled to match that number of points.
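
For reference, one straightforward way to resample a contour to a fixed number of points is to walk the polyline by arc length and emit evenly spaced samples (an illustrative Processing-style sketch, not necessarily how Trace Modeler does it):

// resample a closed contour to exactly n points, evenly spaced by arc length
ArrayList<PVector> resample(ArrayList<PVector> contour, int n) {
  if (contour.size() < 2 || n < 1) return new ArrayList<PVector>(contour);

  float total = 0;
  for (int i = 0; i < contour.size(); i++) {
    total += PVector.dist(contour.get(i), contour.get((i + 1) % contour.size()));
  }

  ArrayList<PVector> out = new ArrayList<PVector>();
  float step = total / n;
  float travelled = 0;                 // arc length covered up to the start of the current segment
  int i = 0;
  PVector a = contour.get(0), b = contour.get(1);
  float seg = PVector.dist(a, b);

  for (int k = 0; k < n; k++) {
    float target = k * step;
    while (travelled + seg < target) { // advance to the segment containing this sample
      travelled += seg;
      i++;
      a = contour.get(i % contour.size());
      b = contour.get((i + 1) % contour.size());
      seg = PVector.dist(a, b);
    }
    float t = (seg == 0) ? 0 : (target - travelled) / seg;
    out.add(PVector.lerp(a, b, t));
  }
  return out;
}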

I am interested in implementing radial displacement in the future, in a similar way to the Spatial Sketch project I worked on last year. This would allow the construction of a more diverse range of forms.

I envision this as a general purpose tool that can be used to create 3d forms in an unconventional way. With careful consideration of camera placement, lighting and the objects used I think there are some interesting things that can be done with this as a tool. I am interested in releasing it in the near future to see what people make.

Project 3: Creature Physics

by guribe @ 9:55 am

Watch the demo here.

Where the idea came from

I originally wanted this project to include physics in a playful way like in the project Crayon Physics. Later, when looking at examples of interactive projects during class, I noticed the various experiences that could be created just through drawing on a screen. I eventually decided to combine the two ideas (physics and drawing) with this project.

My work process

Programming for this project was difficult for me. I ended up looking at various examples of source code from the Processing website. After figuring out the code, I was slowly able to implement my own physics and spring simulators.
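
A minimal sketch of the kind of spring simulation involved, in the spirit of the Processing physics examples mentioned above (a generic Hooke's-law spring with damping and gravity, not the project's own code):

float restLength = 100;   // natural length of the spring
float k = 0.1;            // spring stiffness
float damping = 0.95;     // velocity damping per frame
PVector anchor, pos, vel;

void setup() {
  size(400, 400);
  anchor = new PVector(width / 2, 50);
  pos = new PVector(width / 2, 250);
  vel = new PVector();
}

void draw() {
  background(255);

  // Hooke's law: force is proportional to how far the spring is stretched
  PVector dir = PVector.sub(pos, anchor);
  float stretch = dir.mag() - restLength;
  dir.normalize();
  PVector springForce = PVector.mult(dir, -k * stretch);
  PVector gravity = new PVector(0, 0.5);

  vel.add(springForce);
  vel.add(gravity);
  vel.mult(damping);
  pos.add(vel);

  stroke(0);
  line(anchor.x, anchor.y, pos.x, pos.y);
  fill(0);
  ellipse(pos.x, pos.y, 20, 20);
}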

My self-critique

Although the project is engaging and fun, it still feels unfinished to me. The “scribble monsters” could be more interesting by changing their expressions or by interacting with one another. I believe I could easily take what I have now and eventually create something more engaging like a game or phone/iPad application.

Looking Outwards: Augmentation

by guribe @ 9:05 am

This is an interesting augmented sculpture. You can read more about it here.

Augmented Sculpture

In January 2010 the Cologne-based design agencies Grosse8 and Lichtfront presented their cross-media installation titled Augmented Sculpture. The core of the installation is a 2.5m tall wooden form that forms the screen for a 360° projection.

In constant transformation over a score by Jon Hopkins, the 2:32 minute performance is described by Svenja Kubler of Lichtfront as a “mirror of changing realities… a kind of real virtuality arises to confront virtual reality.” I’m not sure what that all means but I really like it.

Looking Outward: Freestyle

by guribe @ 9:04 am

The N Building by Alexander Reeder is a building people can interact with through their phones. More can be read about it here.

Video: N Building

N Building is a commercial structure located near Tachikawa station amidst a shopping district. Being a commercial building signs or billboards are typically attached to its facade which we feel undermines the structures’ identity. As a solution we thought to use a QR Code as the facade itself. By reading the QR Code with your mobile device you will be taken to a site which includes up to date shop information. In this manner we envision a cityscape unhindered by ubiquitous signage and also an improvement to the quality and accuracy of the information itself.

Project 3: Spider Eggs(-ish)

by Michael Hill @ 4:33 am


This was an attempt at creating an interface that would "spin" the user's drawing around a defined object. I drew inspiration from the way a spider spins silk around its prey.

Download and run it here.

Successes

Overall, I would consider this project a success.  It functions more or less the way I want, with the exception of a few aesthetic problems.

Issues

1. Colors get distorted at the top and bottom of the shape. I think this has to do with how colors are randomized and limited.

2. I tried adding lighting to the object, but because it is made out of lines, the shadows don't work very well. If I were to continue working on it, I would like to try drawing shapes instead of lines. Doing so would allow me to outline the shapes with a different color, making each "stroke" more visible.

3. controlP5 fails to work on some of the sliders when the sketch is placed online. I think this is due to how it handles bindings to functions and variables.

**UPDATES**

1. The application now has instructions.

2. The program doesn't seem to work very well online. I think it has to do with how controlP5 handles binding. In light of this, I now have a Mac executable available for download: spiderNest.zip

speaker

by Cheng @ 4:30 am

speaker is an interactive gadget that sculpts wires of sound as the people around it talk.

Inspirations
The idea started from a discussion about interactive fabrication with Golan and Karl, when we brainstormed what we could take from real life to inform the creation of artifacts. Later, when I saw Peter Cho's takeluma, I decided to make a machine that physically makes the shape of sound.

takeluma

For a while I considered cutting/extruding pasta into tasty sound, but food safety and dough feeding made it a bit difficult given the time I had (I still hope to do it someday!). Wire bending, on the other hand, plays nicely with the shape of a sound wave. It also offers some unexpectedness as the wire extrudes and bends into form.

I collected some manual and commercial wire-bending examples and came up with my design. A stepper drives a pair of rubber wheels that push the wire forward. A servo waits at the end and busily turns around, bending the wire to various angles. Extra supports and guides are added to keep the wire flowing without tangling.

Implementation
Material List

  • servo motor
  • stepper motor
  • toy car wheel with rubber tire
  • hardboard as a mounting base, part of which is covered with a polythene sheet to reduce friction
  • rolls of metal wire
  • copper tubing as wire guide, and piano wire as wire bender
  • microphone, op amp, H-bridge, and Arduino board

The whole system

Wire flow

A test of "speaking" arcs

    speakCurve from Cheng on Vimeo.

Sound is a rich source of data; you can pick volume, pitch, tempo, timbre (signal-to-noise ratio, emotional impact…), or any of them combined, and map them into shapes. In this prototype, I picked volume. As the user speaks into the mic, the Arduino compares the averaged volume of successive short windows of time. For a rising value the servo bends the wire CCW, and for a falling value CW.
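
An illustrative sketch of that mapping, written here as Processing-style Java rather than the Arduino code the piece actually runs (noise() stands in for the averaged mic volume, and bend() is a placeholder for the servo command):

float previousAvg = 0;
float sum = 0;
int samples = 0;
int windowSize = 30;   // assumed number of readings per averaging window

void setup() {
  size(200, 200);
}

void draw() {
  float level = noise(frameCount * 0.02);   // stand-in for one mic volume reading
  sum += level;
  samples++;
  if (samples < windowSize) return;

  // compare this window's average volume against the previous window's
  float avg = sum / samples;
  bend(avg > previousAvg ? 1 : -1);         // rising volume bends CCW, falling bends CW
  previousAvg = avg;
  sum = 0;
  samples = 0;
}

void bend(int direction) {
  println(direction > 0 ? "bend CCW" : "bend CW");
}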

    Future work

A lot of time was devoted to separating the power sources for the microphone, stepper, and servo so that they don't interfere. I still have issues with the stepper. One major problem of the system is real-time response. The default Arduino stepper control is blocking – sound sampling is paused while the stepper turns, and even one step at a time breaks the flow. I need to find another control strategy, and an optimal update rate.

Beyond the engineering issues, I would also like to consider where this system will go. Would it be a real-time jewelry maker? A toy? An exhibition piece? Would it be interactive? Real-time interactive? Or just a wall of names and corresponding bent wires? Could the wire be bent into a 3D labyrinth? Could the project be scaled up to generate public sculpture? Or be a kinetic sculpture itself (a snake robot?)…

Project 3: [Chocolate, Chocolate, Add some milk] [Kuan-Ju]

    by kuanjuw @ 12:44 am

    Concept

There is a game called "chocolate chocolate add some milk" which is played by 4 to 5 people standing in a line.
It starts with the first player, who does some moves in the rhythm of "chocolate chocolate add some milk." After the first round, the second player duplicates the first player's move while the first player is creating a new move. Then the third player duplicates the second player's move, and so on.

In this project I used a webcam to record the moves. After each round, it plays back the frames that have been recorded right next to the real-time frames, and then cascades the video for the remaining players.
For the implementation, I first captured frames from the webcam and drew each one as a grid of rectangles, while also saving it into an image array. Finally, I cascaded the videos in sequence, with a 100-frame delay for each video.
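
A minimal Processing sketch of this delayed-playback idea (an illustrative reconstruction, not the project's code): keep a rolling buffer of captured frames and draw the live feed next to a copy delayed by 100 frames.

import processing.video.*;

Capture cam;
PImage[] buffer = new PImage[100];   // rolling buffer giving a 100-frame delay
int head = 0;

void setup() {
  size(640, 240);
  cam = new Capture(this, 320, 240);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();

  buffer[head] = cam.get();                 // store the newest frame
  int delayed = (head + 1) % buffer.length; // slot holding the frame from ~100 frames ago
  head = delayed;                           // that slot is overwritten on the next draw

  image(cam, 0, 0);                         // live feed on the left
  if (buffer[delayed] != null) {
    image(buffer[delayed], 320, 0);         // delayed copy on the right
  }
}

Each additional player in the cascade would read from a slot further back in a correspondingly longer buffer.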

    Untitled from kuanjuwu on Vimeo.

The project uses a webcam and a projector. Good lighting conditions are required.

I wore a black shirt and gloves to enhance the quality of the video capture.

    Augmenting with Optical Flow

    by paulshen @ 12:16 am

    http://in.somniac.me/2010/03/03/augmenting-optical-flow/

    Project 3 – The Secret Word of the Day

    by sbisker @ 9:15 pm 2 March 2010

    I’m interested in interactions that people can have with digital technology in public spaces. These ideas are not new, but digital technology has only recently reached the cost, effectiveness and social acceptability where someone can actually turn a crazy idea about a public interaction with computers into a reality.

    As soon as I heard this project was about “real-time interactions”, I got it in my head to try to recreate the “Secret Word” skit from the 1980’s kids show “Pee Wee’s Playhouse.” (Alas, I couldn’t find any good video of it.) On the show, whenever a character said “the secret word” of each episode, the rest of the cast of characters (and most of the furniture in the house, and the kids at home) would “scream real loud.” Needless to say, the characters tricked each other into using the secret word often, and kids loved it. The basic gist of my interaction would be that a microphone would be listening in on (but not recording, per privacy laws) conversations in a common space like a lab or lobby, doing nothing until someone says a very specific word. Once that word was said, the room itself would somehow go crazy – or, one might say, “scream real loud.” Perhaps the lights would blink, or the speakers would blast some announcement. Confetti would drop. The chair would start talking. Etcetera.

    Trying to build this interaction in two weeks is trivial in many regards, with lights being controlled electronically and microphones easily (and often) embedded in the world around us. However, one crucial element prevents the average Joe from having their own Pee Wee’s Playhouse – determining automatically when the secret word has been said in a casual conversation stream. (Ok, ruling out buying humanoid furniture.) For my project, I decided to see if off-the-shelf voice recognition technology could reliably detect and act on “secret words” that might show up in regular conversation.

In order to ensure that the system could be used in a public place, I needed a voice recognition system that didn't require training. I also needed something that I could learn and play with in a short amount of time. After ruling out open-source packages like CMU Sphinx, I decided to experiment with the commercial packages from Tellme, and specifically, their developer toolkit (Tellme Studio). Tellme, a subsidiary of Microsoft, provides a platform for designing and hosting telephone-based applications like phone trees, customer service hotlines and self-service information services (such as movie ticket service Fandango).

    Tellme Studio allows developers to design telephone applications by combining a mark-up development language called VoiceXML, a W3C standard for speech recognition applications, with Javascript and other traditional web languages. Once applications are designed, they can be accessed by developers for testing purposes over the public telephone network from manually assigned phone numbers. They can also be used for public-facing applications by routing calls through a PSTN-to-VoIP phone server like Asterisk directly to Tellme’s VoIP servers, but after much fiddling I found the Tellme VoIP servers to be down whenever I needed them – so for now, I thought I’d prototype my service using Skype. Fortunately, the number for testing Tellme applications is a 1-800 number, and Skype offers free 1-800 calls, so I’ve been able to test and debug my application over Skype free of charge.

    So how would one use a phone application to facilitate an interaction in public space? The “secret word” interaction really requires individuals to not have to actively engage a system by dialing in directly – and telephones are typically used as very active, person to person communication mediums. Well, with calls to Tellme free to me (and free to Tellme as well if I got VoIP working), it seemed reasonable that if I could keep a call open with Tellme for an indefinite amount of time, and used a stationary, hidden phone with a strong enough microphone, I could keep an entire room “on the phone” with Tellme 24 hours a day. And since hiding a phone isn’t practical (or acceptable) for every iteration of this work, I figured I could test my application by simply recording off my computer microphone into a Skype call with my application in a public setting (say, Golan conversing with his students during a workshop session.)

Success! It falsely detects the word "exit", but doesn't quit.

    In theory, this is a fantastic idea. In practice, it’s effective, but more than a little finicky. For one, Tellme is programmed to automatically “hang up” when it hears certain words that it deems “exit words.” In my first tests, many words in casual conversation were being interpreted as the word “exit”, quitting the application within 1-2 minutes of consistent casual conversation. Rather than try to deactivate the exit words feature entirely, I found a way to programmatically ignore exit events if the speech recognition’s confidence in the translation was below a very high threshold (but not so high that I couldn’t say the word clearly and have it still quit.) This allowed my application to stay running and translating words for a significant amount of time.

A bit of Tellme code, using Javascript to check the detected word against today's secret word

    Secondly, a true “word of the day” system would need to pick (or at least be informed of) a new word to detect and act on each day. While the Tellme example code can be tweaked to make a single word recognition system in 5 minutes, it is harder (and not well documented) how to make the system look for and detect a different word each day. The good news is, it is not difficult for a decent programmer to get the system to dynamically pick a word from an array (as my sample code does) and have it only trigger a success when that single word in the array is spoken. Moreover, this word can be retrieved over an AJAX call, so one could use a “Word of the Day” service through a site like Dictionary.com for this purpose (although I was unable to get corporate access to a dictionary API in time.) The bad news is, while VoiceXML and Tellme code can be dynamically updated at run-time with Javascript, the grammars themselves are only read in at code compile time. Or, translated from nerd speak, while one can figure out what words to SAY dynamically, one needs to prep the DETECTION with all possible words of the day ahead of time (unless more code is written to create a custom grammar file for each execution of the code). Unfortunately, the more words that are added to a grammar, the less effective it is at picking out any particular word in that grammar – so one can’t just create a grammar with the entire Oxford English dictionary, pick a single word of the day out of the dictionary and call it a day. So in my sample code, I give a starting grammar of “all possible words of the day” – the names of only three common fruits (apple, banana and coconut). I then have the code at compile time select one of those fruits at random, and then once ANY fruit is said, the name of that fruit is compared against the EXACT word of the day. However, server-side programming would be needed to scale this code to pick and detect a word of the day from a larger “pool” of possible words of the day.

    Finally, a serious barrier to using Tellme to act on secret words is the purposes for which the Tellme algorithm is optimized. There is a difference between detecting casual conversation, where words are strung together, versus detecting direct commands, such as menu prompts and the sorts of things one normally uses a telephone application for – and perhaps understandably, Tellme optimizes their system to more accurately translate short words and phrases, as opposed to loosely translating longer phrases and sentences. I experimented with a few ways of trying to coerce the system to treat the input as phrases rather than sentences, including experimenting with particular forms versus “sentence prompt” modes, but it seems to take a particularly well articulated and slow sentence for a system to truly act on all of the words in that sentence. Unfortunately, this particular roadblock is one that may be impossible to get around without direct access to the Tellme algorithm (but then again, I’ve only been at it for 2 weeks.)

    "Remember kids, whenever you hear the secret word, scream real loud."

    "Remember kids, whenever you hear the secret word, scream real loud."

    In summary – I’ve designed a phone application that begins to approximate my “Secret Word of the Day” interaction. If I am talking in a casual conversation, a Skype call dialed into my Tellme application can listen and translate my conversation in real time, interjecting with (decent but not great) accuracy with “Ahhhhhhh! You’ve said the secret word of the day!” in a strangely satisfying text-to-speech voice. Moreover, this application has the ability to change the secret word dynamically (although right now the secret word is reselected for each phone call, rather than “of the day” – changing that would be simple.) All in all, Tellme has proven itself to be a surprisingly promising platform for enabling public voice recognition interactions around casual conversation. It is flexible, highly programmable, and surprisingly effective at this task with very basic tweaking (in my informal tests, picking up on words I say in sentences about 50% of the time) despite Tellme being highly optimized for a totally different problem space.

Since VoiceXML code is pretty short, I've gone ahead and posted my code below in its entirety: folks interested in making their own phone applications with Tellme should be heartened by the inherent readability of VoiceXML and the fact that the "scary looking" parts of the markup can, by the by, be simply ignored and copy-pasted from sample code. That said, this code is derived from sample code, which is copyrighted by Tellme Networks and Microsoft, and should only be used on their service – so check yourself before you wreck yourself with this stuff. Enjoy!

<?xml version="1.0" encoding="UTF-8"?>

<!--
Solomon Bisker – The Secret Word of The Day
Derived from Tellme Studio Code Example 102
Copyright (C) 2000-2001 Tellme Networks, Inc. All Rights Reserved.
THIS CODE IS MADE AVAILABLE SOLELY ON AN "AS IS" BASIS, WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION,
WARRANTIES THAT THE CODE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A
PARTICULAR PURPOSE OR NON-INFRINGING.
-->

<vxml version="2.0">

  <!-- Does TellMe REALLY support javascript? We'll see. -->
  <!-- <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js"/> -->

  <var name="awotd"/>

  <script>
    var myWords = new Array("apple", "banana", "coconut");
    // gives us a random number between 0 and 2. This uniquely determines
    // our "secret word of the day"
    var randomnumber = Math.floor(Math.random()*3);
    awotd = myWords[randomnumber];
  </script>

  <!-- Shortcut to a help statement in both DTMF and Voice (for testing) -->
  <!-- document-level link fires a help event -->
  <link event="help">
    <grammar mode="dtmf" root="root_rule" tag-format="semantics/1.0" type="application/srgs+xml" version="1.0">
      <rule id="root_rule" scope="public">
        <item>2</item>
      </rule>
    </grammar>
    <grammar mode="voice" root="root_rule" tag-format="semantics/1.0" type="application/srgs+xml" version="1.0" xml:lang="en-US">
      <rule id="root_rule" scope="public">
        <item weight="0.001">help</item>
      </rule>
    </grammar>
  </link>

  <!-- The "Eject Button." document-level link quits -->
  <link event="event.exit">
    <grammar mode="voice" root="root_rule" tag-format="semantics/1.0" type="application/srgs+xml" version="1.0" xml:lang="en-US">
      <rule id="root_rule" scope="public">
        <one-of>
          <item>exit</item>
        </one-of>
      </rule>
    </grammar>
  </link>

  <catch event="event.exit">
    <if cond="application.lastresult$.confidence &lt; 0.80">
      <goto next="#choose_sat"/>
    <else/>
      <audio>Goodbye</audio>
      <exit/>
    </if>
  </catch>

  <!-- Should I take out DTMF mode? Good for testing at least. -->
  <form id="choose_sat">
    <grammar mode="dtmf" root="root_rule" tag-format="semantics/1.0" type="application/srgs+xml" version="1.0">
      <rule id="root_rule" scope="public">
        <one-of>
          <item>
            <item>1</item>
            <tag>out.sat = "sat";</tag>
          </item>
        </one-of>
      </rule>
    </grammar>

    <!-- The word of the day is either "processing" (static) or the word of the day from our array/an API -->
    <grammar mode="voice" root="root_rule" tag-format="semantics/1.0" type="application/srgs+xml" version="1.0" xml:lang="en-US">
      <rule id="root_rule" scope="public">
        <one-of>
          <!-- The dynamic word of the day -->
          <!-- WE CANNOT MAKE A DYNAMIC GRAMMAR ON PURE CLIENTSIDE
               DUE TO LIMITATIONS IN SRGS PARSING. WE MUST TRIGGER ON ALL THREE
               AND LET THE TELLME ECMASCRIPT DEAL WITH IT. -->
          <item>
            <one-of>
              <item>
                <one-of>
                  <item>
                    apple
                    <!-- loquacious -->
                  </item>
                </one-of>
              </item>
            </one-of>
            <tag>out.sat = "apple";</tag>
            <!-- <tag>out.sat = "loquacious";</tag> -->
          </item>
          <item>
            <one-of>
              <item>
                <one-of>
                  <item>banana</item>
                </one-of>
              </item>
            </one-of>
            <tag>out.sat = "banana";</tag>
          </item>
          <item>
            <one-of>
              <item>
                <one-of>
                  <item>coconut</item>
                </one-of>
              </item>
            </one-of>
            <tag>out.sat = "coconut";</tag>
          </item>
          <!-- The static word of the day (for testing) -->
          <item>
            <one-of>
              <item>
                <one-of>
                  <item>processing</item>
                </one-of>
              </item>
            </one-of>
            <tag>out.sat = "processing";</tag>
          </item>
        </one-of>
      </rule>
    </grammar>

    <!-- this form asks the user to choose a department -->
    <initial name="choose_sat_initial">
      <!-- dept is the field item variable that holds the return value from the grammar -->
      <prompt>
        <audio/>
      </prompt>
      <!-- User's utterance didn't match the grammar -->
      <nomatch>
        <!-- <audio>Huh. Didn't catch that.</audio> -->
        <reprompt/>
      </nomatch>
      <!-- User was silent -->
      <noinput>
        <!-- <audio>Quiet, eh?</audio> -->
        <reprompt/>
      </noinput>
      <!-- User said help -->
      <help>
        <audio>Say something. Now.</audio>
      </help>
    </initial>

    <field name="sat">
      <!-- User's utterance matched the grammar -->
      <filled>
        <!-- HERE ECMASCRIPT CHECKS FOR WORD MATCH -->
        <if cond=" sat == awotd ">
          <audio>I heard you say <value expr="awotd"/></audio>
          <goto next="#sat_dept"/>
        <!-- from old code -->
        <elseif cond=" sat == 'processing' "/>
          <goto next="#shortword_dept"/>
        <!-- Wrong word in grammar was said, spit back into main loop. -->
        <else/>
          <audio>You're close!</audio>
          <goto next="#choose_sat"/>
        </if>
      </filled>
    </field>
  </form>

  <form id="sat_dept">
    <block>
      <audio>Ahhhhhhh! You've said the secret word of the day!</audio>
      <goto next="#choose_sat"/>
    </block>
  </form>

  <form id="shortword_dept">
    <block>
      <audio>That's a nice, small word!</audio>
      <goto next="#choose_sat"/>
    </block>
  </form>

</vxml>

    Project 3: “Dandelion”

    by aburridg @ 6:12 pm

    Download a zip containing an executable of my project. To run it, unzip the downloaded file and click on the executable file named “proj3”:
    For Macs
    For Windows

    Here is a video of my art project. I placed it running in the Wean 5207 Linux Cluster for about 5 minutes. The video is sped up (so it’s like a little time lapse):

    Inspiration
    I knew I wanted to do something with audio at some point–since I have never worked with audio before at a code level. The idea to use a dandelion came from a dream, and because as a kid I was addicted to exploding dandelions. Another aspect I took into account was where I would ideally place this piece (if I ever continued with it and decided to prim it up). I would most likely put this in a generally quiet location–a museum, a library, a park. And, hopefully it would encourage people to interact with it by being loud (since this piece is more interesting the more sound you make).

    How the Project Works with the User
You start out with an intact dandelion. I included noise waves in the background because I thought it was cool and because, if I ever did exhibit this piece someplace quiet, it would hopefully give my audience a clue as to how to interact with it. When you make a loud enough noise, the petals will come off and float based on how loud you continue to be. If it is dead quiet the petals will stay at the bottom; otherwise they will ride on a simulated "wind" that is determined by the sound levels.

Project's Shortcomings
    I also realize that this project is not very visually pleasing…and runs a little slowly if all the petals come off at once. I know this is because of Processing and probably because I’m keeping track of too many variables at a time. Also, I know visually it is a little dull…if I had more time I would probably have the dandelions tilt more due to forces and make the stem move as well.

    If I had a lot more time, I would make the dandelions more complicated and probably try in 3D. I would also probably try to port it to Open Frameworks. But, for a small project…Processing seemed like a good choice.

    ESS Library
    For this project I used the ESS library (found here) to interpret real-time audio input from my laptop’s built in microphone.

    I also borrowed the basis of the code from Shiffman’s Flow.

    Coding Logistics
Instead of having a 2D array of vectors (so that the canvas was split up into a grid) like Shiffman, I only used a 1D array of vectors (splitting the canvas into columns). If a dandelion petal was within a column, it followed the flow vector of that column. To determine the flow vector of a column, I used the audio input: the angle and magnitude of each vector are determined by how loud the sound is on that column's corresponding audio input channel. The petals also follow some basic real-time forces (separation and gravity).
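
A rough sketch of that column-based flow field (an illustrative reconstruction; columnLevel() is a hypothetical stand-in for the per-channel loudness that the ESS library actually supplies):

int numColumns = 16;
PVector[] flow = new PVector[numColumns];

float columnLevel(int col) {
  // placeholder for the normalized (0..1) loudness of this column's audio channel
  return noise(col * 0.3, frameCount * 0.01);
}

void setup() {
  size(640, 480);
  for (int i = 0; i < numColumns; i++) flow[i] = new PVector();
}

void draw() {
  background(30);
  float colWidth = width / float(numColumns);
  stroke(255);
  for (int i = 0; i < numColumns; i++) {
    float level = columnLevel(i);
    float angle = map(level, 0, 1, -PI, 0);   // louder channel -> the vector swings around
    float mag   = level * 3;                  // louder channel -> stronger "wind"
    flow[i] = PVector.fromAngle(angle).mult(mag);

    // a petal inside column i would add flow[i] to its velocity each frame,
    // along with gravity and separation; here the vectors are just visualized
    float cx = (i + 0.5) * colWidth;
    line(cx, height / 2, cx + flow[i].x * 20, height / 2 + flow[i].y * 20);
  }
}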
