Category Archives: Uncategorized

Anna

25 Feb 2013

G’d’eve, folks. Here’s a nifty trio of interactive things I’ve discovered while perusing the internet this week.

Daily Dose of Shakespeare: Stubbornness

[C]aliban Robot Artificial Shakespearean Stubbornness… aka “CRASS”
This ugly little abomination locates humans, targets them, and delivers caustic Shakespearean insults originally attributed to the wicked Caliban from The Tempest (you may recall me moaning on a previous blog post about people forgetting this play, so I’m pretty excited that somebody used it)! Usually, I don’t like ugly stuff, but the choice to make this little dude as completely wretched-looking as possible is frankly hilarious to me, given the character of Caliban. The robot doesn’t allow people to respond to its insults, which in a certain light could be viewed as a shortcoming of the interaction, but the artists provide a pretty adorable rationale for why two-way dialogue isn’t possible with their monster. They invent the concept of “artificial stubbornness”, explaining that in normal conversation between two humans, stubbornness occurs when one human isn’t capable of modifying their position or opinion based on feedback from another. The robot, they say, is merely exhibiting the same behavior, but “artificially”… because it simply can’t listen. A good example of a clever narrative compensating for technical limitations—or, maybe, a piece of interactive art created specifically to fit a clever narrative.

Not for all those insect-phobic people, I guess…


Delicate Boundaries
This is an older piece which I just happened to stumble across. Little glowing bug-like creatures swarm out of a screen and onto participants, crawling across them much the same way a parade of ants might crawl across your shoe. It seems pretty simple, but I really like the clean execution, and the message it’s trying to convey about the boundaries between virtual and ‘actual’. The artist seems to want to make a point about how uncomfortably and unexpectedly invasive digital technology is becoming in our lives, and the use of creatures that resemble bugs or bacteria of some sort really drives home the metaphor for me. I’d like this piece more if the bugs somehow had a bit more substance when they left the screen, so that it wasn’t so obvious that they are just light projected onto clothing. I feel like advancements have been made since 2007 that would allow for 3D hologram-like creatures that would prove much more startling.

Everything is better with watercolors…


Starlay
This interactive comic for the iPad has been all over blogs this week, and although I’m not utterly blown away by the interactivity (it seems like very standard, game-like, touch-and-discover mechanics), I really do appreciate the art style. The hand-drawn lines and broad watercolor splashes really make this experience something lovely.

Joshua

24 Feb 2013

.fluid

This project involves a speaker, non-Newtonian fluid, and a touch-sensitive table surface. Non-Newtonian fluids are fluids in which the rate of deformation is not linearly related to the forces trying to deform the fluid. One type of non-Newtonian fluid, the kind used in this video (probably cornstarch and water), gets more viscous as it gets more agitated. If this fluid is placed on top of a speaker and vibrated at high frequencies, it begins to thicken and can form little towers and blobs. It appears that in this project the interactive component involves controlling the frequency (or perhaps also the amplitude) of the speaker. I enjoy that multiple people can contact the table, and that the effects of this are fairly visible in the behavior of the fluid. In fact, I think I am more interested in the liquid itself. I wonder how the sensors work.
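The "gets more viscous as it gets more agitated" behavior can be sketched with the power-law (Ostwald–de Waele) model of non-Newtonian fluids, where apparent viscosity grows with shear rate whenever the flow index n is greater than 1. The constants below are purely illustrative, not measured values for cornstarch and water:

```python
def apparent_viscosity(shear_rate, k=1.0, n=1.8):
    """Power-law (Ostwald-de Waele) fluid: eta = k * shear_rate**(n - 1).

    For a shear-thickening fluid like cornstarch-and-water, n > 1,
    so agitating it faster makes it effectively more viscous.
    k and n here are illustrative placeholders, not measured values.
    """
    return k * shear_rate ** (n - 1)

# Doubling the shear rate raises the apparent viscosity when n > 1,
# which is why the vibrating speaker can prop up towers and blobs.
low = apparent_viscosity(10.0)
high = apparent_viscosity(20.0)
assert high > low
```

A Newtonian fluid is the special case n = 1, where the same formula collapses to a constant viscosity k.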

Here is another example of a non-Newtonian fluid on a speaker.

 

Interactive Robotic Painting Machine

This project uses a genetic algorithm to create various iterations of strokes on a canvas. The GA takes inputs from a microphone to somehow evaluate a given sequence of strokes, and creates a new sequence based on those external inputs. The machine has the ability to listen to itself. Unfortunately, the website offers little detail about the GA: how exactly it processes the sound input, or what it is optimizing for. The general concept is fascinating, and the machine itself is beautiful.
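Since the website doesn't document the GA, here is only a generic sketch of how such a loop could work. The representation (a stroke reduced to one number) and the idea of plugging in a microphone-derived fitness function are my assumptions, not details from the project:

```python
import random

def evolve_strokes(fitness, population_size=20, stroke_count=8, generations=50):
    """Generic genetic-algorithm loop over stroke sequences.

    A stroke is reduced here to a single number (say, an angle), and
    `fitness` is any caller-supplied score. In the painting machine the
    score would presumably come from the microphone, but that part is
    undocumented, so this is only a structural sketch.
    """
    pop = [[random.uniform(0, 360) for _ in range(stroke_count)]
           for _ in range(population_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: population_size // 2]          # keep the fitter half
        children = []
        for _ in range(population_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, stroke_count)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(stroke_count)         # point mutation
            child[i] = random.uniform(0, 360)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness standing in for the audio evaluation: prefer strokes near 180.
best = evolve_strokes(lambda s: -sum(abs(x - 180) for x in s))
```

The interesting open question from the post remains: what the real machine's fitness function rewards when it "listens to itself."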

 

Pulse

Click the link to see the video (it can’t be embedded without permission; oh well).

A little physical graph. I kind of like this because it could go in so many directions. It makes me think of some sort of configurable sculpture: sculptures that are visualizations of data. The idea of a piece of string being pulled by motors is simple and could be modified in many ways. The string could be stretchy, and the motors could be replaced with linear actuators, or a combination of linear actuators and servos to allow for depth change. I don’t like how slow and jerky this model is, but I am sure that with some nice servos and more wires it could be pretty slick.
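The core of such a sculpture is just a mapping from data points to actuator positions. A minimal sketch, with the servo sweep range assumed (a typical hobby servo, not a detail from the Pulse piece):

```python
def data_to_servo_angles(values, angle_min=0.0, angle_max=180.0):
    """Linearly map data points onto servo angles so the pulled string
    traces the data as a physical graph. The 0-180 degree range is a
    typical hobby-servo sweep, assumed rather than taken from Pulse."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid divide-by-zero on flat data
    return [angle_min + (v - lo) / span * (angle_max - angle_min)
            for v in values]

# Four data points become four servo positions along the string.
angles = data_to_servo_angles([3, 7, 5, 10])
```

Swapping the linear map for an eased or smoothed one would be one way to address the slow, jerky motion the post complains about.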

Bueno

06 Feb 2013

First up, The Art of Reproduction by the duo Fernanda Viégas and Martin Wattenberg, a project done in 2011. This project might be considered a unique curatorial perspective – the internet as a museum.

Viégas and Wattenberg gathered up as many digital copies of images of a select few famous artworks as they could. Then, they coded up a program that would construct a mosaic out of components of each reproduction, forming a new whole imitative of the original painting/photograph. The huge variances in color are astounding. Even dimensions and proportions do not remain constant, thanks to small croppings of the images here and there. The resulting visualization is a concise observation of the inaccuracies of (digital) artistic reproduction.

http://hint.fm/projects/reproduction/
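The actual program isn't published in detail, but conceptually it could work like this minimal sketch, which fills an output grid by sampling each cell from a different reproduction. Representing images as 2D pixel lists pre-scaled to one size is a simplification I'm assuming for illustration:

```python
def mosaic(reproductions, grid=4):
    """Assemble an output image by taking each grid cell from a
    different reproduction, cycling through them. Images are plain
    2D lists of pixel values, all assumed pre-scaled to the same
    size -- a toy stand-in for what Viegas and Wattenberg did."""
    h = len(reproductions[0])
    w = len(reproductions[0][0])
    out = [[0] * w for _ in range(h)]
    ch, cw = h // grid, w // grid
    for gy in range(grid):
        for gx in range(grid):
            src = reproductions[(gy * grid + gx) % len(reproductions)]
            for y in range(gy * ch, (gy + 1) * ch):
                for x in range(gx * cw, (gx + 1) * cw):
                    out[y][x] = src[y][x]
    return out
```

The striking part of the real piece is exactly what this sketch exposes: adjacent cells drawn from different copies of the "same" image refuse to match in color.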

 

Next is a visualization that I feel more ambivalent about. Note that, with the current goings-on in the US, I am quite invested in the topics of gun violence and gun legislation. I was even considering trying to tackle them for a while as part of this project. That said, Periscopic’s U.S. Gun Murders in 2010 seems to go somewhat against the normal grain of infovis.

The graph consists of curved lines over an axis representing time. Each line is a person’s life. At some point each line switches from yellow to gray, representing the point where that person was killed by someone with a gun. The rest of the trajectory represents the life they could have lived. Now, for me the problem is this last bit, the blatant speculation on the part of Periscopic. While the graph is less visually striking without such a feature, it seems a tad dishonest or ill-considered. Should infovis consist solely of hard facts? I always thought so.

http://infosthetics.com/archives/2013/02/us_gun_murders_in_2010_an_alternative_view.html

This last one is really cool, though it isn’t strictly infovis in that it references no concrete data set. It does, however, help us to visualize the ever-present but always invisible electromagnetic fields, radio waves, etc. They physically affect our world, never seen, never heard, but integrated into our surrounding space.

The light sculptures that Anthony Dunne and Fiona Raby created in this series, Immaterials, have no real tangibility, of course. But they are beautiful, and certainly a good use of an old technique.

http://www.onformative.com/work/immaterials/

Sam

06 Feb 2013

When I talk about Computer Club, I often introduce the club as having racks of servers squirreled away in Cyert B-level, happily grinding away at running a variety of services. But there hasn’t been a good way to communicate to people the scale and activity of our systems. I plan to construct a real-time visualization of the status of the Computer Club machine room, making it available directly from our webservers so that anyone can see what’s going on in real time.

I plan to collect data from as many of our servers as possible and aggregate it in a database to drive a web-based infographic. I was inspired by the Planetary project by Bloom, which visualizes the user’s music collection as solar systems in a galaxy, and saw that Computer Club’s infrastructure has a similar three-tiered structure which makes it ideal for such a visualization.
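The collection step might look like this sketch: each server reports a small metrics record, and an aggregator merges them into one snapshot the web infographic can poll. All field names here are my invention, not Computer Club's actual schema:

```python
import json
import time

def aggregate(reports):
    """Merge per-server metric reports into one snapshot dict that a
    web front end can poll as JSON. The field names ("users",
    "processes", "load") are hypothetical placeholders."""
    snapshot = {
        "generated_at": int(time.time()),
        "servers": {},
        "totals": {"users": 0, "processes": 0},
    }
    for r in reports:
        snapshot["servers"][r["host"]] = r
        snapshot["totals"]["users"] += r.get("users", 0)
        snapshot["totals"]["processes"] += r.get("processes", 0)
    return snapshot

reports = [
    {"host": "web1", "users": 3, "processes": 120, "load": 0.4},
    {"host": "db1", "users": 1, "processes": 85, "load": 1.2},
]
print(json.dumps(aggregate(reports)["totals"]))  # {"users": 4, "processes": 205}
```

Keeping both per-server records and rolled-up totals in one snapshot matches the three-tiered galaxy idea: totals drive the outer view, individual records drive the drill-down.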

vis_sketch

I anticipate the challenges in this project will be in incorporating the “smaller” data, such as active users and processes, into the visualization without either introducing clutter or making them invisible, and ensuring that the data maps in an intelligible way.

Sam

06 Feb 2013

Embers (The Digital Artists)

Embers isn’t a particularly impressive demo to look at. What is impressive is this:

6370202430202f746d702f7a3b287365642031642024307c7a6361740a1f
8b0810293e245f3b245f0a290a003af7ebed3f7606086002621620be02e5
b3027100103302b1000341c00863bcc3a3c08280fe440e08871d49820f88
6580980788f54b8b8bf4733293f4532a7352a06206402c816698737e727e
a25e5a51626e6a797e51b63e98cfd0fc46a3f9c9871dcc4005bb415edaff
0d684f1410c4860631e72a74fbca3ce269a855f329fdc41c6aa21d6af388
a3f31a73a90e73a84ac34bced2bb13631312988d66f2af67326273b3122b
fd66ccae5a5f5fcf90d81bc453fa9439eb4fa776626f34cb7aff47d70ddf
32b127b4ff75075b9390f05fb8d345c52311240be5a82073421213418ab1
985cba37713d314ecbced5e1617c45ba034be765e8005d1811181416fe5f
203e2e2a32c30095bfff1c301d743eeb0d63c9304195e9b633fc96618126
66d5f93bc307552c2e8b71df3d06a0218f325cd064c2321cd06c33e86660
c88841159c1e91710de808c3931919686e0d41e577fecb8840d39a1180a6
25018d5f8066c4f30c15342fe9a86468a009d9a8743ecb9040b7bcf37d86
008642c36f86977646b1007d25832ad77c90039b9002314241617759546e
387eb92ba31217d56cc9507a0268b7089a7bde03ddc883e1bb1c5411a039
b76554e2c31bff4f034651b167162310fecffa0f0e9b1434ddf7336c5045
121280267aa08a251e7c279658dc78c00692130fa642b3a4543d90507064
3482d00c4a207a8103a32984669087f22d21f4a87a22d533406848890d01
009ec868c74d1888a2fb29798a3c309ed859e8b26bdc3e54ea17f4adaa2a
b476ac4818538752bad5fe7b8d49916024cebdbae36b2e7de8a643c4193d
0e9a243a2d09ad168446bbfec757bb7c0e219a6fe23b2d054935dbd7c743
d04388beebaf6f9625d397d8b9eca2e5cfdba91182f86a63d420d5c8e909
006f38aa6dd9c4728683bf9710995ff78dda614c3b556c6595669921d767
2afc7560be5bd8885c12e0face899ffe7313a6a48d8074ae37d9ebb9f0e5
96cae7620415edf42b0eecf6334ebb1fd6b89357ef73b89a83ef52dc7620
35b4cf743cba8f24845443a95d21e903da529390e0f4c46e3a1443c94c5b
d1a7dcd3f027427ea667ace9115e4cdbdc059ebe310fc3a52078c9b44ab0
4a66c31b0200b5df601fa24e8e8a49c2e411941e28d8187eef55a3bdf4f6
75bb42960db9f3fe3b36219db1cc8eb5c33d07b8c39d2794a53baf2d1735
00608015619d466941bd3ffc5b4ca2217f824ea576264752109f7e499dbd
aab38e2d936e8a1c4b85b24a60858011b33d9f36936b6e48616a7449be28
6b9e62b499704401ab932f43ae7872864c61206392432c67ae18174b6e51
818c6d8e3f8f8e4969b17ea68e505171997069a65f3e8354948d5ab15436

This is the entire compiled code for Embers. It is only 1020 bytes, and yet the demo manages a soundtrack and a variety of procedurally generated environments rarely seen at this size. Tiny demos are all about unwinding intricate generative scenes from obscenely small amounts of code, and Embers represents an immense step forward in that regard.
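A curious detail you can check yourself: decoding the first line of the hex dump back into bytes reveals what looks like a shell self-extraction stub, followed by the gzip magic bytes (0x1f 0x8b) that open the compressed payload. This reading is my own inference from the bytes, not something stated above:

```python
# First line of the hex dump above, decoded back into raw bytes.
first_line = "6370202430202f746d702f7a3b287365642031642024307c7a6361740a1f"
raw = bytes.fromhex(first_line)

print(raw[:-1])      # b'cp $0 /tmp/z;(sed 1d $0|zcat\n' -- a shell stub
print(hex(raw[-1]))  # 0x1f, the first byte of the gzip magic (0x1f 0x8b)
```

In other words, the file apparently doubles as a shell script that copies itself, strips its own first line, and pipes the rest (gzip data) through `zcat`, a classic trick for squeezing a demo under a byte limit.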

Silk (Yuri Vishnevsky and Mat Jarvis)

weavesilk

Silk intrigues me because it has taken an extremely simple idea, that of building curves with the user’s mouse, and yet it has been executed in a way that creates a very rich sculptural experience for the user. The fade-off of the colors and basic applications of symmetry present an easily-learned interface and quickly produce undulating surfaces and voids, almost as alien hallways out of science fiction. One part of me yearns for more controls and capabilities in the interface: wider color selections, more complex symmetries, methods of rotation and translation. Yet at the same time, I feel that the project would become lost under those extensions, and that the present, limited interactions are already sufficient to produce intriguing sketches.

ANGELINA project (Michael Cook)

angelina_santa

ANGELINA designs games. She is an ongoing project developed by Michael Cook to produce an artificial intelligence which can generate games without any human input. The project began with simplistic collision-based games, and has evolved towards side-scrolling adventures, reminiscent of the original evolution path of human-designed games. ANGELINA, originally dependent on the work of Cook and others to provide many of the underpinnings of her games, now engineers all of the mechanics of the games herself, and is growing to develop even the images and music for games unaided, all while responding to feedback from real-world users of the games. I find this project especially intriguing because computer-invented computer games seem unlikely to experience the cultural shunning that other forms of artificial art have encountered from the existing community of creators, simply because the idea of computers doing fantastic things all on their own is already part of the paradigm.

Michael

06 Feb 2013

Sifteo Cube Gigaviewer

Screen Shot 2013-01-27 at 8.58.07 PM

This one’s fairly straightforward.  I really liked what I did with the Sifteo cubes in Project 1, and I’d like to expand it so that the Sifteo cubes can actually be used to explore very high resolution images from a sort of ant-on-a-page perspective.  I’ve already got some code that I’ve made since project 1 that auto-chops images into nice Sifteo-sized bits and then rewrites the LUA file accordingly.  This project would involve packaging all of that up and ideally using the (up and coming) Sifteo USB connection to upload new high-resolution images daily.  This way, the cubes could be an auto-updating installation in a classroom or gallery.

Here are the parts of the project, from the image to the cubes (with my classification of each):

1. Get the newest image from a dropbox or git repository (Probably trivial)

2. Write processing script to chop images and autogenerate a LUA script (Pretty much done)

3.  Regularly run the processing script, re-compile, and re-upload to the Sifteo base (Maybe not too hard)

4.  Figure out how to rotate images (Should be easy… need to talk to Sifteo people)

5.  Devise a scheme for managing asset groups better on the limited cube resources (tough but interesting)

6.  Devise a scheme to predict which asset group will be needed next and load in a timely manner to keep the interaction smooth (Hard but very interesting and possibly publishable)
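Step 2 above, chopping an image into cube-sized tiles, reduces to something like this sketch. The real script runs in Processing and also rewrites a Lua asset file; this is a simplified Python equivalent with a toy tile size:

```python
def chop(image, tile=2):
    """Split a 2D pixel grid into tile x tile chunks, row-major --
    the core of auto-chopping an image into Sifteo-screen-sized
    pieces. Real Sifteo screens are 128x128 pixels; `tile` is kept
    tiny here purely for illustration."""
    h, w = len(image), len(image[0])
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append([row[x:x + tile] for row in image[y:y + tile]])
    return tiles

# A 4x4 "image" of pixel values 0..15 becomes four 2x2 tiles.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
pieces = chop(img)
```

The hard parts of the project (steps 5 and 6) then become questions of which of these tiles to keep resident in the cubes' limited asset memory, and which to prefetch next.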

 

I think there’s some really cool potential here for having to piece together the “big picture” through little windows to understand what you’re looking at.  For example what’s this?

crop

 

Scroll down!

 

 

 

 

It’s my arm!

arm2

Patt

06 Feb 2013

I stumbled upon the “My Milk Toof” blog by Inhae Lee a while back. As the name suggests, it tells the story of two little tooth characters, Ickle and Lardee, who keep the blog alive. The characters are made out of polymer clay and painted with acrylic, and there are about five models of each with different expressions. Making each character by hand not only takes a lot of time, but also sets boundaries and limitations on what the shapes of the characters can look like.

With the two characters in mind, I want to create a tool that can generate characters in various forms while still maintaining the same fundamental shape (i.e. a tooth). I am hoping to make use of the toxiclibs library to create 3D models. I also want to do some digital fabrication, to see the characters in a tangible form.
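One way to vary a character while keeping its fundamental shape is to perturb a fixed base profile within small bounds. A minimal 2D sketch of that idea; the base outline and jitter amount are made up, and the real tool would work on toxiclibs meshes rather than point lists:

```python
import random

# A made-up tooth-ish outline as (x, y) vertices -- purely a placeholder.
TOOTH_PROFILE = [(0, 0), (1, 3), (2, 4), (3, 3), (4, 0), (3, -2), (1, -2)]

def vary_character(base=TOOTH_PROFILE, jitter=0.3, seed=None):
    """Produce a variant of a base outline by nudging every vertex a
    little. Keeping the jitter small is what keeps each variant
    recognizably the same fundamental shape."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-jitter, jitter),
             y + rng.uniform(-jitter, jitter)) for x, y in base]

variant = vary_character(seed=1)   # reproducible variant of the base shape
```

The same bounded-perturbation idea extends directly to 3D: jitter mesh vertices (or the control parameters that generate them) instead of 2D points, then send the result to fabrication.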

Anna

06 Feb 2013

So! Project 2. ‘Bout that time, eh?

I am completely incapable of deciding between random pipe dreams I have, particularly when it comes to balancing what’s compelling with what I can feasibly do. Here’s a rough list of ideas I’ve had in the last month or so, with elaboration on one or two:

story stealer : a concept based loosely on a story I’ve been writing for a long time. I envision a ‘master consciousness’ that looks at two very different narratives and peels away elements, appropriating and combining them to create its own story with an eerie resemblance to both. I’d really like to push this away from the standard, mad-lib-style, mix-and-match algorithms that just seem to lead to disjointed and clearly computer-generated text. In particular, I want to know how well code can analyze a narrative for things like ‘tone’ and ‘voice’ and then dynamically write new paragraphs using those learned elements, even if the topic is entirely different.

names in vain : also based on a story I wrote. People seem awfully concerned about ‘using God’s name in vain’, and so they tend to say a lot of other ridiculous things instead. Do they think about the consequences? What if, every time somebody yelled a nonsense word as an interjection, it generated a little half-deity… something more than human, but not quite omnipotent? What would they look like? Would the word you said matter? The sketch below bases the little not-gods on the shape of the sounds, but that isn’t the only way their forms could be generated.
AVR2

piercing sounds : the last story-inspired idea. Again based on the shapes of sounds. Here, I’m interested in the relationship between energy and matter, and the ability of people to create solid objects using their voices. In the story, sound is wielded largely as a weapon. There’s something elegant about knives as artifacts, and the shapes of sounds lend themselves nicely to creating some interesting blades. Obviously for the purpose of a real-world project this would be done in blunt plastic.
AVR1

two screenwriters out for coffee : a dynamically written screenplay of a conversation between two well known screenwriters/playwrights, where the topic is variable but the responses all sound unmistakably written by said writers. Think Sorkin meets Fuller. Or Shakespeare meets Arthur Miller…Hilarity ensues.
better soccer data viz! : see my looking outwards post 5.
radiocarbon database art installation : I’ve got the data… I’d just need to figure out how to make it look as cool as it is.
when do I take all these medications I’m prescribed, and why? : helpful pharmacology info for real people in real life.
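For the "story stealer" idea, a first stab at quantifying 'voice' could be a crude stylometric fingerprint like this sketch. Two features are nowhere near enough to capture tone, of course; this only shows the shape of the approach:

```python
import re

def voice_fingerprint(text):
    """Crude stylometric features of a narrative: average sentence
    length in words, and vocabulary richness (type-token ratio).
    A real 'voice' model would need many more features than these
    two, but comparing fingerprints is the basic move."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

fp = voice_fingerprint("Brevity is wit. Wit is brevity. So they say.")
```

Fingerprinting each source narrative and then steering generated paragraphs toward the same numbers is one plausible way to make output "sound like" a writer without mad-lib recombination.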

Andy

06 Feb 2013

Hello.

So I mentioned maybe two weeks ago that I wanted to do generative music from the parameters of a video game. At that time Golan suggested to me that I should find an open source game from which I could extract my parameters for generating. Since then I have done hours of scouring. What game would it be? I had an idea which kind of grew out of my Sifteo project that I could take an open source multiplayer game, and have interactions between different characters create different chords, and then the sound could somehow then affect the gameplay. As I continued to think, though, I felt like I should limit myself to a single-player experience in a language with which I was familiar so that I could focus most of my attention on the music and not the game itself.

After much browsing and a really poetic experience that I unfortunately didn’t think fit the project, I think I found my source game.

This isn’t really a game per se, but I think it has such attractive simplicity and mathematical depth that it could produce some really good music and a cool experience. Some parameters I thought of include: number of living/dead particles, locations of living particles, number of moving particles, average color difference from frame to frame, survival and birth rules, average solid-color block size, triggers for clear and reset, time the application has been running, and number of user interactions with the program (or an average of this per x seconds); help me come up with more! The messaging system isn’t built in, but with some oscP5 action I think I could fairly easily write the signal sending.

The receiving program would most likely be Max/MSP, but Pd or Nyquist are alternatives I am considering. I think the next step is to build the generative music part and see what kind of sound I can come up with, and then we begin merging and tweaking the connection of the two.
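Several of the parameters listed above can be computed per frame directly from the simulation state before being shipped over OSC. A sketch, using the standard B3/S23 Life rules (the parameter names mirror my list; nothing here is from an existing implementation):

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life (standard B3/S23 rules)
    on an unbounded grid; `cells` is a set of live (x, y) coordinates."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in cells)}

def frame_parameters(prev, curr):
    """A few of the musical control parameters from the list above --
    counts of living, newly born, and newly dead cells this frame.
    In the piece these numbers would be sent to Max/MSP via OSC."""
    return {
        "alive": len(curr),
        "born": len(curr - prev),
        "died": len(prev - curr),
    }

blinker = {(1, 0), (1, 1), (1, 2)}       # vertical blinker
nxt = life_step(blinker)                 # flips to horizontal
params = frame_parameters(blinker, nxt)  # alive 3, born 2, died 2
```

Mapping "born" to note onsets and "died" to releases, say, would already give the music a pulse locked to the simulation.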

Keqin

06 Feb 2013

I’m thinking about generating some simple graphics by importing pictures. For example, I import a picture that has a bed, a desk, and a bunch of stuff on the desk. Using computer vision, I compute the general shapes of the objects in the picture, and then generate a bunch of such shapes to form the generative picture.

I’m still thinking about it. I just want to make something simple but very beautiful. I think this is also a simple way to describe a picture through simple shapes and colors.
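The shape-extraction step could start as simply as this sketch: find connected regions in a thresholded image and describe each with a bounding box, which is already enough to redraw "a bunch of such shapes." This toy flood-fill labeller stands in for what a real version would do with OpenCV contours:

```python
def find_shapes(grid):
    """Label 4-connected regions of 1s in a binary image and return a
    bounding box (min_x, min_y, max_x, max_y) for each region -- a
    tiny stand-in for OpenCV-style contour/shape detection."""
    h, w = len(grid), len(grid[0])
    seen, boxes = set(), []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and (sx, sy) not in seen:
                stack, region = [(sx, sy)], []
                seen.add((sx, sy))
                while stack:                      # flood fill one region
                    x, y = stack.pop()
                    region.append((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y),
                                   (x, y + 1), (x, y - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and grid[ny][nx] and (nx, ny) not in seen):
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                xs = [p[0] for p in region]
                ys = [p[1] for p in region]
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 1]]
shapes = find_shapes(img)   # two regions: a square and a vertical bar
```

From each box (plus an average color sampled from the original photo) the generative step can scatter simplified rectangles, ellipses, or blobs to rebuild the scene in an abstract style.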