Tim Sherman – Project 4 – Generative Monocuts

by Timothy Sherman @ 4:26 pm 23 March 2011

When I began this project, I knew I wanted to make some kind of generative cinema; in particular, I wanted to be able to make Supercuts/Monocuts: videos made of sequences of clips cut from movies or TV shows, all containing the same word or phrase. The first step for me was figuring out exactly how to do this.

I decided to build a library of movie files paired with subtitle files in the .srt format. This subtitle format is basically just a text file made up of a sequence of timecodes and subtitles, so it’s easy to parse. I then planned to parse the subtitles into a database, search it for a term, get the timecodes, and use that information to chop the scenes I wanted out of the movies.
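
To give a sense of how simple the format is, here’s a minimal parsing sketch (written in Python purely for illustration; the project itself used Processing and Ruby, as described below):

import re

# Each .srt block is an index line, a "HH:MM:SS,mmm --> HH:MM:SS,mmm"
# timecode line, and then one or more lines of subtitle text.
TIMECODE = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})")

def parse_srt(path):
    """Yield (start_seconds, end_seconds, subtitle_text) tuples."""
    with open(path, encoding="utf-8", errors="replace") as f:
        blocks = f.read().split("\n\n")
    for block in blocks:
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        m = TIMECODE.match(lines[1])
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000.0
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000.0
        yield start, end, " ".join(lines[2:])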

I modified code Golan had written for parsing .srt files in Processing, so that the subtitles were stored in an SQLite3 database using SQLibrary. I then wrote a Ruby application that could read the database, find subtitles containing whatever word or phrase you wanted to search for, and then run ffmpeg from the command line to chop up the movie files for each occurrence of the word.
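
The search-and-chop step looks roughly like this (sketched here in Python rather than Ruby; the subtitles table layout and file names are hypothetical):

import sqlite3, subprocess

def chop_occurrences(db_path, phrase, movie_path, out_prefix, pad=0.5):
    """Find subtitles containing `phrase` and cut a clip for each hit.
    Assumes a table subtitles(t_start REAL, t_end REAL, text TEXT) built by
    the parser; `pad` widens each clip slightly so speech isn't clipped."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT t_start, t_end FROM subtitles WHERE text LIKE ?",
        ("%" + phrase + "%",)).fetchall()
    for i, (t_start, t_end) in enumerate(rows):
        subprocess.run([
            "ffmpeg",
            "-ss", str(max(0.0, t_start - pad)),   # seek to just before the line
            "-i", movie_path,
            "-t", str(t_end - t_start + 2 * pad),  # padded clip duration
            "-c", "copy",                          # no re-encode (cuts on keyframes)
            "%s_%03d.avi" % (out_prefix, i)])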

The good news was, the code I wrote totally worked. Building and searching the database went smoothly, and the program wasn’t missing any occurrences, or getting any false positives. However, I soon found myself plagued with problems beyond the scope of code.

The first of these problems was the difficulty of quickly building a large library of films with matching subtitles. I found two real avenues for getting movies: ripping and converting from DVD, or torrenting from various websites. Each had its own problems. While ripping and converting guaranteed I got a subtitle with the file, it took a very long time to both rip the DVD and then convert from VIDEO_TS to .avi or any other format. Torrenting, while much quicker, didn’t always provide .srt files with the movies. I had to use websites like OpenSubtitles to try and find matching subtitle files, which had to be checked carefully by hand to make sure they lined up; being off by even a second meant that the chopped clip wouldn’t necessarily contain the search term.

These torrenting issues were compounded by the fact that working with many different, undocumented video files was a nightmare. Some files were poorly or strangely encoded and behaved unpredictably when ffmpeg read them, sometimes ignoring the duration specified by the timecodes and putting the whole movie into the chopped clips, or simply not chopping at all. This limited my library of films even further.

The final issue I came across was a more artistic one, and one that, while solvable with code, wasn’t something I was fully ready to approach in a week. Once I had the clips, I had to figure out how to assemble them into a longer video, and I wasn’t quite sure of the best way to do this. I considered a few different approaches and made sketches of them using VLC and other programs. I tried sorting by the time a clip occurred in the movie and by the length of the clip, but nothing produced consistently interesting results. As I couldn’t find one solution I was happy with, I decided to leave the problem of assemblage up to the user. The tool is still incredibly helpful for making a monocut, as it gives you all the clips you need, without your having to watch the movie and crop each occurrence by hand.

Embedded below is a video made by cropping out only the scenes from Pulp Fiction containing the word “Motherfucker”. While the program supports multiple movies in the database, due to my library-building issues I couldn’t find a great search that pulled interesting clips from multiple movies.

*Video coming, my internet’s too slow to upload at the moment.*

Ben Gotow-Generative Art

by Ben Gotow @ 8:54 am

I threw around a lot of ideas for this assignment. I wanted to create a generative art piece that was static and large: something that could be printed on canvas and placed on a wall. I also wanted to revisit the SMS dataset I used in my first assignment, because I felt I hadn’t sufficiently explored it. I eventually settled on modeling something after this “Triangles” piece on OpenProcessing. It seemed relatively simple and it was very abstract.

I combined the concept from the Triangles piece with code that scored characters in a conversation based on the likelihood that they would follow the previous character. This was accomplished by generating a Markov chain and a character frequency table using combinations of two characters pulled from the full text of 2,500 text messages. The triangles generated to represent the conversation were colorized so that more likely characters were shown inside brighter triangles.
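
A minimal sketch of that scoring step (Python here for illustration; function and variable names are invented):

from collections import defaultdict

def build_bigram_model(corpus_text):
    """Count character bigrams, then convert to P(next_char | prev_char)."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus_text, corpus_text[1:]):
        counts[prev][nxt] += 1
    probs = {}
    for prev, nexts in counts.items():
        total = float(sum(nexts.values()))
        probs[prev] = {c: n / total for c, n in nexts.items()}
    return probs

def char_likelihoods(message, probs):
    """Score each character by how likely it is to follow its predecessor;
    scores like these would drive the brightness of each triangle."""
    scores = [1.0]  # the first character has no predecessor
    for prev, nxt in zip(message, message[1:]):
        scores.append(probs.get(prev, {}).get(nxt, 0.0))
    return scores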

Process:

I started by printing out part of an SMS conversation, with each character drawn within a triangle. The triangles were colorized based on whether the message was sent or received, and the individual letter brightnesses were modulated based on the likelihood that the characters would be adjacent to each other in a typical text message.

In the next few revisions, I decided to move away from simple triangles and make each word in the conversation a single unit. I also added some code that seeds the colors used in the visualization based on properties of the conversation, such as its length.

Final output – click to enlarge!

Caitlin Boyle & Asa Foster ::Project 4 :: We Be Monsters 1.5

by Caitlin Boyle @ 8:20 am

Asa and I have decided to stick with We Be Monsters to the bitter end; we are continuing development of the project throughout the remainder of the semester as a joint capstone project for Interactive Art & Computational Design (and possibly over the summer, continuing development into a full-fledged, stand-alone installation).  There is a lot left to do on our little Behemoth, and our to-do list doesn’t look like it’s ever going to end; besides integrating 3- and 4-person puppet capabilities into the project, each individual puppet needs a serious overhaul to get it up to standard. After going over our to-do list and looking at the week we had left before Project 4 was due, we decided to choose the option that was most feasible: dynamically activating the front half of the puppet via Box2D physics.

In layman’s terms, we decided to make the BEHEMOTH vomit a bouncy fountain of rainbow stars.

BLARGH

The feature is not integrated into the puppet yet; we created a small demo that showcases the vomit-stars, which will be implemented after we overhaul the code for the puppets. We are also planning to tie the star-puke to sound; children (or adults) puppeteering the BEHEMOTH will be able to roar/scream, triggering a pulsing mass of stars. Our hope is that this feature adds another level of play to the program, and will encourage those using the puppets to really try and become the BEHEMOTH, not only in movement but also in voice.

::other features on our to-do list include::

-Each piece of the puppet will be set on a spring, so it bounces back into place when not being manipulated by a user; this will hopefully alleviate the problem of the BEHEMOTH looking like it’s had a seizure if Asa and I step out of place (a rough sketch of this idea follows the list).

-Using physics to create a more dynamic puppet in general; we like the look of the beady blank eye, but the BEHEMOTH and future puppets will include aspects that move in accordance with the movement of the puppet pieces: bouncy spines on his back, a floppy tongue, etc.
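
The spring idea can be sketched without Box2D at all: a damped spring pulling each puppet piece back toward its rest position (plain Python here; all the constants are made up):

class SpringPiece:
    """A puppet piece that returns to its rest position on a damped spring."""
    def __init__(self, rest_x, rest_y, stiffness=30.0, damping=8.0):
        self.x, self.y = rest_x, rest_y
        self.vx = self.vy = 0.0
        self.rest = (rest_x, rest_y)
        self.k, self.c = stiffness, damping

    def update(self, dt, target=None):
        # While a user is tracked, follow them; otherwise spring back home.
        goal = target if target is not None else self.rest
        ax = self.k * (goal[0] - self.x) - self.c * self.vx
        ay = self.k * (goal[1] - self.y) - self.c * self.vy
        self.vx += ax * dt
        self.vy += ay * dt
        self.x += self.vx * dt
        self.y += self.vy * dt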


Le Wei – Project 4 Final

by Le Wei @ 7:50 am

For my generative project, I created a simulation of raindrops moving down a window. My main purpose from the beginning was to try to accurately reproduce the movement of the water on a glass windowpane, so the images of the droplets themselves are simply ellipses. The final product offers three options for the intensity of the rain: “Rain”, “Downpour”, and “Hurricane”. There is also a “Sunshine” option, which stops the rain and lets you see the water gradually dry off the window.

Research

Before beginning any coding, I looked into research papers to see if there was any helpful information on how to implement the rain movement. I knew that there would be a lot of factors to take into account, such as surface affinity, friction, air humidity, and gravity, and combining all of them could be quite difficult. Luckily, there were quite a few closely related (at least in terms of content) papers that detailed exactly how to implement such a simulation. The three papers I relied on most heavily were “Animation of water dripping on geometric shapes and glass panes” by Suzana Djurcilov, “Simulation of Water Drops on a Surface” by Algan, Kabak, Ozguc, and Capin, and “Animation of Water Droplets on a Glass Plate” by Kaneda, Kagawa, and Yamashita.

Algorithm

I divided the window into a grid with cells of size 2×2. Each cell is given a randomly assigned surface affinity, which represents impurities on the surface of the glass. At each timestep, raindrops of random size are added at randomly selected spots on the window, to give the simulation a raining effect. Then, existing raindrops that have enough mass to move downward calculate where to go next based on the formulas in the papers; the choices are the three cells below-left, directly below, and below-right. A small amount of water remains on the previous spot; its mass is also calculated from equations in the papers. Whenever a raindrop runs into another one, they combine and continue with the combined mass and a new velocity based on basic laws of physics. Apart from the information in the papers, I added a drying factor so that, over time, raindrops on the window that are just sitting around dry off and disappear.
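
A stripped-down sketch of that grid algorithm (Python for illustration; the constants are invented, and the papers’ formulas are reduced to an affinity-weighted choice among the three cells below each drop):

import random

W, H = 160, 120
affinity = [[random.random() for _ in range(W)] for _ in range(H)]
drops = {}  # (x, y) -> mass

MIN_MOVE_MASS = 1.0   # drops lighter than this just sit and dry
RESIDUE = 0.1         # fraction of mass left behind when a drop moves
DRY_RATE = 0.005      # evaporation per timestep

def step():
    # Add a new raindrop of random size at a random spot.
    x, y = random.randrange(W), random.randrange(H)
    drops[(x, y)] = drops.get((x, y), 0.0) + random.uniform(0.5, 3.0)

    # Process drops bottom-up so each drop moves at most once per step.
    for (x, y) in sorted(drops, key=lambda p: -p[1]):
        mass = drops.get((x, y), 0.0)
        if mass < MIN_MOVE_MASS or y + 1 >= H:
            continue
        # Choose below-left, below, or below-right, weighted by affinity.
        choices = [(nx, y + 1) for nx in (x - 1, x, x + 1) if 0 <= nx < W]
        weights = [affinity[ny][nx] for nx, ny in choices]
        nx, ny = random.choices(choices, weights=weights)[0]
        drops[(x, y)] = mass * RESIDUE                       # residue trail
        drops[(nx, ny)] = drops.get((nx, ny), 0.0) + mass * (1 - RESIDUE)  # merge

    # Drying: all drops slowly lose mass and eventually disappear.
    for p in list(drops):
        drops[p] -= DRY_RATE
        if drops[p] <= 0:
            del drops[p]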


Paul-TrashFace

by ppm @ 2:05 am

So I saw some demos of Active Appearance Models like so:

They looked really interesting. My original idea was to take the mesh animated by an AAM, detach it from the person’s face, put springs in for all the edges, and simulate it in Box2D. The physical properties of the mesh, such as springiness and fragility, could vary depending on the person’s expression. A happy face could float up to the top, while a sad one could fall down.

What I have so far falls dramatically short, but it’s still fun to play with. I didn’t get an AAM working, so I’m using OpenCV’s face detector, which does not produce a mesh. I only have physical bodies simulated at the corners of my rectangles, so the corners collide while the rest passes through.
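
For comparison, a minimal OpenCV face-detection loop of the kind used here (Python bindings for illustration; the cascade file ships with OpenCV):

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is just an axis-aligned rectangle (x, y, w, h) --
    # unlike an AAM, there is no mesh of facial features to animate.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break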

Marynel Vázquez – (r)evolve

by Marynel Vázquez @ 10:27 am 21 March 2011

This work is an attempt to generate interesting graphics from human input. I used OpenFrameworks + ofxKinect + ofxOpenCv2 (a simple add-on I created to interface with OpenCV2).

From the depth image provided by the Kinect:
1. edges are detected (Canny edge detector)
2. lines are fitted to the edges (Hough transform)
From the color image:
3. colors are taken at the endpoints of the lines, and painting is done by interpolating these values

The result of about 10 frames is stored and then displayed. This makes it possible to perceive some of the motion’s evolution.
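
A sketch of that edge/line/color pipeline (OpenCV’s Python bindings here; the original used OpenFrameworks + ofxOpenCv2, and the thresholds and file names are illustrative):

import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # Kinect depth frame
color = cv2.imread("color.png")                        # matching RGB frame

edges = cv2.Canny(depth, 50, 150)                      # 1. edge detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180,         # 2. line fitting
                        threshold=40, minLineLength=30, maxLineGap=5)

canvas = np.zeros_like(color)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # 3. sample colors at the endpoints, interpolate along the line
        c1 = color[y1, x1].astype(float)
        c2 = color[y2, x2].astype(float)
        for t in np.linspace(0.0, 1.0, 32):
            x = int(round(x1 + t * (x2 - x1)))
            y = int(round(y1 + t * (y2 - y1)))
            canvas[y, x] = ((1 - t) * c1 + t * c2).astype(np.uint8)

cv2.imwrite("painted.png", canvas)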



This line of projections was then multiplied, and the width of the edges was allowed to change according to how close things were to the Kinect. By moving the camera around the model, one can generate images like the following:



The video below shows how the shapes can change:



Interesting additions to this project would be a flow field and music! In the former case, the generated lines could be forces that influence the motion of physical particles in the 3D space.

Mark Shuster – Generative – TweetSing

by mshuster @ 10:17 am

TweetSing is an experiment in generative music composition that attempts to transform tweets into music. The script reads a stream of tweets related to a specific keyword and sends the content to be converted to speech via Google TTS. The audio of the speech is then analyzed and the individual pitch changes are detected. The pitches are converted to a series of MIDI notes that are then played at the same cadence as the original speech, thus singing the tweet (for the demo, the ‘song’ is slowed to half-speed).

TweetSing is written in Python and uses the Twitter API. The tweets are converted to speech using Google Translate’s TTS service, and the pitch detection and MIDI generation are done using the Aubio library. The final arrangement was then played through Reason’s NN-19 Sampler.
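
The pitch-to-MIDI step looks roughly like this with aubio’s Python bindings (the file name, hop size, and thresholds are illustrative, not the project’s actual values):

import aubio

HOP, BUF = 512, 2048
src = aubio.source("tweet_speech.wav", 0, HOP)   # 0 = use the file's samplerate
pitch_o = aubio.pitch("yin", BUF, HOP, src.samplerate)
pitch_o.set_unit("midi")        # report pitch directly as a MIDI note number
pitch_o.set_tolerance(0.8)

notes = []
while True:
    samples, read = src()
    midi = pitch_o(samples)[0]
    if midi > 0 and pitch_o.get_confidence() > 0.8:
        notes.append(int(round(midi)))
    if read < HOP:
        break
print(notes)  # one (possibly repeated) note per hop, at the speech's cadence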

As an example, tweets relating to President Obama were played through a sampled violin. Listen to ‘Obama’ tweets on Violin.

SamiaAhmed-Final-Simulate/Generate

by Samia @ 8:53 am


Generative/Simulation

by Chong Han Chua @ 8:34 am

Placeholder

Generative – BBGun

by Ward Penney @ 7:41 am

BB Gun

For this project, I wanted to recreate the classic carnival BB gun game where the participant shoots out a red star on a paper target. Well, I didn’t get the target, but I got the BBs working in 3D space. Below is the video:

The user points the BB gun with the mouse pointer and presses the spacebar to shoot BBs. The BBs are created using memo‘s MSA Physics addon for Cinder. Special thanks to memo for his MSA Physics demo.

Susan Lin — Generative, Final

by susanlin @ 2:36 am


generative beat painter | live online


Revamped Version

  • Coming soon! Modifications based on received feedback.
  • A version which doesn’t wrap around and produces an infoviz-esque image of the entire song.
  • And if it doesn’t give me too much trouble, add a UI to select a song to play.

from Monday

Code lives here.

If anyone was interested, I presented with
FantomenK’s – Taking a Nap in the Jungle
and
We are Happy Planet’s – Time (remix)

You can screw around with songs of your choice. There’s no interface, unfortunately, but just stick your song of choice into the data folder and add this in setup() after the other songs:

song = minim.loadFile("YOUR_FILE_NAME_HERE.mp3", 2048);

Process

Get the pdf presentation.




> previous blog post



> minim beat energy
> wasd keyboard controls

> minim frequency energy

> minim fft



Retrospective

Some goals:

  1. Present on Monday (Yay, it happened!), and related…
  2. Scope, scope, scope into a short doable project while keeping it meaningful.
  3. Leverage strengths more (visuals).
  4. Create something that involves input from the user.
  5. Make it fun, toy like.

It feels good to accomplish this.

I am pretty happy with the result regarding time and concept. As for what could have gone better, aside from the UI, I wish I had more time to read up on FFTs or another interesting way to plot the y-axis. My method ends up looking reasonable, but more nuance would have been ideal.

Thanks for reading, hope you guys got a kick out of it.

Algorhythm

by huaishup @ 1:37 am

Project 4 Sketch – Algorithmic Shells

by Max Hawkins @ 11:06 pm 13 March 2011

For this project I’ll be exploring the science behind how sea shells form by implementing the algorithms from Hans Meinhardt’s book The Algorithmic Beauty of Sea Shells.

The complex patterns seen on sea shells are created by relatively simple reactions between antagonistic chemicals that can be described by a set of differential equations. By tweaking the parameters in the equations, most common patterns can be reproduced.

Happily, the book provides source code in BASIC for implementing the differential equations and also has a section on creating seashell-like shapes parametrically in computer graphics environments.

For my project I will re-implement the algorithms described in the book in a modern graphics environment. If time permits, I will export those 3D renderings in a format that can be printed on the full-color powder printer available in the dFab lab.
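
For a flavor of what those equations look like in code, here is a minimal 1D activator-inhibitor sketch (Python, a generic Gierer-Meinhardt-style system with invented constants, not the book’s exact BASIC listing). Each timestep of the cell row becomes one growth line of the “shell”:

import random

N = 200
a = [1.0 + 0.05 * random.random() for _ in range(N)]  # activator
h = [1.0] * N                                          # inhibitor
Da, Dh = 0.02, 0.4     # diffusion rates (the inhibitor spreads faster)
ra, rh = 0.03, 0.06    # decay rates
s, ba = 0.03, 0.005    # source strength, baseline activator production
dt = 1.0

def laplacian(u, i):
    # Discrete diffusion on a ring of cells along the shell's growing edge.
    return u[(i - 1) % N] - 2 * u[i] + u[(i + 1) % N]

pattern = []
for step in range(600):
    na, nh = a[:], h[:]
    for i in range(N):
        aa = a[i] * a[i]
        # Activator is autocatalytic but suppressed by the inhibitor...
        na[i] += dt * (s * aa / max(h[i], 1e-6) - ra * a[i]
                       + Da * laplacian(a, i) + ba)
        # ...while the inhibitor is produced wherever the activator is high.
        nh[i] += dt * (s * aa - rh * h[i] + Dh * laplacian(h, i))
    a, h = na, nh
    pattern.append([1 if v > 1.0 else 0 for v in a])  # one growth line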

For the source of these images and more information on the reactions involved, visit this article at the Max Planck Institute.

Le Wei – Project 4 Generative

by Le Wei @ 5:00 pm 11 March 2011

For my generative art project, I want to create some sort of simulation inspired by the movement of rain on car windows. During a recent road trip, I was reminded of how interesting it is watching raindrops make their way rightwards on the glass, eating other drops of water in front of them and leaving a little poop trail of droplets behind them. I feel like this project might require some sophisticated mathematics to get it to look realistic, and I’m also worried about how hard it would be to create convincing graphics. Because of these worries, I might try to abstract the concept a little more so that it’s easier to accomplish but still echoes the feel of rain on a car window.

Susan Lin — Generative, Sketch

by susanlin @ 4:08 pm 10 March 2011

I would like to commit to creating a game during this project. Specifically, I am thinking of a music game, one where the levels are generated by a user-chosen file. The program would analyze things like frequency, amplitude, or pitch and then render them as enemies on an (x, y) coordinate grid. Possible variables include coordinate, color, size, and fuzziness. For simplicity’s sake, I’d probably start with just one of the properties.
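
One way to sketch that mapping (Python with numpy purely for illustration; the names, band count, and scaling are all invented): take one FFT frame and turn the loudest frequency bands into enemy spawn points.

import numpy as np

def enemies_from_frame(samples, n_enemies=5, width=800, height=600):
    """Map one audio frame's spectrum to enemies on an (x, y) grid:
    y = frequency band, size = band energy, x = scattered."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bands = np.array_split(spectrum, 32)        # coarse frequency bands
    energy = np.array([b.mean() for b in bands])
    loudest = np.argsort(energy)[-n_enemies:]   # strongest bands spawn enemies
    enemies = []
    for band in loudest:
        x = np.random.uniform(0, width)          # scatter horizontally
        y = height * (1 - band / 32.0)           # low frequencies near the bottom
        size = 5 + 40 * energy[band] / (energy.max() + 1e-9)
        enemies.append((x, y, size))
    return enemies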

If I get something good working, the next step would be to add some incentive to shooting down these enemies. I’m thinking that if the player allowed the enemy to grow, it would distort the song the program is playing.

Okay, so it might end up more as an artsy-ass shooter than a shooter game per se, but the bottom line is that I’d like to make it work with any music file the user throws into the code and have it output something that makes sense :)

Marynel Vázquez – LookingOutwards – 6

by Marynel Vázquez @ 5:04 am 8 March 2011

Audio responsive generative visual experiment (go to website)

Audio spectrum values as input for a “painter algorithm”.

Exothermic (go to site)

Installation by Boris Tellegen.

Vattenfall media facade: Garden (go to site)

The launch of the facade coincided with the Lange Nacht der Museen, whose theme this year was “Castles and gardens”. The decision was therefore made to focus on tree and flower shapes, subtly animating over the scope of 8 minutes to provide an ambient virtual garden.

SamiaAhmed-Sketch-Simulate/Generate

by Samia @ 7:45 pm 4 March 2011

For this project, I’ve been thinking about what I see as the “canon” of generative algorithms and techniques: I keep seeing words like flocking, Perlin noise, and what have you thrown around the internetosphere. I think it would be worthwhile to actually try to do one of these things, especially given that I’ve been avoiding the more math-y parts of computational art/design and I really need to sink my teeth into them. I’m not too sure of the form right now; I’m thinking either a poster/print or a video demo. We shall see.

Looking Outwards – Project 4

by Ward Penney @ 7:12 pm

When I was in grade school, I had a minor obsession with Pascal’s Triangle.

First 9 rows of Pascal's Triangle

Just to refresh your memory, Pascal’s triangle is a pyramid of numbers in which adjacent numbers in each row are added together to generate the next row. It contains many patterns and fascinating attributes that would be very useful for a generative art project, such as: binary row sums, number locating, hockey stick patterns, prime occurrences, magic 11’s, polygonal numbers, points on a circle, and others.
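
That construction is only a few lines of code (a Python sketch, for illustration):

def pascal_rows(n):
    """Generate the first n rows of Pascal's triangle."""
    row = [1]
    for _ in range(n):
        yield row
        # Each new entry is the sum of the two entries above it.
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]

for r in pascal_rows(9):
    print(r)

# One of the properties mentioned above: the binary row sums.
# Row n always sums to 2**n.
assert all(sum(r) == 2 ** i for i, r in enumerate(pascal_rows(9)))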


I could use several of these attributes to do some visual effects in the design. Here are a few ideas:

Pascal Hockey Stick Patterns

By adding numbers along a diagonal, the number just off the end of the diagonal equals the sum, forming the hockey stick shape. I could do something where I draw lightning bolts down the hockey stick points.

Lightning Bolt

Polygonal Numbers

The occurrence of polygonal numbers could allow me to display flat, quasi-3D polygons at varying intervals.

3D Pyramid

When I was thinking about the triangle, I always wondered whether it was possible to extend it into 3D space.


BB Gun

Carnival Star

I also have another idea to re-create a classic carnival game where the user shoots out a paper star with a BB gun and a fixed amount of ammo. I think I can make the paper star the way Igor Barinov did the Open Virtual Curtain, and let it fall apart from the BBs.


It looks like I could use the MSA Physics environment to do the BB Gun.

shawn sims-lookingOutwards-Project 4

by Shawn Sims @ 5:13 pm

I plan on continuing some of the work that began in the Marius Watz/MakerBot workshop a couple of weeks ago. This potentially means that the project will include Reaction/Diffusion + camera interactions and/or a digital output for rapid prototyping or milling.

Update, Game events to Graphical Notation

by chaotic*neutral @ 1:17 pm 3 March 2011
