Le Wei – Final Project Looking Outwards

by Le Wei @ 9:47 pm 27 March 2011

For my final project, I want to make some sort of tool to create music visually. Since I don’t have much (any) experience in computer music, I thought this project would be a good opportunity for exploration.

Clapping pattern visualization

This is a visualization (I think, as opposed to a program producing the sounds) of the patterns in a clapping song. The visualization consists of a few really simple shapes that clearly show how the song progresses. For my project, I hope the visual representation is just as easy to understand as this one.

mta.me

mta.me is a website that shows the NYC subway trains running throughout the day, based on their schedules. As a train crosses the path of another, that path is plucked like a string and makes a nice little sound. This is a really clean visualization, and it’s nice how the sounds correlate with the busyness of the subway system.

Artikulator

This is a project that allows a user to draw music on an iPad or iPhone. The person basically finger paints trails on the screen and the app plays through it left to right. It seems like, with this iteration, it is kind of hard to get something that sounds really ‘nice’, but I like how easy it is to just pick up and use.

Looking Outwards at Helmut Smits

by ppm @ 1:08 pm


Helmut Smits’s LP Bike is a clever and amusingly low-tech bike-record-player. The video doesn’t show him using it in public, but I do hope he’s exploited its enormous performative potential. Trick bike skills + DJ skills could make for quite a spectacle.


His Automatic Street Musician is so much more successful than a steel Simon & Garfunkel machine I built last year.


His Dead pixel in Google Earth is very punny, as the grass is actually dead. The implication is that 82×82 cm is the actual size of one Google Earth pixel at his latitude, which makes for an idiosyncratic unit of measure. Of course, Google’s imagery is most likely pulled from many different sources and has no consistent orientation or resolution.


His Greenscreen interests me more for the imaginative repurposing of the video technique than for its advertising/visual media message.

Project 4 Sketch – Algorithmic Shells

by Max Hawkins @ 11:06 pm 13 March 2011

For this project I’ll be exploring the science behind how sea shells form by implementing the algorithms from Hans Meinhardt’s book The Algorithmic Beauty of Sea Shells.

The complex patterns seen on sea shells are created by relatively simple reactions between antagonistic chemicals that can be described by a set of differential equations. By tweaking the parameters in the equations, most common patterns can be reproduced.
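To make the mechanism concrete, here is a minimal 1D activator-inhibitor sketch in Processing (chosen to match the other code on this blog). This is not Meinhardt’s BASIC code: the kinetics below are the standard Gierer-Meinhardt equations, and every parameter value is an illustrative guess. Each frame advances the simulation one step and draws the growing edge as a row of pixels, so the window fills with a space-time plot much like pigment accumulating along a shell’s growth lines.

// Minimal 1D activator-inhibitor sketch (Gierer-Meinhardt kinetics).
// All parameter values are illustrative, not from Meinhardt's book.
int n = 320;               // cells along the shell's growing edge
float[] a = new float[n];  // activator concentration
float[] h = new float[n];  // inhibitor concentration
int row = 0;

void setup() {
  size(320, 240);
  background(255);
  for (int i = 0; i < n; i++) {
    a[i] = 1.0 + random(-0.1, 0.1);  // small random seed
    h[i] = 1.0;
  }
}

void draw() {
  float Da = 0.02, Dh = 0.4;   // diffusion rates (assumed)
  float ra = 0.01, rh = 0.02;  // reaction/decay rates (assumed)
  float[] na = new float[n];
  float[] nh = new float[n];
  for (int i = 0; i < n; i++) {
    int l = (i + n - 1) % n, r = (i + 1) % n;  // periodic neighbors
    // self-enhancing activator, long-range antagonist (inhibitor)
    na[i] = a[i] + ra * (a[i]*a[i]/h[i] - a[i]) + Da * (a[l] + a[r] - 2*a[i]);
    nh[i] = h[i] + rh * (a[i]*a[i] - h[i])      + Dh * (h[l] + h[r] - 2*h[i]);
  }
  a = na;
  h = nh;
  for (int i = 0; i < n; i++) {
    stroke(a[i] > 1.0 ? 0 : 255);  // deposit pigment where activator is high
    point(i, row % height);
  }
  row++;
}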

Happily, the book provides source code in BASIC for implementing the differential equations and also has a section on creating seashell-like shapes parametrically in computer graphics environments.

For my project I will re-implement the algorithms described in the book in a modern graphics environment. If time permits, I will export those 3D renderings in a format that can be printed on the full-color powder printer available in the dFab lab.

For the source of these images and more information on the reactions involved, visit this article at the Max Planck Institute.

Le Wei – Project 4 Generative

by Le Wei @ 5:00 pm 11 March 2011

For my generative art project, I want to create some sort of simulation inspired by the movement of rain on car windows. During a recent road trip, I was reminded of how interesting it is watching raindrops make their way rightwards on the glass, eating other drops of water in front of them and leaving a little poop trail of droplets behind them. I feel like this project might require some sophisticated mathematics to get it to look realistic, and I’m also worried about how hard it would be to create convincing graphics. Because of these worries, I might try to abstract the concept a little more so that it’s easier to accomplish but still echoes the feel of rain on a car window.
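A rough Processing sketch of this behavior follows; the drift speed, merge rule, and trail rate are placeholder guesses rather than real droplet physics.

// Rough sketch: drops drift rightward, absorb smaller drops they touch,
// and occasionally shed a tiny droplet behind them as a "trail".
// All constants are placeholder guesses, not droplet physics.
ArrayList<float[]> drops = new ArrayList<float[]>();  // each entry: {x, y, radius}

void setup() {
  size(480, 320);
  noStroke();
  for (int i = 0; i < 40; i++) {
    drops.add(new float[] { random(width), random(height), random(2, 8) });
  }
}

void draw() {
  background(30, 40, 60);
  for (int i = drops.size() - 1; i >= 0; i--) {
    float[] d = drops.get(i);
    d[0] += d[2] * 0.2;          // bigger drops slide faster
    if (d[0] > width) d[0] = 0;  // wrap around the "window"
    // absorb any smaller drop we overlap
    for (int j = drops.size() - 1; j >= 0; j--) {
      if (i == j) continue;
      float[] o = drops.get(j);
      if (o[2] < d[2] && dist(d[0], d[1], o[0], o[1]) < d[2]) {
        d[2] = sqrt(d[2]*d[2] + o[2]*o[2]);  // conserve total area
        drops.remove(j);
        if (j < i) i--;  // indices above j shifted down by one
      }
    }
    // occasionally leave a tiny droplet behind
    if (d[2] > 4 && random(1) < 0.02) {
      drops.add(new float[] { d[0] - d[2], d[1], 1.5 });
      d[2] *= 0.97;
    }
    fill(180, 200, 255, 180);
    ellipse(d[0], d[1], d[2]*2, d[2]*2);
  }
}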

Marynel Vázquez – LookingOutwards – 6

by Marynel Vázquez @ 5:04 am 8 March 2011

Audio responsive generative visual experiment (go to website)

Audio spectrum values as input for a “painter algorithm”.

Exothermic (go to site)

Installation by Boris Tellegen.

Vattenfall media facade: Garden (go to site)

The launch of the facade coincided with the Lange Nacht der Museen, whose theme this year was “Castles and gardens”. The decision was therefore made to focus on tree and flower shapes, subtly animating over the scope of 8 minutes to provide an ambient virtual garden.

Update, Game events to Graphical Notation

by chaotic*neutral @ 1:17 pm 3 March 2011

Project 4: Generative Form – Looking Outwards

by Asa Foster @ 11:14 am 28 February 2011

When you see an idea you’ve been playing with executed by someone else (who happens to absolutely nail it), it’s always pretty satisfying. Such a project is Andy Huntington and Drew Allan’s Cylinder, which is a set of digitally fabricated physical representations of sound input. Having wanted to do some sort of buildable model based on music for some time now, it’s really nice to see these guys pull it off with some seriously nice looking objects. (via generator.x)

Another similar physical realization of sound is Daniel Widrig and Shajay Booshan’s Binaural Object. This, in my opinion, is a little less visually stunning than Cylinder, but still maintains the same spiky qualities and looks great. Reminds me of the blood slides from Dexter.

zDepth hack OpenNI

by chaotic*neutral @ 12:55 pm 9 February 2011

Hey guys, I did a simple mod to the OpenNI library to allow you to get some z-space depth information. This is just a quick hack to ofxTrackedUser.cpp.

It takes the x position of the right shoulder and subtracts the x position of the left shoulder, then draws a circle at the neck point, with the circle’s radius mapped to the distance between the two shoulders. SO IN SHORT: the further away the subject is, the closer together the shoulder points appear, the smaller the distance between them, and the smaller the head. And vice versa.

This will give you a head and z depth. Please feel free to improve it, as I just did this in a few minutes and thought I’d share.

NOTE: this is just a quick hack; if the subject turns their shoulders (so their x coordinates line up), the head will shrink. Anyone have some other solutions?

 
#include "ofxTrackedUser.h"
#include "ofxDepthGenerator.h"
#include "ofxUserGenerator.h"
 
float xL;
float yL;
float xR;
float yR;
float zDEPTH;
 
ofxTrackedUser::ofxTrackedUser(
	 ofxUserGenerator* pUserGenerator
	,ofxDepthGenerator* pDepthGenerator
) 
:neck(XN_SKEL_HEAD, XN_SKEL_NECK)
 
// left arm + shoulder
,left_shoulder(XN_SKEL_NECK, XN_SKEL_LEFT_SHOULDER)
,left_upper_arm(XN_SKEL_LEFT_SHOULDER, XN_SKEL_LEFT_ELBOW)
,left_lower_arm(XN_SKEL_LEFT_ELBOW, XN_SKEL_LEFT_HAND)
 
// right arm + shoulder
,right_shoulder(XN_SKEL_NECK, XN_SKEL_RIGHT_SHOULDER)
,right_upper_arm(XN_SKEL_RIGHT_SHOULDER, XN_SKEL_RIGHT_ELBOW)
,right_lower_arm(XN_SKEL_RIGHT_ELBOW, XN_SKEL_RIGHT_HAND)
 
// upper torso
,left_upper_torso(XN_SKEL_LEFT_SHOULDER, XN_SKEL_TORSO)
,right_upper_torso(XN_SKEL_RIGHT_SHOULDER, XN_SKEL_TORSO)
 
// left lower torso + leg
,left_lower_torso(XN_SKEL_TORSO, XN_SKEL_LEFT_HIP)
,left_upper_leg(XN_SKEL_LEFT_HIP, XN_SKEL_LEFT_KNEE)
,left_lower_leg(XN_SKEL_LEFT_KNEE, XN_SKEL_LEFT_FOOT)
 
// right lower torso + leg
,right_lower_torso(XN_SKEL_TORSO, XN_SKEL_RIGHT_HIP)
,right_upper_leg(XN_SKEL_RIGHT_HIP, XN_SKEL_RIGHT_KNEE)
,right_lower_leg(XN_SKEL_RIGHT_KNEE, XN_SKEL_RIGHT_FOOT)
 
,hip(XN_SKEL_LEFT_HIP, XN_SKEL_RIGHT_HIP)
,user_generator(pUserGenerator)
,depth_generator(pDepthGenerator) 
,xn_user_generator(&user_generator->getXnUserGenerator())
,is_tracked(false)
{
}
 
void ofxTrackedUser::updateBonePositions() {
	updateLimb(neck);
 
	// left arm + shoulder
	updateLimb(left_shoulder);
	updateLimb(left_upper_arm);
	updateLimb(left_lower_arm);
 
	// right arm + shoulder
	updateLimb(right_shoulder);
	updateLimb(right_upper_arm);
	updateLimb(right_lower_arm);
 
	// upper torso
	updateLimb(left_upper_torso);
	updateLimb(right_upper_torso);
 
	// left lower torso + leg
	updateLimb(left_lower_torso);
	updateLimb(left_upper_leg);
	updateLimb(left_lower_leg);
 
	// right lower torso + leg
	updateLimb(right_lower_torso);
	updateLimb(right_upper_leg);
	updateLimb(right_lower_leg);
 
	updateLimb(hip);	
}
 
void ofxTrackedUser::updateLimb(ofxLimb& rLimb) {
	if(!xn_user_generator->GetSkeletonCap().IsTracking(id)) {
		//printf("Not tracking this user: %d\n", id);
		return;
	}
 
	XnSkeletonJointPosition a,b;
	xn_user_generator->GetSkeletonCap().GetSkeletonJointPosition(id, rLimb.start_joint, a);
	xn_user_generator->GetSkeletonCap().GetSkeletonJointPosition(id, rLimb.end_joint, b);
	if(a.fConfidence < 0.3f || b.fConfidence < 0.3f) {
		rLimb.found = false; 
		return;
	}
 
	XnPoint3D pos[2];
	pos[0] = a.position;
	pos[1] = b.position;
	depth_generator->getXnDepthGenerator()
		.ConvertRealWorldToProjective(2, pos, pos);
 
	rLimb.found = true;
	rLimb.begin.set(pos[0].X, pos[0].Y);
	rLimb.end.set(pos[1].X, pos[1].Y);
	ofSetColor(255, 0, 0);
	//ofCircle(pos[0].X, pos[0].Y, 5);
 
	// store the projected shoulder positions in the globals declared above
	xL = left_upper_arm.begin.x;
	yL = left_upper_arm.begin.y;

	xR = right_upper_arm.begin.x;
	yR = right_upper_arm.begin.y;

	// crude depth proxy: the apparent distance between the shoulders
	// shrinks as the subject moves away from the camera
	zDEPTH = xR - xL;
 
}
 
void ofxTrackedUser::debugDraw() {
	neck.debugDraw();
 
	// left arm + shoulder
	left_shoulder.debugDraw();
	left_upper_arm.debugDraw();
	left_lower_arm.debugDraw();
 
	// right arm + shoulder
	right_shoulder.debugDraw();
	right_upper_arm.debugDraw();
	right_lower_arm.debugDraw();
 
	// upper torso
	left_upper_torso.debugDraw();
	right_upper_torso.debugDraw();
 
	// left lower torso + leg
	left_lower_torso.debugDraw();
	left_upper_leg.debugDraw();
	left_lower_leg.debugDraw();
 
	// right lower torso + leg
	right_lower_torso.debugDraw();
	right_upper_leg.debugDraw();
	right_lower_leg.debugDraw();
 
	hip.debugDraw();
	ofDrawBitmapString(ofToString((int)id),neck.begin.x+ 10, neck.begin.y);
 
	// draw the "head" at the neck point; its radius tracks the shoulder distance
	ofCircle(neck.begin.x, neck.begin.y, zDEPTH);
 
}

A really simple tutorial for installing kinect & openNI on PC

by huaishup @ 11:47 am

First of all, there is a video tutorial you can follow:

—————here is the tutorial————–
Basically you need 3 things: OpenNI, the Kinect driver, and NITE.

1. OpenNI
OpenNI-Bin-Win32-v1.0.0.25.exe (http://www.openni.org/downloadfiles/openni-binaries/latest-unstable/25-openni-unstable-build-for-windows-v1-0-0/download)

Install it.

2. SensorKinect (https://github.com/avin2/SensorKinect/raw/master/Bin/SensorKinect-Win32-5.0.0.exe)
This is the driver. Just click the file and run it.

3. After installing the driver, plug in the Kinect. Your PC should find the device automatically. (BTW, the audio driver doesn’t work, but it doesn’t matter.)

4. NITE (for skeleton tracking)
(http://www.openni.org/downloadfiles/openni-compliant-middleware-binaries/latest-unstable/46-primesense-nite-unstable-build-for-windows-v1-3-0/download)
Install it, using the free key: 0KOIk2JeIBYClPWVnMoRKn5cdY4=

Now you can try some samples under C:\Program Files\OpenNI\Samples\Bin\Release.
Recommendation: NiUserTracker.exe can detect multiple users and do the skeleton mapping.

Enjoy.

Huaishu

InfoViz: Potential Data Sources

by cdoomany @ 5:12 pm 19 January 2011

1. Pachube

“Pachube is a convenient, secure & scalable platform that helps you connect to & build the ‘internet of things’, Store, share & discover realtime sensor, energy and environment data from objects, devices & buildings around the world.”

A possible project would be to create a Processing/Arduino application that uses a remote or international data feed to control a software animation, or hardware that visualizes the data in some form (see the sketch after this list).

2. The FreeSound Project

A collaborative database of recordings that can be used to easily obtain sounds for audio analysis and processing.

3. Realtime ambient sensor data from a local source

Monitor environmental data from a public space and display that information visually in the space, in order to provide an augmented sensory experience for the observer (e.g., a visualization of ambient wind velocity).
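For the Pachube idea above, here is a minimal Processing sketch of the software half. The feed URL is a placeholder (a real Pachube feed needs an API key and a feed ID); the sketch assumes a CSV endpoint that returns one numeric value per line, and maps the latest value to the radius of a pulsing circle.

// Minimal sketch: poll a (placeholder) CSV feed and map the latest
// value to a circle's radius. The URL and 0-100 range are assumptions.
String feedUrl = "http://example.com/feed.csv";  // placeholder URL
float value = 0;

void setup() {
  size(320, 240);
}

void draw() {
  // poll every ~5 seconds at the default 60 fps (and once at startup)
  if (frameCount == 1 || frameCount % 300 == 0) {
    String[] lines = loadStrings(feedUrl);
    if (lines != null && lines.length > 0) {
      value = float(lines[lines.length - 1]);  // most recent sample
    }
  }
  background(0);
  float r = map(constrain(value, 0, 100), 0, 100, 10, width/2);
  fill(255, 150);
  noStroke();
  ellipse(width/2, height/2, r*2, r*2);
}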


Alex Wolfe | Data Visualization | Idea Revision

by Alex Wolfe @ 8:03 am

After the workshop on Monday, I decided to further explore the idea of flight vs. fall. I found a number of data sources on falling: accidental falls, skydiving fatalities, suicide records, etc. I want to aggregate this information into one visualization.

I was thinking of creating a particle system that would begin at the top left of the screen and “jump” from varying heights (building, bridge, plane), either halting before hitting the ground if the person survived, or continuing all the way to the bottom, to some dramatic effect, if the person was not so fortunate.
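A quick Processing sketch of that idea follows; the jump heights and survival outcomes here are invented placeholders, not values from the datasets below.

// Sketch of the fall visualization: each particle starts at a "jump
// height" and either halts mid-air (survived) or reaches the ground.
// Heights and outcomes are random placeholders, not real data.
int N = 30;
float[] x = new float[N];
float[] y = new float[N];
float[] vy = new float[N];
float[] stopY = new float[N];  // where each fall ends

void setup() {
  size(480, 320);
  for (int i = 0; i < N; i++) {
    x[i] = map(i, 0, N-1, 20, width - 20);
    y[i] = random(20, 120);                // placeholder jump height
    boolean survived = random(1) < 0.3;    // placeholder outcome
    stopY[i] = survived ? random(150, height - 40) : height - 10;
  }
}

void draw() {
  background(255);
  stroke(0);
  line(0, height - 10, width, height - 10);  // the ground
  for (int i = 0; i < N; i++) {
    vy[i] += 0.05;                           // gravity
    y[i] = min(y[i] + vy[i], stopY[i]);      // halt at the stop point
    if (y[i] >= stopY[i]) vy[i] = 0;
    // particles that reached the ground are drawn in red
    fill(y[i] >= height - 10 ? color(200, 0, 0) : color(0));
    ellipse(x[i], y[i], 6, 6);
  }
}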

Data Sources

21st Century Mortality Dataset
A list of all of the deaths from 2007–2009, categorized by cause and deciphered using the ICD-10, the World Health Organization’s classification-of-deaths compendium. I specifically looked at all entries labeled from W00 (fall on same level involving ice and snow) to W19 (unspecified fall).

BASE Jumping Fatality List
A list of all reported BASE jumping deaths, compiled from inside the community, with short snippets on how each person died.

BASE Numbers
The number of people awarded a ranking for jumping off each of the four object types: building, antenna, span (bridge), and earth (cliff).

Timothy Sherman – Looking Outwards – Info Visualization

by Timothy Sherman @ 1:45 am 17 January 2011

1. Notabilia

Wikipedia has become a truly incredible compendium of free human knowledge, and a cultural barometer of sorts in terms of what we believe is worth knowing. Visualizing the discussions on removing items from its vast library is intriguing because the topics shown aren’t always nominated for lacking notability; sometimes they’re nominated because they’re things we’d like to culturally forget and ignore. The keep/delete leanings help frame and provide a story for each topic, but the fact that we see the story before the topic is almost a spoiler. I’d rather pick a name and then be surprised by what the community has voted for.

2. Tokyo: Right Now

Tokyo: Right Now is a visualization of census data collected in Japan about what people in Tokyo are doing. Citizens were asked to record their actions in 15-minute intervals, and the data can be browsed online by gender, economic status, and activity, minute by minute over a weekly period. It’s amazing to see such a complete picture of a city in data, and to be able to browse it so easily. That such general data can become so focused is very intriguing.

Marynel Vázquez – Potential data sources

by Marynel Vázquez @ 12:43 am

Personal Computers

I find it interesting how people (their thoughts, what they do…) change over time. What if we looked at the text files stored in the Documents/Home folder of our computers? Would a visualization of the words contained in these files show how we have changed?

The inspiration is Wordle. I think it would be interesting to group relevant words by time, instead of showing them all together; a rough sketch of a first step follows.
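Here is a tiny Processing sketch of that first step, under loud assumptions: the folder path is a placeholder, “grouping by time” is reduced to each file’s modification year, and the single most frequent long word stands in for a full Wordle-style weighting.

// Print each text file's modification year and its most frequent
// long word. The folder path is a placeholder; everything here is
// a rough first step, not the final visualization.
import java.io.File;
import java.util.HashMap;

void setup() {
  File dir = new File("/path/to/Documents");  // placeholder path
  File[] files = dir.listFiles();
  if (files == null) {
    println("folder not found");
    exit();
    return;
  }
  for (File f : files) {
    if (!f.getName().endsWith(".txt")) continue;
    String text = join(loadStrings(f.getAbsolutePath()), " ").toLowerCase();
    String[] words = splitTokens(text, " .,;:!?\"()");
    HashMap<String, Integer> counts = new HashMap<String, Integer>();
    String top = "";
    int best = 0;
    for (String w : words) {
      if (w.length() < 5) continue;  // crude stop-word filter
      int c = counts.containsKey(w) ? counts.get(w) + 1 : 1;
      counts.put(w, c);
      if (c > best) { best = c; top = w; }
    }
    String year = new java.text.SimpleDateFormat("yyyy")
                      .format(new java.util.Date(f.lastModified()));
    println(year + "  " + f.getName() + "  ->  " + top);
  }
}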

xkcd: Tic-Tac-Toe

I think this comic is pretty cool, but sometimes I find it hard to follow. Maybe adding some interactivity would help.

Craigslist.org

It might be interesting to make a real-time visualization application for new posts (e.g., in apts/housing).

Ben Gotow-LookingOutwards-5

by Ben Gotow @ 11:25 pm 16 January 2011

https://ccrma.stanford.edu/~hongchan/lush/

“Lush” is an interface that allows the user to interactively explore and visualize music in an organic environment. The notes “swim” through the environment like fish, using a flocking algorithm, and the user can draw a line across their paths that causes their notes to be played. The application takes MIDI files as input, so there really is a song that can be played and reproduced if the user “finds” it in the environment. It’s a really cool idea, but a lot of the time the notes just sound like arbitrary noises…

Golan-TextRain

by Golan Levin @ 12:14 am 12 January 2011

Here is a Processing version of the classic TextRain (1999) by Camille Utterback and Romy Achituv:

// Text Rain (Processing cover version)
// Original by Camille Utterback and Romy Achituv (1999):
// http://camilleutterback.com/projects/text-rain/
// Implemented in Processing 1.2.1 by Golan Levin
 
import processing.video.*;
Capture video;
 
float brightnessThreshold = 50;
TextRainLetter poemLetters[];
int nLetters;
 
//-----------------------------------
void setup() {
  size(320,240); 
  video = new Capture(this, width, height);
 
  String poemString = "A poem about bodies";
  nLetters = poemString.length();
  poemLetters = new TextRainLetter[nLetters];
  for (int i=0; i<nLetters; i++) {
    char c = poemString.charAt(i);
    float x = width * (float)(i+1)/(nLetters+1);
    float y = 10;
    poemLetters[i] = new TextRainLetter(c,x,y);
  }
}
 
//-----------------------------------
void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    image (video, 0, 0, width, height); 
    for (int i=0; i<nLetters; i++) {
      poemLetters[i].update();
      poemLetters[i].draw();
    }
  }
}
 
 
 
//===================================================================
class TextRainLetter {
 
  float gravity = 1.5;
  char  c;
  float x; 
  float y;
  TextRainLetter (char cc, float xx, float yy) {
    c = cc;
    x = xx;
    y = yy;
  }
 
  //-----------------------------------
  void update() {
    // Update the position of a TextRainLetter. 
 
    // 1. Compute the pixel index corresponding to the (x,y) location of the particle.
    int index = width*(int)y + (int)x;
    index = constrain (index, 0, width*height-1);
 
    // 2. Fetch the color of the pixel there, and compute its brightness.
    float pixelBrightness = brightness(video.pixels[index]);
 
    // 3. If we're in a bright area, move downwards.
    //    If we're in a dark area, move up until we're in a light area.
    if (pixelBrightness > brightnessThreshold) {
      y += gravity; //move downward
 
    } else {
      while ((y > 0) && (pixelBrightness <= brightnessThreshold)){
        y -= gravity; // travel up until it's bright 
        index = width*(int)y + (int)x;
        index = constrain (index, 0, width*height-1);
        pixelBrightness = brightness(video.pixels[index]);
      }
    }
 
    if ((y >= height-1) || (y < 0)){
      y = 0;
    }
  }
 
  //-----------------------------------
  void draw() {
    // Draw the letter. Use a simple black "drop shadow"
    // to achieve improved contrast for the typography. 
    fill(0);
    text (""+c, x+1,y+1); 
    text (""+c, x-1,y+1); 
    text (""+c, x+1,y-1); 
    text (""+c, x-1,y-1); 
    fill(255);
    text (""+c, x,y);
  }
}

Here is a version for OpenFrameworks v.062, based on the MovieCapture example. First, the header (.h) file:

#ifndef _TEST_APP
#define _TEST_APP
 
#include "ofMain.h"
 
class TextRainLetter {
public:
	char  c;
	float x; 
	float y;
 
	TextRainLetter (char cc, float xx, float yy);
	float getPixelBrightness (unsigned char *rgbPixels, int baseIndex);
	void update (unsigned char *rgbPixels, int width, int height, float threshold);
	void draw();
};
 
 
class testApp : public ofBaseApp {
 
	public:
 
		void setup();
		void update();
		void draw();
 
		void keyPressed(int key);
		void keyReleased(int key);
		void mouseMoved(int x, int y );
		void mouseDragged(int x, int y, int button);
		void mousePressed(int x, int y, int button);
		void mouseReleased(int x, int y, int button);
		void windowResized(int w, int h);
 
		ofVideoGrabber vidGrabber;
		int camWidth;
		int camHeight;
 
		float brightnessThreshold;
		vector <TextRainLetter>  poemLetters; 
		int nLetters;
};
 
#endif

And the C++ (.cpp) file:

#include "testApp.h"
 
//--------------------------------------------------------------
void testApp::setup(){	 
 
	camWidth  = 320;	
	camHeight = 240;
	vidGrabber.setVerbose(true);
	vidGrabber.initGrabber(camWidth,camHeight);
 
	brightnessThreshold = 50;
 
	string poemString = "A poem about bodies";
	nLetters = poemString.length();
	for (int i=0; i<nLetters; i++) {
		char  c = poemString[i];
		float x = camWidth * (float)(i+1)/(nLetters+1);
		float y = 10;
		poemLetters.push_back( TextRainLetter(c,x,y) );
	}
}
 
//--------------------------------------------------------------
void testApp::update(){
	vidGrabber.grabFrame();
	if (vidGrabber.isFrameNew()){
		unsigned char *videoPixels = vidGrabber.getPixels();
		for (int i=0; i<nLetters; i++) {
			poemLetters[i].update(videoPixels, camWidth, camHeight, brightnessThreshold);
		}
	}
}
 
//--------------------------------------------------------------
void testApp::draw(){
	ofBackground(100,100,100);
	ofSetHexColor(0xffffff);
	vidGrabber.draw(0,0);
 
	for (int i=0; i<nLetters; i++) {
		poemLetters[i].draw();
	}
}
 
//--------------------------------------------------------------
void testApp::keyPressed  (int key){}
void testApp::keyReleased(int key){}
void testApp::mouseMoved(int x, int y ){}
void testApp::mouseDragged(int x, int y, int button){}
void testApp::mousePressed(int x, int y, int button){}
void testApp::mouseReleased(int x, int y, int button){}
void testApp::windowResized(int w, int h){}
 
 
 
//=====================================================================
TextRainLetter::TextRainLetter (char cc, float xx, float yy) {
    // Constructor function
    c = cc;
    x = xx;
    y = yy;
}
 
//-----------------------------------
void TextRainLetter::draw(){
	char drawStr[2];  // room for the character plus sprintf's null terminator
	sprintf(drawStr, "%c", c);
 
	ofSetColor(0,0,0);
	ofDrawBitmapString(drawStr, x-1,y-1);
	ofDrawBitmapString(drawStr, x+1,y-1);
	ofDrawBitmapString(drawStr, x-1,y+1);
	ofDrawBitmapString(drawStr, x+1,y+1);
 
	ofSetColor(255,255,255);
	ofDrawBitmapString(drawStr, x,y);
}
 
//-----------------------------------
void TextRainLetter::update (unsigned char *rgbPixels, int width, int height, float threshold) {
    int index = 3* (width*(int)y + (int)x);
    index = (int) ofClamp (index, 0, 3*width*height-1);
    float pixelBrightness = getPixelBrightness(rgbPixels, index);
 
    float gravity = 2.0;
    if (pixelBrightness > threshold) {
	y += gravity; 
 
    } else {
	while ((y > 0) && (pixelBrightness <= threshold)){
		y -= gravity; 
		index = 3* (width*(int)y + (int)x);
		index = (int) ofClamp (index, 0, 3*width*height-1);
		pixelBrightness = getPixelBrightness(rgbPixels, index);
	}
    }
 
    if ((y >= height-1) || (y < 10)){
	y = 10;
    }
}
 
//-----------------------------------
float TextRainLetter::getPixelBrightness (unsigned char *rgbPixels, int baseIndex){
	// small utility function
	int r = rgbPixels[baseIndex + 0];
	int g = rgbPixels[baseIndex + 1];
	int b = rgbPixels[baseIndex + 2];
	float pixelBrightness = (r+g+b)/3.0;
	return pixelBrightness;
}

Golan-Schotter

by Golan Levin @ 11:12 pm 11 January 2011

Version for Processing and Processing.JS. Pressing a key pauses the animation.

// Schotter (1965) by Georg Nees.
// http://www.mediaartnet.org/works/schotter/
// Processing version by Golan Levin
 
// There are versions by other people at
// http://nr37.nl/software/georgnees/index.html
// http://www.openprocessing.org/visuals/?visualID=10429
// But IMHO these are inferior implementations;
// note e.g. the misplaced origins for square rotation.
 
//-------------------------------
boolean go = true;
void setup() {
  size(325,565);
}
 
//-------------------------------
void draw() {
  if (go) {
    background (255);
    smooth();
    noFill();
 
    int nRows = 24;
    int nCols = 12;
    float S = 20;
    float marginX = (width  - (nCols*S))/2.0;
    float marginY = (height - (nRows*S))/2.0; 
 
    float maxRotation = (abs(mouseX)/(float)width) * 0.06; //radians
    float maxOffset =  (abs(mouseY)/(float)height) * 0.6;
 
    for (int i=0; i<nRows; i++) {
      for (int j=0; j<nCols; j++) {
        float x = marginX + j*S;
        float y = marginY + i*S;
 
        float rotationAmount = (i+1)*random(0-maxRotation, maxRotation); 
        float offsetX = i*random(0-maxOffset, maxOffset); 
        float offsetY = i*random(0-maxOffset, maxOffset); 
 
        pushMatrix();
        translate(S/2, S/2);
        translate(x+offsetX, y+offsetY);
        rotate(rotationAmount);
        translate(0-S/2, 0-S/2);
        rect(0,0,S,S);
        popMatrix();
      }
    }
  }
}
 
//-------------------------------
void keyPressed() {
  go = !go;
}

Version for OpenFrameworks v.062 (C++). This is based on the EmptyExample provided in the OF download. Note that WP-Syntax uses lang=”cpp” as the shortcode for embedding C++ code.

#include "testApp.h"
 
bool go;
//--------------------------------------------------------------
void testApp::setup(){
	go = true;
}
 
//--------------------------------------------------------------
void testApp::draw(){
 
	ofBackground(255,255,255);
	glEnable(GL_LINE_SMOOTH);
	ofSetColor(0,0,0);
	ofNoFill();
 
	int nRows = 24;
	int nCols = 12;
	float S = 20;
	float marginX = (ofGetWidth()  - (nCols*S))/2.0;
	float marginY = (ofGetHeight() - (nRows*S))/2.0; 
 
	float maxRotation = (abs(mouseX)/(float)ofGetWidth()) * 0.06; //radians
	float maxOffset =  (abs(mouseY)/(float)ofGetHeight()) * 0.6;
 
	for (int i=0; i < nRows; i++) {
		for (int j=0; j < nCols; j++) {
			float x = marginX + j*S;
			float y = marginY + i*S;
 
			float rotationAmount = (i+1)*ofRandom (0-maxRotation, maxRotation); 
			float offsetX = i* ofRandom (0-maxOffset, maxOffset); 
			float offsetY = i* ofRandom (0-maxOffset, maxOffset); 
 
			ofPushMatrix();
			ofTranslate (S/2, S/2, 0);
			ofTranslate (x+offsetX, y+offsetY);
			ofRotateZ   (RAD_TO_DEG * rotationAmount);
			ofTranslate (0-S/2, 0-S/2);
			ofRect(0,0,S,S);
			ofPopMatrix();
		}
	}
}
 
//--------------------------------------------------------------
void testApp::keyPressed(int key){
	go = !go;
	if (!go){
		ofSetFrameRate(1);
	} else {
		ofSetFrameRate(60);
	}
}
 
//--------------------------------------------------------------
void testApp::update(){}
void testApp::keyReleased(int key){}
void testApp::mouseMoved(int x, int y ){}
void testApp::mouseDragged(int x, int y, int button){}
void testApp::mousePressed(int x, int y, int button){}
void testApp::mouseReleased(int x, int y, int button){}
void testApp::windowResized(int w, int h){}

Screen capture of the OpenFrameworks version (at http://www.youtube.com/watch?v=bW-aE3OmVzo):

Ben Gotow-Text Rain

by Ben Gotow @ 5:44 pm

I implemented the Text Rain exercise in Processing, using code from the Background Subtraction example at Processing.org to do the underlying detection of objects in the scene.

Hello world!

by Golan Levin @ 12:10 am 10 January 2011

Welcome to WordPress. This is our first post.
