Dane Pieri – Build-A-Site

by dpieri @ 1:43 pm 28 April 2011

http://buildasite.heroku.com

Build-A-Site is a tool for making generative CSS based on selected parameters. After surveying tons of sites designed in the last year, I have distilled what I think typifies the design of sites in the categories Webapp, B2B, Social, and Lifestyle. Once you choose what type of site you want and tweak the colors, you can download a CSS file that can function as a base for your website.

Inspiration
I can’t really think of many specific projects that inspired me. Mostly it was just observations about the formulaic nature of web design.
Things like this automatic movie plot generator pop up now and again, so I am somewhat inspired by stuff like that.

Research
You can see the screenshots that I saved into Evernote during my research here.
Note that you can filter the screenshots by tag. I used more tags to categorize them than I included in the final tool.

From these screenshots I tried to distill generalizable features of different types of websites. For example, what are some features that one could find in 80% of mobile webapp site designs, or what are some features that would make a site instantly recognizable as a mobile webapp site?

Code
Build-A-Site is a Rails app. The full code is here.

But these are the key files:
interface_controller.rb
_css.css.erb
show.html.erb
color_helper.rb
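
The generator itself lives in the Ruby/ERB files above; purely to illustrate the underlying idea (deriving a small palette from a user-chosen base color and emitting it as CSS text), here is a minimal sketch in C++. The color rules, values, and selectors are hypothetical and are not taken from the app:

#include <algorithm>
#include <cstdio>

struct RGB { int r, g, b; };

// Blend a color toward white (amt > 0) or black (amt < 0); amt is clamped to [-1, 1].
RGB shade(RGB c, float amt) {
    float t = std::max(-1.0f, std::min(1.0f, amt));
    int target = t >= 0 ? 255 : 0;
    if (t < 0) t = -t;
    return { (int)(c.r + (target - c.r) * t),
             (int)(c.g + (target - c.g) * t),
             (int)(c.b + (target - c.b) * t) };
}

int main() {
    RGB base  = { 58, 110, 165 };                 // user-chosen accent color (hypothetical)
    RGB light = shade(base,  0.6f);               // e.g. header background
    RGB dark  = shade(base, -0.4f);               // e.g. link hover color
    printf("a { color: rgb(%d,%d,%d); }\n", base.r, base.g, base.b);
    printf("a:hover { color: rgb(%d,%d,%d); }\n", dark.r, dark.g, dark.b);
    printf("header { background: rgb(%d,%d,%d); }\n", light.r, light.g, light.b);
    return 0;
}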

Video
Sorry, the free screen-grab software I found does not show the mouse moving. Also, for some reason, the frame rate gets messed up when I upload to YouTube.
The best way to see the site is to just try it out for yourself.
http://www.youtube.com/watch?v=VwNw6iNKBNw

Icon

Short Description
Build-A-Site is a tool for making web designs based on user-selected parameters. Much like Build-A-Bear, after making a few choices the user is given a CSS file to take home with them.

Problem with OF 62 video player

by chaotic*neutral @ 6:11 pm 25 April 2011

After trying for hours to get ofVideoPlayer to do a simple loadMovie() case switch, I found on the forums that there is a problem with the 0.62 video player.

Therefore, for generative video cuts, I have to roll back to the OF 0.61 video player.
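
For reference, a loadMovie() case switch of the kind described here might look like the sketch below, written against the 0.61-era ofVideoPlayer API; the clip filenames and the way cuts are chosen are made up for illustration:

#include "ofMain.h"

ofVideoPlayer player;

// Pick and start a clip for the current generative cut.
void loadCut(int cut) {
    switch (cut) {
        case 0: player.loadMovie("movies/cutA.mov"); break;
        case 1: player.loadMovie("movies/cutB.mov"); break;
        case 2: player.loadMovie("movies/cutC.mov"); break;
        default: return;    // unknown cut: keep playing the current clip
    }
    player.play();
    // (elsewhere: call player.idleMovie() each update() and player.draw(0, 0) in draw())
}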

http://forum.openframeworks.cc/index.php?topic=5729.0

SamiaAhmed-Final-LatePhase

by Samia @ 9:55 am

A PDF! Containing a large number of screenshots.



KinectPortal – Final Check-In

by Ward Penney @ 9:49 am



This is my initial auto-thresholding. You can see the depth histogram at the bottom.
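
The post doesn't say which thresholding rule is used; as one concrete possibility, a histogram-based auto-threshold over the raw depth frame could be computed with Otsu's method, sketched below. The uint16_t buffer format and the 11-bit depth range are assumptions about the Kinect data, not the project's code:

#include <cstdint>
#include <vector>

// Returns a depth threshold separating near (foreground) from far (background)
// pixels; raw values of 0 are treated as invalid and ignored.
int autoThreshold(const uint16_t* depth, int numPixels, int maxDepth = 2048) {
    std::vector<long> hist(maxDepth, 0);
    for (int i = 0; i < numPixels; i++)
        if (depth[i] > 0 && depth[i] < maxDepth) hist[depth[i]]++;

    long total = 0; double sumAll = 0.0;
    for (int d = 0; d < maxDepth; d++) { total += hist[d]; sumAll += (double)d * hist[d]; }

    long wB = 0; double sumB = 0.0, bestVar = -1.0; int bestT = 0;
    for (int t = 0; t < maxDepth; t++) {              // Otsu: maximize between-class variance
        wB += hist[t];
        if (wB == 0) continue;
        long wF = total - wB;
        if (wF == 0) break;
        sumB += (double)t * hist[t];
        double mB = sumB / wB, mF = (sumAll - sumB) / wF;
        double between = (double)wB * (double)wF * (mB - mF) * (mB - mF);
        if (between > bestVar) { bestVar = between; bestT = t; }
    }
    return bestT;
}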

This one uses ofxControlPanel to allow for adjustment of some settings and a video library.


Le Wei – Final Project Final Update

by Le Wei @ 7:57 am

I had a hard time coming up with a concrete concept for my project, so what I have so far is a bit of a hodge-podge of little exercises I did. I wanted to achieve the effect of finger painting with sound, with different paints representing different sounds. However, I’m having a really hard time using the Maximilian library to make sounds that actually sound good and mix well together. So, as a proof to myself that some reasonable music can be made, I implemented a little keyboard thing and stuck it in as well. I think the project would be immensely better with the wireless trackpad, since it’s bigger and you can hold it in your hand, but I haven’t gotten it to work with my program on my computer (although it might work on another computer without a built-in trackpad).

So what I did get done was this:

  • Multi-touch, so different sounds can play at the same time (though the finger tracker is somewhat imperfect).
  • Picking up different sounds by dipping your finger in a paint bucket.
  • A one-octave keyboard (a minimal note-mapping sketch follows the lists below).

And what I desperately need to get done for Thursday:

  • Nicer sounds
  • Nicer looks
  • Getting the magic trackpad working
  • A paper(?) overlay on the trackpad so that it’s easier to see where to touch.
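
As a rough illustration of the one-octave keyboard mentioned above (not the actual Maximilian-based code), here is a minimal sketch that maps a hypothetical row of keys to equal-tempered frequencies and renders a sine tone into a buffer; the key layout, sample rate, and amplitude are assumptions:

#include <cmath>
#include <string>
#include <vector>

double noteToFreq(int midiNote) {                 // A4 (MIDI 69) = 440 Hz
    return 440.0 * std::pow(2.0, (midiNote - 69) / 12.0);
}

std::vector<float> renderKey(char key, int numSamples, double sampleRate = 44100.0) {
    const std::string keys = "awsedftgyhuj";      // C, C#, D, ... B (one octave, made-up layout)
    const double TWO_PI = 6.283185307179586;
    std::vector<float> buf(numSamples, 0.0f);
    std::size_t idx = keys.find(key);
    if (idx == std::string::npos) return buf;     // unmapped key: silence
    double freq = noteToFreq(60 + (int)idx);      // 60 = middle C
    for (int i = 0; i < numSamples; i++)
        buf[i] = 0.3f * (float)std::sin(TWO_PI * freq * i / sampleRate);
    return buf;
}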

 

Special Thanks

Nisha Kurani

Ben Gotow

JamesMulholland-FinalProject-UPDATE

by James Mulholland @ 7:44 am

Primary concerns:

Current plans:

 

Final Project: update – Mauricio Giraldo

by Mauricio @ 7:40 am

I finally managed to make a working prototype with:

  • integration with TTF-creation software (FontForge + potrace)
  • PHP-based ZIP file and email creation
  • interface to allow control of parameters without a keyboard

There were some significant software hurdles.

Maya Irvine – Hard Part Solved – Final Project

by mirvine @ 6:56 am

So, there have been some changes to my project, and it’s been a bit of a learning curve to try to touch on everything I want to do, but I feel like I made good progress in the past week and I am fairly confident with how I will progress from here.

After our discussions in class I decided that in order to make the generative work I was envisioning, I really needed to work with a data set. I also liked everyone’s input that it would be interesting to work with the transactional aspect of this project, while keeping in mind the idea of “customizable album art for mass production.”

After a bunch of searching through song databases and looking at a lot of web apps, I landed on Last.FM.

Last.FM is a site that allows people to track what music they are listening to. It also works as an “online personal radio,” suggesting songs based on the user’s past listening. After checking out the API, it seemed like exactly what I needed: a customizable dataset relating to a user’s musical taste. The only problem is that I don’t know anything about XML.

After looking into using XML and getting scared, I found this handy-dandy Java binding that someone made, allowing you to use the Last.FM API with Java. Hooray!

It is a bit badly documented, so I spent a lot of time trying to figure out what kinds of classes had been written before my friend Paul showed me the trick of unzipping the JAR file and opening it in Xcode.

So far I have been able to retrieve my top artists and a play count for each of them. I can theoretically retrieve the tags as well, but they seem to be all gobbledygook for some reason, so I need to figure that out. My next step will be to put all this information into a multidimensional array so it can be retrieved individually.

Next, I got a bit caught up in the idea of keeping this application on the web. That would allow it to be used by anyone, and solve the problem of gaining access to someone’s Last.FM account. So I asked Max how you do this, and I was introduced to the fab world of Ruby! He helped me mock up the login link below. So cool!

Right now, this doesn’t integrate with my Processing sketch at all, but I hope I will be able to figure that out.
I would really like to take a stab at a web app. I think I could learn a lot from it.

This is a sketch of my plan right now.
The final product will be a design of simple elements generated from each user’s history. The final output will be a PDF that could be applied to many applications, such as screen-printed shirts.

madMeshMaker :: final update

by Madeline Gannon @ 5:45 am

UPDATE!

closed mesh for 3dPrinting

trouble with face normals

flip milling!

Project 4: Final Days…

by Ben Gotow @ 3:16 am

For the last couple weeks, I’ve been working on a Kinect hack that performs body detection, extracts individuals from the scene, distorts them using GLSL shaders, and pastes them back into the scene using OpenGL multitexturing. The concept is relatively straightforward. Blob detection on the depth image determines the pixels that are part of each individual. The color pixels within the body are copied into a texture, and the non-interesting parts of the image are copied into a second background texture. Since distortions are applied to bodies in the scene, the holes in the background image need to be filled. To accomplish this, the most distant pixel at each point is cached from frame to frame and substituted in when body blobs are cut out.
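
As a sketch of that background-fill step only (not the project’s code), assuming raw per-frame depth and RGB buffers plus a per-pixel body mask, the cache can be updated and the holes filled like this:

#include <cstdint>
#include <cstring>

// cacheDepth/cacheRgb persist across frames and start zeroed; bodyMask is
// nonzero where the current frame's pixel belongs to an extracted body.
void fillBackground(const uint16_t* depth, const uint8_t* rgb,
                    uint16_t* cacheDepth, uint8_t* cacheRgb,
                    const uint8_t* bodyMask, uint8_t* outRgb, int numPixels) {
    for (int i = 0; i < numPixels; i++) {
        // Remember the most distant (farthest) valid reading ever seen at this pixel.
        if (depth[i] != 0 && depth[i] > cacheDepth[i]) {
            cacheDepth[i] = depth[i];
            std::memcpy(&cacheRgb[i * 3], &rgb[i * 3], 3);
        }
        // Where a body was cut out, substitute the cached background color.
        const uint8_t* src = bodyMask[i] ? &cacheRgb[i * 3] : &rgb[i * 3];
        std::memcpy(&outRgb[i * 3], src, 3);
    }
}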

It’s proved difficult to pull out the bodies in color. Because the depth camera and the color camera in the Kinect do not align perfectly, using a depth image blob as a mask for color image does not work. On my Kinect, the mask region was off by more than 15 pixels, and color pixels flagged as belonging to a blob might actually be part of the background.

To fix this, Max Hawkins pointed me in the direction of a Cinder project which used OpenNI to correct the perspective of the color image to match the depth image. Somehow, that impressive feat of computer imaging is accomplished with these five lines of code:


// Align depth and image generators
printf("Trying to set alt. viewpoint");
if( g_DepthGenerator.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT) )
{
    printf("Setting alt. viewpoint");
    g_DepthGenerator.GetAlternativeViewPointCap().ResetViewPoint();
    if( g_ImageGenerator ) g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint( g_ImageGenerator );
}

I hadn’t used Cinder before, and I decided to migrate the project to Cinder since it seemed to be a much more natural environment for using GLSL shaders. Unfortunately, the Kinect OpenNI drivers in Cinder seemed to be crap compared to the ones in openFrameworks et al. The console often reported that the “depth buffer size was incorrect” and that the “depth frame is invalid”. Onscreen, the image from the camera flashed, and occasionally frames appeared misaligned or half missing.

I continued fighting with Cinder until last night, when at 10PM I found this video in an online forum:

This video is intriguing because it shows the real-time detection and unique identification of multiple people with no configuration. AKA, it’s hot shit. It turns out the video is made with PrimeSense, the technology used for hand/gesture/person detection on the Xbox.

I downloaded PrimeSense and compiled the samples. Behavior in the above video: achieved. The scene analysis code is incredibly fast and highly robust. It kills the blob detection code I wrote performance-wise, and doesn’t require that people’s legs intersect with the bottom of the frame (the technique I was using assumed the nearest blob intersecting the bottom of the frame was the user).

I re-implemented the project on top of the PrimeSense sample in C++. I migrated the depth+color alignment code over from Cinder, built a background cache, and rebuilt the display on top of a GLSL shader. Since I was just using Cinder to wrap OpenGL shaders, I decided it wasn’t worth linking it into the sample code. It’s eight source files, and it compiles on the command line. It was ungodly fast. I was in love.

Rather than apply an effect to all the individuals in the scene, I decided it was more interesting to distort one. Since the PrimeSense library assigns each blob a unique identifier, this was an easy task. The video below shows the progress so far. Unfortunately, it doesn’t show off the frame rate, which is a cool 30 or 40fps.
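
Selecting that one person is straightforward given per-pixel user labels from the scene analysis. A minimal sketch (assumed data layout, not the PrimeSense sample itself): build a 0/255 mask for a single user ID, which can then be uploaded as a single-channel texture and sampled by the GLSL shader to decide which texels to distort:

#include <cstdint>

// userLabels comes from scene analysis (0 = no user); targetId is the person to distort.
void maskForUser(const uint16_t* userLabels, int numPixels, uint16_t targetId, uint8_t* mask) {
    for (int i = 0; i < numPixels; i++)
        mask[i] = (userLabels[i] == targetId) ? 255 : 0;
}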

My next step is to try to improve the edge of the extracted blob and create more interesting shaders that blur someone in the scene or convert them to “8-bit”. Stay tuned!

Checkpoint 04/25

by Chong Han Chua @ 2:57 am

The previous checkpoint was a reality check, and I scrapped the computer vision project for a continuation of my twingring project.

A short list of things I have to do and where I am:
1. Put on the web
2. Fix bugs
3. Modulate sound to create different voices
4. Do dictionary swaps and replacements of text
5. Switch to real time API and increase filtering options
6. Design, and support for multiple parties

Instead of doing a search, the new option will revolve around looking for hashtags or using a starting message ID. With this, we can prototype a play with multiple actors as well as in real time. This would enable twingring to act as a real-life Twitter play of some sort, which should be fun to watch.

On the user interface side, there’ll be some work required to improve the display of messages, the display of users, and a way to visualize who is talking and who isn’t. Some other work includes making it robust and possibly porting it to iPad (probably not).

To check out the current progress, visit www.twingring.com

Meg Richards – Final Project Update

by Meg Richards @ 2:57 am

I’m working on correctly calculating the bounce direction and velocity. Using OpenNI, I track the y position of the base of the neck. With a small time delta, I look at the difference between the two positions to get a reasonable approximation of both direction and velocity. After introducing an actual trampoline, I had to significantly shorten my sampling period, because I could perform a full bounce and return close to the starting position before the next sample was taken. I haven’t mapped velocity to side-scrolling action, so there isn’t much to see, but here’s a picture of a trampoline:
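
The estimate itself is just a finite difference of the tracked neck height over the sampling interval. A minimal sketch, assuming OpenNI real-world coordinates (millimeters, +Y up) and timestamps in seconds; these units are assumptions, not the project’s code:

// Neck height samples with timestamps.
struct BounceSample { float neckY; double t; };

// Positive = moving up, negative = moving down; magnitude approximates speed (mm/s).
float bounceVelocity(const BounceSample& prev, const BounceSample& cur) {
    double dt = cur.t - prev.t;
    if (dt <= 0.0) return 0.0f;
    return (float)((cur.neckY - prev.neckY) / dt);
}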

Bounce bounce bounce.

Three Red Blobs

by ppm @ 2:34 am

I have a Pure Data patch supplying pitch detection to a Java application, which will be drawing and animating fish in a fish tank based on the sounds people make with their voices. These red blobs are the precursors to the fish, where vertical width corresponds to pitch over a one-second recording. I plan to add colors, smooth contours, fins, and googly eyes.
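
As a sketch of that mapping only (the actual drawing code is in Java, and the pitch range and mapping direction below are guesses), one second of pitch estimates can be normalized into a width profile along the blob:

#include <algorithm>
#include <vector>

// pitchHz holds one second of pitch estimates; out-of-range values are clamped.
std::vector<float> pitchToWidths(const std::vector<float>& pitchHz,
                                 float minHz = 80.0f, float maxHz = 800.0f,
                                 float maxWidth = 60.0f) {
    std::vector<float> widths;
    widths.reserve(pitchHz.size());
    for (float hz : pitchHz) {
        float t = (hz - minHz) / (maxHz - minHz);       // normalize pitch to [0, 1]
        t = std::max(0.0f, std::min(1.0f, t));
        widths.push_back(t * maxWidth);                 // here: higher pitch = wider blob
    }
    return widths;
}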

Here is the patch:

I may end up ditching the cell phones. The point of the phone integration was so that many people could interact simultaneously, but now that I’m using Pure Data, which does real-time processing (not exactly what I wanted in the first place), it would be inconvenient to process more than one voice at a time.

Timothy Sherman – Final Project Update

by Timothy Sherman @ 2:26 am

Over this weekend, I’ve succeeded in finishing a basic build of a game using the Magrathea system. The game is for any number of players, who build a landscape so that the AI-controlled character (shown above) can walk around collecting flowers and avoiding trolls.

Building this game involved implementing a lot of features: automated systems for moving sprites, keeping track of which sprites exist, and so on. Finding modular, expandable ways to do this was a lot of my recent work. The sprites can now move around the world, avoid stepping into water, display and scale properly, etc.

The design of the game is very simple right now. The hero is extremely dumb – he basically can only try to step towards the flower. He’s smart enough not to walk into water, and not to walk up or down too-steep cliffs, but not to find another path. The troll is pretty dumb too, as he can only step towards the hero (though he can track a moving hero).
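
As a sketch of that greedy behavior (not the Magrathea code; the tile and height representation, water test, and climb threshold are all assumptions for illustration):

#include <cmath>
#include <vector>

struct Terrain {
    int w, h;
    std::vector<float> height;                       // row-major tile heights
    float waterLevel;
    float at(int x, int y) const { return height[y * w + x]; }
    bool isWater(int x, int y) const { return at(x, y) <= waterLevel; }
};

struct Tile { int x, y; };

// Step one tile toward the target; refuse water and too-steep climbs, but do no
// pathfinding (which is why the character gets stuck when the direct route is blocked).
bool tryStepToward(const Terrain& T, Tile& pos, const Tile& target, float maxClimb) {
    Tile next = pos;
    if      (target.x != pos.x) next.x += (target.x > pos.x) ? 1 : -1;
    else if (target.y != pos.y) next.y += (target.y > pos.y) ? 1 : -1;
    else return true;                                // already at the target

    if (T.isWater(next.x, next.y)) return false;
    if (std::fabs(T.at(next.x, next.y) - T.at(pos.x, pos.y)) > maxClimb) return false;
    pos = next;
    return true;
}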

I’m not keeping track of any sort of score right now (though I could keep track of how many flowers you’ve collected, or make that affect the world), because I’m concerned about the game eclipsing the rest of the tool, and I think that’s what I’m struggling with now.

Basically, I’m nervous that the ‘game’ isn’t really compelling enough, and that it’s driven the focus away from the fun, interesting part of the project (building terrain) and pushed it into another direction (waiting as some dumb asshole sprite walks across your arm).

That said, I do think watching the sprites move around and grab stuff is fun. But the enemies are too difficult to deal with reliably, and the hero is a little too dumb to trust to do anything correctly, requiring too much constant babysitting.

I also realize that I’ve been super-involved with this for the last 72 hours, so this is totally the time when I need feedback. I think the work I’ve done has gone to good use: I’ve learned how to code behaviors, display sprites better, smooth their movement, ensure they are added onto existing terrain, etc. What I’m trying to decide now is whether I should continue in the direction of refining this gameplay, or make it into more of a sandbox. Here’s the theoretical design of how that could happen (note: all of the framework for this has been implemented, so doing this would mostly require creating more graphical elements):

The user(s) builds terrain, which is populated with a few (3? 4?) characters who wander around until something (food, a flower, each other) catches their attention. When this happens, a thought balloon pops up over their head (or as part of a GUI above the terrain? That would obscure less of the action) indicating their desire for that thing, and they start (dumbly) moving towards it. When they get to it, they do a short animation. Perhaps they permanently affect the world (pick up a flower and then scatter seeds, growing more flowers?).

This may sound very dangerous or like I’m in a crisis, but what I’ve developed right now is essentially what’s described above, but with one character, one item, and the presence of a malicious element (the troll), so this path would really just be an extension of what I’ve done, but in a different direction than the game.

I’m pretty pleased with my progress, and feel that with feedback, I’ll be able to decide which direction to go in. If people want to playtest, please let me know!!

(Also, I realize some sprites (THE HOUSE) are not totally in the same world as everything else yet; it’s a placeholder/an experiment.)

Screenshots (click for full size):

Emily Schwartzman – Final Project Update

by ecschwar @ 1:41 am



Since my last update I was able to get my data finalized and prepped to start working on the visualization. Thanks to Mauricio for helping me out with a PHP script to load all of the lyrics files onto the LIWC site. Below is a snapshot of the final data that I am working with.

I did some initial tests to see how the data mapped out, and then added in the artist names to see where each artist fell on the spectrum.

I also created a full set of comparisons for each possible combination of variables to see where the most significant correlations were. Surprisingly, the metrics seem fairly well correlated across the board. (PDF of Charts)

 

 

 

I was hoping to create one 2D plot of artists that would look at similarity based on lyrics by reducing all of these metrics down to two-dimensional data, per Golan’s suggestion, but I was unable to figure this out. Instead I decided to build on one of the visualizations I had by adding some interactivity to it. I reduced the opacity of the artist names so that only the selected artist is highlighted across the charts. There are still some issues with speed, though, which might pertain to loading the text/font. Below is a screenshot of what this looks like:

I’m still working on trying to add another layer of information to the visualization. I’ve collected genre information by accessing the Last.fm API for the top tag for each artist. I would like to allow the user to select a genre and see if there are any patterns in where the artists fall based on the genre of music they are classified as. I am also considering integrating an image of the artist, and perhaps other secondary information, when you roll over an artist’s name. If I have any more time, I would like to refine this to a point where I can create a supporting print piece to work with the interactive component. Before creating the final visualization, I want to go through and clean up some of the data further to make sure it is as accurate as possible (some of the lyrics have secondary information that was pulled in when scraped, which could be throwing off the LIWC metrics).

Questions:

Any suggestions for other ways to visualize this information that would be interesting or take this to the next level?

Are there other layers of information that you think would work well or communicate something interesting about this data (besides genre)?

Any suggestions for how to improve the performance/speed of the interactive visualization?

 

John Horstman – Final Project: Update 2 (late crit)

by jhorstma @ 12:56 am

Progress

Things have progressed well with the sunset conductor idea I pitched in my previous post.

I have a sky that changes color based on the motion in the scene: the video capture is divided into 20 horizontal bins, and the total motion in each bin changes the blue component in a horizontally-corresponding section of the sky.

I have a separate piece of code that totals the motion in radial bins instead of horizontal bins, with the origin of the bin rays located at the center of the bottom of the screen.  I’m using this bit of logic in two ways.  First, I’m calculating the scene motion in a particular direction and using that to affect the red component in the corresponding section of the sky.  Second, when the scene motion in a particular direction crosses a certain threshold, stars are fired in that direction.  The goal is to create the feeling that the audience’s motion is launching stars into the night sky, where they stick.
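
As a sketch of the radial binning only (the per-pixel flow-magnitude input, the bin count, and the angle convention are assumptions, not the project’s code), each pixel’s motion is accumulated into an angular bin measured from the bottom-center of the frame:

#include <cmath>
#include <vector>

std::vector<float> radialMotion(const float* flowMag,   // per-pixel optical-flow magnitude
                                int w, int h, int nBins = 20) {
    const float PI = 3.14159265f;
    std::vector<float> bins(nBins, 0.0f);
    float cx = w * 0.5f, cy = (float)h;                 // rays originate at bottom-center
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float angle = std::atan2(cy - y, x - cx);   // 0..PI sweeping across the frame
            int bin = (int)(angle / PI * nBins);
            if (bin < 0) bin = 0;
            if (bin >= nBins) bin = nBins - 1;
            bins[bin] += flowMag[y * w + x];            // total motion in that direction
        }
    }
    return bins;                                        // e.g. fire stars where bins[i] > threshold
}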

 

Main technical issue

There is currently a problem with my radial motion calculations.  The motion doesn’t appear to be totaling evenly across all angles; there are spikes at about 9:00 and 1:30 (using a clock face for the angle).  Judging from my debugging visualization of the optical flow, I believe these spikes come from a flaw in OpenCV’s optical flow function.  If this is in fact the case, I’ll have to find a different way to total my motion into radial bins.

Remaining work

The installation will require a bit of tuning before it’s ready for showtime.

Right now, it doesn’t respond well to the audience – i.e., it’s not clear what motions in front of the camera correspond to which responses by the system.  I want it to be so easy to learn that anyone can walk up and understand what’s happening almost immediately.

The colors don’t match those of an actual sunset very closely. I think the piece would be more effective if there were a closer match.

The stars don’t distribute very evenly; they disperse in an arc pattern.  Some simple math based on the trajectory angle should clean that up.

Aside from those technical issues, I have some concerns about the installation itself.  My motion detection is all based on optical flow; in the installation space, I’ll probably have some problems with a noisy background.  Also, I’m not sure if my concept is coming across.  I think that what I’ve built so far is fun to interact with, but I don’t know if the idea is being clearly communicated.

Feedback requests

Does the main idea come across?

Should the sunset reset when all motion stops?

Should the audio be more musical?

Final project update

by huaishup @ 11:16 pm 24 April 2011

I am trying to work along my plan and schedule for this final project. Ideally I will finish 3~5 fully functional drum bots to demonstrate the potential combination of music and algorithm.

 

What I have done:

Spent some time redesigning my own Arduino-compatible circuit board, which is smaller than the original one (1 x 1 inch) and has all sockets and electronic parts on board.

Ordered all parts from Digikey last Wed.


Tried a lot, but some problems remain:

The piezo sensor is really fragile and not stable.

Solenoids are either too weak or eat too much current & voltage.

Batteries are always the problem.

The circuit board never shipped.

Sound effects.

 

Expectation:

By Thursday, have a working demo with 3~5 drum boxes.

Software is hard, hardware is HARDER!

Charles Doomany: Final Project Concept: UPDATE

by cdoomany @ 10:22 am 20 April 2011

 

pieces

by susanlin @ 8:57 am

start here

1. color – sepia, minimal
2. lineart – edges are important
3. trails – particles or such


Live Feed, 2-toned



Edge detection, understood and working stand-alone



Combining? Broke it. Inefficient keyboard banging to no avail, eyes bleeding. (This is an overlay Photoshopped to demonstrate the somewhat desired effect.)



Next: oFlow (optical flow), learning from this good example found on OpenProcessing…




Scoping… Make this into one part of a series of small pieces for learning bits of coding.
Combos in mind include:
1. 2 color + trails
2. 8bit/block people + springs
3. Ballpit people / colorblind test people + boids



Display may be something like this..

final project early phase presentation

by honray @ 8:01 am

Link to demo

Original idea

  • Users each control a blob
  • Blobs can interact with each other
  • Playstation Home, but with blobs

New Idea

  • Collaborative platformer
  • Blob has to get from point A to point B
  • 2 player game
  • Person 1 is blob
  • Person 2 controls the level
  • P2 controls levers, ramps, mechanics of level
  • Goal is to help p1 pass the level
  • How does p2 help p1 without communicating directly?

What’s been done

  • Box2d up & running
  • Blob mechanics
  • Basic level design
  • Keyboard control of blob

Hurdles

• Collaboration
  • Two people go to the website and register (PHP)
  • Create a websocket server (Python); each player communicates via web browser (Chrome) and websockets
• Maintaining state via websockets
  • P1 (blob player) is the master and maintains overall state
  • P2 (level player) is the slave
  • P2 sends level state changes to P1
  • P1 sends blob position/velocity updates to P2
  • Any other ideas on how to do this?
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.