Madeline-Gannon__Term Project Ideas

by Madeline Gannon @ 8:08 am 30 March 2011

INITIAL GOAL :: to continue developing my last project, advancing it from a pedagogical tool to a design tool

+ ADD MORE TOOL PATHS
+ ADD ATTRACTION/REPULSION POINTS
+ FIX PEASYCAM ROTATION ON EXPORT
+ REFINE HOW DATA IS STRUCTURED
+ FREEZE-N-EDIT CONTROL

SECONDARY GOAL :: prove the tool’s validity

+ MAKE SOME BEAUTIFUL STUFF
+ EXPLORE FORM, MATERIAL, AND TECHNIQUE
+ + + ideas: small scale to large scale – platter, wall tiles, table, sink+counter, door
+ + + + + + laminated hardwood, paper stone, resin, mdf, plywood
+ + + + + + milling, casting (resin, rockite, plaster), vacuum forming

QUESTION

+ WHAT ELSE? to the application … fabrication techniques … stuff to be made
+ DOES ANYONE WANT TO TRY OUT THE LAST ONE!!!
+ + + + + + I’d like to have robust documentation of other people designing+making with it.

Dashboard

by chaotic*neutral @ 8:01 am

foo PDF

Eric Brockmeyer – Looking Outwards 3

by eric.brockmeyer @ 7:35 am

I dig the Star Wars reference, but the use of the Kinect for a holographic image is something new. It seems like every other day a new use for this device becomes available. The video isn’t a great representation of what this technology can do, though.

kRC // kinect controlled rc helicopter from Shawn Sims on Vimeo.

shawn sims-lookingOutwards-FINAL

by Shawn Sims @ 12:27 am

I plan on continuing to work with the ABB4400 robot in dFab. My final goal is live, interactive control of the machine. This may take the form of interactive fabrication, dancing with the robot, or some type of camera rig.

Inspiration

There have been a few projects and areas of research that have given me inspiration. Robotic surgery tools are an extremely apt example of interactive, live control of robots that still maintains the precision and repeatability they are designed for. The ultimate goal of my project is to leverage these same properties of the robot through a gestural interface.

There is a very interesting design space here: the ability for these robots to become mobile and perform these tasks in different environments. My vision of the future of architecture is these robots running around, building and 3D printing spaces for us. Something like this…

Design Goal
The project will explore the relationship between the user’s movement and gesture and the fabrication output of the robot. That is to say, the interpretation of the input will be used to work a material in a way that offers a unique and efficient relationship to the user: e.g., the user bends a flex sensor and the robot bends steel, or the user makes a “surface” by gesturing their hands through the air and the robot mills an interpretation of that surface. Another idea is an additive process, like gluing things together based on user input, like this example…

Technical Hurdles
TCP/IP open-socket communication is proving to be a bit tricky with the ABB RobotStudio software. I believe we can solve this problem, but there are some worries about making sure we don’t make the super-powerful robot bang into the wall or something, because that would be costly and bad.
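
Just to make the plan concrete, here is a rough Processing sketch of the kind of client I have in mind. Everything robot-side is assumed: it presumes some program on the controller (e.g. a RAPID socket routine we would still have to write) is listening at a made-up IP and port and parsing simple “x y z” text commands, and the mouse stands in for a gesture sensor.

// Minimal sketch of streaming gesture data to the robot controller over TCP.
// The server address, port, and "x y z" text protocol are all hypothetical.
import processing.net.*;

Client robot;

void setup() {
  size(400, 400);
  robot = new Client(this, "192.168.125.1", 1025);  // placeholder controller IP/port
}

void draw() {
  background(0);
  // Map the mouse (a stand-in for a gesture sensor) to a target position in mm,
  // clamped to a safe working envelope so the arm can't reach the walls.
  float x = constrain(map(mouseX, 0, width, -300, 300), -300, 300);
  float y = constrain(map(mouseY, 0, height, -300, 300), -300, 300);
  float z = 400;  // fixed safe height
  if (robot.active()) {
    robot.write(x + " " + y + " " + z + "\n");
  }
}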

Question
What are some interesting interfaces or interactions you can imagine with the robot? input // output?

There are a few constraints like speed, room size, safety, etc…

Thanks

Susan Lin – Looking Outwards – 7: Final Project Inspiration

by susanlin @ 9:17 am 28 March 2011

I am kicking around an idea of either analyzing or generating cute characters.


http://pinktentacle.com/2010/08/99-cute-trademarked-characters-from-japan/ from http://www3.ipdl.inpit.go.jp/cgi-bin/TF/sft3.cgi


Pictoplasma Book


Darwin Hill


Takashi Murakami


Art from Summer Wars

SamiaAhmed-LookingOutwards-FinalProject

by Samia @ 7:45 am


The Written Images book.


Shaun Inman’s generative website CSS.



Zach Gage’s experiments.

Looking Outwards – Final Project

by Max Hawkins @ 6:45 am

This is an interesting anti-pattern for my transit visualization. It’s a somewhat arbitrary mapping between sitemap information and a London-Tube-style map.

How transit-oriented is the Portland region? The mapping here is pretty straightforward (transit-friendliness to height) but compelling.

Cool project out of Columbia’s Graduate School of Architecture. It maps the homes of people in New York prisons on a block-by-block basis. I want my project to have this sort of granularity.

Maya Irvine – looking outwards – Project 5

by mirvine @ 3:25 am

For my final project I would like to continue working with generative graphic design. I would like the system to be more intricate and meaningful, and I would also like to generate more than one functional design. For instance, the final product could be a logo, a poster, and an album book, all for a single CD.

In order to do this I have been looking into other “generative print” projects.

1.



In the last few weeks Ishac Bertran has been making experiments in the area of “Generative Photography”. He describes a process in which digital drawings are sequentially projected onto a screen in a dark room and photographed using long exposure times. As in generative art, this photography technique uses an algorithm that is polluted with a certain randomness; the randomness comes from rendering imperfections and the asynchrony between the frame rate of the video signal and the refresh rate of the projector.
Glitches are unique, almost impossible to reproduce, and usually imperceptible to the naked eye. This technique gives shape to these digital glitches and captures their unpredictable beauty… The experiments pursue an artistic exploration toward a certain aesthetic outcome more than research on computer engineering…
His most recent experiments analyze the glitches caused by the rendering and by the asynchrony between the frame rate of the video signal and the refresh rate of the projector. Running a Processing sketch and then photographing the projection, he uses various photographic techniques, including exposure, shutter speed, and frame rate, to seek out glitches sometimes caused by the computer monitor or graphics card and sometimes by what is caught between the digital and analogue mediums.

2.



For the posters and flyers for The Puddle (Stall 6), Andreas Gysin and Sidi Vanetti created this Processing sketch to produce the rasters, which were eventually screen printed on coloured paper.
Built with a modified version of Toxi’s cp5magic.

3.

Benedikt just completed his studies at HfG Schwäbisch Gmünd, where he focused on generative systems. His thesis project, produced in collaboration with Julia Laub, yielded a fantastic series of posters documenting their experiments with Adobe InDesign scripting and Processing. The posters are definitely worth looking at, and Benedikt has been kind enough to share his InDesign scripts, which are available for download to anyone interested. Some of the scripts lend themselves to auto-annotation of design syntax (i.e., the point size of every word is written underneath it) and others deal with paper space (i.e., a rectangle is generated at the end of each line of text, demarcating the ragged right side of a left-justified paragraph). There is also a Processing applet for generating recursive tiling patterns, which can be used to determine nested grid systems like the one used in the image above.
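
To get a feel for the nested-grid idea, here is a tiny recursive tiling sketch of my own in Processing (a simplified take, not Benedikt’s actual code): each cell is drawn and then either stops or splits into a 2×2 grid of smaller cells, down to a minimum size.

// Recursive tiling: every cell draws itself, then maybe subdivides into four.
void setup() {
  size(600, 600);
  noLoop();
}

void draw() {
  background(255);
  stroke(0);
  noFill();
  divide(0, 0, width, height);
}

void divide(float x, float y, float w, float h) {
  rect(x, y, w, h);
  // Stop splitting when cells get small, or randomly, to vary the nesting depth.
  if (w < 40 || random(1) < 0.35) return;
  float hw = w / 2;
  float hh = h / 2;
  divide(x,      y,      hw, hh);
  divide(x + hw, y,      hw, hh);
  divide(x,      y + hh, hw, hh);
  divide(x + hw, y + hh, hw, hh);
}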

John Horstman – Final Project: Looking Outwards

by jhorstma @ 1:51 am

Sound sculptures & installations, by Zimoun

I’m thinking about creating an installation piece for my final project. Zimoun’s work interests me as a low-fidelity, low-complexity way to create an atmosphere. I find the droning sounds comforting in their repetition and ambiance, since they’re all basically different shades of white noise. The random movement of the pieces makes them endlessly interesting to watch, too, and when they’re mounted in grids of 100 or more they only become more interesting. Each installation starts with a small idea, but the end result is greater than the sum of its parts.

 

Fragments of RGB, by Onformative (Julia Laub and Cedric Kiefer)

Again, a simple idea that resulted in an impressive installation. A 2-D image is separated into its RGB components, which are then manipulated individually. It’s a playful way to make us consider what we’re seeing when we look at an electronic image. The way the RGB elements are manipulated and distorted is soothing to watch. Not sure what algorithms are being used to control the pixel behavior, but as Onformative describes itself as “a studio for generative design” I would expect that there’s at least some randomization taking place.
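
As a quick test of the underlying idea (my guess at it, not Onformative’s actual method), here is a small Processing sketch that splits an image into red, green, and blue layers and drifts each one independently; “source.jpg” is a placeholder filename in the sketch’s data folder.

// Split an image into its R, G, and B components and offset each channel on its own.
PImage src;

void setup() {
  size(900, 300);
  src = loadImage("source.jpg");
  src.resize(300, 300);
}

void draw() {
  background(0);
  drawChannel(src, 0,   sin(frameCount * 0.02) * 20, color(255, 0, 0));
  drawChannel(src, 300, sin(frameCount * 0.03) * 20, color(0, 255, 0));
  drawChannel(src, 600, sin(frameCount * 0.05) * 20, color(0, 0, 255));
}

void drawChannel(PImage img, float x, float yOffset, int channelMask) {
  PImage channel = img.get();  // copy so the original stays intact
  channel.loadPixels();
  for (int i = 0; i < channel.pixels.length; i++) {
    // Keep only the alpha bits plus the bits of the chosen channel.
    channel.pixels[i] = channel.pixels[i] & (channelMask | 0xFF000000);
  }
  channel.updatePixels();
  image(channel, x, yOffset);
}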

 

Untitled 5, by Camille Utterback

What interests me about this piece is the way in which it was constructed – specifically, the hand-drawn figures that Utterback scanned in to give the digital piece a more interesting texture (as she explains in this video interview with Wired).  The idea calls to mind John Maeda’s Cheeto paint, in which he scanned Cheetos and used the images to create line drawings.  I like the idea of taking images of real objects and using them to create textures that might be difficult to achieve algorithmically.  I have a few ideas in mind for ways this technique could be applied to a final project.

 

Final project idea

The first piece I posted above gave me my favorite idea so far for my final project: putting someone in front of a video camera and processing their movement so that they appear to break into particles as they move around.  The particles could be separate RGB elements or simply the pixels that make up their body.  I’m thinking about whether it would make sense for the particles to show some flocking behavior as they move with the arm, then reconvene as the movement stops.  The particle breakdown wouldn’t have to apply only to people; it could apply to any object moving in the scene.
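
A very rough sketch of the motion-to-particles idea, using simple frame differencing on a webcam as a stand-in for the Kinect (this assumes Processing’s video library; the sampling step and threshold are arbitrary):

// Spawn particles wherever the camera image changed a lot since the last frame.
import processing.video.*;

Capture cam;
PImage prev;
ArrayList<PVector> particles = new ArrayList<PVector>();

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    if (prev != null) {
      cam.loadPixels();
      prev.loadPixels();
      // Sample the frame sparsely; big brightness changes become particles.
      for (int y = 0; y < height; y += 8) {
        for (int x = 0; x < width; x += 8) {
          int i = y * width + x;
          float diff = abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
          if (diff > 40) particles.add(new PVector(x, y));
        }
      }
    }
    prev = cam.get();
  }
  background(0);
  image(cam, 0, 0);
  stroke(255);
  // Particles drift upward and disappear once they leave the frame.
  for (int i = particles.size() - 1; i >= 0; i--) {
    PVector p = particles.get(i);
    p.y -= 1;
    point(p.x, p.y);
    if (p.y < 0) particles.remove(i);
  }
}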

Looking Outwards-Final Project

by Ben Gotow @ 12:36 am

I’m still tossing around ideas for my final project, but I’d like to do more experimentation with the Kinect. Specifically, I think it’d be fun to do some high-quality background subtraction and separate the user from the rest of the scene. I’d like to create a hack in which the user’s body is distorted by a funhouse mirror while the background of the scene remains entirely unaffected. Other tricks, such as pixelating the user’s body or blurring it while keeping everything else intact, could also be fun. The basic idea seems manageable, and I think I’d have some time left over to polish it and add a number of features. I’d like to draw on the auto-calibration code I wrote for my previous Kinect hack so that it’s easy to walk up and interact with the “circus mirror.”

I’ve been searching for about an hour, and it doesn’t look like anyone has done selective distortion of the RGB camera image off the Kinect. I’m thinking something like this:

Imagine how much fun those Koreans would be having if the entire scene looked normal except for their stretched friend. It’s crazy mirror 2.0.

I think background subtraction (and subsequent filling) would be important for this sort of hack, and it looks like progress has been made on this in OpenFrameworks. The video below shows someone cutting themselves out of the Kinect depth image and then hiding everything else in the scene.

To achieve the distortion of the user’s body, I’m hoping to do some low-level work in OpenGL. I’ve done some research in this area, and it looks like using a framebuffer and some bump mapping might be a good approach. This article suggests using the camera image as a texture and then mapping it onto a bump-mapped “mirror” plane:

Circus mirror and lens effects. Using a texture surface as the rendering target, render a scene (or a subset thereof) from the point of view of a mirror in your scene. Then use this rendered scene as the mirror’s texture, and use bump mapping to perturb the reflection/refraction according to the values in your bump map. This way the mirror could be bent and warped, like a funhouse mirror, to distort the view of the scene.
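
Before diving into OpenGL, a masked warp might be enough to prototype the selective distortion. Here is a rough Processing sketch where a placeholder circle stands in for the user mask that background subtraction (or a Kinect depth threshold) would eventually provide, and “scene.jpg” is a placeholder image:

// Warp only the pixels inside the mask; copy everything else through untouched.
PImage scene;

void setup() {
  size(640, 480);
  scene = loadImage("scene.jpg");
  scene.resize(width, height);
}

void draw() {
  loadPixels();
  scene.loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      if (inMask(x, y)) {
        // Inside the "user": sample from a sinusoidally displaced location.
        int sx = constrain(x + int(20 * sin(y * 0.05 + frameCount * 0.1)), 0, width - 1);
        pixels[y * width + x] = scene.pixels[y * width + sx];
      } else {
        // Outside the mask: the scene stays exactly as it is.
        pixels[y * width + x] = scene.pixels[y * width + x];
      }
    }
  }
  updatePixels();
}

// Placeholder mask; with the Kinect this would come from thresholding the depth image.
boolean inMask(int x, int y) {
  return dist(x, y, width / 2, height / 2) < 120;
}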

At any rate, we’ll see how it goes! I’d love to get some feedback on the idea. It seems like something I could get going pretty quickly, so I’m definitely looking for possible extensions / features that might make it more interesting!

Looking Outwards + Ideation, Final

by Chong Han Chua @ 12:00 am

A few things I’m thinking about, mainly about sound.


In the clip above, the image is dissected into its individual colours and rotated to show a distorted perspective. I think breaking the image down into individual balls and orbs might be a good idea, then giving them some sort of physics and producing a sound as the user moves around and the balls collide with each other in a spring model.


The video here shows a dancer dancing with a virtual actor. I’m thinking of using the Kinect to track a dancer’s body and produce music along with the dance. In a sense, it’s a juxtaposition of dancing to music versus generating music with dance.

The last inspiration I had was the project where the artist tracks the movement of grass blades blown by the wind and produces sound. I’m thinking of creating a purely sound-based project: a soundscape that users can wander into and interact with. The idea is that the user is wading through blades of grass, and as they push the grass blades around, the blades collide and create sound. It would be a project purely based on sound, with no visuals.

Timothy Sherman – Final Project – Looking Outwards

by Timothy Sherman @ 11:46 pm 27 March 2011

For my final project, I’m currently thinking I want to adapt the dynamic Kinect landscape that Paul Miller and I made for the second project, and probably create some sort of game on top of it.

Recompose is a Kinect project developed by the MIT Media Lab. It uses a depth camera mounted above a table to do gesture recognition on the user’s hands in order to control a pin-based surface on the table. I think this is interesting because it’s almost a reversal of the work Paul and I did, which I’d like to expand: modifying something physical rather than something virtual. These types of gestures might also be good to incorporate into a game to give the user more control.

Not quite a project, but something I’ll be looking over is this Guide to Meshes in Cinder. OpenFrameworks has no built-in mesh tools, so if Cinder has something to make the process easier, I may consider porting the code over in order to save myself some trouble.

This project, Dynamic Terrain, is another interesting reversal of our project: it modifies the physical through the virtual rather than the virtual through the physical.

These aren’t new, but I’m trying to find more info on Reactables, as one of the directions I could go in would be incorporating special objects into the terrain generator that represent certain features or can make other modifications. A project like this can help guide me in how to think about relating these objects to each other, and what variety and diversity of functions they might have.

Finally, I found this article on terrain reasoning for AI. I’m thinking of a game where the player must guide or interact with groups of tiny creatures or people by modifying their landscape, so the information and ideas here could be extremely useful.

LookingOutwards-Final project

by huaishup @ 11:22 pm

For my final project, one possibility is to keep working on the algrhythm project and make a set of physical drum bots. Ideally I’d like to create about 10 drum bots with different drum sticks and built-in algorithms. These drums could pile up or form a chain, circle, tree, or whatever, and we’ll see what kind of music we can get from them.

Some inspiration:

1. Yellow Drum Machine


This is a cute project. The drum machine has an IR sensor which, instead of making the robot avoid obstacles like most other robots do, leads it to those objects so it can beat on them.

2. ABSOLUTE MACHINES


I saw this video at last week’s What’s On talk. Jeff Lieberman showed his project Absolut Machines. Triggered by a piece of impromptu music, this set of machine robots replays and revises the pattern. By combining different types of bots, the final work turns out to be a piece of art.

3. muchosucko


This project is from Georgia Tech. The drum robot learns a percussion beat from the human performer and, by applying a generative algorithm, makes more complicated but beautiful beats and plays them together with the drummer.
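
As a toy sketch of that learn-then-vary idea (mine, not the Georgia Tech system), here are a few lines of Processing that take a “heard” 16-step pattern and generate variations by probabilistically flipping steps while keeping the downbeats intact:

// Print a heard beat and a few mutated variations of it to the console.
int[] heard = {1,0,0,0, 1,0,1,0, 1,0,0,1, 1,0,1,0};

void setup() {
  println("heard:     " + patternToString(heard));
  for (int v = 0; v < 4; v++) {
    println("variation: " + patternToString(vary(heard, 0.2)));
  }
}

int[] vary(int[] pattern, float mutationRate) {
  int[] out = new int[pattern.length];
  for (int i = 0; i < pattern.length; i++) {
    boolean downbeat = (i % 4 == 0);           // keep steps 1, 5, 9, 13 as heard
    if (!downbeat && random(1) < mutationRate) {
      out[i] = 1 - pattern[i];                 // flip this step
    } else {
      out[i] = pattern[i];
    }
  }
  return out;
}

String patternToString(int[] p) {
  String s = "";
  for (int i = 0; i < p.length; i++) {
    s += p[i] + (i % 4 == 3 ? " | " : " ");
  }
  return s;
}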

Looking Out – Project 5

by Ward Penney @ 11:00 pm

Soul Stuff

I found this really weird video of people walking on a street, but they are colored and composited back in using background subtraction.

I have been thinking about doing some “soul” visualizations of the observers in a live installation. Some possible scenarios:

  • A user walks up to a seemingly normal mirrored display of themselves. Then, a moment later, a “soul” of the same person, more like a light outline, walks right to where they are standing and joins with them.
  • Other users’ previous souls walk in, stand around, and approach the installation.

Here is the background subtraction camo example:

Presentations

My friend has an idea to use the Kinect to direct a live presentation. That gave me the idea of using the Kinect to speak to a fictitious large audience and try to get them riled up. The user would stand behind a podium and talk. Perhaps like the State of the Union? The user walks up to the podium and half of the audience stands and claps; say something with cadence and a partisan block stands? See this video of the interaction in Kinect Sports:

But, apply it to this:
Barack Obama State of the Union

 

Looking Outwards: Final Project

by nkurani @ 8:40 pm

As you know, for my final project I plan to take the movements of a user (using the Kinect) to generate a unique tree. Depending on how long this takes me to code, I will play with the aesthetics and possibly include multiple trees for multiple users. Here are some projects that I am using as inspiration:

http://www.creativeapplications.net/processing/cloud-forest-processing/
This is a cloud forest of generative trees created by Holger Lippman. He generates the trees in a forest-like atmosphere by adjusting the alpha, position, size, etc. It’s a very beautiful landscape. I thought this project was super cool. I may try to generate multiple trees.

http://www.creativeapplications.net/openframeworks/moc-openframeworks/
When the user whistles, they plant the seed for a tree. The trunk and branches represent the sound spectrum, creating a unique tree. Their aim was to use immaterial data in a fun way. They have recorded each tree in an online database to create an ever-expanding forest. This is great inspiration for my project because it gives me a beautiful example of a cool way to generate trees from user input.

http://www.creativeapplications.net/other/dancing-with-swarming-particles-kinect-unity/
Dancing with Swarming Particles explores the relationship between the virtual and physical worlds. The movement of the user affects the movement of the particles on the screen. This helps me visualize how movement can impact what the user sees on screen.
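
To start thinking about the tree itself, here is a minimal recursive-branching sketch in Processing; for now the mouse stands in for the Kinect movement that would eventually drive the branch angles and lengths.

// A classic recursive tree: each branch spawns two smaller branches until they get tiny.
void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  stroke(60, 40, 20);
  translate(width / 2, height);
  // Map user input (mouse for now) to the branching angle and trunk length.
  float angle = map(mouseX, 0, width, PI / 12, PI / 3);
  float len = map(mouseY, 0, height, 140, 60);
  branch(len, angle);
}

void branch(float len, float angle) {
  strokeWeight(max(1, len * 0.05));
  line(0, 0, 0, -len);
  translate(0, -len);
  if (len < 8) return;                 // stop when branches get tiny
  pushMatrix();
  rotate(angle);
  branch(len * 0.67, angle);
  popMatrix();
  pushMatrix();
  rotate(-angle);
  branch(len * 0.67, angle);
  popMatrix();
}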

Emily Schwartzman – Final Project – Looking Outwards + Initial Concepts

by ecschwar @ 8:23 pm

I’m still a bit undecided on the space I want to focus on for the final project. I would like to do something with information visualization again, and perhaps incorporate some generative aspect as well.

I’ve had a couple of ideas that I have been considering. One would be to look at visualizing some aspect of identity or personality. I don’t know exactly what angle I would take on that yet though. I was thinking I could try to generate an abstract icon or symbol of some sort. Some possible metrics to generate an icon or symbol that represents someone’s identity could include their name, date of birth, a profile shot or webcam capture, the sound of their voice, location, etc. One issue is where to get this kind of data though. Given the timeframe of the project, I could probably realistically only look at a couple of these metrics. I think it could be interesting to compare what different identities would look like, and if there would be any visual relationships between them that could correspond with relationships in real life.
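
One quick way to prototype this would be to hash whatever identity data I end up with into a random seed, so the same person always gets the same symbol. Here is a rough Processing sketch; the identity string and the drawing rules are just placeholders for whichever metrics I settle on.

// Hash an identity string into a seed, then let the seed drive a radial symbol.
void setup() {
  size(300, 300);
  noLoop();
}

void draw() {
  background(255);
  drawIcon("Ada Lovelace 1815-12-10");   // placeholder: name + birthdate
}

void drawIcon(String identity) {
  randomSeed(identity.hashCode());       // same string -> same icon every time
  translate(width / 2, height / 2);
  noFill();
  int spokes = int(random(5, 12));
  for (int ring = 0; ring < 5; ring++) {
    float r = 20 + ring * 25;
    stroke(random(255), random(255), random(255));
    beginShape();
    for (int i = 0; i < spokes; i++) {
      float a = TWO_PI * i / spokes;
      float rr = r + random(-15, 15);
      vertex(cos(a) * rr, sin(a) * rr);
    }
    endShape(CLOSE);
  }
}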

Personas
Aaron Zinman


Personas is a project that creates a portrait of your online identity. I think this is an interesting angle to take on identity. The visualization has a very minimal aesthetic, which I appreciate. I wish you could see how it constructed the visualization once it finishes; as it is building you get a snapshot view, but you can’t view it after that.

 

Another space in which I have seen some beautiful visualization work is visualizing literature, poetry, or some other sort of text document(s). Again, I’m not sure what kind of document(s) I would want to work with, but it is a good opportunity to explore several ways of visualizing the data to get multiple perspectives. Maybe I could look for something in the Internet Archive, or look at visualizing literature in different languages.

Jack Kerouac’s Literary Organism
Stefanie Posavec
http://infosthetics.com/archives/2008/04/literary_organisms_jack_kerouac.html


This project looks at “depicting the literary organisms, rhythm textures, sentence lengths & structures of Jack Kerouac’s literary space.” I think that Stefanie’s visualizations are quite beautiful and inspiring. I love the different ways that she looked at the data.

 

I could also build on some of the code that I worked with in my first project, Color of the News, and explore other ways of creating a color analysis visualization. Below are a couple of examples that I found in this space.

Field Guide to Style and Color
Jason Salavon
http://salavon.com/FieldGuide/FieldGuide01.php


“This piece is a fullsize reproduction of the entire 2007 IKEA catalogue, leaving only color and structure.”

Luscious: Abstract Color Compositions of Advertisements
Martin Wattenberg and Fernanda Viégas

Luscious creates abstract color visualizations of magazine advertisements for luxury brands. It is interesting to make comparisons between the different compositions looking at just the colors.
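
As a concrete starting point for the color-analysis direction, here is a small Processing sketch that reduces an image (say, a screenshot of a news front page; “frontpage.jpg” is a placeholder) to a strip of its average column colors, throwing away structure and keeping only color.

// Average each pixel column of the source image and draw it as a single colored line.
PImage src;

void setup() {
  size(800, 200);
  src = loadImage("frontpage.jpg");
  src.resize(800, 600);
  noLoop();
}

void draw() {
  src.loadPixels();
  for (int x = 0; x < src.width; x++) {
    float r = 0, g = 0, b = 0;
    for (int y = 0; y < src.height; y++) {
      int c = src.pixels[y * src.width + x];
      r += red(c);
      g += green(c);
      b += blue(c);
    }
    int n = src.height;
    stroke(r / n, g / n, b / n);
    line(x, 0, x, height);             // one averaged column per x position
  }
}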

Looking outwards: final project – Mauricio Giraldo

by Mauricio @ 5:45 pm

For my final project I’m considering adding TTF or similar output to my generative project. Hence, I’ve been looking into other generative typography projects:

Generative-Typografie

This project apparently uses Geomerative to produce fonts, but I don’t know any German, so I can’t confirm it.

Generative Typography

 

An analysis of generative design strategies in terms of creating display fonts with vvvv.

This is thesis work by Philipp Steinweber, also in German, so I have no idea what he’s talking about. It doesn’t seem to output font files.
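
For my own attempt, Geomerative seems like the most likely way to get at glyph outlines in Processing before worrying about writing actual font files back out. Here is a minimal sketch, assuming the Geomerative library is installed and a FreeSans.ttf sits in the sketch’s data folder; the calls are from memory, so treat it as a starting point rather than gospel.

// Load a TTF with Geomerative, sample the glyph outlines as points, and jitter them.
import geomerative.*;

RFont font;

void setup() {
  size(800, 300);
  RG.init(this);
  font = new RFont("FreeSans.ttf", 200, CENTER);
}

void draw() {
  background(255);
  translate(width / 2, height / 2 + 70);
  RGroup word = font.toGroup("Aa");
  RPoint[] pts = word.getPoints();
  stroke(0);
  noFill();
  beginShape();
  for (RPoint p : pts) {
    vertex(p.x + random(-2, 2), p.y + random(-2, 2));  // jittered outline
  }
  endShape();
}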

Plagiairc

According to the site:

Plagiairc is a chat software which uses Facebook chat or Google Talk (jabber). It creates private chat with other Facebook/Google talk contact. But the interface will not allow the speaker to express himself through a classic keyboard. Indeed, to create a new sentence, he will have to use the words of others, the specific semantics of public chat room users. He will choose « public » words from a database of 40 000 sentences (English or French) recorded on the Internet Relay Chat (I.R.C.), an Internet text messaging network mainly designed for group communication.

Dane Pieri – Looking Outwards::Final

by dpieri @ 5:12 pm

Rafaël Rozendaal

This guy has a whole bunch of websites that are each an individual art piece. I really like them because they are often really fun and satisfying interactions.

The interactions are so simple and silly that at first it is hard to take them seriously. In the end, though, it is hard to stop yourself from using them because the interactions feel so good. The lessons learned from this art can be used to inform more “serious” interaction design.

Here are some of my favorites:

Color Flip

Le Duchamp

Paper Toilet

And here they all are:
http://www.newrafael.com/websites/

Alex Wolfe | Final Project | Looking Outwards

by Alex Wolfe @ 4:05 pm

Generative Knitting

So there hasn’t been much precedent for this, since contemporary knitting machines are ungodly expensive, and the older ones people have at home, generally the Brother models, are so unwieldy that changing stitches is more of a pain to do this way than by hand. But if I can figure out some way to make it work, I think knitting has ridiculous potential for generative/algorithmic garment making, since it is possible to create intense volume and pattern in one seamless piece of fabric simply through a mathematical description of the stitch pattern. It would be excellent just to be able to “print” these creations on the spot, and to do more than just Fair Isle.

I sent off a couple of emails to hackpgh, but I’ll try to stop by their actual office today or tomorrow and just ask in person if I can borrow/use their machine.

Here’s an example of a pattern generator based on fractals and other mathy things.

How to create knitting patterns that focus purely on texture.

A Perl script for cellular automaton knitting.

Here’s a pretty well-known knitting machine hack for printing out images in Fair Isle. This is awesome, but I was hoping to play more with volume and texture than with color.
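
Riffing on the cellular automaton link above, here is a rough Processing sketch of a chart generator: each row of an elementary CA (rule 90) becomes a row of the knitting chart, with live cells read as purl stitches and dead cells as knit stitches. The stitch mapping is my own guess at how this would translate to the machine.

// Generate a knit/purl chart from a rule-90 cellular automaton.
int cols = 48;
int rows = 64;
int cellSize = 10;
int[] row;

void setup() {
  size(480, 640);
  noStroke();
  noLoop();
  row = new int[cols];
  row[cols / 2] = 1;                 // single seed stitch in the middle
}

void draw() {
  background(255);
  for (int y = 0; y < rows; y++) {
    for (int x = 0; x < cols; x++) {
      fill(row[x] == 1 ? 0 : 230);   // dark = purl, light = knit
      rect(x * cellSize, y * cellSize, cellSize, cellSize);
    }
    row = nextRow(row);
  }
}

int[] nextRow(int[] current) {
  int[] next = new int[current.length];
  for (int x = 0; x < current.length; x++) {
    int left  = current[(x - 1 + current.length) % current.length];
    int right = current[(x + 1) % current.length];
    next[x] = left ^ right;          // rule 90: XOR of the two neighbors
  }
  return next;
}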

Computational Crochet

Sonya Baumel crocheted these crazy gloves based on bacteria placement on the skin.

User Interactive Particles

I also really enjoyed the work we did for the Kinect project, and would be interested in pursuing more complicated user-generated forms. These two short films by FIELD design are particularly lovely.

 

Generative Jewelry

I also would be interested in continuing my work from Project 4. I guess not really continuing, since I want to abandon flocking entirely and focus on getting the snakes, or a different generative system, up and running to create meshes for some more aesthetically pleasing forms. Aside from snakes, I want to look into Voronoi, like the butterflies on the barbarian blog.

 

Looking Outwards – Project 4

by Ward Penney @ 7:12 pm 4 March 2011

When I was in grade school, I had a minor obsession with Pascal’s Triangle.

First 9 rows of Pascal's Triangle

Just to refresh your memory, Pascal’s Triangle is a triangular array of numbers in which each entry is the sum of the two entries above it, so each row generates the next. It contains many patterns and fascinating attributes that would be very useful for a generative art project, such as: binary row sums, number locating, hockey stick patterns, prime occurrences, magic 11’s, polygonal numbers, points on a circle, and others.
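
As a first sketch of how I might start, here is a bit of Processing that builds the triangle row by row (each entry is the sum of the two entries above it) and colors the cells by parity, which also reveals the Sierpinski pattern hiding in the even/odd structure.

// Draw the first rows of Pascal's Triangle, shading even entries light and odd entries dark.
int numRows = 64;

void setup() {
  size(640, 640);
  noStroke();
  noLoop();
}

void draw() {
  background(255);
  float cell = width / float(numRows);
  long[] row = {1};
  for (int r = 0; r < numRows; r++) {
    for (int c = 0; c < row.length; c++) {
      fill(row[c] % 2 == 0 ? 220 : 0);               // even = light, odd = dark
      float x = (width - row.length * cell) / 2 + c * cell;
      rect(x, r * cell, cell, cell);
    }
    // Next row: 1 on each end, interior entries are sums of adjacent pairs.
    long[] next = new long[row.length + 1];
    next[0] = 1;
    next[next.length - 1] = 1;
    for (int c = 1; c < next.length - 1; c++) {
      next[c] = row[c - 1] + row[c];
    }
    row = next;
  }
}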

 

 

 

 

I could use several of these attributes to do some visual effects in the design. Here are a few ideas:

Hockey Stick Patterns

By adding numbers along a diagonal and then turning, the entry just past the bend equals the sum of the diagonal: for example, 1 + 3 + 6 + 10 = 20, and 20 sits in the row just below the 10, at the bend of the “hockey stick.” I could do something where I draw lightning bolts down the hockey stick paths.

Lightning_Bolt

Polygonal Numbers

The occurrence of polygonal numbers could allow me to display 2D, quasi-3D polygons at varying intervals.

 

3D Pyramid

When I was thinking about the triangle, I always wondered if it was possible to extend it into 3D space.

BB Gun

Carnival Star

I also have another idea: re-create a classic carnival game where the user shoots out a paper star with a BB gun and a fixed amount of ammo. I think I can model the star paper the way Igor Barinov did the Open Virtual Curtain, and let it fall apart from the BBs.

 

It looks like I could use the MSAPhysics environment to do the BB gun.
