Project 3 Proposal: Imma Designa

by jonathan @ 3:53 pm 22 February 2012

For Project 3 I initially wanted to play around with the idea of growing three-dimensional forms and 3D printing them, for example growing a bike helmet defined by my head shape and optimal impact resistance and air flow. However, this was clearly beyond my programming abilities, and I actually wanted to finish a project this time. Yet I still wanted to try to take what is on my computer into the real world in some capacity (hence the ‘laser’ idea).

Now I am planning on heading down a different route. I have always been intrigued by the idea of applying the basic rules of graphic design in a program that poops out ‘cool’ design-y posters at the click of a button, promptly sharing it across your social networks with the tag “I’m a fucking designer”, because after all that’s what designers do, don’t they?

I’m thinking I’m going to fool around with toxiclibs in Processing to randomly generate a faceted grid, write an Adobe Illustrator script to design the poster and re-input it into Processing to disseminate across my social networks. I’m going to experiment with more ways of interaction, but this is it for now.
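One way to think about the faceted-grid step (here as a standalone Python sketch rather than Processing/toxiclibs, with all parameter names made up): jitter a regular lattice of points, then split each cell along a random diagonal into two triangular facets.

```python
import random

def faceted_grid(cols, rows, cell=100.0, jitter=0.3, seed=1):
    """Jitter a regular lattice, then split each cell into two triangular facets."""
    rng = random.Random(seed)
    pts = [[(c * cell + rng.uniform(-jitter, jitter) * cell,
             r * cell + rng.uniform(-jitter, jitter) * cell)
            for c in range(cols + 1)] for r in range(rows + 1)]
    tris = []
    for r in range(rows):
        for c in range(cols):
            a, b = pts[r][c], pts[r][c + 1]
            d, e = pts[r + 1][c], pts[r + 1][c + 1]
            # randomly choose which diagonal splits the quad
            if rng.random() < 0.5:
                tris += [(a, b, e), (a, e, d)]
            else:
                tris += [(a, b, d), (b, e, d)]
    return tris

facets = faceted_grid(4, 3)  # 4x3 cells -> 24 triangles
```

Since the seed is fixed, the same grid can be regenerated later, which would make it easy to hand the same facets to an Illustrator script.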

Luci Laffitte- Project 3- Proposal

by luci @ 12:15 pm


ADVENTURE GENERATOR!

For my ‘generative’ project I want to create a modern-day text adventure. I have a lot of ideas for what this could be, and my main worry is getting as many of them prototyped as possible.

Implementation: For now I have started to get my feet wet by creating a simulation of what I am imagining in Processing. If I do decide to use physical locations I will look into resources such as OpenPaths and Unfolding. Ultimately, I think it could be best implemented in Xcode using an iPhone simulator, but I’m not sure if I could do that in time for this project deadline (maybe that could become my final project?).
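The core of a text adventure is just a graph of rooms and a command loop; a minimal Python sketch of that skeleton (room names and descriptions are placeholders, not from the actual project):

```python
# Minimal text-adventure engine: rooms, exits, and a scripted list of commands.
ROOMS = {
    "street": {"desc": "You are on a rainy street. A cafe glows to the north.",
               "exits": {"north": "cafe"}},
    "cafe":   {"desc": "Inside the cafe, a phone is ringing.",
               "exits": {"south": "street"}},
}

def play(commands, start="street"):
    """Run a list of commands; return the transcript of descriptions."""
    here, log = start, [ROOMS[start]["desc"]]
    for cmd in commands:
        if cmd in ROOMS[here]["exits"]:
            here = ROOMS[here]["exits"][cmd]
            log.append(ROOMS[here]["desc"])
        else:
            log.append("You can't go that way.")
    return log

for line in play(["north", "east", "south"]):
    print(line)
```

A location-aware version would swap the scripted commands for GPS fixes, but the room-graph structure stays the same.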


Some things I have been considering:

LOCATION

I would love to create a location-aware adventure that propels users to discover new places. If it could be used from an iPhone, or be a text-based service, the location of a user could be relayed back to the adventure and the story could respond to their actual surroundings. **You have arrived @ ______. Now…**


SOUND

I would love for there to be sound included in the process. I recently played a bunch of different kinds of online text adventure games (“research”) and enjoyed the games with sounds that enhanced the story (especially the horror adventures), although some got annoying: you often have to pass through the same room many times, and the sounds can start to feel like a broken record.


EFFECTIVE TEXT & IMAGINATION

A key aspect of text adventures is descriptive text that allows players to jump into a fantasy world. I am aware of this critical point, but am unsure how it will fit with my desire to use physical location. Will this really enhance the adventure, or detract from the player’s ability to get into the story?


INTERESTING INTERACTIONS

I have been brainstorming interesting interactions that I could use in my game, such as hearing a phone ring and choosing to pick it up and listen to a message, or needing to run at a certain speed to get away from an “enemy”…

Duncan Boehle – Project 3 Proposal

by duncan @ 5:12 am

For my generation project, I plan to create a simulation for interactively growing, manipulating, and destroying plant-like organisms.

Throughout my gaming history, I’ve played countless games based around the element tetrad – the balance between air, water, earth, and fire. However, what I haven’t seen is an organic, emergent simulation of these elements and how they react with each other in a way that still affects the game. The work of Ron Fedkiw and other graphics researchers has been very inspiring, and I could learn from some of their techniques for combining the mesh and fluid simulations that I’ve already programmed. Here’s one paper in particular that’s relevant, along with a couple of videos.

[scribd id=82414078 key=key-2ksb57goml7o9moscuee mode=list height=100px]

Those exact techniques seem a bit too advanced for the scope of this version of the project, unfortunately. As a first stab at tackling this simulation, I want to just try to stick to plant life. Besides the art in the games from my recent Looking Outwards post, I was also inspired by the mathematics of plant growth taught in a video series by Vi Hart:

[youtube=http://www.youtube.com/watch?v=ahXIMUkSXX0&w=600]
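The plant mathematics in those videos centers on phyllotaxis: new growth appears at the golden angle (about 137.5°) from the previous one, which is what produces sunflower-style spirals. A small Python sketch of Vogel's classic model of that placement (the constant `c` just scales the spiral):

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~137.5 degrees, derived from the golden ratio

def phyllotaxis(n, c=4.0):
    """Vogel's model: point k sits at angle k * 137.5deg and radius c * sqrt(k)."""
    pts = []
    for k in range(n):
        theta = k * GOLDEN_ANGLE
        r = c * math.sqrt(k)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = phyllotaxis(200)  # drawing these as dots reproduces the sunflower pattern
```

The same angle could drive where new branches or leaves sprout in an interactive growth simulation.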


The ideal interface that I’m imagining for this project would be to use a Kinect sensor and have the player’s hands directly guide the growth of the plants. At first, I was hoping to play with the duality between earth and fire, and perhaps one hand could be used to grow plants, while the other would manipulate fire, but I’m not sure this is either feasible or conceptually cohesive. I think it would make more sense to have only the ability to generate new life and accelerate the death of old life, and experiment more with the phenomenon of aging and life cycles rather than just outright destruction. Perhaps I could add more elements later; for example, to see how water can promote growth, drown life, or douse fire, which can bring destruction to plants but cannot be rejuvenated.

In order to save time for polishing aesthetics and to make the interface more accessible, I plan to make everything in two dimensions, so I wouldn’t use something like Unity for this project. Either OpenFrameworks or Cinder seems appropriate for this project; OFX already has some decent Box2D support along with Kinect support, and Cinder seems to link up nicely with existing C++ libraries. But since I’ve never used them before, I’m very tempted to stick with what I know and use something like XNA with Microsoft’s Kinect SDK. Theoretically I could use Processing, since 2D drawing is dead-simple and it has plenty of Kinect and physics support, and it’s nice to be able to share things online. But if I ever wanted to extend the demo with more elements, any grid-based fluid simulation or advanced GPU rendering wouldn’t be possible.

Nir Rachmel | Project 3 + Queue Simulation (Proposal)

by nir @ 2:01 am

Standing in line – What if.. ?

Following the lecture we had on Feb 14th, I had in mind one of the flocking algorithms that simulates a crowd entering through a small crack in a wall. It inspired me to think about simulating a crowd standing in line at a ticket booth, or even better, at the grocery store.

Here’s the thing. Each time you get to the checkout, you choose which line to stand in. My simulation will emulate several lines, and the user can select the one they think will get them out fastest. The simulation will then run all the lines in parallel, and the user can see their chosen line as well as all the alternatives they didn’t choose.

Each person in line will be represented by a colorful circle and will have several properties, such as the number of items they are carrying and the “complexity” of each item, which affects the time it takes to process that item at the register (fruit, for example).

I want to add some more parameters that will make the simulation interesting and resemble real life, such as:

1. A random event that delays the line (a product without a barcode / price, or the need for manager approval to correct a mistake)
2. A customer asks another customer to bypass him in line (in case the customer is late, for example).
3. A customer forgot to pick up an item, and thus loses his spot.
4. The cashier’s speed of processing will also vary.
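A rough Python sketch of how those parameters could combine into one simulation (the probabilities, speeds, and the 60-second delay are invented placeholders, not tuned values):

```python
import random

def checkout_time(items, cashier_speed, rng):
    """Total seconds to process one customer's basket at a register."""
    t = 0.0
    for complexity in items:           # e.g. produce that must be weighed is more 'complex'
        t += complexity / cashier_speed
        if rng.random() < 0.02:        # rare random event: missing barcode, manager approval
            t += 60.0
    return t

def simulate_lines(lines, seed=7):
    """lines: one list per queue; each customer is a list of item complexities.
    Returns the total elapsed time for each line, run in parallel."""
    rng = random.Random(seed)
    totals = []
    for queue in lines:
        speed = rng.uniform(0.8, 1.2)  # each cashier works at a different pace
        totals.append(sum(checkout_time(c, speed, rng) for c in queue))
    return totals

lines = [[[1, 2, 1]], [[3, 3], [1]], [[2]]]
times = simulate_lines(lines)
fastest = min(range(len(lines)), key=lambda i: times[i])  # the line the user should have picked
```

Scoring the game is then just comparing the user's chosen index against `fastest`.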

A big timer on the screen will show the user the elapsed time, and this whole thing can be thought of as a game, where the user does their best to always choose the fastest line and earns a score according to their choices. The game aspect of this simulation needs further thought.

As for libraries, I still need to explore more and decide exactly how the app will look. In the meantime, here are some sketches:

Joe Medwid – Project 3 Proposal

by Joe @ 10:12 pm 21 February 2012

Project Proposal – EVOLUTION!


pokemans

Although not quite as grand as a computer-generated cityscape or as geometrically bizarre as an algorithmically-modified Doric column, the various creatures, monsters and critters we looked at in our exploration really resonated with me. It’s impossible to mention “genetic algorithms” or “evolving forms” without my mind immediately jumping to those little imps that have permeated popular culture over the last 15 years: Pokemon.

Thumbs

When viewing the generative Nokia blobs, though, a second childhood memory surfaced. It’s a common practice in the artistic community to do a number of quick thumbnail silhouettes in an attempt to get as many ideas on the page as quickly as possible before choosing which to develop. Although it would be bordering on sacrilege to take this extremely loose creative process and relegate it to the cold mechanical guts of a machine, there are some interesting considerations within the realm of evolution, inheritance, genetic algorithms and morphology.

evolution

During my Looking Outwards, I stumbled upon these little guys – creatures generated by randomly combining body parts, each with associated attributes. Tentacles, for example, make a creature more aggressive, while a larger body makes it more durable but sluggish. They can then perform an approximation of a battle, pursuing each other according to their morphological programming.

What I’m proposing is, ideally, some combination of the three preceding images. A program that can, at the very least, make a creature either out of a predetermined kit of parts or through abstract geometry. The next step would be to enable that creature to “evolve,” enhancing various features of its physiology, like a Pokemon. Time permitting, there would also be some sort of genetic inheritance tied to data associated with their ultimate morphology. I’m really hoping to at least get that first one done!
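The kit-of-parts idea can be sketched very compactly in Python (the part catalogue and attribute numbers below are entirely made up for illustration):

```python
import random

# Hypothetical part catalogue: each part shifts attack/durability/speed.
PARTS = {
    "tentacle": {"attack": 3, "durability": 0, "speed": 1},
    "big_body": {"attack": 0, "durability": 4, "speed": -2},
    "wings":    {"attack": 1, "durability": -1, "speed": 3},
    "shell":    {"attack": -1, "durability": 3, "speed": -1},
}

def stats(creature):
    """A creature is a list of part names; its stats are the summed attributes."""
    totals = {"attack": 0, "durability": 0, "speed": 0}
    for part in creature:
        for k, v in PARTS[part].items():
            totals[k] += v
    return totals

def mutate(creature, rng):
    """One 'evolution' step: swap a random part for another from the kit."""
    child = list(creature)
    child[rng.randrange(len(child))] = rng.choice(list(PARTS))
    return child

creature = ["tentacle", "big_body", "wings"]
print(stats(creature))  # {'attack': 4, 'durability': 3, 'speed': 2}
```

A battle would then just pit two creatures' summed stats against each other, and inheritance would mean crossing two part lists instead of mutating one.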

To this end, here are a few of the resources I’ve scoped out…
Genetic Algorithm Overview
Genetic Algorithm Java Library
Geomerative, a basic geometry library for Processing
Toxiclibs, specifically Toxi.geom
Aaaand some more discussion of genetic modeling.

Project 3: Generative designs for tables and chairs and hybrid combinations of the two

by sarah @ 10:01 pm

My idea for this project comes from a desire to do some woodworking. I want to “cross-breed” prominent modern chair and table designs to create a new hybrid/mutated design, which I would then like to build at a later date. I am planning on using the interactive-selection variation of the genetic algorithm from Dan Shiffman’s book in order to manually determine the “fitness” of the designs by interest.
A problem I am considering is whether or not these designs should be modeled in 3D for the final product. The source imagery I am pulling from is 2D, and I’m trying to weigh the options of what’s possible in the time given. Ideally I would really like to have 3D designs to genetically evolve and mutate.
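In interactive selection, the user's clicks replace the fitness function; all the program needs is crossover and mutation over a design genotype. A tiny Python sketch of that (the furniture parameters are invented placeholders, and Shiffman's own example differs in detail):

```python
import random

# Hypothetical genotype: a few numeric parameters describing a chair/table form.
# Fitness is never computed here; the user assigns it by picking favorites.
GENES = ["seat_height", "top_width", "leg_angle", "leg_count"]

def crossover(a, b, rng):
    """Uniform crossover: pick each parameter from one parent at random."""
    return {g: (a if rng.random() < 0.5 else b)[g] for g in GENES}

def mutate(design, rng, rate=0.1):
    """Nudge each parameter occasionally so new hybrids keep appearing."""
    out = dict(design)
    for g in GENES:
        if rng.random() < rate:
            out[g] = out[g] * rng.uniform(0.8, 1.2)
    return out

rng = random.Random(0)
chair = {"seat_height": 45, "top_width": 40, "leg_angle": 5, "leg_count": 4}
table = {"seat_height": 75, "top_width": 120, "leg_angle": 0, "leg_count": 4}
hybrid = mutate(crossover(chair, table, rng), rng)
```

Each generation, the designs the user clicks on become the parents for the next batch of hybrids.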

Code example from Dan Shiffman’s book, The Nature of Code, Chapter 9, page 36.

Some images I am planning to use are

Duncan Boehle – Looking Outwards 3

by duncan @ 8:45 pm

Bohm

Bohm is an experimental game based around interactively growing a tree. Unfortunately the game is still in development, so this teaser video is the only way to see it in action:

[vimeo=16065687 width=550px]

Similar to Cloud and Flower, this game tries to break away from traditional game conventions in order to create a primarily relaxing experience, rather than one fueled by adrenaline or addiction. According to the developers, the player interacts with the tree while it grows, influencing the paths of its branches and overall shape, while adaptive music plays in the background. As of now, they’ve already achieved some form of generative artwork – they have a tree with virtual growth in a beautiful dream-like world, but the key to their success is putting the player in a state of meditation. The developers describe the gameplay – slowly manipulating the branches, creating new ones – as if it were an artform, like sculpting a bonsai tree. Empowering players with creativity while freeing them from competition and risk should make for a very immersive and unique experience.



PixelJunk Eden

PixelJunk Eden is a game where players explore an alien-like garden full of growing plants and flowing creatures to collect glowing items. The game has a very minimal aesthetic – it has distinct palettes in each garden, and makes good use of blur and glow to create a dream-like ambience. The art director, Baiyon, also created a dynamic techno soundscape that responds to the player’s progress in the garden. You can look at the official trailer on Steam to hear him describe the inspiration for the game.

PixelJunk Eden Screenshot 1

Baiyon’s clear artistic vision is evident from the tight coupling of the music and visuals, and they work together to create a fascinating experience. But I’m not convinced that the initial mechanics are successfully translated into a good game. The control scheme was ported from the PS3, but the platforming controls just don’t seem as responsive on the PC as they should be, especially when the game is about exploration. The gravity-bound avatar and collectible orbs also seem to distract from the experience of just watching the environment grow and animate; it’s as if they’ve already achieved half of what Bohm was trying, but tacked on a game to try to make it more accessible.

PixelJunk Eden Screenshot 2


Sam Lavery – Project 3 Proposal

by sam @ 8:16 pm

For this project I really want to produce something that is both beautiful and interesting. My interest in urban planning and design has exposed me a little to the fairly novel field of parametric urban design. I’m definitely not sold on the idea that a computer (or a person for that matter) can centrally plan a successful city, but the technology available creates some interesting opportunities for experimentation.

My current plan is to use ESRI CityEngine to model several different versions of Pittsburgh, changing the appearance of the city by applying rules from the most famous and infamous urban design theories. I imagine there will be a dense, low-rise, small-block à-la-Jane-Jacobs city; a city composed of superblocks, towering modern buildings, and vast expanses of grass and parking lots; and perhaps some kind of futuristic or alien-looking city.

I have a shapefile of Pittsburgh’s topography that I will use as a base for my 3D models. From there I will write logic that will dictate how the streets are laid out and how the resulting lots are filled. Unfortunately, CityEngine is a VERY expensive program and the trial version won’t export the model to any file type that I could use to make nice renderings. Hopefully I can find someone with a full version of the program or some other method…

VarvaraToulkeridou – Generate – proposal

by varvara @ 6:52 pm

In this project, I would like to experiment with form generation via a Braitenberg vehicles simulation.

The concept of Braitenberg vehicles was developed by the neuroanatomist Valentino Braitenberg in his book “Vehicles: Experiments in Synthetic Psychology” (full reference: Braitenberg, Valentino. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA. 1984).

What excites me about this concept is how simple behaviors on the micro-level can result in the emergence of more complex behaviors on the macro-level.

—————————————————————————————————————————————

Below is some precedent generative artwork using the concept of Braitenberg vehicles:
Reas, Tissue Software, 2002
In Vehicles, Braitenberg defines a series of 13 conceptual constructions by gradually building more complex behavior with the addition of more machinery. In the Tissue software, Reas uses machines analogous to Braitenberg’s Vehicle 4. Each machine has two software sensors to detect stimuli in the environment and two software actuators to move; the relationships between the sensors and actuators determine the specific behavior for each machine.
Each line represents the path of each machine’s movement as it responds to stimuli in its environment. People interact with the software by positioning the stimuli on the screen. Through exploring different positions of the stimuli, an understanding of the total system emerges from the subtle relations between the simple input and the resulting fluid visual output.


Yanni Loukissas, Shadow constructors, 2004
In this project, Braitenberg vehicles move over a 2D image map, collecting information about light and dark spots (brightness levels). This information is used to construct forms in 3D, either trails or surfaces.
What I find interesting about this project is that information from the 3D form is projected back onto the source image map. For example, the constructed surfaces cast shadows on the image map. This results in a feedback loop which augments the behavior of the vehicles.


—————————————————————————————————————————————


I would like to implement a Braitenberg vehicles simulation where the vehicles move in 3D space and their positions correspond to the control vertices of a surface. This way, while moving in space and interacting with the various stimuli, the vehicles will generate surfaces. I expect that by linking together groups of vehicles, each group having a different set of behaviors, different surfaces in space will be generated. I have not decided yet what the stimulus will be; however, I will try to have the evolving surfaces contribute to the stimulus so that I can have a feedback loop that augments the behavior of the vehicles. I am thinking of constraining how far away each vehicle can move from the rest of its group by linking them with springs.
As far as libraries are concerned, I will start with toxiclibs for the geometry and PeasyCam to navigate in space.
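For reference, the sensor-to-motor wiring behind these vehicles is only a few lines. A 2D Python sketch of Braitenberg's “aggressor” (Vehicle 2b), where each sensor drives the wheel on the opposite side so the vehicle turns toward a light; the sensor offset and gain are invented constants, and the 3D/control-vertex version would extend the same update:

```python
import math

def sense(sx, sy, lx, ly):
    """Light intensity at a sensor, falling off with squared distance."""
    return 1.0 / (1.0 + (sx - lx) ** 2 + (sy - ly) ** 2)

def step(x, y, heading, light, speed=1.0, dt=0.5):
    """One update of Braitenberg's 'aggressor' (Vehicle 2b): crossed wiring
    makes the vehicle steer toward the side with the stronger stimulus."""
    lx, ly = light
    off = 0.3  # sensors sit slightly to the left/right of the nose
    left = sense(x + math.cos(heading + off), y + math.sin(heading + off), lx, ly)
    right = sense(x + math.cos(heading - off), y + math.sin(heading - off), lx, ly)
    heading += (left - right) * 2.0 * dt  # stronger left sensor -> turn left
    x += math.cos(heading) * speed * dt
    y += math.sin(heading) * speed * dt
    return x, y, heading

# A vehicle at (5, 0) facing "north", with a light at the origin to its left
x, y, h = 5.0, 0.0, math.pi / 2
x, y, h = step(x, y, h, (0.0, 0.0))
```

Swapping the sign of the steering term turns the aggressor into Braitenberg's “coward,” which flees the stimulus instead.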


John Brieger – Project 3 Update

by John Brieger @ 3:57 pm

Since for Project 3 we need to come up with a way to generate form, I began my concepting by looking at how I generate form myself. I do a lot of woodwork and a lot of cooking, so I started looking at how I would be able to generate form in those contexts. Without doing some complicated Rhino scripting to use a CNC router, algorithmic woodworking seemed like a no-go in our timeframe, so I focused on food. Initially I wanted to build some sort of robotic cooking tool, but again: time issues. Golan encouraged me to do something with Markov chain text synthesis and recipes, so I began to look at ways to generate recipes algorithmically.

The difficult part of recipe generation is that the association caused by ingredient lists and titles REALLY messes up Markov synthesis. I started with a Belgian folk recipe book from Project Gutenberg and edited out the intro and outro, then wrote a quick script to strip out the titles. Running that through the Markov synthesizer gave me a very unique recipe for a cod stew with raspberries, so I knew I was on the right track.
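For readers who haven't seen it, word-level Markov synthesis is tiny: map each n-gram to the words that follow it in the corpus, then random-walk the map. A minimal Python sketch (the two-sentence corpus here is invented; the actual project uses full cookbook plaintexts):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each n-gram of words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, seed=0):
    """Random-walk the chain to synthesize new text in the corpus's style."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        nxt = chain.get(state)
        if not nxt:
            break
        out.append(rng.choice(nxt))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = ("Simmer the cod in butter. Simmer the raspberries with sugar. "
          "Beat the butter with sugar until light.")
print(generate(build_chain(corpus), length=10))
```

Because “Simmer the” is followed by both “cod” and “raspberries” in the corpus, the walk can splice the two sentences together, which is exactly how a cod stew acquires raspberries.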

I’ve decided for this project to create a cookbook of 20 or so algorithmically generated recipes, tentatively entitled “Edible Algorithms”. I felt that it really wasn’t enough to just generate the text of the recipes (the work for which essentially involves editing a plaintext and running it through a very simple algorithm). To really get at the heart of the project, you have to cook them.

This weekend, I’ll be cooking 8-10 recipes I’ve generated and documenting with photos (which will of course be in the cookbook). I’m finishing my plaintext editing today, and hopefully should have all my recipes generated by tonight. Then, I’ll have to reverse engineer a list of ingredients out of the recipes, and go shopping. I’m also still working on a way to generate titles for each recipe (I’m thinking a wordcount frequency of “Most Common Adjective + Most Common Noun” for each recipe).

I’m pretty excited (and you should be terrified given that I’m probably bringing in some food Thursday).

-John

A note about my plaintext:
I pirated some scans of “Mastering the Art of French Cooking”, “Joy of Cooking”, and “The Silver Palate”, which I consider to be seminal works in American cuisine. I ran them through OCR, and have been editing them by writing some simple regex-based Perl scripts to strip out things like page numbers, recipe titles, chapter names, etc.

Sample Recipe I generated last night:
Beat a tablespoon of sugar is whipped into them near the end of which time the meat should be 3 to 3 1/2 FILET STEAKS 297 inches in diameter and buttered on one side of the bird from the neck to the tail, to expose the tender, moist flesh. Gradually make the cut shallower w1til you come up to the rim all around. Set in middle level of pre­heated oven. Turn heat down to 375· Do not open oven door for 20 minutes. Drain thoroughly, and proceed with the recipe. Blanquette d Agneau Delicious lamburgers may be made like beef stew, and sug­gestions are listed after it. Savarin Chantilly Savarin with Whipped Cream The preceding savarin is a model for other stews. You may, for instance, omit the green beans, peas, Brussels sprouts, baked to­matoes, or a garniture of sauteed mushrooms, braised onions, and carrots, or with buttered green peas and beans into the boiling salted water. Bring the water to the thread stage 230 degrees. Measure out all the sauteing fat. Pour the sauce over the steaks and serve. rated, washed, drained, and dried A shallow roasting pan con­taining a rack Preheat oven to 400 degrees. Spread the frangipane in the pastry shell. Arrange a design of wedges to fit the bottom of the pan, forming an omelette shape. A simpleminded but perfect way to master the movement is to practice outdoors with half

UPDATE TO MY UPDATE: Started cooking this first recipe. Since I can’t make a 24 foot steak, I decided I would take a bit of creative license and use 2.97in medallions instead.

Photo 1: “3 to 3 1/2 FILET STEAKS 2.97in in diameter” with some meat typography from trimming “the cut shallower until you come up to the rim all around”

Photo 2: Completed dish (in a frangipane bed, with onions, carrots, peas, and Brussels sprouts, garnished with sautéed mushrooms).

It terrifies me that this looks good. As for taste, the frangipane sauce was actually delicious with the meat (which cooked to a medium-rare at 20 minutes and 375 degrees). It did NOT mesh so well with the vegetables, which were less than impressive.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2024 Interactive Art and Computational Design, Spring 2012 | powered by WordPress with Barecity