Looking outwards for final project ideas

by blase @ 7:07 am 3 April 2012

Unearned Intimacies by Michele Iversen

This photography project from a few years ago presents photographs taken from outside people’s houses and other domiciles, capturing moments that seem intimate. The subjects have neither consented to the photos nor are they aware that they’re the subjects.

I think the project is particularly interesting because it violates our expectation of privacy in the home. A person walking along the street would not be surprised to have a candid photo taken of them, even if some people might perceive that as creepy, but the idea that someone could photograph you as you walk past a window in your own home is very unsettling. The project invades privacy to make a point, in the same way I’d like to for my final project.


Showroom Girls by William Popelier

In this project by a Dutch artist, photos that two pre-teen girls took of themselves on a store showroom’s webcam are repurposed by the artist, who then (presumably) used TinEye or another reverse image lookup service to find their Twitter and other social media handles, essentially stalking the two girls. That’s super creepy. This project diverges from my own ideas in that I’d like to violate the privacy of people who have at least come to an art exhibit, rather than unwitting subjects. However, looking people up on the Internet to make a point is the same approach I’d take, albeit automatically.

Life: A User’s Manual by Michelle Teran

This project unmasks the watchers who are using CCTV and other surveillance techniques to track people. Taking advantage of the unencrypted signals many surveillance cameras broadcast right out of the box, this traveling exhibit intercepts and rebroadcasts those feeds, bringing into the light the surveillance that is happening without your realizing it.

Mahvish Nagda – Final Project Brainstorming

by mahvish @ 7:00 am

Project 2 :

So I have continued working on what I had for Project 2 on the side (very, very slowly). The tiling is still a work in progress, but I also started looking at ToxicLibs and playing with it. One of the examples in the library that I started playing with was the Polygrid Tesselate. I combined that with the Voronoi Clipping example to go from 2D to 3D (pictures). I’ve also played with Rhino a little bit (but not Rhino scripting), so the final project could be just completing my initial vision for Project 2. Also, here’s a link to the code (link).

Project 3

The other option for me is to keep working on Project 3. Although I enjoyed my Project 3 ideas, I think adding biofeedback to the loop (for Project 4) and using it in interesting ways might make the interaction more compelling.

Blase’s final project ideas

by blase @ 6:55 am

Idea 1: A lot of my research for grad school has looked at privacy. I’ve thought for a while that it might be interesting to make a privacy-invasive robot. Imagine a robot that started talking to you and said, “Hi, my first name is Roboto, what’s your first name?” “My name is Golan.” “Nice to meet you, Golan. Have you ever been to Robotland?” “No.” “Oh well, that’s where I’m from. Where are you from?” “Pittsburgh.” “Hey, the land of black and gold… bzzzz.” “Did you know that robots are German? My name is Roboto Merkle. What’s your last name?” “Levin.” At this point, the robot would have more than enough information to scrape the web for as much information as it can find about you, and it does so. It also takes a sinister turn and starts presenting your information back to you, e.g., “Golan, I’m happy to meet you. Will you be my friend? I see you already have 4,431 Facebook friends. What’s one more? Oh, and this is a nice picture of you that I just found in my database. Hehehe.” I think this piece would make a point about how much information is available online, ready to be correlated from only a modicum of starting information.

Idea 2: More specifically, a lot of my research this year has focused on online behavioral advertising, the practice of advertisers profiling a user based on the websites he or she visits. There is already a tool, released a month or two ago, that visualizes who’s tracking you: the Collusion add-on for Mozilla Firefox. The idea of Collusion is that websites are connected in a graph to advertisers, letting you see which advertisers know which sites you’ve visited:

However, I think the weakness of this tool is that you can’t actually see what these sites could infer about you, which is the important part. It’s one thing to know that HuffPost tracks you on sites X, Y, and Z, but a tool that pulls out the articles you read and the information contained in them, and lets you know when you’ve crossed academically validated thresholds for losing your anonymity, would show more clearly how non-PII is becoming the new PII.
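One well-known way to formalize those thresholds is k-anonymity: a profile stops being anonymous once fewer than k people match it. As a minimal sketch (the toy population, attribute names, and function names here are all my own, purely for illustration), a tool could check whether the set of attributes trackers have collected still leaves you hidden in a crowd:

```python
def anonymity_set_size(population, profile):
    """Count how many people in the population match every
    attribute of the observed profile (the anonymity set)."""
    return sum(
        all(person.get(key) == value for key, value in profile.items())
        for person in population
    )

def is_k_anonymous(population, profile, k=2):
    """A profile is k-anonymous if at least k people match it."""
    return anonymity_set_size(population, profile) >= k

# Toy population of tracked browsing profiles
people = [
    {"news": "huffpost", "sport": "espn", "zip": "15213"},
    {"news": "huffpost", "sport": "espn", "zip": "15213"},
    {"news": "huffpost", "sport": "golf", "zip": "15213"},
]

# Knowing only the news site, you blend in with three people...
print(is_k_anonymous(people, {"news": "huffpost"}, k=2))                # True
# ...but one more tracked attribute can single you out completely.
print(is_k_anonymous(people, {"sport": "golf", "zip": "15213"}, k=2))  # False
```

The point the sketch makes is the same one the tool would: each individually harmless attribute shrinks the anonymity set, until a combination of non-PII attributes identifies exactly one person.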

Duncan Boehle – Final Project Brainstorming

by duncan @ 2:47 am

Polished Kinect Music Machine


My first idea for a final project would be to extend Project 3’s Kinect beat machine into a much more polished, useful, and creative work.

One of the biggest issues with the first implementation was accuracy and responsiveness. I think this could be remedied by looking into alternative methods for the virtual drumsticks. I originally used OSCeleton to transmit joint data to openFrameworks, so using Synapse as a base to eliminate the inter-process communication delay might help somewhat. This has the added benefit of giving access to the original depth data, which could be used for even more responsive hand tracking. Since the hand joint’s movement is slightly smoothed automatically by OpenNI, its position lags behind the apparent hand location in the depth field. Instead of only using the hand joint’s position, I could use it as a seed for a search for the lowest depth value near the hand, which might track the user’s actual movement much more directly.
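That seeded search could look something like the following sketch (in Python with NumPy rather than openFrameworks, assuming the depth frame is a 2D array and the hand joint has already been projected to pixel coordinates; the function name and window radius are my own assumptions):

```python
import numpy as np

def refine_hand(depth, seed_xy, radius=20):
    """Starting from the (possibly lagging) smoothed hand joint,
    search a small window of the raw depth image for the pixel
    closest to the camera, and treat that as the true hand tip.

    depth   : 2D array of depth values in mm (0 = no reading)
    seed_xy : (x, y) pixel coordinates of the smoothed hand joint
    """
    x, y = seed_xy
    h, w = depth.shape
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    window = depth[y0:y1, x0:x1].astype(float)
    window[window == 0] = np.inf      # ignore pixels with no depth reading
    iy, ix = np.unravel_index(np.argmin(window), window.shape)
    return (x0 + ix, y0 + iy), window[iy, ix]

# Toy frame: background at 2000 mm, one "hand" pixel at 800 mm
frame = np.full((240, 320), 2000, dtype=np.uint16)
frame[120, 160] = 800
(px, py), d = refine_hand(frame, seed_xy=(150, 115))
print(px, py, d)   # the nearest-to-camera pixel near the seed
```

Because the raw depth frame is unsmoothed, the minimum found this way would respond to a fast drum hit a frame or two earlier than the filtered joint.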

I also would want to have a more intuitive instrument layout, with more varied sounds, and have it respond to the strength of the hit, to change the volume or create effects like echoing. I also would want to be able to visualize the individual looping tracks, and give the user the ability to delete or modify them, instead of having them play forever until they are all deleted.

In addition, being able to switch instruments or layouts could also make for much more dynamic music. Creating a virtual keyboard could add a lot more musicality to the purely rhythmic compositions.

This type of project extension could certainly be a month’s worth of work, but it might not be all that satisfying: most of what I’ve proposed would complicate the interface beyond that of a simple toy, yet it still might not be quite useful or creative enough for someone who genuinely wanted to compose a drum beat or song.

 

Binaural Audio World


I was very inspired by two projects in the audio realm: Deep Sea and Audio Space.

Deep Sea uses equipment (a blacked-out gas mask with headphones and a microphone) to make the player feel vulnerable to the unknown sea around them, inducing claustrophobia and a feedback loop of panicked breathing. I was inspired by how it created an emotionally powerful experience without using video at all.

Audio Space was an art exhibit that used head tracking, binaural audio reproduction, and recording to create a space for exploring and creating sounds. Visitors would “paint” the room with sound by speaking into the microphone; each recording was anchored to the 3D position where it was spoken and realistically played back for future visitors. The project lent itself very well to collaborative experiences built from sound alone, and it made me hopeful that binaural audio and head tracking could work well together.

My hope would be to create a narrative for a fictional space that the player must explore using only audio cues. By rotating the head in 3D space and simulating binaural sounds, I would be able to place 3D sound cues in a virtual room in order to craft an experience. I’d like to induce fear, similar to Deep Sea, and successfully combine head-tracking with binaural audio, like in Audio Space.

For the technology, it would probably make the most sense to use a Wii MotionPlus controller to track the head’s orientation with its gyroscope and accelerometer, receiving the data via Bluetooth. I would use Clunk, an open-source library for generating binaural sounds, and use a Kinect with OpenNI to track the general location of the player’s head and body.
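Clunk would handle the real spatialization, but the underlying interaural cues can be sketched to show what head tracking has to feed it. This toy version uses the Woodworth spherical-head approximation for interaural time difference and a simple cosine panning law as a crude stand-in for real level differences; the head radius and all the function names are my own assumptions, not Clunk’s API:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def interaural_cues(source_angle_deg):
    """Approximate interaural time difference (ITD, seconds) and
    crude per-ear linear gains for a source at the given azimuth:
    0 = straight ahead, +90 = fully to the right."""
    a = math.radians(source_angle_deg)
    # Woodworth spherical-head ITD approximation
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))
    # Cosine panning law in place of real ILD filtering
    right_gain = math.cos((math.pi / 4) * (1 - math.sin(a)))
    left_gain = math.cos((math.pi / 4) * (1 + math.sin(a)))
    return itd, left_gain, right_gain

def apply_head_yaw(source_angle_deg, head_yaw_deg):
    """Head tracking just rotates the source into head coordinates
    before computing the cues."""
    return interaural_cues(source_angle_deg - head_yaw_deg)

itd, l, r = interaural_cues(90)   # source fully to the right
print(itd * 1e6, l, r)            # ITD on the order of ~650 microseconds
```

The head-tracking data from the gyroscope only needs to supply `head_yaw_deg` each frame; the virtual sources stay fixed in world space while the cues are recomputed relative to the head.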

My biggest problem is that I don’t already have a narrative in mind for this virtual audio space. I would like to try the tech out to see if it works, and then craft a narrative that seems natural, but that seems like a very backwards approach.

 

Location-Based Game


This is more of a concept than a specific brainstorm example, but I’ve been very intrigued by the prospect of making “social games” genuinely social. Using smartphones with GPS, it would be interesting to create a game that benefits significantly from finding other players in real life and playing in close proximity. Using APIs like Bump, this kind of direct personal interaction could be a great way to break the ice between strangers.

My first rough idea for a game is collaboratively uncovering buried treasure hidden in real-world coordinates. The digging process would be significantly aided by having multiple players in the same area at once, but I’m not sure if the idea is at all feasible.

My other idea revolved around building castles or territory marked in real-world locations, but which idea I would explore depends on whether collaboration works better for this type of game than competition.
The main problem with this idea is that I don’t have a clear picture of the final result, and the end product would have to be very polished to get real people to play it. Unfortunately, a significant number of people would have to play it in order to test the game’s social effects, which makes its success very difficult to measure.

Final Project Proposal and Looking Outwards – Sarah Keeling

by sarah @ 1:00 am

For my final project I’ve had a change of heart: instead of continuing with the augmented projection of suburbia on the floor, I would like to pursue more work with chairs. My plan is to get the front seat of an ordinary car, a swivel office chair, a wooden kitchen chair with a tie-on cushion (this one I have), and a La-Z-Boy recliner. Then, with the use of an Arduino and multiple motors and sensors, I will recreate in these chairs a real-time feed of motions from suburban life. I am thinking of asking friends who have family in suburban neighborhoods near Pittsburgh to help provide the data that will be mimicked.

This is an attempt to sum up the complete suburban experience by visualizing removed actions in a new context. The chairs will be installed side by side, spaced approximately a foot apart. I feel that much of suburban living is done sitting down, most often in some variation of these particular seats.

Some projects I have been looking at for inspiration:

William Lamson does an interesting series of small gestures/actions involving balloons, sculptural elements, and sometimes himself. I always find these really enjoyable to watch because there is a light humor and a sense of suspense/expectation set up at the beginning of each video.

And here is a link to his website where you can thumb through all 33: http://www.williamlamson.com/#/selected_work/actions/video/8

Here are some of my favorites:

11/33
8/33

Zimoun: Volume

I know Golan showed us this one in class, but I came across it again and it made me think of using motors within the final project.

Seiko Mikami's "Desire of Codes"
http://www.creativeapplications.net/events/seiko-mikami-desire-of-codes-ycam/

This piece corresponds more with my original plan to expand on the projected image. Overall, I think it has a lot of very interesting and complex components, and I do like the treatment of video, but I find that the complexity of all these components muddles the piece’s message.

Eli Rosen – Final Project Ideas

by eli @ 11:43 pm 2 April 2012

Some Final Project Ideas:

GO ASK ALICE:

This is an idea inspired by the music video below. It would be an installation in which you eat some berries or pills to modify an on-screen silhouette representation of yourself. Some could make your legs grow; others could make your arms become stretchy and gelatinous…

 

Civil War Visualization Revisited:

This would be another attempt to visualize some data from the Civil War. I would try to capture a narrative more effectively, and would maybe build it in JavaScript and HTML5.

Emotion Generator:

This is a reaction to the fact that we read, watch movies, and play games in order to draw out extreme emotional responses. We seem to like experiencing extreme emotion when its source is artificial and can be turned off at any time. This project would attempt to cut out all the time-consuming narrative and just try to generate the emotional response.

 

The New Gestural Language of the Digital Native:

Some of you have probably seen this video of a baby using an iPad.

That got me thinking that this kid will grow up knowing these gestural commands. What will future gestural commands look like? Will our new vocabulary of gestures be applied to other types of objects besides screens? Here I would examine what it would mean to apply a pinch-and-zoom gesture to a physical object like a hat or a pair of shoes.

Nick Inzucchi – Final Project Ideas + Inspiration

by nick @ 11:22 pm

1. Kinect body/motion visualizer for large-screen projection and performance settings. Take the point cloud literally; make each voxel a physics particle that reacts to the motion of the dancer.

Three things I love about this project: the 360-degree depth map, depth points becoming physical particles, and the sweet industrial soundscape tying it all together.

2. Extend my project 3 into an even more performance-centered system. Bring projections into the mix so the audience can visualize my activity behind the tables on a large scale. Play up the conducting metaphor even more. Push and pull musical elements into or out of the mix and onto the projection screen, brainstorm creative ways of combining spatial actions and visualized audio.

3. Something with a glitch art aesthetic. I recently visited the American Art Museum in DC and saw some images from Thomas Ruff’s jpegs series. I love how these images bring the medium into relief, and I think it would be cool to do something similar for computer art. Ruff’s art works because we have a common language of pixels. What is the common language of interactive art?

http://chicagoartmagazine.com/2011/09/an-unknown-error-has-occurred-new-media-and-glitch-art/

4. Binaural soundscape explorer. Strap on headphones and be completely transported to another audio realm. Seriously dark shit, ambient industrial factory horror set.

Billy Keyes – Final Project Ideas

by Billy @ 10:20 pm

Algorithmic Playscapes

Building on my Box Fort project, I’d like to do more with genetic or fractal/L-system generation of architecture. I’m still interested in play and I think playground structures are a good place to explore the depth of these techniques without all the complexities of buildings. Plus, many of the examples of computational architecture I’ve seen kind of resemble jungle gyms already.
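The generative core of the L-system approach is just parallel string rewriting, which is compact enough to sketch. Here is a minimal version in Python (the rule set is a toy of my own, not a real playground grammar; interpreting the symbols as climbing-structure geometry would be the actual project):

```python
def expand(axiom, rules, generations):
    """Rewrite every symbol in parallel each generation,
    which is the core operation of an L-system."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Toy interpretation: F = "build a bar", [ ] = start/end a branch,
# + / - = turn. Each generation forks every bar into a branching form.
rules = {"F": "F[+F]F[-F]"}
print(expand("F", rules, 1))          # F[+F]F[-F]
print(len(expand("F", rules, 3)))     # strings grow fast with depth
```

A genetic approach would then treat the rule strings themselves as the genome, mutating and recombining them and scoring the resulting structures for climbability.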

Primarily inspired by the work of Michael Hansmeyer and the real playground featured on Playscapes.

I’m thinking of building climbing and seating structures out of hills or other terrain, but I’d be interested in other ideas, particularly ones that deal with light and shadow.

Audiovisual Accompanist

I really like music, but I haven’t done any projects with sound yet. This project would probably be a mashup of a visualization system and an algorithmic composition system. I don’t have many ideas about how to build on existing work in either domain, but I like the idea of standing in front of a screen and speakers and playing my trumpet or singing along to a dynamic soundtrack that changes in response. Hopefully the result would be listenable, and the visuals would be pretty and somewhat informative to the user. I’m not sure if there’s any meaning in this system, but I think the final experience would be enjoyable.

Primarily inspired by the music of Tristan Perich and the auto-accompanist systems developed by Roger Dannenberg (he showed videos in his class, but I can’t find them online).

The composition system and the audio input are the most important parts; the visuals are secondary. I hope this would transcend my previous attempts at algorithmic composition to produce something interesting and enjoyable, instead of merely tolerable.

Other Ideas

I have several other not-very-developed ideas, which I’ll list here and maybe expand on later.

  • Internet Control Panel – ask people on Mechanical Turk to draw components from a control panel (switches, buttons, gauges, levers, etc.) and then write a program to assemble them all into a huge “Internet Control Panel”
  • Light-responsive architecture, again. Maybe interactive? The user can sculpt a building using light?
  • I really like the sketches of Frank Lloyd Wright; they have a very interesting style and make me think of dividing a scene into layers that can be pulled apart in space. I have no idea what kind of project this would lead to.
  • Projection mapping some of the statues in front of the CFA so that they whisper and gossip about other statues when people sit in the alcoves.

Sam Lavery – Final Project – Ideas

by sam @ 10:02 pm

Revisit mapping wifi

I would like the opportunity to revisit my data visualization project (an interactive map of wifi names). With a month to work further on this project I could improve the aesthetics and performance, possibly making the visualization available in an online format. It would also be nice to gather more data for the project and work on different ways of sorting the data. I could use this as an opportunity to work in openFrameworks.

screenshot from my project 2

Augmented Reality Homestead Steel Works

For my BXA capstone project I am creating an interactive historic interpretation of the Homestead Steel Works and Carrie Furnace using webpage-linked QR codes. I had been hoping to create a mobile augmented reality experience, overlaying 3D models of the former industrial buildings onto the current site, but the technology is too difficult for me and not very robust. If I moved to a laptop-based system using openFrameworks or PointCloud, I think I could produce something that resembles my original plan. I have already produced many of the 3D models, and I would want to use markerless augmented reality because the scale is so large (the entire waterfront shopping center). This would be a good opportunity to learn a lot about computer vision and augmented reality, two things I am very interested in.

[youtube http://www.youtube.com/watch?v=wvSPzG7AqLY]

something like that but with steel mills!

Physical Data Visualization

I have been thinking about physical visualizations of digital data and how this approach could be really novel and fun in a world full of 2D displays. My plan for this project is to use real-time bus schedule data to drive a physical display reminiscent of the horse-racing games you often see at carnivals. The mechanical buses would display their route and ETA and would all inch forward along a track, mimicking the actual movements of the buses in real life.
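The feed format would depend on whatever real-time API the transit agency exposes, but the mapping from live data to the physical track is simple. A sketch, with an entirely made-up feed structure and function names, of converting each bus’s progress along its route into a stepper-motor position on a fixed-length track:

```python
def progress_to_steps(distance_along_route_m, route_length_m, track_steps=2000):
    """Map a bus's position along its real route onto a stepper
    motor position along the physical track, clamped to the ends."""
    frac = max(0.0, min(1.0, distance_along_route_m / route_length_m))
    return round(frac * track_steps)

def update_display(buses, route_length_m, track_steps=2000):
    """buses: list of dicts from a (hypothetical) real-time feed,
    e.g. {"route": "61C", "eta_min": 4, "dist_m": 5200.0}.
    Returns the target step count for each mechanical bus."""
    return {
        bus["route"]: progress_to_steps(bus["dist_m"], route_length_m, track_steps)
        for bus in buses
    }

feed = [
    {"route": "61C", "eta_min": 4, "dist_m": 5200.0},
    {"route": "71A", "eta_min": 12, "dist_m": 1300.0},
]
print(update_display(feed, route_length_m=10400.0))
# {'61C': 1000, '71A': 250}
```

The polling loop would then just send each target step count to its motor driver over serial; buses only ever move forward, so each update is a short relative move.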

 

Final Project: My Life is like a Video Game cont…

by jonathan @ 9:49 pm

[vimeo https://vimeo.com/25261181]

Looks like I wasn’t the first one with this idea. There are a couple things I really like about this particular project: firstly the helmet is pretty darn nifty. The planar form allows for the user to easily be able to define where they are in the space while looking equally ridiculous and futuristic. Secondly, the pairing with an Arduino to physically be able to rotate through the space is a nice extension of interaction, though I had wished that he had incorporated more features like zooming or panning. In any case, I fee like my particular project is an extension of this one. I aim to truly be able to manipulate the projected environment instead of merely representing it like in this video.

[vimeo https://vimeo.com/25852368]

I want to know how they were able to use the iPad accelerometer so accurately to move around the image, unless they matched the image to some other marker.

[vimeo https://vimeo.com/24775861]

Hmm, I wonder if there is a way to incorporate AR markers into my project. Perhaps placing a marker on the helmet to switch heads would be interesting?

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
(c) 2024 Interactive Art and Computational Design, Spring 2012 | powered by WordPress with Barecity