cell phone intervention

by Mishugana @ 5:53 pm 29 March 2010


Some explanation / stream of consciousness. This needs work, but I really wanted to post something before tonight, and I'm running out of time to do that. I will fix up the rest of this on Wednesday night and make it clearer and much better written. (For now, though, I feel that something is better than nothing.) So here goes:

Some very quick writing I did during class:

This is a three-part experiment/digital intervention that aims to examine the relationship that our cell phones have with communication. Cell phones are designed to enhance communication and give the user power, but they often distract the user from real-world communication, and this interruption isn't helpful: even counting the added virtual communication, the whole doesn't equal the sum of its parts.

Step one is to make people aware of the situation without affecting it. If people would only observe their own environment, there is a chance they would be enlightened; by showcasing the existing world that people live in, people will become more aware of the interruptions that happen on a day-to-day basis. By connecting GSM-activated LED keychains to a light siren and having it sit in a classroom, people will become more aware of the frequency of virtual interruptions. Even vibrating/silent calls and texts will set off the siren, so this is like the opposite of the mosquito ringtone.
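As a sketch of how the siren trigger might work: the snippet below assumes a Raspberry Pi running the gpiozero library, with the keychain's LED output sensed through a phototransistor on GPIO 17 and a relay for the siren on GPIO 27. All pin numbers and wiring are hypothetical, not part of the actual build.

```python
# Hypothetical sketch: count the "virtual interruptions" sensed by the
# GSM-activated LED keychain and latch a siren for a few seconds each time.
from gpiozero import DigitalInputDevice, DigitalOutputDevice
from time import sleep

keychain = DigitalInputDevice(17)   # goes high when the keychain LED fires
siren = DigitalOutputDevice(27)     # drives the siren relay

interruptions = 0
while True:
    keychain.wait_for_active()      # block until a call/text lights the LED
    interruptions += 1
    print(f"virtual interruption #{interruptions}")
    siren.on()
    sleep(3)                        # sound the siren for three seconds
    siren.off()
    keychain.wait_for_inactive()    # wait for the burst to end before re-arming
```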
Step two would be to create fake ringtone devices that only make loud ringtone sounds and have them go off at interesting places and times. By taking the previous step further and actually changing a situation, I can cause people to question the very nature of these interruptions and why they are necessary. By placing the ringers in places like… the ceiling, rafters, vents, sewers, and having them go off when people wait for the bus, go shopping, or are in an HCI interview, people will be forced to confront these issues.
The third and last step of cell phone rehabilitation is putting the power of cell phone communication back into the hands of the people. By connecting a car-radio FM transmitter to a cell phone headset and setting my phone to auto-answer, I can broadcast anyone who calls my number (a number that will be advertised). Anyone who tunes into the frequency, or any radios set to that frequency and connected to speakers set up in places I choose, will hear anything that people want to say, uncensored, unleashing the real power of this device to the common people.

Capstone Project Process – OpenGL and transformations

by paulshen @ 7:38 pm 28 March 2010

http://in.somniac.me/2010/03/28/using-opengl-to-calculate-transformations/

Jon Miller – Final Project

by Jon Miller @ 11:57 pm 24 March 2010

Summary
This is a game about protecting your “Little”, a small car that follows you around, while running into your opponent’s “Little”. At your disposal is the ability to change the terrain at will: you can form hills and valleys in real time, at your discretion, to aid you and thwart your opponent.

To download the game, contact me or Golan Levin. The filesize is too large for this server.

Some screenshots:

The players face off.

The battle ensues!

Green wins!

Concept
My idea went through several iterations; however, I was satisfied with what I ended up with. I wanted to do something involving dynamic terrain generation. Initially, my idea was to write algorithms that simulated Earth-like terrain formation, creating an effect similar to watching a few seconds of stop-motion photography of a mountain forming over thousands of years. However, I was also interested in somehow making this a game, so I decided to add an element of driving through these mountains as they formed.

I decided the gameplay element would be to make as much progress as possible as the terrain gradually became harder and harder to move through. This idea was scrapped because I felt it would be frustrating to struggle through ever-growing terrain, with getting stuck as the only way to lose. I also saw my original idea (forming terrain) getting lost as I attempted to answer the question of how to create a lifelike environment in only a few weeks.

After many more ideas passed through my mind, I settled on creating a game where the player controlled the formation of hills and valleys, an idea I had from the beginning. The game would involve two players driving around, each in charge of protecting a “baby car” while simultaneously crashing into the enemy’s baby car. This idea is only one of several that I could have implemented; listed here are some of the others and why I chose not to implement them:
A game where two players cooperate to ward off zombie-like little things that attack your castle in waves. Defense would consist of forming hills and valleys to lure them away from your castle to a hole, where they would fall to their doom. This idea was scrapped because I felt the game would be too difficult to implement quickly and well.
A game where one player attempts to escape the other. The hunter can make hills and valleys to hinder and trap the quarry. I scrapped this idea because I wanted both players to be able to create hills – I felt being the quarry would become boring.
A game where one player protects a treasure while the other player attempts to capture it. The game that I eventually created is, in essence, similar to this game, except that each player has something to protect, and ‘capturing’ is changed to ‘crashing into’. I wanted to make sure to add an element of collision and mild violence when I observed Xiaoyuan’s glee at running into my car during one of the checkpoint presentations.

Implementation
At the suggestion of one of my classmates, I started using Unity, a game engine designed for making three-dimensional games, especially first-person shooters. Using Unity as a development environment has been a very smooth experience, and I would recommend it to anyone looking to rapidly prototype computer games that require advanced physics, good graphics, or simply a 3D environment.
I took a public-domain Unity demo of a vehicle and began to familiarize myself with heightmaps and meshes, because I would be constructing all of the terrain procedurally from code. I also looked into terrain deformation and found a demo of mesh deformation, which proved to be somewhat similar to what I needed.
I decided the terrain would be an infinite grid that formed as you approached it – this would give the world more of an otherworldly feel as well as reduce rendering costs. Also, if necessary, I could delete parts of the grid.
Then I learned how meshes behave, and I began to create my own rectangular-prism meshes from code, as these would be the basis for my terrain deformation. Once I had satisfactorily created these prisms, I looked into deforming them to make hills and valleys. The mesh deformation algorithm was essentially a function that added to the height of the mesh at the point of interest and changed the height of the surrounding points by varying lesser degrees, so that it looked as if someone had stuck a ball under a rug.
I would then redraw the mesh and recalculate the collision detection. I am glad that I chose to use a grid from the outset, because I was able to apply a couple of optimizations that allowed the game to run at an acceptable rate on my computer: calculating collision detection only near the vehicles, updating the collision detection at a slower rate than the rendering of the meshes, and calculating the deformation only on a select part of the grid rather than the entire playing surface.
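A minimal sketch of that deformation function, written here in Python/NumPy over a plain heightmap rather than Unity's mesh API; the cosine falloff and the radius are my assumptions, and any smooth kernel that decays to zero at the rim gives the same ball-under-a-rug look.

```python
# Sketch of the "ball under a rug" deformation on a NumPy heightmap.
import numpy as np

def deform(heights, cx, cz, amount, radius=5):
    """Raise (or lower, if amount < 0) the heightmap around grid cell (cx, cz)."""
    for x in range(max(0, cx - radius), min(heights.shape[0], cx + radius + 1)):
        for z in range(max(0, cz - radius), min(heights.shape[1], cz + radius + 1)):
            d = np.hypot(x - cx, z - cz)
            if d < radius:
                # full effect at the center, smoothly fading to zero at the rim
                heights[x, z] += amount * 0.5 * (1 + np.cos(np.pi * d / radius))
    return heights

terrain = np.zeros((64, 64))
deform(terrain, 32, 32, amount=3.0)   # a hill
deform(terrain, 10, 10, amount=-2.0)  # a valley
```

In the game, the redraw and collision-detection update described above would happen after each call to a function like this one.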

Future
I think the game could do with more polish. I also think there are many new possibilities for using terrain deformation as a gameplay mechanic – only one of which I have explored. Furthermore, even within the concept of terrain generation itself I have barely scratched the surface. I could create tunneling algorithms, or the ability to create crevasses.

Capstone Project Proposal – Interactive Fabrication

by Karl DD @ 8:52 pm


The Interactive Fabrication project will explore and develop new interfaces for digital fabrication from an art & design perspective. Our aim is to create prototype devices that use realtime sensor input to directly influence and affect fabricated output. The figure above illustrates how the current creative process for digital fabrication closely follows the desktop publishing metaphor: a computer is used to create a design, a file representing the design is saved, that design is then fed to an output device, and finally the output device manifests the design in physical form. This process is far removed from traditional craft, where the artist or designer interacts directly with the material, using tools such as brushes or chisels to paint or sculpt.
Although there are numerous advantages to the current digital approach, we believe that by more closely linking input to output, artists and designers can better understand the nature of the material they are working with. Furthermore, interactive fabrication opens up new creative possibilities through interactive performance and improvisation.

Shaper

Shaper is a new prototype that will explore near real-time fabrication using an additive 3D printing process. We will construct a 3-axis fabrication device that can be controlled directly from a computer. This will allow us to experiment with a range of sensor-based interfaces for near real-time digital fabrication. The first iteration of this prototype will focus on a sketch interface where gestures are used to control the fabrication device. Different software modes allow the device to automate the creation of 3D form; for example, a single sketch gesture can be repeated to slowly build up form by shifting the print head along the Z axis. Subsequent interface iterations will explore other forms of input, such as realtime tracing of physical objects using a camera, and the use of sound to control the amount of material dispensed from the print head.
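As an illustration of the repeated-gesture mode, here is a hedged Python sketch that turns one captured 2D gesture into stacked G-code-like moves, shifting up one layer per repetition. The layer height and feed rate are invented parameters, not values from the actual device.

```python
# Hypothetical sketch: one 2D gesture becomes stacked G-code moves.
def gesture_to_gcode(points, layers=20, layer_height=0.4, feed=600):
    lines = []
    for layer in range(layers):
        lines.append(f"G1 Z{layer * layer_height:.2f} F{feed}")  # lift to next layer
        for x, y in points:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")        # retrace the gesture
    return "\n".join(lines)

# e.g. a square gesture captured from a tablet as (x, y) samples, in mm
square = [(0, 0), (20, 0), (20, 20), (0, 20), (0, 0)]
print(gesture_to_gcode(square, layers=3))
```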

Speaker

Speaker is an existing prototype that interactively sculpts wire forms based on the sounds of people talking. A microcontroller is used to analyze speech and control several small motors that push and bend wire. The sound level determines the shape the wire is physically bent into. The next stage will focus on rebuilding the system to allow more accurate forms to be created. This will make for a more interactive experience, as people will be able to see more directly how their voices affect the fabricated output.

Trace Modeler

Trace Modeler is an existing prototype that uses realtime video to create three-dimensional geometry. The silhouette of a foreground object is subtracted from the background and used as a two-dimensional slice. At user-defined intervals, new slices are captured and displaced along the depth axis. The next stage will involve developing a system for outputting the forms directly to a fabrication device in an interactive manner, e.g. by sending the silhouettes incrementally to a laser cutter or milling machine.
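A minimal sketch of the slicing step, assuming OpenCV in Python and a fixed camera: subtract a stored background frame, threshold the silhouette, and treat each capture as a slice at the next depth interval. The threshold value and capture trigger are illustrative assumptions.

```python
# Sketch: capture binary silhouette slices by background subtraction.
import cv2

cap = cv2.VideoCapture(0)
_, background = cap.read()
bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

slices = []                         # one binary silhouette per capture
for i in range(10):
    input("press Enter to capture a slice...")
    _, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, bg_gray)
    _, silhouette = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    slices.append(silhouette)       # slice i sits at z = i * spacing in the model
cap.release()
```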


Cheng Xu & Karl D.D. Willis

Project Listen (capstone proposal)

by davidyen @ 6:57 pm

For my final project, I’ll be working with Jack Mostow and the Project LISTEN team (http://www.cs.cmu.edu/~listen/). They’ve developed software called the Reading Tutor, which supplements the individual attention of teachers to help children learn to read better. They’ve asked me to create some design sketches that explore a new feature that might eventually be incorporated into the Reading Tutor. The basic question I will be investigating is: can visual cues and game mechanisms help children read more fluently, in real time?

The software they’ve already developed does speech analysis that assesses the prosody of a child’s spoken reading. Prosody is the stresses, pacing, and changes in intonation within spoken language; fluent and non-fluent readers speak with measurably different prosody.

My responsibility is to use canned speech samples of both fluent adults and non-fluent children, along with the speech analysis results for these samples, to create some design explorations. I’ll investigate whether kids can read a sentence more fluently (with the correct prosody) if they are given visual cues for how to say the sentence. I’ll create a few (goal: three to four) sketches exploring different visualization techniques for prosodic features as well as different gameplay mechanisms to engage children and encourage improvement.
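As one example of such a visual cue, the sketch below (assuming the librosa and matplotlib Python libraries, and a placeholder filename) extracts a pitch contour from a canned sample and plots it as a curve a child could try to match. Project LISTEN's own speech analysis is separate; this is only a stand-in.

```python
# Sketch: plot an intonation contour from a canned speech sample.
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("fluent_adult.wav")        # placeholder filename
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
times = librosa.times_like(f0, sr=sr)

plt.plot(times, f0)                  # where the voice rises and falls
plt.xlabel("time (s)")
plt.ylabel("pitch (Hz)")
plt.title("target prosody contour")
plt.show()
```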

Eventually, I will work with the team to go from canned samples to real interactivity. I’d love to have this interactivity working by the time of the final show, but that will depend on the ease of integrating my code into their software, which is uncertain. User testing will be done with children in Pittsburgh schools over the summer.

visually mobile (proposal)

by jedmund @ 3:19 pm

visually is a media bookmarking website. Gather images and video to create your own visual trends over time, and discover the unique visual culture of people and places around the world.

Proposal (PDF)

visually mobile progress (04/08/2010)

Final Project: Proposal

by areuter @ 10:34 am

Idea / Summary
Continue the Minute idea I worked on in the first project
Create gallery installation (for senior show / STIA exhibition)
Present audiovisual display of people perceiving a minute
->Minute videos pulled from a database
->Minutes will be randomly arranged so that viewers can make their own inferences about what might be affecting people’s perception of a minute
Allow people to add their own minutes to the project database on the spot

Plan

What I have so far
App that plays back (preselected) minutes
Videos are randomly arranged based on people’s background information
Six app-ready minutes, another 20 or so are still on tape

What I still have to do
Digitize and prepare remaining minutes
Clean up / polish app
->Do another pass on video arrangement
->Fix memory issues
Create second app to record new minutes on the spot and submit to database
Build booth for recording and other presentation components

And if I have time…
Add to database
->Record more minutes
->Collect optional background information
Add extra feature to arrange videos based on background info

Looking Outwards #6: Final Project

by areuter @ 10:10 am

In preparation for my final project, a continuation of Minute, I wanted to research what other people have done with time perception in art. It was actually very difficult to find any works directly related to how people experience time internally, as most time-related artworks focus on altering external environments to convince viewers that their perception has changed (for example, time-lapse).

While researching, I found an interesting segment from Margaret Matlin and Hugh Foley’s book, Sensation and Perception:

Time perception might well be influenced by physiological state, knowledge, personality, and other factors. For instance, there is some evidence that a person with a high fever shortened her estimates of a 1-sec interval and that a person who lived in a cold cave lengthened his time estimates. The evidence for the contribution of metabolic rate to time perception is weak, but would be consistent with a biological clock. The fact that knowledge and experience also play a role (Theme 4), however, argues that there is a cognitive component in time perception. Alberto Montare (1985; 1988) has found that providing feedback to people about the accuracy of their time estimations increases the accuracy of subsequent judgments. Montare did not find gender differences, suggesting that time perception might be equivalent among men and women.

So it does appear that a person’s background affects their perception of time, although the most important factor may be physiological state; perhaps other factors such as location affect time perception because they affect the body, through stress and other lifestyle habits that vary by region.


Jeremy LAKHLEF – La perception du temps

Here’s one slightly related video I found, which simultaneously displays several videos that all span the same amount of time, but the content of each panel affects the viewer’s time perception in a different way. It’s successful in communicating its point, although some of the videos are more interesting than others (the top left and bottom right), and I don’t feel this is anything I’ll really remember a few months from now. However, it does help me understand things I can do better in my own project: higher video quality and stronger consideration of aesthetics during filming. On the other hand, here’s a video I found which has nothing to do with time, but does really interesting spatial arrangements with video clips, which might be worth keeping in mind while establishing the arrangements in my own project:

Final proposal

by xiaoyuan @ 7:58 am

I want to program a game where you get to kill a lot of enemies with guns… (More info to come)

Kaleidoscope Mirror

by ryun @ 10:34 pm 23 March 2010

I remember the first time I experienced a kaleidoscope as a small child. It was a small cylinder, but the visuals it created were so beautiful that I was amazed and played with it all day long. The kaleidoscope is a mysterious toy that generates endlessly different patterns using multiple mirrors and small colored particles.

For the capstone project I would like to build a “Kaleidoscope Mirror”. Through a regular mirror we can see ourselves only as we are, but the Kaleidoscope Mirror can show viewers in various ways, with different patterns. Because a kaleidoscope shows different patterns when it is rotated, I am thinking about what kind of interaction between the viewers and the screen could be interesting. This part is not decided. One option is that the viewer could rotate the mirror’s frame. The other is that the mirror detects the viewer’s gesture or voice, such as “Mirror, mirror…”, and responds to that. Here is another example of a kaleidoscope using a self-portrait image.
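As a quick test of the basic idea (my own assumption, not part of the planned build), here is a minimal Python/OpenCV sketch that mirrors one quadrant of a live webcam frame into the other three, a crude four-fold kaleidoscope; a real version would reflect rotated pie-slice wedges instead of quadrants.

```python
# Sketch: four-fold "kaleidoscope mirror" from a webcam feed.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    h, w = h // 2 * 2, w // 2 * 2          # crop to even dimensions
    frame = frame[:h, :w]
    quad = frame[: h // 2, : w // 2]       # top-left quadrant
    frame[: h // 2, w // 2 :] = cv2.flip(quad, 1)   # mirror horizontally
    frame[h // 2 :, : w // 2] = cv2.flip(quad, 0)   # mirror vertically
    frame[h // 2 :, w // 2 :] = cv2.flip(quad, -1)  # mirror both ways
    cv2.imshow("Kaleidoscope Mirror", frame)
    if cv2.waitKey(1) == 27:               # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```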

Paul Shen Final Proposal

by paulshen @ 10:47 pm 21 March 2010

Proposal

Presentation

Fantastic use of ChatRoulette

by jsinclai @ 10:44 pm

Only because I know this class is in love with ChatRoulette:

Ben Folds on ChatRoulette while performing:
Ben Folds on Chat Roulette

and the full video taken from the audience http://www.twitvid.com/67269

Looking Outwards – Augmented Reality

by ryun @ 8:14 pm

Last semester, I built a “Virtual Wall” as my term project. The project was about a tangible shopping experience. People buy furniture and televisions from the store, but it is quite hard for them to see how these will look in their apartment before they purchase them, so what they usually do is buy the product based only on their imagination. The idea started from here: what if there were a way to upload a picture of their room onto the wall, put pictures of virtual items on it, and see how everything looks in advance? That is the concept of the project.

In this design, however, you need a big space to see and control things, and you need to run a projector to make a real-size screen. I wondered whether there was a handier, easier way to keep this interaction and the main concept. Therefore, for this capstone project I would like to use augmented reality technology on the iPhone and try to create an interesting interaction based on the virtual wall concept.

Beyond the virtual wall concept, this has huge potential. I have not decided what it will be, but I would like to make something fun like the examples below.

Lego Augmented reality

Hallmark Augmented reality card


Final Project Proposal: Earth Timeline

by caudenri @ 6:58 pm

I’m currently working on a group project in which we are designing an exhibit about the evolution of birds. As I research the topic, I’ve become increasingly interested in the way people perceive time on a long scale, and in the history of the Earth. I would like to build on the research and motivation from that project and create an interactive timeline for my capstone project. I would like to show the scale of the different periods of Earth’s history and what was going on on the Earth during each period (this is of course dependent on the research that is available; factors like temperature data are more accurate for recent history than for ancient times, where we rely on theories). Factors I’d specifically like to show are average temperatures, continent formation, extinctions, and a brief overview of the types of life in each period.

I know I’d like to do some sort of interface that shows the whole timeline but allows you to magnify any portion to see the detail. I looked around for timelines like this on the internet and found a few nice ones, but they all felt a little too segmented to me. I want to find a way to keep the information close together so it is easy for the user to explore and compare.

timeline example 1

I like the presentation of background information in this example and how they present the world maps for each period. http://www.nationalgeographic.com/seamonsters/timeline/index.html#jurassic

timeline example 2

British History timeline – the aesthetics are not great, but I like the way it shows a larger part of the timeline at the bottom of the screen and uses that to control where you are in the main frame. http://www.bbc.co.uk/history/interactive/timelines/british/index.shtml

It’s generally difficult for people to conceive of long time periods; the dinosaurs lived 65 million years ago, but what does that mean? I want this piece to be easy to understand and navigate, and interesting to play with. I envision that something like this could appear on a website for National Geographic or Discovery.

sketch ideas

Here are a few sketches I made for several modes the timeline could have and what the interface could look like; however, I’d like to come up with a way to show all the information in a more integrated way.

Final Project Proposal – BusCount

by sbisker @ 6:02 pm

For my final project, I hope to continue my work in urban computing and time-lapse photography. In particular, I’ve been working on creating a cheap time-lapse camera that people could use to “sense” the world around them. The cost of such a device would be around $10 when all is said and done, and having many of these devices would open up options for public computing that people have not yet considered – a world where people are willing to deploy “personal, public” electronics throughout their environment with the same ubiquity, recyclability, and reusability as paper.

That said, getting the camera to a usable state where one could seriously explore this future will take time. I’ve created a proof-of-concept camera to test how seriously my idea can be taken today with open-hardware techniques – but the question remains: how do I explore what might be possible in the future using today’s technology? In the seven weeks of this project, it’s infeasible to linearly refine my open-source hardware to a usable state and THEN explore the possibilities of personal, public computing on that same hardware.

My solution? Cobble together existing off-the-shelf hardware to do the task for me. I hope to combine the Wingscapes PlantCam ($80) with the Eye-Fi wireless SD card ($40) to give myself a slightly expensive “prototype” hardware platform for time-lapse photography in public spaces. I will then create a server application and website for time-lapse photo processing that, combined with the hardware, makes a reasonable case for why a cheap, open-hardware time-lapse photography kit makes sense for individuals in the community.

By the end of this class, I hope to have worked out the kinks in this prototype hardware setup and successfully deployed a hardware and software system that regularly counts people at the CMU bus stop and shares that information with commuters around CMU through a publicly available website. The thought is that people can safely assume the bus has already left if no one is waiting at the stop, and that the bus is running late and will arrive soon if more people than normal are waiting. A rough sketch of what such a website might feel like is below.

(Note: I reserve the right to rework this project as PamelasCount, or LaundryCount, or whatever allows me to complete a successful intervention in the time alloted that still taps into the “zeitgeist” of personal public computing.)
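A sketch of what the counting step might look like, assuming OpenCV 4 in Python: compare each Eye-Fi photo against a reference shot of the empty stop and count person-sized blobs. The filenames and the area threshold are placeholders to be tuned against real photos.

```python
# Sketch: count people waiting by blob detection against an empty-stop photo.
import cv2

def count_waiting(photo_path, empty_stop_path):
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    empty = cv2.imread(empty_stop_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(photo, empty)                 # what changed vs. empty stop
    _, mask = cv2.threshold(cv2.GaussianBlur(diff, (5, 5), 0), 40, 255,
                            cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only blobs big enough to plausibly be a person (threshold is a guess)
    blobs = [c for c in contours if cv2.contourArea(c) > 2000]
    return len(blobs)

print(count_waiting("busstop_0830.jpg", "busstop_empty.jpg"), "people waiting")
```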

Development Plan:
Mon, March 22nd (Today, and My Birthday): Deliver this pitch. Done!
Thur, March 25th: Have hardware acquired and functional (PlantCam/Eye-fi). Spend lots of money to get things quickly. Try to get other projects interested in equipment to reimburse.
Wed, March 31st: Prove that pictures can be transmitted with PlantCam/Eye-fi to a local machine or Flickr service
Fri, April 2nd: Prove pictures can be transmitted over public wifi (CMU, Sq Hill)
Mon April 6th: Have blob detection working with “test photos” (taken without PlantCam/Eye-Fi)
Fri April 9th: Have blob detection work in real time with PlantCam/Eye-Fi photos.
Sat April 10th: Begin work on public-facing website.
Sun April 11th-Fri April 16th: CHI in Atlanta. Work on website during boring talks.
Freak out, recognize irony of being crunched to finish further work in Personal Public Computing because of presenting previous work in Personal Public Computing.
Fri April 16th: Plug real photos, blob detection data into public-facing website.
Mon April 19th: Present finished work, or beg for forgiveness/extension.

Capstone Project Proposal: “Vent at Me”

by aburridg @ 8:03 am

Concept & Motivation

I’ve always been really interested in how people choose to express themselves through the internet. Many people I know have heard of PostSecret and FMyLife: sites where people can express something personal about themselves anonymously to the entire internet. I’m sure many people have also heard of various YouTube celebrities who blog about random topics in their lives or about the world and end up getting 1.8 million views (e.g., Boxxy).

Another big question I have is why: why do people feel the need to express their problems publicly to potentially everyone on the internet? An obvious answer is that it’s an outlet for them. Or they believe that by expressing their secret or opinion, they will change the way others think about its topic (or at least open their eyes to it). So I figured I’d investigate even further and put forth the final question I hope to answer with this project: why do people feel the need to express themselves over the internet, and how do the topics they feel comfortable sharing over the internet differ from the topics they would share with a person face-to-face?

Method(s)

I don’t think there has been a large study on this, but I want to collect my own data anyway. I will be using Mechanical Turk again (since I had so much fun with it in my first project!). I will also try to use other survey sites and chat clients to get data. I’ve already begun collecting data and shaping my survey on Mechanical Turk. I’m going to start trying to collect data from ChatRoulette soon (when I’m not sick anymore and my voice returns).

This is going to be an information visualization project, so after I collect enough data, I will compile it into a visual piece. I haven’t fully decided how I’m going to visualize my data – it depends on how successful my data collection from the video clients is, and on the final data set (how many dimensions I have to work with).

Right now, I’m thinking about using a bar chart to display my data (each bar would represent the data from a different participant), and using a fisheye technique to zoom in on the bars to get more specific information about each participant. There will also be filters to show only participants from certain age groups and genders. A sketch of the fisheye idea is below.
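A minimal Python sketch of that fisheye distortion, following Sarkar and Brown's classic formula: bar edges near the cursor spread apart while the rest compress. The degree-of-distortion constant is an assumed value.

```python
# Sketch: 1-D fisheye remapping for bar positions (Sarkar & Brown style).
def fisheye(x, focus, d=3.0):
    """Map normalized position x in [0, 1] given a focus point in (0, 1)."""
    if x >= focus:
        m = (x - focus) / (1 - focus)          # distance beyond the focus
        return focus + (1 - focus) * (d + 1) * m / (d * m + 1)
    m = (focus - x) / focus                    # distance before the focus
    return focus - focus * (d + 1) * m / (d * m + 1)

# each bar's left/right edge is remapped, so bars near the cursor widen
edges = [i / 10 for i in range(11)]
print([round(fisheye(e, focus=0.5), 2) for e in edges])
```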

Final Project Proposal – Little Friends

by kuanjuw @ 11:51 pm 20 March 2010

CONCEPT

In this project I am building a group of creature-like robots which interact with people.

In many movies we can find a group of small, adorable creatures who are friends of human beings. We like creatures that are smaller than us, and being surrounded by them makes us feel warm. (Of course, if those guys look like the bugs in “The Mummy”, you might want to run away.) They usually share the same characteristics: soft, small, and pure of soul. Most importantly, they make human beings aware of respecting nature.

In my final project I am interested in exploring the interaction between a school of physical objects and the user, and in creating the experience of playing with objects that have minds of their own.

To implement the kinetic installation, I am thinking of using a camera to track viewers’ positions from above and projecting intense circles of light onto the floor. A school of light-following robots will then trace those light sources.
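A sketch of the overhead tracking loop, assuming OpenCV in Python and a camera/projector pair that has already been aligned: find the viewer as the largest moving blob in the ceiling camera's view and draw a bright circle at that position in the projector output.

```python
# Sketch: track the viewer from above and project a light circle at their feet.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # moving pixels = the viewer
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(frame)                 # black projector frame
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        (x, y), _ = cv2.minEnclosingCircle(biggest)
        cv2.circle(out, (int(x), int(y)), 60, (255, 255, 255), -1)
    cv2.imshow("projector", out)               # full-screen this on the projector
    if cv2.waitKey(30) == 27:                  # Esc to quit
        break
cap.release()
```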

Beacon at Lightwave 2009 from Cinimod Studio & Chris O'Shea on Vimeo.

SKETCH

Looking Outwards: Gramazio & Kohler Walls

by mghods @ 6:54 pm

Fabio Gramazio is a professor of architecture at ETH Zurich. He is also a co-founder of Gramazio & Kohler, an architecture firm based in Zurich, Switzerland. He has done many interesting projects in the field of digital fabrication and has been offering a course on creating digitally fabricated walls. Here are some of the projects done in his classes (you can find a brief description of each work here):

1- The Programmed Wall

2- The Perforated wall

3- The Disintegrated Wall

4- The Resolution Wall

5- Acoustics

6- The Sequential Wall 1

7- The Sequential Wall 2

8- The Programmed Column

Looking Outwards: Spam Architecture

by paulshen @ 5:39 pm

http://www.sq.ro/spamarchitecture.php

This is a Looking Outwards for my capstone project. I stumbled across this project, which generates 3D models by analyzing the spam in an inbox (the project documentation is very limited). I find the models very beautiful and would like to learn how to render such models (as well as how to generate them). One critique I have is that there seems to be little I can conclude from just looking at the models, but perhaps this is a result of the limited documentation.

Looking Outwards: Augmentation (Sightseeing telescope)

by paulshen @ 5:36 pm

http://www.we-make-money-not-art.com/archives/2008/06/im-back-from-my-favourite.php

Sightseeing telescope reveals open wifi networks in urban space

Similar to my project of augmenting videos with optical flow, this project augments a telescope with the current state of the wireless networks that can be “seen” through it. In the same sense, this project shows something that is invisible to the naked eye.

The telescope works with a WiFi antenna that can detect distant WiFi networks in the direction the telescope points. Using this information, the telescope’s view is projected onto a screen, along with circles indicating WiFi networks.

Observatorio reflects on this scenario by informing viewers about the current state of wireless networks located in the area where the device is installed. The sightseeing telescope, installed on the Laboral tower, tracks and shows where Gijon’s wifi networks are located in real time. You can visualize them on the screen of the telescope, swing it around and see which areas have a denser wifi coverage, and get additional data such as which ones among these networks are open or private. Because Observatorio is programmed to try and connect to any open network available in the area, it can send the information from the observation tower to the exhibition hall, where it is displayed on a big screen. If there are no open networks detected in the area, Observatorio remains separated from the main exhibition space, located in another building. A modification of these networks is also offered, showing an ideal configuration in which the local residents of large areas in the city could gain or share access to it.
