Monthly Archives: March 2013

Elwin

06 Mar 2013

I have a couple of random ideas for this project. Not sure which one I should do yet.

Face-away

This idea doesn’t really have a purpose. It’s more experimental and artsy, I guess. Imagine a panel sticking out of the wall that can rotate on the x- and y-axes. The panel reacts to a person and rotates away from the person’s head, always facing away from the user. For example, if the user moves to the right, the panel rotates to the left on the y-axis, and the other way around if the user moves to the left. The same inverse movement occurs when the user tries to look at the panel from above or below.

**sketch image coming very very soon**

Possible implementation:
– I could use either a webcam or a Kinect to track the user. The most important part is being able to track the location of a person’s head. If I use the Kinect, I should be able to get the head position from the skeleton (I’ve never worked with the Kinect). I could also use blob detection with the webcam from a top-down view or something, but I don’t think it would be accurate enough. Perhaps a better and easier method would be to use FaceOSC to track the head. I would have to place the camera in such a way that I could see and capture the face from all angles.
– For rotating the panel I could use two servo or stepper motors, one for each axis. These shouldn’t be hard to implement.
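
Here is a minimal Processing sketch of that head-to-servo mapping, assuming FaceOSC is running on its default OSC port (8338) and that an Arduino on the other end of the serial connection reads two bytes and writes them to the pan and tilt servos. That Arduino sketch, the serial port index, and the 640×480 camera size are all assumptions to adjust for the real setup.

```
// Read the head position from FaceOSC (oscP5 library) and send two inverted
// servo angles (pan, tilt) to an Arduino over serial. Assumes FaceOSC is
// broadcasting on its default port 8338 and that an Arduino sketch on the
// other end reads two bytes and writes them to the servos.
import oscP5.*;
import processing.serial.*;

OscP5 osc;
Serial arduino;
float headX = 0.5, headY = 0.5;   // normalized head position, 0..1

void setup() {
  size(400, 400);
  osc = new OscP5(this, 8338);                         // FaceOSC default port
  arduino = new Serial(this, Serial.list()[0], 9600);  // pick the right port for your setup
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pose/position")) {
    // FaceOSC reports the face center in camera pixels; normalize assuming 640x480
    headX = constrain(m.get(0).floatValue() / 640.0, 0, 1);
    headY = constrain(m.get(1).floatValue() / 480.0, 0, 1);
  }
}

void draw() {
  background(0);
  // Invert the mapping so the panel turns away from the head:
  // head moves right -> panel pans left, head moves up -> panel tilts down.
  int pan  = int(map(headX, 0, 1, 180, 0));
  int tilt = int(map(headY, 0, 1, 0, 180));
  arduino.write(pan);
  arduino.write(tilt);
  text("pan: " + pan + "  tilt: " + tilt, 20, 20);
}
```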

???
For now there’s nothing to see on the panel. It could just be a plain piece of material: wood, acrylic, or something. But I’m not sure if that’s interesting enough, or whether I should come up with something to display on the panel.

 

Bueno

06 Mar 2013

So, Caroline and I have decided to collaborate on a project.

A few inspirations and references we hope to draw from for this:

  • Here is an awesome article about how virtual reality does and does not create presence.
  • Virtual presence through a map interface.  Click
  • making sense of maps. Click
  • Obsessively documenting where you are and what you are doing. Surveillance. Click
  • Gandhi in second life. click.
  • Camille Utterback’s Liquid Time.
  • Love of listening to other people’s stories. click
  • Archiving virtual worlds

A few artistic photographs of decayed places:

Our thoughts concerning all this centered on Carnegie Mellon and “making your mark”. People really just pass through places, and there is a kind of nostalgia in observing that fact. Furthermore, what “scrubs” places of our presence is just other people. There is a fantastic efficiency in the way human beings repurpose/re-experience space so that it becomes personalized for them. We conceived of our project as having people give monologues and stories that can be represented geographically, using Google Maps. Their act of retelling would also be a retracing of their route.

Some technically helpful links:

 

Here is a video sketch I did of me “walking” through a little personal story of my car incident on Google Maps. This example video I made is of really crappy quality, so I apologize in advance. I was making it in kind of a rush and didn’t have time to figure out proper compression. I will upload a better version later.

 

Sequence 01 from Andrew Bueno on Vimeo.

Caroline

06 Mar 2013

Inspiration/ References:

  • Here is an awesome article about how virtual reality does and does not create presence.
  • Virtual presence through a map interface.  Click
  • making sense of maps. Click
  • Obsessively documenting where you are and what you are doing. Surveillance. Click
  • Gandhi in second life. click.
  • Camille Utterback’s Liquid Time.
    • (Bueno) Ah, could you imagine what it would be like to have a video recording spanning years?
  • Love of listening to other people’s stories. click
  • (Bueno) Archiving virtual worlds


Thoughts:

  • Carnegie Mellon and “making your mark”. People really just pass through places, and I think there is a kind of nostalgia in that.
    • (Bueno) I think an addendum to that thought is that what “scrubs” places of our presence really is just other people. Sure, nature reclaims everything eventually, but there is a fantastic efficiency in the way human beings repurpose/re-experience space.


  • artistic photos of old places


Technically Helpful:

 

Sound-scrubbing library for Processing.
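
Whatever library we end up using, the basic scrubbing move is easy to prototype with Minim (which ships with Processing). A minimal sketch: dragging horizontally cues the recording to the corresponding point, the way tracing a route on a map could scrub through a recorded monologue. “story.mp3” is a placeholder file.

```
// Scrub through a recorded story with the mouse using Minim.
// "story.mp3" is a placeholder audio file in the sketch's data folder.
import ddf.minim.*;

Minim minim;
AudioPlayer story;

void setup() {
  size(600, 100);
  minim = new Minim(this);
  story = minim.loadFile("story.mp3");
  story.play();
}

void draw() {
  background(30);
  // draw a playhead showing how far into the story we are
  float x = map(story.position(), 0, story.length(), 0, width);
  stroke(255);
  line(x, 0, x, height);
}

void mouseDragged() {
  // jump to the part of the story under the mouse
  int ms = int(map(mouseX, 0, width, 0, story.length()));
  story.cue(ms);
}
```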

Andy

06 Mar 2013

So for better or for worse, I think one of my greatest sources of inspiration is learning how to use a new tool and then demonstrating proficiency with it. For my current idea for the interactivity project, perhaps two tools. I guess my basic idea is pretty simple – I want to take real objects and put them into video games. Below is my first attempt to do so – a depth map of my body sitting in a chair, which I was able to import into Unity via the RGBD system.

[image: bodyinunity]

Here is the image at an intermediate step; perhaps you can see it better this way:

[image: mesh0]

So we are a long way away from a good-looking representation, much less the possibility of recombining depth maps to get true 3D objects in Unity (an idea), but I really want to become more proficient in Unity as well as spend some time with RGBD, so I like the idea of playing around and seeing what I can make possible in a space which is (to my knowledge) very unexplored.
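
Outside of Unity, a quick way to preview what a depth map encodes is to render it as a point cloud in Processing. This is only a sketch under assumptions: “depth.png” is a placeholder grayscale export where brightness stands for distance, which may not match the actual RGBD export format.

```
// Render a grayscale depth image as a 3D point cloud.
// Assumes "depth.png" stores depth as brightness (an assumption).
PImage depth;

void setup() {
  size(800, 600, P3D);
  depth = loadImage("depth.png");
  depth.loadPixels();
}

void draw() {
  background(0);
  translate(width/2, height/2, -200);
  rotateY(map(mouseX, 0, width, -PI, PI));   // move the mouse to orbit
  stroke(255);
  int step = 4;                              // skip pixels for speed
  for (int y = 0; y < depth.height; y += step) {
    for (int x = 0; x < depth.width; x += step) {
      float d = brightness(depth.pixels[y * depth.width + x]);
      if (d > 0) {
        // brighter = closer (assumption); push darker pixels back in z
        point(x - depth.width/2, y - depth.height/2, map(d, 0, 255, -200, 200));
      }
    }
  }
}
```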

WHERE IS THE ART?

Definitely a question on my mind. The super-cool idea I have is to use RGBD and augmented reality to allow me to create levels for a video game with the objects around my house, recording the 3D surfaces and assigning spawn points/enemy locations/other stuff based on AR symbols which I can put in the scene. The result could be a hopefully cool and creative hybridization of tabletop and video games, allowing users to create their own worlds and then play them.

I’m also curious to see who shoots the first sex tape in RGBD, but I don’t think I want to be that guy.

Michael

06 Mar 2013

How often?

We often hear statistics about how frequently certain events occur.  One child dies from hunger every five seconds.  Someone buys an iPad every 1.5 seconds.  Someone dies from poor indoor air quality every 15 seconds.  A baby is born every quarter second.  These numbers only let us understand these phenomena on a very cerebral level, though.  Even well-designed infographics only engage the user visually.  I would like to make an installation that cycles through a database of these statistics and allows the user to experience each through a combination of touch, light, or sound.  For example, a light could blink with a period of 1.5 seconds to indicate the frantic pace at which the world is buying up iPads while a gentle burst of compressed air to the back of the hand every five seconds reminds the user how often the world lets a child starve to death.  Approximately five children starved in the time it took to read this paragraph.
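
The timing side of this is simple to prototype. A minimal Processing sketch that drives several “one event every N seconds” statistics at once, using the periods mentioned above; in the installation each flash would instead trigger a light, a speaker, or an air valve (that hardware hookup is not shown here).

```
// Drive several "one event every N seconds" statistics at once.
// Each timer flashes a bar when its period elapses.
String[] labels  = { "iPad bought", "child starves", "indoor-air death" };
float[]  periods = { 1.5, 5.0, 15.0 };          // seconds, from the statistics above
float[]  lastFired = new float[periods.length];

void setup() {
  size(600, 300);
  textSize(16);
}

void draw() {
  background(0);
  float now = millis() / 1000.0;
  for (int i = 0; i < periods.length; i++) {
    if (now - lastFired[i] >= periods[i]) {
      lastFired[i] = now;                       // the event "happens" again
    }
    // fade the bar in the half second after each event
    float sinceEvent = now - lastFired[i];
    float glow = map(constrain(sinceEvent, 0, 0.5), 0, 0.5, 255, 40);
    fill(glow);
    rect(50, 60 + i * 70, 200, 40);
    fill(255);
    text(labels[i] + " every " + periods[i] + " s", 270, 85 + i * 70);
  }
}
```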

 

QR Code Infobombs

People love to scan QR codes, even if they don’t know what they lead to.  I might like to pepper sidewalks with QR codes made with chalk and stencils that lead to a website that presents highly localized and continuously-updated information on smog and air pollution.  If people scan them while walking along a busy road, I hope I can make the presentation compelling enough to make the link between air quality and traffic stick in their minds.

 

Secret Keeper

I imagine a tiny black cube with a phone number and instructions on the side, to be placed on a pedestal in some public location.  If you text it a secret (and the text checks out in terms of length and variability to weed out messages like “butts butts butts”), it will store it and reply with an anonymized secret that it has heard before and is most similar to yours.  Each secret gets sent to only one other person after a suitable number have accumulated, so you know that when you tell it a secret, only one other person will receive it.  In a sense, it’s a bit like Post Secret, except for the strange sensation that exactly one stranger will know something deeply personal about you. (Also, the cube may emit a faint red glow when it receives the secret, to indicate some link between the physical object and the process).
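
The “most similar” step could be as simple as word overlap. Here is one possible sketch of that matching, scoring candidates by shared words (Jaccard overlap); the SMS plumbing (for example, a Twilio webhook) is assumed and not shown.

```
// Pick the stored secret most similar to an incoming one by word overlap.
import java.util.HashSet;

String mostSimilar(String incoming, String[] stored) {
  String best = null;
  float bestScore = -1;
  for (String s : stored) {
    float score = overlap(incoming, s);
    if (score > bestScore) {
      bestScore = score;
      best = s;
    }
  }
  return best;
}

// Jaccard overlap of the two secrets' word sets: shared words / total words.
float overlap(String a, String b) {
  HashSet<String> wa = new HashSet<String>();
  HashSet<String> wb = new HashSet<String>();
  for (String w : a.toLowerCase().split("\\W+")) wa.add(w);
  for (String w : b.toLowerCase().split("\\W+")) wb.add(w);
  HashSet<String> both = new HashSet<String>(wa);
  both.retainAll(wb);
  HashSet<String> all = new HashSet<String>(wa);
  all.addAll(wb);
  return all.isEmpty() ? 0 : (float) both.size() / all.size();
}

void setup() {
  String[] stored = {
    "i still talk to my childhood imaginary friend",
    "i never learned to ride a bike",
    "i read my roommate's diary once"
  };
  println(mostSimilar("i never told anyone i can't ride a bike", stored));
}
```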

 

Patt

06 Mar 2013

I have two ideas for this coming project, both of which have to do with making sound/music.

The first one is to combine the Kinect with Ableton Live to create an interactive, real-time performance.

I talked about this video in my LookingOutwards post. I think it’s a great work that combines different tools and brings together different groups of people to create something quite extraordinary. Since I am new to both the Kinect and Ableton Live, it will be a chance for me to explore what is possible. For this project, my goal is to learn how to combine the two pieces of software to create great music and something that is just fun to play with.
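
One simple bridge between the two is MIDI. Here is a sketch of the idea using the MidiBus library for Processing, with mouseY standing in for a tracked Kinect joint; the output device name (and whether it is routed through something like the IAC Driver) depends on the MIDI setup and is an assumption.

```
// Send a MIDI controller change that Ableton Live can MIDI-map to any knob.
// mouseY stands in for a Kinect joint position here.
import themidibus.*;

MidiBus midi;

void setup() {
  size(300, 300);
  MidiBus.list();                        // print available MIDI devices
  midi = new MidiBus(this);
  midi.addOutput("IAC Driver Bus 1");    // device name depends on your setup
}

void draw() {
  background(0);
  // map vertical position to CC #1 (mod wheel), 0..127
  int value = int(map(mouseY, 0, height, 127, 0));
  midi.sendControllerChange(0, 1, value);  // channel 0, controller 1
  fill(255);
  text("CC1 = " + value, 20, 20);
}
```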

My second idea is more hands-on. I want to use an Arduino, conductive materials, and objects you can find around the house, such as paper, fabric, and plastic, to make something that produces interesting sounds. I have seen tutorials that teach the basics of how this can be done, but I am trying to come up with a new and interesting way to implement it.

http://hlt.media.mit.edu/?p=1372
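
On the computer side, a minimal Processing sketch could turn those sensor readings into sound. This assumes the Arduino (for example, following the tutorial linked above) prints one numeric reading per line over serial at 9600 baud; the sensor range and port index are guesses to tune for the actual materials.

```
// Map capacitive-sensor readings from an Arduino to the pitch of a sine
// oscillator using Minim. Assumes one numeric reading per line over serial.
import processing.serial.*;
import ddf.minim.*;
import ddf.minim.ugens.*;

Serial arduino;
Minim minim;
AudioOutput out;
Oscil osc;

void setup() {
  size(300, 300);
  arduino = new Serial(this, Serial.list()[0], 9600);  // pick the right port
  arduino.bufferUntil('\n');
  minim = new Minim(this);
  out = minim.getLineOut();
  osc = new Oscil(220, 0.5, Waves.SINE);
  osc.patch(out);
}

void serialEvent(Serial s) {
  String raw = s.readStringUntil('\n');
  if (raw == null) return;
  float reading = float(trim(raw));            // sensor value from the Arduino
  if (Float.isNaN(reading)) return;
  // map the (assumed) sensor range to a pitch range; tune to your materials
  float freq = map(constrain(reading, 0, 1000), 0, 1000, 110, 880);
  osc.setFrequency(freq);
}

void draw() {
  background(0);
}
```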

Kyna

06 Mar 2013

Interactivity Project ->

For this project I’m really hoping to make a game for Android tablets/phones that utilizes the touch screen. I’m not sure if that’s too ambitious for the time we’re given but I feel like it’s an area I’m going to need to explore eventually.

My current idea, which I think is definitely too big for this assignment, is to make a wave-based (think Tower Defense / Plants vs Zombies) game wherein you play as a goblin warlock’s apprentice, and your job is to go clear out an old fort that’s infested with humans. Levels would be different rooms, and the waves would consist of different types of people (knights, knaves, whatever). As a warlock apprentice, you know some spells that you can cast onto the oncoming waves by drawing different symbols.
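
Most of the technical risk in that idea is the spell-drawing part: recognizing which symbol was drawn. A bare-bones sketch of the usual approach (record the stroke, resample it to a fixed number of points, normalize position and scale, and compare against stored templates by average point distance), roughly the idea behind the “$1” gesture recognizer, simplified to mouse input and with no rotation invariance:

```
// Bare-bones symbol recognition for the spell-drawing mechanic.
import java.util.ArrayList;

ArrayList<PVector> points = new ArrayList<PVector>();
PVector[][] templates = new PVector[0][];   // normalized spell templates, recorded ahead of time
int N = 32;                                 // points per normalized gesture

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  beginShape();
  for (PVector p : points) vertex(p.x, p.y);
  endShape();
}

void mousePressed() { points.clear(); }
void mouseDragged() { points.add(new PVector(mouseX, mouseY)); }

void mouseReleased() {
  if (points.size() < 2) return;
  PVector[] g = normalizeGesture(resample(points, N));
  // Compare g to each stored template and cast the spell whose template is
  // closest (smallest gestureDistance). Recording templates uses the same code.
  for (PVector[] t : templates) println(gestureDistance(g, t));
}

// Pick n points spread evenly along the recorded stroke (by index, for
// simplicity; the $1 recognizer resamples by arc length instead).
PVector[] resample(ArrayList<PVector> pts, int n) {
  PVector[] out = new PVector[n];
  for (int k = 0; k < n; k++) {
    int i = round(map(k, 0, n - 1, 0, pts.size() - 1));
    out[k] = new PVector(pts.get(i).x, pts.get(i).y);
  }
  return out;
}

// Translate to the centroid and scale by the bounding box so position and size don't matter.
PVector[] normalizeGesture(PVector[] pts) {
  PVector c = new PVector();
  float minX = pts[0].x, maxX = pts[0].x, minY = pts[0].y, maxY = pts[0].y;
  for (PVector p : pts) {
    c.add(p);
    minX = min(minX, p.x); maxX = max(maxX, p.x);
    minY = min(minY, p.y); maxY = max(maxY, p.y);
  }
  c.div(pts.length);
  float s = max(max(maxX - minX, maxY - minY), 1);
  PVector[] out = new PVector[pts.length];
  for (int i = 0; i < pts.length; i++) {
    out[i] = new PVector((pts[i].x - c.x) / s, (pts[i].y - c.y) / s);
  }
  return out;
}

// Average distance between corresponding points; smaller = more similar.
float gestureDistance(PVector[] a, PVector[] b) {
  float sum = 0;
  for (int i = 0; i < a.length; i++) sum += PVector.dist(a[i], b[i]);
  return sum / a.length;
}
```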

ugh

I have some other less time-consuming ideas that I might fall back on in the event I can’t get the barebones version of this running by the due date.

SamGruber::Interactive::Sketch

I began thinking about this project with a question: why is code text? Almost all programming must be accomplished by writing out long stretches of symbols into a text box, with the only “graphical” component being (often incomplete) syntax highlighting. Back when all computers could display was text and the primary input device was a keyboard, this was perfectly reasonable.

But now even a high school calculator draws color graphics, and more and more we use phones and tablets which are meant to be touch-driven. And yet, programming remains chained to the clunky old keyboard. Producing programs on a tablet or phone is all but impossible. But there’s no reason it should be. Creating programs should be as easy as drawing a picture.

[image: lambda_graphical]

I draw from the computational framework of Lambda Calculus, in which all computation is represented through anonymous function-objects. Naturally, this mode of thinking about programs lends itself to a graphical interpretation.

Lambda Calculus needs only a few metaphors defined. A line charts the passage of a function-object through the space of the program. Helix squiggles denote passing the squiggled function-object to the other function-object. Double bars indicate an object which dead-ends inside of an abstraction. Large circles enclose “Lambda abstractions” which are ways to reference a set of operations as a unit with inputs and an output.

The goal of this project is to develop a drawing-based editor for Lambda Calculus programs that can be expressed in this manner, which automatically converts the user’s sketches into programs.
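
To make the mapping concrete, here is a minimal sketch (in Processing-flavored Java) of the term structure such an editor might build behind the scenes: variables, lambda abstractions (the large circles), and applications (the helix squiggles), plus one naive beta-reduction step. It assumes variable names are globally unique, so it skips capture-avoiding renaming.

```
// Minimal lambda-calculus term structure plus one naive beta-reduction step.
// Assumes variable names are globally unique (no capture-avoiding renaming).
abstract class Term {
  abstract Term subst(String name, Term value);  // replace a variable by a term
}

class Var extends Term {
  String name;
  Var(String n) { name = n; }
  Term subst(String n, Term v) { return n.equals(name) ? v : this; }
  String toString() { return name; }
}

class Lam extends Term {            // a "large circle" enclosing an abstraction
  String param; Term body;
  Lam(String p, Term b) { param = p; body = b; }
  Term subst(String n, Term v) {
    return n.equals(param) ? this : new Lam(param, body.subst(n, v));
  }
  String toString() { return "(\\" + param + "." + body + ")"; }
}

class App extends Term {            // a "helix squiggle": pass arg to fun
  Term fun, arg;
  App(Term f, Term a) { fun = f; arg = a; }
  Term subst(String n, Term v) { return new App(fun.subst(n, v), arg.subst(n, v)); }
  String toString() { return "(" + fun + " " + arg + ")"; }
}

// One leftmost beta-reduction step: ((\x.body) arg) -> body[x := arg]
Term step(Term t) {
  if (t instanceof App && ((App) t).fun instanceof Lam) {
    App a = (App) t;
    Lam f = (Lam) a.fun;
    return f.body.subst(f.param, a.arg);
  }
  return t;
}

void setup() {
  // (\x.x) y  reduces to  y
  Term id = new Lam("x", new Var("x"));
  Term prog = new App(id, new Var("y"));
  println(prog + "  ->  " + step(prog));
}
```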

Erica

06 Mar 2013

I have a couple of ideas that I am trying to decide between for my interactivity project.  I am interested in doing something that is both screen- and touch-based using either a phone/tablet, Sifteo, the AR toolkit, or Reactivision.  I’m not really sure what I would do with the latter two tools, as I was only just introduced to them in class on Monday, but I’m keeping them in mind.

My first idea is to continue the Sifteo project that Caroline, Bueno and I worked on for project 1.  I think that we had a really neat idea and I would like to find a way to optimize the clock to alleviate the memory issues we were having as well as create an interface that would allow users to design their own “puzzles” for turning off the alarm clock.

Another idea I have is to use As-Rigid-As-Possible Shape Manipulation (which makes it possible to manipulate and deform 2D shapes without using a skeleton) to create a tool for real-time, interactive story-telling.  I plan to implement this algorithm in C++ for my final project for Technical Animation, and I thought that I could build upon this to let users draw the characters to be manipulated on a tablet and then, by connecting to a monitor or a projector, tell stories by manipulating the characters.  I see two possible applications of this: 1) as a story-telling tool to create a sort of digital puppetry, and 2) as more of an interactive exhibit where visitors could add to the story by either creating new characters or manipulating the characters that are already there.  I’d also be interested to hear other suggestions for applications of this.

I’m also really interested in the idea of educational software.  For my BCSA capstone project I’m working on an educational game, and I really appreciated the iPad app we saw Monday that counts your fingers.  I would like to apply the shape manipulation I discussed above to an educational context, but I don’t have a definite one in mind yet, so I’d also like to hear ideas for such applications, or for interactive educational software in general.

 

Caroline

04 Mar 2013

FFT = Fast Fourier transform

Engineering Terminology for artists

Will be focusing on continuous digital data: 1D signals (sensors) and 2D signals (images).

Even buttons have noise. Media artists must deal with noise.


 

Signals:

amplitude, frequency, period

Timbre: the shape of the wave (ex: square, ragged, curved)

Phase: describes two waves in relation to each other; they can cancel (subtract) or add.

Pulse Width Modulation: the duty cycle is the fraction of time the signal is on.

Spatial Frequency: visual signals have all of this too (amplitude, frequency, period, and orientation).

Different spatial frequencies convey different things about an image:

high = detail, low = blur

Digital Signals: two numbers characterize the sampling resolution:

Bit Depth

Sampling Rate

Nyquist Rate & Aliasing: the Nyquist rate is 1/2 the sampling rate. Any frequency higher than half the sampling rate will be aliased (distorted and represented as a lower frequency).
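
A quick numeric check of that point, as a tiny Processing sketch: a 7 Hz cosine sampled at only 10 Hz produces the same samples (up to rounding) as a 3 Hz cosine, so it shows up as the lower frequency.

```
// Aliasing demo: a 7 Hz cosine sampled at 10 Hz looks exactly like a 3 Hz cosine.
void setup() {
  float fs = 10;               // sampling rate (Hz), so the Nyquist rate is 5 Hz
  float fHigh = 7;             // above the Nyquist rate
  float fAlias = fs - fHigh;   // 3 Hz
  for (int n = 0; n < 10; n++) {
    float t = n / fs;
    println(cos(TWO_PI * fHigh * t) + "  vs  " + cos(TWO_PI * fAlias * t));
  }
}
```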

Line fitting: least-squares line fitting (see OpenCV).

Fourier: a way of representing a complex sound as a combination of simple waves. This allows you to re-create a sound; you can see it visually in a spectrogram.

You can also take the FFT of an image (it has orientation, unlike the FFT of a sound) and reconstruct an image from its FFT.
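
Minim (bundled with Processing) has an FFT class, so the Fourier idea is easy to see directly. A minimal sketch that draws the live spectrum of whatever is playing; “groove.mp3” is a placeholder audio file.

```
// Take the FFT of the current audio buffer and draw the spectrum.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  player = minim.loadFile("groove.mp3", 1024);
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);               // transform the current buffer
  stroke(255);
  for (int i = 0; i < fft.specSize(); i++) {
    // each band is the amplitude of one frequency component
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}
```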

Noise:

Gaussian noise is most common when observing natural processes

shot noise: bad individual samples (sporadic pops)

Drift noise: linked to time; the sensor gradually degrades.

Filtering:

Local averaging: replace each value with the average of the surrounding local values (use a copy buffer).

Median filtering: gets rid of shot noise really quickly.

Winsorized averaging: a combination of median and mean filtering. It cuts off extreme values and then averages.
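
The difference between averaging and median filtering is easy to see on a small 1D example. A sketch comparing both on a fake noisy sensor stream (slow sine plus Gaussian noise plus a few shot-noise pops, all made up for the demo):

```
// Compare a moving average and a median filter on a fake 1D sensor stream:
// the average smooths Gaussian noise but smears shot-noise pops, while the
// median filter removes the pops cleanly.
float[] raw = new float[400];

void setup() {
  size(400, 300);
  // fake sensor: slow sine + gaussian noise + occasional shot-noise pops
  for (int i = 0; i < raw.length; i++) {
    raw[i] = 150 + 50 * sin(i * 0.05) + randomGaussian() * 5;
    if (random(1) < 0.02) raw[i] += random(-100, 100);
  }
  noLoop();
}

void draw() {
  background(0);
  stroke(100); plot(raw);                          // raw signal: grey
  stroke(255, 0, 0); plot(movingAverage(raw, 5));  // average: red
  stroke(0, 255, 0); plot(medianFilter(raw, 5));   // median: green
}

void plot(float[] v) {
  for (int i = 1; i < v.length; i++) line(i - 1, v[i - 1], i, v[i]);
}

float[] movingAverage(float[] v, int w) {
  float[] out = new float[v.length];               // write into a copy, not in place
  for (int i = 0; i < v.length; i++) {
    float sum = 0; int count = 0;
    for (int j = max(0, i - w/2); j <= min(v.length - 1, i + w/2); j++) { sum += v[j]; count++; }
    out[i] = sum / count;
  }
  return out;
}

float[] medianFilter(float[] v, int w) {
  float[] out = new float[v.length];
  for (int i = 0; i < v.length; i++) {
    float[] window = new float[0];
    for (int j = max(0, i - w/2); j <= min(v.length - 1, i + w/2); j++) window = append(window, v[j]);
    out[i] = sort(window)[window.length / 2];      // middle value of the sorted window
  }
  return out;
}
```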

Convolution kernel filtering (2D): replacing each pixel’s value with a weighted combination of its neighbors. Different pixels can be given different weights.

Kernel: e.g. 3×3 with equal weights. Different kernels can be used to detect edges, etc. (use ImageJ to write your own filters).

Gaussian: e.g. 7×7; pays less attention to the corners.
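
A minimal version of that 2D kernel filtering in Processing: slide a 3×3 kernel over the image and replace each pixel with the weighted sum of its neighbors. Swapping the blur kernel for an edge-detection kernel changes the effect; “photo.jpg” is a placeholder image.

```
// 3x3 convolution kernel filtering. The blur kernel (all 1/9) can be swapped
// for an edge-detection kernel such as {-1,-1,-1},{-1,8,-1},{-1,-1,-1}.
PImage src;
float[][] kernel = {
  { 1/9.0, 1/9.0, 1/9.0 },
  { 1/9.0, 1/9.0, 1/9.0 },
  { 1/9.0, 1/9.0, 1/9.0 }
};

void setup() {
  size(400, 400);
  src = loadImage("photo.jpg");
  src.resize(width, height);
  image(convolve(src, kernel), 0, 0);
}

PImage convolve(PImage img, float[][] k) {
  img.loadPixels();
  PImage out = createImage(img.width, img.height, RGB);  // copy buffer
  out.loadPixels();
  for (int y = 1; y < img.height - 1; y++) {
    for (int x = 1; x < img.width - 1; x++) {
      float r = 0, g = 0, b = 0;
      for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
          color c = img.pixels[(y + ky) * img.width + (x + kx)];
          float w = k[ky + 1][kx + 1];
          r += w * red(c);
          g += w * green(c);
          b += w * blue(c);
        }
      }
      out.pixels[y * img.width + x] = color(constrain(r, 0, 255), constrain(g, 0, 255), constrain(b, 0, 255));
    }
  }
  out.updatePixels();
  return out;
}
```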

Histograms: thresholding – determining foreground and background.

Finding the best threshold: the triangle method usually works. Iso(data) thresholding uses the intersection between different curves.
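
The simplest version of thresholding, as a sketch: every pixel brighter than a cutoff is foreground (white), everything else background (black). Here the cutoff is set with the mouse; the triangle or isodata methods mentioned above would pick it automatically from the histogram. “photo.jpg” is a placeholder image.

```
// Brightness thresholding: separate foreground and background with a cutoff.
PImage src;

void setup() {
  size(400, 400);
  src = loadImage("photo.jpg");
  src.resize(width, height);
  src.loadPixels();
}

void draw() {
  float cutoff = map(mouseX, 0, width, 0, 255);  // move the mouse to change the threshold
  loadPixels();
  for (int i = 0; i < src.pixels.length; i++) {
    pixels[i] = brightness(src.pixels[i]) > cutoff ? color(255) : color(0);
  }
  updatePixels();
}
```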