supercgeek-FinalProposal

I met with CARO yesterday (April 17th) to discuss ideas for a final project collaboration, and we came up with some interesting options:

  1. Spherical Capture and Representation
  2. Small Cube “Object-Based Capture” with Physical Memory
  3. Sound of Things

Anyway, after a lot of time chatting, we decided it would be best to go in our own directions: I’m planning to revisit my Place project for the HoloLens, with a few main goals.

Revamp Goals:

  1. Nail the Craft & Kill (or integrate) the glitch (into the aesthetic)
  2. Use Vuforia Image Targets to create a ‘continuously pinned’ spatial environment that stays synchronized
  3. Realize the time-tear origins of the project with portals that open and close in a convincing and engaging way
  4. Create more scenes (hopefully some outdoors) & make a killer documentation video of the whole thing.

https://vimeo.com/209779985

Place Posts:

cdslls-finalProposal

For my final, I was hoping to either:

  1. further my research with the Schlieren mirror and capture airflow through speech (which was my original goal). After doing some research and finding this paper, which deals with my subject specifically, I have decided to try two different methods: 1) using dry ice to generate cold air that could possibly contrast with breath, and 2) using a high-resolution camera without slow motion, which might help with the loss of detail. If I succeed (a huge if), I will visualize a meaningful piece of text (TBD), without sound.

Or 2) put my LIDAR scans into Google Cardboard.

Bernie-FinalProposal

Quan and I would like to keep playing with the robot arm and the Blackmagic camera. It’s interesting to make tools for people that integrate focus, zoom, and the robot arm. I think it would be cool to try using touchOSC to have people create scores sort of like the ones we created with focus blur, but integrating more parts.

Maybe one designed specifically for the Dolly Zoom effect.

ngdon-finalProposal

I would like to improve my event project as my final project. I think there are a lot of possibilities that can be explored with this tool I now possess.

For the event project, I only had time to demonstrate the tool on horse pictures. As suggested in the group critique comments, I plan to give the subject more thought and apply the algorithm to more interesting datasets.

I will also refine the code: currently my program is basically thrown together and glued with hacks, and running it is a complicated process. Better image registration algorithms were also mentioned in the comments, and I will try them out.
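For reference, here is a minimal sketch of one registration approach I might try: ORB features plus a RANSAC homography in OpenCV. This is not my current pipeline, just an assumption about one possible direction (and a homography only handles roughly planar alignment, so a non-rigid method may ultimately work better for animal photos).

import cv2
import numpy as np

def register(moving, fixed):
    # Warp the BGR image `moving` onto `fixed` using ORB features + a RANSAC homography
    g1 = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(fixed, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)

    # Match descriptors and keep the strongest correspondences
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))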

 

hizlik-finalProposal

In simplest terms, I want to create an applet that “applies Instagram filters to music” or other sound files. I did a lot of research into the methods and techniques with which this could be done, and I have a few options, all of which would technically yield different results. I will have to choose the right balance between what I want and what is possible.

Method 1 – FFT (spectrum analysis)

This is the original method I thought up. The human ear can hear a range of frequencies (roughly 20 Hz – 20 kHz; a 44.1 kHz recording represents up to 22,050 Hz). The applet would, ideally in real time, take each “moment” of sound from a sound file (or mic input?), convert the ~22,000 frequency values into an RGB representation and plot them as an image, apply the selected filter to the image, and read the new RGB values back in pixel by pixel to “rebuild” the sound. To be somewhat recognizable, I was thinking the image would be organized in spiral form: the image is drawn inward starting with the outermost border of pixels, progressing up the frequency range as it goes. This way the bass tones would sit on the outside of the image, and the higher the pitch, the closer to the center. With this mapping, an Instagram filter that applies a vignette effect would act as a bass boost, if I choose for it to behave that way. Of course, the mapping can also be reversed (bass on the inside). A rough sketch of this pipeline follows the pros and cons below.

PROS:

  • Since the filter is applied to the entire sound spectrum, and the spectrum is organized in a human-readable, data-visualization kind of way, it would act as expected and could yield recognizable, fun, and understandable results (deeper bass, modified treble, etc.).
  • Full control over how the sound is visualized and modified. The FFT yields one value per frequency bin (amplitude/volume), which can be mapped to an RGBA value in any way I choose.

CONS:

  • Realtime may be slow or impossible. It may be impossible to “rebuild” the sound from just the modified FFT magnitudes, since each FFT frame is computed over a window of samples and the phase information also matters, I believe. And even if it is possible, it could be very complicated.
  • The FFT has one value per frequency bin (amplitude/volume). A pixel is made up of RGB (or HSB) components, and optionally an alpha value. An Instagram filter will not only modify brightness but possibly hue as well. How do I convert all those variables back and forth to a single amplitude? Would a hue shift result in a tonal shift too? Is brightness volume? Are all “tones”/similar hues averaged for volume? Added? Mean? Mode?
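Below is a rough numpy sketch of Method 1 on a single frame of audio, under my own assumptions: a vignette-style darkening stands in for an “Instagram filter,” the spiral layout is simplified to row-by-row packing, and the original phases are kept so the frame can be rebuilt with an inverse FFT (which sidesteps part of the rebuild concern above).

import numpy as np

def filter_frame(frame, size=128):
    # Analyze one frame (e.g. 2048 samples) of mono audio
    spectrum = np.fft.rfft(frame)
    mags, phases = np.abs(spectrum), np.angle(spectrum)

    # Pack the magnitudes into a size x size "image" (pad the tail with zeros)
    img = np.zeros(size * size)
    img[:len(mags)] = mags
    img = img.reshape(size, size)

    # Stand-in filter: a vignette that darkens pixels far from the center
    y, x = np.mgrid[0:size, 0:size]
    dist = np.hypot(x - size / 2, y - size / 2) / (size / 2)
    img *= np.clip(1.2 - 0.8 * dist, 0, None)

    # Read the pixels back out and rebuild the audio with the original phases
    new_mags = img.reshape(-1)[:len(mags)]
    return np.fft.irfft(new_mags * np.exp(1j * phases), n=len(frame))

Real-time would mean running this per hop on overlapping windows (roughly phase-vocoder territory), which is where it gets complicated.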

Method 2 – Equalizer

This is similar to Method 1 in that it applies the effect as it goes, to the sound at that moment, preferably in real time. However, instead of using an FFT to do a full 22k spectrum analysis, it would go the route of audio equalizers, the ones you’d see in DJ software or HiFi systems. I tried to find examples of this in Python and Java, but they’re hard to find online and I don’t quite understand how I’d do this. The Minim library for Processing has low-pass, high-pass, and band-pass filters, but I’m not quite sure how to adjust them, or how to apply them to specific frequencies rather than just the “high end” and “low end” of the sound. The use of Instagram filters would also be different than in Method 1. I’m not quite sure how it would work, but my thought is: apply the Instagram filter to a constant reference image, “calculate the difference” between before and after pixel by pixel, and apply those differences to specific frequencies or frequency ranges on the equalizer. Essentially the Instagram filters would be run once, perhaps loaded as variables in setup(), and the difference mentioned above would be reduced to a simple mathematical expression to be performed by the equalizer. A rough sketch of this idea follows the pros and cons below.

PROS:

  • Since the filter is applied to the entire sound spectrum, and the spectrum is organized in a human-readable, data-visualization kind of way, it would act as expected and could yield recognizable, fun, and understandable results (deeper bass, modified treble, etc.).
  • Full control over how the sound is visualized and modified. The equalizer can apply one change per frequency band (amplitude/volume), which can be driven by an RGBA value in any way I choose.
  • Realtime is possible, depending on the efficiency of my coding. Other software does this; why can’t mine?

CONS:

  • A pixel is made up of RGB (or HSB) components, and optionally an alpha value. An Instagram filter will not only modify brightness but possibly hue as well. How do I convert all those variables back and forth into equalizer adjustments? How would a hue shift and a brightness shift change things separately?
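Here is a rough numpy sketch of the Method 2 idea, under my own assumptions: one gain per frequency band is derived from the brightness change an image filter makes to a reference image, and those gains are applied to each audio frame as a crude graphic equalizer (no Minim here, just an FFT-based stand-in; the band edges are arbitrary).

import numpy as np

def band_gains_from_filter(before, after, n_bands=10):
    # before/after: grayscale reference images (2D arrays) pre/post "Instagram" filter.
    # Split each image into n_bands horizontal strips; the brightness ratio of each
    # strip becomes the gain of the corresponding frequency band.
    strips_b = np.array_split(before.astype(float), n_bands)
    strips_a = np.array_split(after.astype(float), n_bands)
    return np.array([a.mean() / max(b.mean(), 1e-6)
                     for a, b in zip(strips_a, strips_b)])

def equalize_frame(frame, gains, sample_rate=44100):
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Assign each FFT bin to one of the bands (log-spaced between 20 Hz and 20 kHz)
    edges = np.logspace(np.log10(20), np.log10(20000), len(gains) + 1)
    band = np.clip(np.digitize(freqs, edges) - 1, 0, len(gains) - 1)
    return np.fft.irfft(spectrum * gains[band], n=len(frame))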

Method 3 – Song File

This is perhaps my least favorite idea because it applies to the entire song at once, which may yield results that don’t quite make sense. In this method I would essentially read a sound file’s encoding, such as .wav or .mp3, and either somehow decode it into readable sound data OR directly convert the file’s encoded bytes linearly into some kind of image (hexadecimal to RGB hex?), so that the image represents the music file as a whole. However, applying an Instagram filter to it would yield weird results. In Method 1, a vignette could act as a bass boost; here it might just boost the volume at the beginning and end of the song, for example. The other glaring problem is that messing with the encoding may just result in absolute gibberish noise. A rough sketch follows the pros and cons below.

PROS:

  • Potentially very fast execution, and it could spit out a new, savable file as well (or just play it back, which probably requires saving the file anyway and re-loading it with a proper MP3/WAV reader library).

CONS:

  • Potentially garbage noise, or breaking the encoding/file entirely
  • Extremely complicated to encode/decode music files, or even just read them
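Here is a rough sketch of Method 3 on an uncompressed WAV (not MP3; decoding MP3 by hand is exactly the “extremely complicated” part). The raw sample bytes are reinterpreted as grayscale pixels, run through a Pillow contrast enhancement standing in for an “Instagram filter,” and written back; the file names are placeholders and the result is expected to be noisy.

import wave
import numpy as np
from PIL import Image, ImageEnhance   # Pillow stands in for the "Instagram filter"

with wave.open("song.wav", "rb") as f:              # hypothetical input file
    params, raw = f.getparams(), f.readframes(f.getnframes())

# Treat every byte of sample data as one grayscale pixel in a square image
samples = np.frombuffer(raw, dtype=np.uint8)
side = int(np.ceil(np.sqrt(len(samples))))
canvas = np.zeros(side * side, dtype=np.uint8)
canvas[:len(samples)] = samples

img = Image.fromarray(canvas.reshape(side, side), mode="L")
img = ImageEnhance.Contrast(img).enhance(1.5)       # apply the "filter"

# Read the pixels back out and write them as audio again
out = np.asarray(img).reshape(-1)[:len(samples)].tobytes()
with wave.open("song_filtered.wav", "wb") as f:
    f.setparams(params)
    f.writeframes(out)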

Quan-FinalProposal

For the last project, I want to continue working with Evi on the Robot Arm, but this time I want to put the camera ON the arm. With this, I was interested in doing some 3D-tracking and matching that with some sort of dolly-zoom technique. I think this would allow for some interesting effects.

Another thing I was interested in trying out is connecting certain settings on the camera to touchOSC on a phone. It’s interesting how we would be translating a physical interface (zoom ring / focus ring) into another touch interface (the phone).
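If I go that route, a minimal sketch of the listening side using the python-osc library might look like the following; the /1/fader* addresses and port 8000 are touchOSC defaults and an assumption, and the actual camera/robot control code is not shown.

from pythonosc.dispatcher import Dispatcher
from pythonosc import osc_server

def on_fader(address, value):
    # e.g. map a 0..1 fader value to a focus or zoom setting here
    print(address, value)

dispatcher = Dispatcher()
dispatcher.map("/1/fader1", on_fader)   # focus
dispatcher.map("/1/fader2", on_fader)   # zoom

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()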

I have several other ideas, but I want to continue with Robot Arm dev with Bernie!

mikob – final proposal

For my final project, I would like to revisit my Place project about answers to password security questions. As was suggested, I’m going to build an actual interface that depicts the realistic experience of answering password security questions when registering as a new user. For the previous project I narrowed my questions down to ones relevant to a place, but this time I want to curate a wider variety of questions. Also, instead of using Mechanical Turk, I will reach out to someone Golan knows who runs a company that distributes surveys, so I can get more answers of better quality.

 

fourth-Final-Proposal

For my final project, I would mainly like to collaborate with somebody else on a new project. My best work in previous semesters has come out of collaboration and this class is a great chance to do something remarkable that I couldn’t do alone.

That said, I would like to revisit the high-speed camera portraits. Instead of exploring in-between expressions, I would like to explore time-remapping and ‘busy’ scenes.

I love the idea of high-speed portraits captured as portraits (not as “slow-mo shots”), but in high speed, and would like to explore this further and take better advantage of the tool. Things like moving the high-speed camera to create depth and parallax in a portrait with a lot ‘else’ going on – think throwing colored flour and so forth – in an aesthetic that is lit well and visually stunning as hell; I know these tools can bring something new and exciting to portraiture, but I’ve yet to really get to it.

Imagine this but slowly moving. Still, basically, a still frame; just a spark of life and character and expression; more cinemagraph than videography.

gloeilamp-finalProposal

An inspiration: Tauba Auerbach’s RGB Colorspace Atlas

For the final project I would like to further investigate video as a volumetric form. I began this during the first project with my time-remapped stereo videos, but would like to explore some other possibilities.

One workflow I am imagining would be:

  • Capture a scene with a specific subject, perhaps a person, in high-res video
  • Capture the same scene through photogrammetry or otherwise develop a watertight 3d model of the subject
  • Create a voxel representation of the video from the XYT domain
  • Intersect the 3D model of the subject with the voxels of the volumetric video

This would, I imagine, result in the subject’s movements over time being represented on their body as a texture. It could look pretty nuts.
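As a starting point, here is a rough sketch of building the XYT voxel volume with OpenCV and numpy (the clip name and resolution are placeholders). Slicing the volume is where the “sculpting” would happen; intersecting it with a photogrammetry mesh would come later, in Unity or similar.

import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")            # hypothetical input clip
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(cv2.resize(gray, (256, 256)))    # downsample to keep the volume small
cap.release()

volume = np.stack(frames, axis=0)              # shape: (T, Y, X), the voxel volume

# One flat "cut" through the volume: a single row of pixels traced over all frames
time_slice = volume[:, 128, :]                 # a simple slit-scan image, shape (T, X)
cv2.imwrite("slice.png", time_slice)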

Other possibilities for exploring these video volumes could be:

  • Produce the voxel volume of a video as a digital material that one can actually “sculpt” into, revealing different moments in time. Workflow would likely involve Processing + ImageJ + Unity, and output would likely be a VR or game-like experience.
  • A higher quality version of the stereo time-remapping I did with the bloggie camera. This would involve mounting and genlocking two DSLRs, potentially controlling through OpenFrameworks. This could also be an opportunity to revisit depth from stereo.
  • Underwater – I could do more underwater explorations with two waterproof GoPros (knockoffs)
  • A book – similar to Tauba Auerbach’s work, I could produce these volumetric videos as flipbooks

Bierro-finalProposal

For my final project, I would like to refine my Place project about Shibuya crossing in Japan. I felt I was lacking the time to craft my result, and I want to spend more time on it. This will involve grabbing the camera feed in my OF app. I am running Windows, so I can’t use Syphon, but I will try a similar approach using Spout, the Windows equivalent. I will also work on the layout of my app and tweak the parameters of my algorithms so that the graphs best represent the dynamics of the place.

In the next few days I will also finish editing my Event video, and I will ask around to see whether it is actually more worth showing in the final exhibition than the Place project.

fatik – final Proposal

For my final I want to continue playing with DepthKit. I had a really fun time using it, and now that I know how to use it I’m planning on going to different concerts to capture a variety of crowds. In the end I was thinking of trying to make a VR experience with the footage, so that people can experience being in the space as well. It’s also Carnival, so finding masses of people won’t be that difficult.

 

iciaiot-final-proposal

For my final project, I’ll be working on my senior capstone. This is an interactive piece made in Unity with objects reconstructed from memory using clay, fabric, glue, plexiglass, yarn, and hand drawings. I am using PhotoScan to put these objects into an interactable world. I will use a sheet as a backdrop to project the piece onto, and a Kinect will allow the user to navigate the space.

caro-finalProposal

New Project: The Sound of Things

Could you modulate light waves into the audible domain? Or audio waves to the visible spectrum? What results would you get? Could you convert an audio file into a photograph?

What do your photos sound like? Or, what do your sounds look like?

I’d like to explore this in my project.

  1. Get a photo
  2. Pixel by pixel, find the color value and the frequency of that color
  3. Modulate that frequency to the audible domain
  4. Stitch together each “pixel” of sound to create an audio file

Reverse this process to create images out of sound.
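A rough sketch of the image-to-sound direction, under my own assumptions: each pixel’s hue picks a point in the visible-light frequency range (~400–790 THz), which is linearly rescaled into an audible range (200–2,000 Hz here, an arbitrary choice) and played as a short sine tone whose loudness comes from the pixel’s brightness. The file names are placeholders.

import colorsys
import numpy as np
from PIL import Image
from scipy.io import wavfile

SR, TONE_LEN = 44100, 0.02                       # 20 ms of sound per pixel

img = Image.open("photo.jpg").convert("RGB").resize((32, 32))   # hypothetical input
t = np.arange(int(SR * TONE_LEN)) / SR

chunks = []
for r, g, b in img.getdata():
    hue, _, brightness = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    light_thz = 400 + 390 * hue                  # crude hue -> visible-light frequency (THz)
    audio_hz = 200 + 1800 * (light_thz - 400) / 390   # rescale into the audible range
    chunks.append(brightness * np.sin(2 * np.pi * audio_hz * t))

audio = np.concatenate(chunks)
wavfile.write("photo.wav", SR, (audio * 32767).astype(np.int16))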

Could you create an image out of the ambient noise of a room? What would it look like?

Could you intentionally compose music to create a photo?

 

“Colours and their sounds” http://altered-states.net/barry/newsletter346/colorchart.htm

Prometheus: The Poem of Fire, a piece of music intended to create certain colors https://en.wikipedia.org/wiki/Prometheus:_The_Poem_of_Fire

https://en.wikipedia.org/wiki/Spectral_color

PhotoSounder (costs $90) http://photosounder.com/

Similar project http://www.gramschmalz.com/encoding-images-as-sound-decoding-via-spectrogram/

Similar project http://www.npr.org/sections/pictureshow/2014/04/09/262386815/can-you-hear-a-photo-see-a-sound-artist-adam-brown-thinks-so