I am still working on looping, and will experiment tomorrow with rotoscoping and looping around a 360 video.

After talking with other students, I was inspired to work on a project combining strobes and continuous light. I would also like to play with focus blur and this effect, but I need more hands to do that.

Flash/continuous mixing is not a new technique; combining the two is a necessary tool for photographers who shoot with flash outdoors. The flash output is not affected by the shutter speed, so a photographer can change the shutter speed to get independent, in-camera control of the balance between flash light and continuous light. It’s a very powerful (and difficult) tool for photographers to master.
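That shutter-speed lever can be put in numbers. A small sketch of my own (not from any camera manual) of how the ambient exposure shifts while the flash exposure stays put:

```python
from math import log2

def ambient_stops(new_shutter_s, old_shutter_s):
    """Change in ambient (continuous-light) exposure, in stops, when only
    the shutter speed changes. The flash-lit part of the frame is unaffected
    because the flash burst is far shorter than any normal shutter time."""
    return log2(new_shutter_s / old_shutter_s)

# Going from 1/60 s to 1/250 s darkens the ambient by about two stops,
# shifting the in-camera balance toward the flash.
print(round(ambient_stops(1/250, 1/60), 2))  # -2.06
```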

This is something I have done before for creative ends, as seen above, where I shook my camera while I fired my flash.

I set up a quick studio, a black backdrop, a continuous LED light (right), and a flash (top left, above lamp) with a grid on it (to control spill). The standing lamp (bright spot, left) was not used during the exposure.

There is one clever thing I did: I put a warm gel filter over my continuous light and a cooling gel filter over my flash. By shooting RAW and adjusting the white balance, I had independent control over both of these lights. I could push them further blue or further orange. Doing this, I mixed to black and white. In the black and white mix, I can adjust the white balance and the brightness of my blue and orange channels.

This way, I could fine tune the balance after-the-fact, and edit my image with much more precision and control over the perceived lighting and shadows than if I had used masks, or dodging/burning.
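As a toy illustration of why the gels give per-light control in the black-and-white mix (the light colors and intensities here are made up, not measured from my setup):

```python
def add_scaled(a, b, sa, sb):
    """Blend two RGB triples with scalar weights."""
    return tuple(sa * x + sb * y for x, y in zip(a, b))

# Hypothetical per-light colors (R, G, B):
led = (0.9, 0.5, 0.1)    # warm-gelled continuous LED
flash = (0.1, 0.5, 0.9)  # cool-gelled flash

# One pixel lit by both lights at different intensities:
scene = add_scaled(led, flash, 0.7, 0.4)

def bw_mix(pixel, r_weight, b_weight, g_weight=0.0):
    """Black-and-white conversion as a weighted channel mix, the same
    control a RAW editor's B&W panel exposes."""
    r, g, b = pixel
    return r_weight * r + g_weight * g + b_weight * b

# Weighting red up brightens the LED-lit areas; weighting blue up
# brightens the flash-lit areas: per-light dodging without masks.
led_heavy = bw_mix(scene, r_weight=1.0, b_weight=0.0)
flash_heavy = bw_mix(scene, r_weight=0.0, b_weight=1.0)
print(round(led_heavy, 2), round(flash_heavy, 2))  # 0.67 0.43
```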

These are all from the same source image. The left images are adjustments of white balance, and the right images include white balance and black/white mix adjustments to get different looks. Notice the shadows on the image-right side of my face in the BW photos above.

Moving around during the exposure gets you the standard “multi-exposure” “two-face” look, which is pretty cliché. Heisler nailed it; I see no reason to just make a bad imitation.

Attempting to stay still, however, proved more interesting. Uncanny, you might say.

Next, the actual direction I may take my final event project. For this image, I removed the color filters from both lights. The color version still looks like junk, so I mixed to black/white here (okay, with some split toning to bring back my blue/orange palette in spirit), but I may shoot color or go for a more documentarian (read: ‘non-artsy’, ‘point-and-shoot’, ‘just captured a slice of life’) feel for a final project.

I light different parts of a scene with continuous and strobe light. This gives the continuous blur a sense of movement – I capture an event – but allows me to highlight a part (a face) of this event with clarity, all in-camera. In this case the event is drinking a beer. The continuous light barely lights my face at all, so the beer is gone when I raise it up. You see me fidgeting, crossing/uncrossing my legs, moving the beer, and so on. Note: this image is intentionally underexposed (I dropped it down in post).

Again, this image was created in-camera, colors/tones/acne edited in post for fun/habit.

If I did pursue this technique for my final project, I would get a model so I could focus on the photography side of things. I’d like to capture individuals performing different actions, pushing motion to a greater extreme than would be possible without a flash.


Up and Running with DepthKit!

Currently I have DepthKit Capture and Visualize installed on the studio’s Veggie Dumpling PC. It worked with the Kinect V2 nearly immediately (a small filepath error needed to be corrected), so DepthKit in the studio is now a go.

Further, after exporting the video from DepthKit Visualize (Visualize exists for Mac OS/OSX but is really not usable, at least on 10.12.1, as the UI is invisible), I successfully dropped the clip into Unity.

Next steps: 

I am constructing a watertight box for the Kinect to begin underwater testing. In-studio testing has shown that high-quality cast acrylic does not interfere with the sensing ability of the Kinect. I will tentatively use the CMU pool next week (meeting with the aquatics director soon). Once I have a better idea about the sensing limitations in water, I will know what types of shots I am able to achieve. Further research into different types of underwater sensing has led me to be more optimistic; time-of-flight depth cameras have shown good results in papers like this.

Once the capture has been made, the question of the final media object remains. Now that I am able to get the clips into Unity, I have the possibility of making this a VR experience. I have a feeling that with the proper lighting and scene design it could be nice in the Oculus Rift. Otherwise, I will make a video or a series of GIFs.



The box is constructed, but I still want to shore up the sealing. The area around the cord was not watertight despite my best efforts with silicone. I may need to redesign that face of the box. (Don’t panic, the Kinect is dry!)

Tests so far: splashes and ripples look pretty cool!

Submerged tests: mixed feelings on the results here so far; the actual depth being sensed seems pretty tiny. Definitely noisy. Looks more low-relief than anything else. That said, there is at least a visible output!
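One thing I may try against the noise is a per-pixel temporal median over a few consecutive depth frames. A toy sketch (the values below are stand-ins, not real Kinect data):

```python
from statistics import median

def temporal_median(frames):
    """frames: list of equally sized 2D lists of depth values (mm).
    Returns one denoised frame by taking the median of each pixel
    across the frames, which rejects single-frame spikes."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# Three 1x3 toy frames with one noise spike in the middle pixel:
frames = [[[800, 810, 820]],
          [[801, 2500, 821]],   # spike, e.g. from IR scatter in the water
          [[799, 812, 819]]]
print(temporal_median(frames))  # [[800, 812, 820]] -- the spike is gone
```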

Working in the Browser!!! 

I’m trying to learn about a few different things here: chiefly, JavaScript, WebGL, three.js, and shaders. Cobbling some things together, I have my model up in Chrome! I am working to write my own shader that fits better with the watery captures, as well as making the models more satisfyingly interactive.


I’m going to experiment with motion capture. The event I want to capture is the formation of a skeleton using the software. I plan on covering the floor in the mocap studio with retroreflective balls, covering my body in double-sided tape, and rolling around in the balls until my body is covered. I’m interested in playing with the points in 3D space as well as figuring out ways to map the body in space without using a rigged skeleton.


My event is the moment when things break. I want to track the patterns and paths of the pieces of broken objects. I am thinking of breaking many different objects to capture how they break, perhaps with a 360 camera placed inside the object originally (but I need to make sure the camera doesn’t break, so this probably won’t work), or maybe taking a long-exposure shot of the process. My media object would be a series of photos of different objects breaking.


I’ve been working with data from a shoot I did recently in the Panoptic Studio (a PhD project in Robotics and CS). The Panoptic Studio is a dome with 480 VGA cameras, 30+ HD cameras, 10 Kinects, hardware-based sync, and calibration for multi-person motion capture.

The output is in the form of skeletons (like traditional mocap), and also dense point clouds that can be meshed. I filmed two dancers in this dome, and am working on ways to express the data, primarily working with the point clouds and meshing these to create an animation.

The event I’d be capturing is the interaction between the two dancers. I’m interested in this as a prototype for understanding how to work with this data, as there is not much documentation on it. I’ve been working with this data using Meshlab and Blender, but am interested in potentially working with OF to create spheres on the individual points in the point cloud, to create usable geometry.
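To get the point clouds into OF, the first step is reading the points out of the exported files. A minimal sketch for ASCII PLY, assuming x, y, z are the first three vertex properties (true of typical Meshlab exports, but worth checking against the actual headers):

```python
def read_ascii_ply_points(text):
    """Minimal parser for ASCII PLY point clouds: returns a list of
    (x, y, z) tuples. Ignores colors, normals, and faces."""
    lines = text.strip().splitlines()
    assert lines[0].strip() == "ply"
    n = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n = int(line.split()[-1])
        if line.strip() == "end_header":
            body = lines[i + 1 : i + 1 + n]
            break
    return [tuple(float(v) for v in l.split()[:3]) for l in body]

sample = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0.0 1.0 2.0
3.5 -1.0 0.25
"""
print(read_ascii_ply_points(sample))  # [(0.0, 1.0, 2.0), (3.5, -1.0, 0.25)]
```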

Here’s what I’ve been doing so far:


For my event assignment, I had two ideas:

My first idea was inspired by Kyle McDonald’s openFrameworks project that superimposes open eyes every time you blink. I thought it would be funny and interesting to capture a video of something really boring (like 213 lecture), and later edit the video so that the students’ eyes are superimposed with a set of fake eyes.


My second idea was inspired by Golan’s super-sensitive force module, which can detect the surface of your palm. I was wondering how accurate it could be, and whether I could potentially develop more on it. My idea was to train something that could recognize someone’s identity based on a social interaction, such as a fist bump. I’m not sure how distinguishable one’s fist bump could be, but I thought it would be cool if this sensor could somehow identify the person.

DMGordon – Event Proposal

For my event, I plan to train a neural net to ‘undecay’ images using a Generative Adversarial Network. The dataset is pairs of images taken from YouTube time-lapse videos of rotting food. I will train a discriminator to recognize fresh food. The generator will be fed images of rotten food, and its output will be judged by the fresh-food-recognizing discriminator. After sufficient training, we can feed any image into the generator for an ‘undecayed’ output.
While I’ve started compiling my dataset, I only have around 15 image pairs, and will need at least 20 times that to get any sort of interesting generator output. Also, to generate high-resolution images I will either need a gigantic network or some form of invertible feature extractor, neither of which I have experience with.
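One way to bulk up the dataset is to pair frames automatically from each time-lapse. This sketch shows a hypothetical pairing scheme (the `fresh_window` cutoff is a made-up knob, not something validated against real footage):

```python
def make_pairs(frames, fresh_window=5):
    """Pair early ('fresh') frames of a rot time-lapse with each later
    ('rotten') frame: (input=rotten, target=fresh), pix2pix-style.
    frames: filenames sorted by capture time. fresh_window: how many of
    the first frames still count as fresh (tune per video)."""
    fresh = frames[:fresh_window]
    rotten = frames[fresh_window:]
    # Cycle through the fresh frames so every rotten frame gets a target.
    return [(rot, fresh[i % len(fresh)]) for i, rot in enumerate(rotten)]

frames = [f"frame_{i:03d}.png" for i in range(12)]
pairs = make_pairs(frames, fresh_window=3)
print(len(pairs))   # 9 rotten frames, each paired with a fresh target
print(pairs[0])     # ('frame_003.png', 'frame_000.png')
```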


I would like to explore looping. I am fascinated with perfectly looping GIFs (remembering Loop Findr) and how we look at these loops (and canons) differently, as events, from ones that either do not loop perfectly or do not loop at all.

I would like to use video editing magic to create impossibly looping GIFs of my morning routine.

Any routine element is conceptually amenable to being shown as a loop, isolating the event from the world around it and showing it in its repetitive, unchanging nature.

Experimentally, I want the GIFs to create a reaction of “wait, what?” or “how did that happen?”, the magical impossibility of the video drawing attention to, yet juxtaposed by, the repetition and banality of the event.


sayers – Event Proposal

For my event project, I would like to focus on the event of rain and the ideas of erosion that come from it.  I don’t know why I have a thing about rocks/the underground.  I would like to create a custom-controller game that would be played by the weather.

The game will be played on a cellphone with a custom interface: a small sensor that attaches to the phone (with a waterproof case). The sensor will be a small square with four quadrants, and it will pick up if a raindrop hits one of them. The game will also get your GPS coordinates and check online to see if it is raining (no cheating with an eyedropper). If a raindrop hits one of the four quadrants, the in-game raindrop will explode in that direction. You could try to watch the raindrops and move your phone so that one lands on the desired quadrant, or you could let nature do its thing. When enough raindrops hit a quadrant, it will begin to disintegrate on the screen, uncovering objects in the strata of sediment.
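The quadrant logic itself is simple. A sketch of the hit detection and erosion counting (the threshold, sensor size, and quadrant layout are placeholders, not specs from the proposal):

```python
def hit_quadrant(x, y, size):
    """Map a raindrop impact at (x, y) on a size x size sensor to one of
    four quadrants: 0=NW, 1=NE, 2=SW, 3=SE."""
    col = 1 if x >= size / 2 else 0
    row = 1 if y >= size / 2 else 0
    return row * 2 + col

# Erosion bookkeeping: a quadrant starts disintegrating after enough hits.
EROSION_THRESHOLD = 10
hits = [0, 0, 0, 0]

def register_drop(x, y, size=100):
    """Record a drop; returns True once the quadrant should disintegrate,
    uncovering the strata beneath it."""
    q = hit_quadrant(x, y, size)
    hits[q] += 1
    return hits[q] >= EROSION_THRESHOLD

print(hit_quadrant(10, 10, 100))   # 0 (NW)
print(hit_quadrant(90, 90, 100))   # 3 (SE)
```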


Other Idea:

Surfing-type game that you play as a child in the car.

Pressing at Jump points or else crash.

A HoloLens project using edge detection.


I have gotten a cube to appear on the HoloLens and now understand much more about how to develop for it.

I have also been doing research on how I might do edge detection quickly on the HoloLens. It doesn’t seem like there is one clear option. The main thing the HoloLens uses is spatial mapping (specifically, I could use the low-level Unity spatial mapping API). This is very computationally intensive, though, and would probably only work in an already-mapped area (so not out of a car window). The other option I could explore would be to get the camera feed out of the HoloLens, put it into a Processing/openFrameworks sketch that would give me the coordinates of the edges in a silhouette (using some kind of edge detection for video), then send the data back to the HoloLens and compute where the figure should be. Also, since this is mixed reality, everything would have to happen in real time with next to no lag. I’m not completely sure whether I have the technical abilities, or whether the technology is there yet, to get this done quickly and efficiently.
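The off-device edge pass could start as simple as a gradient threshold before reaching for anything fancier. A toy version (a real pipeline would run something like OpenCV’s Canny, or an openFrameworks equivalent, on the live camera feed):

```python
def edge_mask(img, threshold=50):
    """Toy silhouette edge detector: marks pixels where the horizontal or
    vertical intensity difference exceeds a threshold. img is a 2D list
    of grayscale values 0-255; the last row/column are left unmarked."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for r in range(h - 1):
        for c in range(w - 1):
            dx = abs(img[r][c + 1] - img[r][c])
            dy = abs(img[r + 1][c] - img[r][c])
            edges[r][c] = max(dx, dy) > threshold
    return edges

# A dark silhouette (0) against a bright sky (255):
img = [[255, 255, 0, 0],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]
mask = edge_mask(img)
print([row.index(True) for row in mask[:2]])  # [1, 1]: boundary at column 1
```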

If this proves too difficult, one thing I may do is use Vuforia within the HoloLens to create small creatures/people hanging from street signs. For example, if I saw a stop sign, Vuforia would recognize the general shape/look of a stop sign and attach a 3D model (in various forms) to the sign. This would also create a little animate world.

Can I screenshot the holograms in the HoloLens?


For the Event project, I am interested in using the robot arm at the studio. I figured that the opportunity to use such a machine might not come again soon for me, and I wanted to take advantage of it.

I was struggling for quite a bit to come up with an idea, but at some point I remembered the Kylie Minogue music video directed by Michel Gondry which was shown in class. Once you have a robot and a camera, the same looping process seems applicable for showing shots simultaneously as they are recorded.

I haven’t fully fleshed out the event that I will record with it, but one idea I had was to show a set of marbles going down a slide in a loop. If the timing is accurate, with one continuous take, the output would be a series of marbles increasing in number going down that slide, then revealing the mechanical phenomena causing the varying distances between the marbles.
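The timing works out to simple arithmetic if the marbles move at a roughly constant speed (a simplification; a real slide accelerates them):

```python
def marble_positions(t, loop_period, speed, slide_length):
    """Positions (distance along the slide, in meters) of all marbles
    still on the slide at time t, if one marble is released at the start
    of every camera loop. Uniform speed is an assumption."""
    positions = []
    k = 0
    while k * loop_period <= t:
        d = (t - k * loop_period) * speed
        if d <= slide_length:
            positions.append(d)
        k += 1
    return positions

# 4 s loop, 0.5 m/s marbles, 3 m slide: at t = 8 s two marbles are still
# on the slide, spaced loop_period * speed = 2 m apart (the first marble
# has already rolled off the end).
print(marble_positions(8, 4, 0.5, 3))  # [2.0, 0.0]
```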


My event is pregnancy. For this project, I will be investigating pregnancy as a spectacle. I think it’s hilarious that hundreds of women post the exact same pregnancy photos on the internet for any weirdo to use for art projects. #pregnant #20weeks #pregnancy

I want to compile images of pregnant women from Instagram, and animate them from most to least pregnant. Ultimately I want to turn this into a music visualizer. I haven’t fully decided what type of music, but probably something overly sexualized. The reasoning for the soundtrack is that pregnancy is societally this beautiful ethereal thing, but we all know how these ladies got pregnant. It’s also pretty funny and jarring to hear super sexual songs beside selfies of pregnant ladies.
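Ordering the images by how pregnant could come straight from the hashtags. A sketch, assuming the captions carry a `#NNweeks` tag as in the examples above (they won’t all, so a real version needs a fallback):

```python
import re

def weeks_pregnant(caption):
    """Pull the week count out of an Instagram-style caption, e.g.
    '#20weeks' or '#pregnant #32weeks'. Returns None if no week tag."""
    m = re.search(r"#(\d+)\s*weeks?\b", caption, re.IGNORECASE)
    return int(m.group(1)) if m else None

captions = ["#pregnant #32weeks", "#20weeks #pregnancy", "#pregnant #8weeks"]
ordered = sorted(captions, key=weeks_pregnant, reverse=True)
print(ordered[0])  # most pregnant first: '#pregnant #32weeks'
```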

Here are the funniest ones

I did a run at animating the pregnant ladies, but it’s pretty jittery because of the variation in the images. I want to redo this process with more cohesive images so it isn’t so jarring. I tried making it gray and also removing the background but it’s still not ideal.


Also I got over-excited and already built a lot of the music visualizer component. I really need to go back in and create better visuals though, because right now they’re lacking.

1st Draft (hear the music): PregTunes Draft Video

Play with the current draft at: https://caro/io/pregtunes (soon to be

^(warning slow-loading and not good yet)


  • Improve loading. Make loading animation. Also lazily load songs on click so it doesn’t take forever.
  • Domain name
  • Autoplay songs after they’re done
  • Icons and drawings for playing music
  • Visuals of visualizer: make them a lot better
  • Pick the rest of the songs (I currently have 2-3)
  • Upload your own song
  • Dot is a fetus instead of a dot?


I’ve been itching to use the Schlieren Mirror ever since it was introduced in class, and what better assignment than “Event”? Because the Schlieren Mirror is such a methodical and scientific tool, I was hoping to depict something that traditionally is not. More specifically, I am interested in human behavior and communication.

As someone who encounters differences in semantics every single day, I am specifically interested in miscomprehension between cultures and languages. Metaphors and expressions are a very specific and suitable example of this. With the Schlieren Mirror, I am hoping to visualize the abstract, be it in semantics or physical forces.


mikob – Event Proposal

I’ve recently been growing an interest in 360/VR filmmaking. Capturing the world in 360 can break the way we traditionally think about perspective. I want to capture and present a reality that may not be possible, but I’m still unsure where to go with this yet! I want to take a look at the 360 cameras and play around with them.




I have so far 5 proposals—feeling indecisive.

  1. Inspired by the Deep Time Walk, this project is a spatiotemporal representation of any timeline. In this project I would map a timeline to a set distance that one would walk / drive through. The timeline would begin at the start of the walk, and end at the end. I am interested to see how history would be perceived differently through this system—I suspect a more bodily and intuitive understanding of the scale of time would be gained, more than just numbers. (Side note: I have already constructed the code for this)
    1. Anyone would be able to select any timeline or create their own.
    2. The communication would likely be some sort of audio file.
    3. Code is already constructed.
  3. The Kairos Watch. This would be a weighted 24-hour clock/watch, more in tune with the idea that we weight certain times of the day more heavily than others. For example, we don’t place much significance on the time we spend sleeping, so on the clock the hours from midnight to 8 AM could be 1/12 of the watch face, while 8 AM–10 AM could be a far greater portion, because that time is valuable. You might not care about keeping track of the time you spend in class, but want to maximize your work time.
    1. This is a personalized system—down to the day.
  4. Gigapan video/ Gigapan slow mo video (Robot arm??) or 360 Slow mo video
    1. Less meaning, but would be a cooler capture technique that I would 100% be excited to figure out.
  5. Develop a way to fill in the empty spaces w/ photogrammetry
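The Kairos Watch mapping (proposal 3) reduces to a piecewise-linear remap of the day onto the dial. A sketch with made-up weights:

```python
def kairos_angle(hour, segments):
    """Map an hour of the day (0-24) to an angle (degrees) on a weighted
    watch face. segments: list of (end_hour, weight) in order; each
    segment's share of the 360-degree face is proportional to
    weight * duration, so 'valuable' hours take up more of the dial."""
    total = 0.0
    start = 0
    for end, w in segments:
        total += w * (end - start)
        start = end
    angle = 0.0
    start = 0
    for end, w in segments:
        if hour <= end:
            angle += w * (hour - start)
            break
        angle += w * (end - start)
        start = end
    return 360.0 * angle / total

# Sleep (midnight-8 AM) weighted low, morning work (8-10) high:
segments = [(8, 0.25), (10, 6.0), (24, 1.0)]
print(round(kairos_angle(8, segments), 1))   # sleep is a sliver of the face
print(round(kairos_angle(10, segments), 1))  # two work hours span far more
```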


In further exploration of the Canon SDK, I think it would be fun to play with focus blur. The repeatability of the arm, plus the Canon camera being controlled by an app, would make this a great way to conduct experiments in focus blur. Since focus blur on moving light is extremely hard to capture with fireworks (a very quick event), I would experiment with focus blur on LED lights indoors. I would perform this experiment on a few different shapes with various focal blurs and see what happens!


For my event project, I am largely interested in investigating things that happen underwater. I have two possible methods for capture:

  1. Depthkit underwater. I am constructing a polycarbonate underwater housing for the Kinect v1 (model 1414), so that I can capture RGBD video underwater in the pool. I am especially curious to see how rising bubbles will appear in RGBD, so I will work on capturing a few different scenarios: a person splashing into the pool, creating many chaotic bubbles (medium-distance shot); a person holding their breath underwater, slowly releasing bubbles (close-up shot); and releasing bubbles from a container (close-up shot, no people, only bubbles).
    1. The main uncertainty with this project is the great attenuation of near-IR wavelengths in water: the depth-sensing abilities of the Kinect will be quite compromised. I am hoping that the system will at least be usable within distances of ~1 m.
  2. Sonar slitscan depth mapping. Using a sonar pod marketed to fishermen for “fishfinding,” I would like to create a sonar depth map (bathymetric map) of local bodies of water. I have scouted out locations and will most likely use the Squaw Valley Park pond (a known RC boating spot) for initial tests, and then take the system into non-manmade locations like North Park for richer mapping opportunities. The pod delivers a graph of the recorded sonar response over time, so I will systematically tow it with an RC boat in a grid pattern to obtain the bottom contours of the lake.
    1. My main concern with this project is the terrible online reviews of these pods. Depending on the model, they connect over bluetooth or a wifi hotspot, and many online reviewers state that the connection drops frequently or will not connect at all.
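Rough back-of-envelope support for the IR worry, using Beer–Lambert attenuation. The ~4.3/m coefficient is an approximate absorption value for water near the Kinect’s ~850 nm band; real numbers vary with wavelength and water clarity:

```python
from math import exp

def ir_return_fraction(distance_m, absorption_per_m=4.3):
    """Fraction of near-IR light surviving the round trip to a target
    distance_m away, by Beer-Lambert attenuation (factor of 2 for the
    out-and-back path). Ignores scattering and geometric falloff."""
    return exp(-absorption_per_m * 2 * distance_m)

# At 1 m the returning signal is down to roughly 0.02% of what it would
# be in air -- so usable range will likely be well under a meter.
for d in (0.25, 0.5, 1.0):
    print(f"{d} m: {ir_return_fraction(d):.2e}")
```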



I currently have two ideas:

1. I want to make a machine that finds the essence of any event by taking lots of photos/videos of the event as input. The program then uses computer vision and/or machine learning techniques to study the data. It will align the similarities and sort the differences. The product could be a video or a multi-dimensional visualization. I don’t have a detailed plan yet.

2. I want to capture the formation of boogers. I’m always wondering how, all of a sudden, I have a booger in my nose. I can probably mount a device consisting of a small webcam and a light source below my nose to find out.

hizlik – Event Proposal

After Golan showed the Rendezvous piece at the beginning of class, I couldn’t stop thinking about a cool and unique way to remake it. I had an idea similar in content to Rendezvous, but with a twist that will hopefully shock the audience.

I would create a similar aesthetic to Rendezvous by mounting a camera with a medium-narrow FOV very low on the front bumper of a car. Initially I would recreate the urgency and daringness of the Rendezvous driver, but in reality I would have the camera attached to a small RC car (not a real car). In addition to being a safer way to film such a task, this presents a unique visual opportunity to “play” with the audience. They think a real car is being used, so I would start presenting situations that should hopefully scare them. For example, speeding down a city street lined with parked cars and driveways when suddenly a car reverses out of a driveway right in front of our driver! The audience thinks it’ll cause an accident, but WHOOSH, the camera goes right under the car backing out (because it’s an RC car). After that initial reveal of this deception of size, I may use it for other things (like driving on sidewalks or through buildings). Then, when the car reaches its final destination, it may park in front of a reflective surface (or the driver will detach the camera and show the car) to show that it was a full-size vehicle all along.

An addition to this idea would be to also shoot it in stereoscopic video (using two GoPros, for example, to create a parallax effect) and watch the film in VR (as well as a 2D version for online sharing). I’ve never done that before, and it would be interesting to both build the setup and edit it.
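For the stereo version, the camera spacing matters: to keep the RC car footage reading as full-size, a common rule of thumb is to shrink the interaxial distance by the model’s scale factor (these numbers are rules of thumb, not calibrated values):

```python
def interaxial_mm(scale, ipd_mm=65.0):
    """Camera separation for 1:scale miniature footage to read as
    full-size in stereo: human interpupillary distance (~65 mm)
    divided by the scale factor. Shooting a miniature with full IPD
    would instead exaggerate depth and give the scene away."""
    return ipd_mm / scale

print(interaxial_mm(10))  # 6.5 mm between lenses for a 1:10 RC car
```

Fitting two GoPros 6.5 mm apart is physically impossible, which is itself useful to know early: it suggests mirrors, a beam splitter, or a single wider-lens camera with synthetic stereo might be needed.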

Of course, if Golan deems this project not “experimental enough,” then I will come up with a few more ideas. I don’t have any others right now because this one has taken up all my thinking, as I’m very excited for a potential opportunity to make such a choreographed video (and I always like incorporating my RC car into projects).