Quan—Final

For this final project, Bernie and I decided to tackle cinematography with the robot arm, a step beyond attaching the light. In the end, we created two single-path, multiple-subject clips.

The robot arm operates on a manually set waypoint system. Waypoints are the positions and orientations through which the camera travels. Once they are set, the operator specifies how fast the robot moves and how smooth the motion is (the blending between waypoints).
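Conceptually, each waypoint reduces to something like the sketch below. This is illustrative only; the names and units are my own, not the arm's actual interface.

```python
# A conceptual sketch of the waypoint system, not the arm's real API.
# Field names and units are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Waypoint:
    position: tuple[float, float, float]   # where the camera sits (meters)
    direction: tuple[float, float, float]  # where the camera points (unit vector)
    speed: float                           # travel speed into this point (m/s)
    blend_radius: float                    # how early to curve toward the next point (m); 0 = sharp stop

path = [
    Waypoint((0.0, 0.5, 0.3), (0.0, 1.0, 0.0),  speed=0.10, blend_radius=0.00),
    Waypoint((0.4, 0.5, 0.3), (0.0, 1.0, 0.0),  speed=0.10, blend_radius=0.05),
    Waypoint((0.4, 0.9, 0.5), (0.0, 1.0, -0.2), speed=0.05, blend_radius=0.00),
]
```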

We initially had a stereoscopic setup on the arm to capture video with depth.

However, we soon realized that our setup afforded only a fixed convergence angle between the two cameras. This meant that either the subject always had to stay a fixed distance from the cameras, or the video would be in a permanently cross-eyed state. Because we were low on time and these constraints were too stifling, we ditched the second camera and moved on.
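The geometry behind this constraint is simple: with toed-in cameras, the interaxial distance and the toe-in angle pin down a single convergence distance. A quick back-of-the-envelope calculation (the numbers here are made up, not our rig's):

```python
# Why a fixed toe-in angle pins the subject distance (illustrative numbers).
import math

interaxial = 0.10   # distance between the two lenses, meters (assumed)
toe_in_deg = 1.5    # inward rotation of each camera, degrees (assumed)

# The optical axes cross at the only distance where the stereo pair fuses:
convergence_distance = (interaxial / 2) / math.tan(math.radians(toe_in_deg))
print(f"subject must sit ~{convergence_distance:.2f} m away")  # ~1.91 m
```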

As you can see, the video doesn't converge as it should, because the cameras were not dynamically converging and diverging as the subject's distance changed.

During this project, Bernie and I got hold of the Olympus 14-42mm lens, which has electronic focus and electronic zoom. So now we had computational control over every camera element simultaneously: camera position, direction, aperture, shutter angle, ISO, focus, and zoom. We had created a functional filming machine.

A beautiful aspect of the robot arm is that its path can be replicated. Once we set a series of waypoints, the robot arm can travel the identical path over and over, as many times as we want.

These are four subjects filmed in the exact same way (path, light, zoom, position).

With this repeatability, we are able to have interesting transitions and combinations between clips. We explored two different methods: splicing and positional cuts.

This is an example of a spliced video.

Since all four subjects are filmed in the same way, they should stay perfectly aligned the whole time, but they do not. The drift comes from differences in how each subject moved, and it compounds as the video goes on.
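Concretely, splicing here means cutting between the four takes at identical timecodes, so the camera move stays continuous while the subject changes. A sketch of that edit with moviepy (assuming moviepy 1.x; the file names and cut points are hypothetical):

```python
# Splicing identically-filmed takes: alternate subjects at shared timecodes.
# Assumes moviepy 1.x; file names and cut points are made up.
from moviepy.editor import VideoFileClip, concatenate_videoclips

takes = [VideoFileClip(f"subject_{i}.mp4") for i in range(4)]
cuts = [0, 2, 4, 6, 8]  # seconds; the same timecodes in every take

# Each segment comes from a different take, but the camera path is continuous.
segments = [takes[i % 4].subclip(cuts[i], cuts[i + 1]) for i in range(4)]
spliced = concatenate_videoclips(segments)
spliced.write_videofile("spliced.mp4")
```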

This is an example of a positional cut.

As long as the paths are aligned, the cut should have an interesting continuity, even with a different subject.

Here are our final videos.

BODIES from Soonho Kwon on Vimeo.

Temporary Video (not final)—

Faces from Soonho Kwon on Vimeo.

Quan—FinalProposal

For the last project, I want to continue working with Evi on the Robot Arm, but this time I want to put the camera ON the arm. With this, I was interested in doing some 3D-tracking and matching that with some sort of dolly-zoom technique. I think this would allow for some interesting effects.

Another thing I was interested in trying out is connecting certain camera settings to TouchOSC on a phone. It's interesting that we would be translating a physical interface (zoom ring / focus ring) into another touch interface (the phone).
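TouchOSC just sends OSC messages over the network, so the receiving side could be as small as the sketch below (using the python-osc library; the fader address and the focus mapping are assumptions, not a finished design):

```python
# A minimal sketch of receiving a TouchOSC fader with python-osc.
# The OSC address and the focus-value mapping are hypothetical.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_focus_fader(address, value):
    # TouchOSC faders send floats in [0, 1]; map this to a focus position.
    print(f"{address}: set focus to {value:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/1/fader1", on_focus_fader)  # default TouchOSC layout address

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()  # point the phone at this machine's IP, port 8000
```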

I have several other ideas, but I want to continue with Robot Arm dev with Bernie!

Quan—EventProposal

I have 5 proposals so far; feeling indecisive.

  1. Inspired by the Deep Time Walk, this project is a spatiotemporal representation of any timeline. I would map a timeline onto a set distance that one walks or drives through; the timeline would begin at the start of the walk and end at its end. I am interested to see how history would be perceived differently through this system. I suspect it would give a more bodily and intuitive understanding of the scale of the time involved, beyond just numbers. (Side note: I have already constructed the code for this)
    1. Anyone would be able to select any timeline or create their own.
    2. The communication would likely be some sort of audio file.
    3. Code is already constructed.
  2. EXPERIMENTALLY CAPTURE MY HAND SURGERY
  3. The Kairos Watch. This would be a weighted 24-hour clock/watch, in tune with the idea that we weight certain times of day more heavily than others. For example, we don't place much significance on the time we spend sleeping, so on the clock, the hours from midnight to 8AM could be 1/12 of the watch face, while 8AM-10AM could be a far greater portion, because that time is valuable. You might not care about keeping track of the time you spend in class, but want to maximize your work time. (A sketch of this remapping follows the list.)
    1. This is a personalized system—down to the day.
  4. Gigapan video/ Gigapan slow mo video (Robot arm??) or 360 Slow mo video
    1. Less meaning, but would be a cooler capture technique that I would 100% be excited to figure out.
  5. Develop a way to fill in the empty spaces w/ photogrammetry
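For the Kairos Watch (proposal 3), the core remapping is just normalizing per-hour weights into fractions of the dial. A minimal sketch, with the weights invented purely for illustration:

```python
# Kairos Watch sketch: weight each hour, then give it a dial arc
# proportional to its weight. The weights below are made up.
weights = {hour: 0.25 for hour in range(0, 8)}         # sleep: barely counts
weights.update({hour: 3.0 for hour in range(8, 10)})   # valuable work time
weights.update({hour: 1.0 for hour in range(10, 24)})  # everything else

total = sum(weights.values())
arc_start = 0.0
for hour in range(24):
    arc = 360.0 * weights[hour] / total  # degrees of dial for this hour
    print(f"{hour:02d}:00  {arc_start:6.1f} deg -> {arc_start + arc:6.1f} deg")
    arc_start += arc
# With these weights, midnight-8AM collapses to ~9% of the face.
```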

Quan—Place

Full process

First Try —

I went to a large tunnel for my first iteration of steel wool photography, with the intent of creating a spatial mapping of layered pictures. A single photo looked like this:

and the stacked photo looked like this:

Second Try —

I realized how distracting those circles were in the long exposures, so I wanted to eliminate them. One option was Photoshop, but a cleaner and more interesting method was to use video. This is one ‘slice’ of the tunnel that I chose to map.

I did this every 5 feet for the entire length of the tunnel, and then texture-mapped these videos, with alpha values, onto planes in space.
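Viewed head-on, that stack of translucent planes reduces to plain alpha compositing. A minimal sketch of the layering idea in numpy, assuming each slice has been exported as a frame of bright sparks on black (the file names are hypothetical):

```python
# Sketch of the slice-layering idea: composite one frame per tunnel slice,
# using brightness as opacity. File names are placeholders.
import numpy as np
from PIL import Image

def load_slice(path):
    """Load a frame as float RGB in [0, 1]."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

out = np.zeros_like(load_slice("slice_00.png"))
for i in range(10):  # ten slices, one every 5 feet, far end of tunnel first
    frame = load_slice(f"slice_{i:02d}.png")
    alpha = frame.max(axis=-1, keepdims=True)   # brightness as opacity
    out = frame * alpha + out * (1.0 - alpha)   # "over" composite, back to front

Image.fromarray((out * 255).astype(np.uint8)).save("stacked.png")
```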

Final Result—

Vid 1

Vid 2

Quan—PlaceProposal

I have not yet decided what place I want to represent, nor the method I want to employ to portray it. I suppose the method should be decided by the place itself, but I have a few ideas for both that I want to try out.

Place—

Physical Places:

  • Church/ University Center Chapel. I come here a lot and this room holds great meaning to me. I want to capture the essence of calmness and peace that this place gives me.
  • Emergency Room. I was recently at the ER and noted that it had a very peculiar environment; the interactions that happen there seemed very odd, and I would like to capture that oddity in some way.
  • Margaret Morrison/ Design Studio. For obvious reasons, but I would probably focus on the competitive culture.
  • Room/ Drawers/ Bookshelf. My bookshelf is very important to me, and I think it would be interesting to understand the underlying patterns that bring all these seemingly random books into the same collection.

Abstract “Places”:

  • Time. I think it would be interesting to see how we perceive time as a place. I think there could be an interesting abstraction when we visualize time as a volumetric space, with certain hours being more important than others, or even representing time as greater than just 24 hours.
  • Conversation. I wonder what conversation as a place would look like.
  • Broken Hand. I recently injured my hand, and I was able to snag the X-Ray files, and I wonder if there are any cool things I could do with those.

Methods—

  • Steel Wool Photography. I’ve done this in the past, and I wonder if I can expand its usage to represent the space it’s performed in. The particles follow the trajectory of the swing, so if the swing is calculated, we could highlight or hide certain aspects of the physical space.
  • I am interested in creating a light pole similar to the one that visualized Wi-Fi signals.
  • I am interested in creating a 3D light painting, perhaps as a method of 3D annotation.
  • I’d be interested in taking a stab at doing something with Light-Field cameras.
  • I want to learn how to use Computer Vision as a medium; I don’t completely understand the extent of its capabilities.

Quan—PortraitPlan

I’m working with hizlik on this portrait project. Since both of us are photographers, we decided to share a single process that records our photography style over time, split into two different visualizations—lighting preferences and subject preferences. Photographers evolve their style over time, and we wanted to see how ours did.

For the lighting portrait, we grabbed the EXIF data from every photo we’ve taken, created a single value from the aggregate of ISO, shutter speed, and aperture, and plotted those values against their timestamps on a chart over time. This is an example:
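One plausible way to collapse those three settings into a single number is the standard light value; whether this matches our exact aggregate is a simplification, but it shows the idea:

```python
# Collapse aperture (N), shutter speed (t), and ISO into one brightness
# number using the standard light value. This is one plausible aggregate,
# not necessarily the exact formula we used.
import math

def light_value(n: float, t: float, iso: float) -> float:
    """LV = log2(N^2 / t) - log2(ISO / 100). Higher = brighter scene."""
    return math.log2(n * n / t) - math.log2(iso / 100)

print(light_value(16, 1 / 100, 100))    # sunny day: ~14.6
print(light_value(1.8, 1 / 60, 3200))   # dim interior: ~2.6
```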

The categorical portrait will require us to run all of our photographs through Google Vision, a computer vision API that produces keyword labels from photos. We will use these keywords to figure out, in a general sense, what we took pictures of.
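Per photo, the call could look roughly like this sketch with the Google Cloud Vision Python client (credentials setup omitted; the file name is a placeholder):

```python
# Sketch: pull keyword labels for one photo with google-cloud-vision.
# Assumes credentials are configured; "photo.jpg" is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, f"{label.score:.2f}")  # e.g. "Dog 0.97"
```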

Quan—SEM

I chose to closely examine a piece of pencil lead:

When Donna started operating the joystick for the first time, I instantly felt as though she was operating a rover charting a new world. It was so interesting how zooming in far enough completely erases all sense of familiarity and establishes a whole new terrain, devoid of any sense of scale.

This is an example of that unfamiliarity of scale. At one point, I thought I was seeing a beach within a piece of pencil lead. If not for the information displayed on the bottom and my crude colorization, I wonder if people would be able to tell.

Overall, I think there are great parallels and contrasts to be drawn from seeing the extremes of the minuscule and the massive. I noticed that I felt a similar sense of nebulous beauty from the SEM experience as I do when I go to the planetarium or look up with a telescope. However, I think there is a great contrast in that understanding the vastness of space could make one feel absolutely insignificant, while the SEM images might make that same person feel as if they are the center of the universe.

quan_about

Origin of Quan: https://youtu.be/X0fizqifumk?t=30s

I am a second-year in the School of Design, with a concentration in Environments. I have done photography for many years, and have seen how both the camera and the photos it produces can be tools to communicate truth by highlighting and hiding specific elements. I am taking this class because I hope one day to be a designer who can develop and leverage unprecedented methods of communication, or as Bret Victor likes to put it, Seeing Tools.