Bierro – final

The pulse of Shibuya

An organic visualization of the dynamics and movements inherent in the iconic Shibuya crossing in Tokyo.

The project

The hustle and bustle happening all day long at Shibuya crossing is unique in the world. During rush hour, as many as 2,500 pedestrians cross every time the signal changes. My project consisted of capturing the dynamics of this place through the lens of a public webcam and visualizing them as a set of almost biological parameters: pulse, flows and lights.

While the flow comes directly from the motion of the pixels in the video, the pulse is derived from the average speed at which the pixels are moving, and the crosswalk light is inferred from the number of pixels moving at the same time (more moving pixels implies that people are crossing the street in all directions).
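
For illustration, here is a minimal sketch of how these three quantities could be computed from dense optical flow with OpenCV. The thresholds and parameter values are assumptions chosen for the example, not the exact ones used in the app.

```cpp
// Illustrative sketch (not the app's actual code): deriving the three
// "vital signs" from dense optical flow between consecutive frames.
// Thresholds and parameters are assumptions to be tuned on the real feed.
#include <opencv2/opencv.hpp>
#include <vector>

struct ShibuyaVitals {
    cv::Mat flow;       // per-pixel motion vectors ("the flow")
    double pulse;       // average speed of the pixels ("the pulse")
    bool greenLight;    // guess at the crosswalk light state
};

ShibuyaVitals analyze(const cv::Mat& prevGray, const cv::Mat& currGray) {
    ShibuyaVitals v;
    cv::calcOpticalFlowFarneback(prevGray, currGray, v.flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    // Speed (magnitude) of every pixel's motion vector
    std::vector<cv::Mat> xy(2);
    cv::split(v.flow, xy);
    cv::Mat speed;
    cv::magnitude(xy[0], xy[1], speed);

    // Pulse: mean speed over the whole frame
    v.pulse = cv::mean(speed)[0];

    // Light state: count pixels that move at all; many moving pixels
    // suggests pedestrians crossing in every direction (green light)
    cv::Mat moving = speed > 1.0;                        // assumed threshold
    double movingRatio = cv::countNonZero(moving) /
                         double(speed.rows * speed.cols);
    v.greenLight = movingRatio > 0.15;                   // assumed threshold

    return v;
}
```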

I tried to make this project novel by considering the city as a living being whose health can be monitored. At a time when many efforts are being made towards the planet’s sustainability, rethinking the city as a living being emphasizes the need to preserve its essence and to check on its health.

This work was inspired by famous time lapses that managed to capture the dynamics of cities, such as Koyaanisqatsi by Godfrey Reggio and Philip Glass or Tokyo Fisheye Time Lapse by darwinfish105.

Fascinated by these works, I also tried to deviate from their form. Although time lapses are very effective at condensing the multitude of movements happening over a long span of time, I was interested in a more real-time and organic output.

In the final version of my app, we can clearly see a pattern in the pulse of Shibuya: the red-light sequences show high variations in the graph and precede a more stable curve once the crosswalk light turns green. In this way, we see that the fingerprint, the heart rate of this place, is actually perceptible.

However, this effect could be conveyed more intensely if the app were accompanied by a heart-rate monitor sound and if the frame rate were higher. Moreover, the real-time version of the app does not work reliably yet, and this would benefit from being fixed in the future.

The App

The media object that I created is an OpenFrameworks application. The following video was recorded from my screen while the app was running. Unfortunately, the recording slowed the app down, so the frame rate is quite low.

Making-of

Bierro-place

Bierro-FinalProgress

Revisiting The pulse of Shibuya

 

Changes and improvements:

  • Get the actual live feed to work. Challenging on Windows. This is what I have been spending all my time on over the past week, and I still need to get it working in OpenFrameworks
    • RTSP reception in OpenCV
    • Tweak the encoding parameters (speed, etc.) and get the M-JPEG feed working in OF with ofxIpVideoGrabber
    • Get Awesomium working with YouTube feed
  • Put more emphasis on the graph:
    • Remove the opaque background images
    • Make the lines thicker
    • Add axes and legends: speed of people (see the sketch after this list)
    • Try different time scales and sampling: see how the pulse and its patterns change over a day, or really get the pattern to show up
  • Try a different layout for the windows. Too much is happening so far
  • Ghostly style for the people in the feed
    • Shorten the lines
    • Make the background a little darker
    • Try view with only ghosts and live feed next to it
  • Get a good screen recorder for Windows
    • IceCream Recorder
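
As a reference for the graph items above, here is a minimal OpenFrameworks sketch of the kind of thicker, labelled pulse graph I have in mind. Line width, history length and labels are placeholder choices, not the app’s final values.

```cpp
// Minimal sketch of the thicker, labelled pulse graph (values shown are
// placeholders; the real app appends the average pixel speed every frame).
#include "ofMain.h"
#include <deque>

class PulseGraph {
public:
    std::deque<float> samples;   // most recent speed values
    size_t maxSamples = 600;     // ~10 s of history at 60 fps (assumption)

    void addSample(float speed) {
        samples.push_back(speed);
        if (samples.size() > maxSamples) samples.pop_front();
    }

    void draw(float x, float y, float w, float h, float maxSpeed) {
        // axes and legend
        ofSetColor(200);
        ofSetLineWidth(1);
        ofDrawLine(x, y + h, x + w, y + h);          // time axis
        ofDrawLine(x, y, x, y + h);                  // speed axis
        ofDrawBitmapString("speed of people", x + 5, y + 12);

        // thicker pulse line
        ofPolyline line;
        for (size_t i = 0; i < samples.size(); i++) {
            float px = ofMap(i, 0, maxSamples - 1, x, x + w);
            float py = ofMap(samples[i], 0, maxSpeed, y + h, y, true);
            line.addVertex(px, py);
        }
        ofSetColor(255, 60, 60);
        ofSetLineWidth(3);
        line.draw();
    }
};
```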

Bierro-finalProposal

For my final project, I would like to refine my Place project about Shibuya crossing in Japan. I felt I was lacking time to craft my result, and I want to spend more time on it. This will involve grabbing the camera feed in my OF app. I am running Windows, so I can’t use Syphon, but I will try a similar approach using Spout, its Windows equivalent. I will also work on the layout of my app and tweak the different parameters of my algorithms so that the graphs are as representative as possible of the dynamics of the place.

In the next few days, I will also finish editing my Event video and ask around to see whether it is actually more worth showing in the final exhibition than the Place project.

Bierro-Event

One hundred balls – One trajectory

The laws of physics can often appear very mysterious. In particular, mechanics and the way objects move are not necessarily intuitive. The human eye cannot look into the past, and it often needs the help of equations, sketches or videos to capture the movement it has just seen.

In this project I decided to document and capture the simple event of a ball thrown in the air. My goal here was to recreate the effect seen in the picture above: getting a sense of the trajectory of the ball. But I wanted to get away from the constraint of using a still camera and decided to use a moving camera mounted on a robot arm.

Inspirations

 

This project started with my desire to work with the robot arm in the Studio. The work of Gondry then inspired me: if a camera is mounted on the robot and the robot moves in a loop, superimposing the different takes keeps the backgrounds identical while the foregrounds seem to happen simultaneously, although they were shot at different times.

Gondry / Minogue – Come into my world

I then decided to apply this technique to show how mechanical trajectories actually occur in the world, as Masahiko Sato already did in his “Curves” video.

Masahiko Sato – Curves

The output would then be similar to the initial photo I showed, but with a moving camera allowing the ball to be seen more closely.

 

Process

Let me explain my process in more detail.

The first part of the setup would consist of a ball launcher throwing balls that follow a consistent trajectory.

I would then put a camera on the robot arm.

I would have the robot arm move in a loop that follows the trajectory of the balls.

I would then throw a ball with the launcher. The robot (and camera) would follow the ball and keep it in the center of the frame.

The robot would start another loop and another ball would be thrown, but with a slight delay compared to the previous one. The ball being followed would then appear slightly off-center in the frame.

Repeating the process and blending the different takes would do the trick, and the whole trajectory would appear to move dynamically.

 

Robot Programming

My trouble began when I started programming the robot. Controlling such a machine means writing in a custom language, using inflexible functions, with mechanical constraints that don’t allow the robot to be moved smoothly along a defined path. Moreover, the robot has a speed limit that it cannot exceed, and the balls were moving faster than this limit.

I therefore didn’t manage to have the robot follow the exact trajectory I wanted; instead, it followed two straight lines that approximated the trajectory of the balls.

Launcher

For the launcher, I opted for an automatic pitching machine for kids. It was cheap, probably too cheap: the time between throws was inconsistent, and so was the force of each throw. But now that I had it, I had to work with this machine.

Choosing the balls

Choosing the right balls for the experiment was not easy. I tried the balls that came with the pitching machine, but they were thrown way too far, and the robot could only move within a range of about a meter.

I wanted to use other types of balls, but the machine only worked with balls of a non-standard diameter.


The tennis balls were then thrown too close to the launcher.

I then started trying to make the white balls heavier, but it was not really working.


I also tried increasing the diameter of tennis balls, but the throws were again very inconsistent.


At that point I noticed a hole in the white balls and decided to try putting water in them to make them heavier. The hole was too small to inject water with a straw.


I then decided to transfer water into them myself…

… Before I realized that there was a much more efficient and healthy way to do it.

I finally caulked the holes so that the balls wouldn’t start dripping.


In the end, the result was pretty satisfying in terms of distance. However, the throws were still a bit inconsistent; the fact that the amount of water was not the same in each ball probably added to these variations in the trajectories.

 

Setting

Here is the setting I used to shoot the videos for my project.

And here is what the scene looked like from a different point of view.

 

Results

This first video shows some of the “loop takes” placed one after the other.

The next video shows the result when the takes are superimposed with low opacity, once the backgrounds have been carefully aligned.

Then, by using blending modes, I was able to superimpose the different takes to show the balls as if they had been thrown at the same time. The video below was thus actually made with one launcher, one type of ball and one human.

In this video, I removed the parts where someone was catching balls to give a sense of an ever increasing amount of balls thrown by the launcher.
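
As a side note, the same low-opacity and screen-blend superimposition can be sketched in OpenFrameworks. This is only an illustration of the idea with two hypothetical, pre-aligned clips (the file names are placeholders), not the code that produced the final video.

```cpp
// Sketch of the superimposition idea: two pre-aligned loop takes drawn on
// top of each other, either at low opacity or with a screen blend.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofVideoPlayer takeA, takeB;

    void setup() {
        takeA.load("loop_take_01.mov");   // placeholder file names
        takeB.load("loop_take_02.mov");
        takeA.play();
        takeB.play();
    }

    void update() {
        takeA.update();
        takeB.update();
    }

    void draw() {
        ofBackground(0);

        // Option 1: low-opacity superimposition
        // ofSetColor(255, 255, 255, 128);
        // takeA.draw(0, 0);
        // takeB.draw(0, 0);

        // Option 2: screen blend, which keeps the bright balls of both takes
        ofSetColor(255);
        takeA.draw(0, 0);
        ofEnableBlendMode(OF_BLENDMODE_SCREEN);
        takeB.draw(0, 0);
        ofDisableBlendMode();
    }
};
```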

 

Next Steps

  • Tweak the blending and the lighting in the final video
  • Try to make a video with “one trajectory”
  • Different angle while shooting the video
  • More consistent weights among the balls

Bierro-EventProposal

For the Event project, I am interested in using the Robot Arm at the Studio. I figured that the opportunity to use such a machine might not come again soon for me, and I wanted to take advantage of it.

I struggled for quite a while to come up with an idea, but at some point I remembered the Kylie Minogue music video directed by Michel Gondry that was shown in class. Once you have a robot and a camera, the same looping process seems applicable to showing shots simultaneously even though they were recorded at different times.

I haven’t really fleshed out the event that I will record with it, but one idea I had was to show a set of marbles going down a slide, filmed in a loop. If the timing is accurate, the output would look like one continuous take with an ever-increasing number of marbles going down the slide, revealing the mechanical phenomena that cause the varying distances between the marbles.

Bierro-place

THE PULSE OF SHIBUYA

I decided to capture a place that has fascinated me ever since my first trip to Japan: Shibuya crossing in Tokyo. The number of people crossing the road in every direction (it almost feels too disorganized for Japan) is just huge, and there, more than anywhere, you can feel that Tokyo is one of the most densely populated cities in the world. I wanted to somehow capture the motion, the dynamics, the hustle and bustle of that place.

I quickly discovered that there was a live stream of that crossing which could be accessed on Youtube at any time at the following link.

I was inspired by the work of Kyle McDonald, who managed to get a sense of the life in Piccadilly Circus in London through a webcam (http://www.exhaustingacrowd.com/london), and decided to use this feed as my input.

Getting the Live Cam… On Windows!

The first part of my work, which actually lasted way longer than I expected, consisted of getting the live stream from Shibuya onto my computer. This was not trivial: the camera’s IP address was not available to the public, and YouTube doesn’t provide a way to access it due to its privacy policy. I somehow managed to get the HLS URL of the video through the command-line program youtube-dl. However, this link didn’t seem to work in regular video players such as VLC (only the first frame showed up). After days of tweaking, I finally used another command-line tool, FFmpeg, to restream this link to a local port of my computer over UDP.

The stream was now available on my computer once I decoded it with VLC. You can see in the following picture that I actually get the camera feed more than one minute before YouTube publishes it.

VLC was able to decode the video over UDP, but I still had to do it in OpenFrameworks. After a few unsuccessful attempts, and more time spent than I should have, I decided to move on to the computer vision part and to use a 90-minute recording from the webcam that I captured through FFmpeg.

Future work will consist of either finding a better protocol than UDP to restream the camera feed, or decoding the UDP stream in OpenFrameworks, perhaps using the same method as ofxIpVideoGrabber (https://github.com/bakercp/ofxIpVideoGrabber) but with the UDP protocol (and ofxUDPManager) instead of HTTP.
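
As a starting point for that future work, here is a rough sketch of the ofxUDPManager side, assuming FFmpeg keeps streaming MPEG-TS packets to a local port (the port number is an assumption). The hard part, demuxing and decoding the received packets, is not solved here.

```cpp
// Sketch: receiving the raw UDP datagrams that FFmpeg streams to localhost.
// The MPEG-TS payload still needs to be demuxed and decoded (e.g. with
// libavformat/libavcodec); this only shows the ofxUDPManager side.
#include "ofMain.h"
#include "ofxNetwork.h"

class ofApp : public ofBaseApp {
public:
    ofxUDPManager udp;
    size_t bytesReceived = 0;

    void setup() {
        udp.Create();
        udp.Bind(1234);            // port used in the FFmpeg command (assumption)
        udp.SetNonBlocking(true);
    }

    void update() {
        char packet[65507];        // maximum UDP datagram size
        int n;
        // drain everything that arrived since the last frame
        while ((n = udp.Receive(packet, sizeof(packet))) > 0) {
            bytesReceived += n;
            // TODO: hand `packet` (MPEG-TS data) to a demuxer/decoder
        }
    }

    void draw() {
        ofDrawBitmapString("bytes received: " + ofToString(bytesReceived), 20, 20);
    }
};
```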

What to do with it

– Getting the position of the people in the video? I took measurements of the place through Google Maps.

– Played with the regular OpenCV addon, but the contour finder didn’t provide the information I wanted: it was too complex to filter out the important information.

– Tried the ofxCv addon from Kyle McDonald. The GML Windows version was not good, so I installed the previous version from Kyle McDonald, tweaked it and got it to compile.

– Started monitoring two factors: speed and amount of movement. These help differentiate cars from people and will have to be adjusted based on the time of day, but they give a nice overview.

– Output: the pulse graph, plus the ghostly movements.


Bierro-PlaceProposal

For this assignment, I would like to focus on a place in Tokyo that I find fascinating: Shibuya crossing.

I would like to extract and show in real-time the hustle and bustle happening over there. This would be done through optical flow or another Computer Vision method applied to the live webcam feed that can be found here: http://wwitv.com/tv_channels/b6809-Shibuya-crossing-Tokyo.htm

I am not sure about the output yet. It could be in VR, it could be a data visualization… But I would like it to be updated in real-time.

Bierro-portrait

For this project, I wanted to create an intimate portrait of Iciaiot through the “voice” of her moves, her breath, her quirks and so on. For that purpose, I put four contact mikes on different parts of her body during a 2-hour dinner and recorded the output of the mikes along with a close-up video that I then edited.

This project started with a first chat at Starbucks with Iciaiot. We noticed that we were both fidgeting, probably out of nervousness and excitement triggered when you meet someone new. While she was playing with her ring, I did the same with my pen. The idea then emerged in my mind to create a portrait based on this fidgeting movement.

My initial thought was to have someone play with a fake ring that would modify a virtual environment representative of Iciaiot’s world. However, this situation was too contrived, and based on Golan’s recommendation, I moved on to something more straightforward to capture Iciaiot’s quirks: contact mikes.

First idea: Glove with ring / photocell sensor
Second idea: Contact Mikes

Figuring out the best way to use the mikes required some testing. I tried different locations on Iciaiot’s body and different situations while recording. Having her eat or drink turned out to be the most interesting, as it required movements of the jaw or the esophagus, which produced distinct waveforms for the mikes. In the end, four locations were compelling: the skull behind the ear for voice and chewing, the throat for swallowing, the chest for breathing and movement, and the hand for picking up objects.

Testing the mikes
Testing the Mikes

As Iciaiot got to know me better, she would no longer fidget with her rings around me. I therefore decided to record her in a casual place outside, during dinner. We went to the Porch in Pittsburgh, and I recorded the moment with a camera and four contact mikes positioned at the locations mentioned above.

Contact mikes can generate a lot of noise, and it took a bit of time during setup to find the right sound level. I also needed to re-tape the mikes a few times during dinner as the foam became less sticky. I was also surprised that some mikes (especially the one on the hand and the one behind the ear) actually recorded voices very well. I would have preferred to get rid of the voice, as the camera already captured it, but I had to deal with it in my audio files.

I think the output is somewhat original, as the different positions of the mikes give different perspectives on Iciaiot at the same time. A situation other than a restaurant might be more suitable, though. I think recording people’s reactions when they discover the “sound” of their body would be very interesting. Here, Iciaiot was aware of my plan and had tested it with me before, so the “surprise” effect was no longer available, but her reaction (along with mine) the first time we tried was very expressive. Recording such moments could be very compelling.

Bierro-PortraitPlan

After a few hours talking to iciaiot, we started discussing quirks we had noticed in each other. While I was spinning my pen around my fingers, iciaiot was playing a lot with her rings. Later on, iciaiot explained that she would usually touch her rings when she felt nervous or tired. I then started thinking about a way to represent nervousness and to use the fidgetiness of the ring manipulation as a way to calm stress. The experience I imagine would consist of someone using iciaiot’s quirk to quell a stressful situation.

I really wanted to try something with an Arduino and a VR headset so I figured I could use such a setting for the purpose of this assignment.

What I envision is an input device made with an Arduino that would capture the position of the ring on a finger. This would be done through a photocell placed at the base of a finger: the ring would block it when normally positioned. The value recorded by the light sensor would tell the system when the user manipulates the ring; the recorded value would then be a sort of “nervousness” value.
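
To make the input side concrete, here is a hypothetical Arduino sketch for the ring/photocell sensor; the pin, threshold and serial format are assumptions to be calibrated on the actual device.

```cpp
// Hypothetical Arduino sketch for the ring/photocell input. Pin number and
// threshold are assumptions; the idea is just that an uncovered photocell
// (ring lifted) reads brighter than a covered one.
const int photocellPin = A0;   // photocell + pull-down resistor on analog pin 0
const int threshold    = 600;  // "ring lifted" level, to be calibrated

void setup() {
  Serial.begin(9600);
}

void loop() {
  int light = analogRead(photocellPin);   // 0-1023
  bool fidgeting = light > threshold;     // ring moved away from the sensor
  // Send a simple "nervousness" signal to the VR app over serial
  Serial.print(light);
  Serial.print(",");
  Serial.println(fidgeting ? 1 : 0);
  delay(50);
}
```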

Regarding the output, I imagine an evolving VR environment. We saw in class how 360 videos and pictures have become a new way of capturing the world, but I was surprised that this footage is textured onto a sphere in a game engine. What if we could play with this sphere shape and distort it? I looked up different shaders and shape-distortion algorithms. What I want to do is link the distortion of the textured sphere to the stress of the situation: I would trigger stress with music and distortion of the sphere, while the movement of the ring would bring back the usual 360 picture, in the same way iciaiot calms herself down when getting nervous.

Arduino input device sketch
Example of shader I want to try to distort the scene

Bierro-SEM

Through the eye of the Scanning Electron Microscope

The way I understand the SEM

The spider web I scanned had been sitting for a while above one of the bathtubs in my house: it was most probably not sheer silk anymore. But who would have thought that this whole mess of dust, and who knows what else, would produce such images? Our brain is not ready for that. It starts comparing what we see to familiar environments: plants, bamboo and barbed wire, bones, etc. But that is not what it is: the very small does exist, and we just have no clue about its beauty, its intricacies and its patterns. Nature’s creativity is endless, and while there is a lot to study at the human scale, there is even more inspiration to be found for scientists and artists at the scale just below!

Spider web wound around an interdental brush
Spider curls
Bamboo grove and barb wire
Dusty planet
Quicksand

Bonjour!

Hey all!

I’m Pierre and I’m a Master’s student in Human-Computer Interaction. After a handful of years spent in France majoring in different areas of science (maths, physics, chemistry, biology…), I finally chose CS about two years ago. What drives me right now in HCI is unveiling all the cool stuff we can do with new technologies and reflecting on how they can be incorporated into our society. I’ve particularly focused on AR over the last few months.

PS: If you want to practice your French, or make me practice my Spanish or Japanese, let me know!