caro-Final

@the.circles.of.life

Finding the center and radius of Instagrammed pregnancies.

Link to code

The Project

My project is an investigation into the geometry of pregnant women on Instagram. I downloaded over six hundred images of pregnant women and, using a tool I built with Processing, annotated each photo to find the center and radius of the belly. After collecting and annotating all of these photos, I have been periodically re-uploading them to Instagram under the name @the.circles.of.life and reporting the data.

Through this project, I’ve created an absurd way to objectively evaluate women that’s completely useless and has no basis in traditional beauty standards. This turns social media into an even stranger evaluative process than it already is.

There’s also a certain amount of ridiculousness in the fact that someone would spend so much time doing this. To poke at this, I’ve included several screen capture videos on Instagram of me annotating the pregnant women, so that people will know I’m doing this all by hand. I want there to be a hint of the weirdo behind the project, without actually revealing anything about who I am or why this is happening.

Context

The most similar projects I can think of are other works that make you question “who on earth would spend the time doing this?” My favorite comparison is to anonymous Internet people who use video games as platforms for strange art projects, such as this person who built a 210-day-long roller coaster in Roller Coaster Tycoon, or this person who beat Sim City with an intensely planned-out metropolis. It’s funny, and it clearly took an impressive amount of effort, but you have to wonder who’s behind it. They also leverage popular culture through video games in a similar way to what I’m doing with Instagram.

I have been evaluating my work based on how well the humor lands. The project has been getting in-person reactions that are similar to what I was hoping for, which is a lot of fun. I’ve shown people and watched them be shocked and bemused as they scrolled through dozens and dozens of Instagrammed photos of geometric pregnant women, which was exactly my goal. I hope to continue posting these photos until I have only 50 or so left, and then try to throw the project into the world and see how/if people react.

 

Media Object

My media object is the ongoing Instagram account. I’ve also compiled all of my pregnant lady images into this Dropbox folder for safekeeping.

Example photos

Example GIFs

I also created a print that sorts about 200 of the images from least to most pregnant, left to right.

 

Process

Creating this work was an incredibly exploratory process. I tried a lot of things that worked and a lot of things that didn’t, I regularly got feedback from a lot of people, and I kept revising and improving my ideas.

I started where I left off with my last project, with some new insights. My final project really originated during a conversation with Golan, where he pointed out a really amusing GIF from my previous iteration of the pregnant women project.

The idea of a person hand-fitting a geometric shape to a woman’s pregnant stomach is very amusing. We brainstormed for a while about the best format to explore this potential project, and settled on Instagram as a medium. What if there was an account that analyzed pregnant women from Instagram, and re-posted the analysis back online?

I quickly registered @the.circles.of.life, and started coding. Unfortunately, I had accidentally deleted the tool I made for the first draft of the project, so I had to rewrite the data-logging tool.

Finding the Images

“Where did you get 600 images of pregnant women?” is a question I get a lot. I’ve developed a couple of methods. The first is searching hashtags such as #pregnancy, #pregnantbelly, and #babybump. The second is that these searches occasionally surface themed accounts, which are a great resource for photos of pregnant women.

Since you can’t click-and-drag download images from Instagram, I had to find a workaround. If you go to “inspect element” on an Instagram image, you can find a buried link to the image source and download it. So I did that, 600 times.
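For reference, the same buried-link lookup could in principle be scripted. Here’s a rough sketch, assuming the post’s public page exposes its image source in an og:image meta tag (which is what “inspect element” was surfacing for me); the URL and filename below are made up:

```python
import re
import requests

def download_post_image(post_url, out_path):
    # fetch the post's public page and look for the og:image meta tag
    html = requests.get(post_url).text
    match = re.search(r'<meta property="og:image" content="([^"]+)"', html)
    if match is None:
        raise ValueError("couldn't find an image source on this page")
    # download the actual image file
    with open(out_path, "wb") as f:
        f.write(requests.get(match.group(1)).content)

# hypothetical usage:
# download_post_image("https://www.instagram.com/p/SOME_POST/", "belly_001.jpg")
```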

Working with the images

I went through several drafts of the design of the circles. I had several versions, all with different colors and fonts. After conferring with people in the STUDIO and getting a lot of valuable feedback, I settled on light pink semi-opaque circles, with the circle data adjusting visibly on the circle as it’s being dragged. I began creating videos like this, and posting them on Instagram to test.

However, I realized quickly that scrolling through dozens of videos on Instagram is pretty uneventful. The videos don’t autoplay, and the thumbnail of a video isn’t very interesting to look at. If I wanted to hold people’s attention, I realized that I needed to start posting images. This also made the data collection a lot easier: where previously I had to take a screen recording and split it up by which woman was in the video, I could now simply tell my Processing app to save each photo once I finished annotating it. I began to create photos like this, but still it wasn’t quite right.

Do you see the problem? The top dot isn’t on the woman’s body. In a few of my photos, I wasn’t using exclusively the woman’s body to determine the circle, which is a very important element of the project. Throughout this time, I got better and better at marking up the images.

I settled on creating images like this.

The dots are all on the belly, the center and radius are very visible, and the circle is semi-opaque so that you can see the woman’s body through it, but the text is still visible on top of patterned clothing.

Now, I had hundreds of images and a Processing sketch that would save these photos and log the data for me. At this point, it takes me about an hour to mark up every photo: not bad.

Posting the photos

There was also some debate about how to post the photos. Instagram is really difficult to post on, because they actively try to discourage bots, and will ban you if they think your account is doing something suspicious. I looked into it a lot, and decided that to be safe, I could only post about 10 photos an hour. I originally wanted to compensate for this by creating a temporary Twitter account, but decided that Instagram was the correct medium. I have to post them all by hand, as there’s no Instagram API for publishing posts. I’ve been posting them a few at a time for several days now, and should have most of them up within the next few days.

Creating the print

Creating the print was simple once I had all of the belly data. I just sorted all the women from largest to smallest radius, and created another Processing tool where I could tag each image for whether the full belly was in the circle or not, because I didn’t want any cut-off circles in the print. I went through several drafts of the print, and ultimately decided on the pink and grey version.
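The layout step itself is tiny. Here’s a sketch of it in Python, assuming my Processing tool logged one line per image as filename, center x, center y, radius, plus a 0/1 “full belly visible” flag, into a hypothetical bellies.csv:

```python
import csv
from PIL import Image

rows = []
with open("bellies.csv") as f:
    for filename, cx, cy, r, full_belly in csv.reader(f):
        if int(full_belly):                      # skip the cut-off circles
            rows.append((float(r), filename))

rows.sort()                                      # least to most pregnant

THUMB = 200                                      # square thumbnails, for simplicity
strip = Image.new("RGB", (THUMB * len(rows), THUMB), "white")
for i, (_, filename) in enumerate(rows):
    strip.paste(Image.open(filename).resize((THUMB, THUMB)), (i * THUMB, 0))
strip.save("print_strip.png")
```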

Special thank you to Golan Levin, Claire Hentschker, Cameron Burgess, Ben Snell, Avi Romanoff, Anna Henson, Luca Damasco, the ladies of Instagram, and all the other people who helped me out and gave me opinions. Additional shout-out to my eight loyal Instagram followers (Smokey, Chloe, Adella, Golan, Me, Anne, Anna, and some random person), who are still following me even though I post over 30 pregnant ladies a day.

 

 

caro-finalProposal

New Project: The Sound of Things

Could you modulate light waves into the audible domain? Or audio waves to the visible spectrum? What results would you get? Could you convert an audio file into a photograph?

What do your photos sound like? Or, what do your sounds look like?

I’d like to explore this in my project.

  1. Get a photo
  2. Pixel by pixel, find the color value and the frequency of that color
  3. Modulate that frequency to the audible domain
  4. Stitch together each “pixel” of sound to create an audio file
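Here’s a minimal sketch of what steps 1 through 4 could look like, assuming one simple mapping out of many possible ones: scale each pixel’s hue linearly into the audible range and render it as a short sine tone.

```python
import math, struct, wave
from PIL import Image

RATE = 44100
TONE_LEN = 0.01                                   # seconds of sound per pixel

img = Image.open("photo.jpg").convert("HSV").resize((64, 64))  # downsample so the clip stays short
samples = []
for hue, _, _ in img.getdata():
    freq = 200 + (hue / 255.0) * 3800             # map hue (0-255) onto 200-4000 Hz
    for i in range(int(RATE * TONE_LEN)):
        samples.append(math.sin(2 * math.pi * freq * i / RATE))

with wave.open("photo.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```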

Reverse this process to create images out of sound.

Could you create an image out of the ambient noise of a room? What would it look like?

Could you intentionally compose music to create a photo?

 

“Colours and their sounds” http://altered-states.net/barry/newsletter346/colorchart.htm

Prometheus: The Poem of Fire, a piece of music intended to create certain colors https://en.wikipedia.org/wiki/Prometheus:_The_Poem_of_Fire

https://en.wikipedia.org/wiki/Spectral_color

PhotoSounder (costs $90) http://photosounder.com/

Similar project http://www.gramschmalz.com/encoding-images-as-sound-decoding-via-spectrogram/

Similar project http://www.npr.org/sections/pictureshow/2014/04/09/262386815/can-you-hear-a-photo-see-a-sound-artist-adam-brown-thinks-so

caro-event

Inspiration

I was very inspired by the Cassandra C. Jones work that Golan showed in class, where she manually aligned different photos of sunsets to create one continuous sunset. I thought the concept of creating one event through hundreds of different people’s momentary experiences was very interesting, and I wanted to explore it in my project.

https://vimeo.com/84883569

 

I was also inspired by the pixilation works we saw in class, particularly the One Frame of Fame music video, for the same reason.

 

Pregnant Women

Lots of pregnant women post the exact same selfie on Instagram. It’s this one:

I thought it would be fun to align these women from most to least pregnant. So I downloaded about 100 photos from Instagram, and I got to work manually aligning them in Photoshop.

Attempt #1

Most to Least Pregnant (my first gif, aligned manually)

 

I then turned this into a music visualizer (which you can still play with a draft of at https://caro.io/pregtunes):

Changing the Media Object

I did make the music visualizer, and it worked, but there were a few problems with it.

  1. I was choosing the woman’s size by the volume of the song, and volume doesn’t really map very intuitively onto belly size
  2. The gif already looked choppy on its own, and with frames jumping around to match the music it looked even more incongruous.
  3. Conceptually, I didn’t really know why I was doing this. It was straying from my original idea of turning all of these women’s experiences into one.

So, I scrapped the music visualizer, and went for another project.

Three Points Define a Circle

My new idea was collecting data from my images, and creating visualizations with the ladies I collected. If I could get three points on the stomach, I could define a circle that corresponds to the curvature of the pregnant lady’s belly. So, I built a tool to log this data for each of my pregnant ladies.
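The tool itself is a Processing sketch, but the underlying math is just the circumcenter of three points. A sketch of it in Python (the three example clicks at the bottom are made up):

```python
def circle_from_points(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("the three points are collinear, so there's no unique circle")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    radius = ((x1 - ux)**2 + (y1 - uy)**2) ** 0.5
    return (ux, uy), radius

# three hypothetical clicks on a belly:
center, radius = circle_from_points((320, 410), (260, 352), (384, 355))
```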

 

Using this data, I created a few visualizations.

 

Creating the Visualizations

I used the Python Imaging Library (PIL) and various matrix transformations to align the images. Using principles that I learned in computational photography and computer graphics, I constructed transformations to achieve various effects.

(napkin math)

0. More and Less Pregnant

My manually aligned gif that I started with is honestly still my favorite one, and it’s the idea that sparked this whole project. Still, the computer generated ones are interesting, and it was fun to model women and babies as mathematical shapes.

 

1. Same Belly Button

  1. Select a point to be the “new belly button” location
  2. Get transformation values by subtracting the woman’s belly button coordinates from the new belly button coordinates
  3. Construct a transformation matrix for each lady based on the transformation values

I originally tried it on the full color images with backgrounds, but came to the conclusion that it was too visually busy, so I reverted to the background-less images I made.
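A sketch of this alignment in Python with PIL, assuming each lady comes with her tagged belly button (bx, by), and with NEW_BB as a made-up shared target point:

```python
from PIL import Image

NEW_BB = (400, 500)                               # the shared "new belly button" location

def align_belly_button(im, bx, by):
    dx, dy = NEW_BB[0] - bx, NEW_BB[1] - by
    # PIL's affine transform is inverse-mapped: output (x, y) samples input (x - dx, y - dy)
    return im.transform(im.size, Image.AFFINE, (1, 0, -dx, 0, 1, -dy))
```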

2. Spinning around the belly button

  1. Move the belly button to the new, approved belly button place
  2. For each lady, increment the angle of rotation a little bit
  3. Translate the lady’s belly button to (0, 0)
  4. Rotate the lady by theta
  5. Translate the lady’s belly button back to the correct belly button location
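Those steps collapse into one translate-rotate-translate matrix. Here’s a sketch, with the same assumed belly button tags as above and theta in radians:

```python
import numpy as np
from PIL import Image

def spin_about_belly_button(im, bx, by, theta, new_bb=(400, 500)):
    to_origin = np.array([[1, 0, -bx], [0, 1, -by], [0, 0, 1]], float)
    rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    back = np.array([[1, 0, new_bb[0]], [0, 1, new_bb[1]], [0, 0, 1]], float)
    M = back @ rotate @ to_origin                    # maps input pixels to output pixels
    a, b, c, d, e, f = np.linalg.inv(M)[:2].ravel()  # PIL wants the output-to-input mapping
    return im.transform(im.size, Image.AFFINE, (a, b, c, d, e, f))
```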

3. All Woman-Circles are the Same Size

  1. Translate the woman’s belly button to the new centralized belly button location
  2. Scale the image by the ratio of a standardized radius to the radius of the woman-circle
  3. Translate the image again by a factor that eliminates the movement about the centralized location due to scaling
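Sketched out, with a made-up standardized radius and the belly button already sitting at the shared location from step 1:

```python
from PIL import Image

STANDARD_R = 150                                  # made-up standardized radius, in pixels
NEW_BB = (400, 500)

def normalize_circle_size(im, r):
    s = STANDARD_R / r                            # scale factor for this lady
    cx, cy = NEW_BB
    # output (x, y) samples input ((x - cx) / s + cx, (y - cy) / s + cy),
    # which scales by s about the pinned belly button
    return im.transform(im.size, Image.AFFINE,
                        (1 / s, 0, cx - cx / s, 0, 1 / s, cy - cy / s))
```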

Reflection

I wound up taking a pretty experimental route with this, and got some interesting results. If I had more time, I would love to gather even more photos for this, and do it with a thousand photos rather than a hundred. The problem with that is that this only really wound up working for the images where I took out the background. I think it’d probably look a lot better if I had hundreds of images, and there was no variation in the clothes, i.e. if their stomachs were all bare.

Challenges I encountered:

  1. Switching my project after a while
  2. Matrices are hard
  3. Jitteriness in the resulting gifs

I did get some interesting media results out of this project, and tagging women’s stomachs with bubbles was fun.

 

caro-eventProgress

My last blog post has a detailed process writeup:

caro-eventProposal

 

Since then, I haven’t done much, but I was really far ahead for the last checkpoint so I guess it evens out? I still need to do icons for the website, and I actually want to play around more with the images of pregnant ladies to make more weird visualizations.

caro-eventProposal

My event is pregnancy. For this project, I will be investigating pregnancy as a spectacle. I think it’s hilarious that hundreds of women post the exact same pregnancy photos on the internet for any weirdo to use for art projects. #pregnant #20weeks #pregnancy

I want to compile images of pregnant women from Instagram, and animate them from most to least pregnant. Ultimately I want to turn this into a music visualizer. I haven’t fully decided what type of music, but probably something overly sexualized. The reasoning for the soundtrack is that pregnancy is societally this beautiful ethereal thing, but we all know how these ladies got pregnant. It’s also pretty funny and jarring to hear super sexual songs beside selfies of pregnant ladies.

Here are the funniest ones

I did a run at animating the pregnant ladies, but it’s pretty jittery because of the variation in the images. I want to redo this process with more cohesive images so it isn’t so jarring. I tried making it grayscale and also removing the background, but it’s still not ideal.

 

Also I got over-excited and already built a lot of the music visualizer component. I really need to go back in and create better visuals though, because right now they’re lacking.

1st Draft (hear the music): PregTunes Draft Video

Play with the current draft at: https://caro.io/pregtunes (soon to be http://pregtunes.zone)

(warning: slow-loading and not good yet)

To-do:

  • Improve loading. Make loading animation. Also lazily load songs on click so it doesn’t take forever.
  • Domain name
  • Autoplay songs after they’re done
  • Icons and drawings for playing music
  • Visuals of visualizer: make them a lot better
  • Pick the rest of the songs (I currently have 2-3)
  • Upload your own song
  • Dot is a fetus instead of a dot?

caro-place

Ultrasonic Exploration

Project Summary

What would it be like to navigate an environment using sound instead of sight?

To investigate, I built an echolocation device and used it to explore a room that I’d never seen visually. The probe is made using ultrasonic distance sensors that trigger both an LED and a speaker based on proximity.

I set up a 360 camera to document this exploration process, and wrote a simple Processing sketch to add the brightness of the LED to each future frame of video. The end result is a recording of my path through the space.

Place: Somewhere I’ve Never Been

In documenting the process of visually blind exploration, it was important that I had no preconceived image of the room I was documenting. I had my boyfriend come wander around campus with me and help pre-select mysterious rooms for filming. He chose the most unfamiliar room of all: the men’s bathroom in the basement of Doherty.

Capture System Part 1: Ultrasonic Probe

My probe consists of three ultrasonic sensors, an LED, a buzzer, and an Arduino. All of these are wired together onto an old crutch I had lying around in my closet.

The ultrasonic sensors use echolocation to determine distance. They emit high-frequency sound waves and time how long it takes the waves to return. Whenever any of the ultrasonic sensors gets within a certain range of an obstruction, the LED lights up and a buzzer beeps to let me know I’m close to something.
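The distance math the sensors are doing is simple: sound travels at roughly 343 m/s, and the echo covers the distance twice, out and back. Sketched in Python (the real thresholding lives in the Arduino code):

```python
def echo_to_distance_cm(echo_time_us):
    # 343 m/s is about 0.0343 cm per microsecond; divide by 2 for the round trip
    return (echo_time_us * 0.0343) / 2

# e.g. a 2900-microsecond echo puts the obstruction about 50 cm away
```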

Here’s a longer video I made playing around with it (the buzzer was disabled for this because I didn’t want to wake people up):

At the end I wound up putting a “shield” on the LED to mitigate the classic “light painting” look of following a glowing dot around. This way it’s more about what’s being illuminated than the light source itself.

Capture System Part 2: 360 Camera 

I chose to document the exploration process using a 360 camera.

I had my boyfriend put the 360 camera in the room for me and turn off the lights so that I never saw the location. I blindfolded myself, and then began the process of blindly probing the room. My only spatial cue was the beeping of the speaker.

The 360 camera wound up documenting my exploration via only the LED light’s path around the room. Similarly, all I knew about my environment was a single beeping noise: in my mind, the buzzer and the LED are equivalent, but one is sonic and one is visual.

Media Product: Additive Brightness 360 Video

The original video I got was a light moving around a room in 360, turning on and off. Using ffmpeg, I split the video into frames. Then, I wrote a simple Processing sketch to additively composite all of the bright areas from the previous frames into all future frames. This way, the LED additively paints the environment, similar to how every new “beep” from the speaker gave me new spatial information.
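My actual version was a Processing sketch; here’s the same compositing idea sketched in Python, assuming the ffmpeg frames are numbered frames/00001.png, frames/00002.png, and so on, and treating “additive” as keeping the brightest value seen so far at each pixel:

```python
import glob, os
from PIL import Image, ImageChops

os.makedirs("out", exist_ok=True)
accumulated = None
for i, path in enumerate(sorted(glob.glob("frames/*.png")), 1):
    frame = Image.open(path).convert("RGB")
    # keep the brightest value seen so far at every pixel
    accumulated = frame if accumulated is None else ImageChops.lighter(accumulated, frame)
    accumulated.save("out/%05d.png" % i)
```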

The video is meant to be viewed in a Google Cardboard, so people can spatially experience the environment.

 

Evaluation

Blindly wagging a stick around deserted men’s bathrooms at 3AM with only beeps to guide me was quite an experience. It was very uncomfortable not being able to see what I was doing. It felt like I lost control of the product I was creating by not being able to see it. In the end, this was really the point of the project.

Result

caro-PlaceProposal

Echolocation Depth Map of the Electrical Engineering Lab

We experience and understand environments primarily using the sense of sight. But what if we could see using sound? I was inspired by bats, which can get around using high-frequency sound waves to map their environments without being able to see. I was also inspired by Ben Snell’s LIDAR project; LIDAR is a technology that measures distance by illuminating its target with laser light. In a way, I’m sort of attempting to create my own echolocation-based, distance-sensing LIDAR. The location I’ve chosen is the Electrical Engineering lab at CMU, which is very personal to me; I’ve spent many long hours there.

 

Turns out, there’s this thing called an Ultrasonic Sensor:

It emits a high frequency noise, and then waits for the sound to come back. Based on this information, it can tell how far away something is.

Giant 2×4 covered with Ultrasonic Sensors

I hypothesize that by covering a big stick with Ultrasonic Sensors, I can construct a rough depth map of the EE lab. I want to do this with the lights off.

By placing the sensors at regular intervals, I know the location of each sensor in the height direction (y). If I stand in one spot with the pole, I know where I am on the floor (x). The only other question is “how far away is all the other stuff?” (z).

I think that if I spin around in a circle, and time how long it takes me, I’ll be able to create a 180 degree image of the lab (basically a cylinder). I bet there’s a way to do this more precisely with motors but honestly I’ll probably just wind up spinning around in a circle.

But will the sensors interfere with each other? No, because I’m gonna do math and make that not happen.

Data Visualization

I’ll have all of this data about the room, but it’ll still be spaced out a lot because of how far apart the sensors are. I’ll write Processing code to interpolate between the sensor values, so I get a smooth depth map. It won’t be hyper-accurate, but it’ll give a vague sense of the location.
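I’d probably do the real thing in Processing, but the interpolation step could look something like this Python sketch, with a made-up grid of readings (one row per sensor, one column per direction sampled during the spin):

```python
import numpy as np
from PIL import Image

readings_cm = np.array([                          # hypothetical 4-sensor x 8-direction capture
    [120, 118, 200, 310, 305, 210, 130, 122],
    [118, 115, 190, 300, 298, 205, 126, 119],
    [100, 102, 150, 280, 275, 160, 108, 104],
    [ 90,  92, 110, 260, 255, 120,  95,  93],
], dtype=float)

# normalize so near things are bright, then let PIL interpolate up to a full image
depth = 255 * (1 - (readings_cm - readings_cm.min()) / np.ptp(readings_cm))
img = Image.fromarray(depth.astype(np.uint8), mode="L")
img.resize((800, 400), Image.BILINEAR).save("depth_map.png")
```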

Ultimately I’d like to put the 180-degree photo in a Google Cardboard or some sort of viewer, so people can experience the room by using sound technologies to create a “physical” experience.

 

caro-portrait

Portrait made by Neural Networks

This portrait was made by neural networks. DMGordon, the subject, discusses his artwork as a machine attempts to recreate him in his own artistic style.

Artistic Process

DMGordon and I talked a lot before I settled on the idea for this project. I was inspired to create this portrait after we looked through his sketchbooks, which were full of interesting drawings and characters. He has a style of sketching that I felt captured his quiddity in an interesting way. I wanted to see what DMGordon would look like if he were rendered in the style of one of the characters he creates. Additionally, by constructing the video in clips showing the style of one sketch at a time, I wanted to emulate the experience of flipping through one of his sketchbooks. Here are the three sketches I used:

Before I settled on these three sketches, I scanned in dozens of his sketches and experimented with the results that each of them gave when passed through a neural network. I tested out the styles using http://deepart.io, a website that allows you to style-transfer individual images. I made the following gif out of the results:

From making this gif I realized that varying between dozens of sketches and colors is visually very unpleasant, and it’s hard to see consistency from image to image. I like the randomness of it, but not so much randomness that it’s hard to look at. I ultimately settled on using only black-and-white images and, for the most part, staying consistent with which sketch I used from frame to frame. In terms of the video content, I decided to have DMGordon discuss each sketch as he was rendered in the style of that sketch.

Inspirations

I was inspired a lot by traditional rotoscoping, such as the Take On Me music video and the movie Waking Life. I wanted to see if it was possible to create a similarly interesting effect programmatically.

 

Technical Process

To create this project, three of DMGordon’s sketches were scanned and passed through a neural net. For each sketch, the neural net developed a style model. Then, the model was transferred one by one onto each frame of video, rendering the frame in the style of the original sketch.

There were two technical pipelines I used, one to generate the style transfer models, and one to apply the style transfer models to a large amount of media.

Tech Part 1: Style Model Transfer

Originally, I attempted to do all of the style transfer locally on my own computer. I downloaded an implementation of the style transfer algorithm on GitHub (there are many), and tried running it on my own laptop. Unfortunately, running one frame through this neural network on my own laptop took about 90 minutes, which wasn’t feasible.

Fortunately, I found an incredible service called Algorithmia that can apply the style transfer in only 10 seconds per image. I would highly recommend this service for any sort of intensive algorithm needs anyone may have. Algorithmia has their own image hosting, and an API to transfer images back and forth to their hosting and run them through their algorithms. I contacted Algorithmia about my project, and they gave me a bunch of free credit – turns out both of the founders went to CMU!

After I had filmed all the video for my project, I went through all the clips and found the ones I liked. Essentially, I edited together my video in normal footage before I processed the effects. Then, for each clip, I developed the following process.

  1. I used ffmpeg to split the clip into 30fps images.
  2. Using a Python script that I wrote to generate bash scripts, I uploaded the images to Algorithmia. I ran into an interesting problem doing this: originally, the script would get stuck after 5 or so images. I fixed this problem by running all the commands as background processes. However, since I was uploading thousands of frames, I wound up accidentally fork bombing my computer once or twice and learned all about fixing that.
  3. Using another Python script I wrote, I passed the images through Algorithmia’s style transfer algorithm (using the models that I generated using AWS – more on that later) in batches of 20. If you do more than 20 images at a time, the requests time out. This script was actually really interesting to write, because I didn’t always want only one style applied to a video. Often, I wanted to randomize which model was applied to a given frame between 2 or 3 options. Additionally, since I trained so many models, I was able to save benchmarks for each style model. Basically, this means that for every sketch I had a “really good model,” a “less good model,” and a “weird looking model,” so I was able to randomize the model between these options without actually randomizing the style. It made things look just a bit more visually interesting.
  4. Using another Python script I wrote to generate bash scripts, I downloaded all of the frames from Algorithmia.
  5. Using ffmpeg, I stitched the frames back together, split off the original audio from the video, and re-combined the audio with the new video.
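For reference, here’s a sketch of what steps 1 and 5 look like for one clip, driven from Python; the filenames are made up, and the Algorithmia upload/transfer steps in between are left out:

```python
import os, subprocess

clip = "clip_01.mp4"
os.makedirs("frames", exist_ok=True)

# step 1: split the clip into 30fps frames
subprocess.run(["ffmpeg", "-i", clip, "-vf", "fps=30", "frames/%05d.png"], check=True)

# ... steps 2-4: upload, run the style transfer in batches of 20, download into styled/ ...

# step 5: stitch the styled frames back together and copy the original audio over
subprocess.run(["ffmpeg", "-framerate", "30", "-i", "styled/%05d.png", "-i", clip,
                "-map", "0:v", "-map", "1:a", "-c:v", "libx264", "-pix_fmt", "yuv420p",
                "-c:a", "copy", "-shortest", "styled_" + clip], check=True)
```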

 

Tech Part 2: Style Model Generation

I needed to actually generate models to apply to the frames of video. Again, there was no way this process was going to work on my personal computer. Fortunately, Algorithmia has a solution using Amazon Web Services to run a style model training script. After spending several hours on the phone with Amazon trying to convince them to let me use one of their beefy GPU-enabled EC2 instances, I triumphed!

I also managed to get $150 of educational AWS credit, so cost wasn’t a problem. This instance costs about $1 an hour to run, and it takes around 24 hours to train one style model, so it would normally cost about $25 per model. I can only imagine how long it would take without a GPU.

Here’s how I generated the style models:

  1. Select the sketchbook image I want to use
  2. Launch an Amazon EC2 p2.xlarge instance using the Algorithmia AMI
  3. Install the Algorithmia script. This loads the necessary machine learning software and environments that are dependencies to the model training code.
  4. Start the model training on the style image (using tmux so that I don’t have to leave my computer open for 24 hours)
  5. Upload the trained model to Algorithmia

 

Reflection

I feel I’ve created a “portrait machine” with this project, and I think the output is nice but it could be a lot better. For example, in critique, my group pointed out that the video for the interview could be more interesting, and the audio could be cleaner.

There’s definitely room for further experimentation with this project, especially since I’ve worked out all the technology now and still have Algorithmia and AWS credits left.

I’m planning on doing a series, and next trying the same technique on someone whose art uses color, and possibly a digital artist as well.

caro-NeuralNetPortraitPlan

My subject keeps really interesting sketchbooks.

These days, neural networks can “draw” real life portraits in the style of another photo. An example is here. 

 

I want to draw a video of my subject in the style of his own sketchbooks, each frame being rendered in the style of a randomly chosen sketchbook page. I’ll write a script to process video of him through the lens of his own art. In a sense it would be a self-portrait.

Here’s an example of one frame

caro-SEM

Last Friday I found some wacky stuff in a piece of Ibuprofen.

Here’s the Ibuprofen tablet at a recognizable distance:

Here it is way zoomed in on some of the chipped off area:

Here it is zoomed in on some cool alien planet lump:

This morning I took an Ibuprofen because my stomach hurt and I just stared at it for so long because now I know that there are entire worlds contained inside this tiny thing I’m about to eat.

I was originally worried that my sample would look really boring, because of the coating that companies put on pills. However, since the pill had been rattling around in my backpack for so long, the coating had worn off in places. The really interesting images came from zooming in on the spots where the coating had worn off. There were a few other locations with tiny bits of the coating chipped off where you could see some of the cool spider-webby stuff hidden just behind a crack in the smooth surface.

hello there! – caro

Hello there! I’m a three-dimensional human junior majoring in ECE and minoring in Design. I’m really into weird combinations of art and tech and I think this class is gonna be excellent for exploring and experimenting. I also enjoy virtual reality, AR, video games, HCI, computer vision, and futurology.