CraigFahner-Project2-timescrubber

by craig @ 2:04 am 9 February 2012

For this project I decided to focus on making an information sonification rather than an information visualization. I came across a dataset covering the Billboard Top 10 chart since 1960, listing the key and mode (major or minor) of each song on the chart. I was interested in using this data to develop a generative composition, meandering through history based on the progression of popular music. I unpacked the data in Max/MSP, where I generated arpeggios by finding the minor or major 3rd and the 5th above each root note in the data. I sent this note data to Ableton Live to generate audio.
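In code terms the interval math is simple. Here is a minimal sketch in Processing-style Java (illustrative only, since the actual logic lives in a Max patch): a major third sits 4 semitones above the root, a minor third 3, and a perfect fifth 7.

    // Hypothetical sketch of the arpeggio logic: given a root MIDI note
    // and the mode, return the root, third, and fifth as MIDI pitches.
    int[] arpeggio(int rootMidi, boolean isMajor) {
      int third = isMajor ? 4 : 3;  // major 3rd = 4 semitones, minor = 3
      int fifth = 7;                // perfect 5th = 7 semitones
      return new int[] { rootMidi, rootMidi + third, rootMidi + fifth };
    }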

While it was interesting to me to create a rather spooky and minimal piece of generative music using this data, it became apparent that more data could be presented by creating a visual counterpart to the sound. I decided to work with the Google Image Search API to generate images based on the dates that the songs came out. A search query containing “September 6 1963”, for instance, would typically return a magazine cover from that time. I sent the date values from Max/MSP via OSC, along with the artist and song data, which is displayed below the photographs; the photographs fade in with each subsequent chart entry that is encountered.
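On the receiving side, OSC messages like these can be picked up with the oscP5 library in Processing. A minimal sketch of the idea follows; the port and address patterns here are illustrative assumptions, not necessarily the exact ones I used:

    import oscP5.*;

    OscP5 osc;
    String currentDate = "";
    String currentCaption = "";

    void setup() {
      size(640, 480);
      osc = new OscP5(this, 12000);  // listen for Max/MSP on port 12000
    }

    // oscP5 calls this for every incoming message
    void oscEvent(OscMessage msg) {
      if (msg.checkAddrPattern("/date")) {
        currentDate = msg.get(0).stringValue();     // e.g. "September 6 1963"
        // ...trigger an image search for this date here...
      } else if (msg.checkAddrPattern("/song")) {
        currentCaption = msg.get(0).stringValue();  // artist + song title
      }
    }

    void draw() {
      background(0);
      text(currentCaption + "  (" + currentDate + ")", 20, height - 20);
    }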

In the future I hope to find better ways of blending the images together, so that they correlate more closely with the tone of the music. I would like to look into effects that blur the images, and potentially add motion. Also, the text could be treated so that it blends properly with whatever is behind it. If anyone has any pointers for how to accomplish this, send them my way!

[youtube=http://www.youtube.com/watch?v=il6KRQ5H1YU]

Presentation

1 Comment

  1. ======================================
    Craig: Sonification of Billboard Chart since 1960

    I like the idea of being able to sonically visualize the music of that time. I wonder if a remix of songs + images would be good. Also, maybe a visualization of the beats, etc., because I’m guessing popular genres changed over time. I guess, not coming from a music background, I’d rather get a sense of the mood of the period.

    Good choice to look at the key/mode of the music!
    I’m not getting a holistic unity of image and sound. Do you really need the images at all? The images are very distracting in a way — all I can think about is John F. Kennedy. 

    Does listening to your “song” tell us anything about the songs from which it was generated? It doesn’t seem so. But I don’t think the answer is to introduce the images; I think the answer is to re-work the sonification. It ought to be possible to learn something about the [history of pop] songs by listening to yours!

    Check out this related project by Luke Dubois; he creates a history/mixture track of all the billboard hits: http://music.columbia.edu/~luke/artwork/billboard.shtml  and his related http://music.columbia.edu/~luke/artwork/academy.shtml
    For fast treatments of lots of images, I recommend using openFrameworks or Cinder.

    Maybe you could have taken a song/tune and incorporated the random component into it, so there is a more harmonic framework/setting, with an interesting twist from the changing tunes.

    I’m not sure I understand the connection between the music and the changing images, despite the fact that they come from the same date.

    toxiclibs colorlib: image processing library + OpenCV

    This is eeeeeerie; I am also confused how the songs translated into this new tune, because I know nothing about music theory. <- I agree

    Suggestion for the text: layer it on top of an opaque bar.

    I am also a little confused about the correlation between the music and the images. Cool idea!

    I think the project would have been a little more effective if the sound we were hearing were more clearly connected to the original songs; simply having an avant-garde mixture of arpeggios based on the key is a little obtuse. I had similar problems with video/audio separation. Check out this similar and also beautiful project: http://itunes.apple.com/us/app/bloom/id292792586?mt=8

    I wonder if there is a library to do structural analysis of music… If you cut snippets of the actual tracks that exhibited the tonic and major/minor determinant and played them, you might escape the eerie avant-garde aesthetic while still being able to hear the key/mode.

    Have you tried speeding it up? What about arpeggiating only two chords, like C major/minor, and speeding it up so you could get the tone over time?

    Would there be any way to search for images based on the audio that you extract? The two approaches seem only tenuously linked. Also, wouldn’t breaking down the songs by key alone limit the number of “unique” sounds? Yes, the octaves would change, but if it’s a constant arpeggio, or series of notes, will it offer any new insights into the music? Don’t get me wrong, it’s still very interesting!

    I always thought this (visualizing things as sound) was cool. Have you seen http://photosounder.com/?

    This is a really interesting approach and process; it’s cool that you attempted to combine all of these programs.
    **Yeah.  Very ambitious.  I like the video as well.

    I think visualizing a music chart as sound makes a lot of sense but obviously it is hard to learn or understand anything about the data from the sonification.  Maybe just grabbing and playing bits of the song would have been more informative (and less creative).

    Using the keys and billboard positions of the songs is a great idea – it seems like you’d be able to really critique the repetitiveness of pop music. The final result is a bit too abstract to show that kind of critique, but it’s still an enjoyably surreal experience.

    It seems that you’re fairly confident with the system by which you grab the songs and pull them through the synthesizer, but as a viewer with little music-mechanics background, it is hard to follow. I understand mechanically how it works, but the choices and visualization don’t seem tight enough.

    Did you try spending more time (or more notes) in each song? From the video you seemed to get about two notes in each key, which makes it a little difficult to hear the key. I wonder how it would sound if there was more time for the listener to process the key.

    The images are a good addition and help separate the songs from each other.

    Did you use Max4Live or did you just stream MIDI data to Ableton?

    The end result sounds pretty cool.

    I wonder what it would sound like if you took the rhythmic form of something like a Bach Prelude and played these arpeggios instead of the ones Bach wrote.

    Doing a manual cross dissolve between images isn’t too bad. You basically have a for loop that goes from 0 to 1, increasing by some cross-dissolve factor, and compute Img1 + (cross dissolve factor)*(Img2 - Img1) for each pixel in the image. You can write the result directly to the display or store the frames as an array of PImages, depending on how fast it runs.

    The smaller your cross dissolve factor, the smoother the transition and the more sub images you have to generate. 
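    A minimal Processing sketch of that per-pixel blend, assuming two same-size PImages are already loaded (lerpColor computes exactly the Img1 + t*(Img2 - Img1) mix per channel):

        // Blend img1 toward img2 by factor t in [0, 1].
        PImage crossDissolve(PImage img1, PImage img2, float t) {
          PImage out = createImage(img1.width, img1.height, RGB);
          img1.loadPixels();
          img2.loadPixels();
          out.loadPixels();
          for (int i = 0; i < out.pixels.length; i++) {
            out.pixels[i] = lerpColor(img1.pixels[i], img2.pixels[i], t);
          }
          out.updatePixels();
          return out;
        }

    Stepping t from 0 to 1 by the dissolve factor then yields the in-between frames described above.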

    I still don’t get exactly what this visualization does. The video doesn’t quite give me a visual sense of understanding, but I do like the playback music that was generated.

    This is a cool idea. The sonification would be more interesting if you gave each week of chart dominance one measure. That way when a certain song is at number one for a long time, the sonification would reflect that.

    Comment by patrick — 14 February 2012 @ 9:26 am
