Caroline

29 Jan 2013

[Screen shot, 29 Jan 2013, 3:15 PM]

I used Kyle McDonald’s syphonFaceOSC app to create a Processing sketch that fills the viewer’s mouth with text that dynamically resizes to fit their mouth. Every time the mouth closes, the word changes to the next one in the sequence. This piece sits at an interesting juncture between kinetic type, subtitles, and lip reading. Now that I have built this tool, I intend to brainstorm ideas for how I could use it to make a finished piece or performance. I am interested in juxtaposing the spoken word with the written word. I am also interested in finding out whether this has any applications for assisting the deaf.
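The core of the idea fits in a small Processing sketch. This is a minimal, illustrative version (not the finished code, which is on GitHub, linked below), assuming FaceOSC's standard OSC messages (/pose/position, /pose/scale, /gesture/mouth/width, /gesture/mouth/height) received over the oscP5 library on FaceOSC's default port 8338; the word list and the open/closed threshold are placeholders.

import oscP5.*;

OscP5 osc;
String[] words = { "HELLO", "WORLD", "SPEAK" };  // placeholder word sequence
int wordIndex = 0;
float mouthWidth = 0, mouthHeight = 0;
float posX, posY, faceScale = 1;
boolean mouthWasOpen = false;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 8338);         // FaceOSC's default output port
  textAlign(CENTER, CENTER);
  fill(255);
}

void draw() {
  background(0);
  boolean mouthOpen = mouthHeight > 2; // illustrative open/closed threshold
  if (mouthWasOpen && !mouthOpen) {
    wordIndex = (wordIndex + 1) % words.length;  // advance word on each close
  }
  mouthWasOpen = mouthOpen;

  if (mouthOpen) {
    // approximate pixel size of the mouth opening
    float w = mouthWidth * faceScale;
    float h = mouthHeight * faceScale;
    float ts = max(h, 8);
    textSize(ts);
    // shrink the type until the current word fits the mouth's width
    while (textWidth(words[wordIndex]) > w && ts > 4) {
      ts -= 1;
      textSize(ts);
    }
    text(words[wordIndex], posX, posY);
  }
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pose/position")) {
    posX = m.get(0).floatValue();
    posY = m.get(1).floatValue();
  } else if (m.checkAddrPattern("/pose/scale")) {
    faceScale = m.get(0).floatValue();
  } else if (m.checkAddrPattern("/gesture/mouth/width")) {
    mouthWidth = m.get(0).floatValue();
  } else if (m.checkAddrPattern("/gesture/mouth/height")) {
    mouthHeight = m.get(0).floatValue();
  }
}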

Code on GitHub: https://github.com/crecord/faceOSC