I saw this recent work. It was fun to see the Google Draw experiments made tangible and interactive, but I actually included this piece because I wanted to raise a potential critique: beyond the physical form making it easier to exhibit, what does the tangible nature of the machine do for this experience? Does it fundamentally change or enhance the interaction? What the machine is doing is something the digital form can do just as easily. The way the user provides input here is more or less the same as on the web version (a stiff, vertical pen on paper instead of a mouse on screen): they begin to draw a doodle, the machine tries to guess what it is and 'autocomplete' it, and when it doesn't line up or guesses incorrectly, your drawing gets recreated as a strange hybrid of what you drew and something visually similar. Do I want to keep the end product? Not really. Do I really cherish the experience? I don't know; it doesn't bring much new to the table that Google's web version didn't already, in terms of getting a sense of how good (or bad) the AI system behind it has gotten at image/object classification and computer vision.
So what does it bring, then? Is it the experience of seeing a machine collaborate intelligently with you in real time?
Kind of like Sougwen’s work (see below)?