SWEATSHOPPE created augmented reality video paintings on walls. I can see cool things being produced from this, but their documentation does not appeal to me. It gives off the impression that they are trying to be edgy, with people sucking popsicles in their videos and low framerates showing them running in the dark. I want more variations that show off the full range of emotions this project is capable of evoking in people, and I want it to be more available to the masses so that all kinds of people can interact with it. It reminded me of Camille Utterback’s Falling Words project, which also involves projections being affected by physical interactions. Her documentation is closer to what I want SWEATSHOPPE’s to be.
Saba Khalilnaji used supervised machine learning to teach a bot to play foosball. He documented his theory here. For the hardware, he used a PS3 Eye for the camera and an Arduino for the moving parts. The complexity and ambition of the project amazes me. I am also interested in artificially behaving creatures, so watching the table play against a human really makes me happy. With this, it is almost as if you have a friend to play foosball with :) There are of course technical limitations to the project, as he explained in his documentation, and the way the bot moves is too erratic at times to tell whether it is actually intelligent. It reminded me of the Street Fighter AIs that people wrote that learn from human players and can beat them. Nintendo’s Amiibos are also similar, learning from a player’s play style and adopting it as their own.
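Khalilnaji's actual model and training setup aren't described in this post, but the supervised-learning idea is simple to sketch: record what a human player does for each camera-tracked ball position, then fit a model that maps ball position to a paddle command. Below is a minimal illustration under those assumptions, using synthetic data and a least-squares linear fit; the data, the "teacher" behavior, and the function names are all hypothetical.

```python
import numpy as np

# Hypothetical training data: each row is a camera-tracked ball
# position (ball_x, ball_y) in normalized table coordinates.
rng = np.random.default_rng(0)
ball = rng.uniform(0.0, 1.0, size=(200, 2))

# Assume the human "teacher" simply tracks the ball's x-coordinate;
# their recorded paddle positions are that, plus a little noise.
paddle = ball[:, 0] + rng.normal(0.0, 0.01, size=200)

# Supervised step: fit paddle ~ ball via least squares (bias column added).
X = np.hstack([ball, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(X, paddle, rcond=None)

def predict_paddle(ball_x, ball_y):
    """Map an observed ball position to a paddle command, imitating the teacher."""
    return float(np.array([ball_x, ball_y, 1.0]) @ w)

print(predict_paddle(0.5, 0.3))  # should land near 0.5, imitating the teacher
```

A real bot would of course need a richer state (ball velocity, which rod is in play) and a faster control loop, which is presumably where the project's complexity lives.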