amwatson

19 Feb 2015

Selective Memory Theatre is an installation that addresses the interaction between memory and perception.  It retrieves collections of Flickr images and, in real time, distorts and decays each image until a “similar” image is retrieved and associated with the earlier “memory”.  Two screens, the “perception layer” and the “memory layer”, depict the interaction between the senses and memory.
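To make the decay concrete, here is a minimal sketch of the kind of step a “perception layer” loop might apply each tick, assuming Pillow and an RGB image; it is purely illustrative and not the installation’s actual code.

```python
from PIL import Image, ImageFilter

def decay_step(img, amount=0.05):
    """Return a slightly more degraded copy of an RGB image:
    blur it a little and fade it toward neutral gray."""
    blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
    gray = Image.new("RGB", img.size, (128, 128, 128))
    return Image.blend(blurred, gray, amount)
```

Run repeatedly on its own output, a step like this drifts the “memory” further and further from what was originally perceived.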

This project stood out to me because I’m really interested in theatre that tries to depict the mind and explore the mechanics of perception, and it’s very cool to see someone using tech and real-time computation to create visualizations for the stage.  I think it defeats the purpose of “real-time” a bit to rely on manual tags to denote similarity rather than actually determining it computationally (otherwise, the machine isn’t really acting as a brain; it’s just sort of pretending to).  I’d like to see something like this, but with some clever processing to detect similar features in the existing dataset.
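One rough way to approximate that kind of visual similarity without tags is a perceptual hash: downscale each image, threshold on mean brightness, and compare the resulting bit strings.  The sketch below is a hypothetical illustration of that idea using Pillow, not anything from the installation itself.

```python
from PIL import Image

def average_hash(path, size=8):
    """Downscale to a tiny grayscale image and threshold on the mean,
    producing a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; smaller means more visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

def most_similar(target_path, candidate_paths):
    """Return the candidate image whose hash is closest to the target's."""
    target = average_hash(target_path)
    return min(candidate_paths,
               key=lambda p: hamming_distance(target, average_hash(p)))
```

Even something this crude would let the “memory layer” recall images by how they look rather than by how someone labeled them.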

The artist explains that he was inspired by the “permanence” of digital memory, and that he wanted instead to model a more human notion of impermanent memory.  I’m reminded a lot of the Entropy programming language, which attacked the permanence of digital memory by mutating data every time it was touched.  The artist has also made previous real-time visualizations that he calls theatre.

Ethical Thinking is a set of “smart” devices (such as the fan displayed) that are intended to be directed by ethics rather than user instruction.  Its settings determine which moral code it uses and, when deciding how it should function, the device consults its memory, mathematics, and ultimately Mechanical Turk to determine which action is most ethical.
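As described, that is a cascade: remembered precedent first, then a computed judgment, then human opinion as a last resort.  The sketch below is my own hypothetical rendering of such a cascade; every name and threshold in it is an assumption for illustration, not the project’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalDevice:
    ask_crowd: callable                      # e.g., would post a Mechanical Turk question
    memory: dict = field(default_factory=dict)

    def decide(self, action, observed_effects):
        """observed_effects: list of (benefit, harm) pairs for affected people."""
        key = (action, tuple(observed_effects))
        if key in self.memory:               # 1. remembered precedent
            return self.memory[key]

        score = sum(b - h for b, h in observed_effects)
        if abs(score) > 1.0:                 # 2. confident utilitarian-style score
            verdict = score > 0
        else:                                # 3. unsure: escalate to humans
            verdict = self.ask_crowd(f"Is it ethical to {action}?")

        self.memory[key] = verdict
        return verdict

# Usage: the fan deciding whether to turn toward one person.
device = EthicalDevice(ask_crowd=lambda q: True)   # stubbed human answer
device.decide("blow air toward person A", [(0.8, 0.0), (0.0, 0.3)])
```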

I really liked this project, because I think it explores a lot of really interesting questions about how to engineer differently, and about the limitations of technology.  The way in which the project was built is as much of an artistic exploration as the final output, if not more: as engineers, we are used to a couple of base heuristics when we design.  Designing something with vastly different expectations and aims, something fundamentally more human, requires a unique and culturally significant departure from the way we traditionally think about machines.

With that in mind, I’d be interested in knowing more about what went into the design of the machine to make it “ethical”.  I want a better sense of why they chose their different sets of ethics, and how an atheist should be expected to operate a fan differently from a Hindu.  I’d also like to see the machine solve more interesting questions than the one the fan seems to.  Finally, I’m not convinced the fan in the video knows there are two bodies, where they are, and which one is fatter.  If not, I’d like to see it require less human interaction and make decisions based on elements it can detect itself.

The project was inspired by the observation that machines so often operate under a programmed decision-making “logic” that is specific to machines.  The engineers were interested in seeing what might happen if that logic were made more human: how, for instance, would an atheist’s logic differ from a Hindu’s?  Like Selective Memory Theatre, the project identifies a certainty about how machines are made and attempts to negate it.