Memo Akten: “Learning to See”: You Are What You See. 2017.

On Tuesday, Chelsea Manning called for an ethics of engineering from the programmers bringing new algorithmic tools into the world. This came during her conversation with Heather Dewey-Hagborg as part of the School of Art’s Spring lecture series, which also focused on the machine-learning-based models of Manning’s face on which the two had collaborated.

It’s fortunate that artists like Dewey-Hagborg are among the earliest to help articulate activists’ warnings against the predictive algorithms quickly becoming integrated into our daily lives, because representing something as esoteric and shrouded in fantasy as machine learning is a difficult task at best.

This is what Memo Akten claims to do with his “Learning to See” project: by visualizing the limits of the machine-learning algorithms that in other contexts are deployed to sell us products and identify enemies in occupied countries, Akten suggests that they are just as capable of error as the human operators in those contexts.

But I think it’s a bit disappointing that, with such a powerful tool at his disposal—in particular one that runs live—Akten chooses to turn his algorithm on the same subject-dataset pairings that Google uses: the programmer and clouds, or flowers, or something as banal as his headphones and keys. If Akten claims to use his live-action, neural-network predictions to address the flaws of these algorithms, he does just the opposite by generating beautiful AR visions of himself and the view from his office window. Why not apply his work to different subjects and write about the results?