a-FinalProject

tweetable sentence

“makes you think… 🤔” is a system for maximizing brain usage. Stare at the cross, and the system will automatically optimize what you see to maximally activate your brain.

abstract overview

“makes you think… 🤔” combines a portable EEG sensor with a parametric image generator. It modifies the generated image to maximise the beta- and gamma-wave activity produced by your brain. It could be considered the opposite of meditation.

big pic

 

narrative

I was inspired by the way the EEG sensor used – the “Muse” headband – is framed as “next-level” meditation in its advertising.

“makes you think… 🤔” uses non-gradient-based optimization of the input parameters of a parametric image generator, with the loss value for each image derived from the level of brain activation measured while that image is being viewed.
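As a rough illustration of this loop (not the actual implementation), the optimization could be as simple as a (1+1) evolution strategy; `render` and `measure_activation` below are placeholder stand-ins for the real parametric image generator and the Muse EEG pipeline:

```python
import numpy as np

def render(params):
    # placeholder for the real parametric image generator:
    # here, just a flat colour field derived from the first three parameters
    rgb = 0.5 * (np.tanh(params[:3]) + 1.0)
    return np.ones((64, 64, 3)) * rgb

def measure_activation(image):
    # placeholder for the real loss: display `image` and return the
    # beta/gamma band power measured by the Muse headband
    return float(np.random.rand())

def optimize(n_params=16, n_steps=200, sigma=0.1):
    # simple (1+1) evolution strategy: perturb the parameters and keep the
    # perturbation only if the measured brain activation increases
    params = np.random.randn(n_params)
    best = measure_activation(render(params))
    for _ in range(n_steps):
        candidate = params + sigma * np.random.randn(n_params)
        score = measure_activation(render(candidate))
        if score > best:
            params, best = candidate, score
    return params, best
```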

a – FinalProposal

I want to make a system that attempts to maximise some bodily response from a viewer.

This requires a parametric image and a way to measure a bodily response in real time. Given the hardware available, the simplest options seem to be either heartbeat data or the Muse EEG headband.

The project works as follows: modify the parametric image; evaluate the response; estimate the gradient of the emotional response with respect to the image’s parameters; take a step of gradient ascent in the direction in parameter space that maximises the estimated response, using reinforcement learning or a genetic algorithm; repeat. An alternative route would be to train a neural network to predict the emotional response and optimize against this surrogate world model, whose gradients would enable using stochastic gradient descent to optimize the image much faster.
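To make the surrogate-model route concrete, here is a minimal sketch (the network size and names are arbitrary choices, not a committed design): a small network is fitted to recorded (parameters, measured response) pairs, and the image parameters are then optimized by gradient ascent through the frozen surrogate.

```python
import torch
import torch.nn as nn

# a tiny surrogate that predicts the measured response from the image parameters
surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def fit_surrogate(params_batch, responses, epochs=200, lr=1e-3):
    # params_batch: (N, 16) parameter vectors shown so far
    # responses:    (N, 1) measured responses for those images
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(params_batch), responses)
        loss.backward()
        opt.step()

def optimize_image_params(steps=100, lr=0.05):
    # gradient ascent on the *input* parameters through the frozen surrogate
    for p in surrogate.parameters():
        p.requires_grad_(False)
    params = torch.randn(1, 16, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-surrogate(params)).mean().backward()  # maximise predicted response
        opt.step()
    return params.detach()
```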

Given the slow response of heartbeat data, I should use the Muse headband. In addition, we know the approximate timeframe a given visual signal takes to be processed by the brain, although it remains to be seen whether the noisy data from the EEG headband can be optimized against.

This project parallels work done using biofeedback in therapy and meditation, although with the opposite goal. An example of a project attempting this is SOLAR (below), in which a VR environment is designed to guide the participant into meditation using biofeedback (presumably from a Muse-like sensor).

For the parametric image, there are a variety of options. Currently, I am leaning towards using either a large colour field or a generative neural network to provide a differentiable parametric output. It would be awesome to use BigGAN to generate complex imagery, but the simplicity of the colour field is also appealing. A midway option would be to use something like a CPPN, a neural network architecture that produces interesting abstract patterns which can be optimized into recognizable shapes (a minimal sketch follows the links below).

http://picbreeder.com
from http://blog.otoro.net/2016/03/25/generating-abstract-patterns-with-tensorflow/
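For reference, a CPPN is small enough to sketch in a few lines of numpy, in the spirit of the blog post linked above; the input features, depth and widths here are arbitrary choices, and the random weights are exactly the parameters the system would optimize:

```python
import numpy as np

def cppn_image(width=256, height=256, n_hidden=32, depth=4, seed=0):
    # minimal CPPN: a small random network mapping (x, y, r) pixel coordinates
    # to a greyscale value; its weights are the image parameters
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    r = np.sqrt(x**2 + y**2)
    h = np.stack([x, y, r], axis=-1).reshape(-1, 3)
    for _ in range(depth):
        w = rng.standard_normal((h.shape[1], n_hidden))
        h = np.tanh(h @ w)
    out = 1.0 / (1.0 + np.exp(-h @ rng.standard_normal((n_hidden, 1))))  # sigmoid
    return out.reshape(height, width)
```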

 

a-LookingOutwards03

HeadLight is a mixed-reality display system that consists of a head-mounted projector combined with spatial tracking and depth inference.

HeadLight uses a Vive tracker to track the head pose of the wearer.  Combined with the depth information of the space in front of the viewer, this enables the wide-angle projector mounted on the viewer’s head to projection-map the room and the objects within it, with a working 3D illusion from the viewer’s point of view.


 

 

Speed of Light is a short film created using two pico projectors and a movie camera. The pico projectors are used to project animations of the film’s subjects onto the surfaces of a room. By moving these animations, Sharp & Jenkins turn the room into a movie set for these mixed-reality “characters.” This piece could have come out of simply experimenting with stock footage on a black background, or out of extensive planning and creation of the animations to match each scene.

 

 

GVBeestje is a sticker used to activate a game on the Amsterdam public transport system (the GVB). It consists of a set of stickers of a beest (beast) that invites the viewer to play: by moving their head, the rider uses the parallax between foreground and background to position the beest so that it appears to eat the people the bus is passing.

 

All of these projects explore non-traditional ways of activating a space, much in the way that “AR” or “MR” does. GVBeestje succeeds in operationalizing the latent parallax interaction riders experience daily into a game, using nothing more than a sticker. Speed of Light is an interesting concept, but the film itself is relatively uninteresting; the idea of it arising out of play, or of a tool that lets one play in this way, is more exciting than the film itself. The HeadLight is an ugly and cumbersome device with sub-par tracking (even in the official documentation video). Its single-user nature is interesting, though, as is the notion of augmenting space in an egocentric way that other people can see, having their space overridden.

 

 

 

conye & a — DrawingSoftware

Mldraw from aman tiwari on Vimeo.

origin

Mldraw was born out of seeing the potential of the body of pix2pix research for turning drawings into other images, and the severe lack of a usable, “useful” and accessible tool built on this technology.

interface

Mldraw’s interface is inspired by cute, techy/anti-techy retro aesthetics, such as the work of Sailor Mercury and the Bubblesort Zines. We wanted it to be fun, novel, exciting and deeply differentiated from the world of arXiv papers and programmer art. We felt like we were building the tool for an audience who would appreciate this aesthetic, and hopefully scaring away people who are not open to it.

dream

Our dream is for Mldraw to be the easiest tool for a researcher to integrate their work into.  We would love to see more models put into Mldraw.

future

We want to deploy Mldraw to a publicly accessible website as soon as possible, potentially on http://glitch.me or http://mldraw.com. We would like to add a mascot-based tutorial (see below for a sketch of the mascot). In addition, it would be useful to split the part of the TypeScript frontend that communicates with the backend server into its own package, as it is already independent of the UI implementation. This would allow, for instance, p5 sketches to be mldrawn.

process & implementation

Mldraw is implemented as a TypeScript frontend using choo.js as a UI framework, with a Python registry server and a Python adapter library, along with a number of instantiations of the adapter library for specific models.

The frontend communicates with the registry server over socket.io; the registry server passes it a list of models and their URLs, and the frontend then communicates with the models directly. This means that, for example, we can host a registry server for Mldraw without having to pay the cost of hosting every model it supports.
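To illustrate this architecture (this is not Mldraw’s actual protocol – the event names and message shapes below are made up), a registry server of this kind can be sketched with python-socketio:

```python
import socketio
from aiohttp import web

sio = socketio.AsyncServer(cors_allowed_origins="*")
app = web.Application()
sio.attach(app)

models = {}  # model name -> URL of the server hosting that model

@sio.event
async def register_model(sid, data):
    # a model backend announces itself, e.g. {"name": "cat", "url": "http://..."}
    models[data["name"]] = data["url"]

@sio.event
async def list_models(sid):
    # the frontend asks for the available models; it then talks to them directly
    await sio.emit("models", [{"name": n, "url": u} for n, u in models.items()], to=sid)

if __name__ == "__main__":
    web.run_app(app, port=8000)
```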

Mldraw also supports models that run locally on the client (in the above video, the cat, Pikachu and bag models run locally, whilst the other models are hosted on remote servers).

In service of the above desire to make Mldraw extensible, we have made it easy to add a new model – all that is required is some Python interface* to the model and a function that takes in an image and returns an image. Our model adapter handles the rest, including registering the model with the server hosting an Mldraw interface (a sketch of what this looks like follows the footnote below).

*This is not actually necessary. Any language that has a socket.io library can be Mldrawn, but its authors would have to write the part that talks to the registry server and parses the messages themselves.
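For a sense of the intended workflow, wiring up a new model could look roughly like this; `mldraw_adapter` and `serve_model` are hypothetical names standing in for the adapter library’s actual API:

```python
import numpy as np
# `mldraw_adapter` and `serve_model` are hypothetical stand-ins for the real
# adapter library; the actual package and function names may differ.
from mldraw_adapter import serve_model

def my_model(image: np.ndarray) -> np.ndarray:
    # the only contract: take in an image and return an image
    # (placeholder behaviour: invert the drawing)
    return 255 - image

# the adapter registers the model with the registry server and handles the
# socket.io messaging, so the model author only writes the function above
serve_model(my_model, name="invert", registry_url="http://localhost:8000")
```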

process images

The first image made with Mldraw. Note that this is with the “shoe” model.

The first sketch of the desired interface.


The first implementation of the real interface, with some debug views.

The first implementation of a better interface.

The first test with multiple models, after the new UI had been implemented (we had to wait for the model-selection UI to be implemented first).


The current Mldraw interface.

Our future tutorial mascot, with IK’d arm.

Some creations from Mldraw, in chronological order.

atiwari1-lookingOutwards1

Assemblance was the first media-art piece I saw in an art gallery. I saw it at the Digital Revolution exhibition at the Barbican in 2014. It was created for the show by Umbrellium, a team of many people with two creative directors.

I found its mix of participatory, collaborative interaction and strange visual experiences compelling. I had never experienced a projection that could so clearly define shapes and create semi-solid surfaces. I found myself feeling almost surprised each time my hand pushed the projected walls away without physically feeling them.

It was successful in eliciting participation amongst the viewers, as there weren’t any explicit instructions detailing the various gestures you could use to draw and remove rigid objects and chains, leaving viewers to show each other the movements to make to activate them.  The objects could be pushed around and would collide with other people’s creations.

I spent a while in the installation, and it was also interesting to see how first-time participants would react: mostly by drawing a wall around themselves and pushing it around.

The possible visuals were necessarily limited, being 2D shapes extruded through the projection volume, but they still had enough variability to be satisfying.

 

Although my work isn’t necessarily directly inspired by Assemblance, it still points to interesting directions in participatory, emergent interaction between people.

a-2DPhysics

I was interested in using a fluid simulation (I used this one), but I couldn’t think of a way to visualize and interact with it satisfactorily that hadn’t already been extensively explored.

I decided to use boids to drive the interaction with the liquid, and to have the liquid apply forces back on the boids, to create interesting emergent behaviour. This coupling made the system hard to balance, as it forms a positive feedback loop.
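The actual sketch is written in p5.js; purely as a conceptual sketch of the two-way coupling (with made-up constants and shapes), the per-frame step looks something like this:

```python
import numpy as np

N = 64                                  # fluid grid resolution (the sketch uses 64x64)
vel = np.zeros((N, N, 2))               # fluid velocity field
boid_pos = np.random.rand(20, 2) * N    # boid positions on the grid
boid_vel = np.random.randn(20, 2)       # boid velocities

def couple(dt=0.1, push=0.5, drag=0.3):
    # one step of the two-way coupling: boids push the fluid where they are,
    # and the local fluid velocity drags the boids (a positive feedback loop)
    global boid_pos, boid_vel
    ij = np.clip(boid_pos.astype(int), 0, N - 1)
    vel[ij[:, 0], ij[:, 1]] += push * boid_vel * dt               # boids stir the fluid
    boid_vel += drag * (vel[ij[:, 0], ij[:, 1]] - boid_vel) * dt  # fluid drags the boids
    boid_pos = (boid_pos + boid_vel * dt) % N                     # wrap around the grid
```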

I experimented with a number of ways to visualise the boids and the liquid: the boids were drawn on top at first, then moved behind the liquid unless they were under a certain size (to give the impression of them “jumping out”).

However, neither of these gave the liquid the “depth” and “murkiness” that became my goals for this piece as it went on. Eventually, I realised I could just make the liquid black, and so I did.

I am satisfied with the result of exploring a fluid simulation system in a new (to me) way, with “live” <s>creatures</s> boids. I am especially happy with the murky, inky effect that sometimes emerges. Looking back, I wish I had added more damping to the boids. In addition, it would be interesting to explore coupling them with a GPU fluid solver, which would let the fluid be simulated at a much finer resolution (here it is just 64×64).

You can see the sketch (and the source code) at https://editor.p5js.org/aman/full/HysZkJ6fV.

tadpoles from aman tiwari on Vimeo.