lumar-lookingoutwards2

I saw this recent work. I thought it was fun to see the Google Draw experiments made tangible and interactive, but… I actually included this piece because I wanted to raise a potential critique: beyond the physical form making it easier to exhibit, what does the tangible nature of the machine do for this experience? Does it fundamentally change or enhance the interaction? What the machine is doing is something the digital form can do just as easily. The way the user inputs things here is more or less the same as on the web version (a stiff, vertical pen on paper instead of a mouse on a screen): they begin to draw a doodle, and the machine tries to guess what it is and ‘autocomplete’ it. When it doesn’t line up or guess correctly, you end up with your drawing recreated as a strange hybrid mixed with something visually similar. Do I want to keep the end product? Not really. Do I really cherish the experience? I don’t know; it doesn’t bring much new to the table that Google’s web version didn’t already, in terms of getting a sense of how good (or bad) the AI system behind it has gotten at image/object classification and computer vision.

So what is it that it brings? Is it the experience of seeing a machine collaborate intelligently with you in real time?

Kind of like Sougwen’s work — (see below) ?

Sougwen Chung, Drawing Operations Unit: Generation 2 (Memory), Collaboration, 2017 from sougwen on Vimeo.

sjang-lookingoutwards02

Sougwen Chung, Drawing Operations Unit (Generation 1 & 2, 2017-8) 

Drawing Operations Unit: Generation 2 – MEMORY

Sougwen Chung’s Drawing Operations Unit is the artist’s ongoing exploration of how a human and robotic arm could collaborate to create drawings together. In Drawing Operations Unit: Generation 1 (D.O.U.G._1), the robotic arm mimics the artist’s gestural strokes by analyzing the process in real-time through computer vision software and moving synchronously/interpretatively to complement the artist’s mark-making. In D.O.U.G._2, the robotic arm’s movement is generated from neural nets trained on the artist’s previous drawing gestures. The machine’s behavior exhibits its interpretation of what it has learned about the artist’s style and process from before, which functions like a working memory.

I love how the drawing comes into being through a complex dynamic between the human and machine – the act of drawing becomes a performance, a beautiful duet. There is this constant dance of interpretation and negotiation happening between the two, both always vigilant and aware of each other’s motions, trying to strike a balance. The work challenges the idea of agency and creative process. The drawing becomes an artifact that captures their history of interaction.

There are caveats to the work as to what and how much the machine can learn, and whether it could contribute something more than just learned behavior. As I am painfully aware of the limitations of computer vision, I cannot help but wonder how much the machine is capable of ‘seeing’. To what extent could it capture all the subtleties involved in the gestural movements of the artist’s hand creating those marks? What qualities of the gesture and mark-making does it focus on learning? Does it capture the velocity of the gestural mark, the pressure of the pencil against the paper through conjecture? Are there only certain types of drawings one can create through this process?

It would also be wonderful to see people other than the artist draw with the machine, to see the diversity of output this process is capable of creating. The ways people draw are highly individualistic and idiosyncratic, so it would be interesting to see how the machine reacts and interprets these differences. I would also like to see if the machine could exhibit an element of unpredictability that goes beyond data-driven learned behavior, and somehow provoke and inspire the artist to push the drawing in unexpected creative directions.

Project Links:  D.O.U.G._1   |   D.O.U.G._2

ngdon-LookingOutwards-2

NORAA (Machinic Doodles)

A human/machine collaborative drawing on Creative Applications:

https://www.creativeapplications.net/processing/noraa-machinic-doodles-a-human-machine-collaborative-drawing/

  • Explain the project in a sentence or two (what it is, how it operates, etc.);

NORAA (Machinic Doodles) is a plotter that first duplicates the user’s doodle and then, based on its understanding of what the doodle is, finishes the drawing.

  • Explain what inspires you about the project (i.e. what you find interesting or admirable);

I find the doodles, which are from Google’s QuickDraw dataset, very interesting and expressive. They also reveal how ordinary people think about and draw common objects. They’re very refreshing to look at, especially after spending too much time with fine art. However, I always wondered whether they would look even better physically drawn instead of stored digitally.

I think this project brings out these qualities very well with pen and paper drawings.

I’m also drawn to the machinery, which is elegant visually, and well documented in their video.

  • Critique the project: describe how it might have been more effective; discuss some of the intriguing possibilities that it suggests, or opportunities that it missed; explain what you think the creator(s) got right, and how they got it right.

I think the interaction could be more complicated. The current idea of how it collaborates with users is too easy to come up with, and is basically just a SketchRNN demo. I wonder what other kinds of fun experiences could be achieved, given that they already have excellent software and hardware, especially since the installation was shown in September 2018, by which point SketchRNN and QuickDraw had already been around for a while.

  • Research the project’s chain of influences. Dig up the ‘deep background’, and compare the project with related work or prior art, if appropriate. What sources inspired the creator of this project? What was “their” Looking Outwards?

I think they were mainly inspired by SketchRNN, a sequence model trained on line drawings that also include temporal information.

I think creative collaboration with machines has been explored a lot recently. Google’s Magenta creates music collaboratively with users, and there is also all that pix2pix work that turns your doodles into complex-looking art.

  • Embedding a YouTube or Vimeo video is great, but you should also prepare and upload an animated GIF to this WordPress.

ya-LookingOutwards-2

Bleep Space is an iOS app and arcade machine in which players explore a sequencer with unfamiliar buttons to create noise-pop music. Each button is tied to a unique sound and visual, and players can assign the button to a sequencer slot to create their own rhythms and melodies.
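The button-to-slot interaction described above can be sketched in a few lines. This is purely my own illustration of the idea, not Andy Wallace’s code; the function names and the fixed four-slot pattern are assumptions.

```python
# Minimal step-sequencer sketch (my illustration, not Bleep Space's code):
# each slot holds the id of a sound, and the sequencer walks the slots
# in order, one per beat, looping forever in the real app.

def make_sequencer(n_slots):
    """Start with an empty pattern: no sound assigned to any slot."""
    return [None] * n_slots

def assign(slots, index, sound_id):
    """The player maps one of the unfamiliar buttons to a slot."""
    slots[index] = sound_id

def play_once(slots):
    """Return what one pass of the loop would trigger, slot by slot."""
    return ["rest" if s is None else s for s in slots]
```

A four-beat pattern with two assigned buttons would then play as `["bleep", "rest", "bloop", "rest"]`; the real installation just repeats this pass at the current tempo while lighting up the matching visuals.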

In the article on Creative Applications Network, Andy Wallace explains that the inspiration for the work came from his experience playing with a Korg synthesizer, a device he didn’t fully understand. This theme of exploring an unfamiliar space also appears in one of Andy’s other works, Terminal Town, in which players explore the unfamiliar interface of a command-line tool to solve a puzzle.

Perhaps the most compelling part of this work is that the buttons are highly tactile and that hitting them always produces some kind of sound; a common frustration with exploring synthesizers is that some knobs don’t seem to have an immediate effect on the sound, because different synthesizer “modes” turn off certain features. However, the interaction of simply triggering audio samples seems simplistic, and there are other aspects of audio synthesis that could be explored through tactile inputs and explorative play. Works in this area include Rotor by Reactable Systems, which uses physical objects on a reactive screen to explore synthesizer systems.

kerjos-lookingoutwards02

Music Box Village

The village holds many musical structures, like the house that produces a choir-like sound when you pull on ropes attached to spinning electrical fans. These many structures offer visitors the opportunity to explore the village’s sounds collaboratively, to see what rhythm or cacophony they can produce together. For professional performers, the village poses the question of how to adapt to any concert venue: how their skills apply to the space, what sounds they can make there, and how they are visible to the public.

While the Music Box Village offers the aesthetic of ruggedness, and offers the opportunity for a communal, spontaneous gathering of amateur musicians, I think the creators’ decision to host live events in it makes clear that it is ideally a site for professional performances. One of the aspects I like is that, after seeing professional performers and watching them leave, audience members can revisit the site and try to recreate the same sounds on their own.

lass-LookingOutwards2

The Sandbox of Life by Sensebellum is an installation that uses sand, computer vision, and projection mappings to illuminate a sandbox with different imagery depending on the height of the sand. Users can sculpt the sand using their hands or brushes. The sandbox projects in different modes, including earth terrain, lasers, and even Game of Life cells that emerge from sand boundaries. I am interested in this project because it requires a touch input and produces a visual output, but playing with sand is much more sensory than, for example, touching a screen. There is a fluidity to the sand that creates very interesting projections. I also like how the project includes several different modes, since it’s repurposing the technology to create a variety of experiences. In general, I think projects involving projection mapping are pretty cool! I enjoy the combination of digital and physical that makes art feel more involved.
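The “Game of Life cells that emerge from sand boundaries” mode can be imagined as two steps: seed live cells wherever the sand height crosses a threshold (the sculpted edges), then run the standard Life rules. This is a sketch under my own assumptions, not Sensebellum’s actual software; the threshold value and edge test are made up for illustration.

```python
# Sketch (my assumptions, not Sensebellum's code): seed Conway's Game of
# Life from the boundaries of a sand height map, then step the rules.

def edges(height, threshold=5):
    """A cell is born where high sand meets low sand (a sculpted edge)."""
    rows, cols = len(height), len(height[0])
    grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if height[r][c] >= threshold:
                # alive if any 4-neighbor is below the threshold
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and height[nr][nc] < threshold:
                        grid[r][c] = 1
    return grid

def life_step(grid):
    """One step of the standard B3/S23 Game of Life rules (wrapping edges)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt
```

In the installation the projection would redraw the live cells onto the sand after every `life_step`, so re-sculpting the sand continuously reseeds the simulation.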

yeen-lookingoutwards2

Silk is an interactive work of generative art.

It is a website that allows users to create organic shapes with minimal mouse control.

What I appreciate about this generative drawing tool is how ACCESSIBLE (easy to use, easy to access) and how well BALANCED it is: it turns simple strokes into mesmerizing, complex, colorful visuals, yet it gives me so much freedom that I do not feel restrained by its power. I like how this tool represents the immense power of simple ideas. It matches my personal goal of delivering powerful messages with simple concepts. It is a simple concept WELL DONE.

However, what I do not like about this tool (though I do not yet have a solution) is how little personal connection I feel towards “my creation”: all products look pretty much the same (same style, same feel, same mechanics). Although there is much more to explore, I quickly get bored by it.

It is a tool created 8 years ago by Yuri Vishnevsky with sound designer Mat Jarvis.

lchi – LookingOutwards-2

Ganbreeder is a machine learning image generator by Joel Simon. It lets users choose an image as the root and decide how different the new image should be. It also lets users crossbreed two different images to generate new ones.
First of all, the “Make Children” button is very addictive and satisfying. With just one button and one slider, it gives me a sense that I have some kind of control or influence over the images to be generated. I then tried to crossbreed two existing images to create the image in my head. I don’t think the GAN got close, but it seems that doesn’t matter: by the time I saw the newly generated images, I had already forgotten the image in my head and was intrigued by the new ones.
I think the project taps into the desire to constantly see new, unexpected, stimulating imagery. Combined with the pseudo sense of power, the feeling that you can somehow steer where the images go, it is a facade that is getting harder to see past.

arialy-lookingoutwards2

Light Kinetics – Espadaysantacruz Studio

Light Kinetics is an installation of tungsten light bulbs that gives light weight: the force of a tap on the first light bulb sends a ball of light down the wooden rail of bulbs. A piezoelectric sensor on the first bulb captures the force, and the physics is simulated in Unity.
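The tap-to-light mapping can be imagined as a simple 1D physics loop: the measured force sets an initial velocity, friction slows the “ball” down, and each bulb’s brightness falls off with its distance from the ball. This is a toy re-creation under my own assumptions (the constants and falloff are invented), not Espadaysantacruz’s Unity code.

```python
# Toy re-creation (assumptions mine, not the artists' Unity simulation):
# a tap launches a "ball of light" down a 1D rail of bulbs.

FRICTION = 2.0   # deceleration in bulbs/s^2 (made-up constant)
DT = 0.05        # simulation timestep in seconds

def simulate(force, n_bulbs, steps):
    """Return per-step brightness lists (0..1) for each bulb."""
    pos, vel = 0.0, force            # tap force sets the initial velocity
    frames = []
    for _ in range(steps):
        vel = max(0.0, vel - FRICTION * DT)        # friction slows the ball
        pos = min(pos + vel * DT, n_bulbs - 1)     # stop at the last bulb
        # brightness falls off linearly with distance from the ball
        frames.append([max(0.0, 1.0 - abs(i - pos)) for i in range(n_bulbs)])
    return frames
```

A harder tap gives a larger initial velocity, so the ball travels farther before friction stops it, which matches the behavior described in the piece (and the temptation to punch the first bulb).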

Giving light weight and force is an unusual and beautiful sight. While light’s movement is too fast to see, here it has been slowed down across the many bulbs. That being said, while the floating wood is a lovely form, the piece feels a bit like a smaller prototype for a larger one. This might be due partly to the mess of cables on the side and the hints of fishing wire suspending the structure. On one hand, making the piece too long might cause people to punch the first bulb to try to get their light ball as far as possible, somewhat breaking the subtlety of the piece. But I do think some polishing could make the piece feel more complete.


jaqaur – Looking Outwards 2

When looking into participatory works of art, one that really stood out to me was “Polyphonic Playground” by Studio PSK.

This piece is a “playground” for adults, complete with swings, slides, and bars to climb on. It’s covered in conductive thread, paint, and tape that generate sound when people touch them. By playing on the piece, a unique “song” of sorts is produced, composed partly of sounds recorded by beatboxer Reeps One.

The artists behind this piece say that the idea of play was central to the design of the playground. They said that they hoped a playful approach would allow them to better connect with the audience.

One thing I really appreciate about Polyphonic Playground is the intentionality with which the sounds were designed. It feels like an actual instrument, rather than a random collection of noise. This is demonstrated by the fact that it can actually be “performed” on (shown in the video below). It works as well for a trained musician as for a casual participant: a satisfying, fun experience.

Related links:

https://www.bareconductive.com/news/qa-polyphonic-playground-by-studio-psk/

https://www.studiopsk.com/polyphonicplayground.html