conye looking outwards 03

Note: I HAVE CHANGED MY IDEA, but here is the looking outwards I made with an old idea.

Sleeping Speech Idea

Telematic Artwork! If you’re having a good conversation, why let sleep stop the fun? 

sleep talking! 🙂

Potential tools:

#1: Dreaming like Mad

Dion McGregor was a prolific sleep talker whose nighttime musings were so complex and bizarre that his friend made it his lifelong project to record his sleep talking. Decades later, the tapes resurfaced in the form of a mixtape!

“Dion never actually intended the world to hear his sleep mumbles, instead they were recorded by his songwriting partner Michael Barr, who was fascinated by them. And now the album has now been re-released, 50 years on, by Torpor Vigil Records, along with more recordings of Dion’s sleep stories.” – Dan Wilkinson @ vice

“When Milt Gabler of Decca was interviewed, he called the album ‘one of the biggest flops I ever put out!'” – Dan Wilkinson @ vice

The idea of remixing sleep talking relates to how I want to take sleep movements and noises and remix them into a conversation. I’m inspired by the idea of finding meaning in meaningless speech. However, I understand why every song/album created from this sleep talk was a flop… it’s only compelling to a point.

More on this!

#2 Typatone: “Digital Typewriter that composes songs out of your writing”

The act of writing has always been an art. Now, it can also be an act of music. Each letter you type corresponds to a specific musical note, putting a new spin on your composition. Personalize your writing by choosing between six unique moods; each mood changes the speed, filter, and color of each letter’s musical note. Easily import text written in other applications with a copy-and-paste interface. When you’ve finished writing, share it and download an audio version with the click of a button. Whether it’s a message, essay, story, or poem, explore a new way of writing. Make music while you write.

Not super related, but it takes text and turns it into music. I’d like to do a bit of the opposite: sound to text! I’m inspired by the simple interface and the clean, natural integration of the concept into it, although the amount of time one spends interacting with this project would be minimal in any situation, because the interaction is rather one-dimensional and quick (I tried the demo in the article).
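The letter-to-note mapping Typatone describes could be sketched very simply. This is purely my guess at a minimal version of the idea; the scale choice, base pitch, and function names are my own assumptions, not Typatone’s actual algorithm:

```python
# Hypothetical sketch of a Typatone-style letter-to-note mapping:
# each letter indexes into a pentatonic scale (as MIDI note numbers).

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees in semitones
BASE = 60                     # middle C

def letter_to_midi(ch):
    """Map a letter to a MIDI note number; non-letters become rests (None)."""
    if not ch.isalpha():
        return None  # rest
    i = ord(ch.lower()) - ord("a")          # 0..25
    octave, degree = divmod(i, len(PENTATONIC))
    return BASE + 12 * octave + PENTATONIC[degree]

def text_to_notes(text):
    """Turn a whole string into a sequence of notes and rests."""
    return [letter_to_midi(c) for c in text]

print(text_to_notes("abc f"))  # → [60, 62, 64, None, 72]
```

A real version would also need timing and playback (Typatone adds per-mood speed, filter, and color), but the core text-to-pitch idea is this small.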

It was really hard to find good art related to my idea… does anyone have any suggestions?

kerjos-lookingoutwards03

Part 1: Skies Painted with Unnumbered Sparks


This is a city-block-sized installation by sculptor Janet Echelman and media artist Aaron Koblin. (Aaron’s been a part of many a crowdsourced art project, including the Johnny Cash project, the Sheep Market, and This Exquisite Forest.) Viewers complete the work by choosing animations from their smartphones to add to the graphics projected on Echelman’s suspended fiberwork piece.


This piece involves participants gathered in one place, unified under the impressive lightshow going on, but their contributions all travel independently to the projectors. If there’s the potential for participants to organize so that they can color the piece as a collective, it doesn’t seem to be happening in the project’s documentation.

Considering telematic art, I’ve been looking for crowdsourced projects that amount to more than just an exquisite corpse (I want something more goal-oriented, like e-Nable’s crowdsourced 3d-printed hands or EteRNA’s modelling). I think Unnumbered Sparks has the potential to be an exquisite corpse, but invites interactions from audience members that can be so independent from each other that it doesn’t need to be anything more than a more general interactive artwork.

What sets this project apart from some other audience-completed installations is that it compresses the results of the contributions into a single object visible to all the participants at once (vs. scrolling through The Sheep Market, or even zooming in to actually see details on r/place), and it limits contributions to the present (vs. the accumulation of drawings in Gutai’s Please Draw Freely, or of clay figures in Urs Fischer’s The Imperfectionist).

Part 2: Watermark


This is Ann Tarantino’s 2017 work in Millvale, right here in Pittsburgh. It visualizes the flow of water through the town along Girty’s Run, the Allegheny tributary and name of the watershed that provides drinking water to this part of Pittsburgh.

I learned about this visualization project on Saturday while attending the Water Walks Luncheon in East Liberty, a discussion of the watershed issues challenging Pittsburgh communities. Millvale frequently experiences severe flooding that has been highly destructive to the town. This is because it sits at the bottom of the watershed: with urban development built over the streams and ponds at higher elevations, rainwater and snowmelt now wash over parking lots and roadways and right into Millvale homes. Sometimes six feet of it.


This kind of visualization project is so critical because where our water comes from and goes is largely invisible to us. But it matters, as much as a house or a business at risk of flooding does. Building an understanding among Pittsburgh communities of how water flows into Millvale could help drive policy that invests more in channeling water out of the town, and in protecting and even re-exposing the Girty’s Run watershed.

Jackalope-LookingOutwards03

1. Utsukushiki Tennen by Romane Granger

This is the first part of my looking outwards. It’s an experimental stop-motion animation. Her other works are done with clay, and this one is likely clay as well, though I’m not certain. Overall I’m not a huge fan of her work; it’s just not something I particularly enjoy watching, as someone who prefers narrative over abstraction and is too impatient for videos. But there are really interesting movements and moments in her pieces that I find inspiring for the telematic art project. This one seems less polished overall than her others, such as Ocean Blues #1, but it has more relevant movements.

2. Flower gifs by Anna Taberko

These are gifs made by Anna Taberko (https://www.instagram.com/anna.taberko/). I like these for a similar reason as the first piece I listed: juicy movements. There are good easing motions too. My critique, then, is that the colors used are okay but not great. The gifs are made frame by frame using Photoshop and After Effects. Her entire body of work (on Instagram, at least) seems to be these zoetropic looping gifs, mostly of flowers, which is cool and good for branding, I guess, but a bit boring after a few.

ulbrik-lookingoutwards03

Augmenting Live Coding with Evolved Patterns – Simon Hickinbotham and Susan Stepney

Hickinbotham and Stepney integrated the ability to evolve code patterns into TidalCycles (a collaborative live-coding system for music). As the coder submits bits of code to be used in the music, each is added to a population of patterns. The fitness of patterns is determined through use and votes. Mutant versions are evolved by smartly scrambling the parsed grammar trees of the code; these can then be incorporated back alongside the hand-crafted portions. This project demonstrates a synthesis of coding and machine intelligence for a creative purpose. It integrates programming into a real-time, real-life context and facilitates using it as a material.

According to the authors, the work is a novel addition to the field of music generated through artificial evolution because their algorithm resides within the context of live coding. Previous work has focused on evolving whole pieces of music and has faced many challenges with the size of the design space and facilitating human evaluation of fitness. In contrast, live-coded music is particularly well-suited for evolutionary algorithms. The music exists as small segments of digestible code (representing a smaller design space) and the coder is already sending signals that can be used to propel adaptation (which lessens human fatigue in fitness evaluation).
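The submit/vote/mutate loop described above can be roughed out in a few lines. This is my own minimal illustration of the idea, with fitness-proportional selection and a crude token swap standing in for the authors’ grammar-tree scrambling; the class and method names are hypothetical, not the authors’ system or the TidalCycles API:

```python
import random

class PatternPool:
    """Toy population of live-coded patterns with vote-driven fitness."""

    def __init__(self):
        self.patterns = {}  # pattern (tuple of tokens) -> fitness score

    def submit(self, tokens):
        """A coder submits a pattern; it joins the population."""
        self.patterns.setdefault(tuple(tokens), 1.0)

    def vote(self, tokens, delta=1.0):
        """Use of a pattern, or an explicit vote, raises its fitness."""
        if tuple(tokens) in self.patterns:
            self.patterns[tuple(tokens)] += delta

    def mutate(self):
        """Pick a parent (fitness-proportional) and scramble two tokens."""
        pool = list(self.patterns)
        weights = [self.patterns[p] for p in pool]
        parent = list(random.choices(pool, weights=weights, k=1)[0])
        i, j = random.sample(range(len(parent)), 2)
        parent[i], parent[j] = parent[j], parent[i]  # crude "scramble"
        self.patterns.setdefault(tuple(parent), 1.0)  # mutant joins the pool
        return parent

pool = PatternPool()
pool.submit(["sound", "bd", "sn", "hh"])
pool.vote(["sound", "bd", "sn", "hh"])
print(pool.mutate())
```

The real system works on parsed grammar trees rather than flat token lists, which keeps mutants syntactically valid; the point of the sketch is only the selection pressure coming from ordinary coder activity.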

I would like to see an example of how this project could be used in a collaborative setting. In addition, controls for managing the GA population or creating multiple populations would create a more flexible system.

This work and others like it can be found here: EvoMUSART Conference

Epic Exquisite Corpse – Xavier Barrade

“I designed and developed most of the website as a personal project to gain digital experience. It hosted more than 70,000 drawings from 172 countries and is one of the biggest collaborative artworks ever made.” – Xavier Barrade

Epic Exquisite Corpse (2011) is an exquisite corpse project built from user drawings tiled infinitely in a 2D plane. Users were given a small rectangular canvas with the edges of adjacent drawings visible. Each drawing they made was added to the whole. The project accumulated 70,000 drawings to make a nearly endless mural.
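As a toy illustration of how such a 2D tiling might hand out canvases, here is a minimal sketch. It is entirely my own assumption about the mechanics, not the site’s actual logic: each new drawing goes on a free grid cell bordering an existing one, and the artist is shown the edges of its occupied neighbors.

```python
def next_cell(occupied):
    """Pick the first free cell bordering an occupied cell (scan order)."""
    if not occupied:
        return (0, 0)  # the very first drawing starts the mural
    for (x, y) in sorted(occupied):
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in occupied:
                return (nx, ny)

def neighbors(cell, occupied):
    """Occupied cells whose edges the new artist gets to see and match."""
    x, y = cell
    return [(nx, ny)
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if (nx, ny) in occupied]

occupied = {(0, 0), (1, 0)}
cell = next_cell(occupied)
print(cell, neighbors(cell, occupied))  # → (-1, 0) [(0, 0)]
```

Because every assigned cell touches at least one finished drawing, the exquisite-corpse constraint (continuing your neighbors’ edges) holds everywhere as the plane fills in.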

This project harnesses vast multitudes to work on a single piece of art. There are many variations on the online exquisite corpse (such as a comic book, a song (music video), or a forest drawing). Usually, they are connected along a single dimension, but Epic Exquisite Corpse is 2D. This lends it an aesthetic quality similar to reddit’s r/place or the Million Dollar Homepage.

I would be interested in seeing this idea extended to a non-static canvas. Users could add boids instead of vectors. This would increase the interaction between artists.

It would also be nice to find a way to introduce greater continuity across the plane. I would like to see a narrative arc or coherent structure. This could be imparted by assigning themes according to some logic or pattern.

dorsek – Looking Outwards – 2

This project, though not technically an art piece, certainly influenced my vision for the DrawingSoftware project.

To set the stage a bit: about a year ago, four scientists from Kyoto University’s Kamitani Lab released the results of a research study on using artificial intelligence to decode people’s brain scans, specifically through “deep neural networks” rather than the machine-learning methods that had previously been used for this kind of decoding with some success. They showed participants natural images, artificial and geometric shapes, and letters of the alphabet for varying lengths of time, recording brain scans throughout; they also recorded scans while participants were told to think of a specific image, or even while looking at several of the images together. According to the researchers, once the brain activity was scanned, a computer program would then “decode” the image, or as they like to say, “reverse-engineer” it.

What most intrigued me about this project was that brain scans were being used to regenerate imagery; that, and the technology (undoubtedly beyond my capacity of understanding and my own capabilities at the moment) used to accomplish it. Reading this is partially what inspired me to try to pursue a project that would render your dreams out for you as you slept.

Now, what’s wrong with this project? As a research piece, I can’t point out anything specific, but my biggest critique is that this isn’t an art piece: the technology isn’t being used in a way that might challenge how we think about the world, and there’s no opportunity for revelation or new perspectives in the concept behind the project (which could simply be because they are still developing this new way of processing and regenerating imagery via brain scans). That seems to detract from the project’s interesting nature. I also don’t believe it will age well for that reason; once you get past the initial “woah” of the technology, there’s not really anything else there that they’ve provided as brain food (on purpose, at least…).

sheep – Looking Outwards #2

Starseed Pilgrim is a game made by Alexander Martin, Ryan Roth, Mert Bartirbaygil, and Allan Offal, which I just started playing this week. It’s a game that dissects video game literacy, attempting to capture what it means to not know how video games work and how to introduce outsiders to new and unfamiliar terrains. Not only does it invite outsiders in, it asks them to change its world through experimentation with abstract tools. Martin says: “Create systems that are interesting to explore, and people will get more out of their own learning than any tutorial would ever give them.” The game has no tutorial, and no instructions besides vague poems strewn across its surface. Interactions must be paid close attention: you’ll probably need notes to understand the elements you are playing with. In the end, though, the game asks the player to construct its world, to plan and ultimately build the chain of blocks that will let their goals be realized. In a way, there is no one way to solve its puzzles. The game is completely emergent, assigning you the role of gardener, refugee, and builder. Abstraction through cellular-automaton blocks and corruption are your only real visual guides on the journey.

In the designer’s words: “I really don’t like describing Starseed Pilgrim! But if I don’t, I’m pretty much asking you to buy it based on… images, and that’s worse. It’s a game about discovery and learning, and eventually about mastery of a strange set of tools. It’s been said, and echoed, that it’s a game you have to experience for yourself.”

Martin built Starseed Pilgrim in Flash. It took a year to finish: “Starseed Pilgrim had been 90% done for a year and a half before I finally finished it. This wasn’t even a case of “the last 10% takes 90% of the work,” there was honestly almost nothing left to do but to make some tough tiny decisions, write some super easy code, and get sound. Sound ended up being the dealbreaker, though; it was worth the wait.”

I would say it is certainly time-consuming, and I just started it. If you are willing to feel like you did the first time you played a video game, unsure how it worked or how to interact with it, then Starseed Pilgrim should be of interest to you.

Looking Outwards 02 – conye

Atlas – Guided, generative and conversational music experience for iOS

above: documentation video

Atlas is an ‘anti game environment’ that creates music and includes ‘automatically generated tasks that are solved by machine intelligence.’ The app aims to question presence, cognition, and ‘corporate driven automatisms and advanced listening practices.’ The user generates music through their interaction with the app, which asks the user questions from John Cage; the questions are described as ‘focusing on communication between humans,’ and the app ‘concentrates on the marginalized aspects of presence…’ The game looks visually stunning, and I appreciate how it attempts to be a different type of game (an ‘anti game’). I can’t judge how successful it is at being ‘anti-game’ without playing it, but I like the addition of the questions to the gameplay mechanic and am a big fan of how clean the visual shapes are.

It was created with JavaScript (p5.js) inside a Swift iOS app, along with Pure Data. It includes an example template using libPd and is available in the App Store for $1.99.

atiwari1-lookingOutwards1

Assemblance was the first media-art piece I saw in an art gallery. I saw it at the Digital Revolution exhibition at the Barbican in 2014. It was created for the show by Umbrellium, a team of many people led by two creative directors.

I found its mix of participatory, collaborative interaction and strange visual experiences compelling. I had never experienced a projection that could so clearly define shapes and create semi-solid surfaces. I found myself feeling almost surprised each time my hand pushed the projected walls away without physically feeling them.

It was successful in eliciting participation amongst the viewers, as there weren’t any explicit instructions detailing the various gestures you could use to draw and remove rigid objects and chains, leaving viewers to show each other the movements to make to activate them.  The objects could be pushed around and would collide with other people’s creations.

I spent a while in the installation, and it was also interesting to see how first-time participants would react: mostly by drawing a wall around themselves and pushing it around.

The visuals possible were necessarily limited due to being 2D objects extruded out in space over the projection volume, but still had sufficient variability to be satisfying.

Although my work isn’t necessarily directly inspired by Assemblance, it still points to interesting directions in participatory, emergent interaction between people.

jamodei-lookingOutwards02

Elegy: GTA USA Gun Homicides

by Joseph Delappe 

CW: animated graphic violence

Link to Actipedia documentation

Link to live Twitch stream

This project is a self-playing version of Grand Theft Auto V that performs as a data visualization for “a daily reenactment of the total number of USA gun homicides since January 1st, 2018.” One interacts with this work by watching the 24/7 live stream on Twitch. As the camera slowly pans backwards, one sees characters in the video game killing each other every few minutes (or more often, I suppose, depending on the day) as a way of marking something that can feel invisible. Elegy is challenging to watch (even though I generally do not find first-person shooters that triggering). The mix of mediums, i.e. real gun violence vs. video game gun violence vs. statistics on gun violence, presented in a never-ending slow scroll set to chill-but-patriotic music, creates a performance with the viewer that forces into being a complex and unanswerable dialectic around the reality of the large number of gun homicides in the USA and the apparent impossibility of change. This complication of data is what I find to be the most fruitful aspect of the work. I find the work’s attempt to repurpose material observations about our reality, and to communicate them in familiar cultural forms in order to visualize the political nature of data, helpful and inspiring.


tli-lookingoutwards02

Matthias Dörfelt’s Face Trade is a work that asks viewers to trade a mugshot for a computer-generated portrait, with the transaction recorded on a blockchain semi-permanently.

What interests me about this work is how much the artist gets away with asking of the audience. The trade-off is unnervingly real: not only does the work promise to record your mugshot permanently, you can also see the records for yourself at this website. The tension the work induces is very real and, for me, very hard to separate from its artistic intention. I can’t help but recoil instinctively and accuse the work of being malicious in itself. I think the audaciousness of this piece makes it successful, and I would like to take some of Dörfelt’s nerve into my future work.

Looking into Dörfelt’s past work reveals a deep practice in generative art and a recent exploration of blockchain as an artistic tool. This helps me contextualize the choice of computer-generated portraits as the commodity of the transaction. Face Trade’s generated portraits seem like an arbitrary trinket that just fills the role of “commodity which the user trades for” (which may very well be the point), but I can see how these portraits build on Dörfelt’s previous generative sketches. It’s interesting that, in some ways, the viewer is paying with their identity to view Dörfelt’s next iteration as a maker.