What is it about?
This project builds a workflow between a crowd simulation and an acoustic simulation, with the aim of controlling the acoustic experience of a space based on the current crowd configuration.
The geometry of a space can significantly affect its acoustic performance. In this project, crowd behavior mediates between geometry and acoustics. A simulation of people moving in space drives the movement of a kinetic structure, with the goal of shaping the acoustic experience of the space through the surface's changing geometry. The constantly shifting crowd aggregation areas provide the sound sources used as input to an acoustic simulation; with the help of an evolutionary algorithm, the results of the simulation determine which surface configuration is appropriate for each crowd condition. The project is developed as a feedback loop between the Unity game engine, used to simulate the crowd behavior and the resulting surface changes, and the Grasshopper parametric modeling platform, used to run a ray-traced acoustic simulation and an evolutionary solver.
This project was motivated by work developed in the context of the Smart Geometry 2012 workshop by the Reactive Acoustic Environments Cluster, which I joined as a participant. The objective of this cluster was to utilize the technological infrastructure of EMPAC, at Rensselaer Polytechnic Institute in Troy, in order to develop a system reactive to acoustic energy. The result was a surface that changes its form – and therefore its acoustic character – in response to multimodal input including sound, stereoscopic vision and multi-touch.
Below are a couple of photos and a link that summarize the work done during the Smart Geometry workshop:
Manta, Reactive Acoustic Environments
cluster leaders: Zackery Belanger, J Seth Edwards, Guillermo Bernal, Eric Ameres
Other precedent work that explores how geometry can be used to affect the acoustic experience of space:
Virtual Anechoic Chamber
This project explores how the acoustic performance of a surface can be modified through geometry or material, and more specifically investigates the sound-scattering / sound-diffusing acoustic properties of doubly-ruled surfaces. The project team developed digital parametric models to test the surfaces digitally, using computational acoustic analysis techniques suitable for the prediction of sound scattering, as well as physical scale models.
project page: http://www.responsive-a-s-c.com/
Tunable Sound Cloud
This project explores a canopy with real-time responsive capability to enhance the acoustic properties of interior environments. The system is designed as a dynamic self-supporting spaceframe structure, layered with a responsive surface actuated by shape-memory-alloy materials to control sound behavior.
The acoustic performance of a surface, and thus the acoustic experience it provides, can be modified through geometry or material; the precedent work cited above highlights this.
When sound strikes a surface, it is absorbed, reflected, or scattered; if we change the geometry of the surface, we change its acoustic properties. If the structure is kinetic, we can constantly alter the geometry in order to control the acoustic experience of the space. This project explores how crowd behavior can be the driving parameter for updating the geometry. In a previous work I explored the same idea by capturing crowd movement with a Kinect and then trying to infer the crowd distribution, so that the kinetic surface could be changed according to preset configurations. This time I chose to set up a system for generating crowd simulations, so that I can later explore more variations in crowd behavior.
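To make the reflection part of this concrete, here is a minimal Python sketch (not part of the project code) of the specular reflection rule: an incoming direction d leaving a surface with unit normal n as r = d − 2(d·n)n. The point is simply that changing a surface's orientation changes where the sound goes.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Reflect direction vector d about unit normal n: r = d - 2 (d.n) n."""
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

# A ray travelling straight down onto a flat floor (normal pointing up)
# bounces straight back up:
print(reflect((0.0, -1.0), (0.0, 1.0)))  # → (0.0, 1.0)

# Tilt the surface by 45 degrees and the same ray is sent sideways instead:
tilt = math.radians(45)
n = (math.sin(tilt), math.cos(tilt))
print(reflect((0.0, -1.0), n))  # approximately (1.0, 0.0)
```

This is exactly the lever the kinetic surface pulls: by folding a component, its panels' normals change and the reflected sound field changes with them.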
The project combines two different software environments into a continuous workflow, where each updates the other. The Unity 3D game engine is used to run a crowd simulation (with the help of the UnitySteer library), and Grasshopper (for Rhino) is used for the surface modeling, tessellation and panelization, as well as the simulation of its kinetic behavior. In the Grasshopper environment two plugins are used: Acoustic Shoot, to perform a qualitative acoustic simulation, and Galapagos, to test the results of that simulation in an evolutionary algorithm. Unity sends signals to Grasshopper via OSC (Open Sound Control) describing the crowd distribution, and Grasshopper uses this data to identify the main sound sources. It then uses these sound sources to run an acoustic raytrace for a given amount of time. The results of the acoustic simulation are fed into an evolutionary solver in order to compute which surface configuration is fittest for reducing reverberation in the space; more specifically, the evolutionary solver tries to minimize the number of bounces of the sound rays. The configurations output as the best genomes are used to update the mesh back in Unity.
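One step in this loop is turning the crowd distribution received from Unity into a handful of point sound sources for the raytrace. The project does this in Grasshopper; as a rough illustration of the idea (the function name, cell size and binning scheme below are my own placeholders, not the project's actual code), one can bin agent positions into a grid and treat the centers of the densest cells as sources:

```python
from collections import Counter

def main_sound_sources(positions, cell=2.0, n_sources=3):
    """Bin agent (x, y) positions into a square grid and return the centers
    of the n_sources densest cells as sound-source locations."""
    bins = Counter((int(x // cell), int(y // cell)) for x, y in positions)
    densest = [c for c, _ in bins.most_common(n_sources)]
    # Use the center of each dense cell as the source position.
    return [((i + 0.5) * cell, (j + 0.5) * cell) for i, j in densest]

# Two tight clusters of agents yield two sources near the clusters:
crowd = [(0.2, 0.3), (0.5, 0.1), (0.8, 0.9),   # cluster near the origin
         (10.1, 10.2), (10.4, 10.6)]           # cluster near (10, 10)
print(main_sound_sources(crowd, cell=2.0, n_sources=2))
# → [(1.0, 1.0), (11.0, 11.0)]
```

In the actual workflow these source positions would then be streamed over OSC and used as ray origins in the acoustic simulation.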
Below is a diagram that represents the described workflow:
The Grasshopper parametric modeling platform was used to generate the surface geometry. The initial geometry was tessellated, and then a kinetic component was applied to each cell of the surface grid.
The kinetic component was defined according to a parametric schema, in order to capture the range of its movement and to make it applicable to four-sided polygons of arbitrary shape. Below is a diagram that shows how the component moves, from folded to completely flat position. The red circles define the constraints along which the three free points of the component move; the remaining points are constrained to the frame of the overall surface.
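Since each free point travels along a circular constraint, its position can be written as a function of a single fold parameter. A minimal Python sketch of that schema (the radius, angle range and function name are illustrative placeholders, not measurements of the built prototype):

```python
import math

def free_point(center, radius, t, max_angle=math.radians(90)):
    """Position of a free point on its constraint circle.
    t = 0 -> flat (the point lies in the surface plane),
    t = 1 -> fully folded (the point lifted to max_angle)."""
    angle = t * max_angle
    cx, cz = center
    return (cx + radius * math.cos(angle),   # in-plane coordinate
            cz + radius * math.sin(angle))   # height above the frame

flat = free_point((0.0, 0.0), 1.0, 0.0)
folded = free_point((0.0, 0.0), 1.0, 1.0)
print(flat)    # → (1.0, 0.0): the point lies in the surface plane
print(folded)  # approximately (0.0, 1.0): lifted to full height
```

Sweeping t from 0 to 1 reproduces the motion shown in the diagram, from flat to folded.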
The idea for the component was developed during the Smart Geometry 2012 workshop by Reactive Acoustic Environments Cluster participant David Hambleton. During the workshop, a prototype of the kinetic component was built. In the current project I created a Grasshopper definition for it so that it could be used in the suggested workflow.
Diagram of the component while moving: from folded to flat (axonometric view).
Diagram of the component in open position (front view).
A numeric range (min, max) controls the movement of each component: at max the component is folded, at min it is flat. There is one such controller for every movable component on the surface. Below we can see diagrams of different configurations of the surface, where random values were given to the controller of each component. The red rays represent the result of the raytracing algorithm for a common given source; they show how sound moves in the space while being reflected by the various surfaces. We can observe that different configurations of the set of components result in different sound behavior.
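To illustrate the bounce counting that the raytrace provides (the project uses the Acoustic Shoot plugin over the actual kinetic surface; this toy version traces a ray inside a plain 2D box and is only meant to show the kind of number the fitness function works with):

```python
def count_bounces(x, y, dx, dy, w, h, budget):
    """Count specular wall hits of a ray inside a w x h axis-aligned box
    before the total travelled distance exceeds budget.
    (dx, dy) is assumed to be a unit direction."""
    bounces, travelled = 0, 0.0
    while True:
        # Distance to the nearest wall along the current direction.
        tx = ((w - x) / dx) if dx > 0 else (x / -dx) if dx < 0 else float('inf')
        ty = ((h - y) / dy) if dy > 0 else (y / -dy) if dy < 0 else float('inf')
        t = min(tx, ty)
        if travelled + t > budget:
            return bounces
        travelled += t
        x, y = x + t * dx, y + t * dy
        if tx <= ty:
            dx = -dx          # hit a vertical wall: flip x direction
        if ty <= tx:
            dy = -dy          # hit a horizontal wall: flip y direction
        bounces += 1

# A horizontal ray in a 4 x 4 box, starting at the center, bounces at
# travelled distances 2, 6 and 10:
print(count_bounces(2.0, 2.0, 1.0, 0.0, 4.0, 4.0, 10.0))  # → 3
```

Fewer bounces within the budget correspond, qualitatively, to a less reverberant configuration, which is what the solver is asked to find.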
In the project, the controllers that drive the movement of each component were fed as the genome into an evolutionary solver. As mentioned, the fitness function tries to minimize the number of bounces of sound in the space. The configurations selected as the best genomes update the geometry back in Unity. This is made possible by converting the geometry to a mesh and saving the information in a text file as a connection graph, i.e. by storing the points/nodes of the mesh and the nodes each one is connected to. This text file is used in Unity to rebuild the mesh and update the model.
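The hand-off file can be sketched as follows. The exact format below is my own illustration of the "connection graph" idea (node coordinates plus edges as index pairs), not the project's actual file layout:

```python
def save_graph(path, nodes, edges):
    """nodes: list of (x, y, z) points; edges: list of (i, j) node indices."""
    with open(path, "w") as f:
        f.write("%d %d\n" % (len(nodes), len(edges)))
        for x, y, z in nodes:
            f.write("v %f %f %f\n" % (x, y, z))
        for i, j in edges:
            f.write("e %d %d\n" % (i, j))

def load_graph(path):
    """Parse the file back into (nodes, edges) to rebuild the mesh."""
    with open(path) as f:
        n_nodes, n_edges = map(int, f.readline().split())
        nodes = [tuple(map(float, f.readline().split()[1:])) for _ in range(n_nodes)]
        edges = [tuple(map(int, f.readline().split()[1:])) for _ in range(n_edges)]
    return nodes, edges

# Round-trip a single quad panel:
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
save_graph("mesh_graph.txt", quad, edges)
print(load_graph("mesh_graph.txt") == (quad, edges))  # → True
```

In the project the writing side sits in Grasshopper and the reading side in Unity, which reconstructs its mesh from the parsed nodes and connections.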
Here is a demo/video of the project: