Industrial Robot + an LED + Some Code = Painting in the physical world in all 3 dimensions
About the Project
Light painting is a photographic technique in which a light source is moved in front of a camera taking a long exposure. The result is a streaking effect that resembles a brush stroke on a canvas. This is usually accomplished with a free-moving handheld light source, which produces paintings full of arcs and random patterns. While some artists can achieve recognizable shapes and figures in their paintings, these usually lack proper proportions and appear more abstract because the artist has no real-time visual feedback while painting. Unlike in traditional painting, the lines the artist makes do not persist in physical space and are only visible through the camera. Recently, arrays of computer-controlled LEDs mounted on a rigid rod have allowed for highly precise paintings, but only on a single plane.
Industrial Light Painting is a project that, for the first time, aims to merge the three-dimensional flexibility of a free-moving light with the precision of a computer-controlled light source. Together, these two methods allow for the creation of light paintings that are highly accurate, in terms of both structure and color, in full three-dimensional space. As in a manufacturing environment, an industrial robot replaces the fluid, less precise movements of a human with the highly accurate, controlled motions of a machine. The robot's automated motions solve the light painter's lack of visual feedback by letting the artist compose the painting virtually, within the software used to instruct both the robot and the light attached to it.
How it Works
Industrial Light Painting creates full-color three-dimensional point clouds in real space using an ABB IRB 6640 industrial robot. The point clouds are captured and stored using a Processing script and a Microsoft Kinect camera. The stored depth and RGB color values for each point are then fed into Grasshopper and HAL, plugins for the 3-D modeler Rhino. Within Rhino, toolpath commands are generated that instruct the robot arm how to move to each location in the point cloud. Custom-written instructions also make use of the robot's built-in low-power digital and analog lines, which run to the end of the arm. This allows precise control of a BlinkM smart LED mounted at the end of the arm along with a Teensy microcontroller.
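The capture step above can be sketched in a few lines. The project itself uses a Processing script, but the idea translates directly: each Kinect depth pixel is back-projected to a 3-D point and paired with its RGB color. This is a minimal illustration assuming a simple pinhole-camera model; the focal lengths and principal point below are placeholder values, not the project's actual calibration, and the function names are hypothetical.

```python
# Minimal sketch: convert Kinect-style depth pixels into a colored point cloud.
# Intrinsics (fx, fy, cx, cy) are illustrative stand-ins, not real calibration.

def pixel_to_point(u, v, depth_mm, fx=594.0, fy=591.0, cx=320.0, cy=240.0):
    """Back-project a depth pixel (u, v) with depth in millimeters to X, Y, Z in meters."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def build_point_cloud(depth_image, rgb_image):
    """Pair each valid depth pixel with its color, skipping zero (no-data) depths."""
    points = []
    for v, row in enumerate(depth_image):
        for u, depth in enumerate(row):
            if depth == 0:  # the Kinect reports 0 where no depth was sensed
                continue
            points.append((pixel_to_point(u, v, depth), rgb_image[v][u]))
    return points
```

A list of (position, color) pairs like this is exactly the data the toolpath stage needs: one robot target and one LED color per point.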
Using DSLR cameras set for long exposures, the commanded robot movements, combined with precise control of the LED, recreate colored point clouds of approximately 5,000 points in about 25 minutes.
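As a rough check on the figures above, 5,000 points in 25 minutes leaves a small, fixed time budget per point for the arm to travel and the LED to flash:

```python
# Back-of-the-envelope timing from the figures quoted above.
points = 5000
minutes = 25

seconds_per_point = minutes * 60 / points    # time budget per point
points_per_second = points / (minutes * 60)  # average painting rate

print(seconds_per_point, points_per_second)  # prints 0.3 and roughly 3.33
```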
About the Creators
Jeff Crossman is a master’s student at Carnegie Mellon University studying human-computer interaction. He is a software engineer turned designer who is interested in moving computing out of the confines of a screen and into the physical world.
Kevyn McPhail is an undergraduate student at Carnegie Mellon University studying architecture. He concentrates heavily on fabrication, crafting objects in a variety of media and pushing the limits of the latest CNC machines, laser cutters, 3D printers, and industrial robots.
Special Thanks To
Golan Levin for concept development support, equipment, and software.
Carnegie Mellon Digital Fabrication Lab for providing access to its industrial robots.
Carnegie Mellon Art Fabrication Studio for providing the microcontroller and other electronic components.
ThingM for providing BlinkM ultra-bright LEDs.
Additionally, the creators would like to thank the following people for their help and support during the making of this project: Mike Jeffers, Tony Zhang, Clara Lee, Feyisope Quadri, Chris Ball, Samuel Sanders, and Lauren Krupsaw.