Autographic visualization / Generative music
Initial research + concept
For this project, I was interested in the sky and its invisible trace of sound. Although the sky is generally a visual experience, I wanted to experiment with ways it might become an aural experience as well.
To begin my image-to-sound research, I looked into a color-to-sound device, an article on different types of background noise, and Yuri Suzuki’s Face the Music project.
Process
To begin, I crowdsourced images of the sky. Those who contributed could later attend the installation and hear their own images, which felt more valuable and personal than using stock images.
For the image-to-sound translation itself, I found a piece of software called Sonic Photo. After a few rounds of testing, I found that it translated images by detecting technical aspects of each photo, such as lines, lightness/darkness, and patterns.
It would read the images linearly as shown below:
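Sonic Photo’s exact algorithm isn’t documented, but its behavior matched the common “image as spectrogram” sonification technique: scan the image left to right, let each row drive an oscillator, and let pixel brightness set that oscillator’s loudness. Below is a minimal Python sketch of that general idea (not Sonic Photo’s actual code; the file names and parameters are illustrative):

```python
import numpy as np
from PIL import Image

def image_to_audio(path, duration=8.0, sample_rate=22050,
                   f_min=110.0, f_max=3520.0):
    """Read an image left to right like a spectrogram: each row drives a
    sine oscillator (top rows = higher pitch) and pixel brightness sets
    that oscillator's loudness over time."""
    # Downscale so the oscillator bank stays small; convert to grayscale
    img = Image.open(path).convert("L").resize((200, 80))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    n_rows, n_cols = arr.shape

    # Log-spaced frequencies so octaves are spread evenly across the rows
    freqs = np.geomspace(f_max, f_min, n_rows)

    t = np.linspace(0.0, duration, int(duration * sample_rate), endpoint=False)
    samples_per_col = len(t) // n_cols
    audio = np.zeros_like(t)

    for col in range(n_cols):
        sl = slice(col * samples_per_col, (col + 1) * samples_per_col)
        # Only bright-enough pixels contribute, weighted by their brightness
        for row in np.nonzero(arr[:, col] > 0.1)[0]:
            audio[sl] += arr[row, col] * np.sin(2 * np.pi * freqs[row] * t[sl])

    return audio / (np.abs(audio).max() + 1e-9)  # normalize to [-1, 1]

if __name__ == "__main__":
    from scipy.io import wavfile
    samples = image_to_audio("sky.jpg")  # placeholder input path
    wavfile.write("sky.wav", 22050, (samples * 32767).astype(np.int16))
```

Under this kind of reading, lightness maps directly to loudness, while lines and patterns come through as sustained or repeating tones.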
I experimented with note harmonics, chord notes, harmonic brightness, quality, and sound time to find a set of settings that worked for all the images. As I translated each image, I also grew familiar enough with the software’s habits to predict how certain images would sound. This reminded me of research I had done on Neil Harbisson and his work, The Sound of Color. In short, Harbisson is colorblind but perceives colors by matching them with sounds.
“I find it completely normal now to hear color all the time. At the start though I had to memorize all the names you give to each color. I had to memorize the notes, but after some time all this information became a perception. I didn’t have to think about the notes, and after some time this perception became a feeling. I started to have favorite colors. I started to dream in colors. When I started to dream in color is when I felt that the software and my dream had united. In my dream it was my brain creating electronic sounds, it wasn’t the software…it had become an extension of my senses.”
This pushed me to my next step: considering the colors of each image. As noted before, the software was skilled at reading various technical aspects, but it could not read or interpret colors, much less emotionally, as a human would. To fill this gap, and in light of Harbisson’s work, I coded a key that matched colors to music and human emotions. The Sonic Photo translations could then be paired with music and emotion to create a final sound, the product of both software and human translation. In short: software translation (lines, patterns, lightness/darkness) + human translation (colors) = final mix.
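The project’s actual key isn’t reproduced here; as a hedged sketch of the idea, the snippet below finds an image’s dominant hue and looks up a (musical key, emotion) pairing. All pairings, thresholds, and names are hypothetical stand-ins:

```python
import numpy as np
from PIL import Image

# Hypothetical color key: hue ranges (degrees) -> (musical key, emotion).
# The pairings in the project's actual key may differ; these are illustrative.
COLOR_KEY = [
    ((0, 30),    "C major", "warmth"),
    ((30, 60),   "G major", "joy"),
    ((60, 150),  "A minor", "calm"),
    ((150, 250), "D minor", "melancholy"),
    ((250, 330), "E minor", "mystery"),
    ((330, 360), "C major", "warmth"),
]

def dominant_hue(path):
    """Circular-mean hue (degrees) of an image's reasonably saturated pixels."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float32)
    hue = hsv[..., 0] * (360.0 / 255.0)
    sat = hsv[..., 1] / 255.0
    hue = hue[sat > 0.15]            # skip near-gray pixels (clouds, haze)
    if hue.size == 0:
        return 0.0                   # fallback for fully gray images
    rad = np.deg2rad(hue)            # circular mean handles the red wraparound
    return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360

def translate(path):
    """Human-side translation: dominant color -> (musical key, emotion)."""
    h = dominant_hue(path)
    for (lo, hi), key, emotion in COLOR_KEY:
        if lo <= h < hi:
            return key, emotion
```

A call like `translate("sky.jpg")` then suggests the key and mood to score that image’s mix with.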
The final generative music can be heard with its corresponding images below:
Installation
In my initial sketches, I had planned for the images to be projected on the floor with artifacts hanging from the ceiling or a structure (inspired by The Art of Bloom). However, after talking with others, I decided to replace the artifacts with a large paneled mirror. Since the images would be projected on the ground, it would be more valuable for visitors to see themselves within the full images, which would be difficult from their standing perspective. The artifacts could go elsewhere.
Installation mood board
The big idea behind the installation was for people to see large projections of the skies while hearing their corresponding mixes, as if they were hearing the images themselves.
Prior to setting up, I tested logistics such as projector suspension, angle, and square footage. Unfortunately, the projector could not be hung from the ceiling for fear of disturbing asbestos! This also posed a challenge for suspending the paneled mirror (in the end, the panels were clamped and drilled to the top of the 6 ft. x 8 ft. x 8 ft. structure).
For the final installation, visitors could:
1. Stand within the structure to see the skies projected on the floor mats and hear the corresponding generated music
2. See a reflection of themselves amidst the full image in the paneled mirror above
3. Take an extension package with a bracelet kit
The purpose of the bracelet kit was to add an additional layer of processing. Each kit was color-coded to match an image used in the installation, so any artifacts made would be direct renditions of visitors’ experiences.
The project as a whole moved from nature -> software processing -> human processing. It aligns with the idea that there is an ongoing chain of interpretation that can be greatly enriched by different forms of processing (in this project’s case, software and human).
Thanks for reading! 👼