software / live audio-visual media
Sounding Orbs (2014) is a generative, synaesthetic virtual instrument that allows a computer music performer to synergistically explore a space of light and sound. The conceptual core of the instrument is a structure of forty-eight “orbs,” represented visually by light-producing spheres and sonically by synthesized tones. When an orb is activated, it produces both light and sound. The orbs are arranged in space according to a multi-dimensional model of traditional equal-tempered pitch perception, in which harmonic proximity is correlated with spatial proximity. Thus, consonant intervals such as octaves, fourths, and fifths are represented spatially by short distances, while dissonant intervals such as tritones and minor seconds are represented by long distances.
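As a rough illustration of such a pitch space, the sketch below places twelve pitch-class columns around a circle in circle-of-fifths order and stacks four octaves vertically, giving forty-eight orbs in total. The geometry, radius, and ordering here are illustrative assumptions, not the work's actual model:

```python
import math

# Hypothetical reconstruction: 48 orbs = 12 pitch-class columns x 4 octaves.
# Pitch classes are placed around a circle in circle-of-fifths order, so that
# harmonically close intervals (fifths, fourths) become spatially close.
PITCH_CLASSES = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]
RADIUS = 5.0          # circle radius (arbitrary units, an assumption)
OCTAVE_HEIGHT = 1.0   # vertical spacing between octaves (an assumption)

def orb_position(pitch_class, octave):
    """Return the (x, y, z) position of the orb for a pitch class and octave (0-3)."""
    i = PITCH_CLASSES.index(pitch_class)
    angle = 2 * math.pi * i / len(PITCH_CLASSES)
    return (RADIUS * math.cos(angle), OCTAVE_HEIGHT * octave, RADIUS * math.sin(angle))

def distance(a, b):
    """Euclidean distance between two (pitch_class, octave) orbs."""
    return math.dist(orb_position(*a), orb_position(*b))

# A fifth (C-G) sits on adjacent columns; a tritone (C-F#) sits on
# diametrically opposite columns, so it is the longest distance.
print(distance(("C", 0), ("G", 0)))   # short: adjacent columns
print(distance(("C", 0), ("F#", 0)))  # long: opposite columns
```

In this layout an octave (same column, adjacent row) is the shortest interval of all, matching the text's pairing of harmonic and spatial proximity.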
Furthermore, as in a real space, distant events sound quieter and closer events sound louder. At an appropriate viewing distance, virtual observers tend to hear the nearest seven vertical columns more prominently than the five distant columns on the far side of the structure. Each vertical column is composed of different octaves of the same pitch-class, so observers tend to hear seven harmonically related pitch-classes, contributing to a sense of key. As the entire array of orbs rotates, the key modulates as new pitch-classes are introduced. The performer may also change this distance to emphasize more or fewer of the simultaneously available pitch-classes.
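A minimal sketch of this distance-based emphasis, assuming a simple 1/d amplitude falloff and a circle-of-fifths column layout (both assumptions, as the piece does not specify them): the seven columns nearest the observer dominate, and seven adjacent positions in circle-of-fifths order form a diatonic collection, which is one way such a layout can produce a sense of key.

```python
import math

# Illustrative sketch, not the actual implementation: 12 pitch-class columns
# on a circle of fifths, with loudness falling off with observer distance.
PITCH_CLASSES = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]
RADIUS = 5.0

def column_xy(i):
    """Position of column i on the circle."""
    angle = 2 * math.pi * i / 12
    return (RADIUS * math.cos(angle), RADIUS * math.sin(angle))

def gain(observer, i):
    """Simple 1/d amplitude falloff; the real falloff curve is an assumption."""
    d = math.dist(observer, column_xy(i))
    return 1.0 / max(d, 1e-6)

observer = (12.0, 0.0)  # outside the circle, nearest the "C" column
ranked = sorted(range(12), key=lambda i: gain(observer, i), reverse=True)

# The seven loudest columns are seven adjacent circle-of-fifths positions,
# i.e. a diatonic collection.
prominent = [PITCH_CLASSES[i] for i in sorted(ranked[:7])]
print(prominent)
```

Moving the observer farther away flattens the gain differences between columns, while moving closer sharpens them, which corresponds to the performer emphasizing more or fewer pitch-classes by changing distance.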
The array of orbs is activated by a two-dimensional, hexagonal cellular automaton, which is wrapped onto the three-dimensional structure. Cellular automata are used in other contexts to mathematically model the expansion of populations of living cells across space. Here, a cellular automaton was chosen for its ability to evolve in a dynamic and variegated fashion, smoothly across space, in a way that strongly resembles traditional musical voice-leading.
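Such a wrapped hexagonal automaton could be sketched as follows, using axial hex coordinates on a grid whose columns wrap around (as onto a cylinder); the grid dimensions and the birth/survival rule here are illustrative assumptions, not the rules used in the piece:

```python
# Minimal sketch of a hexagonal cellular automaton wrapped around a cylinder,
# using axial coordinates; the birth/survival rule is illustrative only.
WIDTH, HEIGHT = 12, 4   # e.g. 12 pitch-class columns x 4 octave rows
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def step(alive, birth={2}, survive={3, 4}):
    """Advance one generation; columns wrap (q mod WIDTH), rows are bounded."""
    counts = {}
    for (q, r) in alive:
        for dq, dr in HEX_NEIGHBORS:
            nq, nr = (q + dq) % WIDTH, r + dr
            if 0 <= nr < HEIGHT:
                counts[(nq, nr)] = counts.get((nq, nr), 0) + 1
    nxt = set()
    for cell, n in counts.items():
        if (cell in alive and n in survive) or (cell not in alive and n in birth):
            nxt.add(cell)
    return nxt

# Seed a small cluster and let it evolve; active cells would light their orbs
# and trigger the corresponding tones.
cells = {(0, 1), (1, 1), (11, 1)}
for _ in range(3):
    cells = step(cells)
```

Changing the `birth` and `survive` sets corresponds to the "environmental" control over the automaton's rules, while adding or removing cells from the set corresponds to direct manipulation of the cellular activity.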
Some cellular automaton rules allow for the evolution of emergent structures: persistent patterns of cells that are the product not of high-level user control, but of local interactions between cells. Thus, there is always the possibility of being surprised by the results of Sounding Orbs. Performers create musical behavior and drama by manipulating the cellular activity directly (by turning certain cells on or off) or environmentally (by changing the rules under which the cellular automaton operates), all while navigating the virtual space and highlighting the sound and light activity at various locations.
Numerous controls allow for alteration of the core structure. The order in which pitch-classes occur can be changed, allowing for unusual harmonic collections. The entire structure can be scattered apart, and the shape of each individual orb can be deformed, which alters its timbre as well. The fidelity of the three-dimensional spherical models can be degraded, as can the quality of the sound waves. In short, the performer can continually refresh the audio-visual experience with new textures and timbres.
The conceptual foundations of this work draw upon interdisciplinary research in formalized music (Iannis Xenakis, Chris Ariza), mathematics (John von Neumann, John Conway, Stephen Wolfram), music cognition (Roger Shepard, Diana Deutsch, Fred Lerdahl and Gerald Balzano), and the aesthetics of the visual-music tradition (particularly works by Oskar Fischinger and John and James Whitney).
World Premiere: April 13, 2014
“Immersion: Soundtracks for Sea Life” (Part of 2014 UCSD Spring Festival)
Paul Hembree, computer
Birch Aquarium at Scripps Institution of Oceanography – La Jolla, CA
Second Performance: April 19, 2014
California Electronic Music Exchange Concerts
Paul Hembree, computer
Littlefield Concert Hall, Mills College – Oakland, CA