Relational Computing
Connecting with data
L I S T E N
Paradigm shifts are turning computing into a sensory interface. LLMs and similar methods for managing, parsing, simplifying, and HUMANIZING the vast amounts of data we collect are enabling a more natural connection to, and extension of, information. Data, memory, and relationships are relational. Chat-style and “assistant” interfaces let us connect with our data in a more natural and human way. These are important early steps toward answering the big question of Big Data: what do we do with it all? Chat-style interfaces are popular because there is virtually no learning curve: we prefer to interact with things naturally and conversationally, and it is neither intuitive nor natural to think like a computer.
What, then, should we mean when we say sound? To address this question of sonic identity, I will approach the conceptualization of sound through the framework of physics and acoustics. We must begin with what sound is and the means by which we experience and perceive the phenomenon. I will then examine contemporary technologies and works that employ sound’s non-audible properties, and show why we must re-conceptualize and broaden our definition of sound to encompass the mechanical wave spectrum in its entirety.
Holistic sound cannot be conceived anthropocentrically. Sound must be approached wholly, as an external, human-independent physical phenomenon that we have the ability to perceive and interpret in a limited capacity. Though audio may be our primary relationship with sound, it is not the only way we experience or use sound and its properties. Sound, fundamentally, is a vibration or “mechanical disturbance… that propagates through an elastic material medium”.
Mechanical Sound
In the field of acoustics, the whole of the mechanical sound frequency spectrum is broken into three main categories, which form the basis for my argument. In the center of the sonic range are the frequencies that we typically and casually refer to as “sound”, known as the audible spectrum or audible sound. This range typically extends from 20 hertz (Hz) to 20,000 hertz, or 20 kilohertz. On the low end, frequencies from below 20 hertz to as low as 0.001 hertz are known as infrasonic sound (literally, below sound). On the high end, frequencies above 20 kilohertz and up to 10¹³ hertz are known as ultrasound (above sound). Beyond ultrasound is the contested hypersound, sometimes called “praetersound or microsound” due to the minute size of the waves. At these frequencies, it becomes impossible for sound to physically exist, as “the molecules of the material in which the waves are traveling cannot pass the vibration along rapidly enough”. I will not be considering hypersound a definitive category of sound, as its physiological effects are heavily contested and it is not employed nearly to the same extent as infrasound and ultrasound.
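These three bands, and the physical ceiling on ultrasound, can be sketched numerically. This is a minimal illustration: the band edges are the conventional textbook values given above, and the 343 m/s speed of sound in air is the standard room-temperature figure.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def classify(freq_hz):
    """Place a frequency in one of the acoustic bands described above."""
    if freq_hz < 20:
        return "infrasound"
    elif freq_hz <= 20_000:
        return "audible"
    elif freq_hz <= 1e13:
        return "ultrasound"
    return "hypersound (contested)"

def wavelength(freq_hz, c=SPEED_OF_SOUND):
    """Wavelength in metres: lambda = c / f."""
    return c / freq_hz

# Near the 10^13 Hz ceiling the wavelength shrinks below molecular scales,
# which is why the medium can no longer pass the vibration along.
tiny = wavelength(1e13)  # on the order of 1e-11 m, far below air's molecular spacing
```

The wavelength calculation makes the argument for the ceiling concrete: once a wave’s length falls below the distance between the molecules of the medium, there is nothing left to carry the vibration.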
We generally understand sound through the subjective experience of “hearing” sound, but “such a definition is not particularly illuminating and is unduly restrictive, for it is useful to speak of sounds that cannot be heard by the human ear”. Audible sound is only a particular experience of these waves moving through a medium (air, as is typical in our case). What we experience as audible sound is in fact these mechanical waves exciting our sensory mechanisms (ears), which are able to convert and interpret this “range of vibrations into perceptible sounds”.
Sound artists and theorists have proposed deeper and more comprehensive physiological and psychological analyses of sound and our perception of it, such as Pauline Oliveros’ Deep Listening. Yet these positions privilege audible sound, and often seem not to consider sound in any other capacity. My position in no way intends to replace or undermine existing theories, but rather to expand upon them, broaden their scope, and perhaps qualify what others have hinted at. The vagueness of Oliveros’ definition of Deep Listening as a process of “learning to expand the perception of sounds to include the whole space/time continuum of sound” for a moment seems to consider sound in the greater realm that I’m proposing, but neglects to elaborate on what the “space/time continuum of sound” might be. Perhaps, following Oliveros’ explanation, Deep Sensing may be a more inclusive term.
Properties of Non-Audible Sound
The properties and applications of infra- and ultrasonic sound seem largely hidden from and unexplored by the artistic community, but are key players in the toolkit of modern science and engineering. In our current framework and common conceptualization of sound, technologies and works that employ the properties and characteristics of infrasound and ultrasound for purposes outside the realm of audio would not be considered valid. For the designers and researchers at Ultraleap, sound is tactile. Through an array of focused ultrasonic transducers, Ultraleap’s “ultrahaptic” technology “creates the sensation of touch in mid-air”: a localized point of high pressure exerts enough force to slightly displace the surface of the skin and create the sensation of touch. Different transducers on the array firing at different ultrasonic frequencies and intensities can be combined to “sculpt” a three-dimensional acoustic field that can be felt, but not heard. The implications and applications of a “floating” contactless haptic interface are extraordinary: sterile medical environments, vehicle media control, tactile art installations or experiences, and more.
A demonstration of Ultraleap’s ultrasonic haptics. https://www.ultraleap.com/haptics/
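The focusing principle behind such arrays is, at its core, simple geometry: each transducer’s phase is advanced in proportion to its distance from the desired focal point, so that every wavefront arrives there in phase and the pressure adds up. The sketch below is a generic phased-array illustration, not Ultraleap’s actual implementation; the 40 kHz frequency, the 4×4 grid, and the focal position are all assumptions for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (assumed)
FREQ = 40_000.0         # 40 kHz, a common ultrasonic-haptics operating frequency (assumed)

def focus_phases(transducers, focal_point, freq=FREQ, c=SPEED_OF_SOUND):
    """Phase offset (radians) per transducer so all waves arrive in phase at focal_point."""
    wavelength = c / freq
    phases = []
    for pos in transducers:
        d = math.dist(pos, focal_point)
        # Advance each element's phase by its path length so the arrivals align.
        phases.append((2 * math.pi * d / wavelength) % (2 * math.pi))
    return phases

# A 4x4 array on the z = 0 plane with 1 cm pitch, focused 10 cm above its centre.
array = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
phases = focus_phases(array, (0.015, 0.015, 0.10))
```

Elements equidistant from the focal point receive identical phases, which is exactly the symmetry that lets the array “steer” the high-pressure point anywhere in its working volume by recomputing the offsets.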
A similar ultrasonic technology was created by a physical computing research group and published in Nature: the “multimodal acoustic trap display” (MATD), “a levitating volumetric display that can simultaneously deliver visual, auditory, and tactile content, using acoustophoresis as the single operating principle” (Hirayama et al. 1). Like the Ultraleap device, the MATD uses an array of transducers to create the sensation of touch, but it can emit audible frequencies as well as ultrasonic frequencies, achieving two simultaneous results that are perceived as different sensations through the same physical phenomenon. The MATD’s other trick is using sound to create a volumetric “hologram”. A small foam sphere is levitated and “trapped” in an ultrasonic acoustic field. Through “time multiplexing, … amplitude modulation, and phase minimization” (1), the particle is illuminated with red, green, and blue projected light and manipulated in volumetric space at high speeds by the ultrasound, creating persistence of vision in the form of an illusory holographic object.
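The persistence-of-vision effect can be understood as a sampling problem: for the sphere to read as a solid shape, the trap must drag it around the entire target path within roughly a tenth of a second. A toy sketch of the schedule, where the 10 kHz update rate and the ring’s dimensions are assumptions for illustration (the actual MATD control loop is far more involved):

```python
import math

UPDATE_RATE = 10_000   # trap position updates per second (assumed)
POV_WINDOW = 0.1       # ~100 ms window within which the eye fuses the path

def ring_point(t, radius=0.01, revs_per_sec=10, height=0.05):
    """Target position (metres) of the levitated particle at time t, tracing a ring."""
    ang = 2 * math.pi * revs_per_sec * t
    return (radius * math.cos(ang), radius * math.sin(ang), height)

# Every trap position the array must realise within one fused "frame":
# at this rate the particle completes a full revolution every millisecond.
targets = [ring_point(n / UPDATE_RATE) for n in range(int(POV_WINDOW * UPDATE_RATE))]
```

Synchronising the RGB illumination to this position schedule is what turns a single fast-moving speck into an apparently continuous glowing ring.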
There’s a sensory crossover happening with these ultrasonic transducers. One sense is creating another: sound is creating touch and assisting with vision. Because sound is inherently physical, we can in this instance understand hearing as a pseudo-sense: a passive and distal modification of touch. We are now able to exercise such control over one sensory phenomenon that we can transmute it into another.
With the coming ubiquity of these technologies, our future experiences will increasingly be those of sensory crossover. We have traditionally been careful to differentiate sound and touch as independent senses, but I believe our interactions with technologies and our environments will continue this trend. If sensation is to be the future of technological interaction, and sound is an ideal means to simulate sensory experiences like touch or even persistence of vision (as evidenced by the MATD), it is imperative that we consider sound as more than auditory: as a means for complete sensory immersion and control.
- Berg, Richard E. “Sound.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 18 Nov. 2019, www.britannica.com/science/sound-physics.
- Berg, Richard E. “Ultrasonics.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 06 Oct. 2017, www.britannica.com/science/ultrasonics.
- Berg, Richard E. “Ultrasonics.”
- Berg, Richard E. “Sound.”
- Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Duke University Press, 2006.
- Oliveros, Pauline. “Deep Listening: Composer’s Sound Practice.” deeplistening.org, 24 June 2003, www.deeplistening.com/site/content/deep-listening-composers-sound-practice.
- Ultraleap. “How Ultrahaptics Technology Works with Tom Carter.” YouTube, 13 Feb. 2018, https://www.youtube.com/watch?v=4FfSEtrv1Dw.
Sources
Berg, Richard E. “Sound.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 18 Nov. 2019, www.britannica.com/science/sound-physics.
Berg, Richard E. “Infrasonics.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 23 Sep. 2013, www.britannica.com/science/infrasonics.
Berg, Richard E. “Ultrasonics.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 06 Oct. 2017, www.britannica.com/science/ultrasonics.
Morgan, Matt A., Mirjan M. Nadrljanski, et al. “Ultrasound Frequencies.” Radiopaedia, https://radiopaedia.org/articles/ultrasound-frequencies?lang=us.
Hirayama, R., Martinez Plasencia, D., Masuda, N., et al. “A Volumetric Display for Visual, Tactile and Audio Presentation Using Acoustic Trapping.” Nature 575, 320–323 (2019), doi:10.1038/s41586-019-1739-5. https://www.nature.com/articles/s41586-019-1739-5.
Oliveros, Pauline. “Deep Listening: Composer’s Sound Practice.” deeplistening.org, 24 June 2003, www.deeplistening.com/site/content/deep-listening-composers-sound-practice.
Oohashi, Tsutomu, et al. “Inaudible High-Frequency Sounds Affect Brain Activity: Hypersonic Effect.” Journal of Neurophysiology, 2000 83:6, 3548–3558 https://www.physiology.org/doi/full/10.1152/jn.2000.83.6.3548.
Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Duke University Press, 2006.
Ultraleap. “How Ultrahaptics Technology Works with Tom Carter.” YouTube, 13 Feb. 2018, https://www.youtube.com/watch?v=4FfSEtrv1Dw.