Chapter One



For the longest time I was unable to define what sets my compositional practice apart from others’. When I was introduced to acousmatic music in 2014, I was awestruck by the near limitless possibilities of sound – the instinctual nature of composing with sound surprised me the most. Although I was not drawing on the ingrained foundations of chord progressions or tonality/atonality, I was able to compose freely, hearing in my mind’s ear what should come next. It was like a return to the joy of writing little melodies as a kid, before I learned about music theory. I quickly found myself overwhelmed by the freedom. I believe that I work best when there are rules and restrictions in place, and acousmatic music gave me the ability to choose my own rules – be it limiting my tools, limiting the sound material, or a mixture of the two. I am always at my most creative when I work with restrictions.

In each work in this portfolio, I have worked with limitations – some more restrictive than others. 

In Lines, I endeavoured to use only two main effects (with the exception of some reverberation and cutting of samples): pitch manipulation in the first half of the piece and extreme equalisation in the second. Similarly, I restricted my sound material to samples of a bouncing ping-pong ball and smashing glass in the first half and a field recording of Aberdeen’s Union Street in the second. With these restrictions, I had to explore in depth the possibilities of the two effects and use complex and intricate layering and positioning of the sound files.

I allowed myself more freedom in my second composition, Sounds of the Silent City. For this piece, I wanted to design a creative sonification of Aberdeen City, exploring not just the sound marks of the city but also its atmosphere. As there is no singular location that would entirely encapsulate Aberdeen, I decided that this piece would take the listener on a journey through different parts of the city. We start on the train bringing us into the station. We hear the sounds of the beach as well as the mechanical clinks and whirring of the engine. Once in the city, we are bombarded with the sounds of inner-city life – bus brakes, seagulls, crowds of people. From here, we travel out to the far more relaxing location of the beach, the sounds of the waves bookending our journey. Within this basic frame, I allowed myself full creative control of the piece.

Due to the nature of live coding, the compositional process for Bearing Zero was different from my two previous fixed-media pieces. TidalCycles works by continually looping patterns of samples, which the performer edits live. When using TidalCycles, I often like to work with a handful of similar sound layers, which combine to create a complex texture. I also like to keep a note of the sound transformations I would like to make during the performance, and their order. Many live coders perform fully improvised sets; however, I prefer to have a piece in mind and will write myself instructions for the live performance. As such, it is similar to following a score. During a live coding performance, the performer is typically in front of the audience typing the code, which is projected for the audience to see. I have given many live coding performances in the past, and many non-coders have commented to me that they enjoy watching the code being typed as, even if they do not understand it exactly, they can loosely relate the code to changes in the sound. For Bearing Zero, I wanted to capitalise on the audience’s semi-understanding of what is happening while addressing the dissonance between performer action and resulting sound. This meant keeping the processes as transparent as possible to allow some of the mystery of the code to dissolve.
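The looping-and-editing workflow described above can be sketched in TidalCycles itself. This is only an illustrative fragment, not code from Bearing Zero: it assumes a running Tidal and SuperDirt session, and the sample name "pluck" is a hypothetical stand-in for any folder of plucked-string samples.

```haskell
-- Illustrative TidalCycles sketch (requires a live Tidal + SuperDirt session;
-- "pluck" is a hypothetical sample folder, not material from the piece).

-- Start a pattern looping four samples per cycle:
d1 $ sound "pluck*4"

-- A live edit re-evaluates the same pattern with transformations layered on,
-- e.g. reversing the pattern every second cycle and repitching the samples:
d1 $ every 2 rev $ sound "pluck*4" # speed 1.5
```

Because each re-evaluation replaces the running pattern in place, the audience hears the change at the moment the performer evaluates the edited line, which is what allows them to relate the typed code to changes in the sound.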

In my audio-visual piece, 57N, I took a slightly different approach to my restrictions. The images were fixed to the idea of starting zoomed out and gradually becoming more specific to a singular location. I wanted the audio to reflect the spaces we were travelling through, and NASA had recently released a huge bank of copyright-free sounds, videos, and pictures. Unfortunately, the audio was not high quality, so I decided to take a selection of sounds as inspiration and, through various audio manipulation techniques, try to emulate them. It was not my intention to replicate them exactly, but rather to use them as a starting point and see where my ear took me. The resulting sounds in my piece range from incredibly similar to the NASA sounds to nothing alike.

My second audio-visual piece was a collaboration with the visual artist Amy Barnett. The initial restriction was fitting my artform within the context of Barnett’s man-made rock sculptures. We decided that we wanted to take the listener on a journey into the rocks and therefore took audio samples from them. I then used these as the material with which to make the audio. Barnett created the video portion first, before I added the sound. I wanted to avoid simply mickey-mousing the audio to fit how the handler uses the rocks while keeping the audio relevant to the visuals. I tried to maintain the natural texture of the rock sounds and the hints of the gestures that made them.

For the installation in my portfolio, I attempted to be as restrictive as possible and surrender a sizable amount of creative control to the computer programme. In 



That I am always composing with restrictions is something I only realised after finalising the S.O.A.P. analysis framework. At the start of my doctoral journey, I was still searching for what defines me as a unique composer.

Envisioning a purely acousmatic portfolio, I began researching post-Schaefferian approaches to reduced listening. This led to research into perception, experience, phenomenology, spectromorphology and agency, to name a selection. The impetus for composing my first portfolio composition, Lines, came from reading Michael Pedersen’s article, Transgressive Sound Surrogacy. In this text, Pedersen talks extensively about sound surrogacies. In his writings on spectromorphology, Denis Smalley proposed that there are four orders of sound surrogacy: first, second, third, and remote. Surrogacies are concerned with the listener’s confidence in what they hear, specifically in terms of the sound source and cause. First-order surrogacy sounds are sounds with an identifiable and specific source, such as a familiar voice. Second-order sounds have a source that is narrowed down to a category, such as a metal windchime or a sports car. The exact windchime or car remains unknown, but the listener can deduce the approximate type of object producing the sound. Third-order surrogacy is where only some characteristics of the sounding object can be identified, such as one that sounds as though it is solid wood or another that is some form of liquid in a container. Often, the listener can identify the approximate material of the object in question, but cannot do the same for its dimensions. Remote-order surrogacy sounds are those where the listener can gain no information about the sounding object; the listener is unaware of the source or cause of the sound. These categories are inevitably unique to each listener, as the level of expertise can vary quite drastically from one person to the next.

Pedersen adds to this theory and proposes that we should make a distinction between the recognition of a sound’s source and a sound’s cause. This expansion suggests that a listener may be able to detect an unmistakable action clearly while the sounding object itself is only partially recognisable.
The discussion of sonic gestures shares territory with the perception of agency. When we hear a recognisable gesture such as knocking or scraping, we can imagine the agent behind that action. 

The first piece I wrote for my PhD portfolio was a fixed-media piece called Lines. It was a study of the perception of gestural agency and how introducing gradual sonic manipulations to the same sound object can drastically change our perception of gestural surrogacy, and therefore also of agency. The piece splits into two distinct sections, with two contrasting sounding objects being the focus of the alterations. I explored the transition from a clear sounding object to an unrecognisable one, starting with vivid gestural qualities and moving to gestural ambiguity. I then reversed this process, starting with obscure sounds and gestures and gradually revealing the new sounding object. This piece is discussed at greater length in the second half of this supporting commentary. At this early stage of my PhD, this was a culmination of my research into listening modes and sonic gestures.



The second piece I wrote was also a fixed-media, acousmatic work. Sounds of the Silent City was an attempt at a creative sonification of Aberdeen. I tried to assume the perspective of a newcomer travelling to the city. I imagined an approximate journey up the north coast by train into the city centre and out towards the beach, a popular destination for any tourists crazy enough to choose cold and windy Aberdeen as their holiday destination. Rather than creating a pure soundscape of the location, I wanted to instil the feel of Aberdeen upon the piece. The train journey up the coast is reliably relaxing, assuming you travel outwith hen party season. The sparkling North Sea is on your right and seas of endless green fields are on your left. The train station is in the centre of the city; a five-minute walk is all it takes to arrive at the busiest point – Union Street. Visiting any city centre for the first time can be an intimidating experience. Everything is familiar yet unfamiliar at the same time: you try to make sense of the place as quickly as possible, orienting yourself towards where you think you want to go. Sounds of the Silent City attempts to use sound marks of Aberdeen in such a way as to convey these feelings of tranquillity, excitement, and intimidation, capturing the essence of a place. Again, this piece is discussed at greater length further on in this commentary.


Reflecting upon these two compositions as well as my research into listening modes and perception, I realised that my work has common parameters with which I often like to experiment. These parameters are space, objects, agency, and place. At first, it might seem strange to think of these as parameters. We learn that the parameters of sound are objective aspects like frequency, volume, texture, and timbre, not subjective qualities like a sense of agency or place. However, playing with these four parameters can create vastly different pieces and perceptions. I cover the exact definition of these four terms in the next chapter of this supporting commentary.

I began to look into how different ways of presenting an idea could affect the perception of a piece. I identified two main spaces in any given performance of a composition: the digital space and the real space.

All sound must travel through the real world to arrive at the ears of the listener. Live instrumental performers make sound entirely in the real world. Sounds that must pass through a loudspeaker before we can hear them, however, originate in the digital world.

Working in just one space (e.g. acousmatic compositions that only explore the possibilities of the digital world) presents limitations to the composer in how they can explore their ideas. Such barriers need not be negative. After all, we need to place restrictions upon ourselves to be at our most creative. As my compositional practice is multimedia, I wanted to explore the four parameters listed above while using the real-world space. 



Taking inspiration from Lines, I wanted to revisit the theme of perceived agency by using both the digital and the real-world spaces. I decided to do this through live coding, as it involves a live performer in the real world – me on my laptop – but sounds that come from the digital. I made certain artistic decisions before starting the work: one was to use the sound of an instrument. Having recently unearthed a guzheng from the University’s storage cupboards, I was eager to sample it and use it in a piece. I was aware that few people in my Western audience would be able to identify the exact instrument. However, its melodic quality and identifiable plucked nature would be enough of a sonic cue to identify it as some harp-like instrument. Very rarely do we ever see and hear an instrument with no one playing it; therefore, we subconsciously place a human agent as the cause of the sounds. In live coding music, there is no sound originating in the performance space, only from the digital, and yet there is a “performer” present. All elements of an instrumental performance are present: the live performer, the sound of an instrument, and the performative agency behind the plucking action. However, there is no physical instrument, and the performer is not plucking anything. Instead, they are typing on a laptop. The code is displayed on the screen so that the process of composition can be as transparent as possible. Although the meaningfulness of the projected code depends upon the viewer’s expertise, in my opinion, TidalCycles is one of the more transparent live coding languages. Although it is not always the case, the code and the resulting change in sound are often harmonious. This synergy allows the audience to easily relate the performer’s typing to what is happening in the music. We develop schemas when we perceive the same cause and effect multiple times.
Therefore, if the audience consistently perceives particular code having the same impact on the sound, they will develop a sense of expectation: whenever this code is evaluated, they will expect similar results each time. Bearing Zero uses this to its advantage, using only four layers of sound. One layer at a time undergoes the same manipulations, and the audience learns what to expect as the manipulations systematically occur. This gradual process allows the piece to build from very simple to very complex over its fifteen-minute duration. As the samples are of individual string plucks, the piece begins by sounding like a single recording of someone performing rather than just one sample after another. As the texture starts to build, the music oversteps the boundaries of human performance. The sense of performative agency is lost, and the once beautiful sound of the Chinese harp becomes almost mechanical. If the listener has any perception of a performative agent remaining, the introduction of reversed samples seeks to change that. Halfway through the piece, the samples gradually change to a synthesised sound. This shift moves the music closer to what one would expect a laptop musician to sound like, in an attempt to heal the split between the real-world agent and the perceived digital agent.

Following the completion of Bearing Zero, I thought about how to best illustrate these ideas of space, objects, agency, and place within the real and digital worlds. I developed the following framework.