
Sound Design in the FOREVER Exhibition

  • Writer: John Craig
  • Mar 9
  • 12 min read

Updated: Mar 10



After completing my internship at MOD., I had the fortune of being contracted to continue my work on two soundtracks for their recently launched exhibition, FOREVER. The exhibition focuses on ideas and concepts surrounding time, life and death, memory, and existence, with galleries that highlight different perspectives, emerging technologies, and immersive experiences.


I developed the soundtracks for the ‘Endings’ and ‘Infinity’ exhibits, which address quite different yet connected concepts within the overall exhibition. While the ‘Endings’ exhibit focuses on things past, like extinct species, dying stars, and physical artefacts, the ‘Infinity’ exhibit is the final stop on the intended path through the exhibition and is meant to be an area for reflection on the total experience; a space to recharge after the heavy topics of the previous galleries.


In fact, the ‘Infinity’ gallery contains no informational text or visual content outside of an interactive visual installation created by Daniel Lawrance, making the soundtrack roughly half of the installed experience in the space.


Endings

Demo/Sound exploration project file for the 'Endings' soundscape.

I’ll begin by discussing the ‘Endings’ soundtrack, as it falls more in line with the traditional role of a designed soundscape within a museum environment. Within the gallery space are text panels and physical objects, as well as an interactive installation. Beyond that, the content of the space and the space itself are meant to promote discussion, with comfortable seating available in the center of the gallery.


Neither of these aspects of the designed experience leaves a lot of room for the introduction of sound into the space, which is, let’s be honest, pretty common for museum and gallery environments. Nevertheless, in this environment we were able to introduce a subtle soundscape that doesn’t detract from the designed experience and reinforces both the educational and experiential objectives of the space.


I believe my internship period was extremely helpful in achieving this, as I was able to access design information and begin to work collaboratively with the rest of the team at MOD. This early access allowed the audio content and the visual information content to be curated and shaped simultaneously and to influence each other. I believe this resulted in a strong tie between the content and objectives of the soundscape and the exhibit materials.


When I was first briefed on the topic of the space (endings, extinctions, remains), I immediately thought of the last recordings of extinct animals, and specifically of the Hawaiian O’o bird. I can’t remember where or when I first heard the sound clip, but every few years a reposted video of it will go micro-viral, with several versions on YouTube sitting at a few million views each. An animated short film on the bird and its famous call was produced in 2022, with just under a million views as of writing. Suffice it to say, people enjoy the story of this now-extinct species, or more specifically, the story of the last male left calling out for a mate that no longer exists. The idea was an instant hit with the rest of the team, and the recording could be easily licensed from the Cornell Lab of Ornithology.


To play this recording in the space, we decided to utilise one of the directional speakers in MOD.’s display resources, the Audio Spotlight AS-16i by Holosonics, which projects sound in a narrow beam using high-frequency carrier waves. These speakers are decent at conveying sound within a limited area but struggle with mid and low-end frequencies, and they are prone to echoing and reflecting their sound if the target area is not properly dampened to catch the excess energy.


While restrictive, these attributes are well-suited to our application and intended experience. The bird call is high-frequency, and the AS-16i has no trouble reproducing it with clarity. We wanted the call to sound like it was coming from a single location while also working its way around the room and reaching multiple ears. To this end, the target location of the AS-16i was left undampened, leaving the sound energy free to bounce around the room after being emitted. The effect mirrors the experience of hearing a bird call in a natural environment. This, combined with the text content within the exhibit, plays on intrigue and curiosity, in some ways providing a reward or ‘easter egg’ for those who listen.


The bird call, played through the AS-16i, is accompanied by a low-volume background track played through the main room speakers. In the same vein as the bird call, I tried to use sounds in the background track that referenced specific elements or exhibit materials. Dying stars and cosmic rays are a prominent feature in the exhibit, so I did a lot of research into NASA and radio-telescope recordings, as well as space image sonification examples.


While I found many compelling examples, in many cases the listening experience veered more towards cosmic horror than cosmic wonder. To tackle this, and to avoid the complications of licensing sound samples from NASA, I elected to recreate some of the sounds I liked from scratch, keeping the qualities I considered valuable to the visitor experience while leaving behind anything too dissonant or overly anxiety-inducing.


I began with two base layers: one white-noise layer with modulating volume to create a very subtle breathing effect, and another layer inspired by the image-sonification exercises performed on images of nebulae. Both of these elements were tuned to match the pitch of the tonal elements in the ‘Infinity’ gallery, as the two galleries are adjacent. The white-noise layer is relatively prominent in the overall track, both as an allusion to the static-like noise of space as recorded by radio telescopes and as a method of masking outside noise spilling into the gallery. The open design of MOD. means that human noise from the museum, and particularly the cafe downstairs, can easily find its way into this particular gallery. The white-noise layer helps cover those outside sounds while also providing a bed for other sounds to be introduced.
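For anyone curious how simple that breathing layer is at its core, here is a minimal sketch of the idea: white noise with its volume slowly modulated by a low-frequency oscillator. The sample rate, modulation rate, depth, and level below are illustrative assumptions, not the values used in the installed track.

# A minimal sketch of the breathing white-noise bed, assuming a 44.1 kHz
# sample rate and an illustrative ~20-second breathing cycle.
import numpy as np
from scipy.io import wavfile

sr = 44100                                      # sample rate in Hz
t = np.arange(sr * 60) / sr                     # one minute of audio

noise = np.random.uniform(-1.0, 1.0, t.size)    # white-noise source
lfo = 0.6 + 0.4 * np.sin(2 * np.pi * 0.05 * t)  # slow volume modulation (0.05 Hz)
bed = noise * lfo * 0.1                         # keep the level subtle

wavfile.write("breathing_noise.wav", sr, (bed * 32767).astype(np.int16))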


Within the bed established by these two layers, I gradually faded in and out some of the replicated sounds I created using manipulated samples and synthesised elements. One is meant to imitate the beating rhythmic signal of a pulsar, while another mimics shortwave radio recordings of cosmic dust entering the atmosphere; this sound is mirrored in the exhibit by the inclusion of a cosmic-ray detector. This ambient track came out to around 12 minutes and loops continuously in the gallery space. The bird call plays 3 times within an 8-minute looping track on the AS-16i, meaning it sounds roughly once every 3 minutes.
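As a rough illustration of the pulsar-like element, the sketch below repeats a short, decaying low-pitched ping at a steady period and fades the whole excerpt in and out so it could drift through an ambient bed. The period, pitch, envelope, and level are all assumptions made for the example.

# A toy pulsar-like pulse train: a decaying ping repeated at a fixed period,
# faded in and out over a 30-second excerpt. All parameters are illustrative.
import numpy as np
from scipy.io import wavfile

sr = 44100
ping_t = np.arange(int(sr * 0.3)) / sr                    # each ping lasts 300 ms
ping = np.sin(2 * np.pi * 110 * ping_t) * np.exp(-20 * ping_t)

track = np.zeros(sr * 30)                                 # 30-second excerpt
for start in np.arange(0.0, 29.5, 0.75):                  # one ping every 750 ms
    i = int(start * sr)
    track[i:i + ping.size] += ping

ramp = np.minimum(np.arange(track.size), track.size - np.arange(track.size))
track *= np.minimum(1.0, ramp / (5 * sr)) * 0.3           # 5-second fade in/out

wavfile.write("pulsar_ping.wav", sr, (track * 32767).astype(np.int16))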



Infinity


Because the soundtrack for this gallery is half the total experience of the space, I had a lot of room to add depth to the sound and take up some of the total visitor attention budget. I wanted to make the experience of the space, and of listening to the track, rich in multiple ways while still meeting the experiential goals of the space within the wider exhibition. And because, as with the ‘Endings’ soundtrack, I was able to enter the ideation and design process so early, I was able to push for certain aspects of the sound design that I believe provide extra depth to the experience and potentially promote post-visit discussion.


As mentioned before, the ‘Infinity’ space is meant to be an area of reflection, decompression, and a bit of a positive lift after the sometimes heavy and challenging exhibits of the rest of the exhibition. The visual element, as designed and workshopped by Daniel, utilised the wide-angle projectors already installed in the space to project feeds from cameras recording the projected area. This created a ‘mirror into a mirror’, with elements in the cameras’ field of view reflecting infinitely into a void. With this in mind, and with the concepts of infinity, reflections, scales of time, beliefs surrounding time, points of existence, memories, uplift, wonder, and peace, I began to search for and identify musical or sound concepts that matched the visual experience and the overall concepts of the space.


Immediately I connected the experiential objectives of the space to sound bath therapy and meditation practices, which use bronze or quartz singing bowls and other tonal percussion to calm listeners and provide a relaxing environment. I also thought about Shepard tones, loops and repetition, and generative modular synthesis as methods that highlight the concept of infinity.


At the same time as this exploration of sound concepts, I was also considering the visitor experience of the space. I saw an opportunity to try to apply two concepts that I believed to be appropriate for the experiential objectives.


From my experience as a musician, I saw that I could utilise ‘tension and release’ to elevate visitor moods. By implementing small moments of tension or dissonance, the following moments of consonance are given context, and a non-specific micro-narrative is created within the piece. The moments of tension can represent fear, uncertainty, apprehension, or anxiety, while the following release can represent elation, relief, triumph, or wonder. This contrast allows the mood of the piece to be pushed farther into the positive emotional range than if the track stayed consistently consonant. Additionally, I believe that because these micro-narratives are entirely musical/tonal, visitors are able to ascribe their own personal meaning to them and more readily relate them to their own experiences.


From my research into video game audio practices, I saw that I could implement a strategy used by some games to encourage continued engagement, repeat engagement, and post-play discussion. Put simply, that strategy is to not worry about each player, or visitor, having the same experience, and, in fact, to encourage differing experiences within the design. Much like in a museum, video game designers often cannot force players to interact or engage with the material in a certain way or at a certain pace. A player running through an area and not observing all the hard work put into the environment and narrative can be disappointing for the designer.


It is even more disappointing for the players who do take their time to explore and observe the designed environment when that engagement is not rewarded. In my thesis, I highlight several ways in which developers reward engagement: collectables, small narrative or world-building elements, and Easter eggs all promote exploration. For this soundtrack, however, I was particularly inspired by the game ‘Dear Esther’, which presents different narrative elements to players every time the game is played, creating unique experiences, giving a reason to re-engage with the game, and inspiring a strong community of discussion outside of it.


I communicated these concepts to the rest of the design team at MOD. using both visual and audio references. Clips from YouTube were essential in demonstrating certain sounds and musical concepts, while I created graphs and diagrams to relate my experiential concepts for the space. Through this process, certain sounds and concepts were eliminated, while others were shown to be well-suited.


From this, I developed a conceptual plan for the soundtrack that incorporated those elements and concepts. I planned to create a long-form track that was largely consistent, within which I would compose short moments that stand out from the base track. These moments would serve as a reward for those visitors that wish to stay engaged with the space for longer periods of time, while providing differing experiences to those who only engage for a short period.


To realise this concept, I began exploring sounds and creating demo tracks under four different thematic concepts and experiential objectives.


  • First was the main base layer track, meant to promote relaxation, which I built off of the sound bath therapy and meditation concept. While I would have loved to sample real quartz singing bowls for their deeply resonant tones, sourcing and budget can be an issue for such items. I elected to utilise a virtual instrument which recreates their sound and, more importantly, allowed me to finely tune their pitch. With the low tones of the quartz bowls, I added some sparse chimes, as well as notes from a steel tongue drum which I ran through a short reverse delay to create a mirror effect, hitting on the theme of reflections (a rough sketch of this kind of effect appears after this list). Above all that I recorded some soft percussion rattles and shakes, and underneath I utilised wind and wave recordings and EQ’d white noise to create some calming movement.


  • The second track was also a base layer track, but was meant to promote wonder and a sense of grand universal scale, with a little apprehension due to the unknown future, and to allude to blinking moments of existence. This track was inspired by the concept of generative modular synthesis, where an arrangement of synthesiser and effect modules, connected in particular ways through patch cables and set with certain parameters, can be made to create its own music infinitely (see artists State Azure and Mirko Ruta). For this, no surprise, I used a lot of synthesisers. The main elements, the bass and the bleeps and bloops, were created in Harmor by manipulating existing patches, of which there are many. Later I incorporated a bit of a Vangelis horn using Dexed, the DX7 emulator. Underneath the synths I used a recording of soft rain, as well as the occasional sub-bass rumble resembling distant thunder.


  • The third track was the first ‘moment’ and emerges out of the sound bath base layer. For this section I wanted to push for the inclusion of rhythm and contemporary instrumentation as a method of promoting comfort through familiarity. I also thought it would be cool to have a relaxing track sort of emerge out of and tie back into the sound bath, like form and structure emerging from space before disappearing again. I ended up creating the most demos and having the most back and forth with the team over this section, as it was a challenging thing to nail down and to ensure it fit with the other sections and the objectives of the space. The final track includes multiple percussion layers with very few pre-recorded samples, a bassline courtesy of the now infamous Cashies Rescue Bass, a Rhodes piano, and a synth texture layer, all recorded live and not quantised; I wanted the track to feel very human and have some rhythmic fluctuation.

Live recording and demo space for the third track
  • The fourth track was the second ‘moment’ and emerges from the second, ‘space-y future’ base layer. For this section I wanted to create a grand orchestral moment that evoked wonder and hope, positive remembrance, and maybe a bit of determination in the face of the unknown. Again I thought that some traditional instrumentation would be comforting, but it also gave me the opportunity to incorporate some moments of tension, or suspension in this case. I started by composing a short piece with violins playing a descending motif, alternating between two big chords played by the rest of the strings, with oboes and clarinets following with arpeggios. Between the two chords a single violin holds a note while the others drop out, creating a moment of suspension and expectation before the others come back in.


    I then exported this and used PaulXStretch to extend and blend the piece, giving it an otherworldly, almost choral tonality and a slower pace. This served as the base layer for additional violins and synths, which added some definition back into the sound after the stretch. Additionally, I composed an intro section for violas that sets up the two-chord dynamic while also setting a kind of solemn tone prior to the comparatively bombastic section. I also added a bit of counterpoint between the (retro)futuristic Vangelis horn and an airy flute, hinting at a connection between the future and the past.
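For those curious what this kind of extreme stretching does under the hood, here is a rough sketch in the spirit of the paulstretch algorithm that PaulXStretch is built on: overlapping windows are taken from the source, their magnitudes are kept while their phases are randomised, and the results are overlap-added at a slower rate. This is a simplified illustration, not the plugin's implementation, and the stretch factor and window size are arbitrary example values.

# A simplified, phase-randomised time-stretch in the spirit of paulstretch.
# 'stretch' and 'win_size' are arbitrary example values.
import numpy as np

def extreme_stretch(audio, stretch=8.0, win_size=16384):
    window = np.hanning(win_size)
    hop_out = win_size // 4               # how far each output frame advances
    hop_in = hop_out / stretch            # the input advances more slowly => stretch
    n_frames = int((audio.size - win_size) / hop_in)
    out = np.zeros(n_frames * hop_out + win_size)
    for frame in range(n_frames):
        start = int(frame * hop_in)
        chunk = audio[start:start + win_size] * window
        spectrum = np.fft.rfft(chunk)
        # keep the magnitudes, randomise the phases to smear transients
        phases = np.exp(2j * np.pi * np.random.rand(spectrum.size))
        smeared = np.fft.irfft(np.abs(spectrum) * phases)
        out[frame * hop_out:frame * hop_out + win_size] += smeared * window
    return out / max(1.0, np.abs(out).max())   # normalise to avoid clipping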

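And since the first base layer leaned on it, here is an equally rough sketch of the reverse-delay ‘mirror’ idea applied to something like the steel tongue drum: the signal is cut into short windows, and each window is reversed, delayed, and mixed back under the dry signal. The window length, delay time, and mix are guesses for illustration, not the settings used in the exhibit.

# An offline sketch of a reverse delay: reverse short windows of the input
# and mix them back in slightly late. All parameters are illustrative.
import numpy as np

def reverse_delay(dry, sr, window_s=0.5, delay_s=0.25, mix=0.4):
    win = int(window_s * sr)
    delay = int(delay_s * sr)
    wet = np.zeros(dry.size + delay + win)
    for start in range(0, dry.size, win):
        chunk = dry[start:start + win][::-1]    # reverse each window
        pos = start + delay                     # play it back late
        wet[pos:pos + chunk.size] += chunk
    out = wet * mix
    out[:dry.size] += dry * (1.0 - mix)         # mix the dry signal back in
    return out / max(1.0, np.abs(out).max())    # normalise to avoid clipping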

For all these tracks, I composed to a common root. Using virtual instruments made this relatively easy; however, some manual tuning was required for certain instruments, and elements taken from sound libraries or synthesised were also pitch-corrected. Each of the demo tracks was produced in its own project file, allowing for some extensive trial and error without bloating the project and slowing down my limited processor. When it came time to assemble the various elements into a cohesive piece, I could take the elements that did work out of the demo projects and begin assembling in a clean workspace. For certain sounds, especially those that are taxing on processing power, I was able to bounce them from MIDI instructions into pure audio in the demo project file, and because I had already done my pitch correction to my established root, I could import them into the new project file without any fuss. This established root also meant I could move around and add or subtract elements from each demo track freely, and they would blend without clashing tonally.
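The pitch correction itself comes down to a little arithmetic: measure or look up a sample's fundamental, find the nearest semitone relative to the root, and shift by the remaining fraction of a semitone, expressed in cents. The sketch below shows that calculation; the 220 Hz root is just an example value, not the actual root of the exhibition tracks.

# Cents of correction needed to pull a sample onto the nearest semitone of
# an established root. The 220 Hz root is an illustrative assumption.
import math

def cents_correction(sample_hz, root_hz=220.0):
    semitones = 12 * math.log2(sample_hz / root_hz)   # distance from the root
    nearest = round(semitones)                        # nearest in-tune semitone
    return (nearest - semitones) * 100                # shift to apply, in cents

# e.g. a sample whose fundamental sits at 225 Hz is about 39 cents sharp of
# the root, so cents_correction(225.0) returns roughly -38.9.
print(cents_correction(225.0))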

Project file with base layers merged.

After assembling all the elements and playing with the overall timing of each section, the resulting piece is around 17 minutes long, with each of the two moments lasting around 3 minutes. After mixing and mastering, I exported the final track and re-looped it in an effort to reduce the ‘CD-hitch effect’, where there is a very brief pause between tracks or at the loop point as the track is re-loaded for playback. Unfortunately, I still have some things to learn about how to mitigate this hitch, as I wasn’t able to completely eliminate it. I was reluctant to put in a gradual fade to silence to mask the hitch, so I instead opted to reduce its frequency by doubling the length of the exported track, meaning the hitch only occurs every 34 minutes instead of every 17. It is certainly a triage fix, but hopefully an effective one.


In the space, the track is played through two Focal Audio Alpha 65 studio monitors, a great solution for providing audio fidelity without pushing too much volume. These monitors’ frequency range allowed me to compose with very deep bass despite the lack of a subwoofer and to include considerable dynamic range in the final mix, with their flat frequency response letting the bass project as clearly as the higher frequencies. Knowing the hardware that will reproduce the content is extremely useful in the mixing and mastering process; even without access to the hardware or the ability to test in the space regularly, I was able to work to the technical data sheet and make adjustments that translated well to reality.


One thing that I believe was instrumental to the production of both of these soundtracks, and something I’m very passionate about for the production of audio content for cultural institutions, is the proper consideration of the strengths and weaknesses of both the spaces and the tools involved. My internship allowed me to become very familiar with the spaces in which these soundtracks are installed, as well as the advantages and disadvantages of each gallery. Similarly, I researched the strengths and limitations of the audio tools available in the MOD. inventory. This allowed me to tailor the tracks to those tools and to plan their implementation so that they leverage their strengths and aren’t hampered by their limitations.


FOREVER is open now until November 2025 at MOD. in Adelaide, South Australia. Check out more about the exhibition and plan your visit here.


 
 
 
