Over the last four weeks I’ve been working on the audio post-production of a documentary filmed in Papua New Guinea. The documentary addresses the alarming health issues faced by inhabitants of the rural areas with an interesting approach: Australian doctors and nurses (volunteers led by Stewart Kreltszheim, co-founder of the ‘No Roads to Health’ organisation) embark on an 8-day medical expedition, aiding the population while trekking the Owen Stanley Range towards the North Coast.
The client’s technical requirements stated that the project should be mixed both in stereo and in 5.1, and that the latter had to be delivered as multi-mono files (L, C, R, Ls, Rs, LFE) rather than a single interleaved 5.1 audio file. Furthermore, loudness had to comply with the specifications defined by the EBU (European Broadcasting Union): an average of -23 LUFS (±1 LU) for dialogue and a maximum true peak of -3 dBTP.
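That delivery spec is simple enough to encode as a small sanity-check helper. This is only an illustrative sketch using the figures quoted above (the function name and parameters are mine, not part of any spec); in practice a loudness meter or measurement tool supplies the integrated loudness and true-peak readings:

```python
def meets_delivery_spec(integrated_lufs, true_peak_dbtp,
                        target=-23.0, tolerance=1.0, max_tp=-3.0):
    """Check measured loudness values against the spec described above.

    integrated_lufs -- integrated loudness of the mix, in LUFS
    true_peak_dbtp  -- maximum true peak of the mix, in dBTP
    """
    loudness_ok = abs(integrated_lufs - target) <= tolerance  # within ±1 LU of -23 LUFS
    peak_ok = true_peak_dbtp <= max_tp                        # never above -3 dBTP
    return loudness_ok and peak_ok
```

A mix measuring -23.4 LUFS with a -3.5 dBTP peak would pass; one at -21.5 LUFS would fail on loudness even if its peak is fine.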
Proceeding to the editing stage, the audio team first spotted the project for voice-over cues referenced from the script. Our microphone of choice was the Shure SM7B, a dynamic microphone well regarded for its flat, smooth frequency response. The levels were recorded quite low; however, thanks to the efficient acoustic treatment of the recording stage, we were able to raise them without introducing extraneous noise. Once the voice-over was completed, we moved on to dialogue editing. The location recordings contained many impurities, such as clicks, crackles, rumble and even abrupt background-noise variations throughout the program. We also needed to split the team across specific tasks to make sure we could deliver a solid soundtrack for the client.
While the dialogue was the main element of the story, it was also the most challenging one to work with. The goal of a good dialogue edit is to make it sound as natural as possible. This is achieved by applying fade-ins and fade-outs between two or more takes, as well as de-noising where appropriate. We incorporated iZotope RX into the session to make the latter possible. De-noising does not refer only to noise suppression; it also covers fixing the problematic characteristics mentioned above. While it is possible to remove clicks and crackles with the Pencil tool in Pro Tools, that method is cumbersome and time-consuming. With RX, the editor can analyse the spectrum of a sound and remove only what’s in the way instead of altering the entire file.
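The fade idea behind joining takes can be sketched in code. Below is a hypothetical equal-power crossfade over raw sample lists; in a real session the DAW’s clip fades do this job, and the function name and fade curve here are my own illustrative choices, not any editor’s actual implementation:

```python
import math

def equal_power_crossfade(take_a, take_b, fade_len):
    """Blend the tail of take_a into the head of take_b over fade_len samples.

    Equal-power curves (cosine out, sine in) keep the perceived level
    constant through the join, which is what makes the edit sound natural.
    """
    head = take_a[:-fade_len]          # untouched part of the outgoing take
    tail = take_b[fade_len:]           # untouched part of the incoming take
    faded = []
    for i in range(fade_len):
        t = i / fade_len
        gain_out = math.cos(t * math.pi / 2)   # outgoing take fades down
        gain_in = math.sin(t * math.pi / 2)    # incoming take fades up
        faded.append(take_a[len(take_a) - fade_len + i] * gain_out
                     + take_b[i] * gain_in)
    return head + faded + tail
```

The result overlaps the two takes by `fade_len` samples, so its length is the sum of both takes minus the fade region.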
The overall audio post production of a film entails multiple sub-categories, such as:
- Dialogue Editing;
- ADR Recording & Editing;
- Foley Recording and Editing;
- Sound Effects Editing;
- Sound Design.
Ideally, each sub-category is handled by a different member of the audio team; however, if the person in charge of the audio post-production is a freelancer, the chances are that one person will take over all of the aforementioned areas. One common misuse I’ve been hearing a lot is that Foley refers to every sound effect heard in a film; that’s an incorrect definition. In its pure form, Foley is simply the act of performing sound effects to picture in real time, and it covers any interaction between a character and their environment. What distinguishes Foley from sound effects is intensity. The classic example I can think of is a door: when one opens and closes a door, it’s Foley; when one slams it and it shatters into bits and pieces, it’s a sound effect.
Another blurred line is the difference between sound design and sound effects. To my knowledge, sound design relates to sounds that are larger than life; it’s also what helps the storyteller convey the right emotions to the audience, whether by working against music, standing on its own or even interpreting silence (e.g. in the horror and suspense genres). I’d argue that sound design acts as a double agent: it’s either the crafting of an unknown sound or the design of the overall soundscape of a narrative.
Speaking for myself, I’ve always embraced the idea of doing what I’m afraid of, as it forces me to push the boundaries of my creativity. That said, I don’t feel as if I did so this time. You see, I’ve always thought of documentaries as a truthful representation of life, and therefore never considered designing soundscapes that would also immerse the audience in the story. Since the documentary was, indeed, an accurate representation of Papua New Guinea’s health issues, I figured that if I made the entire dialogue work smoothly, placed sound effects where needed and created a solid atmos edit, the story would be well told. I didn’t think of ways to spice up the soundscape with creative sound design until the last couple of days before the deadline. In fact, I believe that if I had come across the following observation from award-winning documentary sound designer Peter Albrechtsen sooner, I would have realised the project’s strong creative potential.
“When you do sound design you also do music. There’s not much of a difference between sound and music; instead of using a guitar and a string orchestra, you use a car and the sound of rain - in a way, you have different sounds that are your instruments to create rhythms, dynamics, atmospheres and so on.” - Peter Albrechtsen (2013)
Prompted by Peter's statement, I decided to create a robust ambience edit. One thing I didn’t take into account, though, was the contextual representation of a sound; for example, I didn’t look for birds and other sounds specific to Papua New Guinea. I’ve been advised before to be careful with this, depending on the project. Nevertheless, in the end it damaged neither the story nor the experience. What I found myself doing in the mix was hiding the characteristics of the atmospheres instead of using them creatively, as my focus while mixing was primarily on making the dialogue clear and within the technical requirements.
To edit the ambiences, I created 16 tracks and divided them into two banks of eight. Each bank relates to a scene or perspective change, and the banks are colour-coded yellow and red for ergonomic purposes. Having said that, I did find myself searching for a repetitive sound quite often. What I’ll do in future sessions is colour code and label the sounds as well. For example, say I have a sound called "Birds, Chirp, Forest" at 10:01:33:50 and the same sound is placed again at a later timecode; both would then have the same name and colour.
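That naming-and-colour convention can be sketched as a tiny helper that assigns every occurrence of the same sound name the same colour, cycling through a palette for new names. Everything here (the function name, the two-colour palette) is hypothetical; in practice a DAW’s clip colour settings would carry this out:

```python
def colour_for_clips(clip_names, palette=("yellow", "red")):
    """Pair each clip name with a stable colour.

    The first time a name appears it is given the next palette colour;
    every later occurrence of that name reuses the same colour, so a
    repeated ambience is easy to spot anywhere in the session.
    """
    colours = {}   # name -> colour, filled in order of first appearance
    paired = []
    for name in clip_names:
        if name not in colours:
            colours[name] = palette[len(colours) % len(palette)]
        paired.append((name, colours[name]))
    return paired
```

With this scheme, both placements of "Birds, Chirp, Forest" come out yellow no matter where they sit on the timeline.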
Finally, I prepared a video demonstrating why this approach is effective from a mixing point of view. Enjoy!
- Albrechtsen, Peter (2013). DocHouse Sound Design Masterclass with Peter Albrechtsen [Video]. Retrieved from https://vimeo.com/60171257 (13 minutes, 10 seconds).