AUP230 - Music Intensive

What was initially meant to be a music intensive ended up becoming a blend of music and post-production. I was assigned the music editing of a short film. That is, instead of composing to picture, I would source the music from royalty-free libraries, find a set of tracks that fit the genre and edit them to picture.

The initial challenge was finding a royalty-free music website; while there are plenty out there, I was looking for one that allowed me to download stems, for better flexibility in the editing process. Coincidentally, I had been researching music services for a while because of my major project, which is entirely based on video production for YouTube. The best solution I found was Epidemic Sound, a Swedish production music library whose goal is to help creators tell stories with quality music. A few YouTubers I follow subscribe to it, so I decided to give it a shot. For creators with fewer than 500,000 views per video the subscription costs $15, the creator can use any of their tracks free of copyright claims, and, best of all, every track comes with stems.

Since this intensive is an academic project, I'm using the tracks for academic purposes. If the director enjoys my work and decides to use the tracks, he or she will need to purchase a licence for them, which can be quite expensive: a full song costs $99, or $0.99 per second. From that perspective this approach is quite disadvantageous, although for the sake of the assignment it has been extremely helpful.

The film has a noir cinematography style, and my intention with the opening scene was to reflect that musically. The opening scene shows the main character packing his bags for a photo shoot: you see him tying his shoes, placing camera lenses inside the bag, zipping up his jacket and heading towards the door. This scene reminded me a lot of the introduction to the TV show Dexter, which shows Dexter's daily routine before going to work. What's clever about it is the perfect synchronisation between sound design, music and video. What I imagine happens in situations like this is that the production team plans ahead and decides which music they are going to use and how they are going to film the scene so that everything works seamlessly. Another example of this approach is the opening sequence of the TV show True Blood. Since I'm not a musician and couldn't compose the score myself, I searched for a track that fit the mood of the scene and then edited it to land on the hit points. Given the noir feel of the film, I went with jazz.

What was interesting about using jazz is that it gave the impression that the film is laid-back and casual, when it is not. As the story develops, the lead character is surprised by the presence of his wife in the apartment he's photographing from afar. At that moment the score changes to a suspenseful mood, which I also referenced from Dexter, as the series disguises its thriller nature behind a casual surface. Unfortunately, I can't publish the track here, but you can preview the video on the right to get a sense of what I was going for.

Since this is a music project, I chose not to work on sound design so as not to put too much pressure on myself. I haven't been going through a good time lately, so I decided to play it relatively safe on this project and stick to two elements only: music and dialogue.

The dialogue edit for this film has been quite challenging, as there are plenty of silent moments, not to mention that it was recorded at very low levels, which complicates the repair process. A trick I've been using to rectify the lack of room tone is creating room tone with convolution reverb.

The process is simple and powerful. First, extract around 10 seconds of perfect room tone; it needs to be clean of extraneous noise such as pops, clicks and the like. Once you have finished the edit, export the room tone to an audio file, instantiate an audio track and insert the following plug-ins in this exact order:

- Signal Generator (White Noise)
- Waves IR-L
- EQ

Then, load the exported room tone into IR-L as an impulse response. You'll notice that the white noise being generated now has a reverb that simulates the 'timbre' of the original room tone. Finally, EQ the reverb to match the overall sound of the room tone and lower the output.
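For the curious, the chain above can be sketched in code. This is a minimal, hypothetical Python sketch of the idea, not the plug-in chain itself: it generates white noise and convolves it with an impulse response, standing in for the Signal Generator and IR-L stages. The `synthesize_room_tone` helper and the decaying-exponential IR are my own illustrative inventions, and the final EQ-matching step is omitted.

```python
import numpy as np

def synthesize_room_tone(impulse_response, seconds, sample_rate=48000, seed=0):
    """Generate white noise and convolve it with a room-tone impulse
    response, mimicking the Signal Generator -> IR-L chain described above."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, int(seconds * sample_rate))
    # Convolution gives the noise the 'timbre' of the captured room tone.
    tone = np.convolve(noise, impulse_response)[: len(noise)]
    # Lower the output so the synthetic tone sits well under the dialogue.
    peak = np.max(np.abs(tone))
    return tone / peak * 0.05 if peak > 0 else tone

# Hypothetical IR: in practice you would load the exported room-tone file.
ir = np.exp(-np.linspace(0, 5, 2400))
bed = synthesize_room_tone(ir, seconds=2.0)
```

In a real session you would of course render the result back into the DAW and crossfade it under the dialogue gaps.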

And there you have it. Since I will be delivering the project only next Friday, April 6, I’ll write another blog supporting this one with further detail.


AUP230 - Post Intensive

Over the last four weeks I've been working on the audio post-production of a documentary filmed in Papua New Guinea. The documentary addresses the alarming health issues faced by inhabitants of rural areas with an interesting approach: Australian doctors and nurses — volunteers led by Stewart Kreltszheim, co-founder of the 'No Roads to Health' organisation — embark on an 8-day medical expedition, aiding the population while trekking the Owen Stanley Range on the north coast.

As provided by the client, the technical requirements stated that the project should be mixed both in stereo and in 5.1, and that the latter had to be delivered as multi-mono files (L - C - R - Ls - Rs - LFE) instead of a single 5.1 audio file. Furthermore, the mix had to comply with the loudness specifications defined by the EBU (European Broadcasting Union): an average of -23 LUFS (±1 LU) for dialogue and a maximum true peak of -3 dBTP.
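As a rough illustration of those delivery specs, here is a simplified Python sketch. Real EBU R128 metering applies K-weighting and gating per ITU-R BS.1770 and measures true peak via oversampling; the plain RMS and sample-peak measurements below are stand-ins, and `quick_level_check` is a hypothetical helper, not a real meter.

```python
import numpy as np

def quick_level_check(samples, target_lufs=-23.0, tolerance_lu=1.0, ceiling_dbtp=-3.0):
    """Rough spec check: RMS level vs. the -23 LUFS (+/-1 LU) target and
    sample peak vs. the -3 dBTP ceiling. Both are simplified stand-ins
    for proper K-weighted loudness and oversampled true-peak metering."""
    rms = np.sqrt(np.mean(samples ** 2))
    loudness_db = 20 * np.log10(rms) if rms > 0 else -np.inf
    peak_db = 20 * np.log10(np.max(np.abs(samples)))
    loudness_ok = abs(loudness_db - target_lufs) <= tolerance_lu
    peak_ok = peak_db <= ceiling_dbtp
    return loudness_ok, peak_ok

# A 1 kHz sine with amplitude a such that a / sqrt(2) = 10**(-23/20),
# i.e. an RMS level of exactly -23 dB.
a = 10 ** (-23 / 20) * np.sqrt(2)
t = np.linspace(0, 1, 48000, endpoint=False)
ok_loud, ok_peak = quick_level_check(a * np.sin(2 * np.pi * 1000 * t))
```

For actual deliveries you would rely on the DAW's compliant loudness meter rather than anything this crude.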

Image from 'A Sound Effect'. Click here to view source.

Proceeding to the editing stage, the audio team first spotted the project for voice-over cues as referenced from the script. Our microphone of choice was the Shure SM7B, a dynamic microphone well regarded for its flat, smooth frequency response. The levels were recorded quite low; however, thanks to the efficient acoustic treatment of the recording stage we were able to raise them without introducing extraneous noise. Once the voice-over was completed, we moved on to dialogue editing; the location recordings contained many impurities such as clicks, crackles, rumble and even abrupt background-noise variations throughout the programme. We also needed to split the team into specific tasks to make sure we could deliver a solid soundtrack for the client.

While the dialogue was the main element of the story, it was also the most challenging one to work with. The goal of a good dialogue edit is to make it sound as natural as possible. This is achieved by applying fade-ins and fade-outs between two or more different takes, as well as de-noising where appropriate. We incorporated iZotope RX into the session to make the latter possible. De-noising does not refer only to noise suppression; it is also about fixing the inconvenient characteristics mentioned above. While it is possible to remove clicks and crackles with the Pencil tool in Pro Tools, that method is cumbersome and time-consuming. With RX, the editor can analyse the spectrum of a sound and remove only what's in the way instead of altering the entire file.

This picture illustrates the dialogue edit session as it was handed to us. As you can see, there are files scattered all over the place.

This picture demonstrates the dialogue edit properly organised. The green tracks stand for SYNC audio, the red tracks for WALLA (background chatter; crowds) and the blue tracks for PFX (production sound effects: sounds recorded on location). I've learned that, in Australia, these would stay within the SYNC tracks, though all that matters is what works best for you.

The overall audio post-production of a film entails multiple sub-categories, such as:

  • Dialogue Editing;
  • ADR Recording & Editing;
  • Foley Recording and Editing;
  • Sound Effects Editing;
  • Sound Design;
  • Mixing.

This video goes into detail about what I briefly describe. It's 25 minutes long and it's worth watching every second.

Ideally, each sub-category is split across multiple audio team members; however, if the person in charge of the audio post-production is a freelancer, then it's likely that one person will take on all of the areas above. One common misconception I've been hearing a lot is that Foley refers to every sound effect heard in a film; that's an incorrect definition. In its pure form, Foley is simply the act of performing sound effects to picture in real time, and it relates to the interactions between a character and their environment. What distinguishes Foley from sound effects is intensity. The classic example I can think of is a door: when a character opens and closes a door, that's Foley; when the door is slammed and shatters into bits and pieces, that's a sound effect.

Another blurred line is the difference between sound design and sound effects. To my knowledge, sound design relates to sounds that are larger than life; it's also what helps the storyteller convey the right emotions to the audience, whether working against music, by itself, or even as an interpretation of silence (e.g. in the horror and suspense genres). I'd argue that sound design acts as a double agent: it's either the crafting of an unknown sound or the design of the overall soundscape of a narrative.

Speaking for myself, I've always embraced the idea of doing what I'm afraid of, as it forces me to push the boundaries of my creativity. That said, I don't feel as if I did so this time. You see, I've always thought of documentaries as a truthful representation of life, and therefore never considered designing soundscapes that would also immerse the audience in the story. Since the documentary was, indeed, an accurate representation of Papua New Guinea's health issues, I figured that if I made the entire dialogue work smoothly, placed sound effects where needed and created a solid atmos edit, the story would be well told. I didn't think of ways I could spice up the soundscape with creative sound design until the last couple of days before the deadline. In fact, I believe that if I had come across the following observation from award-winning documentary sound designer Peter Albrechtsen sooner, I would have realised the project's strong creative potential.


“When you do sound design you also do music. There’s not much of a difference between sound and music; instead of using a guitar and a string orchestra, you use a car and the sound of rain - in a way, you have different sounds that are your instruments to create rhythms, dynamics, atmospheres and so on.” - Peter Albrechtsen (2013)


Given Peter's statement, I decided to create a robust ambience edit. One thing I didn't take into account, though, was the contextual representation of a sound. For example, I didn't look for birds and other sounds specific to Papua New Guinea. I've been advised before to be careful with this, depending on the project I'm working on. Nevertheless, in the end it damaged neither the story nor the experience. What we found ourselves doing in the mix was hiding the characteristics of the atmospheres instead of using them creatively, since while mixing my focus was primarily on making the dialogue clear and within the technical requirements.

To edit the ambiences, I created 16 tracks and divided them into two banks of 8. Each bank relates to a scene or perspective change, and the banks are colour-coded yellow and red for ergonomic purposes. That said, I still found myself searching for the same recurring sound quite often. In future sessions I'll colour code and label the sounds as well. For example, say I have a sound called "Birds, Chirp, Forest" at 10:01:33:50 and the same sound placed at a later timecode--both would then have the same name and colour.
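The labelling idea can be sketched as a toy Python helper. This is purely illustrative — Pro Tools exposes no such scripting here, and `assign_clip_colours` and the palette are hypothetical — but it captures the rule: every repeat of the same clip name gets the same colour.

```python
def assign_clip_colours(clips, palette):
    """Give each distinct clip name a stable colour so repeats of a
    sound (e.g. "Birds, Chirp, Forest") are easy to spot at a glance."""
    colours = {}
    labelled = []
    for name, timecode in clips:
        if name not in colours:
            # First time we see this name: take the next colour in the palette.
            colours[name] = palette[len(colours) % len(palette)]
        labelled.append((name, timecode, colours[name]))
    return labelled

session = [
    ("Birds, Chirp, Forest", "10:01:33:50"),
    ("River, Flow, Gentle", "10:02:10:00"),
    ("Birds, Chirp, Forest", "10:04:12:00"),  # repeat: same colour as the first
]
labelled = assign_clip_colours(session, ["yellow", "red"])
```

The same scheme applied manually in the session makes repeated placements trivial to locate by eye.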

Finally, I prepared a video demonstrating why this approach is effective from a mixing point of view. Enjoy!

In this video I run through how I adapt my sessions to digital mixing consoles, such as the Avid S6.


  1. Albrechtsen, Peter (2013). DocHouse Sound Design Masterclass with Peter Albrechtsen. Retrieved from - (13 minutes, 10 seconds).

AUS220 - Critical Reflection

This AUS220 series is part of a college assessment.

This was the trimester in which I took the most risks and put myself out of my comfort zone at every opportunity I could find. I've focused on improving everything I've been doing wrong in regards to communication and collaboration.

I really enjoyed the live sound and music intensives, as they were new to me. I had never participated in the entire production of a song, and being able to contribute some ideas--even though I don't understand music that much--as well as play an instrument that would be used later on made for a terrific experience. As for live sound, despite it not being for me, seeing and hearing effects such as compression and gating applied in a live environment shed light on what I had found to be the most complex concepts to understand. It was also tremendously satisfying to hear from guests and the band how professional our gig sounded and looked.

As I stated in the first paragraph, this was the trimester in which I took the most risks so far. I worked on 7 freelance projects while working part-time and studying full-time. I wanted to challenge myself to make the most of every hour of every day, and the result was extraordinary. I delivered all the projects to the best of my abilities and received great feedback on all of them. One is still ongoing and due this weekend--an audiobook for visually impaired children. Two of them were collaborations with the game and film departments at SAE, and the others comprised an interview for Sound Ideas, two articles for and an episode edit for the Tonebenders Podcast. In addition, although not freelance, I've written 5 extra blogs for AUS220 simply because I love to write.

Speaking of podcasts, I was in charge of the editorial for the trimester podcast project. I ran into many challenges involving the narrative arc of our story, as we fell a little short on solid research. As much as I've worked on improving communication and collaboration, there came a point where I was overwhelmed by discomfort, and that held me back when it came to finding the right people to interview. Nevertheless, I'm very satisfied with the overall quality of our production despite its minor flaws.

All in all, I believe I did a terrific job this trimester compared to previous ones. I've worked on improving my insecurities and weaknesses, and I put myself under very tight deadlines on purpose in order to push my boundaries and get the most out of my education. However, having challenged myself with numerous external projects, I sacrificed efficiency when researching for the podcast. That's not to say I regret my decisions. The mistake was minor, and I'll take the result as a learning experience and work on improving from now on.

Here are the links for the external freelance projects: