
The finished piece from the project synced up with the video from the Video Production module.
The main idea behind our piece for the project was a student who realises his mobile phone has malfunctioned on the way into university. The phone has crashed and keeps looping sound over and over. The student notices that sounds picked up by the phone's microphone are adding to the repetitive loop, so he then goes out of his way to add as many sounds to this loop as possible.
‘Using Audition’s noise reduction feature to clean up the recorded sounds’
We chose this idea as we felt we could incorporate into it all the techniques we had been taught in the tutorials. We recorded about 60 different sounds from around the campus and also generated some sound effects to simulate the phone crashing. I used Cubase to add audio effects to the generated sounds, which were automated over time.
‘Overview of the Audition project’
Once we had all the sounds I arranged them in Audition. I created three different loops that could be spread across the piece so it wasn’t just one static beat throughout. Once the arrangement was complete I bounced the tracks down to separate WAV files and imported them into Logic for mixing. I did this as I felt Logic handled the large number of tracks better.
‘Mix session in Logic’
Overall I have enjoyed the module and have learnt a lot of new things. I have also been shown new ways to achieve the tasks I could already complete and have greatly improved my knowledge of the subject.
This is the final tutorial before we concentrate fully on the project. We looked at Audition’s effects in more detail and were also taught how to use bus channels in the mixer.
Audio effects in Audition come in two flavours: ‘destructive effects’, which permanently change the audio file, and ‘non-destructive effects’, which are applied in real time between the playback of the file and the output of the software. Destructive effects are added to an audio file in the edit view. First select the audio to be affected with the time selection tool, then go to the effects menu and choose the desired effect. The downside to this method is that the actual audio file is changed and the effect parameters cannot be automated. The advantage, however, is that the effect processor is not running in real time; multiple processors running in real time use up CPU cycles and could reduce the performance of the computer.
‘Non-destructive effects’ are added in the multitrack view. Effects are added to audio channels in the mixer and any audio on that particular track will be affected. Multiple effects processors can be added to a single track and their parameters can be changed over time using track automation.
As the effects are running in real time the original audio file is not changed. The downside is that each effect requires computer processing power and therefore there is a limit to the number of effects the computer can handle at once.
To reduce the number of effect processors in a project it is possible to use one processor to process multiple tracks. First a bus channel is created and routed to the master output.
Each track that needs the effect is then routed through the bus channel.
The effects are then loaded on to the bus channel and affect any tracks that are routed through it.
Bus channels can also be useful for controlling the level of multiple tracks at once. For example a drum kit is usually recorded with several microphones. The level of each microphone can be adjusted separately and then each track can be routed to a bus channel that can be used to adjust the level of the drum kit as a whole.
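The signal flow described above can be sketched in code. This is only an illustration of the idea (the function names are made up, and the `np.tanh` "effect" is just a stand-in), not how Audition actually implements busses:

```python
import numpy as np

def mix_through_bus(tracks, effect, bus_gain=1.0):
    """Route several tracks through one bus: sum them, apply a single
    shared effect, then scale with the bus fader. One effect instance
    serves every routed track, which is the CPU saving described above."""
    bus = np.sum(tracks, axis=0)
    return effect(bus) * bus_gain

# Three short 'tracks' summed on a drum bus, with soft clipping as a toy effect
drums = np.array([np.zeros(4), np.ones(4) * 0.2, np.ones(4) * 0.3])
mix = mix_through_bus(drums, np.tanh, bus_gain=0.8)
```

Adjusting `bus_gain` changes the level of the whole kit at once, which is the second use of busses mentioned above.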
Using effects will add interest to the sounds we use in the project, and using bus channels will enable us to organise the mixer better and make it easier to keep track of everything in the mix, especially when working with lots of different sounds.
Arrangement made of loops from Tutorial 8
In this tutorial we practised taking short loops from pieces of music and then using them to create a new arrangement. We used Adobe Audition to do this and we were given a track by Fatboy Slim as a source for the loops.
At first I was finding loops manually by selecting a 4- or 8-beat section of the track and then carefully adjusting the start and end points of the selection using the mouse. I had loop play selected so the selection kept looping over; this helped me tweak the selection so that it would loop exactly in time. I also worked out the tempo of the song in bpm and then used this to find out how many seconds my selection should be. I used this as a rough guide but mostly did it by ear. I made sure ‘Snap to Zero Crossings’ was enabled so my loop was cut at zero amplitude and therefore would not click or pop when looped in the multitrack view. Once I was happy with a loop I copied it to a new file by pressing Ctrl + Shift + C.
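The rough guide calculation above is simple enough to write down. A sketch of the arithmetic (function name is my own):

```python
def loop_length_seconds(bpm, beats):
    """Duration in seconds of a loop of `beats` beats at `bpm` beats per minute."""
    return beats * 60.0 / bpm

# e.g. an 8-beat loop at 120 bpm should be exactly 4 seconds long
print(loop_length_seconds(120, 8))
```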
We were then shown three other methods of finding and selecting loops which were much quicker and more precise than the manual method I had been used to. The first was using the shortcuts ‘Shift + [’ and ‘Shift + ]’ to skip through transients in the audio file. This worked nicely as the piece had loud, prominent drums so it was easy to skip through each beat. The HJKL keys can then be used to fine-tune the selection and make sure it loops in time.
The second method was to use the F8 key to mark out the beats. Pressing F8 adds a marker so tapping along in time to the track will put a marker on each beat. Once the beats are marked it is easy to select a short loop and copy it to a new file. The drawback with this method is that the software sometimes responds slowly and places the marker a short time after F8 is pressed. This makes the technique less accurate on faster pieces of music but can still be useful as a rough guide.
The final method is similar to the second but instead of manually placing markers, the software can do it automatically. By going to Edit > Auto Cue > Find Beats and Mark, the software places a marker on everything it thinks is a beat. A loop can then be selected easily. Again this process doesn’t always work correctly, especially if the music has a complicated rhythm.
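The idea behind automatic beat marking can be sketched with a very crude energy-based transient detector. This is not Audition's actual algorithm, just an illustration of why loud, prominent drums make the job easy and complicated rhythms make it hard:

```python
import numpy as np

def find_beats(samples, rate, frame=512, threshold=4.0):
    """Very rough transient detector: returns times (in seconds) of frames
    whose energy jumps well above the local average energy."""
    n = len(samples) // frame
    energy = np.array([np.sum(samples[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    avg = np.convolve(energy, np.ones(8) / 8, mode="same") + 1e-12
    # mark rising edges only, so one marker per hit
    return [i * frame / rate for i in range(1, n)
            if energy[i] > threshold * avg[i]
            and energy[i-1] <= threshold * avg[i-1]]
```

A sound with strong isolated hits triggers cleanly; dense or syncopated material pushes the local average up and markers get missed, which matches what we saw in the tutorial.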
Now that I had a selection of loops saved as separate files I copied them across to the multitrack view on separate tracks. We were shown how to loop each file by right-clicking it, selecting ‘Loop Properties’ and checking the ‘Loop’ box. Now each file can be looped for any length of time by dragging its right-hand side.
Next we were shown how to use time stretching to change the tempo of the audio files without changing the pitch. This is done by right clicking on the audio file and selecting ‘Time Stretch Properties’. The amount of stretching can be specified as a percentage: 100% being the original speed, 50% being half the speed and 200% twice the speed. The time stretching algorithm can also be changed in this menu. Different algorithms are better suited to different types of audio such as percussive, melodic or speech.
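To get a feel for what a time-stretch algorithm does, here is a deliberately naive overlap-add sketch using the same percentage convention as above. Real stretchers (phase vocoders, WSOLA) sound far better; this is only the basic idea, and the function name is my own:

```python
import numpy as np

def stretch_ola(x, percent, win=1024):
    """Naive overlap-add time stretch: read analysis frames at one hop,
    write them at another. percent=100 is original speed, 50 is half
    speed (twice as long), 200 is double speed."""
    speed = percent / 100.0
    syn_hop = win // 4
    ana_hop = syn_hop * speed
    window = np.hanning(win)
    n_frames = int((len(x) - win) / ana_hop)
    out = np.zeros(n_frames * syn_hop + win)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        start = int(i * ana_hop)
        pos = i * syn_hop
        out[pos:pos + win] += x[start:start + win] * window
        norm[pos:pos + win] += window
    return out / np.maximum(norm, 1e-8)   # undo the window overlap gain
```

Because frames are re-read rather than resampled, the pitch of each grain is preserved while the overall duration changes, which is what separates this from simply playing the file slower.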
I then arranged all the loops into a short one minute piece and layered up time stretched copies to fill out the sound. I adjusted the levels using the mixer and then exported the piece as a .wav file.
Beat made from generated sounds in Tutorial 7
The aim of this tutorial was to learn how to produce a variety of sound effects using the sound generation feature of Adobe Audition. The first sound we had to create was a ‘Wind in trees’ effect. First of all we generated 10 seconds of pink noise as a base to work with. Pink noise sounds somewhat like leaves in the wind but has a constant volume that detracts from the realism. To create changes in the volume of the noise we used an amplitude envelope. Drawing in a random up-and-down swelling envelope pattern gave the noise a more realistic wind sound, as if the wind was gusting.
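The same wind effect can be sketched in code: pink-ish noise (here approximated with Paul Kellet's well-known economy filter, not whatever generator Audition uses internally) shaped by a slow random envelope to mimic gusting. Function name and parameters are my own:

```python
import numpy as np

def wind(duration=10.0, rate=44100, gusts=12, seed=0):
    """'Wind in trees' sketch: approximate pink noise shaped by a slow
    random amplitude envelope so the level swells like gusting wind."""
    rng = np.random.default_rng(seed)
    n = int(duration * rate)
    white = rng.uniform(-1, 1, n)
    pink = np.zeros(n)
    b0 = b1 = b2 = 0.0
    for i, w in enumerate(white):          # Kellet's 3-pole pink filter
        b0 = 0.99765 * b0 + w * 0.0990460
        b1 = 0.96300 * b1 + w * 0.2965164
        b2 = 0.57000 * b2 + w * 1.0526913
        pink[i] = b0 + b1 + b2 + w * 0.1848
    levels = rng.uniform(0.1, 1.0, gusts)  # random gust strengths
    env = np.interp(np.linspace(0, gusts - 1, n), np.arange(gusts), levels)
    out = pink * env
    return out / np.max(np.abs(out))       # normalise to full scale
```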
The next sound we had a go at creating was a spaceship landing straight out of a 1950s sci-fi movie. We started off by generating a 50Hz sine wave and modulating it with another sine wave. This produced a pulsing machine sound, but it had a constant pitch. To turn this into a spaceship landing sound the pitch must decrease over time. To do this I entered a frequency of 20Hz on the second page of the sound generation window. The machine sound now started at a frequency of 50Hz and pitched down to 20Hz, which sounded much more like a spaceship landing.
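The same idea rendered with NumPy, as a sketch rather than a reproduction of Audition's generator (names and the 8Hz modulator rate are my own choices):

```python
import numpy as np

def spaceship(duration=5.0, rate=44100, f_start=50.0, f_end=20.0, mod_hz=8.0):
    """50s-spaceship sketch: a sine sweeping down from f_start to f_end,
    amplitude-modulated by a slower sine for the pulsing machine quality."""
    t = np.linspace(0, duration, int(rate * duration), endpoint=False)
    freq = np.linspace(f_start, f_end, t.size)      # linear pitch fall
    phase = 2 * np.pi * np.cumsum(freq) / rate      # integrate frequency to phase
    carrier = np.sin(phase)
    mod = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))  # 0..1 pulsing envelope
    return (carrier * mod).astype(np.float32)
```

Integrating the frequency curve into a phase (the `cumsum`) is what makes the sweep glide smoothly instead of clicking between pitches.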
The second part of the tutorial was spent making synthetic drum sounds using the same sound generation tools in Audition. A kick drum was made by modulating a 0.2 second sine wave from 75Hz to 20Hz. I then used an amplitude envelope to control the volume over time.
A hi-hat sound was made by generating 0.1 seconds of white noise and modulating the amplitude with an envelope to create a short, snappy sound. I used a similar technique to create a snare drum sound, but this time making it longer and using an EQ to enhance the midrange.
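Both drum recipes above translate directly to code. This is a sketch of the same technique (the exact envelope decay rates are my own guesses, not values from the tutorial):

```python
import numpy as np

RATE = 44100

def kick(duration=0.2, f_start=75.0, f_end=20.0):
    """Kick: a short sine sweeping from 75 Hz down to 20 Hz,
    shaped by a fast decaying amplitude envelope."""
    t = np.linspace(0, duration, int(RATE * duration), endpoint=False)
    freq = np.linspace(f_start, f_end, t.size)
    phase = 2 * np.pi * np.cumsum(freq) / RATE
    env = np.exp(-8 * t / duration)        # exponential decay to silence
    return np.sin(phase) * env

def hihat(duration=0.1):
    """Hi-hat: a short burst of white noise with a snappy decay."""
    t = np.linspace(0, duration, int(RATE * duration), endpoint=False)
    noise = np.random.uniform(-1, 1, t.size)
    env = np.exp(-20 * t / duration)
    return noise * env
```

A snare would follow the hi-hat pattern with a longer duration plus a midrange EQ boost, as described above.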
There was still some time left in the tutorial so I copied all my generated sounds to the multitrack view and made a short beat.
Sound generation is another useful technique for the project. It will let us produce sounds that we cannot record around the campus. Understanding how a sound is created is also beneficial when editing pre-recorded audio.
After learning the basics of Cubase we were shown how to do the same tasks in Logic on the Mac. Although the workflow in Logic is similar to Cubase, I find Logic's interface much easier to get along with. Everything is kept in one window so the workspace is easier to organise.
I imported the sounds from the Cubase tutorial and again arranged a short beat. I added an instrument track and loaded a virtual instrument. Logic doesn’t load VST plugins but uses Apple’s own Audio Unit format or AU for short.
The finished piece I created using Cubase in Tutorial 5
Moving away from audio editing, we had a look at MIDI sequencing for this tutorial. We were introduced to Steinberg Cubase, which is software designed for audio and MIDI sequencing. MIDI stands for Musical Instrument Digital Interface and is a language used to control electronic instruments in both hardware and software form. A MIDI sequence doesn’t contain any audio itself; it is only control data and must be routed to an instrument in order to create a sound.
We were shown how to add a MIDI track and route it to a virtual instrument plugin. Cubase uses the VST (Virtual Studio Technology) standard of plugins for both instruments and effects. The virtual instrument I used was HALion One, which is a sample playback engine and includes a wide range of built-in sounds. I chose an electronic drum kit patch and recorded a simple drum pattern using a MIDI keyboard to trigger the sounds.
As my timing wasn’t exactly spot on when recording the MIDI data, I used the quantise function to snap all the MIDI notes to the nearest 1/16th beat. The tutor showed me how to use iterative quantise to move the notes closer to the beat but not exactly on the beat. The advantage of this is that it tightens up the timing whilst retaining a slightly loose, human feel. I experimented with this technique for some of the syncopated hits but made sure the kick and snare were fully quantised. This gave the drum loop an electronic feel with a slight shuffle in between the beats.
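The difference between full and iterative quantise boils down to how far each note is moved toward the grid. A sketch of the idea (function name and the 0.5 strength are my own; Cubase's actual iterative strength is configurable):

```python
def quantise(times, grid, strength=1.0):
    """Move note times (in beats) toward the nearest multiple of `grid`.
    strength=1.0 snaps fully onto the grid; a partial strength such as
    0.5 mimics iterative quantise, tightening timing but keeping feel."""
    out = []
    for t in times:
        target = round(t / grid) * grid
        out.append(t + (target - t) * strength)
    return out

# 1/16th-note grid = 0.25 beats
notes = [0.02, 0.27, 0.61, 0.98]
print(quantise(notes, 0.25))        # fully on the grid
print(quantise(notes, 0.25, 0.5))   # halfway there: tighter but human
```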
Now I had a basic drum beat, which I added to using the piano roll editor. I was shown at this point that I could change a setting on the MIDI track so that it would show a drum map instead of the piano roll. This was much easier for editing the MIDI data as each drum sound was clearly labelled instead of just showing a piano keyboard.
Cubase isn’t just a MIDI sequencer, as it can also sequence audio files. I added an audio track and imported some of the percussive sounds from the field recording tutorial. I arranged the sounds so that they complemented the MIDI drum loop I had already created.
At this point I had finished all the tasks required for the tutorial so I decided to add a synth sound using another virtual instrument. I created another MIDI track and assigned it to the Monologue virtual instrument.
I chose a preset sound and recorded some notes using the MIDI keyboard. I then arranged all the sounds into a short piece and used the mixer to adjust the volume of each sound.
‘Overview of the completed Cubase project’
Cubase is another useful piece of software that we can use for the project.
Beat created in Tutorial 4 using Audition’s multitrack view
For this tutorial we were asked to make a short drum loop using the percussive sounds from the field recording. To do this we used the multitrack view in Adobe Audition. We were shown how to change the timeline to ‘Bars and Beats’ and enable snapping so that each sound would snap onto a beat; this would make it much easier to arrange the sounds and make sure everything was in sync.
I imported some of the percussive sounds from the previous tutorial and also some kick, snare and hi hat sounds that were provided by the tutor. It was then just a case of dragging each sound onto the multitrack view and arranging them to create a beat.
I put each sound on a separate track so that I could use the mixer view to control the level of each sound individually. I also used the EQ in the mixer to stop some of the sounds from interfering with each other and to get the most from the field recordings.
Another useful tip we were shown was to use the metronome to make sure everything was in time. This was especially useful as I had created a beat with quite a lot of syncopation.
Once we had created a beat we were introduced to synchronisation sheets. A sync sheet is used when syncing audio to video. The idea is to mark down each percussive sound from our beat and work out which frame of video they would fall on. A quick calculation was necessary as we had been working with Beats Per Minute and video frame rate is measured in Frames Per Second. My piece had a tempo of 120bpm so each beat is 0.5 seconds (60 seconds in a minute divided by 120 beats). If the video has a frame rate of 24fps then each beat would take 12 frames of video.
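The sync sheet calculation above is easy to automate. A sketch of the arithmetic (function name is my own):

```python
def beat_to_frame(beat, bpm, fps):
    """Video frame on which a given beat (counted from 0) lands."""
    seconds = beat * 60.0 / bpm
    return round(seconds * fps)

# At 120 bpm and 24 fps each beat is 0.5 s, i.e. 12 frames apart
print([beat_to_frame(b, 120, 24) for b in range(4)])
```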
‘A Sync Sheet’
Beat creation from percussive sounds and planning video synchronisation will be good practice for the project and make it a lot easier when editing the video and keeping it in sync with the audio.
Percussive sounds captured whilst field recording
The aim of the third tutorial was to get to know the portable audio recording equipment and practise getting clean recordings of percussive sounds from around the university campus. In order to hire the equipment from the help desk we first had to fill in a risk assessment form. This involved a brief description of the task and any possible risks involved. The risk assessment form only permitted us to use the equipment on campus, which was something we had to keep in mind when planning the sounds to be recorded for the project.
The field recorder we used was a Tascam DR-07.
‘Image from www.tascam.com’
Instead of using the built in stereo microphones we connected a shotgun mic to the input of the recorder.
The benefit of this was the highly directional polar response of the shotgun mic; it allowed us to pick up the sounds we wanted to record while keeping background noise to a minimum. This was especially useful outside as it was a windy day and buses and cars were passing us frequently.
Once we had a range of sounds recorded we headed back to the lab to upload the files to the computer. We were shown two different ways of doing this. The first was to connect the field recorder to the computer via a USB cable and copy the WAV files straight from the recorder’s memory card. The other way was to connect the recorder’s line-level output to the computer’s line input, play the sounds from the recorder and record them into Audition. We did this via a hardware mixer to give us control of the level and make sure it didn’t clip.
Once we had the WAV files on the computer we used Audition to cut them up into individual percussive hits that we could later sequence. We did this manually at first but the tutor then showed us how to use the quicker batch process to find each hit automatically. This allowed us to complete the task in a much shorter period of time.
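A rough stand-in for that batch step can be written as a silence-gated splitter: find regions louder than a threshold and treat short gaps as part of the same hit. This is only a sketch of the idea, not Audition's actual batch process, and the names and thresholds are my own:

```python
import numpy as np

def split_hits(samples, rate, threshold=0.02, min_silence=0.05):
    """Return (start, end) sample ranges of separate hits: regions where
    the signal exceeds `threshold`, merging gaps shorter than
    `min_silence` seconds into the same hit."""
    loud = np.abs(samples) > threshold
    gap = int(min_silence * rate)
    hits, start, last_loud = [], None, -10**9
    for i, is_loud in enumerate(loud):
        if is_loud:
            if start is None:
                start = i
            last_loud = i
        elif start is not None and i - last_loud > gap:
            hits.append((start, last_loud + 1))
            start = None
    if start is not None:            # hit running to the end of the file
        hits.append((start, last_loud + 1))
    return hits
```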
This tutorial was very helpful in teaching us techniques we will use in our project and we will now be able to hire the equipment during free sessions to build up a library of sounds.
In the second tutorial we were given a similar task to tutorial 1. This time, however, we were not provided with a step by step guide; instead we had to work to a brief which outlined the requirements for the finished product.
Brief
The brief required the dialogue audio file to be cleaned up. The file contained clicking and popping noises and also had a low-frequency humming sound at around 50Hz. To get rid of the clicks and pops I experimented with Audition’s Click/Pop Eliminator. I highlighted the area of audio with a click sound and used one of the presets to remove it.
To tackle the humming sound I used the parametric EQ. I created a node with a narrow frequency range and gave it some gain. I then moved this up and down the frequency range until the hum was at its loudest, then reduced the gain at that frequency until the hum was reduced but the dialogue was not affected.
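Pulling a narrow EQ node down at the hum frequency is essentially a notch filter. A sketch of the same operation in code, using the standard RBJ cookbook biquad formulas (the Q value and function name are my own choices, not from Audition):

```python
import numpy as np

def notch_filter(x, rate, freq=50.0, q=30.0):
    """Biquad notch (RBJ cookbook) cutting a narrow band around `freq`,
    the code equivalent of a deep narrow parametric EQ cut."""
    w0 = 2 * np.pi * freq / rate
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    b, a = b / a[0], a / a[0]
    y = np.zeros_like(x, dtype=float)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):           # direct form I difference equation
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        y[n] = yn
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y
```

A high Q keeps the notch narrow, which is why the dialogue either side of 50Hz is left largely untouched.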
The techniques learnt in this tutorial will be useful in the project as we will be recording outdoors and will have to contend with wind and other background noises. It was also good practice using Audition’s multitrack view.
Tutorial 1 - Finished Video
In the first tutorial we had a look at Adobe Audition and how it can be used to arrange music and other sounds for video. We were provided with a short video clip from a snowboarding TV program, the speech audio files from the clip and some music and sound effects. We were shown how to import the video and audio files into Audition’s multitrack view. We then followed a step by step guide to arrange the clips to match up with the video. This involved lip-syncing the speech audio, adding fades to audio clips and setting a ‘bed’ level for the music to sit under the speech.
‘An overview of the session in the multitrack view’
I was shown by the tutor how to zoom in whilst in the multitrack view. This greatly increased accuracy when positioning the audio files. This was useful when lip syncing the speech which I found the most difficult part of the task.
This tutorial was good practice for the project as the main theme in the project brief is the synchronisation of audio and video.