WRFL: Movement

With the impending move back to the student center, this is an introspective look at WRFL’s temporary station in the basement of Whitehall.

Editing, Photography, and Narration by Clay Greene

Special Thanks to:
Ben Allen,
Grant Sparks,
Max Smith,
Nick Warner,
John Herbst,
and Reggie Smith

The Basics of Audio Processing in Premiere Pro 2018


The first and most important thing to understand about audio processing is that there is no right way to do it.  However, there are a lot of wrong ways to do it.  What I mean by this is that there is no catch-all, perfect function that makes the content beautiful.  With a flick of the wrist, or the turn of a knob, a magnificent sound can become wholly unlistenable.  Therefore, in this blog, I am going to detail the basics of audio processing.  Understanding these fundamental ideas will allow you to make your own creative decisions in your own work.

When trying to make your creative decisions, it is important to play around with the equipment and functions that you have.  You have heard good audio before.  Playing around with your equipment can give you the experience you need to replicate that quality in your work.

An important thing to note is that you never know what equipment the audience is going to be listening with.  With that in mind, you need to make sure that you edit with good headphones or earbuds.  A good set of headphones for audio mixing has a flat frequency response and good isolation.  If you do not have a set of headphones with a flat frequency response, take that into consideration when you are mixing.  Maybe your headphones are bass-heavy, so what sounds like a lot of bass to you may sound just right on a flat response.  If you do not have a set of headphones with good isolation, make sure you work in a quiet room.  A loud fan, A/C, or noisy neighbors can mask noise in the audio you are trying to work with.


Recording and the Microphone


The first step to processing your audio well is having a good recorded signal.  If it doesn’t sound good to begin with, no amount of processing will make it so.  Having a sound source that is clearer and louder than the noise around it will allow for a good recording.  Some recorders introduce a noticeable level of noise of their own.  Don’t use these recorders.  To minimize the risk of using a recorder like this, make sure you do a field test with the equipment before recording for your project.

Possibly the most important element of recording equipment is the microphone.  The microphone takes the sound and converts it into an electronic signal.  Many microphones are directional; the most common directional pattern is known as “cardioid.”  To get a good signal out of a cardioid microphone, make sure that it is close to the sound source.  This garners a better bass frequency response, as well as creating a more intimate sound.  Moving it farther away typically captures less bass, but you get the sound of the source in the room.  For a lot of applications, closer is generally better, but understanding the different options allows you to make more informed decisions.

Another common type of microphone picks up sound in all directions.  This is known as an omni, or “omnidirectional,” microphone.  With an omni microphone, you don’t have to worry about pointing it in the proper direction.  However, a lot more of the distant “room sound” comes along with these microphones.

A good piece of equipment to have as well is a windscreen.  They cover the diaphragm, the audio conversion section, of a microphone and help keep out low frequency noise generated by wind or certain consonants (“p” sounds are particularly bad for English speakers).

It is also important to know whether the microphones you are using are of good quality.  Almost all spec sheets for microphones can be found online.  You can look up the frequency responses of many different microphones.  Look up reviews and comparisons.  The differences between some microphones are absolutely marginal compared to the difference in price.  As with headphones, however, it is most important that you understand the limitations and abilities of your equipment.  Understanding what frequency ranges or volumes a particular microphone excels at allows you to make good decisions when recording.




After you record a good, clear signal, you drop your .wav or .aiff file into Premiere and begin processing the audio.  The first effect you will use is called “compression”.  In this article, we will be using a reference photo of the Premiere Pro CC 2018 effect titled “Single Band Compressor”.  However, most of these controls are nearly universal across compressors.

Compression is an inherently simple process.  It takes the loudest sections of a signal and brings them down toward the volume of the quieter sections, and the whole signal can then be turned up to make up the difference.  This makes the volume of a track easy for you to control when mixing.  On top of this simple core process, there are quite a few functions that allow a lot of nuance from a compressor.

Single Band Compressor in Premiere Pro 2018

The function titled “Attack” controls how quickly the actual compression effect begins.  A fast attack means that the compression begins sooner.  A slow attack means that the compression begins later.  This time can range from a few microseconds (common on specialized compressors called “limiters”) to around 100 milliseconds.  A slower attack tends to sound more natural, as the initial hit of the sound, the transient, is retained.  Anything longer than about 30 ms can be too slow for most applications; a very long attack is a particular effect to be used wisely.

The function titled “Release” controls how quickly the compression effect resolves.  Similar to attack, a fast release means the compression resolves sooner and a slow release means the compression resolves later.  This time can range from ten or so milliseconds to three or four seconds.  A longer release time can make the compression sound more natural, but it can keep the gain turned down through sudden quiet sections.
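To make attack and release concrete, here is a rough sketch (in Python, not anything from Premiere) of the kind of level-tracking a compressor does behind the scenes.  The function names and numbers here are my own; real compressors vary, but the idea of reacting quickly on the way up (attack) and slowly on the way down (release) is the same:

```python
import math

def smoothing_coefficient(time_ms: float, sample_rate: float) -> float:
    """One-pole smoothing coefficient for an attack or release time."""
    return math.exp(-1.0 / (time_ms / 1000.0 * sample_rate))

def envelope_follower(levels, attack_ms, release_ms, sample_rate=48000):
    """Track the signal level, reacting at attack speed when the level
    rises and at release speed when it falls."""
    a_att = smoothing_coefficient(attack_ms, sample_rate)
    a_rel = smoothing_coefficient(release_ms, sample_rate)
    env = 0.0
    out = []
    for x in levels:
        # Rising level -> attack coefficient; falling level -> release.
        coeff = a_att if x > env else a_rel
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out
```

Feeding a constant level in, the tracked envelope climbs at the attack speed; when the input drops away, it falls back at the release speed, which is exactly the behavior the two knobs control.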

The function titled “Threshold” denotes the level at which the compressor starts functioning.  If the recorded sound passes the level indicated by the threshold, the compressor activates, bringing it down to the proper level.  A lower threshold means that more of the signal is compressed, and a higher threshold means that less of the signal is compressed.  It is possible to set the threshold so high that there is never any compression.  A good setting for this function is different on every single recording and should be tweaked accordingly.

The function titled “Ratio” controls how strongly the audio that passes the threshold is compressed.  For example, a ratio of 4:1 means that for every 4 dB above the threshold on the input, only 1 dB above the threshold will leave the output.  A higher ratio means more compression, which can typically result in a more obvious, and possibly “worse,” sound.  Heavy compression does, however, allow for a more consistent volume.
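The threshold-and-ratio math can be sketched in a few lines of Python.  This is just the textbook static curve, not Premiere’s actual implementation, and the function name is my own:

```python
def compressed_level(input_db: float, threshold_db: float, ratio: float) -> float:
    """Static compressor curve: the amount a level exceeds the
    threshold is divided by the ratio."""
    if input_db <= threshold_db:
        return input_db  # below the threshold, the signal is untouched
    return threshold_db + (input_db - threshold_db) / ratio
```

With a threshold of -20 dB and a 4:1 ratio, a peak that comes in 4 dB over the threshold leaves only 1 dB over it: `compressed_level(-16, -20, 4)` gives -19.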


The Equalizer


The next important step is using the equalizer, or EQ, to shape the frequency response and clean up the sound.  If there are some frequencies that are too loud, you can use EQ to bring those down.  Alternatively, if the audio is lacking some frequencies, these can be brought back in.  EQ can also be used as an effect to emulate old-sounding equipment, walkie-talkie speakers, or other effects.  While Premiere has a multitude of EQ effects, the reference photos will be of the “Graphic EQ (10 Band)” and the “Parametric EQ” effects.

Graphic EQ (10 Band) in Premiere Pro 2018

Graphic EQ has a set number of bands, each centered on a fixed frequency.  If a slider is moved down, it cuts that frequency.  If a slider is moved up, it boosts that frequency.  Premiere’s Graphic EQ effect also allows for a custom volume range, as well as 20-band and 30-band versions.  Graphic EQ is useful in that it forces you to have a starting point when deciding what frequencies to add or subtract.

Parametric EQ in Premiere Pro 2018

Parametric EQ has a similar base function to Graphic EQ: it cuts and boosts particular frequencies.  Each individual band of Parametric EQ has more controls, however, so it allows for more nuance.  One parameter that can be controlled is “Frequency”.  This sets the frequency at which the band is centered.  Another parameter is “Gain”.  This sets how much the band is cut or boosted.  A third parameter is “Q”.  This sets the width of the band.  A lower Q means that the band is wider, as shown in these images.

Q = 1
Q = 2
Q = 4
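For the curious, the shape of a single parametric band can be computed directly.  The sketch below uses the well-known “Audio EQ Cookbook” peaking-filter formulas rather than Premiere’s own code, and the function names are mine:

```python
import cmath
import math

def peaking_eq_coefficients(f0, gain_db, q, sample_rate=48000):
    """Biquad coefficients for one peaking EQ band (Audio EQ Cookbook)."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    num = [1.0 + alpha * a, -2.0 * math.cos(w0), 1.0 - alpha * a]
    den = [1.0 + alpha / a, -2.0 * math.cos(w0), 1.0 - alpha / a]
    # Normalize so the first denominator coefficient is 1.
    return [n / den[0] for n in num], [d / den[0] for d in den]

def gain_at(freq, b, a_coeffs, sample_rate=48000):
    """Evaluate the filter's gain in dB at a single frequency."""
    z = cmath.exp(1j * 2.0 * math.pi * freq / sample_rate)
    num = b[0] + b[1] / z + b[2] / z**2
    den = a_coeffs[0] + a_coeffs[1] / z + a_coeffs[2] / z**2
    return 20.0 * math.log10(abs(num / den))
```

At the center frequency the band hits exactly its Gain setting, and lowering Q boosts more of the neighboring frequencies, matching the widening shown in the images above.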


Another two types of EQ are called Low Pass and High Pass Filters.  Low Pass Filters cut out high frequencies; they let the lows pass through.  On the other side, High Pass Filters cut out low frequencies; they let the highs pass through.  Low Pass Filters are good for cutting high-end noise.  High Pass Filters are good for cutting out low-end noise that may be present if a windscreen wasn’t used.
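A few lines of Python show the relationship between the two.  This is a simple first-order design for illustration only (real EQ plugins use steeper filters), and the naming is my own:

```python
import math

def one_pole_filters(samples, cutoff_hz, sample_rate=48000):
    """First-order low-pass, plus the matching high-pass taken as
    whatever the low-pass removed."""
    coeff = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    low = 0.0
    lows, highs = [], []
    for x in samples:
        low = coeff * low + (1.0 - coeff) * x  # smoothing keeps the lows
        lows.append(low)
        highs.append(x - low)  # what the low-pass removed is the high end
    return lows, highs
```

The high-pass output here is literally the input minus the low-pass output, which is a handy way to remember that the two filters split the spectrum between them.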




To finish your audio, you have to make sure that the levels are correct.  When checking levels, the output meter on the right side of the Premiere window should peak, or hit its maximum, at around negative five or six decibels, just below the top of the meter.  It is also important to make sure the focal point of the audio is easily heard.  If there is music or sound effects underneath the voice-over, make sure that the voice is the most prominent element in the mix.
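Digital meters like Premiere’s read in dBFS, where 0 dB is the absolute ceiling and everything below it is negative.  As a sketch (my own function, not Premiere’s), a peak reading can be computed like this:

```python
import math

def peak_dbfs(samples):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return float("-inf")  # silence has no defined level
    return 20.0 * math.log10(peak)
```

A signal that peaks at half of full scale reads about -6 dB, which is roughly where you want your loudest moments to land.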

Most importantly, take a break from your work.  The more you listen to something, the more you rationalize it.  Sections that will sound weird to the audience will sound normal to you.  So, take your time and pay attention to the details.

The Last Origin “Tired Eyes” on WRFL Live!


The Last Origin describe themselves as a “high energy, indie rock band”.  They came in and played on WRFL Live! on March 21st.


The Last Origin is:

John Anderson, Guitar and Vocals

Grant Howell, Bass

Kenny Tayce, Drums

and guest Emma Treg, Guitar


Facebook: https://www.facebook.com/TheLastOrigin/

SoundCloud: https://soundcloud.com/thelastorigin


WRFL Live! is a weekly radio show that features bands from the central Kentucky area.

The show airs Wednesdays from 8pm-10pm on 88.1, WRFL Lexington.

You can also listen to the stream at wrfl.fm.


Facebook: https://www.facebook.com/wrfllive/


Instagram: https://www.instagram.com/wrfl_live/

Saturnz Barz: In Comparison

Saturnz Barz is the first official single from Gorillaz off their 2017 album Humanz.  Alongside it, they released two music videos.  One is a traditional, 16:9 animation.  The other is a VR video.  With over 11 million views, the VR video ranks as one of the most watched VR music videos on YouTube.  Both the traditional and VR videos tell the same narrative of the band visiting a ghost house.  The adaptation of the traditional video into a VR context is an interesting one; there are a few things they did well, and some that were done poorly.

During the first half of the intro to the music video, the audience is placed within a train car with the traditional Saturnz Barz music video playing on a phone in front of them.  This scene helps establish where center is for the audience, so that they have enough time to orient themselves before the important part of the video begins.  After this, the audience is placed within the ghost house with the band as they explore.  In the traditional video, there are plenty of transitions during this section where the camera moves to pull the characters into frame.

In the VR video, however, the characters’ location is centered in the shot, allowing the viewer to immediately find them.  In the traditional video, there is a transition when the music begins: Noodle places a record on a turntable, it and the camera spin around, and the record becomes a cake that 2D is looking to eat.

Thankfully, this spinning transition was cut from the VR video and replaced with the record flashing and becoming the cake.  Throughout the video, there is a lot of interesting movement of the characters that forces the audience to follow them around.  In the penultimate scene, the audience follows the character Murdoc as he orbits Saturn and floats around them.  In the final scene of the VR video, the audience is immediately cut back into the same setting as the intro.  This allows the audience to re-orient themselves to center, since the penultimate scene ends with the audience facing to their right.

This video is not perfect, however.  Aside from simulating the direction that the camera points in the traditional video, there isn’t much reason to look around.  Some of the scenes orbiting Saturn have something happening on the opposite side from the focal point, but it isn’t that important.  A fair amount of the scenes have the audience looking about 90 degrees to their side, which can be uncomfortable.  The orbiting Saturn scenes can be incredibly disorienting.  There is a whole lot of movement in these scenes, and a whole lot of movement the audience themselves must make to follow the characters.  One of the Saturn scenes has the character Murdoc fly all the way behind and around the audience, which can force a seated viewer to whip their head around to see where he actually went.  These issues of movement come from many of the scenes keeping the same blocking as the traditional video, where Murdoc makes the same movements.


Through it all, this is still an extremely well-thought-out video.  The attention to detail on what makes an appropriate transition for the audience between scenes is incredibly impressive.  While there are issues, most of them couldn’t be resolved without completely overhauling the animation and recreating the scenes from the ground up.  Maybe they wanted the Saturn scenes to be disorienting.  It is a very strange effect, and Gorillaz isn’t known for attempting to be normal.