Even though we live for a good “Before n’ After,” we don’t watch HGTV only for the home makeover. It takes courage to admit that we watch for the crazy house-hunting mom drama. We get so caught up in that drama that we forget what we originally tuned in for: the reveal. A powerful transformation sells the dramatic change the “Property Brothers” worked so hard for. VR has the potential to make the makeover even better, but one small problem stands in the way: a wipe is easy to animate in 2D, but not in 3D.
In this tutorial, I will teach you how to animate a transitional wipe in VR. It’s perfect for a transformation like the ones on HGTV, or the wipe in Avengers: Infinity War when Thanos reveals the destruction of his home planet.
While you may be a magical Premiere Pro warlock, getting the footage perfect before post-production is essential. Here are your goals when filming:
Make sure to stick the camera in the exact same position for both shots. Place it on the same spot on the floor, at the same height. Next, the camera and its settings should be the same. Ideally, use the highest resolution and 30fps. If there are dark shadows, raise the exposure so that the image is brighter. Brighter pixels are sharper pixels!
Now that we have all our footage ready to go, stitch it and we can finally open Premiere Pro! Import the videos and lay them directly on top of each other, with the “Before” above the “After.”
The second step is to align the footage so that the landmarks are on the same part of the screen. Reduce the opacity of the top layer so that you can see both simultaneously. Then apply the “VR Rotate Sphere” effect and adjust the layers until they are aligned.
Side Note: If you are just a bit off and you can’t get it in the EXACT position, don’t worry too much. The human mind will compensate for small differences.
Moving on, we are going
to start adding effects. Select the effects tab and look up the “crop” tool.
Drag the effect onto the top layer. Select the “Effects Controls” tab on the
left of the screen. The effect on the bottom should say “Crop”. Click it.
Enter 0% for the left and 75% for the right. You can personalize the size to your taste, but I found this to be a good balance between the “Before” and “After.”
There should be a small
chunk of the before shifted slightly to the left with a harsh line revealing
the “After” layer under it. No feathering. No motion. Just a block of footage standing alone like a
kid in the supermarket pretending that they didn’t get lost.
To get some motion going, we need to use the dreaded keyframes. They can be a little tricky for first-timers, so it is important to follow these instructions carefully.
Make sure your timeline in Effect Controls is at the zero mark. In the Crop effect, you will see a stopwatch icon. Click the stopwatch for both the left and the right crop values. Two diamonds should appear at the beginning of the timeline. Move the timeline cursor to where you want the animation to end; 15-30 seconds should be enough. Now change the values to 75% for the left and 0% for the right. Two more diamonds should appear in the timeline.
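Under the hood, those two pairs of keyframes just define a linear interpolation: at any moment, Premiere blends the start and end values by how far along the animation you are. A rough sketch of that idea (the 15-second duration and the crop percentages are simply the values used above):

```python
def lerp(start, end, t):
    """Linearly interpolate between start and end for t in [0, 1]."""
    return start + (end - start) * t

def crop_at(time_s, duration_s=15.0):
    """Crop percentages at a given time, mirroring the keyframes above:
    Left animates 0% -> 75% while Right animates 75% -> 0%."""
    t = min(max(time_s / duration_s, 0.0), 1.0)  # clamp to the keyframe range
    return lerp(0.0, 75.0, t), lerp(75.0, 0.0, t)

print(crop_at(0.0))   # (0.0, 75.0)  - the first pair of diamonds
print(crop_at(7.5))   # (37.5, 37.5) - halfway through the wipe
print(crop_at(15.0))  # (75.0, 0.0)  - the second pair of diamonds
```

Because both edges move together, the visible strip of “Before” slides across the frame at a constant speed.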
You can now play back the video and see the wipe in motion. The hard part is over. You can relax now.
To give it the finishing touches, we need to feather out the harsh line on the top layer. Luckily, this option is already built into the Crop effect. The value of the Feather control should be at zero. Change it to 500. This will ensure that the two layers blend seamlessly.
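Conceptually, feathering replaces the hard edge with a gradual opacity ramp spread across the feather width. This little sketch shows the idea only; it is not Premiere’s actual implementation, and the pixel positions are made-up example values:

```python
def edge_alpha(pixel_x, edge_x, feather_px):
    """Opacity of the top layer near its cropped edge.
    1.0 well inside the layer, 0.0 past the edge, linear in between."""
    if feather_px == 0:
        return 1.0 if pixel_x <= edge_x else 0.0  # hard line, no feather
    t = (edge_x + feather_px / 2 - pixel_x) / feather_px
    return min(max(t, 0.0), 1.0)  # clamp the ramp to [0, 1]

print(edge_alpha(400, 400, 500))  # 0.5 - halfway faded right at the edge
```

With a feather of 0 the transition is a one-pixel cliff; with 500 it is spread across 500 pixels, which is why the line disappears.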
If “ifs” and “buts” are candy and nuts, you should be finished. Congratulations!
The first and most important thing to understand about audio processing is that there is no right way to do it. However, there are a lot of wrong ways to do it. What I mean is that there is no catch-all, perfect function that makes the content beautiful. With a flick of the wrist, or the turn of a knob, a magnificent sound can become wholly unlistenable. Therefore, in this blog, I am going to detail the basics of audio processing. Understanding these fundamental ideas will allow you to make your own creative decisions in your own work.
When trying to make those creative decisions, it is important to play around with the equipment and functions you have. You have heard good audio before. Playing around with your equipment gives you the experience you need to replicate that quality in your work.
An important thing to note is that you never know what equipment the audience will be listening with. With that in mind, you need to make sure that you edit with good headphones or earbuds. A good set of headphones for audio mixing has a flat frequency response and good isolation. If you do not have headphones with a flat frequency response, take that into consideration when you are mixing. Maybe your headphones are bass-heavy, so what sounds like a lot of bass to you may sound just right on a flat response. If you do not have headphones with good isolation, make sure you work in a quiet room. A loud fan, A/C, or noisy neighbors can mask noise in the audio you are trying to work with.
Recording and the Microphone
The first step to processing your audio well is having a good recorded signal. If it doesn’t sound good to begin with, no amount of processing will make it so. Having a sound source that is clearer and louder than the noise around it will allow for a good recording. Some recorders add a noticeable level of noise of their own to the signal. Don’t use these recorders. To avoid being caught out by one, do a field test with your equipment before recording for your project.
Possibly the most important element of recording equipment is the microphone. The microphone takes the sound and converts it into an electronic signal. Microphones are commonly directional; this pickup pattern is known as “cardioid.” To get a good signal out of a cardioid microphone, make sure that it is close to the sound source. This garners a better bass frequency response and creates a more intimate sound. Moving it farther away typically gives less bass, but you get the sound of the source in the room. For a lot of applications, closer is generally better, but understanding the different options allows you to make more informed decisions.
Another common type of microphone picks up sound in all directions. This is known as an omni, or “omnidirectional,” microphone. With these microphones, you don’t have to worry about placing it in the proper direction. However, there is a lot more of the distant “room sound” that comes with these microphones.
A good piece of equipment to have as well is a windscreen. They cover the diaphragm, the audio conversion section, of a microphone and help keep out low frequency noise generated by wind or certain consonants (“p” sounds are particularly bad for English speakers).
It is also important to know that the microphones you are using are of good quality. Almost all spec sheets for microphones can be found online, so you can look up the frequency responses of many different models, along with reviews and comparisons. The differences between some microphones are marginal compared to the difference in price. As with headphones, however, it is most important that you understand the limitations and abilities of your equipment. Knowing which frequency ranges or volumes a particular microphone excels at allows you to make good decisions when recording.
After you record a good clear signal, you drop your .wav or .aiff file into Premiere and begin processing the audio. The first effect you will use is an effect called “compression”. In this article, we will be using a reference photo from the Premiere Pro CC 2018 effect titled “Single Band Compressor”. However, most of these controls are almost universal for compressors.
Compression is an inherently simple process: it takes the loudest sections of a signal and brings them down toward the volume of the quieter sections, after which the whole signal can be made louder. This makes the volume of a track easy for you to control when mixing. On top of this simple core process, there are quite a few functions that allow a lot of nuance from a compressor.
The function titled “Attack” controls how quickly the actual compression begins. A fast attack means that the compression begins sooner; a slow attack means it begins later. This time can range from a few microseconds (common on specialized compressors called “limiters”) to around 100 milliseconds. A slower attack tends to sound more natural because the initial hit of the sound, the transient, is retained. Anything longer than 30ms is too slow for most applications, however, so it is a particular effect to be used wisely.
The function titled “Release” controls how quickly the compression effect resolves. Similar to attack, a fast release means the compression resolves sooner and a slow release means the compression resolves later. This time can range from ten or so milliseconds to three or four seconds. A longer release time can make the compression sound more natural, but it can stifle sudden quiet sections.
The function titled “Threshold” sets the level at which the compressor starts working. If the recorded sound passes the level indicated by the threshold, the compressor activates and brings it down to the proper level. A lower threshold means that more of the signal is compressed, and a higher threshold means that less of it is. It is possible to set the threshold so high that there is never any compression. A good setting for this function is different on every single recording and should be tweaked accordingly.
The function titled “Ratio” controls how strongly the audio that passes the threshold is compressed. For example, a ratio of 4:1 means that for every 4 dB above the threshold on the input, only 1 dB leaves the output. A higher ratio means more compression, which can typically result in a more obvious, and possibly “worse,” sound. High compression does, however, allow for a more consistent volume afterward.
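Threshold and ratio together define a simple gain curve. Ignoring attack and release (which only smooth how quickly this behavior kicks in and lets go), the static math looks like this; the -20 dB threshold here is just an example value, not a recommendation:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Output level for a given input level, in dB.
    Below the threshold the signal passes unchanged; above it,
    every `ratio` dB of input yields only 1 dB of output."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # below threshold: unchanged -> -30.0
print(compress_db(-16.0))  # 4 dB over the threshold at 4:1 -> 1 dB over -> -19.0
```

Raising the ratio toward infinity turns this curve into a hard ceiling, which is exactly what a limiter is.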
The next important step is using the equalizer, or EQ, to shape the frequency response and clean up the sound. If there are some frequencies that are too loud, you can use EQ to bring those down. Alternatively, if the audio is lacking some frequencies, these can be brought back in. EQ can also be used as an effect to emulate old sounding equipment, walkie talkie speakers, or other effects. While Premiere has a multitude of EQ effects, the reference photos will be of the “Graphic EQ (10 Band)” and the “Parametric EQ” effects.
Graphic EQ has a specific number of bands that each have a fixed frequency. If a slider is moved down, it cuts that frequency; if it is moved up, it boosts that frequency. Premiere’s Graphic EQ allows for a custom volume range, as well as 20-band and 30-band versions. Graphic EQ is useful in that it forces you to have a starting point when deciding what frequencies to add or subtract.
Parametric EQ has a similar base function to that of Graphic EQ. It cuts and boosts particular frequencies. Each individual band of Parametric EQ is more functional, so it requires a level of nuance. One parameter that can be controlled is “Frequency”. This affects the frequency at which the band is centered. Another parameter is “Gain”. This affects how much the frequency is cut or boosted. A third parameter is “Q”. This affects the width of the band. A lower Q means that the band is wider, as shown in these images.
Another two types of EQ are called Low Pass and High Pass Filters. Low Pass Filters cut out high frequencies; they let the lows pass through. On the other side, High Pass Filters cut out low frequencies; they let the highs pass through. Low Pass Filters are good for cutting high-end noise. High Pass Filters are good for cutting out low-end rumble that may be present if a windscreen wasn’t used.
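For intuition, a low-pass filter can be sketched in a few lines of code as a one-pole smoother: slow (low-frequency) changes get through, fast ones are damped. Subtracting its output from the input gives the matching high-pass. This is a toy 6 dB/octave filter for illustration, not what Premiere uses internally:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000):
    """Simple one-pole low-pass filter: each output moves a fraction
    of the way toward the input, so fast wiggles get averaged out."""
    alpha = 1 - math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def one_pole_highpass(samples, cutoff_hz, sample_rate=48000):
    """High-pass: the input minus everything the low-pass kept."""
    low = one_pole_lowpass(samples, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(samples, low)]
```

Running the high-pass with a cutoff around 80 Hz over a voice recording plays the role described above: stripping out low-end rumble while leaving the voice intact.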
To finish your audio, you have to make sure that the levels are correct. When checking levels, the output meter on the right side of the Premiere window should peak, or hit its maximum, around five or six decibels below zero. It is also important to make sure the focal point of the audio is easily heard. If there is music or sound effects underneath the voice-over, make sure that the voice is the most prominent element in the mix.
Most importantly, take a break from your work. The more you listen to something, the more you rationalize it. The sections that will sound weird to the audience will sound normal to you. So, take your time and pay attention to the details.
In typical photography or video, creators have a lot of control over what their camera picks up. A key aspect that can help or hurt a final image is exposure.
With the field of virtual reality and 360 video expanding rapidly, it is important for budding videographers and photographers in this area to take care to properly expose the images they will capture.
For this tutorial, I will mostly talk about controlling exposure with the RICOH Theta V. This model is RICOH’s newest 360 camera and allows users to capture high-resolution photos and videos in 4K/30 fps. The Theta V retails for $399.95 and has some accessories, such as a spatial audio microphone, that can be bought separately. I believe that this camera is easy enough for beginners to use, but has enough quality for advanced videographers to use as well.
A videographer could manually control when the Theta V records, but he or she should download the mobile app that works with the camera. The app, called “Ricoh Theta S,” can be found for free on the App Store and the Google Play Store. From the app, users can not only turn the camera on and off, but also preview what the camera sees and adjust ISO, shutter speed and white balance as needed. Theta V users should note that in order to adjust these settings, the camera must be connected to the phone over Wi-Fi.
To adjust the exposure, a camera operator needs to open the preview viewer within the app. From there, users can press “EV” in the lower left-hand corner and use the slider to allow more or less light into the camera. The shutter speed, ISO and white balance can also be fine-tuned from this screen.
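It helps to know what the EV slider actually does: each whole step is one “stop,” and every stop doubles or halves the amount of light the camera records. A quick sketch of that relationship:

```python
def exposure_factor(ev):
    """Relative amount of light recorded at a given EV compensation:
    each stop up doubles the light, each stop down halves it."""
    return 2 ** ev

print(exposure_factor(1))   # +1 EV -> 2x the light
print(exposure_factor(-2))  # -2 EV -> 0.25x the light
```

This is why small slider moves make such a visible difference: the scale is exponential, not linear.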
According to 360Rumors, “the correct exposure is the one that shows a real world object at the same ‘brightness’ as in real life.” The hyperlinked article, which also gives a general overview of exposure itself, notes that users have limited exposure control with most 360 cameras on the market, but these cameras are also less susceptible to getting the wrong exposure because they can evaluate the whole scene around the camera.
As someone who has just learned how to create 360 videos, I did not take exposure into account as I should have until I was presented with a challenge in a recent video that I made with classmate Paidin Dermody.
Last month, we decided to film the University of Kentucky’s MacAdam Student Observatory as a 360 video project. The facility is only open on clear nights, which does not leave much light to work with in the first place. Some white fluorescent lights are available within the observatory, but most of the research and study done there happens under red lights, which allow a person to preserve their night vision.
We wanted to be able to authentically capture this uncommon lighting, as we are both journalists and wanted to show what a typical experience is like in the observatory. Thus, we had to learn about controlling the settings on our camera while in the field.
Check out our final project below. It can also be viewed on kykernel.com.
My first go at a time lapse in 360 degrees was an adventure, so I am going to lay out two ways you can go about creating your own. Your first option (albeit probably the wrong option, and the one I used for my first try) is to leave the camera running for 25 to 30 minutes, or however long you need and however long the camera will allow, and then speed that file up directly in Premiere Pro. Your second option is to create the time lapse directly through the camera and the company’s apps. The latter is probably your best bet, but I will explain both processes.
Long Exposure, Edit in Premiere Pro:
Step One- Film
Set up your 360-degree camera. I used a Ricoh Theta V when creating my time lapse, and it worked great; the quality was extremely good. This camera will only record continuously for up to 25 minutes, but don’t worry if you need a longer take. You can edit the clips together in Premiere Pro and they will play back seamlessly, without anyone noticing that it was filmed in pieces.
Step Two- From Camera to Computer
This part is a bit tricky and is what gave me the most trouble. The file from a long recording is too large for the normal transfer process to handle, so when it gets imported to your computer it will be too big to open and will likely show an error message. Luckily, there is a fix for this. Assuming you have the Ricoh Theta app to stitch your videos, there is an additional download that will transfer files larger than 4 GB. The file transfer app can be found at https://theta360.com/en/support/download/ (you can also download the Ricoh Theta app there). After you transfer the file and are able to open it, continue with your normal stitching procedure.
Step Three- Add to Timeline, Change Speed
The next step is to add your stitched video to your timeline in Premiere Pro. Once you do this, right-click the clip in your timeline and choose the option that says “Speed/Duration.” You will then see a box where you can change the number for “Speed %.” The normal speed is 100%. If you want to turn your 30-minute video clip into 30 seconds, you are going to need to raise this to 6,000% (a 60x speed-up). You can play around with the exact percentage until you get it how you like it.
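The percentage is just the ratio of the original duration to the target duration. A quick sanity check of the arithmetic:

```python
def speed_percent(original_seconds, target_seconds):
    """Premiere 'Speed %' value needed to squeeze a clip of one
    duration into a shorter target duration."""
    return original_seconds / target_seconds * 100

# A 30-minute clip compressed down to 30 seconds:
print(speed_percent(30 * 60, 30))  # 6000.0 (%)
```

The same formula works in reverse for slow motion: a target longer than the original gives a value under 100%.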
And that’s it, you’ve created a time lapse, albeit the harder and slower way.
Timed Photo Exposures in Ricoh Theta Apps:
In the world we live in today, there is always an easier and faster way of doing things. Lucky for us, the Ricoh Theta App combined with the Theta+ app allows you to create a time lapse directly in its system.
The goal of virtual reality is to create an immersive experience, and too often the audio is a forgotten component of that goal. Since having good audio is so crucial (or at least bad audio is so detrimental), the best practice is to use a third-party mic, like a lav mic, to record the audio from interviews that will be played over the video.
Adding this audio to the video is easy when the interview is done off screen, but it can get tricky when you are redubbing audio that the 360 camera has already picked up. Thankfully Adobe Premiere Pro has some built-in tools to help with this process.
The first step in merging the recorded audio clip with the video clip is, like with any project, making sure that all of your material is organized and easily accessible. This will help with the merging of the two clips, especially if you are matching up numerous video clips with one audio file, which is what is being done in the examples laid out below.
Once you have the audio and video clips ready to go you can start by selecting both files by holding the Command button. After they are selected, you can right click to bring up options and then select “Merge Clips.”
This can also be done using the drop-down menus, as shown below.
After you do this, a prompt like the one below will show up. Be sure to set the synchronize point to audio, so that Premiere knows to process the two clips and match them up at the right spot. You should also make sure that “Remove Audio From AV Clip” is selected, so that the merged clip keeps only the higher-quality audio from the mic.
At this point, you are essentially done as long as the audio clip and video clips you started with were the same length. But just in case you are dealing with multiple shots and only one audio recording, there is still more that needs to be done.
If you are matching up multiple video clips to a single audio clip, you will have to repeat the process above for each individual video clip. You will also have to trim each resulting merged file, because it will be as long as the full audio clip, with the video appearing only at its matching spot.
You will see a lot of nothing until you trim the clip down to just the part where the video lines up with the audio.
You want this:
Now you have a video clip with the added benefit of better audio from a mic.