This tripped me up recently and I really should know better.
Don’t mix wireless and cabled microphones with differing amounts of latency. If you do, you may end up with a nasty, difficult-to-remove echo or phase issues in your audio.
Digital + Analog don’t mix well.
In my particular case I was using a couple of Sony UWP-D wireless microphones to mic up two of the three members of a discussion panel. For the third member I had planned to use another UWP-D, but that microphone became unavailable at the last minute, so instead I used a lower cost digital microphone that works on the 2.5 GHz band. There is absolutely nothing fundamentally wrong with this lower cost microphone, but the digital processing and transmission adds a very slight delay to the audio.
The Sony UWP-Ds are extremely low latency (delay) microphones and the audio arrives at the camera almost instantly. However, most of the lower cost digital microphones have a very slight delay. That delay may be one frame or less, but there is still a delay, so the audio from the digital microphone arrives at the camera slightly late. If this is the only microphone you are using, this isn’t an issue. But if you mix a very low latency microphone with one that has a slight delay, and both mics pick up any of the same sounds in the background, there will be an echo or possibly a phase issue.
As the delay is almost never exactly one frame, this can be difficult to resolve in most normal video post production suites, where you can only shift things in one-frame increments.
Phase Issues:
Phase issues occur when the audio from one source arrives very slightly out of sync with the other, so that one source cancels out certain frequencies of the other when the two are mixed together. This can make the audio sound thin or give it a reduced frequency response.
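To see why a small inter-mic delay notches out specific frequencies (comb filtering), here is a minimal Python sketch. The 1 ms delay and 48 kHz sample rate are illustrative values, not measurements of any particular microphone:

```python
import math

SAMPLE_RATE = 48_000  # Hz, a common production audio sample rate

def mixed_peak(freq_hz, delay_s, n=SAMPLE_RATE):
    """Peak level when a tone is mixed with a delayed copy of itself,
    simulating two mics picking up the same sound with a latency offset."""
    return max(
        abs(math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            + math.sin(2 * math.pi * freq_hz * (t / SAMPLE_RATE - delay_s)))
        for t in range(n)
    )

delay = 0.001  # 1 ms offset between the two mics (illustrative)
# A delay d cancels frequencies at (2k+1)/(2d): 500 Hz, 1500 Hz, ... for 1 ms
print(mixed_peak(500, delay))   # ~0: this frequency is cancelled out
print(mixed_peak(1000, delay))  # ~2: this frequency is reinforced
```

The nulls fall at regular frequency intervals, which is why the mixed result sounds thin rather than simply quieter.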
So… don’t mix different types of digital wireless microphones and don’t mix lower cost digital microphones with more expensive low latency microphones. And when you are checking and monitoring your audio listen to a full mix of all your audio channels. If you monitor the channels separately the echo or any phasing issues might not be heard.
In a one-person interview situation, you could just choose Ch 1 (boom) or Ch 2 (2.5 GHz radio mic), could you not? There would be no need to mix them.
IMHO… never mix a lav mic with a boom mic, otherwise you are going to introduce reverb and comb filtering. If you need to transition to the lav mic because a subject moves out of the range of the boom mic, cleanly cross-fade between the mics; don’t mix them together. I know some editors will mix lav and boom together in post for a preferred sound (not my technique). If you work with an editor like that and use any digital wireless system, you absolutely need to inform the editor of any signal delays so they can account for them in post.
Very common to mix a lav and boom. The lav will always give you that bass-heavy chest-wall audio, while a boom will give more natural sounding audio with more sibilance. If the delay is the same for both mics and the boom is in close to the subject, there will rarely be any significant issue with this.
Yes, if the audio is completely separate and the separate sources will never be mixed, then small delays are of no consequence.
A lot of editors will quite often use a mix between the mics in the final sweetening. A lav and a decent boom shotgun have quite different characters, especially when recording in hard-walled spaces with lots of glass and tiled floors. That’s another topic, as those spaces can add secondary echoes on top of any mic latency disparities.
I wish you could set a manual phase delay in camera (FX9).
Are there cameras that can? And if you get it wrong you could have a bigger mess to deal with later.
Not that I know of. But I know my Sound Devices mixer can.
Perhaps it’s better left to audio mixers. Camera operators have enough to do as it is and really need to be concentrating on images not sound. If we start putting too many advanced audio capabilities into cameras it will be even harder to get the budget for a sound recordist.
Great article and something I’ve never thought about. However you wrote:
“As the delay is almost never exactly 1 frame this can be difficult to resolve in most normal video post production suites where you can only shift things in 1 frame increments.”
In Premiere Pro, you can edit in Audio Time Units and realign the audio track down to the individual sample. Still not a trivial matter, but slipping and listening for phase differences can be accomplished at sub-frame resolution.
Yes, if you know how to do it and you have the time you can do it, but it isn’t trivial and most would have no idea how to do it. And it assumes you will be doing more than just an edit.
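For anyone curious what that sample-level realignment amounts to under the hood: you find the lag that best correlates the late track against the reference track, then slip the late track by that many samples. A rough Python/NumPy sketch, where the 170-sample synthetic delay is an illustrative value (roughly the 3.5 ms of a typical 2.4 GHz mic):

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz

def estimate_delay_samples(reference, delayed, max_shift=4800):
    """Estimate how many samples `delayed` lags behind `reference`
    by brute-force cross-correlation (searching up to 0.1 s here)."""
    best_shift, best_score = 0, -np.inf
    for shift in range(max_shift + 1):
        score = np.dot(reference[: len(reference) - shift],
                       delayed[shift: len(reference)])
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# Synthetic example: one second of noise, delayed by 170 samples
rng = np.random.default_rng(0)
clean = rng.standard_normal(SAMPLE_RATE)
lagged = np.concatenate([np.zeros(170), clean])[:SAMPLE_RATE]

shift = estimate_delay_samples(clean, lagged)
print(shift, shift / SAMPLE_RATE * 1000)  # delay in samples and ms
aligned = lagged[shift:]  # slip the late track earlier by `shift` samples
```

Tools like Auto-Align Post automate essentially this search (with far more sophistication) across whole takes.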
There are a lot of issues here that deserve a fuller discussion. Yes, you should absolutely understand what delays there are in your signal path, the impact of such delays, and how to correct such delays when they occur. There are many situations that will require mixed delay setups. They are completely workable if you understand your system and how to accommodate for the delays.
Yes, lots of IF’s. If you are in control of the post production, If you have a suitable system, If you have the time, If you know how to do it, If you haven’t pre-mixed the audio. Best is to not have them in the first place.
Also – use an audio recorder so you have the mics on separate tracks. Problem solved.
Off topic…
Would love a write-up about the FX9 v4.00 firmware update and how to import LUTs from the cloud. Just updated and was mesmerized that the custom buttons can light up; I didn’t even know the camera could do that. Feels like an early Christmas light show… lol.
Not necessarily. If your mics are out of phase or sync, you will still need to re-sync them in post so that every mic is in sync with every other, which can be a time-consuming operation, especially if there is any timing variation, which can happen with 2.5 GHz mics.
Totally agree sir!
Sony UWP-D is about 0.24 ms delay
Rode Wireless GO II is about 3.5-4 ms delay (2.4 GHz)
Sennheiser AVX is about 19 ms delay (1.9 GHz)
I can get away with using the Rode wireless sets along with an XLR shotgun. But when I tried to use the Sennheiser AVX system, the results were totally unacceptable due to the approximately half-frame echo. On editing jobs where I have encountered this problem with the Sennheiser AVX wireless, I have dropped the camera files into Vegas Pro, where you can disable “Quantize to Frames”, which lets you slip and slide the audio track with sub-frame precision to line it up with cabled mics, UWP-Ds or the like. I then render out a synced .WAV file, which I can throw into Resolve or any other NLE. Once it’s lined up with the start of the video, it’s just a matter of linking the clips so you don’t lose sync when you start cutting tracks up. You can get away with the AVXs when working with just them, because a half-frame delay will get through most news and current affairs QA checks.
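For context, those quoted delays can be converted into audio samples and video frames with a little arithmetic; a quick sketch assuming 48 kHz audio and 25 fps (substitute your own frame rate):

```python
SAMPLE_RATE = 48_000  # Hz
FPS = 25              # PAL frame rate; substitute 29.97, 23.976, etc.

# Approximate latencies quoted above (the Rode figure is the
# midpoint of the 3.5-4 ms range)
latencies_ms = {
    "Sony UWP-D": 0.24,
    "Rode Wireless GO II": 3.75,
    "Sennheiser AVX": 19.0,
}

for mic, ms in latencies_ms.items():
    samples = ms / 1000 * SAMPLE_RATE
    frames = ms / 1000 * FPS
    print(f"{mic}: {samples:.0f} samples = {frames:.3f} frames")
```

The AVX’s 19 ms works out to 0.475 frames at 25 fps, which matches the "approx 1/2 frame echo" described above, while the UWP-D’s 0.24 ms is only about a dozen samples.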
Interesting. I usually work with an audio person, but I have a one-man-band, sit-down-interview-only shoot coming up. I have an AVX, as it’s idiot proof and I love how the receiver just plugs into the camera, and I hang a 416 from a boom/stand. I guess editors in the past have just used one track or the other, or had to jump through some hoops to mix them? I don’t really want to change, as the AVX is just so easy to deal with, and frankly if they don’t pay for audio, I’m not going to make it more of a concern to me than my real job. But it’s nice to give them a choice, I guess.
Phasing issues are a pain. I used to beg camera operators to PLEASE genlock the cameras if they were shooting four-person interviews with two microport receivers on each camera…
Now most of our shows are finished in Pro Tools, and I know that plugins like Auto-Align Post 2 have really improved both the sound quality and the mood of our sound mixers 😉
Whereas aligning the phase of multiple sources used to be a very manual and tedious process, it can now be fixed automatically in seconds.
Just found this v useful article as I was about to buy a Sony ECM-W3 digital mic set to use with my FX6. Many of the solutions in the comments, e.g. being able to slip audio tracks by less than a frame, ignore Alister’s main point: if digital mic B picks up any of the audio being recorded by analogue mic A, and you then slip the B track to synchronise its audio, that’s when you create the echo on the A audio. In most situations where I want more than 2 audio channels, e.g. a 1 + 2 interview, this will be very hard to avoid. Thanks for the warning Alister, I’ll have to think some more.