
XAVC-I or XAVC-L: which to choose?

THE XAVC CODEC FAMILY

The XAVC family of codecs was introduced by Sony back in 2014. Until recently all flavours of XAVC were based on H264 compression; the newer XAVC-HS versions use H265. The most commonly used versions of XAVC are the XAVC-I and XAVC-L codecs. Both have been around for a while now and are well tried and well tested.

XAVC-I

XAVC-I is a very good intra frame codec where each frame is individually encoded. It is being used for Netflix shows, it has been used for broadcast TV for many years, and there are thousands and thousands of hours of great content that have been shot with XAVC-I without any issues. Most of the in-flight shots in Top Gun: Maverick were shot using XAVC-I. It is unusual to find visible artefacts in XAVC-I unless you go to a lot of effort to find them, but it is a high compression codec so it will never be entirely artefact free. The video below compares XAVC-I with ProRes HQ and, as you can see, there is very little difference between the two, even after several encoding passes.


 

XAVC-L

XAVC-L is the long GoP version of XAVC. Long GoP (Group of Pictures) codecs fully encode a starting frame and then, for the next group of frames (typically 12 or more), only store the differences between frames until the next fully encoded frame at the start of the next group. They record the changes between frames using techniques such as motion prediction and motion vectors which, rather than recording new pixels, move existing pixels from the fully encoded frame through the subsequent frames when there is movement in the shot. Do note that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K XAVC-L is 8 bit (while XAVC-I is 10 bit).
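If you want to see this structure for yourself, a quick way is to list the frame types in a clip. The sketch below is only a minimal example, assuming ffprobe (part of ffmpeg) is installed; "clip.mp4" is just a placeholder for one of your own files (camera originals will usually be MXF, which works the same way).

```python
# Minimal sketch: list the frame types in a clip with ffprobe to see its GOP
# structure. Assumes ffprobe (part of ffmpeg) is installed; "clip.mp4" is just
# a placeholder file name.
import subprocess
from collections import Counter

def frame_types(path):
    """Return the sequence of frame types (I, P, B) for the first video stream."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True)
    return result.stdout.split()

types = frame_types("clip.mp4")
print("".join(types[:60]))   # long GoP: something like IPPPPPPPPPPPI...
                             # all-intra (XAVC-I): IIIIIIIII...
print(Counter(types))        # rough count of I, P and B frames
```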

Performance and Efficiency.

Long GoP codecs can be very efficient when there is little motion in the footage. It is generally considered that H264 long GoP is around 2.5x more efficient than the I frame version, and this is why the bit rate of XAVC-I is around 2.5x higher than XAVC-L: for most types of shots both will perform similarly. If there is very little motion and the bulk of the scene being shot is largely static, there will be situations where XAVC-L can perform better than XAVC-I.
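The practical consequence of the lower bit rate is card and disk space. As a rough worked example, assuming illustrative UHD bit rates of around 240 Mb/s for XAVC-I and 100 Mb/s for XAVC-L (the exact figures depend on camera, resolution and frame rate, so check your manual):

```python
# Rough storage-per-hour arithmetic. The 240 Mb/s and 100 Mb/s figures are only
# illustrative UHD bit rates; real values depend on camera, resolution and frame rate.
def gigabytes_per_hour(mbps):
    return mbps * 3600 / 8 / 1000   # Mb/s -> MB per hour -> GB per hour

for name, rate in [("XAVC-I at ~240 Mb/s", 240), ("XAVC-L at ~100 Mb/s", 100)]:
    print(f"{name}: about {gigabytes_per_hour(rate):.0f} GB per hour")
# Roughly 108 GB per hour versus 45 GB per hour: the space saving is the whole
# point of long GoP, but it only exists because the bit rate is much lower.
```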

Motion Artefacts.

BUT as soon as you add a lot of motion or a lot of extra noise (which looks like motion to a long GoP codec), long GoP codecs struggle, as they don't typically have sufficiently high bit rates to deal with complex motion without some loss of image quality. Let's face it, the primary reason behind the use of long GoP encoding is to save space, and that is done by decreasing the bit rate. So long GoP codecs generally have much lower bit rates so that they actually provide those space savings, but that introduces challenges for the codec. Shots such as cars moving to the left while the camera pans right are difficult for a long GoP codec to process, as almost everything is different from frame to frame, including entirely new background information that was hidden behind the cars in one frame and becomes visible in the next. Wobbly handheld footage, crowds of moving people, fields of crops blowing in the wind, rippling water and flocks of birds are all very challenging and will often exhibit visible artefacts in a lower bit rate long GoP codec that you won't ever get in the higher bit rate I frame version.

Concatenation.
 
A further issue is concatenation. The artefacts that occur in long GoP codecs often move in the opposite direction to the object that is actually moving in the shot. So, when you have to re-encode the footage at the end of an edit or for distribution, the complexity of the motion in the footage increases and each successive encode will be progressively worse than the one before. This is a very big concern for broadcasters, or for anyone whose material may go through multiple compression passes using long GoP codecs such as H264 or H265.
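If you want to see concatenation for yourself, one simple approach is to re-encode the same clip several times and compare the generations. The sketch below is only an illustration, assuming ffmpeg with libx264 is installed; the file names, the 35 Mb/s bit rate and the 12-frame GOP are stand-ins for whatever your real delivery chain uses.

```python
# Illustrative generation-loss test: re-encode a clip several times with a long
# GoP codec and compare the generations by eye (or with a quality metric).
# Assumes ffmpeg with libx264 is installed; names and settings are placeholders.
import subprocess

src = "generation_0.mp4"                 # placeholder for your starting clip
for gen in range(1, 6):
    dst = f"generation_{gen}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-b:v", "35M",   # long GoP H264 at an illustrative bit rate
         "-g", "12",                          # 12 frame GOP, similar to many camera codecs
         "-an", dst],
        check=True)
    src = dst                            # each pass starts from the previous generation
# Artefacts that are invisible in generation 1 are usually obvious by generation
# 4 or 5, especially on complex motion such as water or handheld footage.
```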

Quality depends on the motion.

So, when things are just right and the scene suits XAVC-L it will perform well, and it might show marginally fewer artefacts than XAVC-I; but those artefacts that do exist in XAVC-I are going to be pretty much invisible in the majority of normal situations. When there is complex motion, however, XAVC-L can produce visible artefacts. And it is this uncertainty that is a big issue for many, as you cannot easily predict when XAVC-L might struggle. Meanwhile XAVC-I will always be consistently good. Use XAVC-I and you never need to worry about motion or motion artefacts; your footage will be consistently good no matter what you shoot.

Broadcasters and organisations such as Netflix spend a lot of time and money testing codecs to make sure they meet the standards they need. XAVC-I is almost universally accepted as a main acquisition codec, while XAVC-L is much less widely accepted. You can use XAVC-L if you wish, and it can be beneficial if you need to save card or disk space. But be aware of its limitations and avoid it if you are shooting handheld or shooting anything with lots of motion, especially water, blowing leaves, crowds etc. Also be aware that on the F5/F55, FS5, FS7, FX6 and FX9, in UHD or 4K XAVC-L is 8 bit while XAVC-I is 10 bit. That alone would be a good reason NOT to choose XAVC-L.

HELP! There is banding in my footage – or is there?

I’ve written about this before, but it’s worth bringing up again as I keep coming across people that are convinced there is a banding issue with their camera or their footage. Most commonly they have shot a clear blue sky or a plain wall and when they start to edit or grade their content they see banding in the footage.

Most of the cameras on the market today have good quality 10 bit codecs and there is no reason why you should ever see banding in a 10 bit recording. It is actually fairly uncommon in 8 bit recordings too, unless they are very heavily compressed or a lot of noise reduction has been used.

So – why are these people seeing banding in their footage? 

99% of the time it is because of their monitoring. 

Don't be at all surprised if you see banding in footage when you view the content on a computer monitor or other monitor connected via a computer's own HDMI port or a graphics card's HDMI port. When monitoring this way it is very, very common to see banding that isn't really there. If this is what you are using, there is no way to be sure whether any banding you see is real or not (about the only exception to this is the screen of the new M1 laptops). There are so many level translations between the colourspace and bit depth of the source video files, the computer desktop, the HDMI output and the monitor's setup that banding is often introduced somewhere in the chain. Very often the source clips will be 10 bit YCbCr, the computer might be using a 16 bit or 24 bit colour mode, and then the HDMI output might only be 8 bit RGB. Plus the gamma of the monitor may be badly matched and the monitor itself of unknown quality.
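To see why squeezing a subtle gradient through an 8 bit stage produces visible bands, here is a small illustration. It is just a numpy sketch of the bit depth arithmetic, not a model of any particular monitoring chain:

```python
# Why an 8 bit stage in the monitoring chain can create bands that are not in
# the 10 bit file: a subtle gradient simply has far fewer levels available to it.
# Pure numpy sketch of the arithmetic, not a model of any specific monitor chain.
import numpy as np

ramp = np.linspace(0.2, 0.3, 1920)        # a subtle, sky-like gradient across the frame
as_10bit = np.round(ramp * 1023)          # levels available in a 10 bit recording
as_8bit  = np.round(ramp * 255)           # the same ramp after an 8 bit stage

print(len(np.unique(as_10bit)), "distinct levels at 10 bit")   # roughly 100
print(len(np.unique(as_8bit)),  "distinct levels at 8 bit")    # roughly 25
# Fewer levels over the same brightness range means each band is wider and far
# easier to see, which is exactly what shows up in clear skies and plain walls.
```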

For a true assessment of whether footage has banding or not you want a proper, good quality video monitor connected via a proper video card such as a Blackmagic DeckLink, or a device such as a Blackmagic UltraStudio. When using a proper video card (not a graphics card) you bypass all the computer processing and go straight from the source content to the correct output. This way you go from the 10 bit YCbCr source direct to a 10 bit YCbCr output, so there are no extra conversion and translation stages adding phantom artefacts to your footage.

If you are seeing banding and want to know whether it is in the original footage or not, try this: take the footage into your grading software and, using a paused (still) frame, enlarge the clip so that the area with banding fills the monitor and note exactly where the edges of the bands are. Then slowly change the contrast of the clip. If the positions of the band edges move, the banding is not in the original footage and something else is causing it. If they do not move, then the banding is baked into the original material.
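The logic behind that test can be sketched in a few lines of numpy: banding that is baked into the source stays at the same pixel positions when you change the contrast, while banding created by an 8 bit display stage moves. This is only an illustration of the principle, with an idealised 8 bit display path standing in for a real monitoring chain:

```python
# Sketch of why the contrast test works: baked-in banding keeps its band edges
# at the same pixel positions after a contrast change, banding introduced by an
# 8 bit display stage does not. Idealised illustration only.
import numpy as np

def display_8bit(img):
    """Simulate an 8 bit monitoring path."""
    return np.round(np.clip(img, 0.0, 1.0) * 255)

def band_edges(displayed):
    """Pixel positions where the displayed level steps to a new band."""
    return np.nonzero(np.diff(displayed))[0]

ramp  = np.linspace(0.2, 0.3, 1920)          # smooth, effectively band-free source
baked = np.round(ramp * 255) / 255           # source with 8 bit banding baked in

for name, src in [("clean source", ramp), ("baked-in banding", baked)]:
    before = band_edges(display_8bit(src))
    after  = band_edges(display_8bit((src - 0.25) * 1.3 + 0.25))   # small contrast change
    moved  = not np.array_equal(before, after)
    print(f"{name}: band edges {'move' if moved else 'stay put'} after the contrast change")
```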