Tag Archives: resolve

Pixels should not be confused with resolution.

Let me start with the definition of “resolution” as given by the Oxford English Dictionary:

“The smallest interval measurable by a telescope or other scientific instrument; the resolving power.
  • the degree of detail visible in a photographic or television image.”
     

OK, so that seems clear enough – measurable or visible degree of detail.

Expanding that a little further: when we talk about the resolution of an image file such as a JPEG, TIFF etc., or perhaps an RGB or YCbCr* video frame, a 4K image will normally mean an image 4K pixels wide. It will have 4K of red, 4K of green and 4K of blue values across each row – three lots of 4K stacked on top of each other – so it is capable of containing any colour or combination of colours at any of the 4K points or pixels across the frame. In effect a 4K wide image will have 12K of values across each row.
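
To put numbers on that, here is a minimal sketch in Python (using NumPy, with UHD 3840 x 2160 standing in for "4K" – the exact width is just an example) showing that an RGB frame carries a full-resolution value for every colour at every pixel:

import numpy as np

# A UHD (3840 x 2160) RGB frame: one full-resolution plane per colour.
width, height = 3840, 2160
frame = np.zeros((height, width, 3), dtype=np.uint16)

# Every pixel position carries a red, a green and a blue value,
# so each row holds 3 x 3840 = 11,520 individual colour values.
print(frame.shape)     # (2160, 3840, 3)
print(frame[0].size)   # 11520 values across one row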

Now we know what resolution means and how it is normally used when describing an image, what does it mean when we say a camera has an 8K sensor? Generally this statement means that there will be 8K of pixels across the sensor. In the case of a single sensor used to make a colour image, some of these pixels will be for red, some for green and some for blue (or some other arrangement of a mix of colour and clear pixels). But does this also mean that an 8K sensor will be able to resolve 8K of measurable or visible detail? No, it does not.



Typically a single sensor that uses a colour filter array (CFA) won’t be able to resolve fine details and textures anywhere close to the number of horizontal pixels. So, to say that a camera with a single 8K or 4K colour sensor is a camera that can resolve an 8K or 4K image will almost certainly be a lie. 

Would it be correct to call that 4K colour sensor a 4K resolution sensor? In my opinion no – it is not, because if we use a Bayer sensor as an example it only has the equivalent of 2K of green, 1K of red and 1K of blue pixels per 4K wide row (each individual row actually alternates green with either red or blue, so across the sensor half the pixels are green and a quarter each are red and blue). If we compare that to a 4K image such as a JPEG, the JPEG image will be made up of 4K of green, 4K of red and 4K of blue values across each row. It has the ability to resolve any colour or combination of colours with 4K precision. Meanwhile that 4K Bayer sensor cannot; it simply doesn't have enough pixels to sample each colour at 4K, in fact it doesn't even get close.
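
To make the pixel counting concrete, here is a quick back-of-the-envelope sketch (illustrative only, using a 4096 x 2160 sensor as the example) counting the colour samples a Bayer CFA actually provides:

# A Bayer colour filter array is 50% green, 25% red and 25% blue.
width, height = 4096, 2160
total_photosites = width * height
green = total_photosites // 2
red = total_photosites // 4
blue = total_photosites // 4

# Averaged per row: 2048 green, 1024 red and 1024 blue samples,
# versus 4096 of each colour in a true 4K RGB image.
print(green // height, red // height, blue // height)   # 2048 1024 1024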

Clever image processing (demosaicing) can take the output from a 4K Bayer sensor and use data from the differing pixels to calculate, estimate or guess what the brightness and colour are at each point across the whole sensor, but the actual measurable luminance resolution will typically come out at around 0.7x the pixel count, and the chroma resolution will be even lower. So if we use the dictionary definition of resolution – the measured or visible detail the sensor can resolve – we can expect a camera with a 4K pixel wide Bayer sensor to have a resolution of around 2.8K. Your 4K camera is unlikely to be able to create an image that can truly be said to be 4K resolution.
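
Putting numbers to that rule of thumb (just the arithmetic behind the 2.8K figure, not a measurement):

# Measurable luma resolution of a Bayer sensor is roughly 0.7x the
# horizontal pixel count - the figure used in the article.
pixels_across = 4096
resolved_luma = 0.7 * pixels_across
print(round(resolved_luma))   # ~2867 pixels, i.e. roughly "2.8K"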

But the camera manufacturers don't care about this. They want you to believe that your 4K camera is a 4K resolution camera. While most are honest enough not to claim that the camera can resolve 4K, they are also perfectly happy to let everyone assume that this is what the camera can do. It is also fair to say that most 4K Bayer cameras perform similarly, so your 4K camera will resolve broadly the same as every other 4K Bayer camera, and it will be much higher resolution than most HD cameras. But can it resolve 4K? No, it cannot.

The inconvenient truth that Bayer sensors don't resolve anywhere near the pixel count is why we see 6K and 8K sensors becoming more and more popular, as these sensors can deliver visibly sharper, more detailed 4K footage than a camera with a 4K Bayer sensor can. In a 4K project an 8K camera will deliver full 4K luma resolution and chroma resolution that is not far behind, and as a result your 4K film will tend to have finer and more true to life textures. Of course all of this is subject to other factors such as lens choice and how the signal from the camera is processed, but like for like an 8K pixel camera can bring real, tangible benefits to a lot of 4K projects compared to a 4K pixel camera.

At the same time we are seeing the emergence of alternative colour filter patterns to the tried and trusted Bayer pattern – perhaps adding white (or clear) pixels for greater sensitivity, perhaps arranging the pixels in novel and different ways. This muddies the water still further, as you shouldn't directly compare sensors with different colour filter arrays based on the specification sheet alone. When you start adding more alternately coloured pixels into the array you force the spacing between each individual colour or luma sample to increase. So you can add more pixels but might not actually gain extra resolution; in fact the resolution might actually go down. As a result 12K of one pattern type cannot be assumed to be better than 8K of another type, and vice versa. It is only through empirical testing that you can be sure of what any particular CFA layout can actually deliver. It is unsafe to rely on a specification sheet that simply quotes the number of pixels. And it is almost unheard of for camera manufacturers to publish verifiable resolution tests these days… I wonder why that is?


* YCbCr or component video can be recorded in a number of ways. A full 4:4:4 4K YCbCr image will have 4K of Y (luma or brightness), a full 4K of the colour difference blue (Cb) and a full 4K of the colour difference red (Cr). The colour difference values are a more efficient way to encode the colour data, so the data takes less room, but just like RGB there are 3 samples for each pixel within the image. Within a post production workflow, if you work in YCbCr the image will normally be processed and handled as 4:4:4.

For further space savings many YCbCr systems can, if desired, subsample the chroma; this is when we see terms such as 4:2:2. The first digit refers to the luma and the 4 means every pixel has a discrete luma value. In 4:2:2 the 2:2 means that Cb and Cr are only sampled on every other pixel along each line, so the horizontal chroma resolution is halved; this saves space (in 4:2:0 the chroma is also halved vertically). This is generally transparent to the viewer as our eyes have lower chroma resolution than luma.
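
As a simple illustration of where the saving comes from, here is a sketch counting the samples on one line of a 3840 pixel wide image (the width is chosen purely as an example):

width = 3840

# 4:4:4 - every pixel carries Y, Cb and Cr.
samples_444 = width + width + width

# 4:2:2 - every pixel carries Y, but Cb and Cr only on every other pixel.
samples_422 = width + width // 2 + width // 2

print(samples_444)   # 11520 samples per line
print(samples_422)   # 7680 samples per line - a third less data before compression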

But it is important to understand that 4:2:2, 4:2:0 etc. are normally only used for recording systems in cameras, where saving storage space is considered paramount, or in broadcast and distribution systems and codecs where reducing the bandwidth required can be necessary. SDI and HDMI signals are typically passed as 4:2:2. The rest of the time YCbCr is normally 4:4:4. If we compare 4K 4:2:2 YCbCr, which is 4K x 2K x 2K, to a 4K Bayer sensor, which has roughly 2K of green, 1K of red and 1K of blue samples per row, it should be obvious that even after processing and reconstruction the image derived from a 4K Bayer sensor won't match or exceed the luma and chroma resolutions that can be passed via 4:2:2 SDI or recorded by a 4:2:2 codec. What you really want is a 6K or, better still, an 8K Bayer sensor.

Copying a LUT from a Sony FX camera into DaVinci Resolve.

A question that comes up quite a bit is: how do I get the LUT I have been using in the camera into DaVinci Resolve?

There are two parts to this. The first is how to get the LUT you are using in the camera out of the camera – perhaps you want to export the s709 LUT, or perhaps some other LUT.

To export a LUT from the camera you can use the embedded LUT option that is available when using the Cine EI mode. 
If you turn on “Embedded LUT” on the camera and record a clip the camera will save the LUT on the SD card under:

FX3/FX30 – private – M4ROOT – GENERAL – LUT folder.

FX6/FX9 – private – XDROOT – GENERAL – LUT folder.

Then, to get a LUT into DaVinci Resolve, the easy way is to go to the colour management page of the Resolve preferences and scroll down to the "Open LUT Folder" button, which will open Resolve's LUT folder. Copy your LUT into this folder, then click on the "Update Lists" button. Your LUT will now be available to use in Resolve.
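
If you prefer to script the copy, a minimal sketch looks something like the following. The SD card mount point and the LUT file name are just examples, and the Resolve LUT folder shown is the typical macOS location – use the "Open LUT Folder" button described above to confirm where it is on your system.

import shutil
from pathlib import Path

# Example paths only - adjust for your card, file name and operating system.
camera_lut = Path("/Volumes/Untitled/private/XDROOT/GENERAL/LUT/MYLOOK.cube")   # FX6/FX9 card, as described above
resolve_lut_folder = Path("/Library/Application Support/Blackmagic Design/DaVinci Resolve/LUT")

shutil.copy(camera_lut, resolve_lut_folder / camera_lut.name)
# Then click "Update Lists" in Resolve's preferences so the LUT appears.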

Don’t switch base ISO mid shot if using Cine-EI!

Switching base ISO mid recording in Cine-EI is causing some metadata issues in Resolve and perhaps other applications, so I strongly recommend you do not switch the base ISO mid shot.

DaVinci Resolve now reads the metadata from footage shot by the FX6 and FX9 in the Cine-EI mode and automatically adds the correct exposure offset. So, shoot at 800 ISO base with the EI set to 200 and Resolve will add a -2 stop offset to the footage so that it looks the same as it did when you shot it. Shoot at 800 ISO base and 3200 EI and again the correct +2 stop offset is applied.
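
The size of the offset is simply the ratio of EI to base ISO expressed in stops – a quick sketch of the arithmetic (not Resolve's actual code):

import math

def cine_ei_offset_stops(ei, base_iso):
    # Offset applied in post = log2(EI / base ISO), in stops.
    return math.log2(ei / base_iso)

print(cine_ei_offset_stops(200, 800))    # -2.0: footage was exposed 2 stops brighter, so it is pulled back down
print(cine_ei_offset_stops(3200, 800))   # +2.0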

However, if you shoot at the 800 base ISO, perhaps with 800 EI, and then halfway through the shot change to the high base ISO of 12,800, perhaps with 12,800 EI, Resolve gets a bit confused. It will use the new base ISO but the original EI, and as a result, from the point where you switch base ISO the footage will look extremely underexposed.

So, if you must change the base ISO, it is better to stop recording, switch base and start recording again.

DaVinci Resolve Frame Rendering Issue and XAVC

There is a bug in some versions of DaVinci Resolve 17 that can cause frames in some XAVC files to be rendered in the wrong order. This results in rendered video that appears to stutter, or motion that jumps backwards for a frame or two. This has now been fixed in version 17.3.2, so all users of XAVC and DaVinci Resolve are urged to upgrade to at least version 17.3.2.

https://www.blackmagicdesign.com/uk/support/family/davinci-resolve-and-fusion

DaVinci Resolve 16.1.2 Released.

Blackmagic Design have just released the latest update to DaVinci Resolve. If you have been experiencing crashes when using XAVC material from the PXW-FX9 I recommend you download and install this update.

If you are not a Resolve user and are struggling with grading or getting the very best from any log or raw camera, then I highly recommend you take a look at DaVinci Resolve. It's also a very powerful edit package. The best bit is that the free version supports most cameras. If you need full MXF support you will need to buy the Studio version, but with a one-off cost of only $299 USD it really is a bargain and gets you away from any horrid subscription services.

https://www.blackmagicdesign.com/support/family/davinci-resolve-and-fusion

New Training Videos For DaVinci Resolve.

Blackmagic Design's DaVinci Resolve is a really amazing piece of software, especially given that there is a free version that packs in almost all of the power of the full paid Studio version.

Today, post production grading is becoming an ever more important part of the video production process. In the past the basic colour correction functions of most edit applications were enough for most people. But now, if you are shooting log or raw, it's very important that you have the right toolset to take advantage of the benefits that log and raw offer.

For decades I have used Adobe Premiere for my editing and it has allowed me to create many great videos from broadcast TV series to simple corporates. As an edit application it’s still pretty solid. But now I shoot almost everything using log and raw and I have never been completely happy with the results from Premiere, even with Lumetri.

So I started to do my grading in Resolve and I have never looked back. The degree of control I have in Resolve is much greater. There are wonderful features such as DaVinci's own Colour Managed workflow and the ACES workflow, which make dealing with log and raw from virtually any camera a breeze. If you want a film look choose ACES; for more punchy looks choose DaVinci Color Managed. You don't need LUTs, exposure adjustments are easy, and you can then add all kinds of different secondary corrections such as power windows quickly and easily. The colour managed workflows are particularly beneficial if you wish to produce HDR versions of your productions.

Until recently my workflow was a two stage workflow: edit in Premiere, then grade and finish in Resolve. But the last couple of versions of Resolve have seen some huge advances in editing speed and capabilities. The editor is now as good as anyone else's, so I am now editing in Resolve too. It's very similar to Premiere, so it didn't take long to make the switch.

One question that I am often asked is where to find good training information and guides for Resolve. Well clearly Blackmagic Design have been listening as they have now released a series of videos that will help guide anyone new to Resolve through the basics. In total there are 8 hours of easy to follow video. The manual is also pretty good!

If you have never tried Resolve then I really urge you to give it a go. It is an incredibly powerful piece of software. It isn’t difficult to master once you see how it’s laid out, how the different “rooms” work and how to use nodes. When I started with it I really found it all quite logical. You start in the “media” room to bring in your material, then progress on to the edit room for editing, finishing in the deliver room to encode and produce your master files and other output versions.

So do take a look at the videos linked below if you want to learn more about Resolve, and do give it a try. Remember the free version will do almost everything that the full version will. The full Studio version isn't expensive and features one of the best suites of noise reduction tools anywhere. It costs only a one-off payment of $299.00 USD – no silly subscription fees to keep paying as with Adobe!

One last thing before I get to the videos: if you do a lot of grading you really should get a proper control panel. I have the Blackmagic Micro Panel and this really speeds up my grading. If you don't have a panel you can only adjust a single grading parameter at a time. With a panel you can do things like bringing up the gain while pulling down the black level. This allows you to see the interaction between your different adjustments much more dynamically, and it's just plain faster. Most of the key functions have dedicated controls, so you can quickly dial in a bit of contrast, switch to log mode, bypass a node and boost the saturation all through direct controls – very much quicker than with just a mouse. The use of the Micro Panel has probably halved the amount of time it takes me to grade a typical project, and I'm getting a better result because it's more intuitive.

So here are the videos:

Introduction to Editing.
Colour Grading.
Fusion Part 1: VFX and Graphics.
Fusion Part 2: 3D FX.
Fairlight Audio Part 1.
Fairlight Audio Part 2.
Delivery and Encoding.
Media Management.
DaVinci Resolve Mini Panel.

DaVinci Resolve, ACES and the “Sony Raw” input transform.

A quick heads up for users of Resolve with Sony Raw and X-OCN. Don't make the same mistake I have been making. For some time I have been unhappy with the way the Sony raw looked in DaVinci Resolve and ACES prior to grading. Apparently there used to be a small problem with the raw input transform that could lead to a red/pink hue getting added to the footage. This problem was fixed some time ago. You should now not use the "Sony Raw" input transform; if you do, it will tint your raw or X-OCN files slightly pink/red. Instead you should select "no transform". With no transform selected my images look so much nicer and match Sony's own Raw Viewer so much better. Thanks to Nick Shaw of Antler Post for helping me out on this, and all on the CML list.

Can DaVinci Resolve steal the edit market from Adobe and Apple?

I have been editing with Adobe Premiere since around 1994. I took a rather long break from Premiere between 2001 and 2011 and switched over to Apple and Final Cut Pro, which in many ways used to be very similar to Premiere (I think some of the same software writers worked on FCP as on Premiere). My FCP edit stations were always multi-core Mac towers – the old G5s first, then later on the Intel towers. Then along came FCP-X. I just didn't get along with FCP-X when it first came out. I'm still not a huge fan of it now, but will happily concede that FCP-X is a very capable, professional edit platform.

So in 2011 I switched back to Adobe Premiere as my edit platform of choice. Along the way I have also used various versions of Avid's software, which is another capable platform.

But right now I'm really not happy with Premiere. Over the last couple of years it has become less stable than it used to be. I run it on a MacBook Pro, which is a well defined hardware platform, yet I still get stability issues. I'm also experiencing problems with gamma and level shifts that just shouldn't be there. In addition Premiere is not very good with many long GOP codecs; FCP-X seems to make light work of XAVC-L compared to Premiere. Furthermore, Adobe's Media Encoder, which once used to be one of the first encoders to get new codecs and features, is now lagging behind: Apple's Compressor can now output the full range of HDR files, while Media Encoder can only do HDR10. If you don't know, it is possible to buy Compressor on its own.

Meanwhile DaVinci Resolve has been my grading platform of choice for a few years now. I have always found it much easier to get the results and looks that I want from Resolve than from any edit software – this isn’t really a surprise as after all that’s what Resolve was originally designed for.

DaVinci Resolve is great grading software and its edit capabilities are getting better and better.

The last few versions of Resolve have become much faster thanks to some major processing changes under the hood, and in addition there has been a huge amount of work on Resolve's edit capabilities. It can now be used as a fully featured edit platform. I recently used Resolve to edit some simpler projects that were going to be graded, as this way I could stay in the same software for both processes – and you know what, it's a pretty good editor. There are, however, a few things that I find a bit funky and frustrating in the edit section of Resolve at the moment. Some of that may simply be because I am less familiar with it for editing than I am with Premiere.

Anyway, on to my point. Resolve is getting to be a pretty good edit platform and it’s only going to get better. We all know that it’s a really good and very powerful grading platform and with the recent inclusion of the Fairlight audio suite within Resolve it’s pretty good at handling audio too. Given that the free version of Resolve can do all of the edit, sound and grading functions that most people need, why continue to subscribe to Adobe or pay for FCP-X?

With the cost of the latest generations of Apple computers expanding the price gap between them and similar spec Windows machines – as well as the new MacBooks lacking built-in ports like HDMI and USB 3 that we all use every day (you now have to use adapters and dongles) – the Apple ecosystem is just not as attractive as it used to be. Resolve is cross platform, so a Mac user can stay with Apple if they wish, or move over to Windows or Linux whenever they want. You can even switch platforms mid project if you want. I could start an edit on my MacBook and then do the grade on a PC workstation, staying with Resolve through the complete process.

Even if you need the extra features of the full version, like the very good noise reduction, facial recognition, 4K DCI output or HDR scopes, it's still good value as it currently costs only $299/£229, which is less than a year's subscription to Premiere CC.

But what about the rest of the Adobe Creative Suite? Well, you don't have to subscribe to the whole suite; you can just get Photoshop or After Effects. But there are also many alternatives. Again, Blackmagic Design have Fusion 9, a very impressive VFX package used for many Hollywood movies. Like Resolve there is a free version with a very comprehensive toolset, or again for just $299/£229 you get the full version with all its retiming tools etc.

Blackmagic Design's Fusion is a very impressive video effects package for Mac and PC.

For a Photoshop replacement you have GIMP, which can do almost everything that Photoshop can do. You can even use Photoshop filters within GIMP. The best part is that GIMP is free and works on both Macs and PCs.

So there you have it – it looks like Blackmagic Design are really serious about taking a big chunk of Adobe Premiere's users. Resolve and Fusion are cross platform, so, like Adobe's products, it doesn't matter whether you want to use a Mac or a PC. But for me the big thing is that you own the software. You are not going to be paying out rather a lot of money month on month for something that, right now, is in my opinion somewhat flakey.

I’m not quite ready to cut my Creative Cloud subscription yet, maybe on the next version of Resolve. But it won’t be long before I do.

How to create a user LUT for the PMW-F5 or F55 in Resolve (or other grading software).

It’s very easy to create your own 3D LUT for the Sony PMW-F5 or PMW-F55 using DaVinci Resolve or just about any grading software with LUT export capability. The LUT should be a 17x17x17 or 33x33x33 .cube LUT (this is what Resolve creates by default).

Simply shoot some test Slog2 or Slog3 clips at the native ISO. You must use the same Slog and color space as you will be using in the camera.

Import and grade the clips in Resolve as you wish the final image to look. Then, once you're happy with your look, right click on the clip in the timeline and choose "Export LUT". Resolve will then create a .cube LUT.
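
For reference, the .cube file Resolve writes is just a text file: a LUT_3D_SIZE line followed by rows of red, green and blue values with red varying fastest. Here is a minimal sketch that writes a 33x33x33 identity LUT in that format – purely to show the layout, not something you would normally need to do by hand:

size = 33
with open("identity_33.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {size}\n")
    for b in range(size):
        for g in range(size):
            for r in range(size):
                # Identity LUT: output equals input at every grid point.
                f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")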

Then place the .cube LUT file created by the grading software on an SD card in the PMWF55_F5 folder. You may need to create the following folder structure on the SD card: first a PRIVATE folder, inside that a SONY folder, and so on.

PRIVATE   :   SONY   :    PRO   :   CAMERA   :    PMWF55_F5
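
If you want to set the card up from a computer, a small sketch like this creates the folders and copies the LUT in one go (the card mount point and the LUT file name are just examples):

import shutil
from pathlib import Path

card = Path("/Volumes/SDCARD")   # wherever your SD card mounts
lut_folder = card / "PRIVATE" / "SONY" / "PRO" / "CAMERA" / "PMWF55_F5"
lut_folder.mkdir(parents=True, exist_ok=True)   # creates the whole structure if missing

shutil.copy("MYLOOK.cube", lut_folder / "MYLOOK.cube")   # your exported LUT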

Put the SD card in the camera, then go to the File menu, go to "Monitor 3D LUT" and select "Load SD Card". The camera will offer you a destination memory selection of 1 to 4; choose 1, 2, 3 or 4 – this is the location where the LUT will be saved. You should then be presented with a list of all the LUTs on the SD card. Select your chosen LUT to save it from the SD card to the camera.

Once loaded into the camera, when you choose 3D User LUTs you can select between user LUT memories 1, 2, 3 or 4. Your LUT will be in the memory you selected when you copied the LUT from the SD card to the camera.