Let me start with the definition of “resolution” as given by the Oxford English Dictionary:
"the degree of detail visible in a photographic or television image."
OK, so that seems clear enough – measurable or visible degree of detail.
Expanding on that a little: when we talk about the resolution of an image file such as a JPEG or TIFF, or perhaps an RGB or YCbCr* video frame, a 4K image will normally mean an image 4K pixels wide. It will have 4K of red values, 4K of green values and 4K of blue values across each row, three lots of 4K stacked on top of each other, so it is capable of containing any colour or combination of colours at every one of those 4K points or pixels. In effect a 4K wide image carries 12K of values across each row.
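To put some numbers on that, here is a minimal sketch (Python, assuming a 4096 pixel DCI 4K width; a UHD frame would be 3840) of the sample counts across one row of a full RGB frame:

```python
# A minimal sketch of the sample counts in one row of a full RGB frame.
# Assumes a DCI 4K width of 4096 pixels; substitute 3840 for UHD.
width = 4096

samples_per_row = {"R": width, "G": width, "B": width}   # every channel is sampled at every pixel

print(samples_per_row)                 # {'R': 4096, 'G': 4096, 'B': 4096}
print(sum(samples_per_row.values()))   # 12288 -> the "12K of values across" mentioned above
```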
Now we know what resolution means and how it is normally used when describing an image, what does it mean when we say a camera has an 8K sensor? Generally it means there are 8K of pixels across the sensor. In the case of a single sensor used to make a colour image, some of those pixels will be for red, some for green and some for blue (or some other arrangement of a mix of colour and clear pixels). But does this also mean that an 8K sensor will be able to resolve 8K of measurable or visible detail? No, it does not.
Typically a single sensor that uses a colour filter array (CFA) won't be able to resolve fine details and textures anywhere close to its horizontal pixel count. So, to say that a camera with a single 8K or 4K colour sensor can resolve an 8K or 4K image will almost certainly be a lie.
Would it be correct to call that 4K colour sensor a 4K resolution sensor? In my opinion no, because if we use a Bayer sensor as an example, only half of its pixels are green and a quarter each are red and blue, so for every 4K of pixels there are only 2K of green, 1K of red and 1K of blue samples. Compare that to a 4K image such as a JPEG, which is made up of 4K of green, 4K of red and 4K of blue values across every row. The JPEG has the ability to resolve any colour or combination of colours with 4K precision. That 4K Bayer sensor cannot; it simply doesn't have sufficient pixels to sample each colour at 4K, in fact it doesn't even get close.
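A tiny sketch makes the proportions obvious. This counts the colours in one two-row repeat of a standard RGGB Bayer pattern (4096 is just my assumed "4K" width; the proportions are what matter):

```python
# A minimal sketch of how a classic RGGB Bayer mosaic divides its pixels between colours.
width = 4096

bayer_rows = [["R", "G"],    # even rows: R G R G ...
              ["G", "B"]]    # odd rows:  G B G B ...

counts = {"R": 0, "G": 0, "B": 0}
for row in bayer_rows:                 # one full two-row repeat of the pattern
    for x in range(width):
        counts[row[x % 2]] += 1

# Average samples of each colour per 4K-wide row:
print({c: counts[c] // len(bayer_rows) for c in counts})   # {'R': 1024, 'G': 2048, 'B': 1024}
```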
Clever image processing can take the output from a 4K Bayer sensor and use the data from the differing pixels to calculate, estimate or guess the brightness and colour at each point across the whole sensor, but the actual measurable luminance resolution will typically come out at around 0.7x the pixel count, and the chroma resolution will be even lower. So if we use the dictionary definition of resolution, the measurable or visible detail the sensor can resolve, we can expect a camera with a 4K-wide Bayer sensor to deliver a resolution of around 2.8K. Your 4K camera is unlikely to be able to create an image that can truly be said to be 4K resolution.
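As a rough worked example (Python again, using the 0.7x rule of thumb from above; treat the factor as a typical figure, not a measurement):

```python
# A back-of-envelope sketch of the ~0.7x rule of thumb quoted above. The 0.7
# factor is only a typical figure for a demosaiced Bayer sensor; real cameras
# vary with their optical low pass filter and processing.
def effective_luma_width(pixel_width, factor=0.7):
    """Rough estimate of resolvable luma detail from the horizontal pixel count."""
    return round(pixel_width * factor)

for label, width in (("4K", 4096), ("6K", 6144), ("8K", 8192)):
    print(label, effective_luma_width(width))
# 4K 2867   -> "around 2.8K"
# 6K 4301
# 8K 5734
```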
But the camera manufacturers don't care about this. They want you to believe that your 4K camera is a 4K resolution camera. While most are honest enough not to claim that the camera can resolve 4K, they are also perfectly happy to let everyone assume that this is what it can do. It is also fair to say that most 4K Bayer cameras perform similarly, so your 4K camera will resolve broadly the same as every other 4K Bayer camera, and it will be much higher resolution than most HD cameras. But can it resolve 4K? No, it cannot.
The inconvenient truth that Bayer sensors don't resolve anywhere near their pixel count is why we see 6K and 8K sensors becoming more and more popular, as these sensors can deliver visibly sharper, more detailed 4K footage than a camera with a 4K Bayer sensor can. In a 4K project an 8K camera can deliver close to full 4K luma resolution, with chroma resolution not far behind, and as a result your 4K film will tend to have finer, more true to life textures. Of course all of this is subject to other factors such as lens choice and how the signal from the camera is processed, but like for like an 8K pixel camera can bring real, tangible benefits to a lot of 4K projects compared to a 4K pixel camera.
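Continuing the back-of-envelope maths (the 0.7 luma and 0.5 chroma factors here are only illustrative rule-of-thumb values, not measurements of any particular camera):

```python
# A rough sketch of why an 8K Bayer capture helps a 4K deliverable.
DELIVERY_WIDTH = 4096   # assumed 4K timeline width

def delivered_detail(sensor_width, factor):
    captured = sensor_width * factor              # detail the sensor can actually resolve
    return round(min(captured, DELIVERY_WIDTH))   # can never exceed the 4K container

for label, width in (("4K Bayer", 4096), ("8K Bayer", 8192)):
    print(label,
          "luma:",   delivered_detail(width, 0.7),
          "chroma:", delivered_detail(width, 0.5))
# 4K Bayer luma: 2867 chroma: 2048   -> soft within a 4K timeline
# 8K Bayer luma: 4096 chroma: 4096   -> fills the 4K container
```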
At the same time we are seeing the emergence of alternative colour filter patterns to the tried and trusted Bayer pattern, perhaps adding white (or clear) pixels for greater sensitivity, or arranging the pixels in novel ways. This muddies the water still further, because you shouldn't directly compare sensors with different colour filter arrays on the specification sheet alone. When you add more differently coloured pixels into the array you force the spacing between the samples of each individual colour (or of luma) to increase. So you can add more pixels but not actually gain extra resolution; in fact the resolution might even go down. As a result 12K of one pattern type cannot be assumed to be better than 8K of another type, and vice versa. Only empirical testing can tell you what any particular CFA layout actually delivers; it is unsafe to rely on a specification sheet that simply quotes the number of pixels. And it is almost unheard of for camera manufacturers to publish verifiable resolution tests these days… I wonder why that is?
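To illustrate that point, here is a toy comparison showing how the fraction of pixels devoted to each colour falls as clear pixels are added. The RGBW tile below is a made-up example purely for illustration, not any real manufacturer's pattern:

```python
# An illustrative sketch, not a real layout: adding white/clear pixels to the
# repeating tile lowers the sampling density of each colour, even though the
# total pixel count stays the same or goes up.
from collections import Counter

def sample_fractions(tile):
    flat = [p for row in tile for p in row]
    counts = Counter(flat)
    return {c: counts[c] / len(flat) for c in sorted(counts)}

bayer = [["R", "G"],
         ["G", "B"]]

rgbw = [["R", "W", "G", "W"],   # hypothetical RGBW-style tile, for illustration only
        ["W", "G", "W", "B"],
        ["G", "W", "B", "W"],
        ["W", "R", "W", "G"]]

print("Bayer:", sample_fractions(bayer))   # B 0.25, G 0.5, R 0.25
print("RGBW :", sample_fractions(rgbw))    # B 0.125, G 0.25, R 0.125, W 0.5
```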
* YCbCr or component video can be recorded in a number of ways. A full 4:4:4 4K YCbCr image will have 4K of Y (luma, or brightness), a full 4K of the blue colour difference (Cb) and a full 4K of the red colour difference (Cr). The colour difference values are a more efficient way to encode the colour data, so the data takes less room, but just like RGB there are still three samples for each pixel within the image. Within a post production workflow, if you work in YCbCr the image will normally be processed and handled as 4:4:4.
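For the curious, this is roughly what that conversion looks like; a sketch assuming BT.709 coefficients and RGB values normalised to the 0 to 1 range:

```python
# A minimal sketch of 4:4:4 YCbCr: every pixel still carries three samples, they
# are just expressed as luma plus two colour difference values.
# Assumes BT.709 coefficients and RGB normalised to 0..1.
def rgb_to_ycbcr(r, g, b):
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma (brightness)
    cb = (b - y) / 1.8556                        # blue colour difference
    cr = (r - y) / 1.5748                        # red colour difference
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.5, 0.25))   # one pixel in, three values out -> still 4:4:4
```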
For further space savings many YCbCr systems can, if desired, subsample the chroma, which is when we see terms such as 4:2:2. The first digit refers to the luma and the 4 means every pixel has a discrete luma value. In 4:2:2 the 2:2 means that the Cb and Cr values are only sampled on every other pixel across each line (but on every line), so the horizontal chroma resolution is halved and space is saved. In 4:2:0 the chroma is additionally only sampled on every other line, halving it vertically as well. This is generally transparent to the viewer because our eyes have lower chroma resolution than luma resolution.
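A small sketch of what each scheme keeps, using a toy 8x4 chroma plane:

```python
# A minimal sketch of what chroma subsampling throws away. 'cb' is a toy
# full-resolution colour difference plane (a list of rows): 4:2:2 keeps every
# other column but every line, 4:2:0 keeps every other column and every other line.
def subsample_422(plane):
    return [row[::2] for row in plane]

def subsample_420(plane):
    return [row[::2] for row in plane[::2]]

cb = [[10 * r + c for c in range(8)] for r in range(4)]   # toy 8 x 4 chroma plane
for name, plane in (("4:4:4", cb), ("4:2:2", subsample_422(cb)), ("4:2:0", subsample_420(cb))):
    print(name, len(plane[0]), "x", len(plane), "chroma samples")
# 4:4:4 -> 8 x 4,  4:2:2 -> 4 x 4,  4:2:0 -> 4 x 2
```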
But it is important to understand that 4:2:2 and 4:2:0 etc. are normally only used in recording systems in cameras, where saving storage space is considered paramount, or in broadcast and distribution systems and codecs where reducing the bandwidth required can be necessary. SDI and HDMI signals are typically passed as 4:2:2. The rest of the time YCbCr is normally 4:4:4. If we compare 4K 4:2:2 YCbCr, which is 4K Y x 2K Cb x 2K Cr, to a 4K Bayer sensor, which has 2K G, 1K R and 1K B, it should be obvious that even after processing and reconstruction the image derived from a 4K Bayer sensor won't match or exceed the luma and chroma resolution that can be passed over 4:2:2 SDI or recorded by a 4:2:2 codec. What you really want is a 6K, or better still an 8K, Bayer sensor.
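Putting the two side by side, with the same 4096 pixel assumption as before:

```python
# A quick per-line tally using the figures from the text: 4K 4:2:2 carries far
# more real samples per line than a 4K Bayer sensor captures natively, which is
# why a 6K or 8K sensor is needed to fill it properly.
width = 4096

ycbcr_422 = {"Y": width, "Cb": width // 2, "Cr": width // 2}      # 4096 / 2048 / 2048
bayer_4k  = {"G": width // 2, "R": width // 4, "B": width // 4}   # 2048 / 1024 / 1024 on average

print("4K 4:2:2 samples per line:", ycbcr_422, "total", sum(ycbcr_422.values()))
print("4K Bayer samples per line:", bayer_4k,  "total", sum(bayer_4k.values()))
```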