Pixels should not be confused with resolution.

Let me start with the definition of “resolution” as given by the Oxford English Dictionary:

“The smallest interval measurable by a telescope or other scientific instrument; the resolving power.
  • the degree of detail visible in a photographic or television image.”

OK, so that seems clear enough – measurable or visible degree of detail.

Expanding on that a little: when we talk about the resolution of an image file such as a JPEG or TIFF, or perhaps an RGB or YCbCr* video frame, a 4K image will normally mean an image 4K pixels wide. It will have 4K of red values, 4K of green and 4K of blue across each row, three lots of 4K stacked on top of each other, so it is capable of containing any colour, or combination of colours, at every one of those 4K points or pixels. In effect, a 4K-wide image carries 12K of values across each row.
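As a quick illustration of that arithmetic (a sketch only, assuming a 4096-pixel-wide "DCI 4K" frame; UHD would be 3840):

```python
# Sample counts across one row of a full-colour 4K-wide image.
# Illustrative figures only: "4K" is taken here as 4096 pixels wide.
width = 4096      # horizontal pixel count
channels = 3      # one red, one green and one blue value per pixel

samples_per_row = width * channels
print(samples_per_row)  # 12288 -> the "12K of values" across each row
```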

Now that we know what resolution means and how it is normally used to describe an image, what does it mean when we say a camera has an 8K sensor? Generally it means there are 8K of pixels across the sensor. In the case of a single sensor used to make a colour image, some of those pixels will be for red, some for green and some for blue (or some other arrangement mixing colour and clear pixels). But does this also mean that an 8K sensor will be able to resolve 8K of measurable or visible detail? No, it does not.



Typically a single sensor that uses a colour filter array (CFA) won’t be able to resolve fine details and textures anywhere close to its horizontal pixel count. So to say that a camera with a single 8K or 4K colour sensor is a camera that can resolve an 8K or 4K image will almost certainly be a lie.

Would it be correct to call that 4K colour sensor a 4K resolution sensor? In my opinion, no. If we use a Bayer sensor as an example, half of its pixels are green and a quarter each are red and blue, so averaged across the array a 4K-wide sensor only has 2K of green, 1K of red and 1K of blue samples per row. Compare that to a 4K image such as a JPEG, which has 4K of green, 4K of red and 4K of blue values on every row and can therefore resolve any colour, or combination of colours, with 4K precision. The 4K Bayer sensor cannot; it simply doesn’t have enough pixels to sample each colour at 4K, and in fact it doesn’t even get close.

Clever image processing can take the output from a 4K Bayer sensor and use the data from the differing pixels to calculate, estimate or guess the brightness and colour at each point across the whole sensor. The actual measurable luminance resolution will typically come out at around 0.7x the pixel count, and the chroma resolution will be lower still. So if we use the dictionary definition of resolution, the measured or visible detail a 4K Bayer sensor can resolve works out at around 2.8K. Your 4K camera is unlikely to be able to create an image that can truly be said to be 4K resolution.
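That rule of thumb can be sketched as follows (the 0.7 factor is the approximation quoted above, not a fixed constant; real results depend on the demosaic algorithm, the OLPF and the lens):

```python
# Rule-of-thumb effective luma resolution of a Bayer sensor after demosaicing.
# The ~0.7x factor is the figure quoted in the text above, an approximation only.
def effective_luma_width(pixel_width, demosaic_factor=0.7):
    """Estimated resolvable luma width, in pixels, for a Bayer sensor."""
    return pixel_width * demosaic_factor

print(effective_luma_width(4096))  # ~2867 pixels, i.e. roughly "2.8K"
print(effective_luma_width(8192))  # ~5734 pixels, comfortably above 4K
```

This is also why, as discussed further down, an 8K Bayer sensor comfortably clears the bar for a true 4K deliverable while a 4K Bayer sensor does not.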

But the camera manufacturers don’t care about this. They want you to believe that your 4K camera is a 4K resolution camera. While most are honest enough not to claim that the camera can resolve 4K, they are also perfectly happy to let everyone assume that this is what the camera can do. It is also fair to say that most 4K Bayer cameras perform similarly, so your 4K camera will resolve broadly the same as every other 4K Bayer camera, and it will be much higher resolution than most HD cameras. But can it resolve 4K? No, it cannot.

The inconvenient truth that Bayer sensors don’t resolve anywhere near their pixel count is why 6K and 8K sensors are becoming more and more popular: they can deliver visibly sharper, more detailed 4K footage than a camera with a 4K Bayer sensor can. In a 4K project an 8K camera can deliver true 4K luma resolution, with chroma resolution not far behind, so your 4K film will tend to have finer and more true-to-life textures. Of course all of this is subject to other factors such as lens choice and how the signal from the camera is processed, but like for like, an 8K-pixel camera can bring real, tangible benefits for a lot of 4K projects compared to a 4K-pixel camera.

At the same time we are seeing the emergence of alternative colour filter patterns to the tried and trusted Bayer pattern: perhaps adding white (or clear) pixels for greater sensitivity, perhaps arranging the pixels in novel and different ways. This muddies the water still further, because you shouldn’t directly compare sensors with different colour filter arrays on the specification sheet alone. When you add more alternately coloured pixels to the array you force the spacing between each individual colour or luma sample to increase. So you can add more pixels yet gain no extra resolution; the resolution might even go down. As a result, 12K of one pattern type cannot be assumed to be better than 8K of another type, and vice versa. Only through empirical testing can you be sure of what any particular CFA layout can actually deliver; it is unsafe to rely on a specification sheet that simply quotes the number of pixels. And it is almost unheard of for camera manufacturers to publish verifiable resolution tests these days… I wonder why that is?


* YCbCr, or component video, can be recorded in a number of ways. A full 4:4:4 4K YCbCr image will have 4K of Y (luma, or brightness), a full 4K of the blue colour-difference values and a full 4K of the red colour-difference values. Colour-difference values are a more efficient way to encode the colour data, so the data takes less room, but just as with RGB there are three samples for each pixel in the image. Within a post-production workflow, if you work in YCbCr the image will normally be processed and handled as 4:4:4.

For further space savings many YCbCr systems can, if desired, subsample the chroma, which is when we see terms such as 4:2:2. The first digit refers to the luma, and the 4 means every pixel has a discrete luma value. In 4:2:2 the chroma is sampled on every other pixel along each line, so the horizontal chroma resolution is halved, saving space (in 4:2:0 the chroma is halved vertically as well). This is generally transparent to the viewer, as our eyes have lower chroma resolution than luma resolution.
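The subsampling notation can be put into numbers like this (a simplified model covering only the common J:a:b schemes mentioned here):

```python
# Chroma sample fractions for common Y'CbCr subsampling schemes, expressed
# per J:a:b reference block (J pixels wide, two rows): a chroma samples on
# the first row, b on the second.
SCHEMES = {
    "4:4:4": (4, 4, 4),  # full chroma: every pixel has Cb and Cr
    "4:2:2": (4, 2, 2),  # chroma halved horizontally
    "4:2:0": (4, 2, 0),  # chroma halved horizontally and vertically
}

def chroma_fraction(name):
    """Fraction of pixels carrying a chroma sample pair."""
    j, a, b = SCHEMES[name]
    return (a + b) / (2 * j)

for name in SCHEMES:
    print(name, chroma_fraction(name))
# 4:4:4 -> 1.0, 4:2:2 -> 0.5, 4:2:0 -> 0.25
```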

But it is important to understand that 4:2:2, 4:2:0 and so on are normally only used in camera recording systems, where saving storage space is considered paramount, or in broadcast and distribution systems and codecs, where reducing the required bandwidth can be necessary; SDI and HDMI signals are typically passed as 4:2:2. The rest of the time YCbCr is normally 4:4:4. If we compare 4K 4:2:2 YCbCr, which is 4K x 2K x 2K, to a 4K Bayer sensor, which has 2K G, 1K R, 1K B, it should be obvious that even after processing and reconstruction the image derived from a 4K Bayer sensor won’t match, let alone exceed, the luma and chroma resolution that can be passed via 4:2:2 SDI or recorded by a 4:2:2 codec. What you really want is a 6K, or better still an 8K, Bayer sensor.
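Putting the two side by side, using the round "K" figures from the text (a back-of-envelope comparison only):

```python
# Per-row colour samples, in "K", for 4K 4:2:2 Y'CbCr versus a 4K Bayer
# sensor, using the round figures quoted in the text above.
ycbcr_422 = {"Y": 4, "Cb": 2, "Cr": 2}  # what 4:2:2 SDI / a 4:2:2 codec carries
bayer_4k = {"G": 2, "R": 1, "B": 1}     # raw Bayer samples before demosaicing

print(sum(ycbcr_422.values()))  # 8 (K samples per row)
print(sum(bayer_4k.values()))   # 4 (K samples per row), half as many
```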

12 thoughts on “Pixels should not be confused with resolution.”

  1. And in my opinion, the more pixels on one sensor, the less light they are able to process, which is why cameras with 4K, 6K and so on rely on their gain when filming in low light conditions. With a 3-sensor camera (even in HD) the pixels are larger and more able to pick up images without the gain (or higher ISO).

    1. This used to be true when sensors only had a single layer, because the readout electronics took up a significant amount of the sensor’s surface area. So the more pixels you put on the wafer, the smaller the light-gathering part of each became, and sensitivity decreased.

      In the days of 3 chip cameras these were mostly CCD, so the readout circuit was at the bottom of the chip and not in the light path. CCD also has some noise benefits because the noisy digital processing is done off chip, but the way a CCD is read makes it unsuitable for video at resolutions much beyond HD. Most 3x CMOS cameras aren’t particularly sensitive either.

      Now that we have multi-layer CMOS sensors, where the readout circuits are in a layer below the pixels, the size of each individual photosite doesn’t really change the overall sensitivity of the sensor: the readout electronics no longer get in the way, and the processing can be moved further from the photosites to minimise noise. As a result you can get great sensitivity even with very small pixels, and groups of small pixels can be combined to create virtual large pixels with little to no change in sensitivity, either in camera or in post production. Combining smaller pixels into a larger virtual pixel can bring a useful noise improvement over a single larger pixel, as the readout noise becomes the average of the multiple pixels and is thus lower than the noise associated with any single pixel, while two small pixels capture just as many photons as a single pixel of the same combined surface area. The A7S3 and FX3 are a great example of this: they actually have an 8K sensor, but the pixels are read out in pairs for a faster readout and lower noise, and the A7S3/FX3, with its very small pixels, is considerably more sensitive than most earlier cameras. The overall sensitivity of a sensor is no longer determined by the pixel pitch. Another example is the 8K (pixel) Venice, which in its s35 mode reads out at a much finer pixel pitch than the s35 F55/F5 yet is twice as sensitive and has a much improved SNR.
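That averaging effect can be illustrated with a quick simulation (a deliberately simplified model assuming independent Gaussian read noise per photosite, in arbitrary units; real sensors are more complicated):

```python
# Monte-Carlo sketch of read-noise averaging when binning pixels.
import random
import statistics

random.seed(42)
N = 100_000
READ_NOISE = 2.0  # per-photosite read noise, arbitrary units

# One large pixel: a single read, carrying the full read noise.
single = [random.gauss(0.0, READ_NOISE) for _ in range(N)]

# A 2x2 virtual pixel: four reads averaged, so noise drops by 1/sqrt(4).
binned = [statistics.fmean(random.gauss(0.0, READ_NOISE) for _ in range(4))
          for _ in range(N)]

print(statistics.pstdev(single))  # ~2.0
print(statistics.pstdev(binned))  # ~1.0, half the read noise of one big read
```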

      1. Thanks for the reply and explanation Alister.
        Can you explain one more thing for me? How come the Sony PMW-300 (I know, they are becoming old) works perfectly at 0dB gain in theatre recording, but with the BM URSA Broadcast G2 I had to gain up to +18dB and still wasn’t happy with the brightness of the footage compared to the old full HD 3-sensor Sonys? Thanks and kind regards,

        1. I don’t know why.

          Blackmagic rate the URSA Broadcast G2 at 400 ISO at its low base and 3200 at its high base. Based on those specs it should be considerably more sensitive than the PMW-300, especially in the high base mode; the PMW-300 is around 200-300 ISO at 0dB. What lenses were you using? The PMW-300’s stock lens is a pretty fast f1.9.

          1. I use Canon 4K broadcast lenses in several varieties, all f1.8. I do a lot of theatre recording for television broadcast.

          2. Then something doesn’t seem right. 3200 ISO is around 20dB greater than 300 ISO, so based on the spec sheets the Ursa in high base mode should be 3 to 4 stops more sensitive than the PMW-300. Even allowing for the vagaries of ISO ratings, something really doesn’t seem right; I would expect the Broadcast G2 to outperform the PMW-300 by a good margin in low light. I know I wouldn’t want to go back to a PMW-300. When I first tried videoing the Northern Lights with an EX1 (same sensor as the PMW-300, 3.6 micron pixel pitch) I had to combine the longest slow-shutter setting with interval record to get exposures over a second long. The original A7S was a game changer, but now the latest sensors are even better, and I can shoot the Northern Lights with an 8K camera at shutter speeds from 1/24th to 1/8th of a second with lower noise. I’m putting only a tiny fraction of the light on the 8K (4.1 micron pixel pitch) sensor, but getting a far better output than was ever possible with the EX1, PMW-200, PDW-700 etc, whose pixels weren’t really that much smaller.
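For anyone wanting to check that arithmetic, the ISO-to-dB conversion looks like this (assuming the usual 20·log10 voltage-gain convention and roughly 6dB per stop; camera ISO ratings themselves are only approximate):

```python
# Converting an ISO ratio to a gain figure in dB and stops.
import math

def iso_ratio_to_db(iso_a, iso_b):
    """Gain in dB between two ISO ratings, 20*log10 convention."""
    return 20 * math.log10(iso_a / iso_b)

def db_to_stops(db):
    """Roughly 6dB of gain per stop of exposure."""
    return db / 6.0

gain_db = iso_ratio_to_db(3200, 300)
print(round(gain_db, 1))               # 20.6 dB
print(round(db_to_stops(gain_db), 1))  # ~3.4 stops
```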

          3. I’ll add to this some pixel pitch numbers:

            EX1/PMW-300/PMW-200 etc 3.6 microns.
            Typical Super 35 4K sensor 6.7 microns.
            Ursa Broadcast 6K s35 around 4.4 microns.
            Typical 6K FF sensor around 5.9 microns.
            Typical 8K FF sensor around 4.1 microns.

            And we know that the Sony F5/F55 (6.7um) is not as sensitive as the Venice 1 (5.9um) which is not as sensitive as the 8K Venice 2 (4.1um).

    1. Most 4K TVs and monitors that claim to have a 4K screen will have 4K of each colour (and white, where white pixels are used), so they can resolve a 4K image correctly.

      I think it’s interesting to consider that with so much “4K” content being shot on 4K Bayer sensors, when people say “I can see the difference, but it doesn’t seem that much better than HD”, it is safe to put some of that down to the fact that they are quite possibly watching content that in reality is only around 2.8K. It is also likely why many are disappointed with 8K TVs: something shot with an 8K Bayer sensor may only resolve around 4K-5K (lens performance starts to become a further issue), so comparing that footage on a comparable-quality 4K TV and an 8K TV really isn’t going to look significantly different. The sweet spot right now is using an 8K Bayer sensor to shoot for 4K delivery. The content I shoot with my 8K sensors almost always shows greater sharpness and finer textures when viewed on a large 4K screen than the same shot made with a 4K Bayer sensor.

      Further to this, in a workshop I did recently we compared several top-end cameras from Arri, Sony and Red and started looking at some rather extreme colour grades; in the blind viewing, all the 8K cameras were seen to have finer and less processed-looking textures than the 4K cameras.

  2. Indeed very odd looking at your numbers. A mystery I will not want to solve, since my PMW-300s are working for the best in theatre. I do have some PP settings which might influence my results 😉
    Please keep up the good work you are doing as long as possible.
    Kind regards,
    John

  3. Hi Alister, I like your expertise, as always. Thanks for all you do for the community. One question remains for me.
    Why is this so:
    Quote: “…the readout noise becomes the average of the multiple pixels and thus lower than the noise associated with a single pixel…”
    Is the readout noise of a bigger pixel greater than the noise of a smaller pixel? If so, why?
    Thanks in advance, Matthias
