
What is Dual Base ISO and why is it important?

Almost all modern-day video and electronic stills cameras have the ability to change the brightness of the images they record. The most common way to achieve this is by adding gain, that is by amplifying the signal that comes from the sensor.

On older video cameras this amplification was expressed as dB (decibels) of gain. A brightness change of 6dB is the same as one stop of exposure or a doubling of the ISO rating. But you must understand that adding gain to raise the ISO rating of a camera is very different to actually changing its sensitivity.
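If you like to see the relationship between dB, stops and ISO as numbers, here is a tiny Python sketch. The base ISO of 800 is just an example figure for illustration, not a value for any particular camera:

```python
import math

def gain_db_for_iso_ratio(iso_ratio):
    """dB of gain needed to multiply the signal level by iso_ratio (a voltage ratio)."""
    return 20 * math.log10(iso_ratio)

base_iso = 800                      # example base ISO, purely illustrative
for stops in range(0, 4):
    ratio = 2 ** stops              # each stop doubles the signal level
    print(f"ISO {base_iso * ratio:>5} = +{stops} stop(s) = {gain_db_for_iso_ratio(ratio):4.1f} dB of gain")
```

Running this prints 0dB at the base ISO, then roughly 6dB, 12dB and 18dB for each doubling, which is exactly the 6dB-per-stop relationship described above.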

The problem with increasing the amplification or adding gain to the sensor output is that when you raise the gain you increase the level of the entire signal that comes from the sensor. So, as well as increasing the levels of the desirable parts of the image, making it brighter, the extra gain also increases the amplitude of the noise, making that brighter too.

Imagine you are listening to an FM radio. The signal starts to get a bit scratchy, so in order to hear the music better you turn up the volume (increasing the gain). The music will get louder, but so too will the scratchy noise, so you may still struggle to hear the music. Changing the ISO rating of an electronic camera by adding gain is little different. When you raise the gain the picture does get brighter but the increase in noise means that the darkest things that can be seen by the camera remain hidden in the noise which has also increased in amplitude.

Another issue with adding gain to make the image brighter is that you will also normally reduce the dynamic range that you can record.

This is because amplification makes the entire signal bigger. So bright highlights that may be recordable within the recording range of the camera at 0dB or the native ISO may exceed the upper range of the recording format even when only a small amount of gain is added, limiting the high end.

Adding gain amplifies the brighter parts of the image so they can now exceed the camera's recording range.

 

At the same time the increased noise floor masks any additional shadow information so there is little if any increase in the shadow range.
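To put some rough numbers on this, here is a very simplified Python sketch. It is not how any real camera processes its signal and all the values are made up, but it shows the two effects described above: the noise floor rises by exactly the same factor as the gain, and the brightest values hit the top of the recording range and clip.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up "sensor output": a ramp from deep shadow to a bright highlight,
# plus a small amount of random noise. Units are arbitrary, full scale = 1.0.
scene  = np.linspace(0.001, 0.9, 10)
noise  = rng.normal(0.0, 0.005, scene.shape)
signal = scene + noise

gain_db  = 12                                   # +12dB, i.e. two stops of added gain
gain_lin = 10 ** (gain_db / 20)                 # roughly 4x

amplified = np.clip(signal * gain_lin, 0.0, 1.0)    # the recorder clips at full scale

print("noise floor before gain:", 0.005)
print("noise floor after gain :", 0.005 * gain_lin)                       # 4x bigger too
print("samples clipped        :", int(np.sum(signal * gain_lin > 1.0)), "of", signal.size)
```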

Reducing the gain doesn't really help either, as now the brightest parts of the image from the sensor are not amplified sufficiently to reach the camera's full output. Very often the recordings from a camera with -3dB or -6dB of gain will never reach 100%.

Negative gain may also reduce the camera's dynamic range.



A camera with dual base ISOs works differently.

Instead of adding gain to increase the sensitivity, a camera with a dual base ISO sensor will operate the sensor in two different sensitivity modes. This allows you to shoot in the low sensitivity mode when you have plenty of light, avoiding the need to add lots of ND filters when you want to obtain a shallow depth of field. Then when you are short of light you can switch the camera to its high sensitivity mode.

When done correctly, a dual ISO camera will have the same dynamic range and colour performance in both the high and low ISO modes and only a very small difference in noise between the two.

How dual sensitivity with no loss of dynamic range is achieved is often kept very secret by the camera and sensor manufacturers. Getting good, reliable and solid information is hard. Various patents describe different methods. Based on my own research this is a simplified description of how I believe Sony achieve two completely different sensitivity ranges on both the Venice and FX9 cameras.

The image below represents a single microscopic pixel from a CMOS video sensor. There will be millions of these on a modern sensor. Light from the camera lens passes first through a micro lens and colour filter at the top of the pixel structure. From there the light hits a part of the pixel called a photodiode. The photodiode converts the photons of light into electrons of electricity. 

Layout of a sensor pixel including the image well.

In order to measure the pixel output we have to store the electrons for the duration of the shutter period. The part of the pixel used to store the electrons is called the "image well" (in an electrical circuit diagram the image well would be represented as a capacitor and is often simply the capacitance of the photodiode itself).

The pixel's image well starts to fill up and the signal output level increases.

Then as more and more light hits the pixel, the photodiode produces more electrons. These pass into the image well and the signal increases. Once we reach the end of the shutter opening period the signal in the image well is read out, empty representing black and full representing very bright.

Now consider what would happen if the image well, instead of being a single charge storage area, was actually two charge storage areas, with a way to select whether we use the combined storage areas or just one part of the image well.

Dual ISO pixel where the size of the image well can be altered.

When both areas are connected to the pixel the combined capacity is large. It takes more electrons to fill it up, so more light is needed to produce that larger number of electrons. This is the low sensitivity mode.

If part of the charge storage area is disconnected and all of the photodiodes output is directed into the remaining, now smaller storage area then it will fill up faster, producing a bigger signal more quickly. This is the high sensitivity mode.
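Here is a toy model of that idea in Python. The capacitance and electron numbers are completely invented and real sensors are far more complicated than this, but it shows why routing the same amount of charge into a smaller storage area behaves like a sensitivity increase:

```python
# Toy model of the dual conversion gain idea described above.
# The capacitance and electron numbers are invented purely for illustration.
ELECTRON_CHARGE = 1.602e-19          # coulombs

def pixel_output_volts(electrons, well_capacitance_farads):
    # Voltage across the charge storage node: V = Q / C
    return electrons * ELECTRON_CHARGE / well_capacitance_farads

photo_electrons = 5000               # the same amount of light in both cases

big_well   = 40e-15                  # both storage areas connected = low sensitivity mode
small_well = 10e-15                  # only the small area connected = high sensitivity mode

print(f"low  base ISO signal: {pixel_output_volts(photo_electrons, big_well):.3f} V")
print(f"high base ISO signal: {pixel_output_volts(photo_electrons, small_well):.3f} V")
# The same light gives roughly 4x the output voltage from the smaller well,
# which is why the camera behaves as though it has become more sensitive.
```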

What about noise?

In the low sensitivity mode, with the bigger storage area, any unwanted noise generated by the photodiode will be more diluted by the greater volume of electrons, so noise will be low. When the size of the storage area or image well is reduced the noise from the photodiode will be less diluted, so the noise will be a little bit higher. But overall the noise will be much less than it would be if a large amount of extra gain was added.

Note for the more technical amongst you: Strictly speaking the image well starts full. Electrons have a negative charge so as more electrons are added the signal in the image well is reduced until maximum brightness output is achieved when the image well is empty!!

As well as what I have illustrated above there may be other things going on, such as changes to the amplifiers that boost the pixel's output before it is passed to the converters that turn the pixel output from an analog signal into a digital one. But hopefully this will help explain why dual base ISO is very different to the conventional gain changes used to give electronic cameras a wide range of different ISO ratings.

On the Sony Venice and the PXW-FX9 there is only a very small difference between the noise levels when you switch from the low base ISO to the high one. This means that you can pick and choose between either base sensitivity level depending on the type of scene you are shooting without having to worry about the image becoming unusable due to noise.

NOTE: This article is my own work and was prepared without any input from Sony. I believe that the dual ISO process illustrated above is at the core of how Sony achieve two different base sensitivities on the Venice and FX9 cameras. However I cannot categorically guarantee this to be correct.

The “E” in “E-Mount” stands for Eighteen.

A completely useless bit of trivia for you is that the "E" in E-mount stands for eighteen. 18mm is the E-mount flange back distance. That's the distance between the sensor and the face of the lens mount. The fact that the E-mount is only 18mm, while most other DSLR systems have a flange back distance of around 40mm, means there are 20mm or more in hand that can be used for adapters to go between the camera body and 3rd party lenses with different mounts.

Here’s a little table of some common flange back distances:

MOUNT                  FLANGE BACK   DIFFERENCE vs E-MOUNT
E-mount                18mm          (reference)
Sony FZ (F3/F5/F55)    19mm          1mm
Canon EF               44mm          26mm
Nikon F Mount          46.5mm        28.5mm
PL                     52mm          34mm
Arri LPL               44mm          26mm
Sony A, Minolta        44.5mm        26.5mm
M42                    45.46mm       27.46mm
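If you want to work out the adapter room for a mount that isn't in the table, the sum is simply that mount's flange back distance minus E-mount's 18mm. A quick Python version using the figures above:

```python
# Flange back distances (mm) from the table above.
flange_back_mm = {
    "E-mount": 18.0,
    "Sony FZ (F3/F5/F55)": 19.0,
    "Canon EF": 44.0,
    "Nikon F Mount": 46.5,
    "PL": 52.0,
    "Arri LPL": 44.0,
    "Sony A, Minolta": 44.5,
    "M42": 45.46,
}

E_MOUNT = flange_back_mm["E-mount"]

for mount, distance in flange_back_mm.items():
    adapter_room = distance - E_MOUNT   # space available for an adapter on an E-mount body
    print(f"{mount:<22} {distance:>6.2f} mm   adapter room: {adapter_room:5.2f} mm")
```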

If you have an AXS-AR1, you need to update the firmware.

A firmware bug has been identified with the Sony AXS-AR1 AXS/SxS card reader that can result in the corruption of the data on a card when performing concurrent data reads. To ensure this does not happen you should update the firmware of your AXS-AR1 immediately.

For more information please see the post linked below on the official Sony Cine website, where you will find instructions on how to perform the update and where to download the necessary update files.

https://sonycine.com/articles/sony-axs-ar1-firmware-update—do-this-now/

More about S-Cinetone and the so-called Venice Color Science.

UPDATED WITH NEW INFO, Nov 23rd 2019.

What is the “Venice Look”?

Sony had often been criticized for having a default look to their cameras that wasn’t “film like”. This was no accident as Sony have been a leading producer of TV cameras for decades and a key thing for a broadcaster is that both old and new cameras should match. So for a very long time all of Sony’s cameras were designed to look pretty much like any other TV camera.

But this TV look wasn't helping Sony to sell their film style cameras. So when they developed the image processing for the Venice camera a lot of research was done into what makes a pretty picture. Then over a period of about 18 months a new LUT was created for the Venice camera to take advantage of that sensor's improved image quality and to turn the output into a beautiful looking image. This LUT was designed to still leave a little room to grade, so it is slightly flat. But it does include a big highlight roll off to help preserve a lot of the camera's dynamic range.

This LUT is called s709 (I think it simply stands for "Sony 709") and it's a large part of the reason why, out of the box, the Venice camera looks the way it does. Of course a skilled colourist might only rarely use this LUT and may make the output from a Venice look very different, but a Venice with s709 is regarded as the default "Venice look", and it's a look that a lot of people really, really like. It's what comes out of the SDI ports, it's what's seen in the viewfinder and it can be recorded to the SxS cards unless you select the legacy 709(800) LUT. s709 is also the LUT applied by default to X-OCN from Venice.

What is Color Science?

Colour Science is the fancy new term that Red have turned into a catch-all for anything to do with colour, and it's now much abused. Every colour video camera ever made uses colour science to determine the way the image looks. It's nothing new. All colour science is, is how all the different elements of a camera and its workflow work together to produce the final colour image. But in the last couple of years it seems to have come to mean "color magic" or "special sauce".

If we are to be totally accurate the only camera with Venice colour science is Venice. No other camera has exactly the same combination of optical filters, sensor, processing, codecs and workflow. No other camera will replicate exactly the way Venice responds to light and turns it into a color image. You might be able to make the output of another camera appear similar to a Venice, but even then it won’t be the same colour science. What it would be is perhaps the “Venice look”.

The FS5 II and its new default look.

So when Sony released the FS5 II they were very careful to describe the default mode as providing a Venice "like" image, tuned to provide softer, alluring skin tones using insight and expertise gained during the development of Venice. Because that's what it is: it looks more like Venice than previous generations of Sony cameras because it has been tuned to output an image that looks similar. But it isn't really Venice color science, it's a Venice look-a-like, or at least as similar as you can get from a very different sensor, with a touch of extra contrast added to make it more suitable for an out of the box look that won't necessarily be graded.

And the PXW-FX9 and S-Cinetone?

The FX9 has new colour filters, a new sensor, new processing. But it is not a Venice. In Custom mode it has what Sony are now calling "S-Cinetone" which is set to become their new default look for their film style cameras. This once again is based on the Venice look and shares many similarities to the Venice colour science, but it will never be the full Venice colour science because it can't be, it's different hardware. S-Cinetone is a combination of a gamma curve called "original" and a matrix called "S-Cinetone" in the FX9. When used together S-Cinetone gives similar colours to Venice but has increased contrast suitable for direct-to-air applications where the material won't be graded (s709 in comparison is flatter). S-Cinetone has a very gentle highlight roll off and produces a film like look that is tailored for video productions rather than the flatter s709 look which is designed for on set monitoring on film style shoots. If you want you can mix different gamma curves with the S-Cinetone matrix to have the Venice like colours but with different contrast ranges to suit the scene that you are shooting. If you need a broadcast safe image you can use Hypergamma1 with the S-Cinetone matrix.
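To make the "gamma curve plus matrix" idea a little more concrete, here is a heavily simplified Python sketch. The matrix coefficients and the tone curve are placeholders I have made up purely for illustration; Sony do not publish the real S-Cinetone values and the camera's actual processing is far more sophisticated than this:

```python
import numpy as np

def example_tone_curve(x):
    # Placeholder gamma with a gentle highlight roll-off - NOT the real "original" gamma.
    return x / (x + 0.2) * 1.2

# Placeholder 3x3 matrix - NOT the real S-Cinetone coefficients, which are not published.
example_matrix = np.array([
    [ 1.10, -0.05, -0.05],
    [-0.05,  1.05,  0.00],
    [ 0.00, -0.10,  1.10],
])

def apply_look(rgb, matrix, tone_curve):
    """A 'look' in the S-Cinetone sense: a colour matrix paired with a tone (gamma) curve.
    Real camera processing is far more involved; this only illustrates the pairing."""
    return tone_curve(np.clip(matrix @ rgb, 0.0, None))

print(apply_look(np.array([0.18, 0.18, 0.18]), example_matrix, example_tone_curve))
```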

Is the Venice look always there?

Previous generations of Sony cameras used a common default 709 gamma, often denoted as STD5, combined with a 709 colour matrix. This is what most of us probably called the "Sony look". The exact colour science in each camera with this look would have been quite different as there were many combinations of filters, sensors and processing, but those variations in processing were designed such that the final output of generations of Sony TV cameras all looked almost exactly the same. This too still exists in the FX9 and when set to STD5 the FX9 will produce an image very, very close to earlier generations of Sony cameras. So from this new sensor with the latest filters etc. you can still have the old look. This just demonstrates how the broad brush use of the term colour science is so confusing, as the FX9 is a new camera with new colour science, but it can still look just like the older cameras.

What about when I shoot S-Log3?

When shooting S-Log3 with the FX9, you are shooting S-Log3. And S-Log3/S-Gamut3 (or S-Gamut3.Cine) is a set standard where certain numerical values represent certain real world colours and brightnesses. So the S-Log3 from an FX9 will look very similar to the S-Log3 from a Venice, which is similar to the S-Log3 from an F55, which is similar to the S-Log3 from an FS7.
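For those who like to see the numbers, Sony publish the S-Log3 transfer function in their S-Gamut3/S-Log3 technical summary, and it is the same curve whichever camera records it. Here it is in Python, written out from that document as I understand it, so treat the exact constants as illustrative:

```python
import math

def s_log3(x):
    """Sony's published S-Log3 curve. x is linear scene reflectance (0.18 = middle grey),
    the return value is a normalised full-range code value (multiply by 1023 for 10 bit)."""
    if x >= 0.01125:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    return (x * (171.2102946929 - 95.0) / 0.01125 + 95.0) / 1023.0

# Any camera recording S-Log3 should put these reflectances at (close to) the same
# code values, which is why FX9, Venice, F55 and FS7 S-Log3 all cut together so well.
for reflectance in (0.0, 0.18, 0.90):
    print(f"{reflectance * 100:5.1f}% reflectance -> 10 bit code value {s_log3(reflectance) * 1023:6.1f}")
```

Middle grey landing at code value 420 is where the familiar 41% S-Log3 exposure figure comes from when measured against the legal video range on a waveform.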

But compared to an FS7 at least, the different, improved sensor in the FX9 will mean that it will be able to capture a bigger dynamic range, it will have less noise and the sensor's response to colour is much improved. BUT it will still be recorded in the same manner using the same gamma curve and colour space, with the same numerical values representing the same brightness levels and colours. However the fact that the sensor is different will mean there will be subtle differences within the image. One obvious one being the extra dynamic range, but also things like better colour separation and a more true to life colour response at the sensor level.

Then you apply the s709 LUT, the very same LUT as used for Venice. So those very same numerical values are turned into the same expected colours and brightness levels. But because it's a different sensor some values may have been better captured, some worse, so while overall the image will look very, very similar there will be subtle differences, and it's the subtle differences that make one look more natural or more pleasing than the other. For example the FX9 image will have less noise and greater DR than the image from an FS7. In addition the FX9 images will have more pleasing looking skin tones because, from what I have seen, the sensor responds better to the tones that make up a face etc.

Why not use the same name for s709 and S-Cinetone?

S-Cinetone is different to s709. One is a gamma curve plus colour matrix designed to be recorded as is for television and video applications. You can't change middle grey or white, you can't alter the highlight or shadow ranges, other than by using alternate gammas with the S-Cinetone matrix. The default "original" gamma curve has more contrast than the S-Log3 + s709 LUT and the colours, although similar, are slightly different.

s709 is a LUT applied to S-Log3 material designed to provide a film like look for on set monitoring. Both S-Cinetone and s709 will look similar, but they are two different things that require two very different workflows, to call them the same thing would be confusing. You get a call from the producer “I want you to shoot S-Cinetone”…. Which one? The log one or the S-Cinetone one?

Because the FX9's optical low pass filter, ND filter, sensor colour filters, pixels, sensor output circuits and initial processing of the image are all the same whether in S-Cinetone or S-Log3, those aspects of the colour science are common to both. But when shooting S-Log3 you have a huge range of options in post, not just s709.

So in reality the FX9 has several different color sciences. One that mimics a default Venice camera without needing to shoot log and grade. One that mimics earlier generations of Sony TV cameras. Another that mimics a Sony Venice when shooting S-Log3 and using the s709 LUT.

The PXW-FX9 in the real world.

There are already a few setup and staged video samples from the new Sony PXW-FX9 circulating on the web. These are great. But how will it perform and what will the pictures look like for an unscripted, unprepared shoot? How well will the autofocus work out in the street, by day and by night? And how does the S-Cinetone gamma and colour in custom mode compare with S-Log3 and the s709 Venice LUT?

To answer these questions I took a pre-production FX9 into the nearby town of Windsor with a couple of cheap Sony E-Mount lenses. The lenses were the Sony 50mm f1.8 which costs around $350 USD and the 28-70mm f3.5-f5.6 zoom that costs about $400 USD and is often bundled as a kit lens with some of the A7 series cameras.

To find out how good the auto focus really is I decided to shoot entirely using auto focus with the AF set to face priority. The only shot in the video where AF was not used is the 120fps slow-mo shot of the swans at 0:53 as AF does not work at 120fps.

Within the video there are examples of both S-Cinetone and S-Log3 plus the s709 LUT. So you know which is which I have indicated this in the video. I needed to do this as the two cut together really well. There is no grading as such. The S-Cinetone content is exactly as it came from the camera. The CineEI S-Log3 material was shot at the indicated base ISO and EI, there was no exposure offset. In post production all I did was add the s709 LUT, that's it, no other corrections.

The video was shot using the Full Frame 6K scan, recording to UHD XAVC-I.

For exposure I used the camera's built-in waveform display. When in CineEI I also used the Viewfinder Gamma Display Assist function. Viewfinder Gamma Assist gives the viewfinder the same look as the 709(800) LUT. What's great about this is that it works in all modes and at all frame rates. So even when I switched to 2K Full Frame scan and 120fps the look of the image in the viewfinder remained the same, and this allowed me to get a great exposure match between the slow motion footage and the normal speed footage.

AUTOFOCUS.

There are some great examples of the way the autofocus works throughout the video. In particular the shot at 0:18 where the face priority mode follows the first two girls that are walking towards the camera, then as they exit the frame it switches to the two ladies following behind without any hunting. I could not have done that any better myself. Another great example is at 1:11 where the focus tracks the couple walking towards the camera and once they exit the shot the focus smoothly transitions to the background. One of the nice things about the AF system is you can adjust the speed at which the camera re-focusses and in this case I had slowed it down a bit to give it a more "human" feel.

Even in low light the AF works superbly well. At 1:33 I started on the glass of the ornate arch above the railway station and panned down as two people are walking towards me. The camera took this completely in its stride, doing a lovely job of shifting the focus from the arch to the two men. Again, I really don't think I could have done this any better myself.

NOISE.

Also, I am still really impressed by how little noise there is from this camera. Even in the high ISO mode the camera remains clean and the images look great. The low noise levels help the camera to resolve colour and details right down into the deepest shadows. Observe how at 2:06 you can clearly see the different hues of the red roses against the red leather of the car door, even though this is a very dark shot.

The reduction in noise and increase in real sensitivity also helps the super slow motion. Compared to an FS7 I think the 120fps footage from the FX9 looks much better. It seems to be less coarse and less grainy. There is still some aliasing which is unavoidable if you scan the sensor at a lower resolution, but it all looks much better controlled than similar material from an FS7.

DYNAMIC RANGE.

And when there is more light the camera handles this very well too. At 1:07 you can see how well S-Cinetone deals with a very high contrast scene. There are lots of details in the shadows and even though the highlights on the boats are clipped, the way the camera reaches the end of its range is very nice and it doesn't look nasty, it just looks very bright, which it was.

For me the big take-away from this simple shoot was just how easy it is to get good looking images. There was no grading, no messing around trying to get nice skin tones. The focus is precise and it doesn't hunt. The low noise and high sensitivity means you can get good looking shots in most situations. I'm really looking forward to getting my own FX9 as it's going to make life just that little bit easier for many of my more adventurous shoots.

For more information on the PXW-FX9 click here. 

Or take a look at the Sony website.

External Recording Options On The FX9.

There are a lot of people discussing the raw output option for the FX9. In particular the need to use the XDCA-FX9 adapter. First I do understand where those that think the XDCA is big, bulky and ugly are coming from. It certainly won’t win any design awards! And I also understand that we would all probably prefer to have the raw out direct from the camera body, but that’s not going to happen.

Besides which, the raw option won’t get enabled until some time next year in a firmware update. So in the meantime what are the alternatives?

Up to 30fps the FX9 can output UHD in 10 bit 4:2:2 over HDMI. At 60fps I’m led to believe that you can output 10 bit 4:2:2 UHD over the 12G SDI, but I have yet to actually test this.  The ability to output 30fps UHD over SDI requires 6G SDI and the standards for 6G SDI are still all over the place, but once the standards settle I am led to believe that 6G SDI should be added via a firmware update.

What this means is that it will be possible to output 10 bit 4:2:2 to an external recorder from launch at 24fps all the way to 60fps using either HDMI, SDI or a combination of the two. So I will be looking at using an Atomos Ninja V with the AtomX 12G SDI adapter to record the 10 bit output using ProRes HQ for those projects where I really want to squeeze every last bit of image quality out of the camera.

Don’t get me wrong, XAVC-I is a great codec, especially if you want compact files, but ProRes HQ will give me just a tiny bit less compression for those really demanding projects that I get involved in (for example shooting demo content for some of the TV manufacturers).
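To put some rough numbers on "a tiny bit less compression", here is the back-of-the-envelope maths for UHD 25p. The uncompressed figure is simple arithmetic; the codec bitrates are approximate published figures, so treat them as ballpark only:

```python
# Rough, back-of-the-envelope data rates for UHD 25p, 10 bit 4:2:2.
width, height, fps = 3840, 2160, 25
bits_per_pixel = 10 + 5 + 5          # Y at full resolution, Cb/Cr at half horizontal resolution

uncompressed_mbps = width * height * bits_per_pixel * fps / 1e6
print(f"uncompressed 10 bit 4:2:2 UHD {fps}p : ~{uncompressed_mbps:.0f} Mbps")

# Approximate published codec rates for UHD 25p:
prores_hq_mbps = 590     # Apple ProRes 422 HQ
xavc_i_mbps    = 250     # Sony XAVC-I
print(f"ProRes 422 HQ is roughly {uncompressed_mbps / prores_hq_mbps:.0f}:1 compression")
print(f"XAVC-I is roughly {uncompressed_mbps / xavc_i_mbps:.0f}:1 compression")
```

That works out to something like 7:1 for ProRes HQ against roughly 17:1 for XAVC-I, which is the "tiny bit less compression" I'm talking about.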

Atomos Ninja V on an A6300. This diminutive recorder will be a great option for conventional video recording from the FX9.

 

10 bit S-Log3 is very gradable. Because the FX9 has much less noise than the F5, FS7 or FS5 there will be no need to offset the exposure as I feel you need to do with those cameras. So ProRes HQ from an FX9 will be very, very nice to work with, and the Ninja V is small, compact and uses less power than the larger Shogun models, great for when I will be travelling with the camera.

So while the camera won't have raw for a while, and perhaps even when the raw option does become available, there are other ways to get some really great, highly gradable material from the FX9. Internal XAVC being one, but if you need ProRes you have some good options.

So What Does the XDCA-FX9 add?

I do suspect that the XDCA-FX9 is more than just a pass through for the raw data from the camera to the raw out SDI. To get 16 bit raw out of a camera is far more challenging than the 12 bit that the FS7 and FS5 produce. There must be some clever processing going on somewhere to squeeze 16 bit raw down a single SDI cable and I suspect that processing will be done in the XDCA-FX9 unit. The XDCA-FX9 obviously does contain a lot of processing power as it has its own fan cooling system. It does help balance the camera, and the FX9 with XDCA-FX9 and a V-lock battery does sit very nicely on your shoulder.

In addition the XDCA adds a whole host of streaming and internet connectivity functions allowing the FX9 to be used for live news via 4G dongles without the need for a Live-U or satellite truck. Plus it has handy switched D-Tap and Hirose power outputs.

I do look forward to getting an XDCA-FX9 for my FX9 and look forward to the raw being enabled. But even so there will also be many cases where I suspect the convenience of the compact Ninja V with the AtomX SDI adapter will be the perfect fit. It’s always good to have multiple options.

Just to make it clear – The Ninja V cannot and almost certainly never will be able to record raw from an FS5/FS7 or FX9, only conventional component video.

Can You Shoot Anamorphic with the PXW-FX9?

The simple answer as to whether the FX9 has an anamorphic mode or not is no, it doesn't. The FX9, certainly to start with, will not have a dedicated anamorphic mode and it's unknown whether it ever will. I certainly wouldn't count on it ever getting one (but who knows, perhaps if we keep asking for it we will get it).

But just because a camera doesn’t have a dedicated anamorphic mode it doesn’t mean you can’t shoot anamorphic. The main thing you won’t have is de-squeeze. So the image will be distorted and stretched in the viewfinder. But most external monitors now have anamorphic de-squeeze so this is not a huge deal and easy enough to work around.
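If you do end up with squeezed recordings and no monitor to hand, de-squeezing in software is trivial. As a simple illustration, here is a sketch using the Pillow imaging library (the file names are hypothetical); all it does is stretch the frame horizontally by the lens's squeeze factor, which is exactly what a monitor's de-squeeze function does in real time:

```python
from PIL import Image   # assumes the Pillow library is installed

def desqueeze(path_in, path_out, squeeze_factor=2.0):
    """Stretch a frame horizontally to undo an anamorphic lens squeeze."""
    frame = Image.open(path_in)
    w, h = frame.size
    frame.resize((int(w * squeeze_factor), h), Image.LANCZOS).save(path_out)

# e.g. a 4096x2160 frame grab shot with a 2x anamorphic becomes 8192x2160:
# desqueeze("frame_grab.png", "frame_grab_desqueezed.png", 2.0)
```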

1.3x or 2x Anamorphic?

With a 16:9 or 17:9 camera you can use 1.3x anamorphic lenses to get a 2.39:1 final image. So the FX9, like most 16:9 cameras, will be suitable for use with 1.3x anamorphic lenses out of the box.

But for the full anamorphic effect you really want to shoot with 2x anamorphic lenses. A 2x anamorphic lens will give your footage a much more interesting look than a 1.3x anamorphic. But if you want to reproduce the classic 2.39:1 aspect ratio normally associated with anamorphic lenses and 35mm film then you need a 4:3 sensor rather than a 16:9 one – or do you?

Anamorphic on the PMW-F5 and F55.

It's worth looking at shooting 2x anamorphic on the Sony F5 and F55 cameras. These cameras have 17:9 sensors, so they are not ideal for 2x anamorphic. However the cameras do have a dedicated anamorphic mode. Because the F55 sensor, like most super 35mm sensors, is not tall enough, when shooting with a 2x anamorphic lens you will end up with a very narrow 3.55:1 aspect ratio after de-squeezing. To avoid this very narrow final aspect ratio, once you have de-squeezed the image you need to crop the sides of the image by around 0.7x and then expand the cropped image to fill the frame. This not only reduces the resolution of the final output but also the usable field of view. But even with the resolution reduction as a result of the crop and zoom it was still argued that, because the F55 starts from a 4K sensor, this was roughly the equivalent of Arri's open gate 3.4K. However the loss of field of view still presents a problem for many productions.
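A quick Python check of those numbers, assuming the 16:9 active area that gives the 3.55:1 figure quoted above:

```python
# Working through the 2x anamorphic numbers described above for a 16:9 active area.
sensor_aspect = 16 / 9            # active area used for anamorphic capture
squeeze       = 2.0
target_aspect = 2.39              # the classic "scope" aspect ratio

desqueezed_aspect = sensor_aspect * squeeze
crop_factor       = target_aspect / desqueezed_aspect   # how much of the width survives

print(f"de-squeezed aspect ratio : {desqueezed_aspect:.2f}:1")   # ~3.56:1
print(f"width kept after cropping: {crop_factor:.2f}")           # ~0.67, i.e. the ~0.7x crop
```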

What if I have Full Frame 16:9?

The FX9 has a 6K full frame sensor and a full frame sensor is bigger, not just wider but most importantly it’s taller than s35mm. Tall enough for use with a 2x s35 anamorphic lens! The FX9 sensor is approx 34mm wide and 19mm tall in FF6K mode.

In comparison the Arri 35mm 4:3 open gate sensor area is 28mm x 18.1mm and we know this works very well with 2x anamorphic lenses as it mimics the size of a full size 35mm cine film frame. The important bit here is the height – 18.1mm for the Arri open gate and 18.8mm for the FX9 in Full Frame scan mode.

Sensor sizes and Anamorphic coverage.

Crunching the numbers.

If you do the maths, start with the FX9 in FF mode and use an s35mm 2x anamorphic lens.

Because the image is 6K subsampled to 4K the resulting recording will have 4K resolution.

But you will need to crop the sides of the final recording by roughly 30% to remove the left/right vignette caused by using an anamorphic lens designed for 35mm movie film (the exact amount of crop will depend on the lens). This then results in a 2.8K-ish resolution image, depending on how much you need to crop.

4K Bayer won't give 4K resolution.

That doesn’t seem very good until you consider that a 4K 4:3 bayer sensor would only yield about 2.8K resolution anyway.
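Putting the crop and the Bayer rule of thumb into numbers (the 70% Bayer yield is a commonly used approximation, not an exact figure):

```python
# Rough resolution arithmetic for the FX9 in FF mode with a super 35mm 2x anamorphic lens.
recorded_width_px = 4096        # the 6K full frame scan is downsampled to a 4K recording
crop_kept         = 0.70        # roughly 70% of the width survives removing the vignette
bayer_factor      = 0.70        # rule-of-thumb resolution yield of a Bayer sensor

print(f"FX9 recording after the crop  : ~{recorded_width_px * crop_kept:.0f} pixels wide")    # the "2.8K-ish" figure
print(f"4K 4:3 Bayer sensor, debayered: ~{recorded_width_px * bayer_factor:.0f} pixels of real resolution")
```

Both come out at roughly the same 2.8K-ish figure, which is the point being made here.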

Arri's s35mm cameras use open gate 3.4K Bayer sensors, so they will result in an even lower resolution image, perhaps around 2.2K. Do remember that the original Arri ALEV sensor was designed when 2K was the norm for the cinema and HD TV was still new. The Arri super 35 cameras were for a long time the gold standard for anamorphic because their sensor size and shape matches the size and shape of a full size 35mm movie film frame. But now cameras like Sony's Venice, which can shoot 6K as well as 4K 4:3 and 6:5, are starting to take over.

The FX9 in Full Frame scan mode will produce a great looking image with a 2x anamorphic lens without losing any of the field of view. The horizontal resolution won't be 4K due to the left and right edge crop required, but it should still be higher than you would get from a 4K 16:9 sensor or a 3.4K 4:3 sensor. And unlike a 16:9 4K sensor, where both the horizontal and vertical resolution are compromised, the FX9 keeps the full vertical resolution of the 4K frame, and that's important.

What about Netflix?

While Netflix normally insist on a minimum of a sensor with 4K of pixels horizontally for capture, they are permitting sensors with lower horizontal pixel counts to be used for anamorphic capture. This is because the increased sensor height needed for 2x anamorphic means that there are more pixels vertically. The total usable pixel count when using the Arri LF with a typical 35mm 2x anamorphic lens is 3148 x 2636 pixels. That's a total of about 8 megapixels, which is similar to the roughly 8 megapixel total pixel count of a 4K 16:9 sensor with a spherical lens. The argument is that the total captured picture information is similar for both, so both should be, and indeed are, allowed. The Arri format does lead to a final aspect ratio slightly wider than 2.39:1.
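The pixel count argument is easy to check, using the Alexa LF figures quoted above:

```python
# Comparing total captured pixels, which is the basis of the anamorphic exception.
arri_lf_anamorphic = 3148 * 2636    # usable area of the Alexa LF with a 35mm 2x anamorphic
uhd_16x9_spherical = 3840 * 2160    # a typical 4K 16:9 sensor with a spherical lens

print(f"Alexa LF 2x anamorphic : {arri_lf_anamorphic / 1e6:.1f} Mpixel")   # ~8.3 Mpixel
print(f"4K 16:9 spherical      : {uhd_16x9_spherical / 1e6:.1f} Mpixel")   # ~8.3 Mpixel
```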

Alexa LF v FX9 and super 35mm 2x anamorphic.

 

So could the FX9 get Netflix approval for 2x Anamorphic?

The FX9's sensor is 3168 pixels tall when shooting FF 16:9, as its pixel pitch is finer than that of the Arri LF sensor. When working with a 2x anamorphic super 35mm lens the image circle from the lens will cover around 4K x 3K of pixels, a total of 12 megapixels on the sensor when it's operating in the 6K Full Frame scan mode. But then the FX9 will internally down scale this to that vignetted 4K recording that needs to be cropped.

6K down to 4K means that the 4K covered by the lens becomes roughly 2.7K. But then the 3.1K from the Arri, when debayered, will more than likely be even less than this, perhaps only 2.1K.

But whether Netflix will accept the in camera down conversion is a very big question. The maths indicates that the resolution of the final output of the FX9 would be greater than that of the LF, even taking the necessary crop into account. But this would need to be tested and verified in practice. If the math is right, I see no reason why the FX9 won’t be able to meet Netflix’s minimum requirements for 2x anamorphic production. If this is a workflow you wish to pursue I would recommend taking the 10 bit 4:2:2 HDMI out to a ProRes recorder and record using the best codec you can until the FX9 gains the ability to output raw. Meeting the Netflix standard is speculation on my part, perhaps it never will get accepted for anamorphic, but to answer the original question –

 – Can you shoot anamorphic with the FX9 – Absolutely, yes you can and the end result should be pretty good. But you’ll have to put up with a distorted image with the supplied viewfinder (for now at least).

Thinking about new lenses for the FX9?

Sony 28-135mm f4 zoom on the PXW-FX9

If you are starting to think about lenses to take advantage of the FX9's amazing autofocus capabilities then you should know that I have tested quite a few different lenses on the FX9 now. I have yet to find a Sony lens where the AF hasn't worked really well. Even the low cost Sony 50mm f1.8 and 28mm f2 lenses worked very well. In fact I actually quite like both of these lenses and they represent great value for the money.

But what I have found is that non Sony lenses have not worked well. I have been testing a range of lenses on various pre-production cameras. Maybe this situation will improve through firmware updates, I would hope so, but I honestly don't know. The E-mount Sigma 18-35 and 20mm Art lenses I tried were not at all satisfactory. The AF worked, but in what appears to be a contrast only mode. The autofocus was much slower and hunted compared to the fast, hunt-free AF with the Sony lenses. You would not want to use it, which is a great shame as these lenses are optically very nice.

It's the same story when using Canon EF lenses via both Metabones and Viltrox adapters (I have not tested the Sigma MC11). Phase AF does not appear to work, only contrast, and it's slow.

So if you are thinking about buying lenses for the FX9, the only lenses I can recommend right now are Sony lenses. Don't (at this stage at least) buy other brand E-mount lenses or expect to use lenses via adapters unless you can find a way to test them on an FX9 first.

PXW-FX9 Event Stockholm 9th October.

I will be in Stockholm on October the 9th to demonstrate the new PXW-FX9 as part of Sony’s FX9 roadshow. There is a morning and an afternoon session. For more information and to book a place on the morning session please CLICK HERE.

For the afternoon session please CLICK HERE.

I will be showing the FX9 at further events in Sweden, Holland and Norway in October so keep an eye out for the info on these. I will also be in Canada at the end of November for workshops in Toronto, Montreal and Vancouver.