For this year’s Glastonbury festival I chose to use a combination of a Sony A1, FX3 and FX30 (we also used a DJI Pocket 3 and a Wirral wire cam). These are all small cameras and the screens on the back of them are really rather small. So, I wanted to use an external monitor to make it easier to be sure I was in focus.
I have been aware of the Portkeys monitors for some time, and in particular their ability to remotely control Sony cameras via WiFi. So this seemed like the perfect opportunity to try out the LH7P, as it would give me the ability to control the camera’s touch tracking autofocus using the monitor’s touch screen. So, I obtained a demo unit from Portkeys to try. Click here for the Portkeys LH7P specs.
I have to say that I am pretty impressed by how well this relatively cheap monitor performs. It has a 1000 Nit screen so it’s pretty bright and overall the colour and contrast accuracy is pretty good. It won’t win any awards for having the very best image, but it is pretty decent and certainly good enough for most on camera applications.
The LH7P is HDMI only, but this helps keep the weight and power consumption down. While mostly made of plastic it does feel robust enough for professional use. But I wouldn’t be rough with it.
The monitor is very thin and very light. It runs off the very common Sony NP-F style batteries or via a DC in socket that accepts 7 to 24 volts, a surprisingly large range that allows you to use it with almost any battery found in the world of film and TV. It uses very little power at around 9 watts, so the larger NP-F type batteries will run it for at least 3 or 4 hours.
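As a rough sanity check on those runtime figures, here is the arithmetic. This is a sketch using nominal capacities (voltage × amp-hours) for common NP-F sizes; exact figures vary by battery brand and real-world runtime will be a little lower as cells age:

```python
# Rough runtime estimate for the LH7P from its quoted ~9 watt draw.
# Capacities below are nominal figures for typical NP-F style cells,
# not measured values for any particular brand.
MONITOR_WATTS = 9

batteries_wh = {
    "NP-F550": 7.2 * 2.9,   # ~21 Wh
    "NP-F750": 7.2 * 4.4,   # ~32 Wh
    "NP-F970": 7.2 * 6.6,   # ~48 Wh
}

for name, wh in batteries_wh.items():
    print(f"{name}: ~{wh / MONITOR_WATTS:.1f} hours")
```

The larger NP-F750/970 sizes land in the 3.5 to 5 hour range, which lines up with the 3 to 4 hours of practical runtime mentioned above.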
It’s a touch screen monitor and the touch operated menu system is quite straightforward. One small issue is that if you are using the monitor’s touchscreen to control the camera’s touch autofocus you can’t also use the touchscreen to access the menu system or change the camera’s other settings; it’s one or the other. When connected to a camera, to use the monitor’s menus or access the camera settings you must have the touch tracking focus control turned off. If you are using the touch tracking controls there are 4 assignable buttons on the top of the monitor and you can assign things like peaking, zebras, false colour etc to these, so most of the time having to choose between touch focus and touch menus isn’t a big drama, as these buttons can be used to turn your most commonly used exposure and focus tools on and off. But you do have to remember to turn off the touch tracking if you want to change another setting from the monitor.
When you are using the monitor to control the touch tracking it is very responsive, and because there is minimal latency thanks to the direct HDMI connection to the camera it works well – just touch where you want the camera to focus. The only downside is that you don’t get a tracking box on the monitor’s screen. This is because Sony don’t output the tracking box overlay over HDMI.
As a result there may be times when you do need to look at the LCD on the back of the camera to see what the camera is tracking. When I used it at Glastonbury I didn’t find this to be too much of a problem; if I was unsure of what the camera was focussing on, I simply touched the LH7P’s screen where I wanted to focus.
Pairing the monitor with the camera is simple, but you do need to make sure the camera’s WiFi is set to 2.4GHz, as this is the only band the monitor supports. To see how to pair it with an FX3 please watch the video linked above. Once connected I found the connection to be very stable and I didn’t experience any unexpected disconnects, even when the venue at Glastonbury was completely full.
I have to say that this low cost monitor has really surprised me. The image quality is more than acceptable for a 7″ monitor, and controlling the camera via the monitor’s touch screen is a very nice way to work, especially given the small size of the LCD screen on a camera like the FX3 or A1. I haven’t had it all that long, so I don’t know what the long term reliability is like, but for what it costs it represents excellent value.
I’m running a film making workshop around “how to get the film look” in Dubai for Nanlite and Sony on the 25th of May. During the workshop I will be showing how to expose S-Log3 on the Sony FX series cameras, how to use CineEI and then looking at film style lighting using Nanlite fixtures. We will look at a couple of different types of scenes, an office, a romantic scene and also at how to light for greenscreen.
I will also be at Cabsat 2024, so do drop by the Nanlite booth to say hello.
This is part 2 of my 2 part look at whether small cameras such as a Sony FX3 or A1 really can replace full size cinema cameras.
For this part of the article to make sense you will want to watch the YouTube clips that are linked here full screen at the highest possible quality settings, preferably 4K. Please don’t cheat; watch them in the order they are presented, as I hope this will allow you to better understand the points I am trying to make.
Also, in the videos I have not put the different cameras that were tested side by side. You may ask why – well, it’s because when you watch a video online or a movie in a cinema you don’t see different cameras side by side on the same screen at the same time. A big point of all of this is that we are now at a place where the quality of even the smallest and cheapest large sensor camera is likely good enough to make a movie. It’s not necessarily a case of whether camera A is better than camera B; the question is whether the audience will know or care which camera you used. There are 5 cameras and I have labelled them A through to E.
The footage presented here was captured during a workshop I did for Sony at Garage Studios in Dubai (if you need a studio space in Dubai they have some great low budget options). We weren’t doing carefully orchestrated camera tests, but I did get the chance to quickly capture some side by side content.
So let’s get into it.
THE FINAL GRADE:
In many regards I think this is the most important clip as this is how the audience would see the 5 cameras. It represents how they might look at the end of a production. I graded the cameras using ACES in DaVinci Resolve.
Why ACES? Well, the whole point of ACES is to neutralise any specific camera “look”. The ACES input transform takes the camera’s footage and converts it to a neutral look that is meant to represent the scene as it actually was, but with a film like highlight roll off added. From here the idea is that you can apply the same grade to almost any camera and the end result should look more or less the same. The look of different cameras is largely a result of differences in the electronic processing of the image rather than large differences in the sensors. Most modern sensors capture a broadly similar range of colours with broadly similar dynamic range. So, provided you know what recording levels represent what colours in the scene, it is pretty easy to make any camera look like any other, which is what ACES does.
The footage captured here was captured during a workshop; we weren’t specifically testing the different cameras in great depth. For the workshop the aim was to simply show how any of these cameras could work together. For simplicity and speed I manually set each camera to 5600K, and as a result of the inevitable variations you get between different cameras, how each is calibrated and how each applies the white balance settings, there were differences in the colour balance of each camera.
To neutralise these white balance differences the grading process started by using the colour chart to equalise the images from each camera using the “match” function in DaVinci Resolve. Then each camera had exactly the same grade applied – there are no grading differences, they are all graded in the same way.
Below are frame grabs from each camera with a slightly different grade to the video clips, again, they all look more or less the same.
The first thing to take away from all of this, then, is that you can make pretty much any camera look like any other. A chart such as the “color checker video”, together with software that can read the chart and correct the colours accordingly, makes it much easier to do this.
To allow for issues with the quality of YouTube’s encoding etc here is a 400% crop of the same clips:
What I am expecting is that most people won’t actually see a great deal of difference between any of the cameras. The cheapest camera is $6K and the most expensive $75K, yet it’s hard to tell which is which or see much difference between them. The things that do perhaps stand out initially in the zoomed in image are the softness/resolution differences between the 4K and 8K cameras, but in the first uncropped clip this difference is much harder to spot, and I don’t think an audience would notice, especially if one camera is used on its own so the viewer has nothing to directly compare it with. It is possible that there are also small focus differences between each camera; I did try to ensure each was equally well focussed but small errors may have crept in.
WHAT HAPPENS IF WE LIFT THE SHADOWS?
OK, so let’s pixel peep a bit more and artificially raise the shadows so that we can see what’s going on in the darker parts of the image.
There are differences, but again there isn’t a big difference between any of the cameras. You certainly couldn’t call them huge and in all likelihood, even if for some reason you needed to raise or lift the shadows by an unusually large amount as done here (about 2.5 stops) the difference between “best” and “worst” isn’t large enough for it to be a situation where any one of these cameras would be deemed unusable compared to the others.
SO WHY DO YOU WANT A BETTER CAMERA?
So, if we are struggling to tell the difference between a $6K camera and a $75K one why do you want a “better” camera? What are the differences and why might they matter?
When I graded the footage from these cameras in the workshop it was actually quite difficult to find a way to “break” the footage from any of them. For the majority of grading processes that I tried they all held up really well and I’d be happy to work with any of them, even the cameras using the highly compressed internal recordings held up well. But there are differences, they are not all the same and some are easier to work with than the others.
The two cheapest cameras were a Sony FX3 and a Sony A1. I recorded using their built in codecs, XAVC-SI in the FX3 and XAVC-HS in the A1. These are highly compressed 10 bit codecs. The other cameras were all recorded using their internal raw codecs, which are either 16 bit linear or 12 bit log. At some point I really do need to do a proper comparison of the internal XAVC from the FX3 and the ProResRaw that can be recorded externally. But it is hard to do a fully meaningful test, as getting the ProResRaw into Resolve requires transcoding and a lot of other awkward steps. From my own experience the difference in what you can do with XAVC v ProResRaw is very small.
One thing that happens with most highly compressed codecs such as H264 (XAVC-SI) or H265 (XAVC-HS) is a loss of some very fine textural information, and the image breaking up into blocks of data. But as I am showing these clips via YouTube in a compressed state I needed to find a way to illustrate the subtle differences that I see when looking at the original material. So, to show the difference between the different sensors and codecs within these cameras I decided to pick a colour using the Resolve colour picker and then turn that colour into a completely different one, in this case pink.
What this allows you to see is how precisely the picked colour is recorded, and it also shows up some of the macro block artefacts. Additionally it gives an indication of how fine the noise is and of the textural qualities of the recording. In this case the finer the pink “noise” the better, as this is an indication of smaller, finer textural differences in the image. These smaller textural details would be helpful if chroma keying, or perhaps for some types of VFX work. It might (and I say might because I’m not convinced it always will) allow you to push a very extreme grade a little bit further.
I would guess that by now you are starting to figure out which camera is which – the cameras are an FX3, A1, Burano, Venice 2 and an Arri LF.
In this test you should be able to identify the highly compressed cameras from the raw cameras. The pink areas from the raw cameras are finer and less blocky; this is a good representation of the benefit of less compression and a greater bit depth.
But even here the difference isn’t vast. It certainly, absolutely, exists. But at the same time you could push ANY of these cameras around in post production and if you’ve shot well none of them are going to fall apart.
As a side note I will say that I find grading linear raw footage such as the 16 bit X-OCN from a Venice or Burano more intuitive compared to working with compressed Log. As a result I find it a bit easier to get to where I want to be with the X-OCN than the XAVC. But this doesn’t mean I can’t get to the same place with either.
RESOLUTION MATTERS.
Not only is compression important but so too is resolution. To some degree increasing the resolution can make up for a lesser bit depth. As these cameras all use bayer sensors the chroma resolution will be somewhat less than the luma resolution. A 4K sensor such as the one in the FX3 or the Arri LF will have much lower chroma resolution than the 8K A1, Burano or Venice 2. If we look at the raised shadows clip again we can see some interesting things going on in the girl’s hair.
If you look closely camera D has a bit of blocky chroma noise in the shadows. I suspect this might be because this is one of the 4K sensor cameras and the lower chroma resolution means the chroma noise is a bit larger.
I expect that by now you have an idea of which camera is which, but here is the big reveal: A is the FX3, B is the Venice 2, C is Burano, D is an Arri LF, and E is the Sony A1.
What can we conclude from all of this:
There are differences between codecs. A better codec with a greater bit depth will give you more textural information. It is not necessarily that raw will always be better than YUV/YCbCr, but because of raw’s compression efficiency it is possible to have very low levels of compression and a deep bit depth. So, if you are able to record with a better codec or greater bit depth, why not do so? There are some textural benefits and there will be fewer compression artefacts. BUT this doesn’t mean you can’t get a great result from XAVC or another compressed codec.
If using a bayer sensor, then using a sensor with more “K” than the delivery resolution can bring textural benefits.
There are differences in the sensors, but these differences are not really as great as many might expect. In terms of DR they are all actually very close, close enough that in the real world it isn’t going to make a substantial difference. As far as your audience is concerned I doubt they would know or care. Of course we have all seen the tests where you greatly under expose a camera and then bring the footage back to normal, and these can show differences. But that’s not how we shoot things. If you are serious about getting the best image that you can, then you will light to get the contrast and exposure that you want. What isn’t in this test is rolling shutter, but generally I rarely see issues with rolling shutter these days. But if you are worried about RS, then the Venice 2 is excellent and the best of the group tested here.
Assuming you have shot well there is no reason why an audience should find the image quality from the $6K FX3 unacceptable, even on a big screen. And if you were to mix an FX3 with a Venice 2 or Burano, again, if you have used each camera equally well I doubt the audience would spot the difference.
BACK TO THE BEGINNING:
So this brings me back to where I started in part 1. I believe this is the age of the small camera – or at least there is no reason why you can’t use a camera like an FX3 or an A1 to shoot a movie. While many of my readers I am sure will focus on the technical details of the image quality of camera A against camera B, in reality these days it’s much more about the ergonomics and feature set as well as lens and lighting choices.
A small camera allows you to be quick and nimble, but a bigger camera may give you a lot more monitoring options as well as other things such as genlock. And, if you can, having a better codec doesn’t hurt. So there is no “one fits all” camera that will be the right tool for every job.
As Sony’s new Burano camera starts to ship – a relatively small camera that could comfortably be used to shoot a blockbuster movie – we have to look at how, over the last few years, the size of the cameras used for film production has reduced.
Only last year we saw the use of the Sony FX3 as the principal camera for the movie The Creator. What is particularly interesting about The Creator is that the FX3 was chosen by the director Gareth Edwards for a mix of both creative and financial reasons.
To save money or to add flexibility?
To save money, rather than building a lot of expensive sets, Edwards chose to shoot on location using a wide and varied range of locations (80 different locations) all over Asia. To make this possible he used a smaller than usual crew. Part of the reasoning given was that it was cheaper to fly a small crew to all these different locations than to try to build a different set for each part of the film. The film cost $80 million to make and took $104 million at the box office, a pretty decent profit at a time when many movies take years to break even.
The FX3 was typically mounted on a gimbal and this allowed them to shoot quickly and in a very fluid manner, making use of natural light where possible. A 2x anamorphic lens was used and the final delivery aspect ratio was a very wide 2.76:1. The film was edited first and then when the edit was locked down the VFX elements were added to the film. Modern tracking and rotoscoping techniques make it much easier to add VFX into sequences without needing to use green or blue screen techniques and this is one of those areas where AI will become a very useful and powerful tool.
You don’t NEED a big camera, but you might want one.
So, what is clear is that you don’t NEED a big camera to make a feature film and The Creator demonstrates that an FX3 (recording to an Atomos Ninja) offers sufficient image quality to stand up to big screen presentation. I don’t think this is really anything new, but we have now reached the stage where the difference in image quality between a cheap $1500 camera like the FX30 and a high end “cinema” camera like the $70K Venice 2 is genuinely so small that an audience probably won’t notice.
There may be reasons why you might prefer to have a bigger camera body – it does make mounting accessories easier and will often have much better monitoring and viewfinder options. And you may argue that a camera like Venice can offer greater image quality (as you will see in part 2 – it technically does have a higher quality image than the FX3), but would the audience actually be able to see the difference and even if they can would they actually care? And what about post production – surely a better quality image is a big help with post – again come back for part 2 where I explore this in more depth.
And small cameras will continue to improve. If what we have now is already good enough things can only get better.
8K Benefits??
Since the launch of Burano I’ve become more and more convinced of the benefits of an 8K sensor. Even if you only ever intend to deliver in 4K, the extra chroma resolution from actually having 4K of R and B pixels makes a very real difference. Venice 2 really made me much more aware of this and Burano confirms it. Because of this I’ve been shooting a lot more with the Sony A1 (which possibly shares the same sensor as Burano). There is something I really like about the textural quality of the images from the A1, Burano and Venice 2 (having said that, after spending hours looking at my side by side test samples from both 4K and 8K cameras, while the difference is real I’m not sure it will always be seen in the final deliverable). In addition, when using a very compressed codec such as the XAVC-HS in the A1, recording at 8K leads to smaller artefacts which then tend to be less visible in a 4K deliverable. This allows you to grade the material harder than perhaps you can with similarly compressed 4K footage. The net result is that the 10 bit 8K looks fantastic in a 4K production.
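The chroma resolution point can be sketched with a little arithmetic. On a bayer sensor only a quarter of the photosites are red and a quarter blue, and demosaicked luma resolution is commonly estimated at roughly 75% of the horizontal pixel count. The figures below are illustrative estimates, not measured values for any specific camera:

```python
# Illustrative bayer-sensor arithmetic: an 8K sensor has a full 4K of
# red and a full 4K of blue photosites across the frame, while a 4K
# sensor has only 2K of each. Luma is estimated at ~75% of pixel count.
def bayer_estimates(h_pixels, luma_factor=0.75):
    return {
        "approx_luma": int(h_pixels * luma_factor),
        "red_or_blue_across": h_pixels // 2,
    }

print(bayer_estimates(4096))  # 4K sensor: ~3K luma, only 2K of R/B
print(bayer_estimates(8192))  # 8K sensor: ~6K luma, a full 4K of R/B
```

This is why an 8K capture can carry genuinely full 4K chroma into a 4K deliverable, where a 4K capture cannot.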
I have to wonder if The Creator wouldn’t have been better off being shot with an A1 rather than an FX3. You can’t get 8K raw out of an A1, but the extra resolution makes up for this and it may have been a better fit for the 2x anamorphic lens that they used.
So many choices….
And that’s the thing – we have lots of choices now. There are many really great small cameras, all capable of producing truly excellent images. A small camera allows you to be nimble. The grip and support equipment becomes smaller. This allows you to be more creative. A lot of small cameras are being used for the Formula 1 movie; small cameras are often mixed with larger cameras, and these days the audience isn’t going to notice.
Plus we are seeing a change in attitudes. A few years ago most cinematographers wouldn’t have entertained the idea of using a DSLR or pocket sized camera as the primary camera for a feature. Now it is different; a far greater number of DPs are looking at what a small camera might allow them to do, not just as a B camera but as the A camera. When the image quality stops being an issue, small might allow you to do more.
This doesn’t mean big cameras like Venice will go away, there will always be a place for them. But I expect we will see more and more really great theatrical releases shot with cameras like the FX3 or A1 and that makes it a really interesting time to be a cinematographer. Again, look at The Creator – this was a relatively small budget for a science fiction film packed with CGI and other effects. And it looked great. Of course there is also that middle ground, a smaller camera but with the image quality of a big one – Burano perhaps?
In Part 2……
In part 2 I’m going to take some sample clips that I grabbed at a recent workshop from a Venice 2, Burano, A1 and FX3 and show you just how close the footage from these cameras is. I’ll also throw in some footage from an Arri LF and then I’ll “break” the footage in post production to give you an idea of where the differences are and whether they are actually significant enough to worry about.
This is a really useful teeny tiny input/output box from the guys at Mutiny. It allows users to input timecode into the FX3 or FX30 as well as connect a remote rec run control to start or stop recording. This will be so useful for those using the camera on a crane or jib as well as many other applications where the camera needs to be controlled remotely.
The Mutiny TC-R/S for the Sony FX3 and FX30 cameras feeds Timecode IN and R/S (remote triggering) via the multi-terminal (Multiport). It works with every FIZ wireless follow focus system (Preston, Arri, C-Motion, Nucleus, Heden, etc) as well as every timecode generator (Tentacle, Deity, Denecke, Ambient, etc). Orders start shipping Monday in the order taken. https://mutiny.store/products/tcrs
Next week I head out to Norway for my annual trip in search of the Northern lights. Like last year I will try to stream the Aurora live from Norway. Of course this does depend on the weather and whether the Aurora comes out to play.
The plan is to stream each evening from around 6pm CET Central European time starting from February 2nd. I will stream for as long as I can when the Aurora is visible. I have scheduled 5 YouTube live streams but there will likely be more added depending on the weather and many other variables that are out of my control. These streams may start later than planned or get interrupted if I need to move the camera position or if I run out of power. As well as the scheduled streams I intend to include additional streams where I will go over the equipment used and things like that.
To stream the Aurora I will be using various pieces of kit, including my Sony FX3 camera connected to an Accsoon SeeMo or an Accsoon CineView. The SeeMo connects to an iPhone directly via a cable and I can then stream the output of the FX3 from the phone. However, the area where I will be doesn’t have the best cell phone signal, so I might need to use the CineView. With the CineView connected to the camera I can send the pictures to my phone and then stream from the phone. This way I can put the phone in a location where there is a better signal.
As many of my regular readers will know, every year I run tours to the very north of Norway, taking small groups of adventurers well above the Arctic Circle in the hope of seeing the Aurora Borealis or Northern Lights. I have been doing this for around 20 years, and as cameras have improved it has become easier and easier to video the Aurora in real time, so that what you see in the video matches what you would have seen if you had been there yourself.
In the past, Aurora footage was almost always shot using long exposures and time lapse, sometimes with photo cameras or with older video cameras like the Sony EX1 or EX3, which resulted in greatly sped up motion and the loss of many of the finer structures seen in the Aurora. I do still shoot time lapse of the Aurora using still photos, but in this video I give you a bit of a behind the scenes look at one of my trips, with details of how I shoot the Aurora with the Sony FX3 in real time and also with the FX30 using S&Q motion. The video was uploaded in HDR, so if you have an HDR display you should see it in HDR; if not it will be streamed to you in normal standard dynamic range. The cameras used are Sony’s FX3 and FX30. The main lenses are the Sony 24mm f1.4 GM and 20mm f1.8 G, but when out and about on the snow scooters I use the Sony 18-105 G power zoom on the FX30 for convenience.
I used the Flexible ISO mode in the cameras to shoot S-Log3 with the standard s709 LUT for monitoring. I don’t like going to crazy high ISO values as the images get too noisy, so I tend to stick to 12,800 or 25,600 ISO on the FX3 or a maximum of 5000 ISO on the FX30 (generally on the FX30 I stay at 2500). If the images are still not bright enough I will use a 1/12th shutter speed at 24fps. This does mean that pairs of frames will be the same, but at least the motion remains real-time and true to life.
If that still isn’t enough, rather than raising the ISO still further I will go to the camera’s S&Q (slow and quick) mode and drop the frame rate down to perhaps 8fps with a 1/8th shutter, 4fps with a 1/4 shutter, or perhaps all the way down to 1fps and a 1 second shutter. But once you start shooting at these low frame rates the playback will be sped up, and you do start to lose many of the finer, faster moving and more fleeting structures within the aurora because of the extra motion blur.
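The trade-off can be put into numbers. A sketch, assuming 24fps playback and a normal 1/48 (180 degree) shutter as the reference point – each S&Q step buys light at the cost of a playback speed-up:

```python
import math

PLAYBACK_FPS = 24
NORMAL_SHUTTER = 1 / 48  # typical 180-degree shutter at 24fps

# (capture fps, shutter time in seconds) for the settings discussed above
settings = [(24, 1 / 12), (8, 1 / 8), (4, 1 / 4), (1, 1.0)]

for fps, shutter in settings:
    speedup = PLAYBACK_FPS / fps           # how much faster playback looks
    extra_stops = math.log2(shutter / NORMAL_SHUTTER)  # extra light gathered
    print(f"{fps}fps @ {shutter:.3f}s shutter: {speedup:g}x speed-up, "
          f"+{extra_stops:.1f} stops vs 1/48")
```

So 1fps with a 1 second shutter gathers over 5 stops more light than a normal shutter, but everything plays back 24x faster, which is exactly why the fleeting structures disappear.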
So much of all of this will depend on the brightness of the Aurora. Obviously a bright Aurora is easier to shoot in real time than a dim one. This is where patience and perseverance pay off. On a dark arctic night, if you are sufficiently far north, the Aurora will almost always be there even if very faint. And you can never be sure when it might brighten. It can go from dim and barely visible to bright and dancing all across the sky in seconds – and it can fade away again just as fast. So, you need to stay outside in order to catch those often brief bright periods. On my trips it is not at all unusual for the group to start the evening outside watching the sky, but after a couple of hours of only a dim display most people head inside to the warm, only to miss out when the Aurora brightens. Because of this we do try to have someone on aurora watch.
During 2024 we should be at the peak of the sun’s 11 year solar cycle, so this winter and next winter should present some of the best Aurora viewing conditions for a long time to come. My February 2024 Norway trip is sold out, but I can run extra trips or bespoke tours if wanted, so do get in touch if you need my help. There is more information on my tours here: https://www.xdcam-user.com/northern-lights-expeditions-to-norway/
I will be back in Norway from the 1st of February, keep an eye out for any live streams, I will be taking an Accsoon SeeMo to try to live stream the Aurora.
A lot of people like to shoot anamorphic with the FX3 or FX6. And they do get great looking images. The best example of this most recently is the blockbuster movie “The Creator” which was shot with an FX3 using 2x anamorphic lenses.
But there are a couple of things to consider with Anamorphic.
The first is what aspect ratios the sensor supports and what aspect ratio you want to deliver. The FX3 is always either 16:9 or 17:9, so if you want your final output to have that classic 2.39:1 (2.40:1) aspect ratio you need to use a 1.3x anamorphic while shooting 16:9, as this will allow you to use the full sensor.
If you use a 1.6x lens and do not crop the sides of the image in post you will have a much narrower 2.8:1 aspect ratio; 1.6x lenses work best with 3:2 sensors. With a 2x anamorphic lens you would end up with an extremely narrow 3.5:1 aspect ratio unless you do some serious side cropping, which will reduce the horizontal resolution of the final image. If you use a classic 2x anamorphic lens designed for 35mm film you will almost certainly have a noticeable vignette on either side of the frame, as these lenses are designed for the narrow but tall frame of 35mm film. You are going to need to remove this vignette by cropping. If you only deliver in HD this may not be an issue, but for 4K delivery it means your footage is no longer really 4K. As a side note, it is interesting that this is exactly how “The Creator” was shot, using 2x anamorphics, though I am led to believe that extensive use of AI was made when scaling the image in post. If you do need to crop the image the FX9 has a bit of an advantage, as its sensor operates at 6K in full frame, so its 4K recordings have higher resolution than the recordings from the FX3 or FX6 (remember a bayer sensor actually resolves at about 75% of the pixel count, so a 4K sensor delivers a 3K image while a 6K sensor delivers a 4K image). Burano will be a good camera to use, as even after you crop in to the 8K (pixel) image what is left will still be around 6K of pixels and full 4K resolution.
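The aspect ratio arithmetic here is simple enough to check for yourself: the delivered ratio is just the sensor’s native ratio multiplied by the lens’s squeeze factor. A quick sketch (the function name is mine):

```python
# Delivered aspect ratio = sensor ratio x anamorphic squeeze factor.
def desqueezed_ratio(sensor_w, sensor_h, squeeze):
    return (sensor_w / sensor_h) * squeeze

print(round(desqueezed_ratio(16, 9, 1.33), 2))  # 2.36 - close to 2.39:1
print(round(desqueezed_ratio(16, 9, 1.6), 2))   # 2.84 - the ~2.8:1 case
print(round(desqueezed_ratio(16, 9, 2.0), 2))   # 3.56 - needs a big side crop
print(round(desqueezed_ratio(6, 5, 2.0), 2))    # 2.4 - why 6:5 scans suit 2x lenses
```

The last line shows why a taller 6:5 scan is the natural home for a 2x anamorphic: it lands on 2.4:1 with no side crop at all.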
Then the other thing to consider is de-squeeze. It can be quite challenging to focus if you have the wrong de-squeeze, and if the collimation of the lens is off you may not notice that the horizontal and vertical focus points are different, so shots may not be as sharp as they should be. You could always use an external monitor with the de-squeeze you need.
So, depending on how you look at it, the only lenses that might be considered “fully compatible” are full frame 1.33x anamorphics, as these will give the classic 2.40:1 aspect ratio without cropping and the camera supports 1.33x de-squeeze. But these are not common. Any other anamorphic squeeze ratio will require some post work. Classic 2x anamorphics were designed for super 35mm open gate 4:3, and even used like this they still needed a slight side crop for 2.39:1. Use them on a FF 16:9 sensor and you will need to make a big side crop. For full frame anamorphic lenses these days it is common to use a 6:5 scan, which is more square than 4:3, so the side crop is no longer needed. Additionally, for FF a 1.8x squeeze is becoming very common, designed specifically to work with a FF 6:5 sensor. But sadly the FX3 doesn’t really have a scan mode tall enough to fully take advantage of modern FF anamorphics. That doesn’t mean you can’t use them, it’s just not an ideal situation.
This is another one from social media, and the same question gets asked a lot. The short answer is…
NO.
Even with Sony’s earlier S-Log3 cameras you didn’t ALWAYS need to over expose. When shooting a very bright, well lit scene you could get great results without shooting extra bright. But the previous generations of Sony cameras (FS5/FS7/F5/F55 etc) were much noisier than the current cameras, so to get a reasonably noise free image it was normal to expose a bit brighter than the base Sony recommendation. My own preference was to shoot between 1 and 1.5 stops brighter than the Sony recommended levels (click here for the F5/F55, here for the FS7 and here for the FS5).
The latest cameras (FX30, FX3, FX6, FX9 etc) are not nearly as noisy, so for most shots you don’t need to expose extra bright, just expose well (by this I mean exposing correctly for the scene being shot). This doesn’t mean you can’t or shouldn’t expose brighter or darker if you understand how a brighter/darker exposure shifts your overall range up and down: perhaps exposing brighter when you want more shadow information and less noise at the expense of some highlight range, or exposing darker when you must have more highlight information but can live with a bit more noise and less shadow range.
What I would say is that exposure consistency is very important. If you constantly expose to the right so every shot is near to clipping, then your exposure becomes driven by the highlights in the shot rather than the all-important mid range where faces, skin tones, plants and foliage etc live. As the gap between the highlights and the mids varies greatly, exposure based on highlights tends to result in footage where the mid range is up and down from shot to shot, and this makes grading more challenging as every shot needs a unique grade. Base the exposure on the mid range and you will be more consistent from shot to shot, and grading will be easier.
This is where the CineEI function really comes into its own: choosing the most appropriate EI for the type of scene you are shooting and the level of noise you are comfortable with, then basing the exposure off the image via the built-in LUT, will help with consistency (you could even use a light meter set to the ISO that matches the EI setting). Use a lower EI for scenes where you need more shadow range or less noise, and a higher EI for scenes where you must have a greater highlight range. There is no “one fits all” setting; it depends on what you are shooting. This is the real skill, using the most appropriate exposure for the scene you are shooting (see here for CineEI with the FX6 and with the FX9).
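The relationship between EI and exposure is straightforward to sketch: metering at an EI below the camera's base ISO means exposing brighter by log2(base/EI) stops, while an EI above base means exposing darker. A minimal illustration (the function name is mine, and a low base ISO of 800 is assumed for the example, as found on cameras like the FX3 and FX6 in S-Log3):

```python
import math

def exposure_offset_stops(base_iso, ei):
    """Stops of extra exposure given to the sensor at a given EI,
    relative to the base ISO. A lower EI means a brighter exposure
    (less noise, more shadow detail, less highlight headroom)."""
    return math.log2(base_iso / ei)

print(exposure_offset_stops(800, 400))   # 1.0  -> exposed 1 stop brighter
print(exposure_offset_stops(800, 1600))  # -1.0 -> exposed 1 stop darker
```

This is why a lower EI gives cleaner shadows at the cost of highlight range: the sensor itself is receiving more light, even though the recorded S-Log3 gain never changes.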
So how do you get that skill? Experiment for yourself. No one was born knowing exactly how to expose Log, it is a skill learnt through practice and experimentation, making mistakes and learning from them. In addition different people and different clients will be happy with different noise levels. There is no right or wrong amount of noise. Footage with no noise often looks very sterile and lifeless, but that might be what is needed for a corporate shoot. A small to medium amount of noise can look great if you want a more film like look. A large amount of noise might give a grungy look for a music video. Grading also plays a part here as how much contrast you push into the grade alters the way the noise looks and how pleasing or objectionable it might be.
All anyone here can do is provide some guidance; really you need to determine what works for you. So go out and shoot at different EIs or ISOs and different brightness levels, slating each shot so you know what you did. Then grade it, look at it on a decent sized monitor and pick the exposure that works for you and the kinds of things you shoot – but remember that different scenes may need a different approach.