Best settings for Nikon DSLR filmmaking

10 years, 4 months ago - Kyri Saphiris

I now have a Nikon D5200 which I want to use for some short film work.

If anyone else has a similar camera, or knows about DSLRs like it, can they advise me on the best settings to use for filmmaking?

In particular, there seems to be a RAW option - can I shoot video in this? Either way, what about the picture profile settings? Can I, or should I, customise these for better eventual picture quality and/or wider options in the edit/colouring?

Thanks for any advice.

10 years, 4 months ago - Paddy Robinson-Griffin

What it comes down to is that sensors are digital, and designed to work within certain limits. Think of sensors as a load of little buckets (pixels) with photon balls landing in them. The balls in each bucket get counted 24 times a second, then the buckets are emptied and start again. Each count may find 0, a few, a lot, full to the brim, or overflowing. Because there are always a few random bits of debris, anything below say 5 balls is counted as being close enough to 0 to be 0, and the bucket overflows at 100 balls. 0-5 balls is the 'blacker than black' zone - the sensor is under-stimulated, so the detail gets ignored as it could easily be random debris. If 130 balls land in a bucket, 30 fall off the top into the gutter - they don't fit, so the measurement gets recorded as 100. This is 'whiter than white', losing detail. I'll stretch this analogy further in a minute, so get to grips with this bit first :)
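
If it helps to see that in code, here's a minimal sketch in Python - the 5 and 100 are just the made-up numbers from the analogy, not real sensor values:

    NOISE_FLOOR = 5    # below this, a count is indistinguishable from random debris
    FULL_BUCKET = 100  # bucket capacity - anything more overflows into the gutter

    def read_bucket(photons):
        """One frame's reading for a single pixel 'bucket'."""
        if photons <= NOISE_FLOOR:
            return 0                      # blacker than black: treated as noise
        return min(photons, FULL_BUCKET)  # whiter than white: overflow clipped

    # 100 balls and 130 balls both read as 100 - the highlight detail is gone
    for p in (3, 50, 100, 130):
        print(p, "->", read_bucket(p))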

Sensors are massively less forgiving than the human eye. If you see a view with sunlight and a cave, your eyes can see the details in the cave and can also handle the bright sunlight. The sensor may not see the details in the cave, because all the levels of darkness are too similar. Alternatively, the sensor can't distinguish any detail if there's too much light hitting it - you get skies completely burnt out, with no wispy cloud textures, for instance. For a bright day you might use a neutral density filter (grey, basically, so no colour tint) so you can still get detail without blowing the whites out, but that means you lose detail in any dark bits in the same scene.

Some cameras have a higher 'dynamic range' (can see more detail on dark and light scenes simultaneously - HDR), but it's a bit of a bodge and not common for video.

Back to the analogy - the ND filter might be a screen that only allows half the balls to land in a bucket. You can see that now 100 balls and 130 balls get screened and counted as 50 and 65 - great! You've now got detail in the bright bits of the image! But at a cost - now to get any detail in the dark bits you need 10 balls before it'll even register. HDR techniques take two sets of readings, one with the screen and one without (it helps to think of it as two streams of video being recorded), then in post you take the screened bits of one image and the unscreened bits of the other to try to keep more detail. Always looks a bit pony, though.
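
Again in toy Python, under the same made-up thresholds - the 50% screen is the analogy's ND filter, and we fall back to the screened reading wherever the unscreened bucket overflowed:

    NOISE_FLOOR, FULL_BUCKET = 5, 100

    def read_bucket(photons):
        return 0 if photons <= NOISE_FLOOR else min(photons, FULL_BUCKET)

    def hdr_reading(photons):
        """Toy HDR: one unscreened reading plus one through a 50% screen."""
        unscreened = read_bucket(photons)     # keeps the shadow detail
        screened = read_bucket(photons // 2)  # keeps the highlight detail
        # If the unscreened bucket overflowed, trust the screened copy instead
        return screened * 2 if unscreened >= FULL_BUCKET else unscreened

    for p in (8, 60, 130, 180):
        print(p, "->", hdr_reading(p))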

Ready for more? Beware, bit of a stretch here, so this is bonus reading... How do we count the balls from each pixel bucket? Let's assume we can't count them for some reason, but we could see how high they come up the bucket easily. Maybe we can measure in centimetres the height of the highest ball, and call that our reading - it's pretty accurate for us. But buckets aren't perfect cylinder shapes, they're bucket-shaped. They'll fill up faster at the bottom and slower towards the top. Still with me? What if the buckets were a different shape - a perfect tube, a weird wobbly shape, etc? That's effectively (stretching the analogy) the gamma curve we're describing, how fast the bucket fills up for a certain number of balls.
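
The bucket shape maps quite neatly onto a power-law gamma curve. A quick Python sketch, using the common ~2.2 display gamma purely for illustration:

    GAMMA = 2.2  # a typical display gamma - illustrative, not your camera's actual curve

    def encode(linear):
        """Map linear light (0.0-1.0) to an encoded value (0.0-1.0)."""
        return linear ** (1 / GAMMA)

    for lin in (0.0, 0.1, 0.25, 0.5, 1.0):
        print(f"{lin:.2f} -> {encode(lin):.2f}")
    # 0.1 encodes to ~0.35: the curve spends more of its range on the
    # shadows, like a bucket that fills up faster towards the bottom.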

I've now either given you an epiphany or blown your mind. Hope it's the former!

10 years, 4 months ago - Paddy Robinson-Griffin

>>>But getting the exposure right is the starting point. If you over expose, you will have the details in the blacks (enough balls in the buckets to register those black details) but more than 100 balls in the whites, so a lot of the details in the whites will disappear. If you under expose, you won't have enough balls in the buckets for the black areas, so no black detail, but you'll have less than 100 balls in the brightest parts of the picture, so all the white detail will be there.<<<

That's pretty much it. Yes, of course it's an oversimplification, and we get into some pretty chunky physics and maths when you drill into it - but don't worry. I'll bet only a single-digit percentage of people actually understand the technical process, but most can still produce a good-looking image. It's FAR easier now than it ever has been in the whole of history; computing power and technological developments give so much more leeway and support, leaving you to do the creative bit. They don't do everything for you, but they can do a LOT.

As you get more experienced, you might find you want to push the cameras, or work with older or different ones, or whatever. Understanding the underlying physics (even in metaphor, like the buckets and screen) will always stand you in good stead - no matter what changes come, they won't bend physics ;-)

Biggest help, as John mentions above, will be zebras - they show you where you're burning the whites out. The blacks are less of a worry - it's rare not to realise a shot is too dark. You'll notice.

All that said, don't let camera settings stop you getting out there and filming. Go TODAY and shoot a minute or two of test, and find a few things you just notice - trees in silhouette, pigeons walking, whatever. Just use the camera, play with it, shoot with it, and get familiar enough that you can start telling stories.

10 years, 4 months ago - Matt Jamie

Here are some suggestions from a D800 user - might be similar: http://nikondslrvideo.com/tutorials/best-nikon-picture-control-settings-for-videography/

10 years, 4 months ago - John Lubran

I don't have any hands-on experience with your camera, but I'd be very cautious about bringing sharpness down to absolute zero. Do some tests before actual production. There was a short craze a while back for going super-soft that didn't last very long, because it was a silly idea then, and it might be a silly idea now too.

Getting camera settings right from the start is very much better than trying to correct it in post. Decent contrast adjustments rarely work well in post if you need to correct more than about one stop. The only thing that can be totally corrected and played about with, with joyful abandon, in post is colour balance and saturation, which is the main part of post grading. Get audio right too. Most things are a huge pain in the bum to fix if the camera picture and sound are not right. Better to trust your own eyes and ears while carrying out enough pre-production tests and tweaks before committing to anything. That's the job description, for goodness' sake! Trusting the remote opinion of others has often led to disaster.

10 years, 4 months ago - Kyri Saphiris

Well I think the Alexa bod would definitely make the better film ... because Hitchcock died a long time ago so you'd get nothing from him now! ;-) I fully get you and yes, Hitchcock would probably create some kind of masterpiece, even on the most basic of smartphones!

I've never really fully understood black blacks, white whites, dynamic range etc. I've read up on it in the past but always remained fairly confused. I was reading something about gamma a while back; that threw me completely!

Good point about having some sense and procedure to any testing. That would be very helpful later when looking back on everything and trying to make useful comparisons.

10 years, 4 months ago - Paddy Robinson-Griffin

Heh heh, I know, but it is worth it, honest!

Don't just shoot randomly though; you'll get more value by being methodical. Do some tests in the same sort of setups - INT DAY or EXT NITE or whatever. Want a shot from inside a tunnel up a canal into the sun, in silhouette, in your film? Recreate it in test. Top tip: the camera can record (poor) audio, so for each clip, say out loud which settings you're using - when you come to see how it all looks in post, you'll remember which settings went with which clip.

As a rule of thumb, though, at the acquisition stage (filming) you want to keep as much detail as you can - you can elect to throw it away later. That means avoiding blacker-than-black and whiter-than-white, using the best bitrate available, and so on.

Remember this - Hitchcock had nothing even approaching the technology we have now, and made incredible films. Get the best from your camera, but remember it's just one part of the overall film. You could send a hack out with an Alexa and make a bad film, you could send Hitchcock out with a smartphone camera - I bet I know who'll make the better film ;)

10 years, 4 months ago - Kyri Saphiris

Fascinating stuff all round! One thing I haven't understood is why there are two lines at the black end of the waveforms, one at "10" and one at "0" (zero). According to the above description, the blacks only get crushed when they start going below the zero line, so what's the point of the additional line at "10" (and of course, there is the top line at "100" for the whites).

I've heard and read about zebras. Apparently people who have used them find them extremely useful to have as a feature on their equipment. If I'm thinking of the right thing, some DSLRs have this, but usually via a firmware hack - Magic Lantern or something like that - and even then it's only available on the handful of cameras for which these hacks have been engineered. I'm not sure which DSLRs, if any, natively offer zebras, but if any do it will surely be the very high-end models - the category to which my camera does not quite belong!

10 years, 4 months ago - Kyri Saphiris

Thanks. I think I've set the camera for the highest bit rate but, now that you've mentioned it, I'll check.

10 years, 4 months ago - Kyri Saphiris

Really useful comments there, thanks very much. The idea behind my post was to trust the experience and knowledge of others much more experienced than myself, so that if the consensus was, say, to bring sharpness to zero and reduce contrast by a certain amount, and saturation too, then I would just go with that consensus. But what you say makes sense. I should do some testing myself.

10 years, 4 months ago - Kyri Saphiris

That's a hugely brilliant post!

It sounds like, if you have a cave/sunlight scenario, the ND filter reduces the number of balls reaching the bucket (photons hitting the pixels!). But, and I am new to DSLRs themselves, what we are doing is reducing the exposure. So bringing ISO down to 100, making the aperture smaller, and using a faster shutter speed{*} are all tweaks that can be combined to reduce the exposure (instead of, or in addition to, the ND filter). All these adjustments in effect reduce the number of balls filling up the buckets.

{* although this is not really variable for video work like it is for photography}.
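
For what it's worth, here's the rough arithmetic as I understand it (the reference values are just assumptions for the example) - aperture and shutter genuinely change how many balls land, while ISO amplifies the count after the fact:

    def exposure_factor(f_number, shutter_s, ref_f=2.8, ref_shutter=1/50):
        """Light gathered relative to a reference exposure (assumed f/2.8 at 1/50s)."""
        aperture_factor = (ref_f / f_number) ** 2  # light goes with aperture area, ~1/f^2
        shutter_factor = shutter_s / ref_shutter   # longer open = proportionally more light
        return aperture_factor * shutter_factor

    # Stopping down from f/2.8 to f/4 at the same 1/50s shutter roughly halves the light:
    print(exposure_factor(4.0, 1/50))  # ~0.49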

The 100 and 130 balls being reduced to 50 and 65 made sense, but it threw me a bit when you said it's at the expense of the detail in the blacks. I think it does make sense though. If detail needs 10 balls to be registered by the bucket, then an ND filter (or exposure-reducing camera settings) that turns all the 10-ball buckets into 5-ball buckets means they simply won't register and will come out completely black. So the detail is lost.

I noticed in my editing program (I downloaded Lightworks, as there is a free version, although it is limited) there is a graphical representation available of the clips you import (a vectorscope, I think it's called). It shows what I think is the light of each frame (how full the buckets are). I think the range is 10 to 100 (not 0 to 100, which I don't understand, unless it relates to the fact that you said we need 10 balls for a bucket to register), and you can see on it that some bits of the "graph" may go below the 10 or get clipped at the 100. Using the colour correction settings you can squash or expand the graph (bring it into the 10-100 range or expand it to fill it, as appropriate). I've noticed that gamma keeps the extremes but just "squishes" the graph (bunches up or stretches out bits whilst still remaining within the range that's already there).
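
If it helps, the squash/expand behaviour you're describing works roughly like the sketch below (a hypothetical levels function of my own, not Lightworks' actual tool):

    import numpy as np

    def levels(pixels, in_black=10.0, in_white=100.0, gamma=1.0):
        """Map the in_black..in_white range onto 0..100; gamma bends the
        middle of the curve without moving the two endpoints."""
        x = (pixels - in_black) / (in_white - in_black)  # normalise to 0..1
        x = np.clip(x, 0.0, 1.0)                         # crush/clip out-of-range values
        return 100 * x ** (1 / gamma)

    clip = np.array([5, 10, 40, 70, 100], dtype=float)
    print(levels(clip))              # stretch 10..100 to fill the whole 0..100 range
    print(levels(clip, gamma=1.5))   # same extremes, midtones 'squished' upwards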

The camera I bought, Nikon D5200, I believe is a mid range camera of its type, and does have things like High Dynamic Range settings although I haven't touched these as I really didn't know what they meant. Cameras, editing, colour, light, the universe and everything ... it's all just so complicated isn't it!?

But getting the exposure right is the starting point. If you over expose, you will have the details in the blacks (enough balls in the buckets to register those black details) but more than 100 balls in the whites, so a lot of the details in the whites will disappear. If you under expose, you won't have enough balls in the buckets for the black areas, so no black detail, but you'll have less than 100 balls in the brightest parts of the picture, so all the white detail will be there.

I've read that a lot of filmmakers tend to slightly underexpose everything as a rule. I think their argument is that, by doing so, you have "more options in the edit". I'm not too sure how this works, but I'm guessing the theory is that all your whites stay intact, and you can somehow lift the blacks to get the detail back.

Yeap, it really is complicated!

10 years, 4 months ago - John Lubran

Ideally one would have a 'waveform monitor' to hand at all times! Several low-cost cameras these days provide a sort of cut-down waveform monitor that I suspect few amateur operators know what to do with. Understanding what a waveform monitor does, though, is useful for understanding the basic issue of over-peaking whites and crushing blacks.

The waveform monitor measures exposure in terms of 'volts' (not to be entirely confused with the volts your electrician deals with). The monitor provides a graph-like grid, with one volt on the top line and zero on the bottom line. Its main purpose is to let you know when any part of the moving image 'peaks' over the one-volt point, because that's the point at which over-exposed parts of the picture simply cannot resolve any detail and become electronically unprocessable, particularly in terms of broadcast transmission and other forms of replication - which is why such errors are referred to as 'illegal' by broadcast transmission engineers. Such over-exposure is not fatal, however; it just means that the affected areas of the picture can only be corrected by replacing them with flat white (or, in post, with any other matte overlay available), but any actual detail of the subject in that white area is irrevocably lost. For example, skilled wedding videographers get very precise when filming wedding dresses: they don't want to lose the fine lace detail of the material, so they must be very careful not to over-expose, or else the dress just becomes flat white. For them it's often a very tight exposure call.

The degree to which any given camera or format can handle this range of exposure is called its 'latitude', and it marks one of the more significant qualitative differentials between competing bits of kit. Resolution is very far from being everything! Currently it's the wide latitude in terms of contrast and colour range that still makes high-end film the best.

At the bottom end of the waveform monitor's graph, its baseline of zero, is where the camera similarly loses any detail in the darkest parts of the picture - it's called crushing the blacks. Crushing the blacks is not transmission-'illegal' and is often used as a deliberate artistic device. The latitude range of a camera can be rebiased, if the camera offers such control options, by adjusting the 'knee' or 'pedestal' setting to shift the contrast handling along the available spectrum, increasing latitude at one end or the other. However, whatever degree of latitude one gains at one end of the spectrum one loses at the other; the number of degrees in the spectrum's arc is fixed. The use of graduated filters, NDs and lights is how we extend the capability of the camera beyond the limitations of its latitudinal range.
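
As a toy numerical version of what the monitor is flagging (random numbers standing in for a real frame's luma, normalised so 0.0 is black level and 1.0 is peak white):

    import numpy as np

    # Fake 1080p frame with some excursions outside the legal 0-1 volt range
    frame = np.random.rand(1080, 1920) * 1.2 - 0.1

    over = (frame > 1.0).mean() * 100   # % of pixels peaking over one volt ('illegal')
    under = (frame < 0.0).mean() * 100  # % of pixels crushed below black

    print(f"over-peaking whites: {over:.1f}% of frame")
    print(f"crushed blacks:      {under:.1f}% of frame")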

Whilst all this might seem a technical hurdle for the uninitiated, the best Grade 1 monitor is still the human eyeball. Most decent cameras these days provide a zebra pattern generator. If in doubt as to whether exposure is over-peaking into flat white, set the zebra to mark a pattern wherever any part of the picture is over one volt; it makes it easy to avoid. You'll need to consult the camera's manual as to where the manufacturer has set that point on the zebra.
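
The idea behind the pattern itself is simple enough to sketch, assuming a normalised luma image - real cameras generate this in hardware and let you pick the threshold:

    import numpy as np

    def zebra_mask(luma, threshold=1.0, stripe_px=2):
        """Diagonal stripes wherever luma meets or exceeds the threshold."""
        h, w = luma.shape
        yy, xx = np.mgrid[0:h, 0:w]
        stripes = ((xx + yy) // stripe_px) % 2 == 0  # alternating diagonal bands
        return (luma >= threshold) & stripes

    frame = np.random.rand(4, 16) * 1.1  # fake luma with some values over 1.0
    print(zebra_mask(frame).astype(int))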

10 years, 4 months ago - Matt Jamie

If you can adjust saturation, contrast and sharpness settings, it's worth making sure these are set to a lower level so you get a 'flatter' image (you can add contrast, sharpness and colour in post more easily than you can take them away). It's worth experimenting with a few different settings to see which works best for you.

10 years, 4 months ago - Kyri Saphiris

That's useful, thanks. Looks like bringing "sharpness" down to absolute zero and reducing contrast and saturation by at least one point is a way to go.

10 years, 4 months ago - Kyri Saphiris

I know, I should really! It's just that testing is a bit boring! But it's got to be done. I think I'll spend a day just shooting random stuff and trying out a couple of different profiles to see what the footage looks like.

10 years, 4 months ago - Paddy Robinson-Griffin

You won't be able to record RAW - RAW is a total dump of the sensor data, and would make for HUGE files, even if it were possible to write to a memory card fast enough.
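
Some back-of-envelope arithmetic shows why (assuming roughly the D5200's 24-megapixel sensor, 12 bits per pixel and 24fps - illustrative figures only):

    pixels = 24_000_000   # ~24-megapixel sensor (assumed)
    bits_per_pixel = 12   # assumed RAW bit depth
    fps = 24

    bytes_per_second = pixels * bits_per_pixel * fps / 8
    print(f"{bytes_per_second / 1e6:.0f} MB/s")  # ~864 MB/s - far beyond any SD card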

I don't know the Nikons, but shoot at the highest bitrate you can, as that'll give you the greatest latitude in the edit/grade.

10 years, 4 months ago - Paddy Robinson-Griffin

Always test for yourself! Test cameras, lenses, effects, lighting, everything - testing takes a few hours, BUT those hours are cheap compared with trying to find things out when you're on set with a load of people around you getting impatient! Time spent in preparation is never wasted.
