It’s been an embarrassingly long time since I’ve written a blog post, and so I logged in today and discovered that a bunch of really good comments and questions have been left on my blog over the past year or so. This one in particular deserves a proper answer, so I want to post my response as a standalone article.
1) No question, film has better dynamic range than digital. Admittedly the difference is increasingly becoming slimmer. My question is, when film is converted to digital for special effects purposes, does it not lose that dynamic range? I read that digital typically has 256 shades of grey (lol!) but film is infinite. When the film is captured by the digitising machine and all, doesn’t it lose that range, and maintain that loss through to when it is spewed back on to film and shipped to cinemas?
There are a few things in play here: color bit depth, dynamic range, and whether the digital values are measured (recorded) and/or interpreted on a linear or logarithmic scale. Color bit depth in a digital file, whether it is a scan from negative or from a digital camera, is determined by the number of bits of data that represent the values of the R, G, and B channels from black to white. If you imagine each channel separately as shades of grey from black to white, then an 8-bit per channel color bit depth gives you 256 increments or “steps”. However, this is not a bit depth you are likely to come across in professional digital cinema acquisition or post production. The minimum considered adequate is 10 bits per channel, giving you 1,024 increments per R, G, B channel. Above this are 12, 16, and 32 bits per channel.
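If you want to see the arithmetic, here is a quick Python sketch of the step counts at each of those bit depths (plain arithmetic, not tied to any particular file format):

```python
# Increments ("steps") per channel at common integer bit depths: 2 ** bits.
for bits in (8, 10, 12, 16):
    print(f"{bits}-bit per channel: {2 ** bits:,} steps from black to white")
```

Each extra bit doubles the number of steps, which is why 10-bit already gives you four times the tonal resolution of 8-bit.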
As you can imagine, at any given image resolution the file sizes per frame ramp up very quickly with this increase in color bit depth.

Dynamic range has nothing to do with color bit depth; it is purely a factor of a camera imaging sensor’s sensitivity limits. It’s typically given in “stops”, which relate directly to how the dynamic range of film stocks is quoted, and also to aperture “stops” on a lens; in all cases it is measuring the same thing: the range between the darkest blacks and the brightest whites that a sensor is able to resolve while still capturing incremental detail. Think about the difference in light levels between a shaded interior and an exterior lit by full sunlight. There is a massive scale in play there, and digital imaging devices have typically been poor substitutes for silver halide when it comes to capturing a wide range above or below any given exposure. That has changed, and I would argue it is no longer a deciding factor.
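To put rough numbers on both of those ideas, here is a small Python sketch. The 4096×2160 frame size and the lux figures are illustrative assumptions on my part, not measurements of any particular camera or scene:

```python
import math

# Uncompressed bytes per frame: width x height x 3 channels x (bits / 8).
# 4096x2160 is used here purely as an illustrative "4K" frame size.
def frame_bytes(width, height, bits_per_channel, channels=3):
    return width * height * channels * bits_per_channel // 8

for bits in (10, 12, 16):
    mib = frame_bytes(4096, 2160, bits) / 2 ** 20
    print(f"{bits}-bit 4096x2160 frame: {mib:.1f} MiB uncompressed")

# A "stop" is a doubling of light, so the range between the brightest and
# darkest levels, expressed in stops, is log2(bright / dark).
# These lux values are rough illustrative guesses, not measurements.
sunlit_exterior = 100_000  # lux, full sunlight (approximate)
shaded_interior = 100      # lux, a dim interior (approximate)
print(f"Scene range: {math.log2(sunlit_exterior / shaded_interior):.1f} stops")
```

These are raw, uncompressed sizes; real file formats pad or compress, but the scaling with bit depth is the point.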
The last thing to keep in mind is how the image information is recorded and interpreted. Our eyes’ sensitivity to light is not linear, and neither is photographic emulsion: both perceive finer increments at the darker end than at the brighter end, following a more or less logarithmic curve. Imaging sensors are natively linear in their sensitivity to light, but how the data is recorded can mimic a logarithmic scale, assigning more of those “steps” I described under color bit depth to the low end than to the high end. This is one very good way to minimize wasted data, a form of “natural” compression: a digital image captured at 10 bits per channel on a logarithmic scale can be visually very similar in fidelity to a linear image of 12 bits or more, at a much smaller file size.
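Here is a toy Python sketch of that trade-off. It assumes a scene spanning 10 stops encoded into 10-bit code values, and it uses an idealized pure-log curve rather than any real camera’s log format (Cineon and the various camera log curves are more nuanced than this):

```python
CODE_VALUES = 2 ** 10  # 10-bit recording: 1024 code values
STOPS = 10             # assume the encoded scene spans 10 stops

print("stop  linear codes  log codes")
for stop in range(STOPS):
    # Linear encoding: code values are proportional to linear light, so the
    # slice of light from 2**stop to 2**(stop + 1) (relative to the darkest
    # level) gets a share of code values equal to its share of the range.
    share = (2 ** (stop + 1) - 2 ** stop) / 2 ** STOPS
    linear_codes = round(share * CODE_VALUES)
    # Idealized log encoding: code values are proportional to log2(light),
    # so every stop gets the same number of code values.
    log_codes = CODE_VALUES // STOPS
    print(f"{stop:>4}  {linear_codes:>12}  {log_codes:>9}")
```

Under linear encoding the brightest stop soaks up half of the 1,024 code values while the darkest stop gets just one; the log curve hands each stop an equal share, which is exactly where that “natural” compression comes from.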
These three factors all come into play when comparing the performance of celluloid and digital in image acquisition.