What You See Is Not What You Get
The world of WYSIWYG (What You See Is What You Get) simply doesn’t apply to photography. It is all but impossible to replicate on screen, or indeed in print, exactly what you see with your eyes. There are limiting factors everywhere: the lens, the sensor, the electronics in your camera, the software used to process your images, the monitor used to display them, right through to the printer you use and the paper you print on. Having a basic grasp of these limitations won’t necessarily make you a better photographer, but it will certainly help you take better images on a more consistent basis.
Dynamic Range (Contrast)
Even on the brightest day, the human eye can pick out detail in bright areas while simultaneously picking out detail in the shadows. The pupils in our eyes continuously dilate and contract in real time, and our mind stitches all of that image information together. Our cameras, on the other hand, capture one single moment in time, and the sensor simply can’t record all of that detail. We have to lose some data, whether in the highlights (blown highlights/overexposure) or in the shadows (blocked-up shadows/underexposure). Or perhaps we expose for the midtones and lose data in both the highlights and the shadows. Given the limited dynamic range of today’s cameras, more often than not we cannot capture the full contrast of a real-life scene.
For sure there are tricks we can use to try to reduce the range in a scene. Landscape shooters often use graduated neutral density filters, which darken the sky whilst leaving the land/sea untouched, reducing the contrast in the scene. Portrait shooters may use fill flash, which adds light to the shadows, again reducing contrast. Or we can blend exposures, which is the technique I used for the shot above: I used one exposure for the model and one exposure for the window (7 stops apart), then blended the exposures in Photoshop, giving a final image of reduced contrast. And of course there’s HDR (which I hate with a passion), which is also a blending technique combining multiple exposures, but it often looks far too fake and midtone-heavy. Regardless of the technique you use, it’s important to understand the range of your camera so that you can make the correct choice(s) at capture time and photograph the scene you’re seeing in your mind’s eye.
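If you’re curious what a blend like that boils down to, here’s a rough sketch in Python/NumPy using a simple luminosity mask. It is not what Photoshop does internally, and the file names, the 0.8 threshold and the 8-bit assumption are just placeholders for illustration.

```python
import numpy as np
from PIL import Image

# Rough sketch only: blend a darker exposure (for the window) into a brighter
# one (for the model) using a luminosity mask. File names, the 0.8 threshold
# and the assumption of 8-bit RGB input are all placeholders.
bright = np.asarray(Image.open("exposed_for_model.tif").convert("RGB")) / 255.0
dark = np.asarray(Image.open("exposed_for_window.tif").convert("RGB")) / 255.0

# Where the bright frame is close to blowing out, favour the dark frame.
luminance = bright.mean(axis=2, keepdims=True)
mask = np.clip((luminance - 0.8) / 0.2, 0.0, 1.0)   # 0 = keep bright, 1 = use dark

blended = bright * (1.0 - mask) + dark * mask
Image.fromarray((blended * 255).astype(np.uint8)).save("blended.tif")
```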
A RAW file contains the exact image data captured by your camera and registered by the sensor. When we look at a digital image, each pixel has a colour and a brightness, but that’s not how it is captured. Each sensor site (pixel) on a sensor can actually only see light of one colour; light of any other colour hitting that site is discarded/blocked/ignored. Our sensors are built up of repeating 2×2 arrays of sensor sites that read red (×1), blue (×1) and green (×2) light, green being more prominent because the human eye is more sensitive to that colour.
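To make that concrete, here’s a toy sketch of an RGGB sensor in Python. The tiny made-up “scene”, its size and its values are placeholders; the point is simply that each site keeps only the one channel it can see.

```python
import numpy as np

# Toy illustration of a Bayer (RGGB) sensor: each site records only one
# colour channel, arranged in repeating 2x2 blocks of R, G, G, B.
height, width = 4, 4
scene = np.random.rand(height, width, 3)   # "true" light hitting the sensor (R, G, B)

bayer_channel = np.zeros((height, width), dtype=int)   # 0=R, 1=G, 2=B per site
bayer_channel[0::2, 0::2] = 0   # red sites
bayer_channel[0::2, 1::2] = 1   # green sites
bayer_channel[1::2, 0::2] = 1   # green sites
bayer_channel[1::2, 1::2] = 2   # blue sites

# The RAW mosaic: one value per site; all other colour information is lost.
rows, cols = np.indices((height, width))
raw_mosaic = scene[rows, cols, bayer_channel]
print(raw_mosaic.round(2))
```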
So a RAW file has to be converted from this array, or mosaic, into a pixel format through a process called demosaicing. The end result of this process can and does vary depending on the software used to perform it. It can be done in camera by choosing to shoot JPG, in which case the camera’s hardware handles the process for you and spits out JPG files (more about these in a moment), or you can use RAW converter software such as ACR (Adobe Camera Raw) or Capture One. But even though your software choice can make a difference to the output, what’s more important are the choices you make during the conversion.
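And here’s what a very crude demosaic might look like: each missing channel is simply averaged from neighbouring sites of the same colour. Real converters (your camera’s JPG engine, ACR, Capture One) use far more sophisticated algorithms, which is one reason their output differs; this is only a sketch to show the idea.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, channel_map):
    """Crude bilinear demosaic: raw is the 2-D mosaic of sensor readings,
    channel_map says which channel (0=R, 1=G, 2=B) each site recorded."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for c in range(3):
        mask = (channel_map == c).astype(float)
        # Average of same-colour sites in each 3x3 neighbourhood.
        summed = convolve(raw * mask, kernel, mode="mirror")
        counts = convolve(mask, kernel, mode="mirror")
        rgb[..., c] = summed / np.maximum(counts, 1e-9)
    return rgb
```

Fed the toy mosaic from the previous snippet (demosaic_bilinear(raw_mosaic, bayer_channel)), it returns a full RGB image, blurrier and less accurate than anything a real converter produces, which is exactly the point: the conversion choices matter.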
I’ve talked about bits and bytes already on my blog, but to summarise in one sentence: “the more bits you have, the better”. Most RAW file formats are 12 or 14 bits. What this means is that they can represent 68,719,476,736 or 4,398,046,511,104 colours respectively, which sounds like a lot of colours (and it is), but in the world of B&W, where we take colour out of the equation, that equates to just 4,096 or 16,384 shades of grey that the image can represent.
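If you want to check the arithmetic, it’s just powers of two: 2 to the power of the bit depth gives the levels per channel (the shades of grey in B&W), and cubing that gives the number of colours across the three channels.

```python
# Levels per channel and total colours for each bit depth.
for bits in (8, 12, 14, 16):
    shades = 2 ** bits        # distinct levels per channel = shades of grey in B&W
    colours = shades ** 3     # combinations across R, G and B
    print(f"{bits:>2}-bit: {shades:,} shades of grey, {colours:,} colours")

#  8-bit: 256 shades of grey, 16,777,216 colours
# 12-bit: 4,096 shades of grey, 68,719,476,736 colours
# 14-bit: 16,384 shades of grey, 4,398,046,511,104 colours
# 16-bit: 65,536 shades of grey, 281,474,976,710,656 colours
```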
As there are no common 12 bit or 14 bit image formats, when we convert our images from RAW we have to choose either an 8 bit format or a 16 bit one. In the case of 8 bits, we are effectively throwing away a HUGE amount of data: we can now only represent 16,777,216 colours and just 256 shades of grey. In the case of 16 bits, we extrapolate our 12 or 14 bit data to create a 16 bit version: we can represent so many colours it makes the Irish nation’s debt look small (281,474,976,710,656 of them), or 65,536 shades of grey. Note that this isn’t truly 16 bits of information – it’s really only 12 (or 14) bits stretched to fill 16.
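One way to picture that conversion (a simplification – different converters handle it differently) is rescaling the same 4,096 levels to fill a bigger or smaller container:

```python
import numpy as np

# 12-bit sensor values (0-4095). Rescaled into 16 bits they span 0-65535,
# but there are still only 4,096 distinct levels; squeezed into 8 bits,
# sixteen original levels collapse into one.
raw_12bit = np.array([0, 1, 2048, 4095], dtype=np.uint32)
as_16bit = (raw_12bit * 65535 // 4095).astype(np.uint16)
as_8bit = (raw_12bit // 16).astype(np.uint8)
print(as_16bit)   # [    0    16 32775 65535]
print(as_8bit)    # [  0   0 128 255]
```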
There is a huge number of articles and blog posts about why people should choose 16 bit over 8 and vice versa, but I just can’t buy into the 8 bit argument (at all). Particularly if you print images (A3 and above) and have sections of an image with a smooth transition in colour/hue/brightness of a single colour (e.g. the sky), it’s often VERY evident when an image just doesn’t have enough bits, with obvious banding between tones rather than a smooth transition.
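You can see why in a couple of lines: take a subtle sky-like gradient covering a tenth of the brightness range and count how many distinct levels survive at each bit depth (the exact numbers here are just from this made-up gradient):

```python
import numpy as np

# A smooth gradient covering only a tenth of the brightness range (like a sky).
gradient = np.linspace(0.45, 0.55, 2000)

print(len(np.unique(np.round(gradient * 255))))     # 8-bit: ~26 levels -> visible banding
print(len(np.unique(np.round(gradient * 65535))))   # 16-bit: every sample keeps its own level
```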
Red Is Not Red and Blue Is Not Blue
Not all colours are equal. What exactly constitutes red? How many shades of blue can there be? A “colour space” describes the range of colours an image can potentially contain (regardless of the number of bits the image uses). RAW images don’t have a colour space as such, but when we convert from RAW we need to decide which colour space to map the image into. Some spaces have a wider gamut than others, in that they can represent more colours. A general rule of thumb is that you should use the widest colour space possible (e.g. ProPhoto RGB), but the issue with this is that a lot of devices can’t display or print that gamut, so you can’t actually make use of the extra colours. For instance, most internet browsers are still not colour-space aware, or ship with the setting turned off by default, and they assume the image is sRGB; if it is not, it can look flat and desaturated. So choosing a colour space can limit your options. You can convert from one to the other, but if you throw away colours by going from a wider gamut to a smaller one (e.g. Adobe RGB to sRGB) you can never get those colours back – they’re gone forever. For me, my colour space of choice (editing, viewing and printing) is always Adobe RGB, with images converted to sRGB when uploaded online.
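A rough way to see the “gone forever” part in numbers: push a fully saturated Adobe RGB green through the standard (approximate) primaries matrices into sRGB and it lands outside the 0–1 range, so the only option is to clip it and the original colour can’t be represented. Gamma/transfer curves are ignored here for simplicity, and the matrix values are the usual published approximations.

```python
import numpy as np

# Approximate linear RGB -> XYZ matrix for Adobe RGB (1998), and XYZ -> linear
# sRGB, both D65. Transfer curves (gamma) are deliberately ignored.
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6273, 0.0753],
                         [0.0270, 0.0707, 0.9911]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

saturated_green = np.array([0.0, 1.0, 0.0])              # pure green in Adobe RGB
in_srgb = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ saturated_green)
print(in_srgb)                 # roughly [-0.40, 1.00, -0.04]: outside sRGB's gamut
print(np.clip(in_srgb, 0, 1))  # clipped to fit sRGB: the original green is lost
```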
The image above is representative only of how two colour spaces cover two different sets of colours: the smaller polygon (sRGB) has a smaller colour gamut than the larger polygon, the wider gamut of Adobe RGB.
Even when you’ve captured your image, captured the range you want and have converted and edited the image in the colour space you want, that’s not the end of the story (unfortunately). Like everything in this world, all things are not equal, and not all displays/monitors are equal. Some will have different bit depths to others, and even if they are the same, their ability to reproduce colours accurately may vary. Ideally you should be working on an IPS-panel LCD with as high a bit depth as you can afford. Unfortunately, really good monitors are really expensive. But if colour accuracy is important to you, this is a place worth investing (at least a little bit).
There is a caveat though: you need to remember that most people viewing your images online are doing so on pretty run-of-the-mill consumer displays, so your audience is likely seeing something quite different from what you see. But if colour accuracy is important to you, in particular between print and screen, then having a good monitor is essential.
Print & Paper
Going to print introduces more headaches. You might have the most wonderful image on screen, but that doesn’t mean you can print it. Depending on the colour space you edited in and the colour space your printer is capable of reproducing, you may run into difficulties. If your monitor cannot display colour accurately, then your prints can look colour-shifted compared to the image on screen. Prints inevitably look somewhat flatter than on-screen images, because the image on your monitor is back-lit and a print is not. And then of course there are the inks you use, particularly if you use third-party inks – will they allow you to achieve colour accuracy?
And let’s not forget the paper. Not all paper/printer/ink combinations can produce the same range of colours. I have had experiences in the past where some papers (e.g. Hahnemühle) simply can’t cope with even the least saturated reds, and other papers (Epson) struggle with magentas. So it’s important to edit a soft-proofed version of the file, previewing against the profile for your printer and paper choice as you edit.
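If you want to soft proof outside of Photoshop or Lightroom, Pillow’s ImageCms module can build a proofing transform. The sketch below assumes you have an ICC profile for your printer/paper combination (the file name here is a placeholder) and uses a built-in sRGB profile as a stand-in for your actual monitor profile.

```python
from PIL import Image, ImageCms

# Sketch of soft proofing: preview on the monitor how the image would look
# when squeezed into the printer/paper gamut. "paper_printer.icc" is a
# placeholder for the profile supplied by the paper manufacturer.
image = Image.open("edited_master.tif").convert("RGB")

monitor_profile = ImageCms.createProfile("sRGB")              # stand-in for your display profile
paper_profile = ImageCms.getOpenProfile("paper_printer.icc")  # placeholder ICC file

proof = ImageCms.buildProofTransform(
    monitor_profile, monitor_profile, paper_profile, "RGB", "RGB"
)
ImageCms.applyTransform(image, proof).save("soft_proof_preview.tif")
```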
So what’s the moral of the story? Do you need to know all this stuff? I certainly think so; at the very least you should be vaguely aware of it. For sure there are photographers out there who have never thought about this stuff and yet still manage to produce amazing results. But I like playing it safe. I like to know, at the time of capture, what limits there are and what I can do to overcome them, or indeed what I can do so that my image won’t be restricted by them. Better that than missing the opportunity of a great shot simply because I wasn’t thinking and planning ahead.