Bits & Bytes
A lot of people talk about “bits”: they discuss the benefits of editing in 16 bit mode, or of converting from 16 bit to 8 bit. But what does it all really mean? Hopefully this thread will go a long way towards explaining the basics and some of the concepts of how bits relate to photography.
In the beginning – what is a BIT?
We need to start somewhere, so the best place to start is describing what a BIT, or binary digit, really is. We’re used to the decimal system, where we count from 0 through to 9, so our basic unit can hold one of ten values. Binary is another counting system, where the basic digit can hold only one of two values: a 1 or a 0.
Shades of gray
It’s quite easy at this point to relate this concept to photography, without needing to concern ourselves with how our pictures ended up being represented by bits in the first place. For the moment, let’s assume we live in a B&W world. If we assign a value of 0 to white and a value of 1 to black, we are now able to create a picture. Anywhere a pixel is 1, it’s black, and anywhere a pixel is 0, it’s white. So with a 1 bit image, we are able to represent two separate tones. Obviously if we were to try and create an image here, it would be pretty terrible. For black parts of a shot, we’d need to densely populate the area with black pixels, and for white parts, with white pixels. For the “grays”, we’d use more or fewer pixels in an area to try and imitate gray. But at the end of the day, it will be a high contrast shot, as we can only really have either black or white.
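The 1 bit idea can be sketched in a few lines of Python. This is purely an illustration of the convention above (1 = black, 0 = white), not how any real image format works:

```python
# A tiny 1 bit "image": each pixel is a single bit.
# Following the convention above: 1 = black, 0 = white.
image = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
]

# Render it as text: '#' for black, '.' for white.
for row in image:
    print("".join("#" if pixel else "." for pixel in row))
```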
One extra bit
If we go from one bit to two bits, we can now represent 4 different shades of gray, as we have four different combinations of 1 and 0: 00, 01, 10 and 11. This is a very similar concept to our decimal system: originally we could only represent 0 through to 9 using a single digit, but if we introduce just one more digit we can now represent values from 0 through to 99. So where a 1 bit image had only black and white, we can now have a white, an off-white, a light black and a pure black. One extra bit has brought us twice the tonal range we had before. If we only had 2 bits per pixel, the quality would still be pretty poor, but it’s significantly better than 1 bit.
8 Different tones
Taking this concept further, we introduce a 3rd bit, giving us 8 different values (000, 001, 010, 011, 100, 101, 110, 111) and allowing us to represent 8 different shades of gray from white through to black. If you look at the pattern, each bit we add doubles the range of tones we can use.
A 4 bit image
Adding a fourth bit, we again double the number of possible combinations, from 8 to 16. That’s actually a pretty significant increase considering the cost of 1 extra bit. Storage wise, a single bit costs very little, but the 8 extra shades of gray it brings really can make a huge difference to the quality of an image. So we can now represent values from 0 through to 15, or 16 different shades of gray.
A BYTE and our first 8 bit image
We’ve seen that by adding a single bit, we can double the number of shades of gray that can be represented. The table below shows this scale. On the left we use the digit 2, as a binary digit can only be in one of 2 states (unless we get into quantum physics – a debate to be had another day!). The number to the right of it shows how many bits we use, and the answer shows how many shades of gray can be represented.

2 ^ 1 = 2
2 ^ 2 = 4
2 ^ 3 = 8
2 ^ 4 = 16
2 ^ 5 = 32
2 ^ 6 = 64
2 ^ 7 = 128
2 ^ 8 = 256
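This scale is just successive doubling, and can be generated with a short sketch:

```python
# Shades of gray representable at each bit depth, from 1 bit up to one byte.
# Each extra bit doubles the count: 2 ** bits.
for bits in range(1, 9):
    print(f"2 ^ {bits} = {2 ** bits}")
```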
So far, I’ve discussed and shown examples up to a 4 bit image, which is capable of representing 16 different tones. We could continue to exploit the gains of adding single bits, but it’s uneconomical to do so in terms of the structure of microprocessors and memory. Whilst we use bits to represent shades, a computer normally likes to group bits into larger storage units called BYTES. A BYTE consists of 8 bits. So rather than going from 4 bits to 5, let’s jump straight to one byte and our first 8 bit image.
With 8 bits, we now have the capability to represent 256 different tones from white through to black. This number of tones affords us a very smooth transition from white through to black, and to the naked eye there shouldn’t be any noticeable stepping between shades. So an 8 bit monochrome image can and should be considered a high quality monochrome image, and in fact if you ever save a black and white JPG, this is exactly what you have! But more about this later.
What about colour?
So far I’ve only discussed different shades of gray. This is because, in reality, our colour images are basically made up of different shades of gray. Most people think of colour as being in the RGB colour space, and for our purposes it makes a lot of sense to stick with this concept. Obviously if we have a colour pixel, there is definitely more to it than a shade of gray between white and black. Each colour is made up of a combination of some or all of the primary colours: Red, Green and Blue. So to describe the colour of a pixel, we actually need 3 separate values: one for red, one for green and one for blue. These separate colours are split off into channels. A colour channel is effectively a black and white image: wherever none of that colour is present, the value is black; wherever it’s completely present, it’s represented by white; and wherever there are intermediate intensities of the colour, we represent them by the equivalent shade of gray.
So, before with monochrome, we only had one channel, where each pixel was 8 bits. Now we have 3 channels, so each pixel actually has 24 bits (8 for red, 8 for green and 8 for blue). Some people call this 24 bit colour or true colour, but don’t confuse it with the fact that each channel is only represented by 8 bits. BUT (and it’s a big but) the range of colours that can be represented by 3 x 8 bit pixels is pretty enormous; in fact it gives you the possibility of 16.7 million colours! Note the subtle but important difference between bits per channel (8) and bits per pixel (24).
This 16.7 million figure is arrived at by multiplying the number of possible values of each channel together: 256 x 256 x 256 (combinations for red x combinations for green x combinations for blue), or simply 2^24 = 16.7 million (give or take).
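The arithmetic is easy to verify with a quick sketch:

```python
# Three 8 bit channels: 256 possible values each, multiplied together.
per_channel = 2 ** 8           # 256 values per channel
total = per_channel ** 3       # 256 * 256 * 256
print(total)                   # -> 16777216, i.e. ~16.7 million

# Same thing expressed as bits per pixel.
assert total == 2 ** 24
```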
Ok, so an 8 bit image in the RGB colour space can represent 16.7 million colours. But if we have a monochrome image in RGB, does this mean we have 16.7 million shades of gray? Well, the answer is unfortunately NO. For a monochrome image without any toning or colour cast, the red channel is identical to the green channel, which in turn is identical to the blue one. They have to be, or else we’d end up with colour in the image. Each pixel has to be a neutral tone, so it must contain the same red, green and blue components. So even if we do have a B&W RGB 8 bit image, we are still only capable of producing a maximum of 256 shades of gray.
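A neutral gray pixel in RGB simply repeats one value across all three channels. This little sketch (the helper name is hypothetical) shows why an RGB grayscale image still has only 256 distinct tones:

```python
def gray_pixel(level):
    """A neutral 8 bit RGB tone: 0 = black, 255 = white (hypothetical helper)."""
    return (level, level, level)

print(gray_pixel(128))  # -> (128, 128, 128)

# Every neutral pixel is fully determined by one 8 bit value,
# so there are only 256 possible grays, not 16.7 million.
print(len({gray_pixel(v) for v in range(256)}))  # -> 256
```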
If you’re wondering at this point why we’d bother saving in RGB, then you’re asking yourself a great question, because in reality we are producing a file three times the size it needs to be; two thirds of it is redundant. In fact we could save only one channel and reduce disk space significantly. Well, the reason is that most applications like to work in the RGB colour space, so it’s just easier to leave the image with 3 channels, even though they’re all duplicates.
The more the merrier?
Hopefully you’ve been able to follow me so far and can quite clearly see that each additional bit we add significantly increases the quality of the shot, the number of shades of gray we can represent and the number of colours we can access. Is there a limit to the number of bits we can use? Well, the answer is no: we can (and do) have 16 bit images, there is now support for 32 bit images in Photoshop (used for HDR, or high dynamic range, work), and there is nothing stopping us moving on to 64 bit, 128 bit or even 256 bit images. Each one significantly increases the potential number of shades of gray and range of colours.
But just because we can continue to increase the number of bits, does it mean we should? It’s been said that the human eye can only discern about 10 million colours, so in reality saving anything more than an 8 bit image (24 bits per pixel) could be argued to be overkill. But that said, even if we can’t see the transition between two colours, THEY ARE THERE. Saving an image in 16 bit mode (48 bits per pixel) gives us access to a whopping 281 trillion colours.
I’ll give more arguments further on for why we should save and work in 16 bit mode, but one good argument I always use concerns monochrome shots. If we have 16 bits rather than 8, we can now represent 65,536 different shades of gray. When it comes to colour, we may not be able to discern between 10 million and 281 trillion colours, but it certainly can be possible to discern the difference between 256 shades of gray and 65,536.
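Put in numbers, and assuming three channels as above:

```python
# Shades of gray and total colours at 8 vs 16 bits per channel.
for bits in (8, 16):
    shades = 2 ** bits       # per-channel gray levels
    colours = shades ** 3    # R x G x B combinations
    print(f"{bits} bit: {shades:,} shades of gray, {colours:,} colours")
```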
No one can argue that an 8 bit image is better than a 16 bit one, and anyone who does is plainly and simply wrong. What they can argue is whether there is much point: if we can’t see or discern the differences, then why bother? And to be honest, this is a perfectly legitimate argument. But leaving that one aside for the moment, it’s as plain as day that higher bit depth images contain more data and hence are capable of storing higher quality images. One peculiarity, though, is that most of our cameras actually only capture data in 12 bits (12 bits per channel, 36 bits per pixel). So when we move into Photoshop, or whatever other editing software we use, where do the extra 4 bits come from? Well, in 12 bits we are capable of representing 4,096 shades of gray (per channel), or 68.7 billion colours (36 bits per pixel). But when we move to 16 bit mode, we can (and do) represent 65,536 shades of gray, or 281 trillion colours. These extra shades/colours have to come from somewhere, and this somewhere is a process called interpolation.
Interpolation allows us to “create” a new set of data points from our fixed set, and is carried out using various different mathematical algorithms. It’s not that complex really, and it’s something we can all do, or have possibly done before. For example, suppose we have the existing data stream 0, 4, 5, 7, where 0 is white and 7 is black (3 bits). Now let’s make this a 4 bit data stream, where we can represent 16 values from 0 through to 15. One simple method of interpolation would be to simply double the values, so we’d end up with 0, 8, 10, 14.
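That doubling step can be written directly:

```python
def double_values(stream):
    """Naive 3 bit -> 4 bit upscaling: just double every value."""
    return [value * 2 for value in stream]

# The 3 bit stream from the example above.
print(double_values([0, 4, 5, 7]))  # -> [0, 8, 10, 14]
```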
We’ve effectively interpolated upwards. From a picture point of view, what has happened? Well, one thing is that in our 3 bit image we had a pure black value (7), which has been interpolated upwards to 14. In the 4 bit range, where 15 is pure black, 14 is no longer pure black; it’s going to be slightly lighter. Also, before we had a smooth transition between tones 4 and 5, whereas now there is a gap in tonal range between them, and transitions between tones may or may not become apparent.
Obviously this is a very simple interpolation. In order to retain image integrity, a more advanced algorithm is required: one that looks at surrounding pixels and makes informed decisions about what value the final pixel should be. There are numerous algorithms, including linear (very similar to the one I described), polynomial, spline, rational and trigonometric to name but a few, all of which have their own distinct characteristics, advantages and disadvantages. But the point is this: when we go from 12 bits to 16 bits (or from any lower bit count to a higher one), we are fundamentally changing the data and the underlying image, all so we can represent data that was never there in the first place! So why do we do it? Well, if we have a 12 bit image from the camera, we can either interpolate upwards, always retaining the 12 bit data as well as the new interpolated data, or we can throw away 4 of those bits and edit in 8 bit mode. For me, this is a no brainer: I never consciously throw away data, so I’ll interpolate upwards from 12 to 16 and edit in this space. By the way, some camera manufacturers, e.g. Hasselblad, now produce 16 bit RAW files.
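A slightly better rescaling, offered here only as a sketch (not any particular editor’s algorithm), maps the old maximum exactly onto the new maximum, so pure black stays pure black:

```python
def rescale(value, src_bits, dst_bits):
    """Map a value from src_bits depth to dst_bits depth,
    preserving the endpoints (pure white and pure black)."""
    src_max = (1 << src_bits) - 1   # e.g. 7 for 3 bits
    dst_max = (1 << dst_bits) - 1   # e.g. 15 for 4 bits
    return value * dst_max // src_max

# The 3 bit stream from before: 7 (pure black) now maps to 15, not 14.
print([rescale(v, 3, 4) for v in (0, 4, 5, 7)])  # -> [0, 8, 10, 15]

# The same idea going from a 12 bit capture to 16 bit editing mode.
print(rescale(4095, 12, 16))  # -> 65535
```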
This also brings up an interesting case where some people take an 8 bit image, convert it to 16 bit and then edit. In a way you’re fooling yourself: the original 8 bit data is the master, you have added interpolated detail which was never there in the first place, and then you’re editing it. Makes no sense to me.
8 bit vs 16 bit mode
This is an argument I don’t expect to settle with this one article. There are so many arguments for and against both sides. The 8 bit lobby will argue mainly on the premise of “what’s the point?”, whereas the 16 bit lobby will show histograms with spikes and gaps where data has been lost, etc. At the end of the day, only in the very rare case of a very badly edited image can you really see the difference between an 8 bit and a 16 bit image. If you take a shot that’s well exposed and apply the minimum amount of editing, then in reality the 8 bit version should be indistinguishable from the 16 bit one. There are numerous websites out there that challenge people to prove otherwise, and so far I haven’t seen anyone make a convincing case. However, if you have images that are underexposed and require heavy editing, where you need to perhaps drag detail from the shadows or recover highlights, then the extra bit depth can be very useful, purely and fundamentally because it contains more data. These are the rare cases, though.
I come at it from another angle, one against which I have so far never seen anyone come up with a counter-argument.
Storage is cheap
At the end of the day, a 16 bit image contains 100% more information (please note the word information) than an 8 bit image. This fact is indisputable. Disk space/storage these days is so cheap; in fact, where the cost of paper has more or less stayed constant over the last 10 years, the cost of storage has fallen by orders of magnitude, working out at less than $0.03 (US) per MB at the moment. So for me it’s a no brainer. I cannot understand why anyone would consciously decide to discard data by working in 8 bit mode, when it costs pretty much nothing to keep it in 16, even if the differences aren’t discernible. Besides, I have invested so heavily in my workflow, from lens through to body, so why wouldn’t I want to work in the biggest space possible?