Whenever you take a photo with a digital camera, the camera processes the raw data from its sensor and produces an image. This process is quite complex, but the camera performs it very quickly, and the photographer sees the end result on the camera’s LCD screen.
Image created in-camera
Many digital cameras provide settings, such as Contrast and Saturation, which allow the photographer to fine-tune this in-camera processing. But once the image is processed, the raw data is thrown away, and the photographer is left with only the processed image.
Some digital cameras provide the ability to save raw image data, and allow the photographer to use it instead of (or in addition to) the processed image. Simply stated, the benefit of keeping the raw data lies in the ability to perform the processing stage at a later time, in a considered and controlled way, as many times as required.
Image created with raw processor
Many parallels can be drawn between raw processing and film processing. Raw data can be seen as the Negative, and the finished image as the Print. In-camera processing is like using slide film, and raw processing is like making your own prints from negatives in a darkroom. Indeed, raw image files are sometimes called Digital Negatives.
Processed images are commonly distributed as JPEG files, because JPEG is an efficient, space-saving format. It achieves this by throwing away many minor details from an image and reducing the accuracy of the image’s colours. The end result is perfectly adequate for display on a computer monitor, digital frame, or projector screen.
Popular image manipulation software can work on both processed and raw images, but the inaccuracies already present in JPEG images will be exaggerated by further processing, and detail which has already been lost during in-camera processing can never be recovered.
Sometimes knowing how a process works helps us to understand something better. So, if you’re the sort of person who doesn’t want to know what happens under the bonnet when you start your car, you should skip this part. Otherwise, read on…
My 12MP camera takes photos 4288 pixels wide and 2848 pixels high; each pixel has a red, a green, and a blue (RGB) value. So, you might be excused for thinking that there are 36 million photocells, in groups of three, a bit like a TV screen. You’d be wrong. Half of the 12 million photocells in my camera measure only green; the other half, either red or blue. So, how does it produce a 12MP RGB image? By interpolation: it guesses how much of the missing red, green or blue there might be at each pixel site, based on the amounts in adjacent pixels.
Pixels on a digital camera sensor.
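The interpolation idea can be illustrated with a toy sketch. Here is a minimal example that fills in the missing green values on an RGGB Bayer mosaic by averaging the neighbouring green photocells; real cameras use far more sophisticated, edge-aware demosaicing algorithms, so treat this as an illustration only.

```python
# Illustrative sketch of green-channel interpolation on a Bayer (RGGB) mosaic.
# Not a real camera's demosaicing algorithm, which is much more sophisticated.

def demosaic_green(bayer):
    """Fill in missing green values by averaging the green neighbours."""
    h, w = len(bayer), len(bayer[0])
    green = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (y % 2, x % 2) in ((0, 1), (1, 0)):   # green photocells in RGGB
                green[y][x] = bayer[y][x]
            else:                                     # red or blue photocell
                neighbours = [bayer[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w]
                green[y][x] = sum(neighbours) / len(neighbours)
    return green

# A flat grey scene: every photocell reads 100, so every guessed green is 100 too.
mosaic = [[100] * 4 for _ in range(4)]
assert all(v == 100 for row in demosaic_green(mosaic) for v in row)
```

On a uniform scene the guesses are exact; it is around edges and fine detail that interpolation has to work hardest, which is why demosaicing quality varies between cameras and raw converters.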
Cameras mostly record reflected light, whether from the sky or from man-made sources. The colour of these light sources is rarely ‘white’, but a mixture of other colours. Humans unconsciously take the colour of the light source into account, and see a balanced image. Cameras try to copy this by guessing the light’s colour from the scene they capture, with varying degrees of success, and use that guess to adjust the colours of the image to produce a balanced photo.
Church lit by a yellow sun and a blue sky; Table lit by a yellow-green fluorescent light.
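One classic way to sketch this guessing is the “grey world” assumption: assume the scene, on average, is neutral grey, and scale each colour channel to make it so. This is just one simple illustration of automatic white balance, not necessarily what any particular camera does.

```python
# A minimal "grey world" white balance sketch: assume the scene averages to
# neutral grey, and scale each channel so that it does. Illustration only.

def grey_world_balance(pixels):
    """pixels is a list of (r, g, b) tuples; returns colour-corrected pixels."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]   # per-channel mean
    grey = sum(avg) / 3                                       # target neutral level
    gains = [grey / a for a in avg]                           # correction gains
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A scene lit by a yellowish source: red and green run high, blue runs low.
scene = [(200, 180, 100), (100, 90, 50)]
balanced = grey_world_balance(scene)
```

After correction the three channel averages are equal, so the overall cast is removed. The assumption fails, of course, when the scene genuinely isn’t grey on average, which is one reason auto white balance succeeds only “with varying degrees of success”.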
Each photocell measures the number of photons which fall on it, and the value recorded is directly proportional to the amount of light it receives. Human sight is different. Our brains record *relative* brightness, measured in Stops or EV. So, another job the camera performs is to translate linear pixel brightness into relative brightness. This is technically known as Gamma Conversion (a roughly logarithmic mapping), and it has the side effect of losing highlight detail.
Nice enough, but, if I’m honest, it wasn’t this colourful.
As well as this, the camera does even more adjustments, based on aesthetic preferences for sharpness, contrast, and colour saturation; and it makes all the brightest areas white and the darkest areas black. The end result is a pleasing image, closer to the viewer’s perception/memory than what the camera actually recorded. And, when it creates the JPEG, the camera compresses the finished image by throwing away some fine detail and reducing the tonal resolution.
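The “brightest areas white, darkest areas black” step is essentially a levels stretch. A minimal sketch (assuming an 8-bit output range and an image with some tonal variation):

```python
# Sketch of a levels stretch: map the darkest tone to black and the brightest
# to white, spreading everything else proportionally across an 8-bit range.
# Assumes the tones are not all identical.

def stretch_levels(values, out_max=255):
    """Stretch tones so the darkest becomes 0 and the brightest out_max."""
    lo, hi = min(values), max(values)
    return [round((v - lo) * out_max / (hi - lo)) for v in values]

tones = [40, 100, 180, 220]
stretched = stretch_levels(tones)   # → [0, 85, 198, 255]
```

This is exactly the kind of irreversible decision that in-camera processing bakes in: once the extremes are clipped to black and white, whatever detail lay beyond them is gone from the JPEG.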
Making changes to the finished image is possible, but not ideal, as it exaggerates the inaccuracies already present. It’s better to go through the raw conversion process again. Things like exposure and white balance correction are more easily done early, and produce better quality results.
Landscape photo with and without graduated filter processing.
Modern raw conversion software has a host of photography-based tools to reproduce things like graduated filters and sensor spot removal, as well as chromatic aberration and vignette correction — all better done during raw processing — and what’s more, it can do this to lots of photos at once, in just a few seconds. This is because a major advantage of working in raw is that image processing settings are stored as instructions, and the image data itself is left untouched.
These raw processing settings can be revised at will, or removed altogether, and they take up very little space on your computer, compared with, say, Photoshop documents (PSDs). So, the larger sizes of raw files become less of a disadvantage when post-processing.
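The “settings as instructions” idea can be sketched as a small sidecar of adjustments that is applied on demand, leaving the raw data untouched. The field names and the single exposure adjustment here are my own invention, purely for illustration:

```python
# Sketch of non-destructive raw processing: the settings live separately from
# the raw data, as a small recipe of instructions. Field names are hypothetical.

settings = {"exposure_ev": 0.5, "white_balance": "daylight"}

def develop(raw_values, settings):
    """Apply the stored instructions to produce a fresh image from the raw data."""
    gain = 2 ** settings["exposure_ev"]     # +1 EV doubles the brightness
    return [v * gain for v in raw_values]

raw = [10, 20, 30]                 # the raw data, never modified
image = develop(raw, settings)     # re-develop as often as you like
assert raw == [10, 20, 30]         # the 'negative' is left exactly as it was
```

Revising a setting just means editing the recipe and developing again; the recipe itself is a few bytes, which is why these stored instructions cost so little disk space compared with saving a full edited copy of the image.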
Keith Nuttall, 2009