Sensors and noise

Digital SLRs capture images on electronic sensors: large arrays of tiny light-sensitive diodes called photosites. Each photosite sits under a color filter, so it is sensitive to only one of the three primary colors: red, green, or blue. This filter arrangement – called a Bayer pattern – has twice as many green photosites as red or blue ones, because the human eye is most sensitive to green light.
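
A toy sketch of this layout in Python (the RGGB tile shown here is one common arrangement; cameras differ in how the tile is oriented):

```python
from collections import Counter

def bayer_filter(row, col):
    """Return the color filter over the photosite at (row, col) in an RGGB tile."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Build a small 4x4 patch of the sensor and count the filters.
pattern = [[bayer_filter(r, c) for c in range(4)] for r in range(4)]
for line in pattern:
    print(" ".join(line))

counts = Counter(f for line in pattern for f in line)
print(counts)  # green appears twice as often as red or blue
```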

Full frame sensors are the same size as one negative on a 35mm roll of film: 24mm x 36mm. Canon's cropped sensors, also called APS-C sensors, measure 14.8mm x 22.2mm, an area only about 38% of a full frame.

The inner rectangle below is a Canon APS-C 1.6x crop factor sensor, shown inside a full frame sensor. You can immediately see why cropped sensors result in an increased effective focal length: the sensor captures only a fraction of the full frame image, which is equivalent to zooming in. For this reason, many photographers prefer cropped sensors for sports and wildlife photography, where extra reach is always on the wish list. Landscape photographers who need to show sweeping vistas often prefer full frame sensors.
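
The crop factor itself is simply the ratio of the two sensors' diagonals, which you can verify from the dimensions above; a quick sketch:

```python
import math

# Sensor dimensions in mm, from the text above.
FULL_FRAME = (36.0, 24.0)
CANON_APS_C = (22.2, 14.8)

def crop_factor(sensor, reference=FULL_FRAME):
    """Crop factor is the ratio of the sensors' diagonals."""
    return math.hypot(*reference) / math.hypot(*sensor)

factor = crop_factor(CANON_APS_C)
print(f"Crop factor: {factor:.2f}")  # ~1.62, marketed as 1.6x

area_ratio = (CANON_APS_C[0] * CANON_APS_C[1]) / (FULL_FRAME[0] * FULL_FRAME[1])
print(f"Area ratio:  {area_ratio:.0%}")  # ~38% of a full frame

# Full-frame-equivalent field of view of a 200mm lens on APS-C:
print(f"200mm lens behaves like {200 * factor:.0f}mm")
```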

[Figure: Canon APS-C 1.6x crop sensor shown as the inner rectangle inside a full frame sensor]

When you release the shutter, photons matching the color of the filter over each photosite enter the site and are converted into electrons. The resulting voltage is then converted by your camera’s analog-to-digital converter (ADC) circuit into the pixel values that come together to form the entire picture. How many levels of voltage the circuitry can differentiate depends on the capture bit depth; for Canon cameras, this is 14 bits, which means that 16,384 levels of brightness can be detected per photosite. If you shoot RAW, you’ll be able to use this entire range in post-processing, but if you shoot JPEG, you’ve reduced your bit depth to 8, leaving only 256 levels to work with, which can result in severe banding. I’ll discuss this in more detail in a workflow article.
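
To see where banding comes from, compare the number of levels at each bit depth, and watch distinct 14-bit readings collapse when quantized to 8 bits (the sample values below are made up for illustration):

```python
def levels(bit_depth):
    """Number of distinct brightness levels at a given bit depth."""
    return 2 ** bit_depth

print(levels(14))  # 16384 levels in a 14-bit RAW file
print(levels(8))   # 256 levels in a JPEG

# Four distinct 14-bit readings, quantized to 8 bits by dropping
# the 6 least significant bits: nearby values collapse into one,
# and smooth gradients turn into visible bands.
raw_values = [5000, 5021, 5042, 5063]
jpeg_values = [v >> 6 for v in raw_values]
print(jpeg_values)  # [78, 78, 78, 79]
```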

All electronic components have thermally excited “hot” electrons that contribute to background noise. What this means is that even if you leave the cap on your lens before releasing the shutter, the camera’s ADC will still detect a voltage level above 0 and pass it on as a valid pixel. Your eyes, however, will perceive this as unwanted junk in what should have been a perfectly black exposure.
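
A toy simulation of such a “dark frame” (the noise-floor numbers are invented for illustration, not taken from any real camera):

```python
import random

random.seed(42)

def dark_read():
    """One photosite reading with the lens cap on: zero photons,
    yet electronic noise still produces a small positive value.
    The mean of 8 and spread of 3 are hypothetical."""
    return max(0.0, random.gauss(mu=8, sigma=3))

dark_frame = [dark_read() for _ in range(10_000)]
mean_level = sum(dark_frame) / len(dark_frame)
print(f"Average dark-frame level: {mean_level:.2f}")  # well above 0
```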

Let’s call the number of photons that a photosite has captured the “signal”, and the always-present signature of the camera’s electronic components, caused by hot electrons and other effects, the “noise”. The signal-to-noise ratio, or SNR, is what defines perceived noise in photos. The higher the SNR, the less noisy your photo will look, and this is why dark areas of your photo – such as a night sky or deep shadows – are always noisier than bright areas: the signal is lower in dark areas, while the noise floor stays the same.
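
Photon arrival is well modeled by Poisson statistics, so shot noise grows as the square root of the signal; adding a fixed electronic noise floor (the read-noise value below is hypothetical) shows why shadows fare worst:

```python
import math

def snr_db(photons, read_noise=10.0):
    """SNR in decibels: shot noise is sqrt(photons), plus a fixed
    electronic noise floor (read_noise, a made-up figure here)."""
    noise = math.sqrt(photons + read_noise ** 2)
    return 20 * math.log10(photons / noise)

shadow = snr_db(100)       # deep shadow: few photons
midtone = snr_db(5_000)
highlight = snr_db(50_000) # bright area: many photons
print(f"Shadow:    {shadow:.1f} dB")
print(f"Midtone:   {midtone:.1f} dB")
print(f"Highlight: {highlight:.1f} dB")
```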

Larger sensors usually have larger photosites, so a full frame sensor gathers more photons per photosite than a cropped sensor does. To your eyes, therefore, photos taken with a full frame camera will show less visible noise than those taken with cropped sensor cameras.
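
To put rough numbers on this, here is the pixel pitch of a hypothetical 24-megapixel sensor at each size (real cameras vary in resolution and photosite design):

```python
def pitch_microns(width_mm, height_mm, megapixels=24):
    """Approximate photosite pitch: sensor area divided evenly
    among the pixels (ignores gaps and circuitry between sites)."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return (area_um2 / (megapixels * 1e6)) ** 0.5

full = pitch_microns(36, 24)      # full frame
crop = pitch_microns(22.2, 14.8)  # Canon APS-C
print(f"Full frame pitch: {full:.1f} um, APS-C pitch: {crop:.1f} um")
print(f"Photon-gathering area per site: {(full / crop) ** 2:.1f}x larger")
```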

When you increase the ISO, the camera amplifies the signal coming off each photosite before it reaches the ADC. You haven’t made the photosites any more sensitive to photons; you’ve amplified the signal and the background noise together. Sensor designs are constantly being improved, so newer models will have less noise at high ISO, but your aim must still be to use the lowest ISO possible.
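
A back-of-the-envelope illustration with made-up numbers: gain multiplies signal and noise alike, so the SNR doesn’t budge, because amplification is not more light:

```python
# Hypothetical photosite output at base ISO (illustrative units).
signal, noise = 100.0, 10.0

snrs = []
for iso_gain in (1, 2, 4):  # e.g. ISO 100 -> 200 -> 400
    amplified_signal = signal * iso_gain
    amplified_noise = noise * iso_gain
    snr = amplified_signal / amplified_noise
    snrs.append(snr)
    print(f"gain x{iso_gain}: signal={amplified_signal:.0f} "
          f"noise={amplified_noise:.0f} SNR={snr:.0f}")
# SNR stays at 10 at every gain setting.
```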

Given all of the above, you now understand why, in the article on exposure, I mentioned that you need to expose to the right while preserving your highlights: if you underexpose a photo, the SNR is low, so the photo is inherently noisy, and if you try to increase the exposure in post-processing, you’ll amplify the noise along with the signal. One best-selling author has mentioned in one of his photography books: “if you’re shooting RAW, you can underexpose the photo by 2 stops, knowing that you can always fix this in post-processing.” Nothing could be further from the truth! Don’t believe everything that pros throw at you!
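
A rough sketch of why the “2 stops under” advice fails, using the same shot-noise-plus-read-noise model as before (the read-noise figure is hypothetical): pushing exposure in post multiplies signal and noise by the same factor, so the SNR stays stuck at the underexposed value.

```python
import math

def snr(photons, read_noise=10.0):
    """Shot noise plus a hypothetical fixed read-noise floor."""
    return photons / math.sqrt(photons + read_noise ** 2)

proper = 4_000        # photons captured at a correct exposure
under = proper // 4   # 2 stops under: a quarter of the light

print(f"Proper exposure SNR: {snr(proper):.1f}")
print(f"2 stops under SNR:   {snr(under):.1f}")  # much lower
# A +2 stop push in post scales signal and noise by 4x each,
# so the pushed photo keeps the lower SNR.
```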

ID: noise