Sharpening in Photoshop -- Part I

Article and Photography by Ron Bigelow

Photoshop CS or Photoshop CS2 Used in this Tutorial

One of the determinants of final image quality is how well an image is sharpened. In fact, as the print size is increased for any given image, the quality of the sharpening becomes ever more important. Yet, it is not uncommon for photographers, who spend thousands of dollars to get the best equipment available in hopes of getting very high quality images, to lose some of that quality at the end of the process by using less than ideal sharpening methods.

For many, the issue of sharpening starts and stops at Unsharp Mask (USM). Yet, USM is only the beginning. There are other sharpening tools. Perhaps more important, USM and the other tools can be used as the basis for far more sophisticated and powerful sharpening techniques.

It is important to set the purpose of this article up front. While this article will detail the proper use of USM and some of the other tools, its goal is not to simplify sharpening down to its easiest level. To do so would be to diminish the potential of the sharpening tools. In other words, the reader will not find advice to set the sharpening tools to such-and-such settings and click the OK button. While such advice may make life easier, it would not result in optimum quality images. Rather, optimum quality images are created when the photographer analyzes the sharpening options in relation to the content of the subject matter, the initial quality of the image, the method of image capture, and the output options (e.g., inkjet print, wet processing, or web display). Only such a careful analysis can determine the best approach to sharpening a particular image. Furthermore, what is optimum for one image is not likely optimum for another -- different images require different tools, settings, and approaches. The purpose of this article is to delineate the tools and methods, their use, and their strengths and weaknesses in relation to the particulars of images. In short, this article is a thinking person's guide to sharpening. To this end, this article will cover the following:

What Is Sharpness?

In practical terms, we understand the concept of sharpness. Even a person unschooled in photographic concepts can look at an image and tell you if it is sharp or not. However, the technical definition of sharpness is less well understood. Ask even an accomplished photographer to provide a precise, technical definition of sharpness, and you may get nothing more than a blank stare. Yet, understanding sharpness is integral to understanding sharpening tools and what they do. Further, understanding the sharpening tools is integral to creating the best possible photographic output.

Sharpness is actually determined by two factors: resolution and acutance. Resolution is more closely aligned with what most people think of as sharpness. Resolution is the ability to resolve fine detail. Resolution charts, with lines that get progressively finer, are often used to test the resolution of lenses. Resolution is generally measured in line pairs per millimeter (LP/mm). The more LP/mm that a lens can resolve, the greater the resolution of the lens. In practical terms, resolution is the ability to reveal detail. Photographic equipment of high resolution reveals fine detail while that of low resolution cannot. Thus, resolution is largely determined by the camera and lens (actions taken on an image after capture can degrade resolution, but no actions can create resolution that was not previously there). Sharpening software has little to do with resolution.

Figure 1: Acutance

Acutance has to do with contrast, in particular the contrast of adjacent, or nearly adjacent, pixels. The human eye and brain interpret light pixels lying next to dark pixels as an edge. The quicker the transition from light to dark (i.e., the greater the contrast), the sharper edges appear to be. This is demonstrated in Figure 1. In both Image 1 and Image 2, there is a light gray rectangle next to a dark gray rectangle. In Image 1, the transition between the light gray box and the dark gray box, as well as the transitions between both boxes and the green background, is quick. In this image, the contrast, and therefore the acutance, is high. As can be clearly seen, the edges appear sharp and well defined. Image 2 has the same configuration of boxes except that the transitions between the boxes, and between the boxes and the background, have been made more gradual. The contrast along the edges has been lowered. Image 2 has a low acutance. This low acutance is interpreted by the eye as a lack of sharpness.

The important point here is that the appearance of sharpness in Image 1 and the lack thereof in Image 2 has nothing to do with resolution or detail. It has everything to do with contrast along edges.
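The distinction between acutance and resolution can be sketched numerically. The snippet below is a minimal illustration (the pixel values are invented for the example): two one-dimensional strips cross the same light/dark boundary with identical tones on either side; only the speed of the transition differs, and a crude acutance measure (the largest jump between adjacent pixels) tells them apart.

```python
import numpy as np

# Two 1-D strips across the same light/dark boundary (0 = black, 255 = white).
# The tones on either side are identical; only the transition differs.
abrupt = np.array([200, 200, 200, 50, 50, 50], dtype=float)   # high acutance
gradual = np.array([200, 180, 140, 110, 70, 50], dtype=float)  # low acutance

def acutance(strip):
    """Crude acutance measure: the largest jump between adjacent pixels."""
    return np.max(np.abs(np.diff(strip)))

print(acutance(abrupt))   # 150.0 -- the eye reads this as a sharp edge
print(acutance(gradual))  # 40.0  -- same tones, but the edge looks soft
```

Both strips carry the same "detail" (a single boundary between the same two tones); only the edge contrast, and therefore the apparent sharpness, differs.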

We now come to one of the most important points about sharpening tools: sharpening tools increase the apparent sharpness of an image by increasing the acutance of the image. In other words, sharpening tools increase the contrast along edges. The corollary is that the sharpening tools do not in any way increase resolution or detail. They can only make detail that was already present in the image stand out by increasing the contrast along the edges of the detail.

This brings up an important aspect of sharpening. Sharpening cannot bring an out-of-focus image into focus. An out-of-focus image has low resolution. Since sharpening cannot increase resolution, it cannot rescue the image from its focus problem. Thus, sharpening works best with images that have crisp detail due to the use of quality equipment and proper photographic technique.

Why Is Sharpening Needed?

Figure 2: Sensor

So, sharpening increases image acutance. This raises the question, "Why is it necessary to increase the acutance of digital images in the first place?" After all, film doesn't require sharpening (unless it has been scanned). The key lies in the nature of the sensor. The sensors in digital cameras are made up of arrays of pixels. Each pixel measures the light at a tiny area on the sensor. Furthermore, except in the case of the Foveon sensor, each pixel measures only one color of light (either red, green, or blue). Figure 2 shows a diagram of a 16 x 16 pixel section of a sensor. Each colored box represents a pixel. An actual sensor would be composed of millions of these little pixels.

Figure 3: Iris

Now, imagine a flower as in Figure 3. In order to take a photograph of this flower with a digital camera, the light from the flower must pass through the camera lens and be projected onto the pixel array that constitutes the sensor. Of particular interest is what happens at the edges.

Figure 4: Iris Edge

Figure 4 shows an enlargement of the iris. A spot along the edge with very high contrast has been chosen. On the petal side of the edge, the tone is almost pure white, while the background side is almost pure black. Three points along this edge have been chosen for analysis. Point 1 is on the petal side of the edge and is composed of the white of the petal. Point 2 is on the background side of the edge and is composed of the black of the background. Point 3 is exactly on the edge. What needs to be looked at is how these points are recorded by the pixels. For the moment, we will think only in terms of black and white. The effects of color will be mentioned in a bit.

Figure 5: How Pixels Record Edges

Figure 5 compares the light that hits these pixels to how the pixels record the light. Pixel 1 receives white light from the white part of the flower. The pixel accurately records this as a white point. Pixel 2 receives little or no light from the black background. The pixel accurately records this as a black point. However, Pixel 3 has a problem; the edge crosses through this pixel. Part of the pixel receives white light from the white part of the flower, and part of the pixel receives little or no light from the black background. Unfortunately, the pixel cannot record part white and part black. It can record only one shade. Therefore, the pixel averages the light falling on it and records a gray tone. When the eye looks at the flower, it sees a crisp white/black edge. What the sensor sees is a softened white/gray/black edge. In essence, the pixels have lowered the acutance of the edge.
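This averaging effect can be simulated in a few lines. In the sketch below (the sub-sample counts and values are invented for the example), each pixel is modeled as four sub-samples of the light that falls on it; the pixel that straddles the edge can record only the average of what it receives.

```python
import numpy as np

# Light falling on three pixels, each modeled as 4 sub-samples
# (0 = black background, 255 = white petal).
pixel_1 = np.array([255, 255, 255, 255])  # entirely on the white petal
pixel_2 = np.array([0, 0, 0, 0])          # entirely on the black background
pixel_3 = np.array([255, 255, 0, 0])      # the edge cuts through this pixel

# A pixel can record only one value, so it integrates (averages) its light.
for name, p in [("Pixel 1", pixel_1), ("Pixel 2", pixel_2), ("Pixel 3", pixel_3)]:
    print(name, p.mean())
# Pixel 1 records 255 (white) and Pixel 2 records 0 (black), but
# Pixel 3 records 127.5 -- a gray tone that softens the edge.
```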

This is one of the two main reasons that images from digital cameras need to be sharpened: the acutance of edges is lowered by the way pixels record edges that cut across them, so crisp edges are not accurately recorded.

The other reason that images need to be sharpened is color and demosaicing. As mentioned above, each pixel can measure only one color of light. Actually, the pixels don't really measure color at all. The pixels are colorblind; they can neither see nor measure color. They measure only the intensity of light. In a sense, the pixels are only measuring tones of gray. So, how are the colors produced? Through filters and software magic. In the digital camera, a color filter array sits above the sensor. This array filters the light so that each pixel sees only one of three colors of light: some pixels see only red light, some only green light, and the others only blue light. To create the color seen in the image, software looks at each pixel and determines the light intensity of the color of light at that pixel. The software also looks at the light intensity at each of the pixel's neighboring pixels (which will have their own colored light levels). Using this information, the software calculates a color for each pixel and assigns that color value to the pixel. This process is called Bayer interpolation.
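A heavily simplified sketch of the interpolation idea is shown below. It assumes an RGGB Bayer pattern and plain neighbor averaging; real demosaicing algorithms (including the one in any given camera or raw converter) are far more sophisticated, but the principle of estimating a missing color from neighboring pixels is the same.

```python
import numpy as np

# A tiny 4x4 Bayer mosaic (RGGB pattern). Each cell holds only one
# intensity; the pattern says which color filter sat over that pixel:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
mosaic = np.array([
    [200, 120, 200, 120],
    [120,  40, 120,  40],
    [200, 120, 200, 120],
    [120,  40, 120,  40],
], dtype=float)

# Estimate the missing red value at the blue pixel at row 1, column 1
# by averaging its four diagonal red neighbors -- the essence of
# demosaicing: the value is interpolated, not measured.
red_at_blue = (mosaic[0, 0] + mosaic[0, 2] + mosaic[2, 0] + mosaic[2, 2]) / 4
print(red_at_blue)  # 200.0
```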

The Bayer interpolation can cause an acutance problem along edges where color transitions occur. For instance, assume that there is a dark green leaf against a bright, pale, blue sky. What the eye sees is a crisp, green/blue edge. On the other hand, the Bayer interpolation may interpolate an intermediate color that will soften the edge and result in a lowering of the acutance along the color edge.

In addition, processing of an image in an image editing program can further reduce the acutance of the image.

Regardless of the cause of the loss of acutance, the cure is sharpening of the image.

How Sharpening Works

Figure 6: Edge Histograms

Sharpening restores the acutance lost to digital capture or image processing. What sharpening does is make the dark side of edges darker and the light side of edges lighter. This is shown in Figure 6, which shows histograms of an edge. The pixels go from dark on the left side of the histograms to light on the right side.

The first histogram shows the tonal distribution of this edge as seen by the eye. The transition is abrupt. This is seen as a sharp edge. The second histogram shows the tonal distribution of the edge as captured by a digital camera. The edge transition is now more gradual (due to the reasons covered above). This has caused a loss of acutance. The edge would now appear unsharp. The third histogram shows the edge after sharpening. The sharpening has made the dark side of the edge even darker and the light side even lighter. Even though there is still some transition from one side of the edge to the other, the acutance has been significantly increased. The eye will now see the edge as being sharp again. In essence, sharpening uses an increase in edge contrast to trick the eye into seeing edges as sharper than they were at the time of capture by a digital camera.
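The darker-darks/lighter-lights effect can be demonstrated with a simple one-dimensional sharpening kernel. This is only an illustration of the principle (the kernel and pixel values are chosen for the example, and this is not how Photoshop's tools are configured): each pixel is boosted and its neighbors subtracted, which leaves flat areas alone but exaggerates the contrast at the edge.

```python
import numpy as np

# A soft edge as captured: the dark-to-light transition is gradual.
soft_edge = np.array([50, 50, 90, 160, 200, 200], dtype=float)

# A classic 1-D sharpening kernel: boost each pixel, subtract half of
# each neighbor. Flat areas are unchanged (−0.5 + 2 − 0.5 = 1).
kernel = np.array([-0.5, 2.0, -0.5])

# Ignore the two border pixels, where zero-padding distorts the result.
interior = np.convolve(soft_edge, kernel, mode="same")[1:-1]
print(interior)  # [ 30.  75. 175. 220.]
# The dark side dipped from 50 to 30 and the light side overshot from
# 200 to 220: the darker-darks / lighter-lights effect of Figure 6.
```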


How Unsharp Mask Works

Ever wonder how a tool designed to increase the sharpness of images came to be called USM? Actually, it is a throwback to the days of darkrooms. In the days when darkrooms reigned supreme and image editing programs did not exist, darkroom practitioners would take a negative and use it to create a second, somewhat blurred, negative. They would then sandwich the blurred negative with the original and print. This procedure would increase the sharpness of the print.

USM does essentially the same thing, only digitally -- which is faster and cheaper. What USM does is find edges and darken the dark side while lightening the light side (as shown in the third histogram in Figure 6). The problem is that USM isn't able to actually detect edges. Instead, it detects areas of high contrast. To do so, USM borrows from the darkroom technique: USM creates a copy of the original layer, blurs the copy, sandwiches the two copies, and calculates the difference in tonal values between the original and blurred images.
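The blur/subtract/add-back idea can be sketched in a few lines of code. This is a minimal one-dimensional illustration of the principle, not Photoshop's actual USM implementation: a box blur stands in for the Gaussian blur, and the `amount` parameter is an invented stand-in for USM's Amount setting.

```python
import numpy as np

def unsharp_mask_1d(original, amount=1.0):
    """Sketch of the USM idea on a 1-D strip of pixel values: blur a
    copy, subtract it to locate high-contrast areas, then add the
    scaled difference back to the original."""
    original = original.astype(float)
    # Simple 3-tap box blur standing in for a Gaussian blur.
    padded = np.pad(original, 1, mode="edge")
    blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3
    mask = original - blurred           # near zero in flat areas, large at edges
    return original + amount * mask     # darks get darker, lights get lighter

edge = np.array([50, 50, 50, 200, 200, 200])
print(unsharp_mask_1d(edge))  # [ 50.  50.   0. 250. 200. 200.]
```

Note that the mask (original minus blurred) is nonzero only near the edge: this is exactly the edge-finding behavior pictured in Figure 7, while the flat areas on either side are left untouched.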

Figure 7: How Unsharp Mask Detects Edges

Figure 7 illustrates how USM finds areas of high local contrast. Image 1 shows an edge that is less than completely sharp. Image 2 shows a copy of Image 1 that has been blurred. Image 3 shows the original image sandwiched with the blurred image; the values of the two images were subtracted. As can be seen in Image 3, areas of little contrast show up as dark, while areas of high contrast (the edge) show up as lighter. Thus, Image 3 shows the edge as a lighter line down the middle of the image. Once the areas of high local contrast have been identified by this technique, USM can increase the contrast of those areas in the original image, thus increasing the acutance.

Figure 8: Image with Many Edges

This procedure of identifying edges is shown in Figure 8 and Figure 9. Figure 8 shows an original image that has a lot of edges. Figure 9 shows how the procedure was able to identify the edges. In Figure 9, the edges show up as light while areas of no edges show up as dark.

Then, USM can proceed to increase the contrast along the edges. The result will be an image that appears much sharper than when originally captured by the digital camera.


Figure 9: Edges Identified for Further Sharpening


Sharpening -- Part II