DOI: 10.14714/CP103.1901
© by the author(s). This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0.
Bernhard Jenny, Monash University | bernie.jenny@monash.edu
Ambient occlusion is widely used in computer graphics to accentuate areas in a three-dimensional scene that are hidden from bright lighting. On maps, ambient occlusion adds spatial depth and nuanced texture to shaded relief images. Ambient occlusion images for maps can be calculated from digital elevation models using the Eduard relief shading application. This paper illustrates the use of ambient occlusion for cartographic terrain shading and introduces two extensions: (1) oriented ambient occlusion for modulation with a direction of illumination, and (2) reduced ambient occlusion for valley bottoms with consistent brightness.
Ambient occlusion is widely used in video games and other graphics applications to add depth to computer-generated three-dimensional scenes. This article describes the application of ambient occlusion for terrain shading to enhance subtle details of a terrain surface. The best results are produced by combining ambient occlusion with traditional shaded relief images; the occlusion layer adds a nuanced texture that accentuates narrow drainages and other terrain features that are occluded from lighting (Figure 1). Ambient occlusion works best with detailed high-resolution elevation models for shaded relief at large and medium scales.
Figure 1. Ambient occlusion combined with shaded relief images. From top to bottom: Gore Range, Colorado (1 meter resolution); Massanutten, Virginia (10 meter); and Crater Lake, Oregon (3.3 meter). Elevation models from Kennelly et al. (2021).
Ambient occlusion is added at the end of the rendering procedure. After a two-dimensional shaded image is computed, an occlusion value is calculated for each pixel and used to locally darken the image.
In the time since Bavoil, Sainz, and Dimitrov (2008) introduced a computationally efficient method for ambient occlusion, many extensions to their algorithm have been proposed, but all extensions are variations of the following concept: for each pixel in an image, a search is conducted for occluding objects within a hemisphere centered on the corresponding location in the three-dimensional scene. If there are occluding objects around this location, the brightness of the pixel is reduced; if there are no occluding objects, the brightness of the pixel is unaltered.
Applying this concept to cartographic terrain shading, imagine a pixel in the bottom of a narrow valley. This pixel is surrounded by many occluding terrain elements like valley slopes and mountain ridges, resulting in a high ambient occlusion value. In contrast, a pixel on an exposed mountain ridge has neighboring terrain elements that are either located at elevations lower than the pixel’s elevation or at distances exceeding the radius of the hemisphere. No other terrain elements occlude the pixel on the mountain ridge, so its brightness is not reduced.
All terrain visualizations in this article were created with Eduard (eduard.earth), a software application for producing shaded relief, ambient occlusion shading, and other types of cartographic terrain visualizations. Once these have been generated, Eduard can combine them with a variety of blending modes, masks, and adjustment curves. Eduard has a convenient, user-friendly interface for adjusting ambient occlusion parameters (Figure 2).
Figure 2. Rendering and combining ambient occlusion and shaded relief layers with Eduard. Adjustable ambient occlusion parameters include (in the sidebar on the right side) scan radius, aperture angle and direction for oriented ambient occlusion, amount of terrain details, reduction of ambient occlusion in flat areas to prevent dark valley bottoms, and anti-aliasing. swissALTI3D elevation model of Glarus by swisstopo downsampled to 10-meter resolution.
This article is structured as follows: the next section discusses how ambient occlusion is calculated. The article then introduces oriented ambient occlusion modulated with a direction of illumination and discusses a method for reducing ambient occlusion in flat areas. Appendix A provides details about the algorithm for the latter method.
When rendering ambient occlusion for an orthographic shaded relief, the algorithm starts with a white image that is to receive ambient occlusion. The algorithm visits each pixel and repeatedly decreases the brightness of the pixel when an occluding object is found. To search for occluding objects, elevation values are sampled along radially arranged rays (Figure 3, left). Along each ray, elevation is extracted from the elevation model at a series of sampling points to determine whether that elevation is occluding the central pixel.
Figure 3. Orthographic top-down view of an elevation model. The small circles are the sampling points for computing ambient occlusion for the central pixel. This example has 20 sampling rays and a scan radius r equal to three grid cells. Left: circular sampling. Right: oriented sampling with an aperture angle 𝛼 of 135° oriented along a top-left illumination direction indicated by the dashed line.
Various heuristics for choosing a sampling distance along the rays are possible. A constant sampling distance equal to the cell size of the elevation model results in visually pleasing images. Elevation values are sampled with bilinear interpolation.
Experiments with various numbers of radial rays found that for terrain shading, 20 rays with an angular spacing of 360° ÷ 20 = 18° produce smooth visual results. Fewer sampling rays can result in visual noise.
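To illustrate this sampling scheme, the following minimal sketch generates the radial sampling points and reads elevations with bilinear interpolation. It assumes a NumPy elevation grid with square cells; the function and parameter names are illustrative and do not reflect Eduard's implementation.

import numpy as np

def bilinear_sample(dem, x, y):
    """Elevation at fractional grid coordinates (x, y), clamped to the grid."""
    x = min(max(x, 0.0), dem.shape[1] - 1.0)
    y = min(max(y, 0.0), dem.shape[0] - 1.0)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, dem.shape[1] - 1), min(y0 + 1, dem.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = dem[y0, x0] * (1 - fx) + dem[y0, x1] * fx
    bottom = dem[y1, x0] * (1 - fx) + dem[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

def radial_sampling_points(px, py, n_rays=20, scan_radius=25):
    """Sampling points along radial rays with a constant spacing of one grid cell
    (Figure 3, left). Yields one list of fractional (x, y) coordinates per ray."""
    for i in range(n_rays):
        azimuth = i * 2.0 * np.pi / n_rays        # 20 rays -> 18-degree spacing
        steps = np.arange(1, scan_radius + 1)     # one sample per grid cell
        yield list(zip(px + steps * np.cos(azimuth),
                       py + steps * np.sin(azimuth)))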
The scan radius r is the primary parameter when computing an ambient occlusion image. As illustrated in Figure 4, a short scan radius accentuates narrow valleys and escarpments, whereas a long scan radius darkens the slopes of larger valleys and adds a grayscale gradient to valley bottoms.
Figure 4. Ambient occlusion with a scan radius of 7 (left), 25 (middle), and 100 pixels (right). Great Sand Dunes, Colorado, 3.3-meter elevation model from Kennelly et al. (2021).
To determine whether a sampled elevation is occluding the central pixel, the vertical gradient of the line connecting the elevation at the central pixel with the elevation of the sample point is compared to the vertical gradient of the line to the previously sampled position. If the new gradient is greater than the previous gradient, then the new position is occluding the central pixel and its brightness value is reduced. The amount of brightness reduction is proportional to the vertical elevation distance between the new and previous elevation, multiplied by an attenuation factor that decreases the occlusion effect of distant elevations. Additional details about the algorithm can be found in the presentation and conference abstract by Bavoil, Sainz, and Dimitrov (2008).
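Continuing the sketch above, the occlusion test along a single ray could look as follows. The linear distance attenuation and the missing normalization are simplifications for illustration; the exact per-sample weights are those of Bavoil, Sainz, and Dimitrov (2008), restated in Appendix A.

def darkening_along_ray(dem, px, py, ray_points, cell_size, scan_radius=25):
    """Accumulate darkening for the pixel at (px, py) along one sampling ray,
    reusing bilinear_sample() and the coordinates from radial_sampling_points()."""
    z_center = bilinear_sample(dem, px, py)
    max_gradient = 0.0       # steepest gradient to an occluder found so far;
                             # samples below the pixel never occlude in this sketch
    z_occluder = z_center    # elevation of the last occluding sample
    darkening = 0.0
    for x, y in ray_points:
        distance = np.hypot(x - px, y - py) * cell_size
        z = bilinear_sample(dem, x, y)
        gradient = (z - z_center) / distance
        if gradient > max_gradient:
            # The sample occludes the pixel: darken in proportion to the rise
            # since the previous occluder, attenuated with distance.
            attenuation = 1.0 - distance / (scan_radius * cell_size)
            darkening += attenuation * (z - z_occluder)
            max_gradient = gradient
            z_occluder = z
    return darkening    # summed over all rays and mapped to a gray value per pixel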
Conventional ambient occlusion scans the area around each pixel in a circular pattern to detect occluding objects (Figure 3, left). This results in ambient occlusion images that are invariant to the direction of illumination (Figure 5, top left).
Figure 5. Ambient occlusion with a 360° scan angle (top row) and oriented ambient occlusion with a 135° scan angle with a northwest illumination orientation (bottom row). Ambient occlusion images are combined with a shaded relief (right column) with the multiply blending mode. A curves adjustment compensates for the darkening of the multiply blending mode. Churfirsten, Switzerland, 30-meter elevation model from Kennelly et al. (2021).
Alternatively, ambient occlusion can be modulated with a direction of illumination, in which the sampling rays are laid out in a fan-like arrangement (Figure 3, right). The example in Figure 3 applies a standard illumination direction for shaded relief from the top left, indicated by the dashed line. The rays radiate accordingly around the opposing direction towards the bottom right. The aperture angle 𝛼 in this example is 135°. The aperture angle and the orientation of the radiating rays can be adjusted to any direction of illumination.
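Continuing the earlier sketches, a fan of ray azimuths could be generated as follows; the angle convention and parameter names are illustrative, and a 360° aperture reproduces the circular arrangement.

def fan_azimuths(center_azimuth_deg, aperture_deg=135.0, n_rays=20):
    """Ray azimuths spread evenly across an aperture centered on a chosen
    orientation (indicated by the dashed line in Figure 3, right)."""
    offsets = (np.arange(n_rays) + 0.5) / n_rays - 0.5   # fractions in (-0.5, 0.5)
    return np.radians(center_azimuth_deg + offsets * aperture_deg)

These azimuths can replace the uniformly spaced azimuths in the earlier sampling sketch.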
The resulting ambient occlusion image is shown in the bottom left of Figure 5. Compared to the ambient occlusion image with a circular 360° arrangement (Figure 5, top left), the brightness contrast with oriented ambient occlusion is stronger. Narrow valleys, gullies, and rock structures are more strongly accentuated. With oriented ambient occlusion, shades on valley slopes are lopsided; slopes with a northwest exposure remain bright, whereas slopes with a southeast exposure are darkened.
In the right column of Figure 5, the two ambient occlusion images are combined with a shaded relief generated by a neural network imitating manual relief shading (Jenny et al. 2020). A moderate amount of aerial perspective is also applied (Jenny and Patterson 2021). Both blended images contain informative texture details, but the image with the 360° arrangement loses some of the original shading's clear structure of illuminated and shaded slopes because the illuminated slopes of major landforms are darkened (Figure 5, top right). The combination of shaded relief with oriented ambient occlusion (Figure 5, bottom right) shows a stronger contrast between illuminated and shaded slopes of large landforms and more clearly accentuates narrow drainages and other terrain details.
Standard ambient occlusion can result in flat plains and lakes with varying brightness, which can be visually pleasing for waterbodies or wide valley bottoms. However, it can also be distracting when bright flat plains transition into narrow valleys, which are rendered considerably darker. Additionally, dark valley bottoms are likely to reduce the visual contrast with overlaying map features and result in a map that is difficult to read. An example of a dark valley bottom can be seen in Figure 4 (right). Figure 6 (top) is an example of an alpine lake that is excessively dark.
Figure 6. Ambient occlusion darkening a lake and a valley bottom (top); ambient occlusion reduced in flat areas (bottom). Scan radius r = 200 pixels, aperture angle 𝛼 = 200°. Klöntalersee, Switzerland, swissALTI3D elevation model by swisstopo downsampled to a 10-meter resolution.
Dark valley bottoms can be brightened by applying an adjustment curve with a mask for flat areas, but reliably identifying coherent flat valley bottoms with an algorithmic method is surprisingly difficult. Instead, the ambient occlusion algorithm itself can be adjusted to reduce the darkening of valley bottoms. The dark flat surfaces result from occluding sampling points that are both high relative to the valley bottom and close to the flat valley. Appendix A introduces a computational method that decreases the occluding effect of elevations that are (a) particularly high when compared to the elevation of a flat plain, and (b) particularly close to a flat plain. Figure 6 (bottom) illustrates the effect of this adjustment. Figure 2 shows a larger geographic extent of the same region at a slightly different scale rendered with a combination of shaded relief and an ambient occlusion image with reduced occlusion in flat areas.
Ambient occlusion is fast to compute on a graphics card because occlusion values can be calculated for many pixels simultaneously. For example, calculating ambient occlusion for an elevation model of 5,000 × 5,000 pixels with a radius of 30 cells and 20 scanning rays takes less than one second on an Apple Mac computer with an M1 chip.
As illustrated by the figures in this article, combining ambient occlusion with shaded relief enhances subtle and nuanced terrain details, which is most effective at large and medium scales. A detailed high-resolution elevation model is required to accentuate small details, such as narrow drainages and rock escarpments, which especially benefit from ambient occlusion. Oriented ambient occlusion is useful to preserve contrast between illuminated and shaded slopes of large landforms and accentuate valleys, while adjusting the ambient occlusion for flat areas avoids dark valley bottoms with irregular brightness. Eduard is a tool that can efficiently produce ambient occlusion and shaded relief images that are visually pleasing and effective.
Special thanks to Brooke E. Marston for editing this article and suggesting valuable improvements, Tom Patterson for his insightful feedback and tireless software testing, Practical Cartographer’s Corner editor Alicia Cowart for her suggestions for improving this article, Assistant Editor Daniel Huffman for typesetting this article, and the anonymous reviewer for feedback on the appendix of this article.
The author is one of the developers of the Eduard app. He does not receive any financial benefit from the sales of Eduard.
Bavoil, Louis, Miguel Sainz, and Rouslan Dimitrov. 2008. “Image-space Horizon-based Ambient Occlusion.” SIGGRAPH 2008, Los Angeles, California, August 11–15, 2008. New York: Association for Computing Machinery. https://doi.org/10.1145/1401032.1401061.
Jenny, Bernhard, Magnus Heitzler, Dilpreet Singh, Marianna Farmakis-Serebryakova, Jeffery Chieh Liu, and Lorenz Hurni. 2020. “Cartographic Relief Shading with Neural Networks.” IEEE Transactions on Visualization and Computer Graphics 27 (2): 1225–1235. https://doi.org/10.1109/TVCG.2020.3030456.
Jenny, Bernhard, and Tom Patterson. 2021. “Aerial Perspective for Shaded Relief.” Cartography and Geographic Information Science 48 (1): 21–28. https://doi.org/10.1080/15230406.2020.1813052.
Kennelly, Patrick J., Tom Patterson, Bernhard Jenny, Daniel P. Huffman, Brooke E. Marston, Sarah Bell, and Alexander M. Tait. 2021. “Elevation Models for Reproducible Evaluation of Terrain Representation.” Cartography and Geographic Information Science 48 (1): 63–77. https://doi.org/10.1080/15230406.2020.1830856.
Shary, Peter A., Larisa S. Sharaya, and Andrew V. Mitusov. 2002. “Fundamental Quantitative Methods of Land Surface Analysis.” Geoderma 107 (1–2): 1–32. https://doi.org/10.1016/S0016-7061(01)00136-7.
This appendix provides details about the algorithm for reducing ambient occlusion in flat areas to prevent valley bottoms from becoming excessively dark. Ambient occlusion by Bavoil, Sainz, and Dimitrov (2008) is attenuated for pixels in flat areas and pixels occluded by close elevations.
The notation follows the presentation by Bavoil, Sainz, and Dimitrov (2008), slide 24, “Per-Sample Attenuation.” The recording of their SIGGRAPH 2008 presentation, available as supplementary material to their paper, is recommended for an introduction to the ambient occlusion algorithm. The presentation contains diagrams that illustrate the various components of ambient occlusion computation.
The amount of per-pixel darkening for regular ambient occlusion WAO is initialized with WAO = 0, and WAO is increased each time an occluding surface point is found when stepping along a scan line (Equation 1).
WAO += W(S2) ∙ (AO(S2) - AO(S1)) (1)
Equation 1 was proposed by Bavoil, Sainz, and Dimitrov (2008). S1 is the last sample point that was found to occlude the pixel, and S2 is the current sample point that is occluding the pixel. AO(S1) and AO(S2) are the ambient occlusions contributed by S1 and S2. W(S2) is the attenuating weight applied to the difference between AO(S1) and AO(S2). Equations for AO(S1), AO(S2), and W(S2) can be found in the presentation by Bavoil, Sainz, and Dimitrov.
Equation 1 is extended with an additional dimensionless attenuation factor f, which incorporates a measure of flatness near the pixel under consideration (Equation 2).
WAO += (W(S2) ∙ (AO(S2) - AO(S1))) ÷ f, and f ≥ 1 (2)
The attenuation factor f equals 1 when the slope at the central pixel is steeper than a user-definable gradient threshold Gmax, or when the occluding sample point S2 is farther away than a user-definable distance threshold Dmax.
What follows is a description of the calculation of the attenuation factor f. We first apply a Gaussian blur filter with 𝜎 = 1 to the elevation model. The gradient G at the central pixel is then computed from the blurred elevation model (Shary, Sharaya, and Mitusov 2002) and normalized with the gradient threshold parameter Gmax to obtain a normalized gradient g between 0 and 1 (Equation 3). Note that the normalized gradient g is computed for the central pixel and does not vary along the sampling ray.
g = G ÷ Gmax, and g ≤ 1 (3)
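As an illustration, the normalized gradient could be computed as follows; a simple finite-difference gradient from NumPy stands in here for the formulas of Shary, Sharaya, and Mitusov (2002), and the function name is illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_gradient(dem, cell_size, g_max):
    """Per-pixel normalized gradient g of Equation 3, clamped to 1."""
    blurred = gaussian_filter(dem, sigma=1.0)          # Gaussian blur with sigma = 1
    dz_dy, dz_dx = np.gradient(blurred, cell_size)     # rise over run along each axis
    gradient = np.hypot(dz_dx, dz_dy)                  # gradient magnitude G
    return np.minimum(gradient / g_max, 1.0)           # g = G / Gmax, and g <= 1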
A normalized distance value d between 0 and 1 is computed from the distance D between the central pixel and the occluding sample point S2. Dmax is the threshold parameter for the distance (Equation 4).
d = D ÷ Dmax, and d ≤ 1 (4)
The normalized gradient and the normalized distance are combined (Equation 5).
w = √(g² + d²), and w ≤ 1 (5)
In Equation 5, w is a dimensionless weight that we limit to a maximum value of 1. If w is close to 0, the occluding elevation is close to the central pixel and the central pixel is flat; if w is close to 1, the occluding elevation is far away and/or the slope at the central pixel is far from being flat. The attenuation factor f is finally computed with a linear mapping and a user-definable attenuation parameter a using Equation 6. The attenuation factor f reduces normal ambient occlusion if the pixel is in a flat area or if there are occluding elevations close to the pixel.
f = a - (a - 1) ∙ w, and f ≥ 1 (6)
There are three user-definable parameters (Gmax, Dmax, a) for this calculation. Gmax is the gradient threshold; for pixels with a slope flatter than Gmax, the ambient occlusion is reduced. Dmax is the distance threshold; occluding elevations closer than Dmax result in reduced ambient occlusion. The attenuation parameter a controls how much the brightness of flat areas is adjusted. To simplify the graphical user interface for controlling the adjustment of flat areas, Gmax and Dmax can be derived from a with Gmax = tan⁻¹(2a) and Dmax = r/2, where r is the scan radius. These heuristics reduce the user interface to a single adjustable value and produce satisfactory results.
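For illustration, the attenuation factor and its use in Equation 2 could be sketched as follows. The normalized gradient g is assumed to be precomputed per pixel (for instance as in the sketch after Equation 3), the attenuation parameter a is assumed to be at least 1 so that f ≥ 1, and the names are illustrative.

import math

def attenuation_factor(g, distance, d_max, a):
    """Attenuation factor f of Equations 4-6 for one occluding sample.
    g: normalized gradient at the central pixel (Equation 3).
    distance: distance D between the central pixel and the occluding sample S2.
    d_max: distance threshold Dmax; a: attenuation parameter, assumed >= 1."""
    d = min(distance / d_max, 1.0)        # Equation 4: d = D / Dmax, and d <= 1
    w = min(math.hypot(g, d), 1.0)        # Equation 5: w = sqrt(g^2 + d^2), and w <= 1
    return a - (a - 1.0) * w              # Equation 6: f between 1 (w = 1) and a (w = 0)

# Equation 2: each occlusion increment is divided by f instead of added directly,
# for example with d_max = r / 2 as suggested above:
#     WAO += W(S2) * (AO(S2) - AO(S1)) / attenuation_factor(g, distance, d_max, a)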