DOI: 10.14714/CP83.1351

An Impressionistic Cartographic Solution for Base Map Land Cover with Coarse Pixel Data

Paulo Raposo, University of Tennessee, Knoxville | praposo@utk.edu

Cynthia A. Brewer, The Pennsylvania State University | cbrewer@psu.edu

Kevin Sparks, Oak Ridge National Laboratory | sparksk16@gmail.com

Several everyday cartography applications, such as tourist or transit maps, do not require sharply precise base maps, and in fact benefit from their generalization or deliberate obscuration. Additionally, raster data fine enough for a given map scale are not always available. We present a method of creating an impressionistic land cover base map for topographic mapping in which the above two conditions are true, using the National Land Cover Database (NLCD) of the US Geological Survey (USGS). The method is based on reclassification, upsampling, constrained randomization at class boundary edges, and deliberate use of colors with very similar lightness values. The method spans both scientific geospatial data treatment and artistic cartographic design, and both generalizes and enhances the data. The processing, automated in ArcGIS™, is detailed, and examples of the product are provided.

KEYWORDS: land cover; topographic mapping; upsampling; raster generalization; enhancement; uncertainty

INTRODUCTION

Cartographers often regard the data processing and synthesis at the earlier stages of mapmaking as the most time-consuming part of the task. Frequently, data layers compiled in a geographic information system (GIS) start out with different levels of spatial, temporal, and attribute resolution, and resolving these differences involves careful analytical consideration. Simplification is typically the main approach to take at this point, bringing higher-resolution data down to levels of detail comparable to those of the coarser input data.

Occasionally data enhancement is appropriate, albeit challenging to achieve or justify in an analytical sense. Enhancement as an operation is usually considered to be among the set of all generalization operators, though it is not typically thought of as an increase in spatial detail. Operations such as displacing buildings away from roadways to ensure a visible gap (Neun, Burghardt, & Weibel 2009), exaggerations of portions of route networks (Reimer 2010), or terrain shading generalizations that enhance ridges (Marston & Jenny 2015) are all examples of enhancement. These treatments typically sacrifice achievable planimetric accuracy in a small portion of the map (e.g., displacing a building by 1 mm at 1:25,000 makes its placement 25 m off of its real position), but are regularly considered acceptable in map making. The readability (i.e., legibility, visual hierarchy, and symbol & shape resolution) and aesthetic benefits of making these enhancements outweigh the small losses of spatial accuracy they introduce.

We present an automated suite of data processing and map design techniques for enhancing raster land cover data for cartographic depiction in cases where some boundary accuracy can be sacrificed, such as in general-purpose topographic mapping. Our method is useful for treating data representing classed phenomena, but is particularly well suited to cartographic depictions of land cover, where the boundaries between classes are often naturally fuzzy (e.g., between a grassy field and a forested area). The product layer functions best as a translucent base map, and when combined with other topographic feature layers such as terrain shading and thematic vectors. Figure 1 illustrates the product land cover for the area around Hermitage, Missouri; the land cover is shown in combination with terrain shading, vector symbols, and an orthoimagery base.

We developed our method in the context of a redesign of the United States Geological Survey (USGS) US Topo topographic mapping series. The US Topo 1:24,000 topographic map series is now served freely online in the GeoPDF file format. Several new feature layers have been added to the updated map series to reflect user requirements gathered by the USGS (Sugarbaker, Coray, & Poore 2009), with most of the new themes illustrated with existing federal geospatial datasets. This paper reports on our efforts to use National Land Cover Database (NLCD) 2011 data in the 1:24,000 US Topo map series, as well as in multi-scale USGS National Map Viewer map products.

Figure 1. Hermitage, Missouri.

Our method arose from the challenge inherent in using the 30 m NLCD data at 1:24,000, at which scale they appear heavily pixelated (Figure 2). Each pixel prints at a size of 1.25 mm, whereas pixels must be no larger than 0.25 mm to ensure that they are barely discernible by the human eye at typical viewing distances. Aiming for this size suggests that NLCD data are only appropriate for use at 1:120,000 and smaller cartographic scales. The method we developed here simultaneously upsamples classed data in a manner related to interpolation or super-resolution techniques (Atkinson 2005), and offers a set of graphic variable decisions for portraying the product raster that remains deliberately vague at class borders.

Figure 2. Unchanged NLCD data shown with transparency over a USGS topographic map sheet at 1:24,000.
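To make the scale arithmetic above concrete, the following minimal Python sketch reproduces the computation; the function and constant names are ours, and the only inputs are the 30 m NLCD cell size and the 0.25 mm legibility threshold.

```python
# Scale arithmetic behind the paragraph above: how large a 30 m NLCD pixel
# prints at 1:24,000, and the smallest scale at which it shrinks to a barely
# discernible 0.25 mm mark.

CELL_SIZE_M = 30.0   # NLCD ground cell size
LEGIBLE_MM = 0.25    # smallest mark discernible at typical viewing distances


def printed_size_mm(cell_size_m: float, scale_denominator: float) -> float:
    """Printed size of one pixel, in millimeters, at the given map scale."""
    return cell_size_m / scale_denominator * 1000.0


def min_scale_denominator(cell_size_m: float, legible_mm: float) -> float:
    """Smallest scale denominator at which a pixel prints at the legibility threshold."""
    return cell_size_m / (legible_mm / 1000.0)


print(printed_size_mm(CELL_SIZE_M, 24000))              # 1.25 mm: heavily pixelated
print(min_scale_denominator(CELL_SIZE_M, LEGIBLE_MM))   # 120000.0, i.e., 1:120,000
```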

The treatments detailed here both cartographically generalize and enhance the input classed raster. The number of classes is reduced by means of simple reclassification, region shapes are simplified using constrained stochastic “airbrushing” (as explained below), colors are chosen to deliberately ambiguate region borders, and the product is at higher pixel resolution than the input. Our method is differentiated from other methods proposed for achieving higher-resolution raster land cover because it explicitly takes into account how the product will be depicted on a map. The processing is both science and art, in that achieving a higher resolution land cover base map is done with a stochastic process in precisely-constrained areas of the raster, and the symbolization is chosen to create an “impressionistic” visualization in the artistic sense.

THEORETICAL CONTEXT AND RELATED WORK

Cartographic generalization research has been considerably less common on raster than on vector datasets. Early work sought to establish frameworks and theory on the various kinds of raster generalization possible (McMaster & Monmonier 1989; Weibel 1992). Land cover and digital elevation models (DEMs) have been the focus of most cartographic work on raster generalization. Monmonier (1983) notes how land cover generalization is more naturally approached in the raster domain. He describes the process as involving several smoothing and aggregation-of-class-region operations, requiring a series of criteria choices for such things as region inclusion or exclusion, and hierarchical importances across aggregated classes, among others. He later (1987) details the need for areal displacement when generalizing class regions, comparing how continuity is more or less crucial to diverse land cover classes and region morphologies. Subsequent raster generalization work has focused on using combined vector and raster techniques in categorical generalization (Peter & Weibel 1999; Steiniger & Weibel 2005), often employing amalgamation techniques borrowed from vector polygon data treatments (Li & Su 1995; Regnauld & Revell 2007; Zhang et al. 2013). Such methods have typically been parameterized by cartographic constraints (Harrie & Weibel 2007). Researchers have often taken a morphological approach, selecting, aggregating, deforming and displacing polygonal raster regions to acceptable degrees for generalization (Li 1994; Su & Li 1995; Su et al. 1997; Cámara & López 2000). Morphological analysis of raster regions has been applied with direct reference to available thematic or geomorphological information about each region (Brassel & Weibel 1988; Mackaness & Edwards 2002; Gao, Gong, & Li 2004). Researchers have also explored the use of morphing techniques for static and animated products (Li & Wong 2008; Pantazis et al. 2009).

A promising approach to land cover mapping is texture synthesis, developed in computer graphics, and gaining attention in cartographic and geographic information science research (Mariethoz & Lefebvre 2014; Dumas et al. 2015). Jenny, Jenny, and Cron (2012) present an application of texture synthesis to cartography, seeking to create artistic, “pseudo-natural” maps of land cover by effecting gradual transitions between classes. They describe texture synthesis by example, a family of techniques in which a pixel’s color is chosen by finding other pixels in the scene whose neighboring pixels are similar in color and spatial arrangement. This method is proposed for making “transition textures” between classes in land cover cartography. They describe map design parameters dictating where such textures could be used (224):

To avoid a technical look of the land cover category boundary, the subtextures could dissolve into subtexture islands. The number and size of these subtexture islands within the neighbouring subtexture should be arranged naturally; e.g., become smaller with increasing distance from the boundary. The width of this patchy transition zone should appear plausible. The boundary between the subtextures should vary naturally (e.g., undulate), but also must not diverge too much from the category boundary defined in the underlying land cover dataset.

Their experimental techniques have been applied to cartographic panorama views (Jenny & Jenny 2013). An approach with similar realism and creative license is taken by Patterson (2002), and applied to small-scale land cover in combination with terrain shading for the US National Atlas (2013). While these techniques are promising, they do not address increases in pixel resolution.

Overcoming the limitations of fixed numbers and sizes of pixels—that is, getting more pixels and therefore higher-fidelity samples—in remote sensing equipment and data has been a focus of much research for the past three-or-so decades (Cracknell 1998; Campbell and Wynne 2011). Hardware solutions include the creation of smaller sensor elements or larger sensor arrays (along with appropriate optical lenses), but these solutions are either impractical, prone to noise, or expensive, for technical reasons beyond the scope of this article. Software image-processing methods have been favored due to the difficulties of hardware solutions, with algorithms proposed from diverse imaging fields such as medical microscopy, computer vision, and geospatial remote sensing. Much of the research on improving pixel-based land cover classification in recent decades has focused on mixed pixels, which are those pixels whose spectral profiles are produced by an aggregation of multiple land cover types present in that pixel’s instantaneous field of view (IFOV). These are distinct from pure pixels, wherein the land cover types present in the IFOV are relatively uniform. Mixed pixels occur at any spatial resolution (i.e., pixel size), “often at the edges of large parcels or along long linear features, such as rivers or highways, where contrasting brightnesses are immediately adjacent to one another” (Campbell & Wynne 2011, 291). A greater proportion of mixed pixels in a scene leads to greater inaccuracies in classified products (Smith et al. 2003; Latifovic & Olthof 2004) because there is more uncertainty in the image (Congalton et al. 2014). Efforts to determine the contents of mixed pixels have typically sought to establish probabilities for the presence of each of a set of land cover types in a given pixel, with relative probabilities calculated using linear and non-linear mathematical models (Marsh et al. 1980; Ichoku & Karnieli 1996; Mather & Tso 2009; Roy et al. 2014; Chen et al. 2015; Imbiriba et al. 2016). Recent research has focused on machine learning techniques (e.g., active learning, neural networks, support vector machines), where algorithms are trained on curated datasets before being used on pre-classified data (Foody & Mathur 2004; Tuia et al. 2011; Samat et al. 2014; 2016). Foody (1999) states that training sets emphasizing raster region border pixels (i.e., where mixed pixels typically occur) tend to give classification neural networks more generalizable knowledge.

In remote sensing and other imaging disciplines, methods that attempt to resolve variations smaller than the sensor pixel size are referred to as super-resolution. Super-resolution methods generally take advantage of aliasing in the captured image (i.e., distortion and mis-identification of signals in the image due to insufficient resolution), which is what causes mixed pixels. These are distinct from interpolation, which can increase resolution but does not recover fluctuations unresolved by the sensor. Super-resolution algorithms employ a wide diversity of approaches (see Tian & Ma [2011] and Nasrollahi & Moeslund [2014] for comprehensive reviews), but can generally be divided among those that function on single or multiple images, and then again on those that function in the spatial domain of the pixels (i.e., the pixel values in topological and metric relation to each other) and those that operate in the frequency domain (i.e., on the pixels after a suitable mathematical transform, such as the Fourier transform or wavelet analysis). Most algorithms in the literature and in imaging practice operate in the spatial domain (Nasrollahi & Moeslund 2014). The theoretical approach taken is generally to regard the given image(s) as a decimated product of either the higher- or infinitely-detailed, hypothetical original scene after some mathematical function, the function modeling the optical and/or sampling process that produced the existing, low-resolution image. Images at some higher target resolution are then derived by theoretical reconstruction of the original scene; much of the diversity of the methods developed over the years is in how reconstruction occurs.

One of the earliest super-resolution methods developed was applied to Landsat 4 data, which featured multiple translated views of the same areas of the Earth (Tsai & Huang 1984). Single-image methods typically take a more purely theoretical approach, often using machine-learning techniques applied to preprepared training data (Freeman et al. 2002; Yang et al. 2010; Kwon et al. 2015), using repetitive adjacency patterns within a single image (Glasner, Bagon, & Irani 2009), or taking advantage of repetitive texture elements in the image (Park et al. 2010). Such methods may also apply distortions and noise to the input image to generate hypothetical other images of the same scene (Nasrollahi & Moeslund 2014), in order to obtain a set from which to reconstruct a high-resolution scene. Reconstruction of the hypothetical high-resolution image, especially in the context of facial recognition applications, is sometimes termed “hallucination” (Baker & Kanade 2002). Super-resolution methods work with ratio-level luminance pixel values, not nominal or ordinal data such as classed land cover images.

METHOD

We create a generalized land cover map layer, using constrained stochastic raster region edge enhancement and color symbolization to deliberately obfuscate and soften land cover class boundaries. The resulting map layer is produced at a finer resolution, and yields an impressionistic or painterly representation of generalized land cover. Since the resulting land cover layer is an image, it lends itself easily to standard image zooming and resampling as will happen in a multi-scale interactive mapping interface (i.e., scale-space theory transformations apply: see Lindeberg [2008; 2014]). Zooming is an expected context of use for the land cover layer, since it is served digitally in zoomable PDF media.

Our method involves several raster data operations, each of which is either a generalization or an enhancement of the data. First, a suitable target resolution of the raster data for use in the map product is calculated using the output map scale and the graphic resolution of the map medium (Tobler 1987). For mapping at 1:24,000, we use a target resolution of 2 meters. Classed land cover as well as percent canopy and impervious surface data layers from the 2006 NLCD are used. Because the first data layer is nominal and the latter two are ratio, their processing progresses in independent threads until the final stages of land cover layer production.

All processes are automated using a single Python script within ArcGIS™, making the method amenable to large map series production. All of the processes described here use various tools available in ArcMap™ and the Esri Spatial Analyst™ extension package.

NOMINAL DATA TREATMENT

The NLCD is maintained and made publicly available on-line by the USGS. Derived from unsupervised classification (Anderson et al. 1976) of Landsat Enhanced Thematic Mapper+ (ETM+) data, it provides nationally-consistent land cover for the entire conterminous United States at 30 m pixel resolution (Homer et al. 2007). The NLCD contains several distinct datasets, including a categorical classed land cover layer, and percent-coverage layers for tree canopy and impervious surface. While there exist sporadic coverages of authoritative land cover data at higher spatial resolutions throughout the United States, their lack of ubiquity makes them a problematic source for nationwide mapping.

RECLASSIFICATION

One of the main generalization operations we undertake is simple reclassification, reducing the dozens of NLCD classes to four categorical classes and two magnitude classes. This is done to minimize thematic and visual complexity of the land cover presented to the reader, such that the product land cover serves as a generalized overview, rather than a high-resolution analytical data layer. The process is opposite to the aim of super-resolution, where smaller pixel sizes are sought in order to determine sub-input-pixel thematic variation. The aim here is to produce a general impression of land cover rather than provide precision cartometric analysis data.

The classed layer of the 2006 NLCD uses a scheme equivalent to the Anderson Land Cover Classification System (Anderson et al. 1976). We reclassify these into three classes: agricultural, grassland, and barren land (Table 1). Agricultural land includes all types of land used to grow food or animal feed (i.e., row crops, orchards, pasture, etc.), and is aggregated to a single class to denote areas where topographic map users would presumably want to avoid trespassing. Our grasslands class includes all the classes from the “shrubland” and “herbaceous” supercategories of the NLCD, thereby denoting any lands principally bearing small shrubs, grasses, sedges, and mosses that have not been classed as pasture (i.e., are not commercially used). The barren land class includes those areas classed in the NLCD as rocky, sandy, or made of clay, and generally devoid of vegetation (i.e., deserts, talus, bedrock, etc.). Grassland and barren land are given in our output base map as two general types of landscapes topographic map users may find passable. These three classes are given on the assumption that they provide a general impression of land cover for macro-level navigation and natural resource management, rather than the detailed analysis supported by the original NLCD and other land cover data sources.

Table 1. The input NLCD classes aggregated by reclassification into each of the three product classes.

Our reduction of the many NLCD classes is a basic model generalization operation (Sester 2008), engineered to keep the land cover base map visually and thematically simple, rather than complex. Greater numbers of classes are obviously possible (e.g., sub-classes of the existing three), but having them would require a larger palette of graphic variables to symbolize them. Greater thematic granularity offered by higher numbers of classes comes at the expense of greater graphic complexity, and, thereby, greater difficulty in map reading, and greater dependency on legends. In the context of a US Topo redesign, these three classes constitute a significant increase over the previously non-existent land cover information, without introducing a great deal of visual complexity. We seek to keep the land cover thematically and visually simple on the rationale that other overlaid map layers should not have to compete with a complex land cover base map in the overall visual hierarchy.
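As an illustration of this reclassification step, the following NumPy sketch maps standard NLCD legend codes into the three nominal output classes. The groupings shown approximate Table 1 rather than reproduce it, and the production workflow itself uses ArcGIS geoprocessing tools rather than this code.

```python
# Illustrative reclassification of NLCD legend codes into the three nominal
# output classes. This NumPy sketch shows only the lookup logic; the code
# groupings below approximate Table 1 using the standard NLCD legend.
import numpy as np

NODATA, AGRICULTURE, GRASSLAND, BARREN = 0, 1, 2, 3

RECLASS = {
    81: AGRICULTURE,  # pasture/hay
    82: AGRICULTURE,  # cultivated crops
    51: GRASSLAND,    # dwarf scrub
    52: GRASSLAND,    # shrub/scrub
    71: GRASSLAND,    # grassland/herbaceous
    72: GRASSLAND,    # sedge/herbaceous
    73: GRASSLAND,    # lichens
    74: GRASSLAND,    # moss
    31: BARREN,       # barren land (rock/sand/clay)
}


def reclassify(nlcd: np.ndarray) -> np.ndarray:
    """Map NLCD class codes to the three output classes; all others become NODATA."""
    out = np.full(nlcd.shape, NODATA, dtype=np.uint8)
    for code, new_class in RECLASS.items():
        out[nlcd == code] = new_class
    return out
```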

Water bodies such as ocean, lakes, rivers, and reservoirs are the fourth categorical class in our land cover layer. This is produced using National Hydrography Dataset (NHD) High Resolution (Simley & Carswell Jr. 2009) polygons by simple rasterization at the output resolution, and included to denote areal bodies of water. Other hydrographic features such as glaciers and marshes are excluded; the rationale for this is that polygonal NHD data overlaid on our land cover base map can be used to symbolize these features more accurately than they are represented in the NLCD.

Canopy cover and impervious surface (i.e., built-up) areas, while present in the classed NLCD data, are instead represented in our method by their respective dedicated NLCD data layers, representing their percent coverage values with color saturation gradients. These pixels are prepared using a simple upsampling and thresholding technique described later.

BOUNDARY UNCERTAINTY RATIONALE FOR CLASS EDGE CHANGE

The next step in processing classed pixels for land cover is to introduce softer (i.e., less pixelated) edges between regions. We describe the technical rationale for our impressionistic, “airbrush” solution in this section.

The raster data model is intuitive and simple, being a collection of regularly-spaced samples or derived data points. The model is particularly useful and intuitive for continuous phenomena, since the topology of and distance between data points is implicit. Nonetheless, there exist certain conceptual ambiguities in the model, particularly with respect to pixel assignment and the fact that cell size is variable (Raposo & Samsonov 2014). Classed rasters contain cells whose category has to be defined by some statistical process, though the signal present in the cell area during data capture typically varies throughout, as is the case in mixed pixels. Also, the variable cell resolution in the model directly introduces the Modifiable Areal Unit Problem (Openshaw 1984).

The latter problem, arising from variable cell sizes, is particularly salient whenever raster resolution is changed, such as in resampling to a larger cell size for map generalization. Cell size is directly related to the spatial precision of the dataset, and changes to resolution drive error propagation through scale. In spatial data such as geographic rasters, the measure of space over which one sample is collected, being the cell size, is the spatial sampling interval of the dataset. The Nyquist-Shannon sampling theorem (Nyquist 1928; Shannon & Weaver 1949) describes the range of variation that data of a given sampling interval can reliably represent; in rasters, the cell size therefore determines the distance over which the dataset is imprecise. According to the theorem, a dataset's sampling interval should be no larger than one-half of the smallest variation (i.e., the highest frequency) of the phenomenon it seeks to reliably sample or represent. By corollary, a dataset is imprecise over distances smaller than twice its sampling interval; for geographic rasters, this translates to twice the pixel size (Tobler 1987). This means that single pixels, in terms of their ability to differentiate geographic variable fluctuation across their extents, should be regarded with considerable uncertainty. Of course, the foregoing has not considered classification accuracy; for the purposes of this work, we assume no classification uncertainty or error.
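As a minimal worked example of this corollary, applied to the 30 m NLCD cells used here (the constant name is ours):

```python
# Numeric illustration of the sampling-theorem corollary for 30 m NLCD cells:
# variation over distances shorter than twice the cell size cannot be reliably
# resolved, which is also the width of the uncertainty corridor used below.
CELL_SIZE_M = 30.0
min_resolvable_m = 2.0 * CELL_SIZE_M
print(min_resolvable_m)  # 60.0
```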

With geographic phenomena such as land cover, raster pixel uncertainty is compounded by the naturally imprecise boundaries frequently encountered in the physical world. Grasslands, for example, regularly grade into their neighboring areas, such as forests or deserts. Certain land cover regions have more sharply-defined borders than others, such as roadways or mechanically-tended agricultural fields. In the case of fine-enough raster data, sharp boundaries are representable, but the location of these is not precisely represented if the data are coarse, and therefore the exact location of such boundaries must be regarded as uncertain in the absence of other, more precise data.

THE UNCERTAINTY CORRIDOR

The two interacting factors of natural edge ambiguity and cell imprecision lead us to consider boundaries between raster land cover classes as lying along an uncertainty corridor, constituted of mixed pixels. Figure 3 illustrates the uncertainty corridor between two distinct land cover regions. According to the sampling theorem, we cannot be certain of precisely detecting or representing an object that is smaller than twice the pixel size. It follows that the precision of the location at which one land cover class ceases and another begins is no finer than the width of two cells in our input data. We therefore regard the cell immediately on either side of a region border as inherently uncertain, and use the concept of a 2-cell uncertainty corridor to describe all such uncertain areas in the input raster.

Figure 3. The uncertainty corridor between two raster class regions.

We do not attempt an interpolation or super-resolution in the uncertainty corridor for several reasons. First, since the data are nominal, interpolation and super-resolution are not arithmetically possible. Second, interpolation lacks a theoretical basis here, since the operation does not recover thematic variation that was not captured in the input data; in its absence we cannot be certain any interpolated pixel is accurate. Finally, interpolation, super-resolution, or mixed-pixel analysis could be attempted on raw, multi-band sensor data and used to create a classed land cover raster at an appropriate resolution, but such a process would involve computation costs that may make it impractical for map series production.

“AIRBRUSH” REGION EDGE TREATMENT

The process of categorical land cover class generalization is illustrated in Figures 4 and 5 over a small agricultural region; the same processing is applied to grassland and barren land regions. The reclassification described earlier is illustrated across numbers 1 and 2 in Figure 4.

Figure 4. The process of categorical land cover class generalization (part 1 of 2).

Following reclassification, the raster regions are shrunk by 2 cells and expanded back by 1. This process, common in morphological analysis, has several purposes. First, it eliminates single cells or areas where a class is only 1 or 2 cells wide (i.e., below the width that can be safely regarded as precise), thereby simplifying the shapes and spatial distribution of land cover patches, and removing imprecise, isolated cells. This leaves regions one cell thinner than they are in the input data (see Figure 4, number 3). Removing isolated or thin regions of cells in this manner reduces the analytical precision of the data, but this is by design, since the product being developed is a generalized land cover base map and not a precise analytical dataset.

The shrink-and-expand process creates the 2-cell, 60 m wide uncertainty corridor between any two adjacent regions (see Figure 3). The shrunk regions are polygonized, and buffers are calculated around them (Figure 4, number 4). Buffer regions are 30 m wide, corresponding to the input cell size and one-half the width of the uncertainty corridor.

The shrunk class regions are now upsampled to 2 m resolution. A random-assignment raster is calculated in the buffered regions around these, also at 2 m resolution (Figure 4, number 5). This raster is generated such that one out of every five pixels contains a binary flag, with all other pixels being null. The raster containing the regions and that containing the randomly-placed pixels are mosaicked into one. Each of the isolated, randomly-placed pixels is used as a seed around which the edge of the raster class is expanded by seven 2 m cells (Figure 4, number 6). This process creates a meandering, ragged edge around each land cover raster region, with a dispersed appearance similar to airbrush painting. We remove most small gaps remaining in the uncertainty corridor, as well as simplify the boundary edge, by expanding and shrinking the class pixel regions by 3 pixels (see Figure 5, numbers 7 and 8). At this point, the final class boundary “airbrush” edges have been geometrically defined.

Figure 5. The process of categorical land cover class generalization (part 2 of 2).
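The geometric steps above might be sketched as follows, with SciPy morphology standing in for the ArcGIS Spatial Analyst tools used in the production script; the parameter values follow the text, while the helper names are ours and the random seeding is an approximation of the one-in-five rule.

```python
# Sketch of the "airbrush" edge construction for one binary class mask, using
# SciPy morphology in place of the ArcGIS Spatial Analyst tools used in the
# production script. Parameters follow the text: shrink by 2 cells and expand
# by 1 at 30 m, roughly one-in-five random seeds in the 30 m buffer, a
# seven-cell seed expansion at 2 m, and a 3-pixel expand/shrink to close gaps.
import numpy as np
from scipy import ndimage


def upsample_nearest(mask_30m: np.ndarray, factor: int = 15) -> np.ndarray:
    """Nearest-neighbor upsampling of a boolean mask (30 m -> 2 m for factor 15)."""
    return np.kron(mask_30m.astype(np.uint8), np.ones((factor, factor), np.uint8)).astype(bool)


def airbrush_edge(class_mask_30m: np.ndarray, rng=None) -> np.ndarray:
    if rng is None:
        rng = np.random.default_rng(0)

    # 1. Shrink by 2 cells and expand back by 1 (removes 1-2 cell slivers).
    shrunk = ndimage.binary_erosion(class_mask_30m, iterations=2)
    shrunk = ndimage.binary_dilation(shrunk, iterations=1)

    # 2. One-cell (30 m) buffer ring: half the width of the uncertainty corridor.
    ring = ndimage.binary_dilation(shrunk, iterations=1) & ~shrunk

    # 3. Upsample both masks to 2 m cells.
    region_2m, ring_2m = upsample_nearest(shrunk), upsample_nearest(ring)

    # 4. Random seeds on about one in five buffer pixels, each expanded by seven 2 m cells.
    seeds = ring_2m & (rng.random(ring_2m.shape) < 0.2)
    ragged = region_2m | ndimage.binary_dilation(seeds, iterations=7)

    # 5. Expand and shrink by 3 pixels to remove small gaps and simplify the edge.
    ragged = ndimage.binary_dilation(ragged, iterations=3)
    return ndimage.binary_erosion(ragged, iterations=3)
```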

The final product uses a color gradient effect at class edges to produce a feathered appearance. In order to provide pixel values at the margins of class regions that will drive color gradients, two more transformations occur. First, the class regions are inverted such that their areas become null cells, while other areas contain a binary flag (Figure 5, number 9). The Euclidean distance for each null-value cell to the closest data-containing cell is calculated, yielding a distance raster inside the land cover class regions (see Figure 5, number 10). To isolate only those cells near the margins of these regions, we apply a threshold to the distance raster such that all pixels with values greater than 10 (i.e., all pixels further than 10 m from the margin) are given the value of 10; this produces “plateaus” in each land cover class region (Figure 5, number 11). Class regions now have cell values between 0 and 10, with values increasing inward in the area within 10 m of the region edge. These distance values are later used to drive the color gradients that create the feathered, “airbrushed” appearance desired; internal areas of regions take on a full classification color, while the edges feather to allow class colors to gradually modulate into each other. The combination of this edge gradient with the meandering, randomized edges given to each class is the basis of the “airbrushing” name we give the process.
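A minimal sketch of these feathering values, with SciPy's Euclidean distance transform standing in for the ArcGIS Euclidean Distance tool; the function name is ours.

```python
# Sketch of the feathering values at class margins: Euclidean distance (in
# meters) from each in-class 2 m pixel to the region edge, clipped to a 10 m
# "plateau." Interior pixels end at 10, edge pixels near 0.
import numpy as np
from scipy.ndimage import distance_transform_edt


def feather_values(region_mask_2m: np.ndarray, cell_size_m: float = 2.0) -> np.ndarray:
    dist_m = distance_transform_edt(region_mask_2m, sampling=cell_size_m)
    return np.minimum(dist_m, 10.0)  # 0-10 m ramp at the edge, 10 throughout the interior
```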

TREATMENT OF RATIO LAND COVER CLASSES

Interpolation and thresholding constitute the data processing procedures for percent canopy and impervious surface data. For each, the raster is upsampled using bilinear interpolation three times, from 30 m cells to 15 m, then to 5 m, and finally to the target resolution of 2 m. The objective of this repeated resampling is to ensure a smooth interpolation. The interpolated raster is then thresholded at 20%, so that areas below 20% are removed. Figure 6 illustrates the results of this procedure on percent impervious surface and canopy rasters. The top row shows the original 30 m NLCD cells, while the bottom row shows our generalized, higher-resolution product. Impervious surface shows as increasingly white, while canopy shows as increasingly green.

Figure 6. Percent impervious surface (left) and percent canopy coverage (right) rasters before (top, 30 m cells) and after (bottom, 2 m cells) resampling and thresholding.
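The staged resampling and thresholding just described might be sketched as follows; scipy.ndimage.zoom with bilinear interpolation stands in for the ArcGIS resampling tool, and the function name and NaN masking are our conventions.

```python
# Sketch of the staged bilinear upsampling and 20% threshold applied to the
# percent canopy and percent impervious rasters: 30 m -> 15 m -> 5 m -> 2 m.
import numpy as np
from scipy.ndimage import zoom


def upsample_percent(raster_30m: np.ndarray, threshold: float = 20.0) -> np.ndarray:
    r = zoom(raster_30m.astype(np.float32), 2.0, order=1)  # 30 m -> 15 m
    r = zoom(r, 3.0, order=1)                               # 15 m -> 5 m
    r = zoom(r, 2.5, order=1)                               # 5 m  -> 2 m
    return np.where(r >= threshold, r, np.nan)              # drop areas below 20%
```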

CANVAS “PATCHING”

Once the categorical and ratio classes have been processed, their spatial union is calculated. Any isolated areas of the map where no class is assigned are identified. This is necessary since the airbrushed class regions have stochastic edges and the ratio-class regions have thresholded edges, making it likely that small unclassed patches, a few pixels in width, occur between classed regions. Some of these areas will correspond to small classed areas in the classed NLCD input that were deliberately removed at the earlier shrink and expand operations. These regions occur within the uncertainty corridor and are by definition less than 2 input pixels (60 m) in width; they are typically much thinner still. At the margins of the impervious surface and canopy rasters, visual inspection over multiple examples determined that empty class areas created by the 20% thresholding were virtually equivalent in size to those defined between classed regions. We term these areas “patches,” and resolve the issue of their presence in the design phase of our process by giving these pixels a neutral “canvas” color. The selection of this color reflects those we make for our classes so that all colors blend together, both in the design and perceptual senses; further discussion of the use of color in our solution is given below. Once colored appropriately, these areas serve as small, gradual transitions between classes in the manner described by Jenny, Jenny, and Cron (2012).
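A minimal sketch of patch identification, assuming the processed class masks are available as boolean arrays at the 2 m output resolution (the function name is ours):

```python
# Sketch of "patch" identification: any 2 m pixel claimed by none of the
# processed class masks receives the neutral canvas class.
import numpy as np


def find_patches(class_masks) -> np.ndarray:
    """Return True wherever no land cover class (nominal or ratio) claims the pixel."""
    claimed = np.zeros(class_masks[0].shape, dtype=bool)
    for mask in class_masks:
        claimed |= mask
    return ~claimed
```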

As mentioned before, areas containing certain hydrographic features such as glaciers and marshes are not included in our output land cover classification. As with “patches,” these areas are classed to also carry the neutral canvas color. This reflects our design decision to use vector NHD data to represent these features, above the generalized land cover base map (i.e., glaciers shown with polygons, marshes shown with texture and pattern fills).

FINAL RASTER FLATTENING

The final spatial processing step in the method is to mosaic the nominal rasters, ratio rasters, and patches into a single, flattened raster layer. Since there will be small overlaps between classes for the same reasons there were patches between them, classes are mosaicked together using a hierarchy of decreasing importance:

  1. rasterized hydrography (the most precise layer)
  2. percent impervious surface
  3. percent tree canopy coverage
  4. agricultural areas
  5. grassland areas
  6. barren areas
  7. patches

The rationale behind this hierarchy is that the location of anthropogenic land cover class boundaries will be of greater importance to most US Topo users than will the boundaries of natural classes. This reflects findings from a USGS National Map User survey, where respondents indicated a desire to have access to data that would exhibit changes over time (Sugarbaker et al. 2009). Rasterized hydrographic features are given top priority because they are the most precise of all the datasets going into the land cover raster. Impervious surface, most frequently representing concrete, asphalt, and other human-created land cover, is then the next most prioritized land cover type. Tree canopy is next, since this is a long-standing feature type relevant to topographic maps, and present in some historical and recent versions of USGS topographic series. Percent impervious and canopy coverage are also given priority over the three categorical land cover types since their upsampling represents a true spatial interpolation (rather than a stochastic edge derivation), and because these layers are more informative in that they represent continuous magnitudes rather than nominal classes. The three remaining nominal classes follow, in order of presumed usefulness for navigation or natural resource management. Finally, canvas patches fill any space not claimed by at least one of the preceding classes, including areas more precisely mapped with NHD vector symbols such as glaciers or wetlands.
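The flattening and its priority order might be sketched as follows: painting classes from least to most important lets higher-priority classes overwrite lower ones, with canvas patches filling whatever remains unclaimed. The class codes and mask names are ours.

```python
# Sketch of the final flattening: classes are painted from least to most
# important, so higher-priority classes overwrite lower ones, and canvas
# "patches" fill anything left unclaimed.
import numpy as np

CANVAS, BARREN, GRASSLAND, AGRICULTURE, CANOPY, IMPERVIOUS, HYDRO = range(7)

PRIORITY_LOW_TO_HIGH = (BARREN, GRASSLAND, AGRICULTURE, CANOPY, IMPERVIOUS, HYDRO)


def flatten(masks_by_class: dict, shape) -> np.ndarray:
    """masks_by_class maps a class code to its boolean mask at the 2 m output resolution."""
    out = np.full(shape, CANVAS, dtype=np.uint8)  # canvas patches by default
    for cls in PRIORITY_LOW_TO_HIGH:              # reverse of the hierarchy listed above
        out[masks_by_class[cls]] = cls
    return out
```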

The output land cover base map is created as an 8-bit raster, to which a color map is applied. An 8-bit raster is chosen because it keeps file sizes small, relative to deeper bit depths. Before flattening, pixel values in the nominal land cover classes range from 0 through 10 (distance in meters from the region edge), and from 20 through 100 (percent coverage) for the ratio classes after thresholding. Cell values are remapped for each class using simple offsets such that each populates a predefined integer range between 0 and 255 (Table 2).

Table 2. The classes used in the land cover implementation, their input values at the end of geoprocessing, and their remapped cell values in a single 8-bit integer raster for symbolization using a color map.
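The offset-based remapping into a single 8-bit raster can be sketched as follows; the offsets shown are illustrative placeholders, since the actual integer ranges are those given in Table 2.

```python
# Sketch of the offset remapping into a single 8-bit raster: the feathered
# nominal classes carry values 0-10 and the thresholded ratio classes 20-100,
# and a per-class offset shifts each into its own slice of 0-255. The offsets
# below are illustrative placeholders; the actual ranges are those of Table 2.
import numpy as np

OFFSETS = {
    "agriculture": 0,    # 0-10   -> 0-10
    "grassland":   20,   # 0-10   -> 20-30
    "barren":      40,   # 0-10   -> 40-50
    "canopy":      60,   # 20-100 -> 80-160
    "impervious":  150,  # 20-100 -> 170-250
}


def remap(values: np.ndarray, class_name: str) -> np.ndarray:
    return (values + OFFSETS[class_name]).astype(np.uint8)
```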

COLORS

In addition to the airbrushed edge geometry, the deliberately vague, “impressionistic” appearance of class borders relies heavily on class colors whose lightness values are very similar. Color lightness, also called “value” or “luminosity,” is a parameter distinct from hue and saturation. When colors of similar lightness are presented in adjacent areas, the human eye has little ability to differentiate between them, even across different hues and saturation levels (Livingstone and Hubel 1988; Brewer 1994; 1996). We use this effect in our color selections to make the location of where one land cover class ends and another begins deliberately unclear. When the final land cover layer is translucent and used with an underlying orthoimage, these locations are made somewhat sharper by lightness differences in the imagery.

The color palette used in our final land cover map layer is given in Table 3. Lightness values for agricultural areas, grasslands, barren areas, water bodies, and canvas areas are kept similar, at approximately 69% (the value component of HSV, computed from the RGB color specifications). Forested and built-up categories grade in lightness according to the percent coverage in the given pixel, and thereby do not maintain a similar lightness value. However, because these two classes naturally fluctuate from low to high lightness values throughout the mapped area, they too have an airbrush-like, painterly effect.

Table 3. Color specifications.

Linear color ramps are used for all raster classes (Figure 7). A linear color ramp is defined for each class, starting at the common, neutral canvas color, and ramping up to the palette color for the class. In the case of agricultural, grassland, and barren classes, canvas is used at the very outer class edge (i.e., at the center of the uncertainty corridor) and the class color is used at the inside end of the feathered-edge gradient (at 10 m into the region), as well as throughout the region’s inner area. In the case of percent canopy and impervious surface classes, canvas is used at the lowest value (20%), and the palette class color is used at full coverage (100%). These color choices allow adjacent, differently-colored regions to visually fade into each other; the sum effect of these color choices along with the randomized edge generation described before constitute the “airbrushed” land cover base map effect (Figure 8).

Figure 7. The palette of color gradients used.



Figure 8. The “airbrush” edge effect for region border ambiguation, using stochastically-generated edges and colors of very similar lightness.
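One such linear ramp might be generated as in the sketch below, interpolating from the canvas color at feather value 0 to the class color at feather value 10; the RGB triples shown are placeholders, since the actual specifications are those of Table 3.

```python
# Sketch of one linear color ramp: from the neutral canvas color at feather
# value 0 (the outer class edge) to the full class color at feather value 10.
# The RGB triples are placeholders; the actual specifications are in Table 3.
import numpy as np

CANVAS_RGB = (225, 225, 210)       # hypothetical neutral canvas color
AGRICULTURE_RGB = (233, 214, 170)  # hypothetical agricultural class color


def ramp(start_rgb, end_rgb, steps: int = 11) -> np.ndarray:
    """Return `steps` RGB triples linearly interpolated from start_rgb to end_rgb."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    colors = (1.0 - t) * np.array(start_rgb) + t * np.array(end_rgb)
    return np.rint(colors).astype(np.uint8)


agriculture_ramp = ramp(CANVAS_RGB, AGRICULTURE_RGB)  # one entry per feather value 0-10
```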

To implement automatic color assignment, we created a color map file (i.e., an Esri .lyr file) containing our color ramps defined over the integer ranges to which we algebraically shifted our raster classes. An RGB value is specified for each integer value in the final, algebraically-shifted raster (see Table 2). ArcMap™ uses this color map file to define the symbology for any land cover raster produced by our method, producing consistent symbology across any number of maps in series.

DEMONSTRATION: LAND COVER AT 1:24,000

Figures 1, 9, 10, and 11 provide examples of our land cover map layer in conjunction with other typical map layers: road and hydrographic vectors, terrain shading, and orthoimagery. The land cover layer is particularly effective as a translucent overlay on orthoimagery because it recolors the imagery, providing ancillary visual cues to suggest what is present at any location. This recoloring also applies some degree of standardization of color to the base map when an orthoimage is present, unifying the overall appearance. The use of white for built-up areas helps to accentuate roadways in particular, especially when these are symbolized with white or pale vector lines. This “ghosting” or “glow” effect is particularly helpful where roads are apparent in the orthoimagery but absent in the vector data, as is sometimes the case for The National Map.

Figure 9. St. Louis, Missouri.



Figure 10. Memphis, Texas.



Figure 11. Atlanta, Georgia.

TREATMENTS FOR SMALLER MAP SCALES

The same design concept is applied to small-scale land cover. Processing is similar but simpler, since only generalization and not edge enhancement is required at coarser map resolutions; i.e., there is no need to derive sub-pixel classes. The product land cover consists of the same five land cover classes, plus water bodies; “canvas” patches are unnecessary, because no stochastic airbrushing is used. The three input rasters are first reclassified, exactly as in the large-scale processing (see Table 1). NLCD data are resampled to a resolution equal to 0.00025 times the target scale, using nearest neighbor resampling for the classed NLCD data, and cubic convolution for both the percent impervious and percent canopy rasters. The percent impervious surface and canopy rasters are thresholded for values 20% and higher, as in the 1:24,000 case. Water body polygons are rasterized at the calculated resolution. The land cover class rasters are mosaicked with the same priority order as used in the large-scale processing (minus canvas patches). The same color specifications are used, with the exception that agricultural lands, grasslands, and barren lands are represented without color ramps at their edges. The product land cover still exhibits a painterly quality, despite no longer using the “airbrush” effect at class edges, because the similar color lightness values continue to effectively blur edges.
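This resolution rule amounts to a 0.25 mm cell on the printed map at the target scale; a minimal sketch (the function name is ours):

```python
# The small-scale resolution rule stated above: output cell size equals
# 0.00025 times the scale denominator (i.e., a 0.25 mm cell on the printed map).
def smallscale_cell_size_m(scale_denominator: float) -> float:
    return 0.00025 * scale_denominator


print(smallscale_cell_size_m(100000))  # 25.0 m cells at 1:100,000
print(smallscale_cell_size_m(250000))  # 62.5 m cells at 1:250,000
```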

Figure 12 illustrates the product of this processing at various scales in the area around Memphis, Texas. The original-resolution NLCD data are reclassified and colored according to our specifications in A, the product of our large-scale processing is shown in B, and land cover for two sample smaller scales are shown in C and D, respectively.

Figure 12. Multi-scale land cover products.

CONCLUSIONS

Our cartographic solution achieves an informative land cover base map layer for use at 1:24,000 or similarly large topographic map scales from NLCD data which have been both generalized (i.e., reclassified into fewer classes and geometrically simplified) and enhanced (i.e., produced at higher resolution with randomized class edges). The goal of the resulting land cover base map is not to improve the analytical usefulness of NLCD data or to present a map layer for precise cartometric analysis; rather, it is a solution for making use of coarse land cover data at larger map scales when an imprecise general impression of land cover constitutes an acceptable base map.

Increasing the resolution of spatial data by interpolation is a dubious task when there is no further information by which to be certain about the interpolation accuracy. Usually, when two or more layers at different spatial resolutions need to be used together in cartographic representation or analysis, the finer-resolved ones are coarsened to match the coarsest one. This paper has presented work that has attempted to do the opposite for the sake of solving a practical map design problem posed by the USGS. We have based our methods on scientific principles, but there is undoubtedly also a great deal of art and subjective creativity in our approach.

Our cartographic product successfully provides a generalized and painterly representation of land cover. The same methods might be useful for other kinds of classed phenomena, with the proviso that uncertain, ambiguous boundary edges are appropriate or acceptable. The representation produced by this method is abstract, and does not achieve greater precision than the input pixels. Indeed, while the pixels in the output have been made finer, this has deliberately happened at the expense of precision. As discussed above, this loss of certainty remains constrained to boundary edges.

We believe this generalized representation is useful for several reasons, especially when applied to land cover. First, imprecise region borders often reflect reality, such as might be seen between forests and grasslands, where types of land cover grade into each other. Also, the product land cover layer is an image, much like a photograph, meaning that it readily lends itself to scaling and zooming as a reader explores a topographic map in any “slippy” map digital context where pan and zoom functionality is available (e.g., a US Topo GeoPDF file). We hope our product land cover base map conveys our intention: that general impressions of the land cover can be identified for the sake of map viewing and visualization, but that borders, given the data at hand, need not be precisely delineated.

NOTE

Python source code for the scripts developed and used here is available by contacting the authors.

ACKNOWLEDGEMENT

This research was financially supported by a USGS Center of Excellence for Geospatial Information Science (CEGIS) grant awarded by Dr. E. Lynn Usery through the Cooperative Ecosystem Studies Units (CESU).

REFERENCES

Anderson, J. R., E. E. Hardy, J. T. Roach, and R. E. Witmer. 1976. A Land Use and Land Cover Classification System for Use With Remote Sensor Data: U.S. Geological Survey Professional Paper 964. Washington, DC: US Government Printing Office.

Atkinson, P. M. 2005. “Sub-pixel Target Mapping from Soft-classified, Remotely Sensed Imagery.” Photogrammetric Engineering & Remote Sensing 71(7): 839–846. doi: 10.14358/PERS.71.7.839.

Baker, S., and T. Kanade. 2002. “Limits on Super-Resolution and How to Break Them.” IEEE Transactions on Pattern Analysis and Machine Intelligence 24(9): 1167–1183. doi: 10.1109/TPAMI.2002.1033210.

Brassel, K. E., and R. Weibel. 1988. “A Review and Conceptual Framework of Automated Map Generalization.” International Journal of Geographical Information Science 2(3): 229–244. doi: 10.1080/02693798808927898.

Brewer, C. A. 1994. “Color Use Guidelines for Mapping and Visualization.” In Visualization in Modern Cartography, Vol. 2, edited by A. M. MacEachren & D. R. F. Taylor, 123–147. Oxford: Pergamon.

———. 1996. “Prediction of Simultaneous Contrast between Map Colors with Hunt’s Model of Color Appearance.” Color Research and Application 21(3): 221– 235. doi: 10.1002/(SICI)1520-6378(199606)21:3<221::AID-COL4>3.0.CO;2-U.

Cámara, M., and F. López. 2000. “Mathematical Morphology Applied to Raster Generalization of Urban City Block Maps.” Cartographica 37(1): 33–48. doi: 10.3138/A428-760T-1647-84P2.

Campbell, J. B., and R. H. Wynne. 2011. Introduction to Remote Sensing, 5th edition. New York: Guilford Press.

Chen, Y., Y. Ge, G. B. M. Heuvelink, J. Hu, and Y. Jiang. 2015. “Hybrid Constraints of Pure and Mixed Pixels for Soft-Then-Hard Super-Resolution Mapping With Multiple Shifted Images.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8(5): 2040–2052. doi: 10.1109/JSTARS.2015.2417191.

Congalton, R. G., J. Gu, K. Yadav, P. Thenkabail, and M. Ozdogan. 2014. “Global Land Cover Mapping: A Review and Uncertainty Analysis.” Remote Sensing 6(12): 12070–12093. doi: 10.3390/rs61212070.

Cracknell, A. P. 1998. “Synergy in Remote sensing—What’s in a Pixel?” International Journal of Remote Sensing 19(11): 2025–2047. doi: 10.1080/014311698214848.

Dumas, J., A. Lu, S. Lefebvre, J. Wu, and C. Dick. 2015. “By-example Synthesis of Structurally Sound Patterns.” ACM Transactions on Graphics 34(4): 1–12. doi: 10.1145/2766984.

Foody, G. M. 1999. “The Significance of Border Training Patterns in Classification by a Feedforward Neural Network Using Back Propagation Learning.” International Journal of Remote Sensing 20(18): 3549–3562. doi: 10.1080/014311699211192.

Foody, G. M., and A. Mathur. 2004. “Toward Intelligent Training of Supervised Image Classifications: Directing Training Data Acquisition For SVM Classification.” Remote Sensing of Environment 93(1–2): 107–117. doi: 10.1016/j.rse.2004.06.017.

Freeman, W. T., T. R. Jones, and E. C. Pasztor. 2002. “Example-based Super-resolution.” IEEE Computer Graphics and Applications 22(2): 56–65. doi: 10.1109/38.988747.

Gao, W., J. Gong, and Z. Li. 2004. “Thematic Knowledge for the Generalization of Land Use Data.” The Cartographic Journal 41(3): 245–252. doi: 10.1179/00087040X13959.

Glasner, D., S. Bagon, and M. Irani. 2009. “Super-Resolution from a Single Image.” Proceedings of the 12th IEEE International Conference on Computer Vision (ICCV), 349–356. Kyoto: IEEE. doi: 10.1109/ICCV.2009.5459271.

Harrie, L., and R. Weibel. 2007. “Modelling the Overall Process of Generalisation.” In Generalisation of Geographic Information: Cartographic Modelling and Applications, edited by W. A. Mackaness, A. Ruas, and L. T. Sarjakoski, 67–87. Oxford: Elsevier.

Homer, C. G., J. Dewitz, J. Fry, M. Coan, N. Hossain, C. Larson, N. Herold, A. McKerrow, J. N. VanDriel, and J. Wickham. 2007. “Completion of the 2001 National Land Cover Database for the Conterminous United States.” Photogrammetric Engineering and Remote Sensing 73(4): 337–341.

Ichoku, C., and A. Karnieli. 1996. “A Review of Mixture Modeling Techniques for Sub-pixel Land Cover Estimation.” Remote Sensing Reviews 13(3–4): 161–186. doi: 10.1080/02757259609532303.

Imbiriba, T., J. C. M. Bermudez, C. Richard, and J. Y. Tourneret. 2016. “Nonparametric Detection of Nonlinearly Mixed Pixels and Endmember Estimation in Hyperspectral Images.” IEEE Transactions on Image Processing 25(3): 1136–1151. doi: 10.1109/TIP.2015.2509258.

Jenny, H., and B. Jenny. 2013. “Challenges in Adapting Example-Based Texture Synthesis for Panoramic Map Creation: a Case Study.” Cartography and Geographic Information Science 40(4): 297–304. doi: 10.1080/15230406.2013.795001.

Jenny, H., B. Jenny, and J. Cron. 2012. “Exploring Transition Textures for Pseudo-Natural Maps.” In GI_Forum 2012: Geovisualization, Society and Learning, edited by T. Jekel, A. Car, J. Strobl, & G. Griesebner, 130–139. Salzburg, Austria: Wichmann Herbert. https://researchbank.rmit.edu.au/view/rmit:35253.

Kwon, Y., K. I. Kim, J. Tompkin, J. H. Kim, and C. Theobalt. 2015. “Efficient Learning of Image Super-Resolution and Compression Artifact Removal with Semi-Local Gaussian Processes.” IEEE Transactions on Pattern Analysis and Machine Intelligence 37(9): 1792–1805. doi: 10.1109/TPAMI.2015.2389797.

Latifovic, R., and I. Olthof. 2004. “Accuracy Assessment Using Sub-Pixel Fractional Error Matrices of Global Land Cover Products Derived from Satellite Data.” Remote Sensing of Environment 90(2): 153–165. doi: 10.1016/j.rse.2003.11.016.

Li, D. Z. 1994. “Mathematical Morphology in Digital Generalization of Raster Map Data.” Cartography 23(1): 1–10. doi: 10.1080/00690805.1994.9713969.

Li, Z., and B. Su. 1995. “Algebraic Models for Feature Displacement in the Generalization of Digital Map Data Using Morphological Techniques.” Cartographica 32(3): 39–56. doi: 10.3138/H7K9-2160-2765-8230.

Li, Z., and M. Wong. 2008. “Animating Basic Operations for Digital Map Generalization with Morphing Techniques.” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVII(B2): 637–642.

Lindeberg, T. 2008. “Scale-space.” In Encyclopedia of Computer Science and Engineering Vol. IV, edited by B. Wah, 2495–2504. Hoboken, New Jersey: John Wiley and Sons.

———. 2014. “Scale Selection.” In Computer Vision: a Reference Guide, edited by K. Ikeuchi, 701–713. New York: Springer US.

Livingstone, M., and D. Hubel. 1988. “Segregation of Form, Color, Movement, and Depth: Anatomy, Physiology, and Perception.” Science 240: 740–749.

Mackaness, W., and G. Edwards. 2002. “The Importance of Modelling Pattern and Structure in Automated Map Generalisation.” Proceedings of the Joint Workshop on Multi-Scale Representations on Spatial Data. Ottawa: Canada Centre for Remote Sensing (CCRS). http://generalisation.icaci.org/images/files/workshop/workshop2002/macedwards.pdf.

Mariethoz, G., and S. Lefebvre. 2014. “Bridges Between Multiple-Point Geostatistics and Texture Synthesis: Review and Guidelines for Future Research.” Computers & Geosciences 66: 66–80. doi: 10.1016/j.cageo.2014.01.001.

Marsh, S., P. Switzer, W. S. Kowalik, and R. J. P. Lyon. 1980. “Resolving the Percentage of Component Terrains Within Single Resolution Elements.” Photogrammetric Engineering and Remote Sensing 46(8): 1079–1086. doi: 10.1016/0034-4257(82)90006-2.

Marston, B. E., and B. Jenny. 2015. “Improving the Representation of Major Landforms in Analytical Relief Shading.” International Journal of Geographical Information Science 29(7): 1144–1165. doi: 10.1080/13658816.2015.1009911.

Mather, P., and B. Tso. 2009. Classification Methods for Remotely Sensed Data, Second Edition. New York: CRC Press.

McMaster, R. B., and M. Monmonier. 1989. “A Conceptual Framework for Quantitative and Qualitative Raster-Mode Generalization.” Proceedings of the Annual GIS/LIS Conference, 390–403. Orlando, Florida.

Monmonier, M. S. 1983. “Raster-Mode Area Generalization for Land Use and Land Cover Maps.” Cartographica 20(4): 65–91. doi: 10.3138/x572-0327-4670-1573.

———. 1987. “Displacement in Vector- And Raster-Mode Graphics.” Cartographica 24(4): 25–36. doi: 10.3138/FW8R-2122-PT42-53M2.

Nasrollahi, K., and T. B. Moeslund. 2014. “Super-Resolution: A Comprehensive Survey.” Machine Vision and Applications 25(6): 1423–1468. doi: 10.1007/s00138-014-0623-4.

Neun, M., D. Burghardt, and R. Weibel. 2009. “Automated Processing for Map Generalization Using Web Services.” GeoInformatica 13(4): 425–452. doi: 10.1007/s10707-008-0054-3.

Nyquist, H. 1928. “Certain Topics in Telegraph Transmission Theory.” Transactions of the American Institute of Electrical Engineers 47(2): 617–644. doi: 10.1109/T-AIEE.1928.5055024.

Openshaw, S. 1984. The Modifiable Areal Unit Problem. Norwich, UK: Geo Books, Regency House.

Pantazis, D., B. Karathanasis, M. Kassoli, A. Koukofikis, and P. Stratakis. 2009. “Morphing Techniques: Towards New Methods for Raster Based Cartographic Generalization.” Proceedings of the 24th International Cartographic Conference. Santiago: ICA.

Park, M., K. Brocklehurst, R. T. Collins, and Y. Liu. 2010. “Translation-Symmetry-Based Perceptual Grouping with Applications to Urban Scenes.” In Computer Vision – ACCV 2010, edited by R. Kimmel, R. Klette, and A. Sugimoto, 329–342. Springer Berlin Heidelberg. doi: 10.1007/978-3-642-19318-7_26.

Patterson, T. 2002. “Getting Real: Reflecting on the New Look of National Park Service Maps.” Proceedings of the International Cartographic Association Mountain Cartography Workshop. Mt. Hood, Oregon. http://www.shadedrelief.com/realism/index.html.

———. 2013. “Natural Earth for the National Atlas.” shadedrelief.com. http://www.shadedrelief.com/USGS_NE.

Peter, B., and R. Weibel. 1999. “Using Vector and Raster-Based Techniques in Categorical Map Generalization.” Proceedings of the ICA Workshop on Generalisation and Multiple Representation.

Raposo, P., and T. Samsonov. 2014. “Towards General Theory of Raster Data Generalization.” 17th ICA Workshop on Generalisation and Multiple Representation, Vienna, Austria.

Regnauld, N., and P. Revell. 2007. “Automatic Amalgamation of Buildings for Producing Ordnance Survey® 1:50 000 Scale Maps.” The Cartographic Journal 44(3): 239–250. doi: 10.1179/000870407X241782.

Reimer, A. W. 2010. “Understanding Chorematic Diagrams: Towards a Taxonomy.” The Cartographic Journal 47(4): 330–350. doi: 10.1179/000870410X12825500202896.

Roy, P., S. Goswami, S. Chakraborty, A. T. Azar, and N. Dey. 2014. “Image Segmentation Using Rough Set Theory: A Review.” International Journal of Rough Sets and Data Analysis (IJRSDA) 1(2): 62–74. doi: 10.4018/ijrsda.2014070105.

Samat, A., P. Du, S. Liu, J. Li, and L. Cheng. 2014. “Ensemble Extreme Learning Machines for Hyperspectral Image Classification.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7(4): 1060–1069. doi: 10.1109/JSTARS.2014.2301775.

Samat, A., J. Li, S. Liu, P. Du, Z. Miao, and J. Luo. 2016. “Improved Hyperspectral Image Classification by Active Learning Using Pre-Designed Mixed Pixels.” Pattern Recognition 51: 43–58. doi: 10.1016/j.patcog.2015.08.019.

Sester, M. 2008. “Abstraction of GeoDatabases.” In Encyclopedia of GIS, edited by S. Shekhar & H. Xiong, 7–10. New York: Springer, US.

Shannon, C. E., and W. Weaver. 1949. The Mathematical Theory of Communication. Urbana and Chicago: University of Illinois Press.

Simley, J., and W. Carswell Jr. 2009. The National Map—Hydrography. U.S. Geological Survey Fact Sheet 2009-3054.

Smith, J. H., S. V. Stehman, J. D. Wickham, and L. Yang. 2003. “Effects of Landscape Characteristics on Land-Cover Class Accuracy.” Remote Sensing of Environment 84(3): 342–349. doi: 10.1016/S0034-4257(02)00126-8.

Steiniger, S., and R. Weibel. 2005. “A Conceptual Framework for Automated Generalization and its Application to Geologic and Soil Maps.” Proceedings of the XXII International Cartographic Conference.

Su, B., and Z. Li. 1995. “An Algebraic Basis for Digital Generalization of Area-Patches Based on Morphological Techniques.” The Cartographic Journal 32(2): 148–153. doi: 10.1179/caj.1995.32.2.148.

Su, B., Z. Li, G. Lodwick, and J.-C. Muller. 1997. “Algebraic Models for the Aggregation of Area Features Based Upon Morphological Operators.” International Journal of Geographical Information Science 11(3): 233–246. doi: 10.1080/136588197242374.

Sugarbaker, L., K. E. Coray, and B. Poore. 2009. The National Map Customer Requirements: Findings from Interviews and Surveys: U.S. Geological Survey Open-File Report 2009–1222. Reston, Virginia: U.S. Department of the Interior, U.S. Geological Survey.

Tian, J., and K.–K. Ma. 2011. “A Survey on Super-Resolution Imaging.” Signal, Image and Video Processing 5(3): 329–342. doi: 10.1007/s11760-010-0204-6.

Tobler, W. R. 1987. “Measuring Spatial Resolution.” Proceedings of the International Workshop on Geographic Information Systems, Vol. 48. Beijing, China.

Tsai, R., and T. Huang. 1984. “Multiframe Image Restoration and Registration.” In Advances in Computer Vision and Image Processing, Vol. 1, edited by R. Tsai and T. Huang, 317–339. Stamford: JAI Press Inc.

Tuia, D., E. Pasolli, and W. J. Emery. 2011. “Using Active Learning to Adapt Remote Sensing Image Classifiers.” Remote Sensing of Environment 115(9): 2232–2242. doi: 10.1016/j.rse.2011.04.022.

Weibel, R. 1992. “Models and Experiments for Adaptive Computer-Assisted Terrain Generalization.” Cartography and Geographic Information Systems 19(3): 133–153. doi: 10.1559/152304092783762317.

Yang, J., J. Wright, T. S. Huang, and Y. Ma. 2010. “Image Super-Resolution Via Sparse Representation.” IEEE Transactions on Image Processing 19(11): 2861–2873. doi: 10.1109/TIP.2010.2050625.

Zhang, X., T. Ai, J. Stoter, M.-J. Kraak, and M. Molenaar. 2013. “Building Pattern Recognition in Topographic Data: Examples on Collinear and Curvilinear Alignments.” GeoInformatica, 17(1): 1–33. doi: 10.1007/s10707-011-0146-3.