DOI: 10.14714/CP89.1402
Nadia H. Panchaud, ETH Zürich | nadia.panchaud@ethz.ch
Lorenz Hurni, ETH Zürich | lhurni@ethz.ch
Custom user maps (also called map mashups) made on geoportals by novice users often lead to poor cartographic results, because cartographic expertise is not part of the mapmaking process. In order to integrate cartographic design functionality within a geoportal, we explored several strategies and design choices. These strategies aimed at integrating explanations about cartographic rules and functions within the mapmaking process. They are defined and implemented based on a review of human-centered design, usability best practices, and previous work on cartographic applications. Cartographic rules and functions were made part of a cartographic wizard, which was evaluated with the help of a usability study. The study results show that the overall user experience with the cartographic functions and the wizard workflow was positive, although implementing functionalities for a diverse target audience proved challenging. Additionally, the results show that offering different ways to access information is welcomed and that explanations pertaining directly to the specific user-generated map are both helpful and preferred. Finally, the results provide guidelines for user interaction design for cartographic functionality on geoportals and other online mapping platforms.
KEYWORDS: geoportal; web cartography; usability evaluation; user interaction; interface design; interactive cartography
Geospatial datasets are abundantly available nowadays thanks to technological advances in data capture, storage, processing, and distribution, as well as to the democratization of (online) cartography. Geoportals and online mapping platforms offer an appropriate means and environment for publishing, displaying, and distributing geospatial data. However, datasets are often uploaded onto those platforms in raw form or with minimal thought given to their symbolization. The map mashups created by novice users on those platforms tend to produce results of low cartographic quality because no cartographic knowledge or professional cartographer is included in the process (Harrie, Mustière, and Stigmar 2011) and because the different datasets have been symbolized on an individual basis and thus are not optimal for combination.
Cartographic principles have been gradually formalized and integrated mostly within standalone tools (e.g., Color Brewer for color schemes [Brewer and Harrower 2013] and the subsequent similar “brewers,” for map symbols and type [Schnabel 2007; Sheesley 2006]) and sometimes in small ways within geoportals aimed at the larger public. Yet, most cartographic knowledge is neither easily accessible nor well integrated within online platforms on which the public creates custom user-generated maps.
Our motivation in this work is to aid casual mapmakers in making better user-generated maps within online mapping platforms, by offering them functions based on cartographic principles. Concretely, our aim is to design and evaluate an interface and related user interactions for cartographic functions. These functions rely on cartographic concepts such as figure-ground and color contrast to improve the overall visual hierarchy and legibility of the map mashups.
Due to the nature of cartographic knowledge and the target audience of geoportals, there are specific challenges. First, a lay audience might hold a very different conceptual model than trained cartographers of how a map and its contents are organized. Moreover, individual conceptual models among the lay audience are much more variable. Second, cartographic knowledge is made of principles, guidelines, and a certain amount of subjectivity, and thus it is necessary to communicate the flexibility of that knowledge. Furthermore, it is unclear what types of interaction best support the introduction of cartographic knowledge to geoportal users in the context of the specific maps they will create. There are also open questions regarding how to design interactions that are based on cartographic knowledge and allow the discovery of such knowledge by casual mapmakers. Concepts of usability and human-centered design can help answer these questions, but there is a need to test concrete design implementations to gain a deeper understanding in the context of cartographic applications.
The first objective of our research was to explore relevant design principles to support the integration of cartography-related user interactions, and to implement them in an existing geoportal. Second, we investigated the different types of user interactions that were implemented, evaluating them in regard to their usability and appropriateness for cartographic functions and knowledge. Finally, we derived interaction design guidelines from these evaluations.
For the usability test, an existing geoportal and a framework offering smart cartographic functions were used. This geoportal allows the creation of map mashups from its available data and its cartographic functions; it also helps to improve the quality of the mashups by checking for appropriate content based on map types, by optimizing the drawing order of the layers, and by improving the visual hierarchy (Panchaud, Iosifescu Enescu, and Hurni 2017). The functions also explain their choices to users; such explanations should not remain hidden, but should be surfaced and capitalized on by integrating them within the workflow and the wizard GUI (graphical user interface). A wizard is a type of user interface that guides users through a sequence of defined steps to perform a task or solve a problem. Wizards are also called “assistants” and are widely used in most operating systems.
How map readers interact with maps and mapping platforms can be better understood by looking into fundamental concepts such as human-centered design and usability. Based on those fundamental concepts, previous researchers have already gained insights and set best practices specific to designing maps and interactions on mapping platforms for an improved and more user-friendly experience.
Previous research and best practices overwhelmingly show that the comprehension of the users’ needs and expectations is crucial for designing optimal user interactions (Roth and Harrower 2008). Such comprehension is central to the concept of “human-centered design” (HCD), also known as “user-centered design” (UCD), popularized as early as 1988 and defined by Norman (2013, 8) as an “approach that puts human needs, capabilities, and behavior first.” The HCD approach has led to significant advantages such as improved usability of GUIs and tools, fewer errors during use, and faster learning times (Norman 2005).
With the emergence of the HCD/UCD doctrine, several sets of principles were developed to support its implementation. In Figure 1, we present the core ideas of HCD with Shneiderman’s (1987) eight golden rules, Norman’s (1990) original seven principles, and Norman’s (2013) revised seven principles.
Figure 1. Overlaps and differences between the different lists of principles for human-centered design.
The diagram reveals overlaps and differences among the principles lists. Common to all, constraints are described as a tool to help guide the user through possible interactions and prevent the use of functions that are not available at certain points. Additionally, actions should be easily reversible, so that users can undo potential mistakes and feel free to explore the interface without fear of making an error. Feedback about user actions and the state of the system is also cited as crucial for a positive user experience.
Important concepts unique to Norman’s (2013) principles are affordances and signifiers. Affordances are the relationships between object appearances and the capabilities of the users: they help the users determine their possible interactions with the object. Some affordances are perceivable and act as a signal. When they are not perceivable, additional signifiers are needed; these are clues that convey how to use the objects (Norman 2013). Together, affordances and signifiers aim to reduce the number of settings and icons that need to be learnt before using the system by making them intuitive, easy to remember, and logical (linked to Norman’s principle of “mapping” — N4/n6 in Figure 1), and they help to reduce short-term memory (STM) load (S8). Consistency (in interface design, but also in sequences of actions and terminology across the system) also supports the reduction of demands on STM and lets users focus on the content of the application and problem solving instead of on interface comprehension (Shneiderman and Plaisant 2005).
In the context of interfaces for geospatial data and visualization, it means that the interactions built into the GUI must make sense and be intuitive: for instance, users should not spend time deciphering the icons and buttons (Timoney 2013; see principle S8 in Figure 1). Additionally, understanding the user context and providing direct controls to the user are critical steps to preventing errors (Haklay and Nivala 2010; see S7).
While the above-mentioned principle lists give valuable insight into HCD, Gould and Lewis’s (1985) framework offers a more comprehensive approach and is the most widely adopted (Haklay and Nivala 2010). The three core principles are: (1) an early focus on the users and tasks, (2) the use of empirical measurements to evaluate the design, and (3) an iterative process. The first point deals with the importance of the user’s goals and tasks as the drivers for the design. Moreover, it implies that characteristics, behavior, context of use, work, and environment should be considered as well. Then, only through empirical measurements (e.g., the user’s reactions and performance) can one evaluate whether there are improvements from the prototype to the final version. Finally, the design process should go through several iteration cycles of design, test, measure, redesign, etc., as often as necessary (Gould and Lewis 1985).
As seen above, the HCD approach is supported by a large body of work demonstrating the importance of carefully considering the needs, capabilities, and preferences of the target audience in designing interactions. In the context of map mashups, as opposed to traditional cartography, the map user is also often the mapmaker (Roth 2013) and thus the user has a double profile of needs and expectations which have to be taken into account.
Often the designers of online mapping environments regard their users as homogeneous, but group and individual differences exist. For instance, Slocum et al. (2001) mention expertise, culture, and age among several other characteristics, while Fairbairn et al. (2001) also refer to the users’ expectations, experience, competences, and preferences. These various user differences lead to multiple user perspectives, and thus treating them as a monolithic group is inadequate (Haklay 2003); it is considered best practice to acknowledge different user skills and knowledge, especially between experts and casual users (Fairbairn et al. 2001; Jenny et al. 2010), as well as differences among laypeople themselves (Meng and Jacek 2009; Shneiderman and Plaisant 2005).
Consequently, there is no “one size fits all” interface (van Elzakker and Wealands 2007), but even so, aiming to cater to universal usability can help (Shneiderman and Plaisant 2005; see S2 in Figure 1). Suggestions from previous work are to design methods of interaction that can be adapted to the end user in terms of complexity (Slocum et al. 2001; Fiedukowicz et al. 2012; Jenny et al. 2010) and to provide flexibility in unfamiliar situations (MacEachren and Kraak 1997). Increasing interface complexity or its degrees of freedom can render tasks more difficult for users and thus alienate them (Slocum et al. 2001; Jones et al. 2009; Andrienko and Andrienko 2006).
The success of an interface also depends on how well it supports the user’s interactions with the application. The concept of usability is central to such success and is defined in the ISO 9241-11 standard as the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (as quoted in Resch and Zimmer 2013, 1019; and He, Persson, and Östman 2012, 89). Van Elzakker and Wealands (2007) describe effectiveness as achieving goals with accuracy and completeness, efficiency as minimal resource expenditure, and satisfaction as comfort of use and a positive attitude. Additionally, Nielsen (1993) defines usability with the help of five attributes: learnability (the system is easy to learn), efficiency (a high level of productivity should be possible, once the system is learnt), memorability (easy to remember), errors (low error rate and easy recovery), and satisfaction (pleasant to use).
The cascading information-to-interface ratio is another approach to adapting to different user profiles (novice or new users vs. advanced or regular users) by providing increasing levels of complexity in the interface (Roth and Harrower 2008). This consists of a multi-layered interface and can help bridge the divide between novice and advanced users (Roth 2013). By showing only the most important parameters at first and the more complex ones on demand, one can offer an interface that appears simple to the novice user, while still allowing the advanced user to access the full complexity of the system. It is similar to “progressive disclosure,” which hides parameters until they are actually needed (Wardlaw 2010).
Even though complex interfaces allow different users the flexibility to take cartographic actions in different orders, the productivity paradox has led interface designers to constrain the interface by reducing the number of cartographic functions or the degree of flexibility in order to increase productivity (Roth 2013). Other works pertinent to cartography likewise support the idea of constraining the interface for improved user experience (Dou et al. 2010; Keehner et al. 2008; Jones et al. 2009).
Previous work also offers key, concrete insights about interface characteristics that support improved usability. Interfaces should be consistent and systematic (Roth 2012); have a small visual footprint (Roth and Harrower 2008); make important components visible; offer smart and adaptive functions (MacEachren and Kraak 2001); use appropriate metaphors and provide sensible default values depending on the context of use (Cartwright et al. 2001); use interface controls that feel natural or intuitive (Harrower and Sheesley 2005); and avoid redundant functionality, irrelevant interactivity, and inconsistencies in information feedback (Jones et al. 2009). Additionally, windows should be reused and their number limited, and the same information should not be displayed in multiple places (Lauesen and Harning 2001). Pop-up windows should be avoided because users dislike them for several reasons (they interrupt, occlude the screen, and require action to return to the main window) and tend to close them right away without reading their content (Resch and Zimmer 2013). To prevent further user frustration, interfaces should display warning messages and block unsupported actions early, as well as allow users to save the state of the system or its results (Jenny et al. 2010). Finally, implementing conventions that are used on popular websites can prevent users from being surprised or confused by the results of an interaction. One such example is the double-click zooming used by Google Maps, an interaction that many users expect in other map applications (Wardlaw 2010).
The role of symbols and icons must not be underestimated, and their design should aim at clarity and accuracy, easy and correct interpretability (thanks to affordances and signifiers), and visual feedback when in use (Resch and Zimmer 2013). Even though the data-ink ratio (Tufte 1983) should be high to limit the footprint of the GUI, an overly minimalist icon design might not offer enough clues to allow the users to deduce its functions (Roth and Harrower 2008).
Finally and most importantly, Beaudouin-Lafon (2004) advocates designing interaction instead of interfaces because the interface is only a means, whereas the goal is to provide user-system interactions of high quality. Roth (2013, 64) defined cartographic interactions as “the dialogue between a human and a map mediated through a computing device.” Thus the interface is of the utmost importance in optimally supporting the dialogue of cartographic interactions.
Beyond issues of usability and human-centered design, one should also consider how the dialogue between the user and the application is designed, and how it is able to capture the users’ needs and contexts, and translate them into map specifications (data layers, map scale, symbology, etc.) that the application can handle.
Collecting user preferences via textual menus is difficult, and providing map examples or samples can help the process (Balley et al. 2014) and allow the users to better express their needs. Then, the challenge is to be able to infer appropriate map specifications from the user requirements. Balley et al. (2014) mention two different approaches: either following a static reasoning process using rules after having gathered the requirements, such as in the work of Forrest (1999); or reconciling cartographic constraints and the user’s preferences in an iterative process, as used by Christophe (2011) for designing map legends.

In the field of assisted map creation, there have been different attempts to organize and formalize cartographic knowledge and to put it at the disposal of a larger public using a graphic interface, including expert systems (Forrest 1993) or assistance for on-demand map creation via web services (Jolivet 2008). The gathering and formalizing of cartographic principles from experts and best practice map series is a common thread. The framework behind the interactions that are tested in this paper follows from this previous work, but focuses on functionalities for laypersons creating map mashups, and with a logic fundamentally independent from the application in which the data are visualized. The framework also relies heavily on semantic information, in the form of metadata about the meaning of the geospatial content, to deal with cartographic constraints. For instance, semantic metadata allow differentiation of roads from rivers from administrative boundaries. These distinctions enable the definition of finer cartographic rules and constraints in the framework.
The GUI is the access point to the functionality of any application, and thus, if not properly designed, it can hamper the use of even the best application. A clear, well-thought-out concept and several rounds of design iteration are often needed before reaching an optimal interface.
For this study, we used an existing geoportal GUI as our starting point; compared to starting from scratch, this offers both design opportunities and constraints. There are benefits to using an existing framework and design that has already gone through several design iterations: the foundation is solid. At the same time, it provides the chance to perform yet another iteration on the general GUI design. However, there are also constraints: the technologies used are fixed, and there may be limits to what an existing framework can do.
Our geoportal is built on a traditional, three-tier architecture leveraging databases to serve maps via web map services (WMS) and a custom-built SVG GUI. Service-driven cartographic visualization has proven its potential (Iosifescu-Enescu, Hugentobler, and Hurni 2010; Iosifescu et al. 2013); however, the same functions could also be coupled to a vector tile-based architecture with client-side styling. Cartographic principles are integrated within the geoportal via cartographic functions that help the users when they create their own maps with the geoportal content. This includes checking whether the selection of layers is appropriate for a specific map type, re-ordering the layers to prevent unwanted overlaps, and a function that improves the mashup’s visual hierarchy by modifying the style of the background layers (for more information, especially concerning issues with map mashups, see Panchaud, Iosifescu Enescu, and Hurni [2017]). We decided to provide a background style function because a recurring issue in map mashups from geoportals is that most layers are symbolized with saturated color schemes matching a foreground style definition. As the functions mimic different parts of the cartographic workflow, a natural design choice for their integration is to use a wizard, allowing the user to go through the decision points of the map design process step by step.
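As a brief sketch of the service-driven setup described above, a client typically retrieves each rendered map via an OGC WMS 1.3.0 GetMap request. The endpoint and layer names below are hypothetical placeholders, not the geoportal's actual service.

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layers, styles, bbox, width, height,
                     crs="EPSG:4326", fmt="image/png"):
    """Assemble a WMS 1.3.0 GetMap request URL.

    The order of `layers` matters: a WMS paints them first-to-last,
    so background layers should come first in the list.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": ",".join(styles),
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical layer and style names; a real client would discover
# them via a GetCapabilities request.
url = build_getmap_url(
    "https://example.org/wms",
    layers=["landuse", "roads", "natural_parks"],
    styles=["muted", "default", "default"],
    bbox=(45.5, 25.0, 46.0, 26.0),
    width=800, height=600,
)
```

A function like the reorder step of the wizard would operate on the `layers` list before such a request is issued, since the request itself fixes the drawing order.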
We began by redesigning the GUI with input from a usability study done on a sibling project using the same GUI framework (Kellenberger et al. 2016). The GUI redesign also used principles derived from the literature and best practices that were not respected in earlier design iterations; project-specific needs also played a role. The common aspect to the changes was the optimization of the GUI’s visual footprint: most of the space should be given to the map, and the GUI should not be cluttered in order to give enough space to the important features (Figure 2). Furthermore, some interface features that were lacking consistency were redesigned to offer a smoother and more consistent user experience.
Figure 2. Examples of design changes to the geoportal GUI. (a) The large banner at the top served no important purpose and thus it was made thinner. (b) Important functions had icons that were too small and many users did not notice them; their size was more than doubled with the new design. (c) Icons linked to unused functions and interactivity were removed.
As mentioned earlier, a wizard was used to organize the geoportal’s cartographic functions meaningfully. A wizard should be able to capture the user’s requirements in an efficient manner with a minimal number of clicks, while offering a pleasant user experience. We integrated cartographic functions within the GUI over two major design iteration cycles. The first design iteration included organizing the cartographic functions and interactions into steps to offer a smooth wizard workflow. Our different steps were: (1) layer selection, (2) map definition, (3) layer order, (4) visual hierarchy, and (5) final map. Figure 3 shows the steps and how they related to the cartographic functions. The selection of layers occurs at the beginning because the users were familiar with selecting layers as a first step before downloading them (as this was a pre-existing geoportal function). Steps 2, 3, and 4 check user parameters against map content and offer to optimize different aspects of the map. To add support for thematic mapping (i.e., classification method and color scheme selection) would require an additional step between 3 and 4. In traditional cartographic workflows, there would be a step to pick symbols; however, as the symbology modifications in step 4 rely on the existing layer styles where the symbols are defined, there is no need for symbol selection in this specific application.
Figure 3. Workflow concept of the wizard. The top row shows the steps the users go through; the bottom row, the cartographic functions operating in the background.
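The five-step sequence described above can be sketched as a minimal wizard controller. The step names follow the workflow; the blocking behavior anticipates the error handling discussed below, and the class itself is an illustrative sketch, not the geoportal's implementation.

```python
# Steps of the wizard, in the order users go through them.
WIZARD_STEPS = [
    "layer selection",
    "map definition",
    "layer order",
    "visual hierarchy",
    "final map",
]

class Wizard:
    """Minimal step controller: advance only when no errors block the way."""

    def __init__(self, steps=WIZARD_STEPS):
        self.steps = steps
        self.index = 0

    @property
    def current(self):
        return self.steps[self.index]

    def next(self, has_errors=False):
        """Move to the next step unless unresolved errors block it."""
        if has_errors:
            raise ValueError(f"Resolve errors in '{self.current}' first.")
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current

w = Wizard()
w.next()  # moves from "layer selection" to "map definition"
```

Support for thematic mapping would simply insert an extra entry between "layer order" and "visual hierarchy" in this sequence.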
The second design iteration cycle led to the development of a dual GUI, allowing for a “geoportal” mode and a “wizard” mode. Common elements are kept from one mode to the other (e.g., map view, reference map, and navigation tools), while specific elements come and go as the user switches between the geoportal GUI and the additional features of the wizard. Going from one mode to the other is always possible thanks to a tab system (Figure 4a) and there is a large “Launch Wizard” button in the geoportal mode (Figure 4b).
Figure 4. Part of the GUI showing the switch between geoportal and wizard modes using a tab system (a) and direct access to the wizard (b).
We organized information flows going from the wizard to the user in several levels based on the type, complexity, and depth of information provided. This cascading type of organization of the interactions helps with providing crucial information at first sight in the interface with little noise, while providing access to more detailed information on demand. Complex information about the inner workings of the cartographic functions is available for advanced or curious users, but does not clutter the interface unnecessarily for the other users. Table 1 breaks down the levels, while Figure 5 provides examples of the information cascade.
There is an important conceptual difference between a warning and an error message. A warning conveys a cautionary message about something that might be wrong or that is missing. When no action is taken upon receiving a warning, the system can go on and assume sensible default values. Thus warnings should be discreet, and not hamper the progress of the system to the next step or break the user’s flow of thoughts.
An error message, by contrast, is much more critical and should capture the attention of the users and instruct them to act in order to remediate the problem. Without action and modification of the parameters, the system cannot go on. Thus the design and implementation choices for the error messages must make them much more noticeable than the warnings.
When a user changes a parameter involved in a compatibility check, the check is run in the background and an icon appears next to the parameter if a warning or an error is found (see Figure 6). At this stage, nothing prevents the user from continuing to tweak parameters within the same wizard window. However, when moving on to the next window, if any error message is not resolved, a pop-up window will appear and block the process while explaining the problem and suggesting corrective actions (see Figure 7). Once the issue is solved, the user can move to the next step.
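The warning/error semantics just described can be sketched as a small validation routine. The concrete checks below are invented for illustration; only the severity logic (warnings allow sensible defaults, errors block the step transition) reflects the design above.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    severity: str  # "warning" or "error"
    message: str

def check_map_definition(params):
    """Run compatibility checks on wizard parameters.

    Warnings let the wizard continue with sensible defaults;
    errors must be resolved before moving to the next window.
    (These two checks are illustrative, not the paper's actual rules.)
    """
    issues = []
    if not params.get("title"):
        issues.append(Issue("warning", "No title set; a default will be used."))
    if not params.get("layers"):
        issues.append(Issue("error", "Select at least one layer to continue."))
    return issues

def can_proceed(issues):
    # Only errors block the step transition; warnings do not.
    return all(i.severity != "error" for i in issues)
```

In the GUI, each `Issue` would drive an icon next to the offending parameter, and `can_proceed` would gate the transition to the next wizard window.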
We conducted a usability test, focused on the users’ behavior with the tools that were developed as well as on the design choices. More specifically, we tried to identify whether users found the wizard functionality to be helpful and efficient and how frequently they looked at the explanations and warnings while using the tools.
PARTICIPANTS
In total, nine participants were recruited for the usability study: four women and five men. All were either working or studying at the university level, but none were active or trained in the field of cartography. Their participation was voluntary and uncompensated. All participants used maps (digital and paper) at least once a month, and five of them used maps several times a week or more often. Their primary map use was for wayfinding and route planning. They also used maps for research and teaching purposes and during their hobbies (e.g., hiking and travelling), and out of curiosity. Participants were chosen to cover different levels of familiarity with geoportals: three had never used a geoportal, three had used one a few times, and three used them often.
TASKS
A scenario and a series of tasks were developed for the usability test. The scenario was specified in such a way that the opportunity to use each function arose at least once. There were different types of functions present in the interface: some performed cartographic tasks and others provided additional information about the functions or cartographic principles. It was not necessary to use all the tools to complete the tasks from the scenario. However, this allowed us to observe whether the participants used tools or not, in which way, and with what frequency.
The scenario was as follows: “You want to create an overview map of the Brașov region with the natural parks to have an idea of the protected areas of this region.”
Then, more detailed tasks and instructions were given to the participants. The tasks were chosen to follow the workflow of the wizard: (1) select layers, (2) verify and/or adjust the map definition parameters, (3) verify and/or adjust layer order, (4) verify and/or adjust the visual hierarchy, and (5) pick a new symbolization method for the background layers.
The goal of this scenario was to cover basic cartographic tasks that a layperson might undertake and that are found in some form on many public geoportals (data selection and combination, spatial extent definition, and simple modifications of the symbolization). The exact scale for the map was not explicitly specified; participants could zoom in more or less depending on their interpretation of the scenario.
PROCEDURE
Before starting, the goals and procedure of the usability test were explained to the participants. Then, the usability test consisted of a familiarization phase, the actual test, a questionnaire, and a structured interview. During the scripted introduction, we explained the project, the tools developed, and the goals of the usability study to the participants. Then, the participants had a guided familiarization time with the geoportal and wizard. Afterwards, the participants received the scenario and tasks to accomplish. Their screen, mouse movements, and clicks were recorded during the test, while notes were taken during the structured interview. Next, the participants were given a survey consisting of (1) a User Experience Questionnaire (UEQ; Laugwitz, Held, and Schrepp 2008); (2) a workload estimation with the NASA Raw Task Load Index (RTLX; Hart and Staveland 1988); (3) general feedback questions; and (4) a demographic information questionnaire. The UEQ allows a quick assessment of the user experience of interactive products, whereas the RTLX helps assess the user’s perceived cognitive workload while using the wizard system as a whole. The structured interviews at the end allowed us to gather qualitative information about design choices and the participants’ impressions.
USAGE OF CARTOGRAPHIC FUNCTIONS
Figure 8 shows how much time each participant spent on the different tasks during the test, as well as how they approached the test. For instance, participants D and E read the instructions carefully and then went straight to the tasks without much exploration, maybe because they were familiar with geoportals and needed less time to complete the tasks; whereas participants A, B, and F spent less time on the instructions and much more on exploring the different functions and options of the wizard. It is notable that none of the participants used all the possible functions and explanations (Figure 9). Generally, and not surprisingly, the more functions or help used, the longer the participants spent on the geoportal. The general explanations about the main concepts and the warnings were used 53% and 74% of the time, respectively.
Figure 8. Time spent on each task or function. Note: the start point is the participant’s first interaction with the geoportal.
Figure 9. Number of interactions encountered or used at least once by each user, based on type (general explanation, warning explanations, and others).
Because the scenario and tasks were precisely defined, the participants reached similar end results during the test. They all managed to create the map according to the scenario. Figure 10 shows one example of a map before and after the layer re-ordering and background modifications. Layers that were initially hidden, such as the road network, became visible, and the strong background layer of land use was de-emphasized. These changes improved the legibility and comprehension of the map by providing a clearer visual hierarchy of the map content and preventing unwanted feature overlaps.
Figure 10. Example of an initial layer selection by the participants (left) and the end result after the use of the reorder and background functions (right).
USER EXPERIENCE QUESTIONNAIRE
The UEQ is based on 26 pairs of opposing adjectives, which are averaged into six scales: attractiveness (overall impression), perspicuity (how easy it is to become familiar with the product), efficiency (whether tasks can be solved without unnecessary effort), dependability (feeling of control during interactions), stimulation (how exciting and motivating the product is), and novelty (how innovative and creative it is). The scales range from -3 (extremely poor) to 3 (extremely good). Because of how the scale scores are built, and because participants tend to avoid extremes, it is uncommon to observe values beyond -2 and 2; a value greater than 1.5 is considered to indicate a good experience.
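As a minimal sketch of this scoring scheme (using a hypothetical item-to-scale assignment and made-up responses; the real mapping of the 26 items to the six scales is defined by Laugwitz, Held, and Schrepp 2008), the scale construction can be expressed as:

```python
# Sketch of UEQ scale scoring. Responses on the 7-point answer format are
# recoded to -3..+3; each scale score is the mean of its items, and the
# study-level score is the mean over participants.
from statistics import mean

# Hypothetical mapping: item index -> scale name (illustrative only)
ITEM_SCALES = {
    0: "attractiveness", 1: "attractiveness",
    2: "perspicuity", 3: "perspicuity",
    4: "efficiency", 5: "efficiency",
}

def score_participant(responses):
    """Average one participant's recoded item responses (-3..3) per scale."""
    by_scale = {}
    for item, value in responses.items():
        by_scale.setdefault(ITEM_SCALES[item], []).append(value)
    return {scale: mean(vals) for scale, vals in by_scale.items()}

def score_study(all_responses):
    """Mean scale score across all participants."""
    per_participant = [score_participant(r) for r in all_responses]
    scales = {s for p in per_participant for s in p}
    return {s: mean(p[s] for p in per_participant) for s in scales}

# Two illustrative participants
p1 = {0: 2, 1: 1, 2: 1, 3: 0, 4: 2, 5: 2}
p2 = {0: 1, 1: 2, 2: -1, 3: 1, 4: 1, 5: 2}
print(score_study([p1, p2]))
```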
The results in Figure 11 show that all six scales have positive values, of which four are at or above 1.5: attractiveness, efficiency, dependability, and stimulation. The novelty scale receives the lowest score, with a mean of 0.917; however, this is still above what is considered a positive evaluation (>0.8) and above the average value from the UEQ benchmark (see Figure 12). The benchmark was assembled from 246 studies using UEQ result data from a broad range of products (business software, web pages, web stores, social networks); comparing our results against it helps demonstrate the relative quality of our application compared to other products (Laugwitz, Held, and Schrepp 2008). Based on the individual scores of the perspicuity scale, the application is not perceived to be as easy (uncomplicated) as it could be (score of 1.1 for the pair), even though the score is above average when compared to the UEQ benchmark. Additionally, the 95% confidence intervals stay entirely in the positive range.
Figure 12. UEQ benchmark and usability study participant mean ratings. All scales rate above average, good, or excellent.
PERCEIVED WORKLOAD AND FEEDBACK
The raw scores of the RTLX in Figure 13 show that participants perceived the physical demand and the frustration as low. The performance score ranges from 1 for a perfect performance to 21 for failure; with a mean of 5.33, it indicates that participants felt they largely achieved their tasks. Score variations for performance and physical demand are small among the participants.
Figure 13. RTLX scores of perceived workload. Left: box-and-whisker plot displaying the minimum, 1st quartile, median, mean (black point), 3rd quartile, and maximum. Right: mean and standard deviation for each RTLX scale.
However, accomplishing the tasks was perceived as requiring a higher mental demand, which is not surprising because the wizard offers insights into complex cartographic design processes and rules. The average effort required and the average temporal demand are just below the middle mark of 11. The temporal demand is the workload scale with the most dispersed distribution, which can be explained by the fact that time perception is subjective and because the tasks could be fulfilled with or without spending time on the additional information and help provided.
From the UEQ, we saw that the application was perceived to be slightly complicated, but it did not lead to frustration or failure, as shown by the RTLX.
For the general feedback questions, participants answered the seven questions shown in Table 2 on a Likert scale from “Strongly agree” (= 5) to “Strongly disagree” (= 1). Because the questions were phrased either positively or negatively, both low and high average values can be positive in meaning. The averages have therefore been re-aligned onto a 1-to-5 scale with 5 always carrying the positive meaning. The re-aligned scores were also used to create the clustered matrix in Figure 14, which shows three very positive participants (I, A, G), five positive participants (C, H, F, B, D), and one average evaluation from participant E.
Table 2. Average response to the feedback questions. Re-aligned scores: 5 = positive evaluation, 1= negative evaluation.
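The re-alignment described above is a simple mirroring of the negatively phrased items. A minimal sketch, with hypothetical answers:

```python
# Sketch of re-aligning mixed-polarity Likert items so that 5 always means a
# positive evaluation: negatively phrased questions have their 1..5 score
# mirrored (6 - score); positively phrased scores pass through unchanged.

def realign(score, positively_phrased):
    return score if positively_phrased else 6 - score

# Hypothetical answers: (score, is_positively_phrased)
answers = [(5, True), (2, False), (1, False), (4, True)]
realigned = [realign(s, pos) for s, pos in answers]
print(realigned)  # a uniformly "higher = better" series
```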
The participants found the additional information about the cartographic functions helpful and agreed that it was well integrated. The participants did not perceive themselves to be making many mistakes, which corroborates the results of the RTLX regarding frustration, effort, and performance. Furthermore, the participants did not agree that the system was complex or cumbersome to use, although their opinion was somewhat more divided on statements about how easy the system is to use. They also disagreed with the statements about inconsistencies in the system and about making mistakes, showing a positive evaluation of the wizard overall. Finally, while there is no correlation between their evaluation and the time the participants spent on the system, the general feedback scores appear to be negatively correlated with the participants’ task load estimates (a higher general feedback score corresponds to a lower task load estimation), with a Pearson correlation coefficient of -0.77 and a p-value of 0.014. This is not surprising; however, with only nine participants, it should be interpreted only as a marked trend.
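The correlation check reported above can be sketched as follows; the data below are hypothetical, whereas the study reports r = -0.77 (p = 0.014) for its nine participants:

```python
# Sketch of a Pearson correlation between re-aligned general-feedback scores
# and RTLX workload estimates. The sample data are invented for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

feedback = [4.6, 4.3, 4.0, 3.8, 3.5]   # higher = more positive evaluation
workload = [5.0, 6.5, 8.0, 9.0, 11.0]  # higher = more perceived demand

r = pearson_r(feedback, workload)
print(r)  # strongly negative for this monotone-decreasing example
```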
The structured interview at the end allowed us to gather qualitative feedback and the reasoning behind participant choices or actions. We review here the points that either were mentioned several times or are of special interest. The reasons given for positive feedback about the additional information mostly concerned the opportunity to learn more about an unknown field. Moreover, having access to the rationale behind the cartographic functions was appreciated, which might explain the high score of the helpfulness question. The participants also explained why they did not use a specific function that was accessible via an icon and provided a pictorial explanation: even though the icon was mentioned in the familiarization phase, the participants either did not realize it was an icon or were too focused on the text itself. This is clearly a design choice that needs further improvement; suggestions were to change the icon’s color or to transform it into a link within the text. More generally, links and interactive features should be in a color that differentiates them from the rest of the interface, as several participants mentioned that interactive features were difficult to spot at first. Additionally, several participants commented on the lack of more significant feedback when a layer is added to the user-generated map, as well as the absence of a sign indicating that a layer is already in the map. However, the warning/error differentiation with yellow and red was well understood overall, especially in regard to the seriousness of the message being conveyed.
The usability test exposed both successful and flawed aspects of the interaction and GUI design, both in terms of understanding the wizard application and its actions and in terms of pure interface design. Moreover, it confirms some conclusions reached in previous work regarding interaction design for cartographic or geospatial online platforms.
The results revealed some misunderstanding of the language used within the interface. There appears to be a need for a short introductory section explaining the main vocabulary; beyond a clarifying role, it could also serve as general documentation available for reference at any time. For instance, the term “map type,” the different layer categories, and some other fundamental terms could be better explained. Additionally, there was some confusion among the participants as to the extent of the wizard’s actions. After certain warning or error messages, some participants expected the wizard to automatically correct some parameters, whereas the wizard was built to let the user decide in those cases because they are open-ended questions, and thus dependent on the user’s purpose for the map. More specific feedback should be considered in certain cases to prevent any doubt, and auto-correcting functions should be considered in future development.
Two weaknesses of the interaction design were uncovered. First, the conceptual understanding of the duality between “data browser vs. user-generated map” and of how to add layers to the user-generated map was not optimal. The process could be better supported by providing clearer visual feedback when a layer is added to the user-generated map and by signaling which layers are already in it. This could be realized by shading or highlighting layers that are already present and by issuing a short, self-dismissing message when a layer has been successfully added to the map. Second, the icon that let the user open an image illustrating the text explanation was not well designed: participants either did not realize it was an icon or were too focused on the map and text to click on it. A redesign is thus more than warranted; solutions could include turning the icon into a link, using another color, or offering a miniature image with a function to enlarge it.
The tests also revealed some successes of our interaction concept. One of these was the frequent use of warning and error messages, which provided information about the cartographic rules behind the constraints and modifications applied to user maps. The participants applied an exploratory strategy, trying different options as a means to understand the explanations in relation to changes in the map parameters and in the map itself. Because changes were applied immediately to the map, the participants did not have to wait until the end of the wizard process to see how the parameters affected their map. The messages, which are specific to the user-generated map in question, are thus complementary to the general explanations: they deliver the same information but put it into perspective, helping the participants understand how the general rules apply to their unique, specific context. The distinction between warning and error messages was well understood, likely because it was built on known signifiers and familiar conventions by using red for errors and yellow for warnings.
The fact that participants found the additional information helpful and appreciated discovering something new has interesting implications for geoportals: not only does it support designing an optimal interface for helping the users create better maps, it also establishes the geoportal as an entry point for learning about cartographic design rules, as it does not require any specialized software or the need to deal with raw data.
Looking at the experience of individual users across the different scores and evaluations, a few facts stand out. The “worst” evaluation came from participant E, who spent the least amount of time on the geoportal and was one of the three participants who did not use all the different types of interactions. Participant C, on the other hand, spent the most time and gave an overall positive evaluation. Participants I and H, who gave the system the best evaluations, spent an average amount of time on the wizard but used very different numbers of the interactions and functions; interestingly, neither had used a geoportal before. Participant G is an outlier in their use of the interactions (only two types and 17% in total), yet their general feedback score was one of the highest and their RTLX score was the second lowest. Additionally, participants A and E used only two types of interactions with similar frequency, yet their general feedback and RTLX scores were very different from each other. Thus the amount of help used does not seem to be linked to whether the participants found the system user-friendly and easy to use.
Because the scenario for the usability test was structured, it ensured that the participants went through all the steps, allowing us to better compare how they used the functions in terms of time spent and levels of information accessed at each step. An unsupervised test would probably have led to different results and required an even lengthier debriefing to decipher the intentions of the different participants and why they did or did not perform certain tasks. Additionally, a larger number of participants would have been required, as fewer variables could be controlled. However, this structure also meant that the participants had only marginal space for creativity in the map generation process. As the study focuses on geoportals, where creativity in regard to map content and styling is often limited compared to GIS or drawing programs, this constraint was deemed acceptable for the purpose of this work.
The results also show the emergence of different user profiles among the participants. This supports the assertion that the wizard can be used successfully without accessing every level of information, and that wizard users might benefit from the opportunity to choose between interface designs with different levels of complexity. However, due to the relatively small number of participants, this suggestion must be considered carefully.
Finally, the interest in and high use of warning functions as a discovery tool suggests that because cartographic functions and knowledge are at times complex, participants found that having the map show what was meant (instead of text explaining what was meant) was valuable. Thus when building interactions with cartographic functions and knowledge, one should take care to provide the explanation not just in a “telling” form, but importantly in a “showing” form, such as within a sample map or an immediate change to the user-generated map. Learning by doing (and by seeing) seems to apply to the relation between cartographic knowledge and cartographic interactions.
Our goal was to investigate the potential integration of cartographic functions and knowledge in an existing geoportal framework. After reviewing the state of the art in user interaction and usability, as well as our previous experience with mapping platforms, we built a model of interaction levels and showed different types of interactions with and feedback from the system to the users. Then, we tested the integration of smart cartographic functions and knowledge with a usability study. Insights gained through this study will help improve the actual platform and move towards a more hands-on approach to sharing cartographic knowledge. The main new geoportal design feature was testing interactions that provided immediate feedback about user actions in the user-generated map, rather than after going through several windows of parameters as is the case in a traditional wizard. Additionally, the choices the users made were always put in context, and the map and its contents were always visible and referred to in the wizard windows.
Feedback and the results of the usability study show that the overall experience with the cartographic functions and the wizard workflow was positive, as shown by the enthusiasm of the participants, their curiosity about the cartographic content, and the different indicators regarding ease of use, task load, and qualitative feedback. However, the study also revealed areas with potential for improvement, such as the implementation of the explanatory images and some unclear terminology.
From this work, we gather the following guidelines that are relevant for the integration of smart cartographic functions and knowledge into mapping platforms:
This paper and its usability study show that implementing cartographic functionalities in geoportals with an open approach can be successful, enjoyable for the users, and not perceived as cumbersome. Cartographic wizards and similar approaches to integrate cartographic knowledge and functions should be considered in geoportals as a means to attract users, to offer sound cartographic visualizations of the geoportal data, and to further promote the platform.
Furthermore, there is still great potential for development in terms of interface/interaction design and cartographic functionalities. For instance, modules about color management and generalization levels could give more creative freedom to the users; they would be straightforward to implement because they rely on information about geometry, scale, and feature themes: information which is already present in the framework. Beyond enhancing the actual geoportal GUI based on the results of this study, our future work will focus on providing a more differentiated interface while keeping access to the additional cartographic knowledge similarly available. Additionally, developing smart functions that suggest corrections and apply them will be another priority. This is challenging because it requires the system to convey precise feedback to the user about what is being executed and why, without being too obstructive in terms of the user experience and a smooth workflow. Finally, providing a positive user experience and enabling the users to reach their goals should stay at the center of all these new developments.
Andrienko, Natalia, and Gennady Andrienko. 2006. “The Complexity Challenge to Creating Useful and Usable Geovisualization Tools.” 4th International Conference on Geographic Information Science (GIScience), Münster, Germany, September 20–23.
Balley, Sandrine, Blanca Baella, Sidonie Christophe, Maria Pla, Nicolas Regnauld, and Jantien Stoter. 2014. “Map Specifications and User Requirements.” In Abstracting Geographic Information in a Data Rich World: Methodologies and Applications of Map Generalisation, edited by Dirk Burghardt, Cécile Duchêne, and William Mackaness, 17–52. Cham, Switzerland: Springer International Publishing.
Beaudouin-Lafon, Michel. 2004. “Designing Interaction, Not Interfaces.” Proceedings of the Working Conference on Advanced Visual Interfaces, Gallipoli, Italy, May 25–28.
Brewer, Cynthia A., and Mark Harrower. 2013. “ColorBrewer 2.0.” Accessed October 10, 2016. http://colorbrewer2.org/.
Cartwright, William, Jeremy Crampton, Georg Gartner, Suzette Miller, Kirk Mitchell, Eva Siekierska, and Jo Wood. 2001. “Geospatial Information Visualization User Interface Issues.” Cartography and Geographic Information Science 28 (1): 45–60. doi: 10.1559/152304001782173961.
Christophe, Sidonie. 2011. “Creative Colours Specification Based on Knowledge (COLorLEGend system).” The Cartographic Journal 48 (2): 138–145. doi: 10.1179/1743277411Y.0000000012.
Dou, Wenwen, Caroline Ziemkiewicz, Lane Harrison, Dong H. Jeong, Roxanne Ryan, William Ribarsky, Xiaoyu Wang, and Remco Chang. 2010. “Comparing Different Levels of Interaction Constraints for Deriving Visual Problem Isomorphs.” 2010 IEEE Symposium on Visual Analytics Science and Technology (VAST), Salt Lake City, UT, October 25–26.
van Elzakker, Corné P. J. M., and Karen Wealands. 2007. “Use and Users of Multimedia Cartography.” In Multimedia Cartography, edited by William Cartwright, Michael P. Peterson, and Georg Gartner, 487–504. Berlin, Heidelberg: Springer Berlin Heidelberg.
Fairbairn, David, Gennady Andrienko, Natalia Andrienko, Gerd Buziek, and Jason Dykes. 2001. “Representation and its Relationship with Cartographic Visualization.” Cartography and Geographic Information Science 28 (1): 13–28. doi: 10.1559/152304001782174005.
Fiedukowicz, Anna, Jedrzej Gasiorowski, Paweł Kowalski, Robert Olszewski, and Agata Pillich-Kolipinska. 2012. “The Statistical Geoportal and the ‘Cartographic Added Value’ — Creation of the Spatial Knowledge Infrastructure.” Geodesy and Cartography 61 (1): 47–70. doi: 10.2478/v10277-012-0021-x.
Forrest, David. 1993. “Expert Systems and Cartographic Design.” The Cartographic Journal 30 (2): 143–148. doi: 10.1179/000870493787860049.
———. 1999. “Developing Rules for Map Design: A Functional Specification for a Cartographic-Design Expert System.” Cartographica 36 (3): 31–52. doi: 10.3138/9505-7822-0066-70W5.
Gould, John D., and Clayton Lewis. 1985. “Designing for Usability: Key Principles and What Designers Think.” Communications of the ACM 28 (3): 300–311. doi: 10.1145/3166.3170.
Haklay, Mordechai E. 2003. “Public Access to Environmental Information: Past, Present and Future.” Computers, Environment and Urban Systems 27 (2): 163–180. doi: 10.1016/S0198-9715(01)00023-0.
Haklay, Mordechai, and Annu-Maaria Nivala. 2010. “User-Centred Design.” In Interacting with Geospatial Technologies, edited by Mordechai Haklay, 89–106. Chichester, UK: John Wiley & Sons, Ltd.
Harrie, Lars, Sébastien Mustière, and Hanna Stigmar. 2011. “Cartographic Quality Issues for View Services in Geoportals.” Cartographica 46 (2): 92–100. doi: 10.3138/carto.46.2.92.
Harrower, Mark, and Benjamin Sheesley. 2005. “Designing Better Map Interfaces: A Framework for Panning and Zooming.” Transactions in GIS 9 (2): 77–89. doi: 10.1111/j.1467-9671.2005.00207.x.
Hart, Sandra G., and Lowell E. Staveland. 1988. “Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research.” In Advances in Psychology, edited by Peter A. Hancock and Najmedin Meshkati, 139–183. Amsterdam: North Holland Press.
He, Xin, Hans Persson, and Anders Östman. 2012. “Geoportal Usability Evaluation.” International Journal of Spatial Data Infrastructures Research 7: 88–106.
Iosifescu, Ionuţ, Cristina Iosifescu, Nadia Panchaud, Remo Eichenberger, René Sieber, and Lorenz Hurni. 2013. “Advances in Web Service-Driven Cartography.” 6th International Cartographic Conference, Dresden, Germany, August 25–30. https://icaci.org/files/documents/ICC_proceedings/ICC2013/ICC2013_Proceedings.pdf.
Iosifescu-Enescu, Ionuţ, Marco Hugentobler, and Lorenz Hurni. 2010. “Web Cartography with Open Standards–A Solution to Cartographic Challenges of Environmental Management.” Environmental Modelling & Software 25 (9): 988–999.
Jenny, Helen, Andreas Neumann, Bernhard Jenny, and Lorenz Hurni. 2010. “A WYSIWYG Interface for User-Friendly Access to Geospatial Data Collections.” In Preservation in Digital Cartography, edited by Markus Jobst, 221–238. Berlin & Heidelberg: Springer-Verlag.
Jolivet, Laurence. 2008. “On-Demand Map Design Based on User-Oriented Specifications.” AutoCarto2008, Shepherdstown, WV, USA, September 7–11. http://www.cartogis.org/docs/proceedings/2008/jolivet.pdf.
Jones, Catherine Emma, Mordechai Haklay, Sam Griffiths, and Laura Vaughan. 2009. “A Less-Is-More Approach to Geovisualization – Enhancing Knowledge Construction Across Multidisciplinary Teams.” International Journal of Geographical Information Science 23 (8): 1077–1093. doi: 10.1080/13658810802705723.
Keehner, Madeleine, Mary Hegarty, Cheryl Cohen, Peter Khooshabeh, and Daniel R. Montello. 2008. “Spatial Reasoning With External Visualizations: What Matters Is What You See, Not Whether You Interact.” Cognitive Science 32 (7): 1099–1132. doi: 10.1080/03640210801898177.
Kellenberger, Benjamin, Ionut Iosifescu Enescu, Raluca Nicola, Cristina M. Iosifescu Enescu, Nadia H. Panchaud, Roman Walt, Meda Hotea, Arlette Piguet, and Lorenz Hurni. 2016. “The Wheel of Design: Assessing and Refining the Usability of Geoportals.” International Journal of Cartography 2 (1): 95–112. doi: 10.1080/23729333.2016.1184552.
Lauesen, Soren, and Morten B. Harning. 2001. “Virtual Windows: Linking User Tasks, Data Models, and Interface Design.” IEEE Software 18 (4): 67–75. doi: 10.1109/MS.2001.936220.
Laugwitz, Bettina, Theo Held, and Martin Schrepp. 2008. “Construction and Evaluation of a User Experience Questionnaire.” In HCI and Usability for Education and Work. USAB 2008, edited by Andreas Holzinger, 63–76. Berlin, Heidelberg: Springer Berlin Heidelberg.
MacEachren, Alan M., and Menno-Jan Kraak. 1997. “Exploratory Cartographic Visualization: Advancing the Agenda.” Computers & Geosciences 23 (4): 335–343. doi: 10.1016/S0098-3004(97)00018-6.
———. 2001. “Research Challenges in Geovisualization.” Cartography and Geographic Information Science 28 (1): 3–12. doi: 10.1559/152304001782173970.
Meng, Yunliang, and Jacek Malczewski. 2009. “Usability Evaluation for a Web-based Public Participatory GIS: A Case Study in Canmore, Alberta.” Cybergeo: European Journal of Geography, document 483. doi: 10.4000/cybergeo.22849.
Nielsen, Jakob. 1993. Usability Engineering. San Francisco: Morgan Kaufmann.
Norman, Donald A. 1990. The Design of Everyday Things. New York: Doubleday.
———. 2005. “Human-Centered Design Considered Harmful.” Interactions 12 (4): 14–19. doi: 10.1145/1070960.1070976.
———. 2013. The Design of Everyday Things, Revised and Expanded Edition. New York: Basic Books.
Panchaud, Nadia H., Ionuţ Iosifescu Enescu, and Lorenz Hurni. 2017. “Smart Cartographic Functionality for Improving Data Visualization in Map Mashups.” Cartographica 52 (2): 194–211. doi: 10.3138/cart.52.2.4115.
Resch, Bernd, and Bastian Zimmer. 2013. “User Experience Design in Professional Map-Based Geo-Portals.” ISPRS International Journal of Geo-Information 2 (4): 1015–1037. doi: 10.3390/ijgi2041015.
Roth, Robert E. 2012. “Cartographic Interaction Primitives: Framework and Synthesis.” The Cartographic Journal 49 (4): 376–395. doi: 10.1179/1743277412Y.0000000019.
———. 2013. “Interactive Maps: What We Know and What We Need to Know.” The Journal of Spatial Information Science 6: 59–115.
Roth, Robert E., and Mark Harrower. 2008. “Addressing Map Interface Usability: Learning from the Lakeshore Nature Preserve Interactive Map.” Cartographic Perspectives 60: 46–66. doi: 10.14714/cp60.231.
Schnabel, Olaf. 2007. “Map Symbol Brewer.” Accessed December 15, 2016. http://www.carto.net/schnabel/mapsymbolbrewer.
Sheesley, Ben. 2006. “TypeBrewer.” Accessed December 13, 2016. http://typebrewer.org.
Shneiderman, Ben. 1987. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading, MA: Addison-Wesley Longman Publishing Co., Inc.
Shneiderman, Ben, and Catherine Plaisant. 2005. Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th Edition). Boston: Pearson Addison Wesley.
Slocum, Terry A., Connie Blok, Bin Jiang, Alexandra Koussoulakou, Daniel R. Montello, Sven Fuhrmann, and Nicholas R. Hedley. 2001. “Cognitive and Usability Issues in Geovisualization.” Cartography and Geographic Information Science 28 (1): 61–75. doi: 10.1559/152304001782173998.
Timoney, Brian. 2013. “An Iconography of Confusion: Why Map Portals Don’t Work, Part IV.” Accessed November 7, 2016. http://mapbrief.com/2013/02/19/an-iconography-of-confusion-why-map-portals-dont-work-part-iv.
Tufte, Edward R. 1983. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Wardlaw, Jessica. 2010. “Principles of Interaction.” In Interacting with Geospatial Technologies, edited by Mordechai Haklay, 179–198. Chichester, UK: John Wiley & Sons, Ltd.