DOI: 10.14714/CP90.1411

© by the author(s). This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0.

The Design and Testing of 3DmoveR: an Experimental Tool for Usability Studies of Interactive 3D Maps

Lukáš Herman, Masaryk University | herman.lu@mail.muni.cz

Tomáš Řezník, Masaryk University | tomas.reznik@sci.muni.cz

Zdeněk Stachoň, Masaryk University | zstachon@geogr.muni.cz

Jan Russnák, Masaryk University | russnak@mail.muni.cz

Various widely available applications such as Google Earth have made interactive 3D visualizations of spatial data popular. While several studies have focused on how users perform when interacting with these 3D visualizations, it has not been common to record their virtual movements in 3D environments or interactions with 3D maps. We therefore created and tested a new web-based research tool: a 3D Movement and Interaction Recorder (3DmoveR). Its design incorporates findings from the latest 3D visualization research, and is built upon an iterative requirements analysis. It is implemented using open web technologies such as PHP, JavaScript, and the X3DOM library. The main goal of the tool is to record camera position and orientation during a user’s movement within a virtual 3D scene, together with other aspects of their interaction. After building the tool, we performed an experiment to demonstrate its capabilities. This experiment revealed differences between laypersons and experts (cartographers) when working with interactive 3D maps. For example, experts achieved higher numbers of correct answers in some tasks, had shorter response times, followed shorter virtual trajectories, and moved through the environment more smoothly. Interaction-based clustering as well as other ways of visualizing and qualitatively analyzing user interaction were explored.

KEYWORDS: 3D maps; 3D cartography; 3D Movement and Interaction Recorder; 3DmoveR; usability; user performance; X3DOM; web technologies

INTRODUCTION

Applications such as Google Earth and Virtual Earth have led to greater use of the third dimension in cartography and geoinformatics. Despite the wide range of 3D visualization applications (Biljecki et al. 2015), relatively little is known about their theoretical background. As noted by Wood et al. (2005)—and we can still agree with this statement—we do not know enough about how 3D visualizations can be used effectively and appropriately, especially those that are interactive. While Voženílek (2005) mentions that 3D visualization is suitable for presenting data to a public with little experience of cartography, we do not agree with this statement in the case of interactive 3D visualization. On the contrary, we anticipate that these displays will be used more effectively by experienced users (experts in 3D interactive visualizations or virtual reality), as stated, for example, by Bowman et al. (2005) and Burigat and Chittaro (2007).

According to Buchroithner and Knust (2013), two types of 3D visualization exist: pseudo-3D and real-3D. Pseudo-3D visualization is displayed using only monocular depth cues on planar media, generally a computer screen. Real-3D (true-3D) refers to stereoscopic visualizations, which use both binocular and monocular depth cues (Buchroithner and Knust 2013; Torres et al. 2013). In this paper, our research examines the more widely disseminated (and less expensive) type of visualization, pseudo-3D.

Different definitions of 3D maps exist. Bandrova (2006) defines a 3D map as a computer-generated, mathematically defined, three-dimensional, highly realistic virtual representation of the world’s surface, as well as of the objects and phenomena in nature and society. Schobesberger and Patterson (2007) characterize a 3D map as the depiction of terrain with faux three-dimensionality, containing perspective that diminishes the scale of distant areas. Haeberling, Bär, and Hurni (2008) describe it as the generalized representation of a specific area using symbolization to illustrate physical features. Hajek, Jedlicka, and Cada (2016) state that 3D maps are usually understood as maps containing Digital Terrain Models, 2D data draped onto terrain, 3D models of objects, or 3D symbols.

The main objective of our research was to design, implement, and pilot test an experimental tool for the usability testing of interactive 3D maps, which we called the 3D Movement and Interaction Recorder (3DmoveR). Our own understanding of the term “3D maps” is that they are real-3D or pseudo-3D depictions of the world, including its natural or socio-economic objects and phenomena, constructed from a mathematical basis: a geographical or projected coordinate system with a Z-scale of input data and a graphical projection such as a perspective or orthogonal projection. For our 3D map to be “interactive,” we assume it must allow at least navigational (or viewpoint) interactivity (Roth 2012).

We also wanted to conduct an experiment to test Bowman et al.’s (2005) claim that advanced users of virtual reality employ more effective interaction strategies than laypersons, by making our own comparison between expert users of 3D maps and visualizations, and lay users. This experiment would also serve as a demonstration of the possibilities provided by 3DmoveR, which allows the recording of user interactions in a 3D environment.

ASPECTS OF USABILITY

Usability is understood in cartography as a relevant criterion for evaluating maps. The term is defined by ISO standard 9241-11:1998 as the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (ISO 1998). Furthermore, ISO standard 19157:2013 goes on to specifically describe usability for the geospatial domain: “Usability is based on user requirements. All quality elements may be used to evaluate usability. Usability evaluation may be based on specific user requirements that cannot be described using the quality elements described above. In this case, the usability element shall be used to describe specific quality information about a dataset’s suitability for a particular application or conformance to a set of requirements” (ISO 2013). Usability is then measured as the “degree of adherence of a dataset to a specific set of requirements.” The concept of usability can be applied in evaluating cartographic visualizations. Slocum et al. (2001) describe the importance of usability issues in 3D cartographic visualization and further emphasize that developing formal methods of usability assessment is necessary. MacEachren and Kraak (2001) also suggest that specific tools for usability research are needed.

According to ISO/IEC 9126-4:2004, usability is specified according to three parameters (ISO/IEC 2004): effectiveness (the accuracy and completeness with which users achieve specified goals, e.g., the correctness of task responses), efficiency (the resources, typically time, expended in relation to that effectiveness), and satisfaction (the user’s subjective attitude toward using the product).

USABILITY EVALUATION METHODS

Many approaches to evaluating cartographic products exist. It is possible to use a variety of evaluation methods to derive qualitative and quantitative characteristics of the tested product (including 3D maps). Authors such as van Elzakker (2004), Li, Çöltekin, and Kraak (2010), and Rother (2014) provide an overview of usability methods. These are: questionnaire; interview; direct observation; think-aloud protocol; focus-group study; screen capture or screen logging; and eye-tracking.

Generally, these research methods all involve users solving practical tasks with the product being evaluated, while speed, correctness of results, and accuracy of responses are monitored. These methods are usually not used individually, but combined to cover the needs of a specific study. This approach is called mixed research design, which was introduced into several disciplines by Cameron (2009), and into cartography by Bleisch (2011) and van Elzakker and Griffin (2013).

3D MAP USABILITY RESEARCH

MacEachren (1995) outlined the need for research on the usability of 3D maps. In most such studies, only static maps have been analyzed (e.g., Kraak 1988; Savage, Weibe, and Devine 2004; Ware and Plumlee 2005; Schobesberger and Patterson 2007; Popelka and Brychtova 2013; Preppernau and Jenny 2015; or Rautenbach, Coetzee, and Çöltekin 2016), or animations of flights over 3D maps (e.g., Torrens et al. 2013).

Few experiments that examined an interactive 3D virtual environment have been published. Herbert and Chen (2014) compared static 3D visualizations to interactive ones but did not study the interaction. Bleisch, Dykes, and Nebiker (2008) assessed the differences between reading 2D bar charts and reading those placed in a 3D environment. Speed and correctness were measured, but information about movement within the 3D environment was neither recorded nor evaluated, even though a 3D interactive environment was enabled. In this case, screen logging would have made it possible to determine whether participants used the interactive capabilities of 3D stimuli. Wilkening and Fabrikant (2013) studied user interaction with Google Earth. Observation and manual recording of the movement types (zooming, panning, tilting, and rotating) were used to collect these data. However, it would be possible to analyze the interaction of users in more detail and more automatically with screen logging or virtual movement recording.

Abend et al. (2012) have also contributed to the analysis of interactive movement in 3D environments; they processed videos captured while a user worked with Google Earth. However, the examination of videos is more demanding than evaluating screen-logging data, which can be analyzed automatically and objectively. Špriňarová et al. (2015) described a mainly qualitative (and to some extent subjective) approach in which participants were observed using similar movement strategies and sequences in a 3D virtual environment, including a terrain model. McKenzie and Klippel (2016) dealt with the problem of wayfinding in a virtual environment and analyzed, inter alia, movement speed. As part of their study, Juřík et al. (2017) recorded and analyzed individual movement types as users interacted with a 3D spatial data visualization across four interactive tasks.

Before we can easily apply the approaches and methods used in 3D User Interface (UI) research to cartographic research (for an introduction, see Bowman et al. 2005), there is a need for tools that will enable, for example:

Few of the above-mentioned methods have been used and implemented in cartography, except by Treves, Viterbo, and Haklay (2015), who tracked and analyzed the movement of their participants using virtual trajectories.

As previously mentioned, most of the usability studies in cartography dealt only with static 3D maps (perspective views) as stimuli. If interactive movement in 3D space was possible, it was neither monitored nor analyzed in detail. Wilkening and Fabrikant (2013), Treves, Viterbo, and Haklay (2015), McKenzie and Klippel (2016), and Juřík et al. (2017) are the only exceptions, and the approaches and methods they each used for 3D UI evaluation, especially the screen logging method, have been sources of inspiration for our tool. At the same time, we wanted to improve upon these approaches (eliminate manual records, support different variants of 3D maps) and combine them to allow comprehensive analysis of user interactions. These were our reasons for designing and implementing a new testing tool: to allow speed, the accuracy of responses, and the subjective opinions of participants to be recorded in a mixed research design.

DEVELOPMENT OF 3DMOVER

REQUIREMENTS ANALYSIS

As the first step in creating our 3D visualization testing tool, we conducted a requirements analysis in order to determine the features or functions that potential users would find necessary. We focused on two groups when determining user expectations for the tool: (1) researchers who would use it to create and analyze tests, and (2) participants in those researchers’ tests. However, for the formal requirements analysis, only the researchers were taken into account. Feedback was received from test participants later, in the evaluation phase (see Appendix 1 and the “Evaluation and Testing” section, below).

Our requirements analysis followed the ISO/IEC 25010:2011 standard (ISO/IEC 2011). An overview of identified requirements is shown in Figure 1, while a detailed description follows in the next section. These requirements could also be used to implement testing tools based on different technologies (programming languages, etc.). We identified both functional and non-functional requirements (see Figure 1). Functional requirements involve the inputs, behaviors, and outputs that the user expects from a system; these were defined based on the literature review outlined in the previous section. For example, it was important for researchers (i.e., test creators) to be able to record all characteristics they might choose to study, and to modify all the examined variables.

Figure 1. Package diagram of identified functional and non-functional requirements defined according to ISO/IEC (2011).


Non-functional requirements specify how the system works, typically including its properties or a condition restricting its operation, such as training needs, costs, or documentation. Lack of success, or bugs in the testing application, may discourage participants from engaging with the test. This is why it is also important for the software to meet non-functional requirements.

FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS

We categorized the functional requirements of the 3D testing tool into four packages: (A) displaying 3D data and interaction, (B) displaying the questionnaire and instructions, (C) user data capture, and (D) extending functions.

FUNCTIONAL REQUIREMENTS

Package A includes requirements related to interactive 3D visualization. These functions are needed to enable a wide range of 3D maps and their individual parts to be tested. As a result, individual controls can be evaluated, different modes of movement compared, or the suitability of symbols used in a 3D map assessed.

A.1. The testing tool should be able to display various types of 3D scenes or models. Preferably, it will handle 3D models of terrain (see, for example, Savage, Weibe, and Devine 2004; Popelka and Brychtova 2013; Wilkening and Fabrikant 2013), buildings (Rautenbach, Coetzee, and Çöltekin 2016; McKenzie and Klippel 2016), and abstract objects such as bar charts, etc. (Kraak 1988; Bleisch, Dykes, and Nebiker 2008).

A.2. Different types of interactive movement should be possible. Movement permits a fundamentally different affordance than static perspective views of 3D data. It is also one means of dealing with 3D object occlusion. 3D GIS applications often support several types of movement. We can distinguish between 3D movement modes (fly, walk, examine) and concrete types of movement (pan, zoom, rotation), which together make up the movement mode “examine.” In general, the maximum number of these modes should be available in the tool, since the aim of research may be, for example, to determine user preference for different movement types. Specifically, at least the above-mentioned types of movement should be supported, because they are the most common in 3D scenes (Ware and Plumlee 2005).

A.3. Non-interactive navigation is also foreseen as a very important functional requirement. Perhaps the most common and useful is a “reset position” function, but non-interactive movement can also mean switching between predefined views (see Ware and Plumlee 2005; Shepherd 2008). A flyby through a 3D scene along a predefined path may also be considered a form of non-interactive movement. The efficiency of flybys (used, for example, by Torres et al. 2013) may then be compared to the efficiency of using fully interactive navigation.

Package B includes requirements aimed at displaying instructions and storing user responses and/or opinions.

B.1. A questionnaire interface is required, since the testing tool should combine practical tasks (finding solutions to assigned tasks) along with the collecting of subjective responses. A questionnaire may be placed before or after a 3D scene. A questionnaire placed before a scene will likely focus on basic demographic data and previous user experience. A questionnaire can also come after a user solves a task, asking them, for example, to evaluate it, or recall what they remember about the 3D scene. Questionnaires have been used by Schmidt and Delazari (2011), Schobesberger and Patterson (2007), and Preppernau and Jenny (2015), among others.

B.2. A space to display instructions for each task should be available. A 3D scene may precede instructions or be displayed simultaneously with the task. Instructions may take the form of text or contain pictures.

B.3. An interface to input responses during tasks is necessary. This interface may include the option to select one or more correct answers, free-write responses to open-ended questions, or select features directly from a 3D scene. Participant responses (effectiveness) were monitored by Savage, Weibe, and Devine (2004), Wilkening and Fabrikant (2013), and Preppernau and Jenny (2015).

B.4. Conditional navigation may be required for training tasks in which instructions are displayed gradually. When a user is learning the movement “rotation,” for example, the instructions for performing it remain on screen until the movement is actually executed (the predefined condition). Afterwards, another instruction may be displayed, or the user can advance to the next topic, as sketched below.
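
For illustration, a minimal sketch of such conditional navigation follows. The element ids are hypothetical and the detection of the rotation movement is simplified (a drag with the left mouse button, which triggers rotation in the movement modes used here); this is not 3DmoveR’s actual code.

```js
// Conditional navigation (B.4): keep the "rotation" instruction on screen
// until the user actually performs the movement, then allow advancing.
let leftButtonDown = false;
const x3dElem = document.getElementById('x3dElem');   // hypothetical id

x3dElem.addEventListener('mousedown', (e) => { if (e.button === 0) leftButtonDown = true; });
x3dElem.addEventListener('mouseup', () => { leftButtonDown = false; });
x3dElem.addEventListener('mousemove', () => {
  if (leftButtonDown) {                               // rotation performed
    document.getElementById('instruction').style.display = 'none';
    document.getElementById('nextButton').disabled = false;
  }
});
```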

Package C includes requirements aimed at obtaining objective information related to a user’s performance. All these types of records should be interconnected to allow the exploration of relationships. For instance, the connection between previous user experience and the speed and accuracy of answers may then be analyzed. The same applies, for example, to reconstructing the sequence of movements a user followed to fulfill the task.

C.1. Capturing time data provides indicators related to speed in solving a task. This requirement is key for describing efficiency. The simplest method is to record the time each user needs to perform a given task. Efficiency was monitored, for example, by Preppernau and Jenny (2015), McKenzie and Klippel (2016), and Juřík et al. (2017).

C.2. Capturing responses related to a 3D scene is crucial for characterizing effectiveness. It should be possible to record responses (both correct and incorrect) in the form of selecting one correct option, multiple correct options, or responses as free text. User responses were captured and then analyzed, for example, by Savage, Weibe, and Devine (2004), Wilkening and Fabrikant (2013), and Preppernau and Jenny (2015).

C.3. Interaction with virtual environments, especially movement in 3D space, should be captured independently of recording responses and the time needed to solve tasks. Each movement is composed of a change in virtual camera position and orientation. Each change of coordinates should be stored. This method is used quite often in 3D UI research (Chittaro and Ieronutti 2004; Bowman et al. 2005; Zanbaka et al. 2005; Chittaro, Ranon, and Ieronutti 2006). It is possible to reconstruct and/or analyze user movement in a 3D virtual environment when coordinates are captured together with a timestamp (e.g., Cirio et al. 2013). Positions in 3D space may be expressed in various ways within the geospatial domain. Typically, Cartesian coordinates (X, Y, Z) or geographical coordinates (longitude, latitude, and altitude above the reference surface) are used (Treves, Viterbo, and Haklay 2015). An expression of virtual camera orientation, though, is more complicated. Some applications (e.g., Google Earth) use heading, tilt, and roll, which are the values of rotation around individual axes. Another approach is used in the X3D and VRML (Virtual Reality Modelling Language) formats, where three numbers specify the rotational axis and one value gives the angle of rotation around it. A rotation matrix (usually 3×3) can also be used. Preferably, the tool will record coordinates in a common, machine-readable format (e.g., CSV, JSON, or XML).
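
As an illustration, the sketch below logs timestamped camera positions and X3D-style axis-angle orientations using the viewpointChanged event that X3DOM dispatches on the Viewpoint node. The element id, record layout, and CSV serialization are our own illustrative choices, not 3DmoveR’s actual code.

```js
// Requirement C.3: record each change of virtual camera position and
// orientation, together with a timestamp, for later reconstruction.
const cameraLog = [];

document.getElementById('vp').addEventListener('viewpointChanged', (e) => {
  const p = e.position;       // SFVec3f: Cartesian camera position
  const r = e.orientation;    // [axis (SFVec3f), angle]: X3D-style rotation
  cameraLog.push({
    t: Date.now(),
    x: p.x, y: p.y, z: p.z,
    ax: r[0].x, ay: r[0].y, az: r[0].z, angle: r[1]
  });
});

// Serialize the log into CSV, one of the machine-readable formats above.
function logToCsv() {
  return cameraLog
    .map(s => [s.t, s.x, s.y, s.z, s.ax, s.ay, s.az, s.angle].join(';'))
    .join('\n');
}
```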

C.4. Information about the movement type (zoom, walk, rotate, etc.) should be captured in a form that can be stored and then processed. All information about the use of non-interactive functions must also be stored. These data are necessary for determining how long users spend on each movement type or studying movement type sequences during navigation in 3D space. As noted by Wilkening and Fabrikant (2013) and Juřík et al. (2017), it is a very important aspect of research in 3D interactive visualization.

C.5. Questionnaire responses must be captured in order to assess effectiveness, users’ descriptions of previous experience, their satisfaction with the tool, and their ability to learn. Questionnaire responses were captured and then analyzed by Savage, Weibe, and Devine (2004), Wilkening and Fabrikant (2013), and Preppernau and Jenny (2015).

C.6. Capturing the use of mouse buttons and functional keys allows a more detailed analysis of user interaction. It is especially important when one type of movement can be performed in several ways (e.g., in Google Earth, a user can either zoom with the mouse wheel, or by clicking and dragging with the right mouse button). This requirement is derived from detailed user logging, a common 3D UI research method (e.g., Ritchie et al. 2008; Sung et al. 2009).

C.7. Capturing screen settings (color mode, resolution) and Web browser information (type and version) allows user settings and conditions to be monitored.
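
Requirements C.6 and C.7 can be met with standard browser APIs, as in the following sketch; the log structures are illustrative.

```js
// C.6: record mouse buttons and functional keys with timestamps.
const eventLog = [];

document.addEventListener('mousedown', (e) => {
  // e.button: 0 = left, 1 = middle, 2 = right
  eventLog.push({ t: Date.now(), type: 'mouse', button: e.button });
});
document.addEventListener('keydown', (e) => {
  eventLog.push({ t: Date.now(), type: 'key', key: e.key });
});

// C.7: screen settings and browser information, captured once per session.
const environment = {
  resolution: `${window.screen.width}x${window.screen.height}`,
  colorDepth: window.screen.colorDepth,   // color mode
  browser: navigator.userAgent            // browser type and version
};
```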

Two optional, extended functionality requirements have also been identified (Package D), as defined below.

D.1. Additional tools to display position and orientation are often used in virtual environments. These include overview maps or a north arrow. Shepherd (2008) presents the benefits of these navigational aids, Schmidt and Delazari (2011) provide a comprehensive overview of them, and Burigat and Chittaro (2007) tested some of them. The effectiveness of these tools may also be examined in the future.

D.2. The system should be able to capture a screenshot at specific moments, for example, when a user enters a response, to document the virtual camera position and orientation at that time. This capture may serve as a basis for further qualitative user strategy evaluation. An expanded variation is dynamic screen capture (video recording), which permits indirect observation. This method was used, for example, by Abend et al. (2012).
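
X3DOM’s runtime API provides a getScreenshot() method that can serve this purpose; a minimal sketch (with an assumed element id) follows.

```js
// D.2: capture a static screenshot of the 3D scene, e.g., at the moment
// a user submits a response.
function captureScene() {
  const runtime = document.getElementById('x3dElem').runtime;
  const dataUrl = runtime.getScreenshot();   // base64-encoded PNG data URL
  // The image can be posted to the server alongside the interaction logs.
  return dataUrl;
}
```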

NON-FUNCTIONAL REQUIREMENTS

Non-functional requirements of the 3D testing tool have been categorized into four packages: (I) usability requirements, (II) technical requirements, (III) efficiency requirements, and (IV) development requirements.

Since the proposed application is designed for usability testing, it should itself be usable, as defined in Package I.

I.1. The application should be user friendly, a particularly critical consideration when the application is designated for usability testing. Performing a task should be simple and intuitive. All important parts of the application must be easy to access, especially virtual environment operation and navigation tools, as well as the elements needed to input responses. Well-known graphical control elements (widgets) should be used in the graphical user interface of the application (buttons, radio buttons, check boxes, or text boxes). User training time should be as brief as possible. A user should be able to work with the application immediately after reading brief instructions and initial explanations. Exporting and subsequently processing the recorded data should be as simple and user friendly as possible.

Package II contains requirements for the software’s ability to be used on different platforms, and attributes that affect how much effort is needed to make specific modifications.

II.1. Employing web technologies guarantees maximum accessibility. 3D graphics rendering should be considered, as emphasized by Behr et al. (2009). The web application should work independently of the display device or its settings. This applies especially to the different behaviors of various web browsers (such as Internet Explorer, Mozilla Firefox, Google Chrome, Opera, and Safari), which often in practice do not display the same content in the same way. Preferably, the 3D application will display its contents correctly and consistently to a maximum number of users. An installation process should not be needed.

II.2. It is necessary to concentrate on syntactic interoperability during the application design phase. Interoperability, according to IEEE 610:1990, is the ability of different systems to work together to provide services and achieve synergies (IEEE 1990). For that reason, standards for technological development and data handling should be used, especially those related to 3D format support (e.g., X3D) and those that are relevant to the web environment (HTML and CSS).

II.3. The application should demonstrate scalability, for situations in which researchers demand improvement in non-functional requirements (e.g., speeding up responses or increasing capacity).

II.4. The application should also feature extensibility, allowing researchers or developers to include new features or modify existing ones. Extensibility also allows the definition of additional functional requirements.

Package III is composed of a set of attributes affecting the relationship between the application’s performance and the resources it uses, under the stated conditions.

III.1. Performance (speed of responses) states how fast the application can complete a request delivered to it. Loading a new 3D scene should sustain a data transfer rate of at least 1 MB/s (8 Mbps); since we expect 3D model visualizations up to 15 MB in size, a scene should load within about 15 seconds. Loading new data and continuous rendering of a 3D scene (i.e., during virtual movement) should also be fast enough. For that reason, technologies with hardware-accelerated rendering are preferred.

III.2. Capacity is defined as the limit to the number of simultaneous service requests provided with guaranteed performance. The application should be capable of processing 20 simultaneous requests per second.

III.3. Availability means the probability of the application being accessible when requested; the application should be available 90% of the time across its lifetime. To lessen downtime of the system due to updates and patches, it is preferable that the data forming the 3D scene be separate from other system components, such as those that offer movement controls or record camera positions.

Several requirements related to the testing tool development process are also identified and summarized in package IV. When creating any application, reducing costs associated with development and deployment is usually important.

IV.1. Costs may be divided into software cost, spatial data cost, and personnel cost (both a person’s time and their hourly pay). In terms of web applications, a wide range of software libraries is freely available, allowing costs to be reduced. The testing tool should rely on open source technologies. The final application will be released under a BSD license.

IV.2. The cost of the input data that form the 3D model is a different matter. Some 3D spatial data are available as free or open data, and fictitious data can be employed for some tasks, but a considerable amount of data carries an associated cost. The test creator and the nature of the proposed tasks determine which spatial data may be included as stimuli. Non-commercial data are expected to be used.

IV.3. Another component of cost is the labor intensity associated with developing the application. This depends on both the condition of processed data (the number of necessary adjustments that must be made to it) and the condition of software tools (the extent to which it is necessary to modify or expand them). The testing tool will be developed on a non-commercial basis as part of a Ph.D. thesis.

IV.4. Documentation is foreseen as a non-functional requirement important for the re-use of the testing tool. Test creators will require a tutorial that instructs them on how new experiments are designed. Clear and brief descriptions of controls and functionality will also be included in each test, so there is some assistance for participants.

DESIGN AND IMPLEMENTATION

We designed the experimental application 3DmoveR according to our requirements analysis. Our process was patterned after the “spiral model” (Boehm 1988), a risk-driven model for software projects. Based on a project’s risk patterns, the spiral model suggests a blend of process models for its design, such as incremental, waterfall, or evolutionary prototyping. In our own case, we decided to create the software in two iterations. In the first, we designed and implemented an initial prototype, which was then pilot tested. After improving the prototype based on the pilot test, we created a second version for use in another round of pilot testing. This version of the tool was subsequently used in the main experiment.

Open web technologies were chosen to implement 3DmoveR, which comprises a client and a server side (see Figure 2). The client side is built with HTML, JavaScript (JS), jQuery, and X3DOM (a JS library for rendering 3D graphics in web browsers). The data recorded on the client side are posted to the server, where they are stored as CSV files generated by PHP scripts.
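
The client-to-server hand-off can be sketched as follows. This is a minimal illustration, not 3DmoveR’s actual code: the endpoint name save_log.php and the payload fields are assumptions.

```js
// Post the interaction records collected in the browser to a PHP script
// that appends them to a CSV file on the server.
function submitLog(participantId, taskId, csvString) {
  $.post('save_log.php', {
    participant: participantId,
    task: taskId,
    log: csvString               // e.g., the camera log serialized as CSV
  }).done(() => console.log('log stored'))
    .fail(() => console.error('log upload failed'));
}
```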

Figure 2. The general architecture of the 3DmoveR application, and the main technologies and formats on the client and server sides.


Wide support in web browsers was the main reason for choosing the X3DOM framework for 3DmoveR’s development. X3DOM also benefits from the ready availability of software for creating 3D input data, documentation, and relevant examples. X3DOM uses the X3D data structure, is built on HTML5, JavaScript, and WebGL, and is free of charge for both non-commercial and commercial use (Behr et al. 2009). Common JS events are supported, e.g., for detecting user interaction or measuring time. 3D data can be stored in an HTML file or in external files. Other aspects and capabilities of X3DOM are generally described by Behr et al. (2009), Herman and Řezník (2015), and on the web (www.x3dom.org). Herman and Russnák (2016) examine X3DOM utilization in the cartographic and GIS domains. A minimal example of a scene embedded this way is sketched below.
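
For illustration, a minimal X3DOM scene embedded in an HTML page might look like the following; the element ids and the terrain file name are placeholders.

```html
<!-- Load the X3DOM library and stylesheet. -->
<script src="https://www.x3dom.org/download/x3dom.js"></script>
<link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css">

<!-- The 3D scene is declared directly in the HTML document. -->
<x3d id="x3dElem" width="800px" height="600px">
  <scene>
    <viewpoint id="vp" position="0 50 200"></viewpoint>
    <inline url="terrain.x3d"></inline>  <!-- 3D data in an external file -->
  </scene>
</x3d>
```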

EVALUATION AND TESTING

We evaluated 3DmoveR through two pilot tests, technical testing, and interviews with experts. Detailed descriptions of the designs, tasks, stimuli, and participants for both pilot tests as well as the resulting software design improvements can be found in Appendix 1. The results of the technical testing with different 3D models are presented in Appendix 2. Here, we summarize the results of these tests, compare them with the defined requirements, and also list the results of consultations with experts.

The 3DmoveR application was able to implement the functional requirements laid out in Figure 1. Terrain data and abstract symbols were used as stimuli in both pilot tests, while 3D city models and 3D models of building interiors were also tested elsewhere (A.1). Interactive movement was successfully implemented in the tool (A.2). Most movement actions driven by a user can be distinguished. Various 3D libraries with different controls can also be used to render 3D models (e.g., Cesium, WebGLEarth, Three.js). The proposed tools only support interaction via a mouse or keyboard and depiction of a 3D scene on standard (2D) screens, which may be seen as a limitation. Non-interactive movement (A.3) was not used in the pilot tests, but its implementation and possible application in visualizing 3D spatial data are described in our earlier publication (Herman and Řezník 2015). Displaying questionnaires (B.1), instructions (B.2), and interfaces to input responses (B.3) presented no complications. All pilot test participants mastered the training task, and we therefore assumed that they understood conditional navigation (B.4). Recording time, type of action, and all responses and configurations functioned correctly (C.1–7). All figures and visualizations reported in the Results section were calculated and constructed from these data. Two optional, extended functionalities were identified in the requirements analysis. The role of displaying position and virtual camera orientation in the 3D scene (D.1) was described by Schmidt and Delazari (2011) and Herman and Řezník (2015). Capturing screenshots (D.2) is also possible with the X3DOM library.

In terms of non-functional requirements, the results of our pilot tests (Appendix 1) showed that the application can be considered user friendly (I.1). Users did not report any major problems when using the tool. Although full functionality is only available in Google Chrome, we considered the testing tool to be sufficient in terms of accessibility (II.1). Ongoing work will aim to support other web browsers. The testing tool was also verified as customizable: each component of the X3D family of standards can be used to expand or modify it (II.2–4). Appendix 2 shows different types of 3D data that can be tested in this tool. We also verified that the application’s performance (III.1) and capacity (III.2) met the non-functional requirements (results are also presented in Appendix 2). In terms of availability (III.3), no problems were identified, as the application is not intended for high availability (e.g., hundreds of concurrent users).

Our work aimed to minimize operating costs (IV.1–3). The X3DOM library that was used to implement 3DmoveR is open source, and freely available data were used to create stimuli. While we used a commercial program (ArcScene) to prepare input data, freeware or open-source tools could have done the same task. Our previous study (Herman and Russnák 2016) used, for example, Trimble SketchUp. While experiments employing the current form of the testing tool require JS knowledge, a graphical interface to manage tests is envisaged for the future. This would allow administration in a graphical environment instead of via programming code.

Feedback on the first version of the application that we developed was also obtained from experts in various scientific fields: cartography, geography, informatics, and psychology. We asked these experts (assistant professors) to use the application and accomplish a set of tasks; then, we collected their subjective evaluations and implemented their suggestions. For example, the cartographic design of the 3D visualizations was evaluated by senior cartographers from Masaryk University and Palacký University, in the Czech Republic. The software architecture and design were discussed with experts from the Faculty of Informatics at Masaryk University to improve the performance of the application and the data captured during the experiments. The Centre for Experimental Psychology and Cognitive Sciences at Masaryk University evaluated the resulting measures and visualizations and were satisfied with their detail.

USER STUDY WITH 3DMOVER

In the main study, we wanted to compare the differences in performance and strategies of two user groups: 3D map and visualization experts, and non-expert laypersons (the general public). Our research question was: “Are expert users able to solve the given tasks more quickly, with greater accuracy in their responses, and using a more effective strategy, as predicted by Bowman et al. (2005)?”

Forty participants took part in the test. Half of the participants (20) were experts: cartography graduates who had obtained at least a bachelor’s degree (average age 25 years; 4 females and 16 males). The other half of the participants, from the general public (laypersons), were ten psychology undergraduate students and ten final-year high school students (average age 19 years; 14 females and 6 males).

The test battery comprised an introductory questionnaire covering demographics and previous 3D visualization experience, a training task (participants had to try out all three possible types of motion, described below, otherwise they could not continue), and four test tasks with 3D maps. These tasks were selected to reflect basic cognitive processes (see Anderson, Krathwohl, and Bloom 2001). In two of the test tasks, users were presented with four objects and asked to identify which one was located at the highest altitude (Tasks 1 and 2); only one answer could be chosen from among the four options (objects A–D). Two other tasks were focused on the identification of visible objects from the top of a mountain (Tasks 3 and 4). These tasks also offered four options, but any number of them could be selected. Each task began with an instruction page, followed by a page with the 3D scene and an interface for user responses. At the end of the whole testing battery there were concluding questions, in which users offered a Likert-scale subjective evaluation of how difficult they perceived the tasks to have been.

All participants were informed that correct answers were more important than speed, and that their performance time would be recorded. Google Chrome was used for the experiment, as this web browser could be set to full screen mode before it began. Equivalent experimental conditions existed for all participants, including all environmental aspects. Participants were rewarded with small gifts at the end of testing.

Digital terrain models from the SRTM (Shuttle Radar Topography Mission) formed the principal stimuli in the main experiment. They were processed in ArcGIS 10.2. The terrain models were visualized in ArcScene with a green-to-brown hypsometric color scheme and a vertical scale (Z factor) of twice the actual altitudes. The results were exported from ArcScene as VRML (Virtual Reality Modeling Language) files and converted into X3D format using freely available software called View3dScene.

A type of virtual movement called “turntable” in X3DOM was chosen for this experiment, and was also used in both pilot tests. “Turntable” is a specific variant of a more widely used movement mode called “examine.” Both “examine” and “turntable” are composed of three specific types of movement: pan (performed by the middle mouse button), zoom (right mouse button or mouse wheel), and rotate (left mouse button). Zoom moves the scene nearer or farther, pan drags the scene side to side, and rotate turns the scene around the center of rotation. As compared to “examine,” “turntable” does not allow the longitudinal axis of the virtual camera to be rotated.
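
In X3DOM, this movement mode is declared through the NavigationInfo node, as in the following minimal sketch (the terrain file name is a placeholder).

```html
<x3d width="800px" height="600px">
  <scene>
    <!-- Restrict navigation to the "turntable" mode used in the experiment. -->
    <navigationinfo type='"turntable"'></navigationinfo>
    <inline url="terrain.x3d"></inline>
  </scene>
</x3d>
```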

Video 1. Click to see a demonstration of 3DmoveR.

RESULTS

Interaction and virtual movement data were collected using 3DmoveR, and then analyzed and visualized. The differences in correct responses (effectiveness) were relatively small between the two user groups we compared. This is likely due to the tasks being relatively simple. Only one participant, a layperson, responded incorrectly in the first task (select the object at the highest altitude); thus, correctness was 95% for laypersons. All participants solved the second (select the object at the highest altitude) and fourth (determination of object visibility) tasks correctly. The greatest difference in effectiveness was recorded in the third task (determination of object visibility). All experts and 15 laypersons (75%) solved the third task without error. However, differences were recorded in the response times (efficiency) and other indicators, as seen in Appendix 3.

INTERACTION AND VIRTUAL MOVEMENT DATA

The descriptive statistics presented in Appendix 3 were used to compare response times, virtual movements, and interaction strategies between two user groups (experts and laypersons). Similar approaches have been used or recommended by Bade, Ritter, and Preim (2005); Zanbaka et al., (2005); Wilkening and Fabrikant (2013); and McKenzie and Klippel (2016). Measures were calculated from each user’s virtual trajectory (length, average speed) and virtual camera positions (average height, rotation characteristics), or determined from the duration of individual movement types. We also recorded the moments when interactions were interrupted (delays). Delays longer than one second normally occur at the beginning of a task and just before responding. Shorter delays represent partial interruptions in movements; we assessed movement without interruption as being smoother.
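
For illustration, the following sketch derives three of these measures (trajectory length, average speed, and the number of delays) from a logged trajectory of {t, x, y, z} samples. It is a simplified reconstruction, not 3DmoveR’s actual analysis code.

```js
// Compute basic trajectory measures from timestamped camera positions.
function trajectoryMeasures(samples, delayThresholdMs = 1000) {
  let length = 0;
  let delays = 0;
  for (let i = 1; i < samples.length; i++) {
    const a = samples[i - 1], b = samples[i];
    length += Math.hypot(b.x - a.x, b.y - a.y, b.z - a.z); // Euclidean step
    if (b.t - a.t > delayThresholdMs) delays++;            // interruption
  }
  const durationS = (samples[samples.length - 1].t - samples[0].t) / 1000;
  return {
    length,                           // virtual trajectory length
    averageSpeed: length / durationS, // length units per second
    delays                            // delays longer than one second
  };
}
```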

These measures allow statistical testing and comparison of different aspects of user interaction between groups; Appendix 3 contains the results. The Mann-Whitney test was used for this purpose, because most of the measures do not have normal distributions. Where the differences were statistically significant, experts performed better (shorter response times, shorter trajectories, fewer delays), which corresponds to our hypothesis that experts are more skilled in handling interactive 3D maps. However, trajectory lengths and the number of delays are usually closely related to response time.

The 3DmoveR tool allows the easy capture of all of the above-mentioned measures. Future researchers can design experiments that compare the performance of individual users and user groups, in order to determine how different 3D visualizations, visualization settings, and other variables affect user interactions.

VISUALIZATION OF INTERACTIONS AND VIRTUAL MOVEMENTS

Task 3 (determination of object visibility) was chosen for a detailed comparison of the strategies of the two groups, as laypersons had the lowest level of correctness, and there were other statistically significant differences between groups.

The spatial component of virtual movements in the two user groups can be illustrated by either visualizing trajectories or using the Gridded AOI (Area of Interest) method. Visualizing trajectories provided only limited results, so we employed the Gridded AOI method. Gridded AOIs were created as cubes (3D Gridded AOI) using a minimum bounding box. In each cube, the number of virtual camera positions was counted, which gives the density of occurrence in that AOI. Interactive visualizations of 3D Gridded AOIs for experts and laypersons are available at: olli.wz.cz/webtest/3dmover/visualizations_cp.
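
A simplified sketch of this density calculation follows; the cube size and data layout are assumptions.

```js
// Count virtual camera positions per cube of a 3D grid laid over the
// minimum bounding box of all positions.
function griddedAoiDensity(positions, cubeSize) {
  const min = {
    x: Math.min(...positions.map(p => p.x)),
    y: Math.min(...positions.map(p => p.y)),
    z: Math.min(...positions.map(p => p.z))
  };
  const counts = new Map();
  for (const p of positions) {
    const key = [
      Math.floor((p.x - min.x) / cubeSize),
      Math.floor((p.y - min.y) / cubeSize),
      Math.floor((p.z - min.z) / cubeSize)
    ].join(',');                                  // cube index as "i,j,k"
    counts.set(key, (counts.get(key) || 0) + 1);  // density of occurrence
  }
  return counts;
}
```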

In addition to the spatial component of user interactions, the temporal component can also be studied. The sequences of each type of movement (rotation, pan, zoom) can be compared. This can be done visually with a sequence chart (Figure 3), but such comparison is highly subjective and can be challenging (e.g., in the case of large numbers of participants or complicated sets of interactions). However, we can identify groups of similarly interacting participants: for example, those who prefer rotation (participants E05, E07, E09, L06, and L08) or participants who use all the movement types and take a long time to solve the task (participants E04, E19, and L17). There are no clear differences between expert and layperson groups visible in Figure 3.

Figure 3. Sequence chart of user interactions. An online version of the sequence chart with sample data is available at: olli.wz.cz/webtest/3dmover/visualizations_cp.


A more objective way of comparing user interaction sequences is based on the Levenshtein Distance method, which can be calculated with a freely available software tool called ScanGraph (eyetracking.upol.cz/scangraph). ScanGraph’s output is a matrix of similarities and a graph in which groups of similar sequences are displayed as cliques (Figure 4). For more detailed information about ScanGraph, see Dolezalova and Popelka (2016).
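
For readers unfamiliar with the method, the sketch below computes the Levenshtein distance between two interaction sequences encoded as strings over a movement alphabet, e.g., “R” (rotation), “P” (pan), and “Z” (zoom); the encoding is our illustrative choice, not ScanGraph’s internal format.

```js
// Classic dynamic-programming Levenshtein distance: the minimum number of
// insertions, deletions, and substitutions turning sequence a into b.
function levenshtein(a, b) {
  const d = [];
  for (let i = 0; i <= a.length; i++) {
    d.push(new Array(b.length + 1).fill(0));
    d[i][0] = i;                                   // i deletions
  }
  for (let j = 0; j <= b.length; j++) d[0][j] = j; // j insertions
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

levenshtein('RRPZ', 'RPZZ');  // -> 2: a small distance, similar strategies
```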

Figure 4. ScanGraph output of user interaction sequences (L – layperson, E – expert). An interactive version is available at: eyetracking.upol.cz/scangraph/?source=4895429895b2a523e832e67.45570456.


ScanGraph helped to identify the differences between laypersons and experts more quantitatively, but at the same time it mainly created smaller cliques of similar participants (usually with two to five members). Figure 4 shows the 31 calculated cliques; 14 of them are uniform, containing only experts or only laypersons. The various cliques form three larger groups, one with only laypersons, another with only experts, and the third with equal numbers of both participant groups. Two smaller cliques and solitary sequences of participants “E02” and “L02” can also be identified. Participant “L02” solved the problem without any interactions and responded incorrectly (answers A, C, and D).

Besides analyzing groups, we can also go into more detail and study the spatial aspects of user interactions performed by individual participants. We can, for instance, visualize individual trajectories, highlighting virtual camera orientation, delays at individual virtual camera positions, or both types of information together. In Figure 5, the symbols for virtual camera positions are colored according to the movement type used (rotation/pan/zoom), which we can see greatly affects the shape of the virtual trajectory.

The virtual camera’s position and orientation at important moments, such as when answering questions, can also be extracted from the records, and screenshots can be reconstructed for examination by researchers. One further way of studying user strategy is to play back the movements of individual participants as animations (screen video). A tool to do this, along with sample data, is available at: olli.wz.cz/webtest/3dmover/visualizations_cp. Screenshots and screen videos are suitable for qualitative evaluation of participants’ interactions and their strategies.

Figure 5. Comparison of the virtual trajectories of participant “E05” and participant “L04”. The sizes of the spheres represent delays at individual virtual camera positions. An online version of these visualizations is available at: olli.wz.cz/webtest/3dmover/visualizations_cp.


We can achieve a deeper understanding of user interactions with 3D visualizations by using a combination of analysis methods, some better suited to the scientific comparison of user groups (Gridded AOI and density calculation, ScanGraph), and others more suitable for a detailed examination of individual user interactions (sequence charts, visualization of trajectories, screenshots, or screen videos).

Video 2. Click to see a demonstration of interactive and static methods for analyzing user data.

DISCUSSION

The experiment demonstrated the unique advantages of 3DmoveR for cartographic research: the tool allowed the easy recording of data that enabled us to make comparisons between the two user groups (experts vs. laypersons) and between individual participants. The results supported our hypothesis that experts would achieve higher correctness when responding and solve the tasks more quickly. Furthermore, their movements in the virtual environment were smoother, with fewer delays shorter than one second. The differences in correctness between experts and laypersons varied between individual tasks. The lowest accuracy was recorded in the third task, which was assessed as the most difficult by both laypersons and experts. These results are probably due to the terrain used in this task having the least roughness and the least variation in color range. The use of a green-to-brown color scale might also have influenced some users to make decisions according to color rather than their perception of 3D terrain shapes; we assume that this could have happened in the first two tasks. In subsequent studies, it would be more appropriate to color the terrain uniformly or use an orthophoto as a texture. This would also increase the ecological validity of the results: stimuli would be more similar, for example, to an application like Google Earth. Note also that the vertical scale (Z factor) of the terrain in the test was twice the actual altitude.

It is obvious that there is a learning effect when the results of each test type (selection of an object at the highest altitude and identification of visible objects from the top of a mountain) are examined. In the first tasks of each type (the first and third tasks overall), we recorded statistically significant differences between users in response time and the number of delays. In the second tasks of each type (second and fourth tasks), these differences were less evident. Differences in the correctness between tasks were affected by the first two tasks having only one possible correct answer, while in the third and fourth tasks multiple answers were possible.

We derived a number of individual metrics and visualizations to represent aspects of user interactions. While some differences or dependencies appear to be obvious (such as the correlation between the time a user took to solve a task and the distance they traveled), others need to be further explored and analyzed in the context of future experiments, such as how the correctness of responses depends on the sequence of virtual movement types. It is also possible to design and use other visualization methods, such as a graph showing changes in height of the virtual camera as the task is performed, or one indicating how distance changes between points of interest and the virtual camera.

CONCLUSIONS

The 3DmoveR software we developed was successfully validated through a usability test involving interactive 3D maps. To summarize, the application has the following major advantages:

The English version of the main experiment is available at: olli.wz.cz/webtest/3dmover/test_eng_cp, as well as a demo version with different types of stimuli (olli.wz.cz/webtest/3dmover/demo_eng_cp).

We tested the capabilities of 3DmoveR in pilot tests and then fully applied its possibilities in the main experiment to compare the performance of two user groups (laypersons and experts). In contrast to the classic approach of map user studies (which use static 3D maps as stimuli and analyze efficiency, effectiveness or satisfaction only), figures calculated from the data recorded in 3DmoveR were used for this comparison. Our hypothesis that experienced users would achieve better results than laypersons when working with interactive 3D maps was confirmed. They achieved higher accuracy when responding and solved tasks more quickly. Their movement in virtual environments was quicker and smoother, as indicated in the statistical testing of calculated delays.

Additionally, we explored a number of options for analyzing user strategies with different visualization and analytical methods (e.g., visualization of trajectories, Gridded AOI, sequence chart, ScanGraph). Further data-driven experiments will expand our knowledge in the usability and cognitive aspects of 3D visualization, and help explain at least some of the theoretical background of 3D cartographic visualization. Testing tools such as 3DmoveR, which permit detailed user interaction analysis, will help make that possible.

ACKNOWLEDGEMENTS

This research was funded by Grant No. MUNI/M/0846/2015, “Influence of Cartographic Visualization Methods on the Success of Solving Practical and Educational Spatial Tasks” and Grant No. MUNI/A/1251/2017, “Integrated Research on Environmental Changes in the Landscape Sphere of Earth III,” both awarded by Masaryk University, Czech Republic.

REFERENCES

Abend, Pablo, Tristan Thielmann, Ralph Ewerth, Dominik Seiler, Markus Mühling, Jörg Döring, Manfred Grauer, and Bernd Freisleben. 2012. “Geobrowsing Behaviour in Google Earth — A Semantic Video Content Analysis of On-Screen Navigation.” In GI_Forum 2012: Geovisualization, Society and Learning, edited by T. A. Jekel, et al., 2–13. Berlin: Herbert Wichmann Verlag.

Anderson, Lorin W., David R. Krathwohl, and Benjamin S. Bloom. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. New York: Longman.

Bade, Ragnar, Felix Ritter, and Bernhard Preim. 2005. “Usability Comparison of Mouse-based Interaction Techniques for Predictable 3D Rotation.” In SG’05 Proceedings of the 5th International Conference on Smart Graphics, edited by Andreas Butz, Brian Fisher, Antonio Krüger, and Patrick Olivier, 138–150. Berlin, Heidelberg: Springer-Verlag. doi: 10.1007/11536482_12.

Bandrova, Temenoujka. 2006. “Innovative Technology for the Creation of 3D Maps.” Data Science Journal 4: 53–58. doi: 10.2481/dsj.4.53.

Behr, Johannes, Peter Eschler, Yvonne Jung, and Michael Zöllner. 2009. “X3DOM – A DOM-based HTML5/X3D Integration Model.” In Proceedings of Web3D 2009: The 14th International Conference on Web3D Technology, edited by Stephen Spencer, 127–135. New York: Association for Computing Machinery. doi: 10.1145/1559764.1559784.

Biljecki, Filip, Jantien Stoter, Hugo Ledoux, Sisi Zlatanova, and Arzu Çöltekin. 2015. “Applications of 3D City Models: State of the Art Review.” ISPRS International Journal of Geo-Information 4 (4): 2842–2889. doi: 10.3390/ijgi4042842.

Bleisch, Susanne. 2011. “Evaluating the Appropriateness of Visually Combining Quantitative Data Representations with 3D Desktop Virtual Environments Using Mixed Methods.” PhD diss., City, University of London. http://openaccess.city.ac.uk/1092/1/Bleisch%2C_Susanne.pdf.

Bleisch, Susanne, Jason Dykes, and Stephan Nebiker. 2008. “Evaluating the Effectiveness of Representing Numeric Information Through Abstract Graphics in 3D Desktop Virtual Environments.” Cartographic Journal 45 (3): 216–226. doi: 10.1179/000870408X311404.

Boehm, Barry W. 1988. “A Spiral Model of Software Development and Enhancement.” IEEE Computer 21 (5): 61–72. doi: 10.1109/2.59.

Bowman, Doug A., Ernst Kruijff, Joseph J. LaViola, and Ivan Poupyrev. 2005. 3D User Interfaces: Theory and Practice, 1st Edition. Redwood City, CA: Addison Wesley.

Buchroithner, Manfred F., and Claudia Knust. 2013. “True-3D in Cartography—Current Hard and Softcopy Developments.” In Geospatial Visualisation, edited by Antoni Moore and Igor Drecki, 41–65. Berlin, Heidelberg: Springer. doi: 10.1007/978-3-642-12289-7_3.

Burigat, Stefano, and Luca Chittaro. 2007. “Navigation in 3D Virtual Environments: Effects of User Experience and Location-Pointing Navigation Aids.” International Journal of Human-Computer Studies 65 (11): 945–958. doi: 10.1016/j.ijhcs.2007.07.003.

Büschel, Wolfgang, Patrick Reipschläger, Ricardo Langner, and Raimund Dachselt. 2017. “Investigating the Use of Spatial Interaction for 3D Data Visualization on Mobile Devices.” In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, 62–71. New York: Association for Computing Machinery. doi: 10.1145/3132272.3134125.

Cameron, Roslyn. 2009. “A Sequential Mixed Model Research Design: Design, Analytical and Display Issues.” International Journal of Multiple Research Approaches 3 (2): 140–152. doi: 10.5172/mra.3.2.140.

Chittaro, Luca, and Lucio Ieronutti. 2004. “A Visual Tool for Tracing Users’ Behavior in Virtual Environments.” In Proceedings of the Working Conference on Advanced Visual Interfaces, 41–47. New York: Association for Computing Machinery. doi: 10.1145/989863.989868.

Chittaro, Luca, Roberto Ranon, and Lucio Ieronutti. 2006. “VU-Flow: A Visualization Tool for Analyzing Navigation in Virtual Environments.” IEEE Transactions on Visualization and Computer Graphics 12 (6): 1475–1485. doi: 10.1109/TVCG.2006.109.

Cirio, Gabriel, Maud Marchal, Anne-Helene Olivier, and Julien Pettré. 2013. “Kinematic Evaluation of Virtual Walking Trajectories.” IEEE Transactions on Visualization and Computer Graphics 19 (4): 671–680. doi: 10.1109/TVCG.2013.34.

Dolezalova, Jitka, and Stanislav Popelka. 2016. “ScanGraph: A Novel Scanpath Comparison Method Using Graph Cliques Visualization.” Journal of Eye Movement Research 9 (4): 1–13. doi: 10.16910/jemr.9.4.5.

van Elzakker, Corné P. J. M. 2004. “The Use of Maps in the Exploration of Geographic Data.” Netherlands Geographical Studies. PhD diss., Universiteit Utrecht.

van Elzakker, Corné P. J. M., and Amy L. Griffin. 2013. “Focus on Geoinformation Users.” GIM International 27 (8): 20–23. https://www.gim-international.com/content/article/focus-on-geoinformation-users.

Haeberling, Christian, Hansruedi Bär, and Lorenz Hurni. 2008. “Proposed Cartographic Design Principles for 3D Maps: A Contribution to an Extended Cartographic Theory.” Cartographica 43 (3): 175–188. doi: 10.3138/carto.43.3.175.

Hajek, Pavel, Karel Jedlicka, and Vaclav Cada. 2016. “Principles of Cartographic Design for 3D Maps – Focused on Urban Areas.” In 6th International Conference on Cartography and GIS Proceedings, Vol. 1 and Vol. 2, edited by Temenoujka Bandrova and Milan Konecny, 297–307. Sofia: Bulgarian Cartographic Association.

Herbert, Grant, and Xuwei Chen. 2014. “A Comparison of Usefulness of 2D and 3D Representations of Urban Planning.” Cartography and Geographic Information Science 42 (1): 22–32. doi: 10.1080/15230406.2014.987694.

Herman, Lukáš, Stanislav Popelka, and Vendula Hejlova. 2017. “Eye-tracking Analysis of Interactive 3D Geovisualizations.” Journal of Eye Movement Research 10 (3): 2. doi: 10.16910/jemr.10.3.2.

Herman, Lukáš, and Tomáš Řezník. 2015. “3D Web Visualization of Environmental Information – Integration of Heterogeneous Data Sources when Providing Navigation and Interaction.” In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XL-3/W3, edited by Claude Mallet, et al., 479–485. Göttingen, Germany: Copernicus GmbH. doi: 10.5194/isprsarchives-XL-3-W3-479-2015.

Herman, Lukáš, and Jan Russnák. 2016. “X3DOM: Open Web Platform for Presenting 3D Geographical Data and E-learning.” In Central Europe Area in View of Current Geography. Proceedings of 23rd Central European Conference, edited by Libor Lnenicka, 31–40. Brno, Czech Republic: Masaryk University.

Herman, Lukáš, and Zdeněk Stachoň. 2016. “Comparison of User Performance with Interactive and Static 3D Visualization – Pilot Study.” In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLI-B2, edited by Lena Halounova, et al., 655–661. Göttingen, Germany: Copernicus GmbH. doi: 10.5194/isprsarchives-XLI-B2-655-2016.

Herman, Lukáš, Zdeněk Stachoň, Radim Stuchlík, Jiří Hladík, and Petr Kubíček. 2016. “Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues.” In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLII-2/W2, edited by Efi Dimopoulou and Peter van Oosterom, 33–40. Göttingen, Germany: Copernicus GmbH. doi: 10.5194/isprs-archives-XLII-2-W2-33-2016.

IEEE (Institute of Electrical and Electronics Engineers). 1990. IEEE 610:1990 IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. doi: 10.1109/IEEESTD.1991.106963.

ISO (International Organization for Standardization). 1998. ISO 9241-11:1998 Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) — Part 11: Guidance on Usability.

———. 2013. ISO 19157:2013 Geographic Information — Data Quality.

ISO/IEC (International Organization for Standardization/International Electrotechnical Commission). 2004. ISO/IEC 9126-4:2004 Software Engineering — Product Quality — Part 4: Quality in Use Metrics.

———. 2011. ISO/IEC 25010:2011 Systems and Software Engineering — Systems and Software Quality Requirements and Evaluation (SQuaRE) — System and Software Quality Models.

Juřík, Vojtěch, Lukáš Herman, Čeněk Šašinka, Zdeněk Stachoň, and Jiří Chmelík. 2017. “When the Display Matters: A Multifaceted Perspective on 3D Geovisualizations.” Open Geosciences 9 (1): 89–100. doi: 10.1515/geo-2017-0007.

Kraak, Menno-Jan. 1988. “Computer-Assisted Cartographical 3D Imaging Techniques.” PhD diss., Delft University of Technology.

Li, Xia, Arzu Çöltekin, and Menno-Jan Kraak. 2010. “Visual Exploration of Eye Movement Data Using the Space-Time-Cube.” In Geographic Information Science, edited by Sara Irina Fabrikant, Tumasch Reichenbacher, Marc van Kreveld, and Christoph Schlieder, 295–309. Berlin, Heidelberg: Springer. doi: 10.1007/978-3-642-15300-6_21.

MacEachren, Alan M. 1995. How Maps Work: Representation, Visualization, and Design. New York: Guilford Press.

MacEachren, Alan M., and Menno-Jan Kraak. 2001. “Research Challenges in Geovisualization.” Cartography and Geographic Information Science 28 (1): 3–12. doi: 10.1559/152304001782173970.

McKenzie, Grant, and Alexander Klippel. 2016. “The Interaction of Landmarks and Map Alignment in You-Are-Here Maps.” Cartographic Journal 53 (1): 43–54. doi: 10.1179/1743277414Y.0000000101.

Popelka, Stanislav, and Alzbeta Brychtova. 2013. “Eye-tracking Study on Different Perception of 2D and 3D Terrain Visualization.” Cartographic Journal 50 (3): 240–246. doi: 10.1179/1743277413Y.0000000058.

Preppernau, Charles A., and Bernhard Jenny. 2015. “Three-Dimensional versus Conventional Volcanic Hazard Maps.” Natural Hazards 78 (2): 1329–1347. doi: 10.1007/s11069-015-1773-z.

Rautenbach, Victoria, Serena Coetzee, and Arzu Çöltekin. 2016. “Investigating the Use of 3D Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa.” In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLI-B2, edited by Lena Halounova, et al., 425–431. Göttingen, Germany: Copernicus GmbH. doi: 10.5194/isprsarchives-XLI-B2-425-2016.

Ritchie, James M., Raymond C. W. Sung, Heather Rea, Theodore Lim, Jonathan R. Corney, and Iris Howley. 2008. “The Use of Non-intrusive User Logging to Capture Engineering Rationale, Knowledge and Intent during the Product Life Cycle.” In PICMET 2008 – 2008 Portland International Conference on Management of Engineering & Technology, edited by Dundar F. Kocaoglu, Timothy R. Anderson, and Tugrul U. Daim, 981–989. Portland, OR: PICMET. doi: 10.1109/PICMET.2008.4599707.

Roth, Robert E. 2012. “Cartographic Interaction Primitives: Framework and Synthesis.” Cartographic Journal 49 (4): 376–395. doi: 10.1179/1743277412Y.0000000019.

Rohrer, Christian. 2014. “When to Use Which User-Experience Research Methods.” Accessed January 8, 2018. https://www.nngroup.com/articles/which-ux-research-methods.

Rubin, Jeffrey, Dana Chisnell, and Jared Spool. 2008. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, 2nd Edition. Hoboken, NJ: Wiley.

Šašinka, Čeněk, Kamil Morong, and Zdeněk Stachoň. 2017. “The Hypothesis Platform: An Online Tool for Experimental Research into Work with Maps and Behavior in Electronic Environments.” ISPRS International Journal of Geo-Information 6 (12): 1–22. doi: 10.3390/ijgi6120407.

Savage, Debra M., Eric N. Wiebe, and Hugh A. Devine. 2004. “Performance of 2D versus 3D Topographic Representations for Different Task Types.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 48 (16): 1793–1797. doi: 10.1177/154193120404801601.

Schmidt, Marcio A. R., and Luciene S. Delazari. 2011. “User Testing with Tools for 3D Visual Navigation.” In Proceedings of the 25th International Cartographic Conference, edited by Anne Ruas, CO-006. http://icaci.org/files/documents/ICC_proceedings/ICC2011/Oral%20Presentations%20PDF/A3-Visualisation%20efficiency/CO-006.pdf.

Schobesberger, David, and Tom Patterson. 2007. “Evaluating the Effectiveness of 2D vs. 3D Trailhead Maps.” In Mountain Mapping and Visualisation: Proceedings of the 6th ICA Mountain Cartography Workshop, edited by Lorenz Hurni and Karel Kriz, 201–205. Zürich: ETH Zürich. http://www.mountaincartography.org/publications/papers/papers_lenk_08/schobesberger.pdf.

Shepherd, Ifan D. H. 2008. “Travails in the Third Dimension: A Critical Evaluation of Three Dimensional Geographical Visualization.” In Geographic Visualization: Concepts, Tools and Applications, edited by Martin Dodge, Mary McDerby, and Martin Turner, 199–222. Hoboken, NJ: Wiley.

Slocum, Terry A., Connie Blok, Bin Jiang, Alexandra Koussoulakou, Daniel R. Montello, Sven Fuhrmann, and Nicolas R. Hedley. 2001. “Cognitive and Usability Issues in Geovisualization.” Cartography and Geographic Information Science 28 (1): 61–75. doi: 10.1559/152304001782173998.

Špriňarová, Kateřina, Vojtěch Juřík, Čeněk Šašinka, Lukáš Herman, Zbyněk Štěrba, Zdeněk Stachoň, Jiří Chmelík, and Barbora Kozlíková. 2015. “Human-Computer Interaction in Real 3D and Pseudo-3D Cartographic Visualization: A Comparative Study.” In Cartography - Maps Connecting the World, edited by Claudia Robbi Sluter, Carla Bernadete Madureira Cruz, and Paulo Márcio Leal de Menezes, 59–74. Berlin, Heidelberg: Springer. doi: 10.1007/978-3-319-17738-0_5.

Sung, Raymond C. W., James M. Ritchie, Graham Robinson, Philip N. Day, Jonathan R. Corney, and Theodore Lim. 2009. “Automated Design Process Modelling and Analysis Using Immersive Virtual Reality.” Computer-Aided Design 41 (12): 1082–1094. doi: 10.1016/j.cad.2009.09.006.

Torres, Jordi, Maria Ten, Jesus Zarzoso, Leonardo Salom, Rafa Gaitan, and Javier Lluch. 2013. “Comparative Study of Stereoscopic Techniques Applied to a Virtual Globe.” Cartographic Journal 50 (4): 369–375. doi: 10.1179/1743277413Y.0000000034.

Treves, Richard, Paolo Viterbo, and Muki Haklay. 2015. “Footprints in the Sky: Using Student Tracklogs from a ‘Bird’s Eye View’ Virtual Field Trip to Enhance Learning.” Journal of Geography in Higher Education 39 (1): 97–110. doi: 10.1080/03098265.2014.1003798.

Voženílek, Vít. 2005. Cartography for GIS: Geovisualization and Map Communication, 1st Edition. Olomouc, Czech Republic: Palacký University.

Ware, Colin, and Matthew D. Plumlee. 2005. “3D Geovisualization and the Structure of Visual Space.” In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 567–576. New York: Elsevier.

Wilkening, Jan, and Sara Irina Fabrikant. 2013. “How Users Interact with a 3D Geo-browser under Time Pressure.” Cartography and Geographic Information Science 40 (1): 40–52. doi: 10.1080/15230406.2013.762140.

Wood, Jo, Sabine Kirschenbauer, Jürgen Döllner, Adriano Lopes, and Lars Bodum. 2005. “Using 3D Visualization.” In Exploring Geovisualization, edited by Jason Dykes, Alan M. MacEachren, and Menno-Jan Kraak, 295–312. New York: Elsevier.

X3DOM. 2018. “X3DOM Instant 3D the HTML way! – Examples.” Accessed January 20, 2018. https://www.x3dom.org/examples.

Zanbaka, Catherine A., Benjamin C. Lok, Sabarish V. Babu, Amy C. Ulinsky, and Larry F. Hodges. 2005. “Comparison of Path Visualizations and Cognitive Measures Relative to Travel Technique in a Virtual Environment.” IEEE Transactions on Visualization and Computer Graphics 11 (6): 694–705. doi: 10.1109/TVCG.2005.92.

APPENDIX 1

PILOT TEST 1

A relatively homogeneous group of participants with previous 3D spatial visualization experience was chosen for the pilot study. Participants were students at the Department of Geography at Masaryk University. All had obtained at least a Bachelor’s degree in cartography, and all had taken the course “3D Visualization in Cartography” at the time of testing. All participants were tested simultaneously, in a computer room with an appropriate number of computers, to keep experimental conditions uniform across participants.

Figure 6. Terrain models used as stimuli in Pilot Test 1.

All participants successfully completed the first pilot test. A follow-up paper questionnaire was used to identify possible bugs or errors in the 3DmoveR proof-of-concept version. Most participants reported no failures or bugs. Two participants highlighted collisions with the terrain as a possible problem when moving virtually through 3D space, and one had problems with the zooming speed during a task.

Table 1. Design of and participants in Pilot Test 1

The second stage of development saw two major changes. The CSV file structure was modified because precise analysis of virtual movements also required storing the end of each individual action (time, position, and virtual camera orientation). Besides the CSV files with detailed movement records, additional CSV files containing user responses (effectiveness) and speed (efficiency) were stored for each task.

PILOT TEST 2

Eleven attendees of the “European Researchers’ Night” event participated in the second experiment. Testing took place over one afternoon and evening on a single PC, so experimental conditions, including environmental aspects, were equivalent for all participants. Participants were rewarded with small gifts at the end of testing.

Figure 7. Terrain models used as stimuli in Pilot Test 2.

The second pilot test involved the general public, in order to examine the user-friendliness of 3DmoveR. We assumed that users with less experience of interactive 3D maps would have more problems controlling the application. Participants were monitored by direct observation and, after the test, were asked about potential problems, bugs, or errors. No problems were reported, and all participants completed the second pilot test. The data obtained were used to design and verify processing procedures, evaluation, and visualization methods. For example, the CSV file structure was reviewed and the size of the files was evaluated. Their size depends on response time and the intensity of interaction: about 30 seconds of response time corresponds to roughly 560 rows and an 83 kB file, while one minute corresponds to roughly 1,600 rows and 235 kB. Herman and Stachoň (2016) present preliminary results of this stage.

Table 2. Design of and participants in Pilot Test 2. Task 3 from Pilot Test 1 and Task 3 from Pilot Test 2 were the same. The same terrain and distribution of objects were used.

APPENDIX 2

Testing Scene 1


Testing Scene 2


Testing Scene 3


Testing Scene 4


Testing Scene 5

APPENDIX 3

Task 1


Task 2


Task 3


Task 4


Video 3. A video of the 3DmoveR testing scenes.