DOI: 10.14714/CP102.1821

© by the author(s). This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0.

Cognitively Congruent Color Palettes for Mapping Spatial Emotional Data: Matching Colors to Emotions

Andrei Kushkin, Texas State University | andrei.v.kushkin@gmail.com

Alberto Giordano, Texas State University | a.giordano@txstate.edu

Amy Griffin (she/her), RMIT University | amy.griffin@rmit.edu

Alexander Savelyev, Independent Researcher | savelyev137@gmail.com

Emotions are touchstones of humans’ everyday life experiences. Maps of emotions inform a variety of research from urban planning and disaster response to marketing studies. Emotions are most often shown on maps with colors. Previous research suggests that humans have subjective associations between colors and emotions that impact objective task performance. Thus, a mismatch between the emotion associated with a color and the emotion it represents may bias the viewer’s attention, perception, and understanding of the map. There are no guidelines that can help cartographers and designers choose matching colors to display spatial emotional data. This study aimed to address this gap by suggesting cognitively congruent color palettes—color sets matched to emotions in a way that is aligned with color-emotion associations.

To obtain the set of candidate congruent colors and identify appropriate color-to-emotion assignments, two user experiments were conducted with participants in the United States. In the first, participants picked a representative color for 23 discrete emotions. In the second experiment, for each candidate color from a set derived from the results of the first experiment, participants selected the best-matching emotions. The probability of the emotion being selected served as a measure of how representative the color is of that emotion. Due to the many-to-many nature of associations between colors and emotions, suitable color choices were incorporated into a dynamic palette generation tool. This tool solves the color assignment problem and produces a suitable color palette depending on the combination of selected emotions.

KEYWORDS: cartography; emotions; color; mapping emotions; cognitive congruence; color palettes

INTRODUCTION

PROBLEM

Emotions are inherent to every human being and play a significant role in our life experiences, social interactions, and well-being. Psychological research provides evidence that emotions can impact our cognition and behavior and affect attention, memory, action, and decision-making (Coppin and Sander 2016). Thematic cartography has a long history of mapping different geographic, economic, and social phenomena, including visible and intangible features. Nevertheless, one of the defining characteristics of every human being—emotions—has only recently gained the attention of cartographic researchers (Griffin and McQuoid 2012; Caquard and Cartwright 2014; Caquard and Griffin 2018).

Advances in technology have made it possible to automate the collection of spatial emotional data by, for example, extracting location information from social media posts and inferring emotional data from their contents. The growing amount of spatial emotional data provides new opportunities to investigate human relationships and experiences with a place. Emotional maps are gaining popularity and have already been employed in various research areas such as tourism (Kim and Fesenmaier 2015; Mody, Willis, and Kerstein 2009), navigation (Gartner 2012; Huang et al. 2014), urban safety and planning (Pánek, Pászto, and Marek 2017; Pánek and Benediktsson 2017; Resch et al. 2015; Zeile et al. 2015), natural disaster studies (Caragea et al. 2014; Lu et al. 2015), and business intelligence (Hao et al. 2013). Social scientists use emotional maps to investigate the relationships between ethnic communities within a city and to study perceived levels of comfort and fear (Curtis et al. 2014; Matei, Ball-Rokeach, and Qiu 2001). Cultural geographers build maps of grief to provide insights into relational spaces and therapeutic environments (Maddrell 2016) and maps of happiness to learn how happiness levels correlate with demographic characteristics (Mitchell et al. 2013).

Color is the visual variable that is most often used for showing emotions on maps. For example, point symbols are placed over a base map, with different colors standing for different experienced emotions or sentiments (Caragea et al. 2014; Lu et al. 2015; Mitchell et al. 2013). Colors are also used to represent emotions in non-spatial visualizations, like psychological self-report probes (Sacharin, Schlegel, and Scherer 2012) or interactive charts of emotional response taxonomies (Cowen et al. 2021). Usually, authors use categorical color palettes with randomly assigned colors or design their own color schemes based on their subjective understanding of what color is most suitable to show each emotion. For example, a typical map of spatial emotional data (Figure 1) uses a color wheel scheme to represent eight types of emotion (Meenar, Flamm, and Keenan 2019). The use of color to show emotions, both within cartography (Griffin and McQuoid 2012; Caquard and Griffin 2018) and within data visualization generally (Lin et al. 2013; Setlur and Stone 2015), makes consideration of the colors used to display emotional data an important aspect of map design.

Figure 1. Map by Meenar, Flamm, and Keenan (2019) presents the city as an emotional space. Points of interest are mapped as petals of a graphical flower—each petal represents one emotion—and routes are mapped as one or more colored lines representing different emotions. Licensed under CC BY 4.0.


It is well known that colors have strong psychological effects. Psychological research suggests that humans have subjective associations between colors and abstract notions, including emotions (D’Andrade and Egan 1974; Hemphill 1996; Mohammad 2013). These associations can affect user performance even when color is not task-relevant (Goodhew and Kidd 2020; Lin et al. 2013). The results of empirical color research provide evidence that different dimensions of color (hue, saturation, and lightness) influence the emotional responses of the viewer and that affective connotations of color should be considered in map design (Anderson and Robinson 2021; Suk and Irtel 2010; Bartram, Patra, and Stone 2017). It is also recognized that choosing an appropriate color palette for a particular dataset is not just a matter of choosing a visually attractive representation. When mismanaged, the use of color can lead to an impaired reaction to the visual stimuli and thus cause user confusion and hinder visual data analysis (Schloss et al. 2018; Silva, Santos, and Madeira 2011). At the same time, interpreting color meaning becomes easier when colors assigned to concepts in visualizations match people’s expectations (Lin et al. 2013; Schloss et al. 2018; Setlur and Stone 2015). That research suggests that semantically-resonant color palettes provide significant performance benefits in data reading tasks.

The lack of universal, transferable map design guidelines for different mapping contexts is considered one of the main problems of modern cartography (Griffin et al. 2017). Silva, Santos, and Madeira (2011) outlined the need for knowledge and guidelines for the use of color in data visualization. This lack of map design guidelines is particularly pressing for emotional cartography. For example, the latest editions of GIS Cartography: A Guide to Effective Map Design (Peterson 2020) and Thematic Cartography and Geovisualization (Slocum et al. 2022) provide suggestions on mapping features such as elevation, climate, water bodies, geology, and hazards, but make no mention of mapping emotional data. Dent, Torguson, and Hodler (2008) touch on the connotative meanings of color and suggest possible connections between color and various notions including emotions. These authors also emphasize that further cartographic research on color meaning is needed to inform practical map design applications. Despite the large body of literature on color palette design and optimization, there are no guidelines for choosing colors for mapping emotions. Using default palettes from GIS and design software or palettes generated by cartographic color-picking tools to show emotional data on maps may lead to a conflict with subliminal associations between colors and emotions. In other words, it can cause the conceptual equivalent of the Stroop effect, hindering visual data analysis (MacLeod 1991; Stroop 1935). Conversely, showing emotions on a map using a cognitively congruent color palette, where colors are matched to emotions in a way that is aligned with human associations, has the potential to improve semantic coherence and reduce the cognitive load of using the map.

PURPOSE

The purpose of the present study was to address the lack of map design guidelines by identifying appropriate color choices for showing emotional data on maps. To this end, we have tried to discover a set of cognitively congruent colors for emotional data. Designing cognitively congruent color palettes requires the estimation of human color-concept associations. Thus, our first objective was to identify colors that are associated with each of the selected emotions. Schloss et al. (2018) suggest that there is no one-to-one correspondence between colors and meanings and that people interpret color-coding systems based on the simultaneous association strengths between all presented objects and colors. Given this, the second objective of the study was to assess the interpretability of the colors associated with particular emotions and solve the color-to-emotion assignment problem to maximize the interpretability of all colors in the set.

This research contributes to the literature on categorical colormap design (Lee, Sips, and Seidel 2013; Lin et al. 2013; Schloss et al. 2018; Brewer 1994), to studies of color-emotion associations (Demir 2020; Hanada 2018; Jonauskaite et al. 2020; Fugate and Franco 2019), and to the general body of emotional mapping research (Griffin and McQuoid 2012; Caquard and Griffin 2018).

STUDY OVERVIEW

There are different approaches to color palette design based on color-concept associations (Lin et al. 2013; Rathore et al. 2020; Schloss et al. 2018; Setlur and Stone 2015), but they generally involve two steps: quantifying color-concept associations, and assigning colors to concepts, using the associations from step one. We use the same approach in this research, as these steps are well aligned with research objectives 1 and 2, mentioned above.

A direct and reliable way of estimating human color-concept associations is by human judgments. Such user studies usually involve rating the strength of association between colors and concepts (Schloss et al. 2018), selecting colors that fit concepts best (D’Andrade and Egan 1974; Ou et al. 2004), or naming concepts associated with colors (Demir 2020; Hanada 2018). There is an alternative approach of automatically deriving human color-concept associations from large, user-generated datasets like tagged images (Hauthal and Burghardt 2013; Rathore et al. 2020) or textual data (Bostan and Klinger 2018; Mohammad 2016). Despite the advantages of automation and the use of publicly available data, this approach is computationally intensive and still requires manual data annotation for training the algorithm. As we were limited in computational and time resources in this study, the connection between emotions and colors was established by collecting human judgments in a user experiment. In this experiment, participants picked a color for each emotion in a list from a continuous, perceptually uniform color space.

There are several theories and multiple taxonomies of emotions, which can be generally divided into two major groups: discrete and dimensional emotion theories (Barrett 2017; Gerrig and Zimbardo 2008; Hamann 2012; Sander 2013). Discrete emotion theory suggests that there are distinct emotions that people can experience and identify. Dimensional theories conceptualize emotions as combinations of several fundamental factors or dimensions (Sander 2013). The question of whether emotions are better conceptualized in terms of discrete categories or underlying dimensions has been much debated in the psychological literature and a consensus has not been reached (Hamann 2012; Harmon-Jones, Harmon-Jones, and Summerell 2017; Barrett 1998). Research on the association of color and emotions typically employs the model of discrete emotions, which we also follow by selecting 23 discrete emotions based on established emotion classification models derived from the literature (Plutchik 2001; Scherer 2005; Scherer et al. 2013; Kim and Fesenmaier 2015; Keltner et al. 2016; Cowen and Keltner 2017; Cowen, Elfenbein, et al. 2019; Demszky et al. 2020; Cowen and Keltner 2020).

To understand how reliably each color is interpreted as representing a particular emotion, a second user experiment was conducted. During this experiment, we asked participants to solve the task backwards and match each color to the emotion(s) they thought it represented. The colors used in Experiment 2 were the congruent color candidates defined during Experiment 1. Based on the results of the two experiments, a final set of cognitively congruent colors was defined, where each color-emotion pair had a value showing how well they matched. In alignment with the previous research, color-to-emotion associations followed a many-to-many relationship. Thus, color assignment can differ depending on the number and combination of emotions in a palette. To automate the process of assigning colors for each possible set of emotions, we designed an interactive tool that generates cognitively congruent color palettes depending on the selected emotions to maximize the interpretability of all colors across the set.
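The assignment step that the palette generation tool automates can be framed as a linear assignment problem. The sketch below illustrates the idea under stated assumptions: the emotion names, color names, and association scores are all hypothetical, and it uses SciPy's Hungarian-algorithm solver, which may differ from the tool's actual implementation.

```python
# Sketch: assign colors to emotions so that the total association
# strength across the palette is maximized. All scores and names here
# are hypothetical illustrations, not the study's empirical values.
import numpy as np
from scipy.optimize import linear_sum_assignment

emotions = ["anger", "joy", "sadness"]
colors = ["dark red", "bright yellow", "dark blue"]

# Rows: emotions; columns: colors; higher = stronger association.
strength = np.array([
    [0.9, 0.1, 0.2],   # anger
    [0.2, 0.8, 0.1],   # joy
    [0.1, 0.1, 0.7],   # sadness
])

# linear_sum_assignment minimizes cost by default; maximize=True
# flips it to find the highest-scoring one-to-one assignment.
rows, cols = linear_sum_assignment(strength, maximize=True)
palette = {emotions[r]: colors[c] for r, c in zip(rows, cols)}
print(palette)
# {'anger': 'dark red', 'joy': 'bright yellow', 'sadness': 'dark blue'}
```

Because the solver considers all pairs simultaneously, a color that is the second-best match for one emotion can still be assigned to it when a competing emotion claims the best match, which mirrors the many-to-many nature of the associations.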

METHODOLOGY

OVERVIEW

This study is based on two user experiments and used a quantitative methodological approach. Each experiment was a separate online user study that followed a within-subjects design. The experiments were conducted consecutively, with Experiment 2 built on the results of Experiment 1. Participants for each user study were recruited separately using an online crowdsourcing platform.

The use of crowdsourcing platforms for behavioral data collection is common in social science research and has been successfully implemented in color and emotion-related research (Christen, Brugger, and Fabrikant 2021; Cowen, Elfenbein, et al. 2019; Mohammad 2013). Heer and Bostock (2010) replicated existing laboratory experiments on Amazon Mechanical Turk (AMT) to demonstrate the validity of crowdsourcing for graphical perception experiments. Their crowdsourced results show higher variance but are consistent with laboratory findings. Other research outlines that crowdsourcing often lacks sufficient data quality control and should be used with caution to acquire meaningful data for behavioral research (Pe’er et al. 2022). Crowdsourcing approaches to visual perception experiments lead to a lack of control over conditions like display type, lighting, viewing angle, and distance. At the same time, crowdsourcing conditions more closely mimic real-world data visualization scenarios (Heer and Bostock 2010). Based on the comparison of different crowdsourcing platforms, it appears that Prolific outperforms other competitors, including AMT, in terms of data quality and cost per observation (Gupta, Rigotti, and Wilson 2021; Hulland and Miller 2018; Pe'er et al. 2022; Sheehan 2018). Thus, Prolific was used for both user experiments in this research.

Both studies were reviewed and approved by the Texas State University Institutional Review Board (project 8076). Data collection was implemented using the Qualtrics online survey software. Only participants located in the United States, speaking English as their first language, were recruited to participate in each study, to reduce the possible impact of cultural differences on associations between colors and emotions. All participants were 18 years of age or older. Each participant participated only in one experiment of this study. To ensure that collected data were not affected by color vision impairments, participants were required to pass an online version of the Ishihara color vision test (Marey, Semary, and Mandour 2015) and to complete the survey on a laptop or desktop computer to provide sufficient screen size. Stimuli were presented to viewers on a Munsell neutral value scale N7 background to minimize the influence of simultaneous color contrast on the perceived colors.

Sample size plays an important role in testing for statistical significance. A fairly large difference between the sample means will not be statistically significant with a small sample size, and even a small difference between sample means with a very large sample size can produce a statistically significant result (Urdan 2016). Statistical power analysis can be used to determine the sample size that is necessary to detect statistical significance at a specified significance level α with a hypothesized effect size (Cohen 1992; Dean, Voss, and Draguljić 2017). In this research, the required sample size for each experiment was estimated by a priori power analysis solved for a medium effect size using the G*Power software, indicating that between 80 and 90 participants were necessary, depending on the target statistical test (Faul et al. 2007).
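As an illustration of the kind of calculation behind such a power analysis, the sketch below approximates the required sample size for a single two-sided paired t-test using the normal approximation. This is a simplified stand-in: G*Power uses exact noncentral-t calculations, and the study's 80 to 90 figure reflects other, more demanding target tests.

```python
# Approximate a priori sample size for a two-sided paired t-test via
# the normal approximation (a simplified stand-in for G*Power's exact
# noncentral-t computation).
import math
from statistics import NormalDist

def sample_size(effect_size, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Cohen's conventional "medium" effect size for a t-test is d = 0.5.
print(sample_size(0.5))  # 32 pairs
```

Larger hypothesized effects need fewer participants (d = 0.8 drops the estimate to 13 pairs), which is why the choice of a medium effect size is the conservative middle ground.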

EXPERIMENT PROCEDURE

At the beginning of each user experiment, after providing informed consent, participants took a 12-plate version of the Ishihara color vision test. Following the Ishihara test instructions (Ishihara 1974), if participants gave a correct response in at least 10 of the 12 plates, their color vision was regarded as normal, and participants proceeded to the next step of the study. Information about sex and age of the participants was downloaded from the Prolific participant database, for later assessment of the basic demographic characteristics of the sample. Experiments included training tasks and questions with known answers for additional data quality control. After the main trial, at the last step of each user experiment, there was an optional free text question asking participants to provide general feedback about the study.

EXPERIMENT 1. IDENTIFY CANDIDATES FOR CONGRUENT COLORS

METHODS

Experiment 1 aimed to identify colors associated with each of the 23 discrete emotions selected for the research. Human judgments were collected to estimate the color-emotion associations and obtain candidate cognitively congruent colors.

The set of 23 emotions includes Ekman and Friesen’s (1971; 1986) seven so-called basic emotions of anger, contempt, disgust, fear, happiness, sadness, and surprise. These seven are widely used in research and were included to make the results of this study more easily comparable with others. However, their ability to describe the spectrum of human emotional experiences is limited (Cowen, Sauter, et al. 2019), and to address this we added sixteen additional emotions: amusement, annoyance, awe, boredom, confusion, contentment, disappointment, grief, elation, embarrassment, interest, joy, pride, relief, serenity, and shame. These were taken from elsewhere in the literature; specifically we looked for those emotional concepts that were mentioned frequently and that, together, provided a wide range of different emotions (Plutchik 2001; Scherer 2005; Scherer et al. 2013; Kim and Fesenmaier 2015; Keltner et al. 2016; Cowen and Keltner 2017; Cowen, Elfenbein, et al. 2019; Demszky et al. 2020; Cowen and Keltner 2020). The list of 23 emotions selected for this research is not comprehensive and presents only a limited perspective on all possible emotional experiences. Nevertheless, the list extends prior work that focused only on the basic emotions.

Participants submitted their color judgments using a color picker that enabled them to select colors from a continuous perceptually uniform CIELuv color space (Schanda 2007). This color space, developed by the International Commission on Illumination, approximates human vision and is commonly used for applications where color is produced by emitted light, such as computer displays. CIELuv uses lightness (L) and chromatic coordinates (u and v), which can be challenging for non-expert users to understand and manipulate. To address this issue and increase the usability of the color picker, we utilized HSLuv (hsluv.org). HSLuv utilizes a modified color space that incorporates CIELuv within the dimensions of the HSL color model, which includes hue, saturation, and lightness. In Experiment 1, we combined the JavaScript implementation of HSLuv with the “d3-color” and “d3-color-difference” JavaScript modules to seamlessly convert user-selected colors between different color spaces, derive alternative color representations, and calculate color distances.

PARTICIPANTS

A total of 95 participants were recruited for Experiment 1 through the Prolific crowdsourcing web service. The general demographic characteristics of the sample were as follows: 51 females and 44 males with a mean age of 36, ranging from 19 to 76 years old. Participants were compensated with USD 1.10, which, when pro-rated for the average duration of the task, was equivalent to a USD 7.00 per hour rate.

DISPLAYS AND PROCEDURE

In Experiment 1 participants used an interactive color picker that allowed them to choose any color from a continuous color space. To ensure that participants understood how to use the color picker and were able to select a specific color, a training task (Figure 2) was included before the main trial. In this task, participants were asked to set the color of at least three out of four white rectangles to be as close as possible to the color of the sample rectangle on their left.

Figure 2. Training task in Experiment 1.


The user-selected colors were automatically compared to the target color using the CIEDE2000 version of the CIELab ΔE color distance formula. We checked several values of ΔE to select a suitable threshold value for comparing user selections with the sample colors. It appeared that a color distance of 5.5 provides a sensible level of difficulty in matching the color to the sample swatch. The color distance between the sample color and the user-selected color was calculated in real-time as the user was modifying their selected color. When it dropped below 5.5, a green checkmark indicated a successful matching of the colors. This value is consistent with the findings of Stone, Szafir, and Setlur (2014), who suggest that the minimum step in CIELab needed to make two colors visibly different is between 5 and 6. When three colors were matched, a “next” button appeared, allowing the participant to proceed to the main trial. In the main trial of Experiment 1 (Figure 3), participants selected a color for each emotion. Emotions were displayed one by one in a randomized order.
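The real-time matching check can be sketched as follows. For brevity, this sketch substitutes the simple CIE76 Euclidean distance in CIELab for the CIEDE2000 formula the study actually used; CIEDE2000 additionally weights lightness, chroma, and hue differences, so the two can disagree near the threshold.

```python
# Sketch of the training-task matching check. Uses the CIE76 Euclidean
# distance as a simplified stand-in for the CIEDE2000 formula used in
# the study.
import math

DELTA_E_THRESHOLD = 5.5  # the threshold the study found gave a
                         # sensible level of matching difficulty

def delta_e_cie76(lab1, lab2):
    # Straight-line distance between two (L, a, b) triples.
    return math.dist(lab1, lab2)

def is_match(sample_lab, selected_lab):
    # True once the user's selection is close enough to the sample;
    # in the experiment this triggered the green checkmark.
    return delta_e_cie76(sample_lab, selected_lab) < DELTA_E_THRESHOLD

print(is_match((50, 20, -30), (52, 21, -28)))  # nearby colors -> True
print(is_match((50, 20, -30), (70, -40, 10)))  # distant colors -> False
```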

Figure 3. Experiment 1, main trial.


During the color assignment trials, participants had access to the definition of each emotion, which appeared when hovering the cursor over the word. The definitions for emotion terms were obtained from the online version of the Cambridge English Dictionary (dictionary.cambridge.org/dictionary/english/). The time required to select a color for each emotion, and the total time for the whole task was recorded for data quality assessment.

At the beginning of each trial, the color picker was reset to a random color to avoid bias being introduced to the color selections by the data collection instrument. The starting color of each trial was recorded along with the final user choice to check that participants did not submit the randomly preset color as their selection. A total of seven submissions with unreasonably short completion times or where these two colors were systematically similar were excluded from the study and replaced with new participants additionally recruited on Prolific.

RESULTS

Data collected in Experiment 1 were sets of colors defined in a perceptually uniform color space that were identified by participants as associated with each emotion. Color selections from all 95 participants, as well as the detailed results of the statistical tests, are provided in supplementary materials. A subset of the reported colors is presented here in Figure 4. The distribution of selected colors was consistent with the many-to-many nature of associations between colors and emotions that has been suggested in prior research. Participants selected different colors to represent the same emotion, and similar colors were associated with different emotions. Some emotions demonstrated more uniform color associations than others. Bright and saturated colors were generally assigned to positive emotions, while negative emotions were more often associated with darker colors.

Figure 4. A subset of colors reported as associated with emotions. Each column represents one participant with nine out of ninety-five participants shown here.


The analysis of the data from Experiment 1 consisted of the following steps. First, color selections were inspected visually using interactive 3D scatterplots in the CIELab color space for all responses grouped by emotion (Figure 5). Visual inspection of these interactive charts suggested that the distributions of color choices in CIELab color space were different for different emotions.

Figure 5. 3D scatter plot of colors selected for anger in the CIELab color space.


Next, a repeated measures ANOVA test was conducted for each color dimension (L, a, b) to check that colors were not selected randomly and that there was a statistically significant difference between colors selected for different emotions. This was then followed by multiple pairwise paired t-tests to identify which emotions were significantly different in terms of their corresponding color parameters. Then cluster analysis was applied to identify the candidates for the most representative and, thus, most congruent colors for each emotion. As a result, one representative color was extracted from each cluster. Last, the strength of association with the corresponding emotion was quantified for each cognitively congruent color candidate. Based on this value, a final selection of thirty-two cognitively congruent colors was made (Table 2).

A repeated measures ANOVA was conducted to determine whether there was any effect of emotion (independent variable) on the “L” color dimension (dependent variable). The assumption of normality was checked using QQ plots that draw the correlation between the given data and the normal distribution. Outliers were identified using the box plot method and removed. The assumption of sphericity was automatically checked using Mauchly’s test during the computation of the ANOVA. The Greenhouse-Geisser sphericity correction was automatically applied to factors violating the sphericity assumption. The mean values of the “L” color dimension were statistically significantly different for at least two emotions, F(12, 411) = 33, p < 0.0001, ηg2 = 0.45. Given that the ANOVA results showed a significant difference, post hoc pairwise comparisons using paired t-tests were applied, with p-values adjusted using the Bonferroni multiple testing correction method. The results for a total of 253 t-test comparisons (provided in supplementary materials) demonstrate that the mean “L” values are significantly different for 164 pairs of emotions.
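The Bonferroni adjustment applied to the post hoc comparisons is simple enough to sketch directly: each raw p-value is multiplied by the number of comparisons (253 emotion pairs in this study) and capped at 1. The p-values below are hypothetical.

```python
# Sketch of the Bonferroni multiple-testing correction used for the
# post hoc pairwise paired t-tests. Raw p-values here are hypothetical.
def bonferroni(p_values):
    m = len(p_values)  # number of comparisons (253 in the study)
    return [min(1.0, p * m) for p in p_values]

raw = [0.001, 0.02, 0.6]
print(bonferroni(raw))  # approximately [0.003, 0.06, 1.0]
```

The correction is deliberately conservative: with 253 comparisons, a raw p-value must fall below about 0.0002 to remain significant at α = 0.05 after adjustment.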

Repeated measures ANOVA for the “a” and “b” color dimensions as the dependent variables followed the same procedure as did the analysis for the “L” color dimension. The mean values of the “a” color dimension were significantly different for at least two emotions, F(11, 387) = 8, p < 0.0001, ηg2 = 0.19. Post hoc pairwise t-test comparisons demonstrate that the mean “a” values are significantly different for 87 out of 253 pairs of emotions. The mean values of the “b” color dimension were statistically significantly different between at least two emotions, F(11, 389) = 9, p < 0.0001, ηg2 = 0.19. Post hoc pairwise t-test comparisons demonstrate that the mean “b” values are significantly different for 91 out of 253 pairs of emotions.

Cluster analysis was applied to organize color choices for each emotion into sensible groupings. This approach follows the method of Setlur and Stone (2015), who applied k-means clustering to quantize input colors into visually discriminable clusters using CIELuv Euclidean distance. Since there are thousands of clustering algorithms and none of them has been shown to outperform the others (Jain 2010), we tested different algorithms with varying parameters to see which produced more meaningful results. A simple k-means clustering and two density-based spatial clustering algorithms, DBSCAN and OPTICS, were used (Ester et al. 1996; Ankerst et al. 1999). Density-based algorithms proved to be more suitable for this study as such algorithms perform better with irregularly shaped clusters of varying density (Duan et al. 2007; Liu et al. 2012). Both density-based clustering algorithms required manual fine-tuning of their parameters for the best performance.

The cluster analysis was implemented using “Scikit-Learn,” a free machine learning library for the Python programming language (Kramer 2016; scikit-learn.org). An interactive 3D scatterplot was produced for each algorithm, where each point is assigned to a color-coded cluster (Figure 6). These scatterplots were then visually inspected, and the one with more meaningful clusters was selected for further analysis. The results of DBSCAN were used for 13 emotions, and the clusters for the remaining 10 emotions were obtained with OPTICS.
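The clustering step can be sketched with Scikit-Learn's DBSCAN on a small hypothetical set of CIELab color selections; the eps and min_samples values below are illustrative only, since the study tuned the parameters per emotion and also used OPTICS for some emotions.

```python
# Sketch of density-based clustering of one emotion's color selections
# in CIELab space. The data and parameters are illustrative; the study
# fine-tuned DBSCAN/OPTICS parameters per emotion.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical (L, a, b) selections: two tight groups plus one outlier.
colors_lab = np.array([
    [40, 60, 45], [42, 58, 44], [41, 61, 46],   # dark-red group
    [35, 5, -40], [36, 6, -42], [34, 4, -41],   # dark-blue group
    [80, -70, 70],                              # isolated selection
])

# eps: neighborhood radius; min_samples: density threshold.
labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(colors_lab)
print(labels)  # [0 0 0 1 1 1 -1]; the label -1 marks noise points
```

Points labeled -1 are treated as noise rather than forced into a cluster, which is one reason density-based methods suited the irregular, varying-density point clouds here better than k-means.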

Figure 6. 3D scatter plot of classified dots for anger. The values identify the different clusters.


After finishing the cluster analysis for each emotion, one candidate congruent color was extracted from each identified cluster with a geometric median algorithm described by Vardi and Zhang (2000). The position of each extracted candidate color was inspected using another series of interactive 3D scatterplots to make sure it was located inside the corresponding cluster (Figure 7).
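The extraction step can be sketched with the basic Weiszfeld iteration that underlies the Vardi and Zhang (2000) method. Note this is a simplified sketch: their algorithm adds a principled modification for iterates that land exactly on a data point, which the code below merely sidesteps with a small perturbation.

```python
# Minimal Weiszfeld-style iteration for the geometric median of a
# cluster of (L, a, b) points. Vardi and Zhang (2000) extend this
# basic scheme to handle iterates that coincide with a data point;
# this sketch simply clamps the distance in that degenerate case.
import math

def geometric_median(points, tol=1e-7, max_iter=1000):
    # Start from the centroid.
    y = [sum(c) / len(points) for c in zip(*points)]
    for _ in range(max_iter):
        dists = [max(math.dist(y, p), 1e-12) for p in points]
        weights = [1 / d for d in dists]
        total = sum(weights)
        # Each step is a distance-weighted average of the points.
        y_new = [sum(w * p[i] for w, p in zip(weights, points)) / total
                 for i in range(len(y))]
        if math.dist(y, y_new) < tol:
            return y_new
        y = y_new
    return y

# For symmetric input the geometric median coincides with the centroid.
pts = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0)]
m = geometric_median(pts)
print([round(c, 3) for c in m])  # [5.0, 5.0, 0.0]
```

Unlike the centroid, the geometric median is robust to stragglers at a cluster's edge, so the extracted candidate color sits inside the dense core of each cluster.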

Figure 7. 3D scatterplot with classified dots and candidate colors for anger. The large dots show the actual color represented by the median point of each cluster.


Since clusters varied by the number of color points, size, and shape, it was necessary to quantify the degree of association between an extracted candidate color and the corresponding emotion. This congruency rating (r) was calculated as the ratio of the number of points in the cluster (n) to the median distance (d) from the color points to the geometric median of that cluster (r = n ÷ d). A candidate color coming from a cluster with more points placed closer to each other will have a higher rating than a candidate color from a cluster with fewer points or with the points being farther away from each other. For clusters where colors are very close or identical, the median distance will be close to zero, leading to an infinite congruency rating.
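The rating reduces to a few lines of code; the clusters and the geometric medians below are hypothetical.

```python
# Congruency rating: r = n / (median distance to the cluster's
# geometric median). Large, tight clusters score higher than small or
# diffuse ones. Example clusters here are hypothetical.
import math
from statistics import median

def congruency_rating(cluster_points, geo_median):
    dists = [math.dist(p, geo_median) for p in cluster_points]
    med = median(dists)
    # Identical colors give a zero median distance -> infinite rating.
    return math.inf if med == 0 else len(cluster_points) / med

# A tight 4-point cluster outscores a diffuse one of the same size.
tight = [(50, 20, 20), (51, 20, 20), (50, 21, 20), (50, 20, 21)]
diffuse = [(50, 20, 20), (60, 20, 20), (50, 35, 20), (50, 20, 40)]
print(congruency_rating(tight, (50, 20, 20)))    # 4 / 1.0  = 4.0
print(congruency_rating(diffuse, (50, 20, 20)))  # 4 / 12.5 = 0.32
```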

The total number of cognitively congruent color candidates was 100 (Table 1). Some colors identified as congruent for different emotions turned out to be very similar to each other. Similar colors less than 5 ΔE apart were aggregated to a single color using the geometric median to improve the discriminability of colors in the complete set and to minimize the variability in brightness and saturation among the candidate colors because this is useful for qualitative color schemes. The remaining set of colors was reduced further by selecting only colors with the highest congruency ratings while preserving as much difference in hue as possible. The resulting set of 32 congruent color candidates (Table 2) was then tested in Experiment 2 to estimate the interpretability of each color.
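The aggregation of near-identical candidates might be sketched as a greedy merge. Two simplifications relative to the paper: plain Euclidean distance in CIELab (CIE76 ΔE) stands in for whatever ΔE formula the authors used, and merged groups are replaced by their mean rather than the geometric median:

```python
import numpy as np

def merge_similar(colors, threshold=5.0):
    """Greedy sketch: Lab colors closer than `threshold` delta E (CIE76,
    i.e., plain Euclidean distance in CIELab) are merged into one color.
    The mean is used here for brevity; the study used the geometric median."""
    remaining = [np.asarray(c, dtype=float) for c in colors]
    merged = []
    while remaining:
        base = remaining.pop(0)
        group, keep = [base], []
        for c in remaining:
            (group if np.linalg.norm(c - base) < threshold else keep).append(c)
        remaining = keep
        merged.append(np.mean(group, axis=0))
    return merged

# Two near-identical reds and one distant light color -> two colors remain.
colors = [[40, 60, 45], [41, 61, 46], [80, -5, 10]]
print(len(merge_similar(colors)))  # → 2
```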

Table 1. Congruency ratings of cognitively congruent color candidates based on the cluster analysis. Emotions and colors are grouped by similarity. An infinite score for anger indicates a cluster of identical colors.

Table 2. The final set of cognitively congruent color candidates.

DISCUSSION

Our review of the color-association literature suggested that some emotions would have more consistent and distinct color selections than others; there would be stronger similarity among the colors associated with similar emotions than among those associated with dissimilar emotions; and that there would be some variability in color-emotion assignments, but the colors would not be selected entirely at random (Fugate and Franco 2019; Demir 2020; Gilbert, Fridlund, and Lucchina 2016; Schloss et al. 2018).

The results of Experiment 1 support the findings of prior studies. For some emotions, like anger, happiness, and disgust, participants demonstrated more consistent color selections, while for the others, like awe, confusion, and surprise, color choices show higher variability (Figure 4). Colors selected for positive emotions are generally brighter and more saturated than colors picked for negative emotions.

Despite the variability in color selection and similar colors being chosen to represent different emotions, the overall distribution of color choices does not appear random. This conclusion is supported by the results of ANOVA comparisons conducted for each color dimension of the CIELab color model. The results showed that, among the 23 tested emotions, at least two were significantly different from each other on each color dimension (p < 0.0001). According to Cohen (1988), the reported ηg2 of 0.19 for “a” and “b,” and 0.45 for “L” indicate a large effect size. According to the follow-up t-tests of all possible 253 pairs of emotions (provided in supplementary materials), only 39 of the pairs did not differ significantly on at least one color dimension. Emotion pairs that did not show a significant difference consisted mainly of similar emotions like sadness-grief and joy-surprise.

However, a few pairs included dissimilar emotions. For example, the pair embarrassment-pride did not demonstrate a significant difference in any of the color dimensions. This could happen because the distribution of color choices for these emotions in CIELab space produced similar mean values of color dimensions, even though the shapes of the distributions were different (a link to all 3D scatter plots is provided in the supplementary materials). Alternatively, these might be cases of type II error arising from the multiple comparisons. In other words, the emotion pairs might in fact be significantly different, but the statistical test failed to detect this difference. Overall, the results of the statistical tests for the data collected in Experiment 1 could be considered to provide strong evidence that there is a relationship between colors and emotions, and it is possible to characterize emotions by assigning each one a unique, specific color.

Color selections obtained in Experiment 1 are well aligned with those previously reported in the literature. In particular, they are very similar to the color-emotion associations presented by Fugate and Franco (2019) and Gilbert, Fridlund, and Lucchina (2016). For example, different shades of red were a popular choice for anger, gray for boredom, and dark blue and black for sadness. Color selections from Experiment 1 also match with the general color-emotion associations summarized by Demir (2020). Our empirical data demonstrate fairly low specificity (one color being selected exclusively for a particular emotion) and consistency (only similar-looking colors being selected for an emotion), consistent with the findings of Fugate and Franco (2019).

In most previous investigations, participants were asked to indicate color-emotion associations using color swatches or color words. Because of this, the identified color-emotion associations are sometimes critiqued as having been imposed by the limited range of answer choices. Other authors have argued that the use of categorical representations of color limits our ability to identify exact color-to-emotion associations (Tham et al. 2020). For instance, many English speakers might agree that anger is associated with red, but is this association with a range of colors categorized as red or with more specific exemplars of red?

Following the methodology of Gilbert, Fridlund, and Lucchina (2016), the present study addressed the limitation of the constrained color-matching method by using an interactive color picker that allowed participants to choose any color from a perceptually uniform continuous color space. The color picker used in the current study provided controls for three color parameters, while dynamically displaying the range of available colors at the currently selected level of lightness. This provided more accurate control of the selected color than a color wheel with a single light/dark slider, the method used by Gilbert, Fridlund, and Lucchina (2016).

Even when participants were not restricted by a limited number of available choices, the obtained color-to-emotion associations aligned well with the results of previous studies. This suggests that identified color-emotion associations are not entirely task-specific or imposed by the data collection instrument. Selecting colors from a continuous color space also helped in understanding which exact color is considered more suitable for a corresponding emotion, such as which “red” is more associated with anger and which “red” is more associated with surprise. Aggregating the collected data with clustering algorithms allowed identification of colors that demonstrate more reliable associations with the corresponding emotions.

The main practical application of the outcomes of Experiment 1 for this study was to provide a basis for identifying cognitively congruent colors. The resulting color candidates still required evaluation in terms of their ability to represent corresponding emotions. At the same time our efforts and methodology in Experiment 1 could easily be extended in future research. More data can be collected for the same set of emotions to see if it is possible to refine the most congruent color choices. The same methodology can be applied to a population from a different country or using a different language to see how the color selections compare to each other, a point of particular interest given Feldman Barrett’s hypothesis that language structures emotional learning and concepts and that in the discrete emotion model, emotions are described by language (Barrett 2017). Our method and test instrument can be applied to collect data on other discrete emotions, expanding our knowledge about color-to-emotion associations in a systematic and more comparable way.

EXPERIMENT 2. QUANTIFY THE INTERPRETABILITY OF CANDIDATE CONGRUENT COLORS

METHODS

The purpose of Experiment 2 was to quantify the interpretability of the colors obtained in Experiment 1 to generate the appropriate color assignments for a given set of emotions. In other words, we wanted to see which candidate colors from Experiment 1 are more reliably interpreted as representing a particular emotion. Knowing this can inform the creation of cognitively congruent color palettes for any combination of the 23 emotions. To this end, the participants of Experiment 2 were asked to solve the task of Experiment 1 backwards and pick matching emotions for a presented color. Quantification of the color’s interpretability was based on the frequency of each emotion being selected as matching to a corresponding color.

Experiment 1 yielded a total of 32 colors (Table 2). The task of matching emotions to these colors could be formulated in two ways: the best fit for an individual color and the best fit for a set of colors. Since color-concept associations usually demonstrate many-to-many relationships (Schloss et al. 2018; Fugate and Franco 2019), different combinations of emotions would likely result in different sets of assigned colors. Some colors would be interchangeably used for different emotions. Given this, testing a single set of emotions for the best set of colors would be only representative of that particular assignment case. Testing all possible combinations that could be made from 23 emotions was not feasible. Thus, Experiment 2 was designed to estimate the best fit for each individual color.

PARTICIPANTS

A total of 99 participants were recruited for Experiment 2 through the crowdsourcing platform Prolific. The general demographic characteristics of the sample were as follows: 50 females and 49 males with a mean age of 38, ranging from 18 to 78 years old. Participants were compensated with USD 1.10, which, when pro-rated for the average duration of the task, was equivalent to a USD 9.00 per hour rate.

DISPLAY AND PROCEDURE

During Experiment 2, the participants saw all 32 colors one by one in a randomized order and selected all emotions they thought each color represented (Figure 8). Emotions and their definitions were the same as those used in Experiment 1. Emotion choices were presented in individual containers with the emotion term and a checkbox to indicate if it was selected or not. These containers were ordered alphabetically in each trial to make it easier for participants to find the emotion they wanted to select. A definition of each emotion was available to participants by hovering the cursor over the corresponding container. An additional option, “none,” was included in each trial to avoid forced replies when participants did not feel an association of the current color with any emotion. The time spent selecting emotions for each color and the total time for the whole task were recorded. A total of six submissions with unreasonably short completion times or contradicting emotions selected for the same color were excluded from the study and replaced with new participants additionally recruited on Prolific.

Figure 8. The color interpretability assessment instrument.

RESULTS

Data collected in Experiment 2 were arranged in the form of a two-way contingency table of counts for each color-emotion pair. A chi-square test of independence was used to check for the presence of a relationship between an emotion and a selected color (Hanada 2018; Lutabingwa and Auriacombe 2007; Olsen and St George 2004). It has been argued that the standard chi-square test is unsuitable for data collected with multiple-choice questions where participants select all answers that apply (Mahieu et al. 2021; Loughin and Scherer 1998). Since this was the case in Experiment 2, a multiple-response chi-square test version implemented in the R statistical software package “MultiResponseR” by Mahieu et al. (2021) was applied. It was followed by a multinomial logistic regression analysis to estimate how suitable each color was for representing an emotion. The calculated probabilities of each emotion being selected depending on the color served as a measure of interpretability.
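The shape of that analysis can be sketched in Python with a standard chi-square test of independence plus Cramér's V as effect size. Note the caveat stated above: the standard test assumes each participant contributes one observation, which is why the study used the multiple-response variant from the R package "MultiResponseR"; the counts below are invented for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = colors, columns = emotions,
# cell = number of participants selecting that emotion for that color.
counts = np.array([
    [40,  3,  5],   # color 1: strongly tied to emotion A
    [ 2, 35,  8],   # color 2: strongly tied to emotion B
    [10, 12, 30],   # color 3: leans toward emotion C
])

chi2, p, dof, expected = chi2_contingency(counts)

# Cramer's V: effect size for the color-emotion association.
n = counts.sum()
k = min(counts.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))

print(f"chi2={chi2:.1f}, p={p:.2e}, V={cramers_v:.2f}")
```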

The results of the chi-square test (χ2 = 6981, p = 0.0005, and effect size Cramér’s V = 0.22) indicated a relationship between at least one color-emotion pair. In addition to the chi-square test, the “MultiResponseR” package allows determining the significance of associations between each pair of the tested variables by conducting multiple-response hypergeometric tests per cell. In particular, it showed for a given color-emotion pair whether this emotion was cited for this color in a proportion that differs significantly from the overall average citation proportion for this emotion in all colors combined (Mahieu et al. 2021). The detailed results of the hypergeometric tests per cell and multinomial logistic regression are provided in the supplementary materials.

DISCUSSION

As proposed by Schloss et al. (2018), people interpret color-coding systems by solving a decoding assignment problem. They make inferences about how colors are mapped onto concepts. Given this, Experiment 2 aimed at testing the cognitively congruent color candidates from Experiment 1 in terms of their interpretability as corresponding to a particular emotion. A statistically significant relationship between the color and emotions selected as represented by that color was expected. Hypothetically, the probability of an emotion being selected as matching to a color should be different depending on the strength of association between that color-emotion pair. These probabilities were calculated and served as interpretability ratings, with higher values meaning that this color is more reliably identified as showing a particular emotion.

A chi-square test of independence was conducted to determine whether two categorical variables of color and emotion were likely to be related. The results suggest that the null hypothesis should be rejected, and the variables are not independent of one another. The estimated effect size indicates a large effect size or strong association between colors and emotions (Volker 2006; Cohen 1988). Thus, the color candidates used in Experiment 2 are likely to be suitable colors for creating cognitively congruent color palettes.

The probabilities of each emotion being selected depending on the color were estimated with a multinomial logistic regression. The resulting values were generally quite low. This could be explained by the total number of emotions, as the probability of 1 is divided between 23 possible outcomes. However, a pattern can still be identified in the distribution of probabilities. The emotions can be divided into three groups. First are emotions (such as anger, boredom, disgust) that have a few colors with high probabilities and very low probabilities for the rest of the colors. The second group includes emotions that demonstrate medium probabilities of similar values for multiple colors (such as happiness, joy, serenity). In the third group, emotions (like confusion, shame, embarrassment) have low probabilities for a few colors and almost zero probabilities for the rest of the colors. This might happen due to the nature of color-to-emotion associations, meaning that some emotions are strongly connected to one or two specific colors, while others are more “colorful” and demonstrate higher variability in associated colors. The presence of the third group may also indicate that some emotions do not have any solid or stable color associations. The observed probabilities of an emotion being selected depending on the color still follow the many-to-many kind of relationship outlined in the literature. Pairs with the highest probabilities match the top-scoring color assignments from Experiment 1 and the color choices presented by Fugate and Franco (2019) for the corresponding emotions.

LIMITATIONS

Experiment 1 had several limitations. The first one is the variability of lighting conditions and of the screens used to take the survey. This should be considered a confounding factor, introducing additional variability to the responses since different monitors can show the same colors differently, and the same color on identical screens can look different depending on the surrounding lighting. It is worth noticing that Fugate and Franco (2019) claim that participants’ judgments are not influenced by perceiving the colors differently based on the device on which they take the survey. They report that the top-indicated color across the majority of emotions was the same between the laboratory control study and the results reported from an online crowdsourcing platform. Another limitation originates in the nature of online studies. Researchers must rely on the honesty of the self-reported demographic data, and although the data from our study were examined carefully, there is no reliable way to entirely exclude low-effort or completely random submissions.

There were also some methodological limitations. First, the total number of emotions studied in Experiment 1 was 23. This is only a fraction of all existing emotional concepts, and thus, the results of Experiment 1 provide a limited view of color-emotion associations. Second, the use of only the English language is another methodological limitation. In other languages there are emotional concepts that are not present in English and vice versa. Third, the candidates for the cognitively congruent colors were determined using specific clustering algorithms with manual parameter tuning. The use of different algorithms or different parameters may have produced other colors that could be more or less congruent than those that were identified.

Finally, it is important to note that both experiments were limited to United States residents, which afforded a degree of experimental control but at the same time limits the generalizability of the results. Communities with different cultural backgrounds may have noticeable differences in color preferences and associations (Cyr, Head, and Larios 2010; Jacobs et al. 1991; Or and Wang 2014). Because of this limitation, one should be careful when extending these results to other populations in order to avoid improper color-emotion assignments. In such cases, the proposed cognitively congruent colors may serve as a starting point for making informed decisions about choosing and assigning colors to display emotions.

Experiment 2 shares the limitations described earlier for Experiment 1 and has some limitations of its own. First, when selecting emotions represented by a given color, participants did not have a way to rank the suitability of each choice. Thus, each selected emotion had the same contribution to the overall probability, which might not be the case with actual color-to-emotion associations. Including an additional weighting procedure could help to calculate more precise probabilities for each color-emotion pair and, by doing this, achieve a more optimal final color assignment. Another limitation of Experiment 2 was the total number of colors tested. Having 32 colors tested is comparable to the number of colors used in other studies, with some authors having fewer (Fugate and Franco 2019; Jonauskaite et al. 2020), and others having more (Schloss et al. 2018; Tham et al. 2020). At the same time, including other possible candidate colors may provide additional information about color-to-emotion associations and possibly reveal some other patterns that remained unnoticed in the current set of tested colors.

COLORS4EMOTIONS COLOR PALETTE GENERATOR

To turn the findings of Experiments 1 and 2 into a practically usable tool, we constructed an interactive color palette generator. Here we describe the construction of this tool, provide examples of color palettes generated by the tool, and discuss its potential use and limitations.

Quantification of color-emotion associations allows us to apply mathematical methods to solve the color assignment problem. Following the approach of Schloss et al. (2018), our tool generates suggested colors for each set of emotions by solving the color assignment problem as a linear program. Assignment problems, also known as maximum-weight matching problems, are mathematical models describing how to pair items from two categories (Kuhn 1955). For example, such models can optimally assign employees to jobs in a company, machines to tasks in a factory, and trucks to routes in a shipping network (Williams 2013). Linear programming, also called linear optimization, is a method to achieve the best outcome (such as maximizing profit or minimizing cost) in this matching process and can be used when its requirements are represented by linear relationships (Williams 2013; Schrijver 1998). The probabilities of each emotion being selected for a particular color were derived from the multinomial logistic regression model of Experiment 2 and, combined with the results of the per-cell hypergeometric tests, formed the basis for solving the color-to-emotion assignment problem. Only color-emotion pairs with probabilities that demonstrated a statistically significant relationship were included when generating the palettes.
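The core assignment step can be illustrated with a maximum-weight matching over a probability matrix; `scipy.optimize.linear_sum_assignment` is used here as a stand-in solver, and the probabilities are invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical interpretability probabilities:
# rows = selected emotions (anger, sadness, joy), cols = candidate colors.
prob = np.array([
    [0.30, 0.05, 0.02],  # anger
    [0.04, 0.25, 0.10],  # sadness
    [0.03, 0.08, 0.20],  # joy
])

# Maximum-weight matching: assign each emotion the color that maximizes
# the summed color-emotion association across the whole palette.
rows, cols = linear_sum_assignment(prob, maximize=True)

print(list(zip(rows, cols)))  # → [(0, 0), (1, 1), (2, 2)]
```

For a matrix like this, each emotion simply receives its best color; the matching formulation matters when emotions compete for the same color.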

Because different colors demonstrate a similar degree of association with multiple emotions, it is possible to create multiple combinations of congruent color assignment. Our interactive tool offers two options: an isolated and a balanced assignment of colors suggested by Schloss et al. (2018). The isolated algorithm for color-emotion assignment is straightforward and maximizes the color-emotion associations among all color-emotion pairs for the chosen emotions. The balanced algorithm mitigates conflicts due to many-to-many relationships by simultaneously maximizing the association between all paired items while minimizing the association between unpaired items. An additional optional constraint of the minimum allowed color distance between the assigned colors in CIEDE2000 ΔE units was added to the algorithm to improve the discriminability of colors assigned to different emotions. If possible, the algorithm assigns the colors to emotions ensuring the minimum distance between the colors in the suggested palette is not less than the specified value.
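The "balanced" criterion and the minimum-ΔE constraint can be sketched by brute force for small sets. This is not the tool's linear-programming formulation: the scoring rule (paired association minus unpaired association) is a simplification of the Schloss et al. (2018) idea, and plain Euclidean distance in CIELab stands in for CIEDE2000:

```python
from itertools import permutations
import numpy as np

def balanced_assignment(prob, colors_lab, min_de=0.0):
    """Brute-force sketch of a balanced color-emotion assignment:
    maximize the association of paired items minus that of unpaired
    items, subject to a minimum pairwise distance (plain Euclidean
    in CIELab here) between assigned colors. Feasible only for small
    sets; the actual tool solves a linear program instead."""
    prob = np.asarray(prob, dtype=float)
    lab = np.asarray(colors_lab, dtype=float)
    n_emo = prob.shape[0]
    best, best_score = None, -np.inf
    for cols in permutations(range(prob.shape[1]), n_emo):
        chosen = lab[list(cols)]
        # discriminability constraint among the assigned colors
        if any(np.linalg.norm(chosen[i] - chosen[j]) < min_de
               for i in range(n_emo) for j in range(i + 1, n_emo)):
            continue
        paired = sum(prob[i, c] for i, c in enumerate(cols))
        unpaired = prob[:, list(cols)].sum() - paired
        score = paired - unpaired
        if score > best_score:
            best, best_score = cols, score
    return best

# Column 0 is strong for BOTH emotions; the balanced rule avoids it to
# reduce confusability, unlike a purely "isolated" maximization.
prob = [[0.30, 0.28, 0.01], [0.29, 0.02, 0.20]]
lab = [[40, 60, 45], [90, 0, 0], [20, 0, -40]]
print(balanced_assignment(prob, lab))  # → (1, 2)
```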

The color palette generator was implemented using the “PuLP” linear programming toolkit and the Python programming language (Mitchell, O’Sullivan, and Dunning 2011). It can be used to automatically generate cognitively congruent palettes for any possible combination of the 23 emotions. This script was then turned into a web app (Figure 9) that produces two cognitively congruent palettes for the selected emotions. It also displays an extended set of colors with top-scoring options for each emotion to give the users more flexibility in terms of available color choices because, depending on other aspects of the map design, cognitive congruence of colors and emotions may be only one of many design considerations. These colors are presented with the corresponding probability scores to help users manually adjust the suggested palette without reducing the overall suitability of the palette too much. The final color palette for emotional data is expected to be a color-coding system that is easier for map readers to use and understand. The app is available at colors4emotions.tk.

Figure 9. Example of palettes generated by the cognitively congruent color palette generation tool.

The practical applications of the cognitively congruent color palette tool that we built based on the results of Experiment 2 are diverse. It may be helpful to cartographers who need to choose colors for mapping emotions, for designers who need to color-code emotions in their visualizations, or for scientists who develop stimuli or measurement instruments that may benefit from using cognitively congruent colors. It should be noted that the tool does not consider lightness/saturation differences when producing a palette. Taking this into account could be a way to build upon the conducted research and would help generate color sets without some colors being noticeably brighter or darker than the others.

CONCLUSION

This study builds upon and extends existing knowledge about color-emotion associations in the domains of psychology, cartography, and data visualization. It provides much-needed empirically-based guidelines for the informed use of color and for the design of more effective visual representations of spatial emotional data that facilitate comprehension and analysis of this information (Silva, Santos, and Madeira 2011). We aimed to solve a pragmatic problem of identifying the cognitively congruent colors for suitably displaying emotional data on maps. The congruent colors were defined as matching subliminal color-emotion associations. To identify these associations, we conducted a user experiment where participants chose colors that represented each emotion. Color candidates for each emotion were calculated as geometric medians of clusters in the reported colors plotted in the CIELab color space. The interpretability of each congruent color candidate was quantified with another user experiment.

Given the many-to-many nature of the relationship between colors and emotions, the congruent color for an emotion will need to differ, depending on the combination of emotions. The color assignment problem was solved mathematically, using the linear programming approach. This solution was implemented as a web-app that generates cognitively congruent color palettes for the selected emotions. It is expected that the use of congruent colors will provide advantages for user task performance, will reduce the perceived difficulty of the tasks as compared to when undertaken with non-congruent colors, and will probably influence decisions users make with the emotional data.

This research did not try to identify whether there are any universal color-emotion associations. Indeed, some psychologists have suggested it’s unlikely that universal emotions even exist (Barrett 2017), much less universal color-emotion associations. Investigation of individual or cultural differences and understanding the underlying mechanisms and patterns of color-emotion associations were outside of the scope of the present research. Possible differences in color-emotion associations between male and female participants or between younger and older participants were not considered. Two primary contributions were made: (1) an empirically derived set of cognitively congruent colors for 23 emotions and (2) an interactive web-app tool that suggests cognitively congruent color palettes for emotional data, which can serve as a guideline and starting point for researchers, designers, and cartographers who need to create effective visualizations of emotions.

By estimating the associations between colors and a set of discrete emotion concepts, this study mainly contributes to our knowledge of color-emotion associations and the emotional mapping branch of thematic cartography. The presented findings can be important both for academic and commercial contexts. The literature outlines that color-concept associations should be considered when designing color-coding systems for categorical data. The application of this idea to emotional mapping is a useful contribution to existing knowledge because maps of emotions are valuable tools for studying human experience with space and place. Mapping of emotional landscapes, as advocated by human geographers and critical cartographers, makes geospatial practices more relevant to real-life decisions (Kwan 2007; Pearce 2008).

The broader impact of the outcomes of the current study is twofold. First, our tool for choosing colors for visualization of emotions may help researchers, cartographers, and designers create visualizations of emotions that put a lower cognitive load on the viewers. This could facilitate exploratory visual analysis and help emphasize and communicate the necessary information more accurately. Geographers who use emotional mapping for collecting data can use the color palette generator tool to provide the participants with color-coding systems that are easier to use. Researchers and geovisual analysts who explore big spatial datasets for extracting emotional information could benefit from data visualizations that more effectively convey information and insights from such complex data. Designers of user interfaces and human-computer interaction (HCI) specialists can use cognitively congruent palettes for emotional data in development of web-based or mobile applications. The provided palette generator tool can be used as a guideline and assist nonprofessional cartographers and people dealing with emotional data visualization in diverse disciplines such as medicine, psychology, and graphic design. It can help with color choices for making their visualizations easier to read, explore, and understand.

Second, an empirically tested cognitively congruent color set for visualizing emotions can serve as a basis for further research. As emotional mapping is a relatively new area of thematic cartography, there are no well-established design methods for showing emotions on maps. The effectiveness of different symbolization approaches could be evaluated in future work, using the provided color suggestions as a baseline for comparison. Investigation of the influence of cognitive congruence of the color palette on user performance and preference for different kinds of emotional maps (e.g., choropleth) could provide further guidance to designers and cartographers. As demonstrated by Fuest et al. (2021), differences in cartographic designs can influence user decision-making. Thus, the suggested cognitively congruent colors can be used to research the influence of symbolization on the opinions and decisions of viewers of emotional maps. This could be of special importance for maps made for and used by policymakers.

In closing, it is important to note that existing color conventions and principles of color mapping should not be ignored in favor of facilitating cognitive congruence; design considerations are always multifactorial. This study, however, advocates that connoted color meanings in general and color-emotion associations, in particular, should be among the essential design considerations in cartography and data visualization.

REFERENCES

Anderson, Cary L., and Anthony C. Robinson. 2021. “Affective Congruence in Visualization Design: Influences on Reading Categorical Maps.” IEEE Transactions on Visualization and Computer Graphics 28 (8): 2867–2878. https://doi.org/10.1109/TVCG.2021.3050118.

Ankerst, Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg Sander. 1999. “OPTICS: Ordering Points to Identify the Clustering Structure.” ACM Sigmod Record 28 (2): 49–60. https://doi.org/10.1145/304181.304187.

Barrett, Lisa Feldman. 1998. “Discrete Emotions or Dimensions? The Role of Valence Focus and Arousal Focus.” Cognition and Emotion 12 (4): 579–599. https://doi.org/10.1080/026999398379574.

———. 2017. How Emotions Are Made: The Secret Life of the Brain. Boston: Houghton Mifflin Harcourt.

Bartram, Lyn, Abhisekh Patra, and Maureen Stone. 2017. “Affective Color in Visualization.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1364–1374. New York: ACM. https://doi.org/10.1145/3025453.3026041.

Bostan, Laura-Ana-Maria, and Roman Klinger. 2018. “An Analysis of Annotated Corpora for Emotion Classification in Text.” In Proceedings of the 27th International Conference on Computational Linguistics, 2104–2119. Santa Fe, NM: Association for Computational Linguistics. https://aclanthology.org/C18-1179.

Brewer, Cynthia. 1994. “Color Use Guidelines for Mapping and Visualization.” In Visualization in Modern Cartography, edited by Alan M. MacEachren and D. R. Fraser Taylor, 123–147. Oxford, UK: Elsevier. https://doi.org/10.1016/B978-0-08-042415-6.50014-4.

Caquard, Sébastien, and William Cartwright. 2014. “Narrative Cartography: From Mapping Stories to the Narrative of Maps and Mapping.” The Cartographic Journal 51 (2): 101–106. https://doi.org/10.1179/0008704114Z.000000000130.

Caquard, Sébastien, and Amy L. Griffin. 2018. “Mapping Emotional Cartography.” Cartographic Perspectives 91: 4–16. https://doi.org/10.14714/CP91.1551.

Caragea, Cornelia, Anna Cinzia Squicciarini, Sam Stehle, Kishore Neppalli, and Andrea H. Tapia. 2014. “Mapping Moods: Geo-Mapped Sentiment Analysis during Hurricane Sandy.” In Proceedings of the 11th International ISCRAM Conference, edited by Starr Roxanne Hiltz, Linda Plotnick, Mark Pfaff, and Patrick C. Shih, 642–651. Brussels: ISCRAM.

Christen, Markus, Peter Brugger, and Sara Irina Fabrikant. 2021. “Susceptibility of Domain Experts to Color Manipulation Indicate a Need for Design Principles in Data Visualization.” PLOS ONE 16 (2): e0246479. https://doi.org/10.1371/journal.pone.0246479.

Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences, Second Edition. New York: Routledge. https://doi.org/10.4324/9780203771587.

———. 1992. “Statistical Power Analysis.” Current Directions in Psychological Science 1 (3): 98–101. https://doi.org/10.1111/1467-8721.ep10768783.

Coppin, Géraldine, and David Sander. 2016. “Theoretical Approaches to Emotion and Its Measurement.” In Emotion Measurement, edited by Herbert L. Meiselman, 3–30. Duxford, UK: Woodhead Publishing. https://doi.org/10.1016/B978-0-08-100508-8.00001-1.

Cowen, Alan, Hillary Elfenbein, Petri Laukka, and Dacher Keltner. 2019. “Mapping 24 Emotions Conveyed by Brief Human Vocalization.” American Psychologist 74 (6): 698–712. https://doi.org/10.1037/amp0000399.

Cowen, Alan, and Dacher Keltner. 2017. “Self-Report Captures 27 Distinct Categories of Emotion Bridged by Continuous Gradients.” Proceedings of the National Academy of Sciences 114 (38): E7900–7909. https://doi.org/10.1073/pnas.1702247114.

———. 2020. “What the Face Displays: Mapping 28 Emotions Conveyed by Naturalistic Expression.” American Psychologist 75 (3): 349–364. https://doi.org/10.1037/amp0000488.

Cowen, Alan, Dacher Keltner, Florian Schroff, Brendan Jou, Hartwig Adam, and Gautam Prasad. 2021. “Sixteen Facial Expressions Occur in Similar Contexts Worldwide.” Nature 589 (7841): 251–257. https://doi.org/10.1038/s41586-020-3037-7.

Cowen, Alan, Disa Sauter, Jessica L. Tracy, and Dacher Keltner. 2019. “Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression.” Psychological Science in the Public Interest 20 (1): 69–90. https://doi.org/10.1177/1529100619850176.

Curtis, Jacqueline W., Ellen Shiau, Bryce Lowery, David Sloane, Karen Hennigan, and Andrew Curtis. 2014. “The Prospects and Problems of Integrating Sketch Maps with Geographic Information Systems to Understand Environmental Perception: A Case Study of Mapping Youth Fear in Los Angeles Gang Neighborhoods.” Environment and Planning B: Planning and Design 41 (2): 251–271. https://doi.org/10.1068/b38151.

Cyr, Dianne, Milena Head, and Hector Larios. 2010. “Colour Appeal in Website Design within and across Cultures: A Multi-Method Evaluation.” International Journal of Human-Computer Studies 68 (1–2): 1–21. https://doi.org/10.1016/j.ijhcs.2009.08.005.

D’Andrade, R., and M. Egan. 1974. “The Colors of Emotion.” American Ethnologist 1 (1): 49–63. https://doi.org/10.1525/ae.1974.1.1.02a00030.

Dean, Angela, Daniel Voss, and Danel Draguljić. 2017. Design and Analysis of Experiments, Second Edition. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-52250-0.

Demir, Ümit. 2020. “Investigation of Color-Emotion Associations of the University Students.” Color Research and Application 45 (5): 871–884. https://doi.org/10.1002/col.22522.

Demszky, Dorottya, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. “GoEmotions: A Dataset of Fine-Grained Emotions.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4040–4054. Stroudsburg, PA: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.372.

Dent, Borden, Jeff Torguson, and Thomas Hodler. 2008. Cartography: Thematic Map Design, Sixth Edition. New York: McGraw-Hill Education.

Duan, Lian, Lida Xu, Feng Guo, Jun Lee, and Baopin Yan. 2007. “A Local-Density Based Spatial Clustering Algorithm with Noise.” Information Systems 32 (7): 978–986. https://doi.org/10.1016/j.is.2006.10.006.

Ekman, Paul, and Wallace V. Friesen. 1971. “Constants across Cultures in the Face and Emotion.” Journal of Personality and Social Psychology 17 (2): 124–129. https://doi.org/10.1037/h0030377.

———. 1986. “A New Pan-Cultural Facial Expression of Emotion.” Motivation and Emotion 10 (2): 159–168. https://doi.org/10.1007/BF00992253.

Ester, Martin, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. 1996. “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise.” In Proceedings of 2nd International Conference on Knowledge Discovery and Data Mining, edited by Evangelos Simoudis, Jiawei Han, Usama Fayyad, 226–231. Washington, DC: AAAI Press.

Faul, Franz, Edgar Erdfelder, Albert-Georg Lang, and Axel Buchner. 2007. “G*Power 3: A Flexible Statistical Power Analysis Program for the Social, Behavioral, and Biomedical Sciences.” Behavior Research Methods 39 (2): 175–191. https://doi.org/10.3758/BF03193146.

Fuest, Stefan, Susanne Grüner, Mark Vollrath, and Monika Sester. 2021. “Evaluating the Effectiveness of Different Cartographic Design Variants for Influencing Route Choice.” Cartography and Geographic Information Science 48 (2): 169–185. https://doi.org/10.1080/15230406.2020.1855251.

Fugate, Jennifer Marie Binzak, and Courtny L. Franco. 2019. “What Color Is Your Anger? Assessing Color-Emotion Pairings in English Speakers.” Frontiers in Psychology 10: 206. https://doi.org/10.3389/fpsyg.2019.00206.

Gartner, Georg. 2012. “Putting Emotions in Maps–The Wayfinding Example.” In Mapping Mountain Dynamics: From Glaciers to Volcanoes, 61–65. Auckland: CartoPRESS. http://www.mountaincartography.org/publications/papers/papers_taurewa_12/papers/mcw2012_proceedings.pdf.

Gerrig, Richard J., and Philip G. Zimbardo. 2008. Psychology and Life, Eighteenth Edition. Boston, MA: Pearson Education / Allyn and Bacon.

Gilbert, Avery N., Alan J. Fridlund, and Laurie A. Lucchina. 2016. “The Color of Emotion: A Metric for Implicit Color Associations.” Food Quality and Preference 52 (September): 203–210. https://doi.org/10.1016/j.foodqual.2016.04.007.

Goodhew, Stephanie C., and Evan Kidd. 2020. “Bliss Is Blue and Bleak Is Grey: Abstract Word-Colour Associations Influence Objective Performance Even When Not Task Relevant.” Acta Psychologica 206: 103067. https://doi.org/10.1016/j.actpsy.2020.103067.

Griffin, Amy L., and Julia McQuoid. 2012. “At the Intersection of Maps and Emotion: The Challenge of Spatially Representing Experience.” Kartographische Nachrichten 62 (6): 291–299.

Griffin, Amy L., Travis White, Carolyn Fish, Beate Tomio, Haosheng Huang, Claudia Robbi Sluter, João Vitor Meza Bravo, Sara I. Fabrikant, Susanne Bleisch, Melissa Yamada, and Péricles Picanço. 2017. “Designing across Map Use Contexts: A Research Agenda.” International Journal of Cartography 3 (sup1): 90–114. https://doi.org/10.1080/23729333.2017.1315988.

Gupta, Neeraja, Luca Rigotti, and Alistair Wilson. 2021. “The Experimenters’ Dilemma: Inferential Preferences over Populations.” ArXiv: 2107.05064. https://doi.org/10.48550/arXiv.2107.05064.

Hamann, Stephan. 2012. “Mapping Discrete and Dimensional Emotions onto the Brain: Controversies and Consensus.” Trends in Cognitive Sciences 16 (9): 458–466. https://doi.org/10.1016/j.tics.2012.07.006.

Hanada, Mitsuhiko. 2018. “Correspondence Analysis of Color–Emotion Associations.” Color Research and Application 43 (2): 224–237. https://doi.org/10.1002/col.22171.

Hao, Ming C., Christian Rohrdantz, Halldor Janetzko, Daniel A. Keim, Umeshwar Dayal, Lars Erik Haug, Meichun Hsu, and Florian Stoffel. 2013. “Visual Sentiment Analysis of Customer Feedback Streams Using Geo-Temporal Term Associations.” Information Visualization 12 (3–4): 273–290. https://doi.org/10.1177/1473871613481691.

Harmon-Jones, Eddie, Cindy Harmon-Jones, and Elizabeth Summerell. 2017. “On the Importance of Both Dimensional and Discrete Models of Emotion.” Behavioral Sciences 7 (4): 66. https://doi.org/10.3390/bs7040066.

Hauthal, Eva, and Dirk Burghardt. 2013. “Extraction of Location-Based Emotions from Photo Platforms.” In Progress in Location-Based Services, edited by Jukka M. Krisp, 3–28. Berlin: Springer. https://doi.org/10.1007/978-3-642-34203-5_1.

Heer, Jeffrey, and Michael Bostock. 2010. “Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 203–212. New York: ACM. https://doi.org/10.1145/1753326.1753357.

Hemphill, Michael. 1996. “A Note on Adults’ Color–Emotion Associations.” The Journal of Genetic Psychology 157 (3): 275–280. https://doi.org/10.1080/00221325.1996.9914865.

Huang, Haosheng, Georg Gartner, Silvia Klettner, and Manuela Schmidt. 2014. “Considering Affective Responses towards Environments for Enhancing Location Based Services.” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL–4: 93–96. https://doi.org/10.5194/isprsarchives-XL-4-93-2014.

Hulland, John, and Jeff Miller. 2018. “‘Keep on Turkin’’?” Journal of the Academy of Marketing Science 46 (5): 789–794. https://doi.org/10.1007/s11747-018-0587-4.

Ishihara, Shinobu. 1974. Ishihara’s Tests for Colour-Blindness: Concise Edition. Tokyo: Kanehara Shuppan.

Jacobs, Laurence, Charles Keown, Reginald Worthley, and Kyung-Il Ghymn. 1991. “Cross-cultural Colour Comparisons: Global Marketers Beware!” International Marketing Review 8 (3): 21–33. https://doi.org/10.1108/02651339110137279.

Jain, Anil K. 2010. “Data Clustering: 50 Years beyond K-Means.” Pattern Recognition Letters 31 (8): 651–666. https://doi.org/10.1016/j.patrec.2009.09.011.

Jonauskaite, Domicele, Ahmad Abu-Akel, Nele Dael, Daniel Oberfeld, Ahmed M. Abdel-Khalek, Abdulrahman S. Al-Rasheed, Jean-Philippe Antonietti, Victoria Bogushevskaya, Amer Chamseddine, Eka Chkonia, et al. 2020. “Universal Patterns in Color-Emotion Associations Are Further Shaped by Linguistic and Geographic Proximity.” Psychological Science 31 (10): 1245–1260. https://doi.org/10.1177/0956797620948810.

Keltner, Dacher, Jessica Tracy, Disa A. Sauter, Daniel C. Cordaro, and Galen McNeil. 2016. “Expression of Emotion.” In Handbook of Emotions, Fourth Edition, edited by Lisa Feldman Barrett, Michael Lewis, and Jeannette M. Haviland-Jones, 467–482. New York: Guilford Press.

Kim, Jeongmi (Jamie), and Daniel R. Fesenmaier. 2015. “Measuring Emotions in Real Time: Implications for Tourism Experience Design.” Journal of Travel Research 54 (4): 419–429. https://doi.org/10.1177/0047287514550100.

Kramer, Oliver. 2016. “Scikit-Learn.” In Machine Learning for Evolution Strategies, 45–53. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-33383-0_5.

Kuhn, Harold W. 1955. “The Hungarian Method for the Assignment Problem.” Naval Research Logistics Quarterly 2 (1–2): 83–97. https://doi.org/10.1002/nav.3800020109.

Kwan, Mei-Po. 2007. “Affecting Geospatial Technologies: Toward a Feminist Politics of Emotion.” The Professional Geographer 59 (1): 22–34. https://doi.org/10.1111/j.1467-9272.2007.00588.x.

Lee, Sungkil, Mike Sips, and Hans-Peter Seidel. 2013. “Perceptually Driven Visibility Optimization for Categorical Data Visualization.” IEEE Transactions on Visualization and Computer Graphics 19 (10): 1746–1757. https://doi.org/10.1109/TVCG.2012.315.

Lin, Sharon, Julie Fortuna, Chinmay Kulkarni, Maureen Stone, and Jeffrey Heer. 2013. “Selecting Semantically-Resonant Colors for Data Visualization.” Computer Graphics Forum 32 (3pt4): 401–410. https://doi.org/10.1111/cgf.12127.

Liu, Qiliang, Min Deng, Yan Shi, and Jiaqiu Wang. 2012. “A Density-Based Spatial Clustering Algorithm Considering Both Spatial Proximity and Attribute Similarity.” Computers & Geosciences 46: 296–309. https://doi.org/10.1016/j.cageo.2011.12.017.

Loughin, Thomas M., and Peter N. Scherer. 1998. “Testing for Association in Contingency Tables with Multiple Column Responses.” Biometrics 54 (2): 630–637. https://doi.org/10.2307/3109769.

Lu, Yafeng, Xia Hu, Feng Wang, Shamanth Kumar, Huan Liu, and Ross Maciejewski. 2015. “Visualizing Social Media Sentiment in Disaster Scenarios.” In Proceedings of the 24th International Conference on World Wide Web, 1211–1215. New York: ACM. https://doi.org/10.1145/2740908.2741720.

Lutabingwa, J., and C. J. Auriacombe. 2007. “Data Analysis in Quantitative Research.” Journal of Public Administration 42 (6): 528–548.

MacLeod, Colin M. 1991. “Half a Century of Research on the Stroop Effect: An Integrative Review.” Psychological Bulletin 109 (2): 163–203. https://doi.org/10.1037/0033-2909.109.2.163.

Maddrell, Avril. 2016. “Mapping Grief. A Conceptual Framework for Understanding the Spatial Dimensions of Bereavement, Mourning and Remembrance.” Social & Cultural Geography 17 (2): 166–188. https://doi.org/10.1080/14649365.2015.1075579.

Mahieu, Benjamin, Pascal Schlich, Michel Visalli, and Hervé Cardot. 2021. “A Multiple-Response Chi-Square Framework for the Analysis of Free-Comment and Check-All-That-Apply Data.” Food Quality and Preference 93: 104256. https://doi.org/10.1016/j.foodqual.2021.104256.

Marey, Hatem M., Noura A. Semary, and Sameh S. Mandour. 2015. “Ishihara Electronic Color Blindness Test: An Evaluation Study.” Ophthalmology Research 3 (3): 67–75. https://doi.org/10.9734/OR/2015/13618.

Matei, Sorin, Sandra J. Ball-Rokeach, and Jack Linchuan Qiu. 2001. “Fear and Misperception of Los Angeles Urban Space: A Spatial-Statistical Study of Communication-Shaped Mental Maps.” Communication Research 28 (4): 429–463. https://doi.org/10.1177/009365001028004004.

Meenar, Mahbubur, Bradley Flamm, and Kevin Keenan. 2019. “Mapping the Emotional Experience of Travel to Understand Cycle-Transit User Behavior.” Sustainability 11 (17): 4743. https://doi.org/10.3390/su11174743.

Mitchell, Lewis, Morgan R. Frank, Kameron Decker Harris, Peter Sheridan Dodds, and Christopher M. Danforth. 2013. “The Geography of Happiness: Connecting Twitter Sentiment and Expression, Demographics, and Objective Characteristics of Place.” PLoS ONE 8 (5): e64417. https://doi.org/10.1371/journal.pone.0064417.

Mitchell, Stuart, Michael O’Sullivan, and Iain Dunning. 2011. “PuLP: A Linear Programming Toolkit for Python.” The University of Auckland, Auckland, New Zealand. https://optimization-online.org/2011/09/3178.

Mody, Ruturaj N., Katharine S. Willis, and Roland Kerstein. 2009. “WiMo: Location-Based Emotion Tagging.” In Proceedings of the 8th International Conference on Mobile and Ubiquitous Multimedia, 1–4. New York: ACM Press. https://doi.org/10.1145/1658550.1658564.

Mohammad, Saif M. 2013. “Even the Abstract Have Colour: Consensus in Word-Colour Associations.” ArXiv: 1309.5391. https://doi.org/10.48550/arXiv.1309.5391.

———. 2016. “Sentiment Analysis: Detecting Valence, Emotions, and Other Affectual States from Text.” In Emotion Measurement, edited by Herbert L. Meiselman 201–237. Elsevier. https://doi.org/10.1016/B978-0-08-100508-8.00009-6.

Olsen, Chris, and Diane Marie M. St. George. 2004. “Cross-Sectional Study Design and Data Analysis.” Young Epidemiology Scholars Competition Teaching Units, Robert Wood Johnson Foundation.

Or, Calvin K. L., and Heller H. L. Wang. 2014. “Color-Concept Associations: A Cross-Occupational and -Cultural Study and Comparison.” Color Research and Application 39 (6): 630–635. https://doi.org/10.1002/col.21832.

Ou, Li-Chen, M. Ronnier Luo, Andrée Woodcock, and Angela Wright. 2004. “A Study of Colour Emotion and Colour Preference. Part I: Colour Emotions for Single Colours.” Color Research and Application 29 (3): 232–240. https://doi.org/10.1002/col.20010.

Pánek, Jiří, and Karl Benediktsson. 2017. “Emotional Mapping and Its Participatory Potential: Opinions about Cycling Conditions in Reykjavík, Iceland.” Cities 61: 65–73. https://doi.org/10.1016/j.cities.2016.11.005.

Pánek, Jiří, Vít Pászto, and Lukáš Marek. 2017. “Mapping Emotions: Spatial Distribution of Safety Perception in the City of Olomouc.” In The Rise of Big Spatial Data, edited by Igor Ivan, Alex Singleton, Jiří Horák, and Tomáš Inspektor, 211–224. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-45123-7_16.

Pearce, Margaret Wickens. 2008. “Framing the Days: Place and Narrative in Cartography.” Cartography and Geographic Information Science 35 (1): 17–32. https://doi.org/10.1559/152304008783475661.

Pe’er, Eyal, David M. Rothschild, Zak Evernden, Andrew Gordon, and Ekaterina Damer. 2022. “Data Quality of Platforms and Panels for Online Behavioral Research.” Behavior Research Methods 54 (4): 1643–1662. https://doi.org/10.3758/s13428-021-01694-3.

Peterson, Gretchen. 2020. GIS Cartography: A Guide to Effective Map Design, Third Edition. Boca Raton, FL: CRC Press. https://doi.org/10.1201/9781003046325.

Plutchik, Robert. 2001. “The Nature of Emotions: Human Emotions Have Deep Evolutionary Roots, a Fact That May Explain Their Complexity and Provide Tools for Clinical Practice.” American Scientist 89 (4): 344–350. https://doi.org/10.1511/2001.28.344.

Rathore, Ragini, Zachary Leggon, Laurent Lessard, and Karen B. Schloss. 2020. “Estimating Color-Concept Associations from Image Statistics.” IEEE Transactions on Visualization and Computer Graphics 26 (1): 1226–1235. https://doi.org/10.1109/TVCG.2019.2934536.

Resch, Bernd, Anja Summa, Günther Sagl, Peter Zeile, and Jan-Philipp Exner. 2015. “Urban Emotions—Geo-Semantic Emotion Extraction from Technical Sensors, Human Sensors and Crowdsourced Data.” In Progress in Location-Based Services 2014, edited by Georg Gartner and Haosheng Huang, 199–212. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-11879-6_14.

Sacharin, Vera, Katja Schlegel, and Klaus R. Scherer. 2012. “Geneva Emotion Wheel Rating Study.” Geneva: University of Geneva, Swiss Center for Affective Sciences. https://www.researchgate.net/publication/280880848_Geneva_Emotion_Wheel_Rating_Study.

Sander, David. 2013. “Models of Emotion: The Affective Neuroscience Approach.” In The Cambridge Handbook of Human Affective Neuroscience, edited by Jorge Armony and Patrik Vuilleumier, 5–53. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9780511843716.003.

Schanda, János. 2007. Colorimetry: Understanding the CIE System. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9780470175637.

Scherer, Klaus R. 2005. “What Are Emotions? And How Can They Be Measured?” Social Science Information 44 (4): 695–729. https://doi.org/10.1177/0539018405058216.

Scherer, Klaus R., Vera Shuman, Johnny Fontaine, and Cristina Soriano. 2013. “The GRID Meets the Wheel: Assessing Emotional Feeling via Self-Report.” In Components of Emotional Meaning: A Sourcebook, edited by Johnny J. R. Fontaine, Klaus R. Scherer, and Cristina Soriano, 281–298. Oxford, UK: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199592746.003.0019.

Schloss, Karen B., Laurent Lessard, Charlotte S. Walmsley, and Kathleen Foley. 2018. “Color Inference in Visual Communication: The Meaning of Colors in Recycling.” Cognitive Research: Principles and Implications 3: 5. https://doi.org/10.1186/s41235-018-0090-y.

Schrijver, Alexander. 1998. Theory of Linear and Integer Programming. Hoboken, NJ: John Wiley & Sons.

Setlur, Vidya, and Maureen C. Stone. 2015. “A Linguistic Approach to Categorical Color Assignment for Data Visualization.” IEEE Transactions on Visualization and Computer Graphics 22 (1): 698–707. https://doi.org/10.1109/TVCG.2015.2467471.

Sheehan, Kim Bartel. 2018. “Crowdsourcing Research: Data Collection with Amazon’s Mechanical Turk.” Communication Monographs 85 (1): 140–156. https://doi.org/10.1080/03637751.2017.1342043.

Silva, Samuel, Beatriz Sousa Santos, and Joaquim Madeira. 2011. “Using Color in Visualization: A Survey.” Computers & Graphics 35 (2): 320–333. https://doi.org/10.1016/j.cag.2010.11.015.

Slocum, Terry A., Robert B. McMaster, Fritz C. Kessler, and Hugh H. Howard. 2022. Thematic Cartography and Geovisualization, Fourth Edition. Boca Raton, FL: CRC Press.

Stone, Maureen, Danielle Albers Szafir, and Vidya Setlur. 2014. “An Engineering Model for Color Difference as a Function of Size.” In Proceedings of the IS&T 22nd Color and Imaging Conference, 253–258. Springfield, VA: Society for Imaging Science and Technology. https://doi.org/10.2352/CIC.2014.22.1.art00045.

Stroop, J. Ridley. 1935. “Studies of Interference in Serial Verbal Reactions.” Journal of Experimental Psychology 18 (6): 643–662. https://doi.org/10.1037/h0054651.

Suk, Hyeon-Jeong, and Hans Irtel. 2010. “Emotional Response to Color across Media.” Color Research and Application 35 (1): 64–77. https://doi.org/10.1002/col.20554.

Tham, Diana Su Yun, Paul T. Sowden, Alexandra Grandison, Anna Franklin, Anna Kai Win Lee, Michelle Ng, Juhyun Park, Weiguo Pang, and Jingwen Zhao. 2020. “A Systematic Investigation of Conceptual Color Associations.” Journal of Experimental Psychology: General 149 (7): 1311–1332. https://doi.org/10.1037/xge0000703.

Urdan, Timothy C. 2016. Statistics in Plain English, Fourth Edition. New York: Routledge.

Vardi, Yehuda, and Cun-Hui Zhang. 2000. “The Multivariate L1-Median and Associated Data Depth.” Proceedings of the National Academy of Sciences 97 (4): 1423–1426. https://doi.org/10.1073/pnas.97.4.1423.

Volker, Martin A. 2006. “Reporting Effect Size Estimates in School Psychology Research.” Psychology in the Schools 43 (6): 653–672. https://doi.org/10.1002/pits.20176.

Williams, H. Paul. 2013. Model Building in Mathematical Programming, Fifth Edition. Chichester, UK: John Wiley & Sons.

Zeile, Peter, Bernd Resch, Jan-Philipp Exner, and Günther Sagl. 2015. “Urban Emotions: Benefits and Risks in Using Human Sensory Assessment for the Extraction of Contextual Emotion Information in Urban Planning.” In Planning Support Systems and Smart Cities, edited by Stan Geertman, Joseph Ferreira Jr., Robert Goodspeed, and John Stillwell, 209–225. Cham, Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-319-18368-8_11.