We analyzed human brain activity patterns from 148 studies of emotion categories (2159 total participants) using a novel hierarchical Bayesian model. The model allowed us to classify which of five categories—fear, anger, disgust, sadness, or happiness—is engaged by a study with 66% accuracy (43–86% across categories). Analyses of the activity patterns encoded in the model revealed that each emotion category is associated with unique, prototypical patterns of activity across multiple brain systems including the cortex, thalamus, amygdala, and other structures. The results indicate that emotion categories are not contained within any one region or system, but are represented as configurations across multiple brain networks. The model provides a precise summary of the prototypical patterns for each emotion category, and demonstrates that a sufficient characterization of emotion categories relies on (a) differential patterns of involvement in neocortical systems that differ between humans and other species, and (b) distinctive patterns of cortical-subcortical interactions. Thus, these findings are incompatible with several contemporary theories of emotion, including those that emphasize emotion-dedicated brain systems and those that propose emotion is localized primarily in subcortical activity. They are consistent with componential and constructionist views, which propose that emotions are differentiated by a combination of perceptual, mnemonic, prospective, and motivational elements. Such brain-based models of emotion provide a foundation for new translational and clinical approaches.
In this meta-analysis across 148 studies, we ask whether it is possible to identify patterns that differentiate five emotion categories—fear, anger, disgust, sadness, and happiness—in a way that is consistent across studies. Our analyses support this capability, paving the way for brain markers for emotion that can be applied prospectively in new studies and individuals. In addition, we investigate the anatomical nature of the patterns that are diagnostic of emotion categories, and find that they are distributed across many brain systems associated with diverse cognitive, perceptual, and motor functions. For example, among other systems, information diagnostic of emotion category was found in both large, multi-functional cortical networks and in the thalamus, a small region composed of functionally dedicated subnuclei. Thus, rather than relying on measures in single regions, capturing the distinctive qualities of different types of emotional responses will require integration of measures across multiple brain systems. Beyond this broad conclusion, our results provide a foundation for specifying the precise mix of activity across systems that differentiates one emotion category from another.
Emotions play a crucial role in forging and maintaining social relationships, which is a major adaptation of our species. They are also central in the diagnosis and treatment of virtually every mental disorder. The autonomic and neuroendocrine changes that accompany emotional episodes may also play an important role in physical health via peripheral gene expression and other pathways.
In animals, substantial progress has been made in linking motivated behaviors such as freezing, flight, reward pursuit, and aggressive behavior to specific brain circuits. However, emotional experiences in humans are substantially more complex. Emotions such as fear emerge in response to complex situations that include basic sensory elements such as threat cues, as well as mental attributions about context information (e.g., the belief that one is being socially evaluated) and one's own internal states [9,10]. A specific emotion category, like fear, can involve a range of behaviors, including freezing, flight, and aggression, as well as complex social interactions. Thus, in spite of groundbreaking advances in understanding the circuitry underlying basic behavioral adaptations for safety and reproduction (including 'threat' behaviors), there is no comprehensive model of the neurophysiological basis of emotional experience in humans.
The first two decades of neuroimaging saw hundreds of studies of the brain correlates of human emotion, but a central problem for the field is that the regions most reliably activated—e.g., the anterior cingulate, anterior insula, amygdala, and orbitofrontal cortex—are activated in multiple categories of emotion, and during many other sensory, perceptual, and cognitive events [11,13]. Thus, activation of these regions is not specific to one emotion category, or even to emotion more generally. And, while there are many findings that seem to differentiate one emotion type from another, it is not clear that these findings are reliable enough (with sufficiently large effects) or generalizable enough across studies to meaningfully use brain information to infer what type of emotion was experienced.
For example, in an innovative recent study, Kassam and colleagues identified patterns of fMRI activity that distinguished multiple emotion categories. Though very promising, such approaches are limited in two basic ways. First, they are not models of the generative processes sufficient to characterize a particular type of emotional experience. For example, the most common method uses Support Vector Machines to discriminate affective conditions (e.g., depressed patients vs. controls), and to discriminate 5 emotions, 10 separate maps (5 choose 2) are required for 'brute-force' pattern separation. Other models, such as the Gaussian Naïve Bayes approach, rely on differences in activity patterns without capturing any of the interactions among brain regions that are likely critical for differentiating affective states [20,22,23]. Second, like univariate brain mapping, these approaches are beginning to yield a collection of patterns that seem to differentiate one affective state or patient group from another, but it remains to be seen how generalizable these predictive models are across studies, task variants, and populations. If history is a guide, in the area of emotion, the patterns that appear to reliably distinguish emotion categories may vary from study to study, making it difficult to identify generalizable models of specific emotion types.
In this paper, our goal was to develop a generative, brain-based model of the five most common emotion categories—fear, anger, disgust, sadness, and happiness—based on findings across studies. Developing such a model would provide a rich characterization of the 'core' brain activation and co-activation patterns prototypical of each emotion category, which could be used to make inferences about both the distinctive features of emotion categories and their functional similarities across the brain or in specific systems. In addition, a useful model should be able to go beyond identifying significant differences across emotion categories and provide information that is actually diagnostic of the category based on observed patterns of brain activity. From a meta-analytic database of nearly 400 neuroimaging studies (6,827 participants) on affect and emotion, we used a subset of 148 studies focused on the five emotion categories mentioned above to develop an integrated, hierarchical Bayesian model of the functional brain patterns underlying them.
First, we asked whether it is possible to identify patterns of brain activity diagnostic of emotion categories across contexts and studies. Second, we asked whether emotion categories can be localized to specific brain structures or circuits, or to more broadly distributed patterns of activity across multiple systems. For many decades, scientists have searched to no avail for the brain basis of emotion categories in specific anatomical regions—e.g., fear in the amygdala, disgust in the insula, etc. The amygdala and insula are involved in fear and disgust, but are neither sufficient nor necessary for their experience. Conversely, emotions in both categories engage a much wider array of systems assumed to have cognitive, perceptual, and sensory functions, and damage to these systems can profoundly affect emotionality [26,27]. This multi-system view of emotion is consistent with network-based theories of the brain's functional architecture [28,29] that have gained substantial traction in recent years. Based on these findings, we predicted that anger, sadness, fear, disgust, and happiness emerge from the interactions across distributed brain networks that are not specific to emotion per se, but that subserve other basic processes, including attention, memory, action, and perception, as well as autonomic, endocrine, and metabolic regulation of the body [30,31]. Empirical support for this network approach to emotion has begun to emerge in individual experiments (e.g., [14,32–36]), and in task-independent ("resting-state") analyses. In Kassam et al., for example, emotion category-related fMRI activity was widely distributed across the brain; the same is true for recent work predicting depression from brain activity, again illustrating the need for a network approach.
Our analysis included 148 PET and fMRI studies published from 1993 to 2011 (377 maps, 2159 participants) that attempted to elicit one of the five most commonly studied categories of emotion—happiness, fear, sadness, anger, and disgust. The studies were relatively heterogeneous in their methods for eliciting emotion (the most common were visual, auditory, imagery, and memory recall) and in the stimuli used (faces, pictures, films, words, and others). There was some covariance between emotion categories and elicitation methods (S1 Table), and we assessed its impact in several analyses. Studies used both male and female participants, primarily of European descent. The goal of our analysis was to test whether each emotion category has a unique signature of activity across the brain that is consistent despite varying methodological conditions (ruling out the possibility that emotion activity maps differ systematically because of method variables), providing a provisional brain 'signature' for each emotion category.
The Bayesian spatial point process (BSPP) model is a hierarchical Bayesian representation of the joint density of the number and locations of peak activations within a study (i.e., x, y, z coordinates) given its particular emotion category. The BSPP model differs from standard univariate [12,13,39] and co-activation based [40,41] approaches to meta-analysis in several fundamental ways. For instance, Activation Likelihood Estimation (ALE), multilevel kernel density analysis (MKDA), and co-activation approaches are (1) not generative models of the emotion and (2) not multivariate in brain space. Because they are not generative models, standard analyses provide only descriptive, summary maps of activity or bivariate co-activation for different psychological states.
The generative (or 'forward') model estimates a set of brain locations, or 'population centers', that are consistently active during instances of a given emotion category. Stochastic sampling from these population centers with study-level variation (in methods, preprocessing, statistical analysis, etc.) and measurement-level spatial noise is assumed to generate the observed data. The result is a rich, probabilistic representation of the spatial patterns of brain activity associated with each emotion category. Once estimated, the model can be used to (1) investigate the brain representations for each emotion implicit in the model and (2) infer the most likely emotion category for a new study based on its pattern of activation ('reverse' inference). The generative model concerns the process by which emotional instances of a given category produce observed peak activation foci, and the likelihood with which they do so. Activation data from studies or individuals are modeled at three hierarchical levels (see Methods).
At Level 1 is the individual study data, in the form of peak coordinate locations. Level 2 models the activation centers across studies, with a data-generating focus that can result in variable numbers of reported locations depending on the smoothness of the image and on analysis/reporting choices. Level 3 models the locations of 'true' population centers for each emotion category with a probability distribution over space specified by the model.
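The three-level generative process can be illustrated with a toy simulation. This is a hypothetical sketch, not the authors' BSPP implementation; the coordinates, noise scales, and peak counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_study(population_centers, study_sd=8.0, peak_sd=4.0,
                   n_peaks_per_center=3):
    """Generate observed peak coordinates (x, y, z) for one study.

    Level 3: population_centers -- 'true' centers for the emotion category.
    Level 2: each study perturbs the centers (methods, preprocessing, etc.).
    Level 1: each reported peak adds measurement-level spatial noise.
    """
    peaks = []
    for center in population_centers:
        study_center = center + rng.normal(0, study_sd, size=3)   # Level 2
        for _ in range(n_peaks_per_center):                       # Level 1
            peaks.append(study_center + rng.normal(0, peak_sd, size=3))
    return np.array(peaks)

# Two invented population centers in MNI-like coordinates
centers = np.array([[-22.0, -4.0, -18.0],
                    [ 38.0, 22.0,  -6.0]])
peaks = simulate_study(centers)   # one simulated study's reported peaks
```

In the real model, the number of centers and peaks is itself random and inferred from data; this sketch only shows the direction of the generative flow from population centers to observed foci.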
The MCMC procedure draws samples from the joint posterior distribution of the number and locations of peak activations in the brain given an emotion category. The posterior distribution is summarized in part by the intensity function map, representing the posterior expected number of activation or population centers in each area across the brain given the emotion category; this can be used to interpret the activation pattern characteristic of an emotion category (Fig. 1A). Since the BSPP models the joint distribution of the number and locations of a set of peak coordinates, the posterior distribution also includes information about co-activation across voxels; thus, MCMC samples drawn from it can be used to draw inferences about the co-activation patterns and network properties of each emotion category (discussed below).
Once the Bayesian model is estimated, it can be inverted in a straightforward manner to estimate the posterior probability of each emotion category given a set of brain activation coordinates (see Methods). We used Bayes' rule to obtain these probabilities, assuming no prior knowledge of the base rate of studies in each category (i.e., flat priors), and used leave-one-study-out cross-validation so that predictions were always made about studies not used to train the model. The model performed the five-way classification of emotion categories with accuracy ranging from 43% for anger to 86% for fear (mean balanced accuracy = 66%; Fig. 1B; S1 Table); chance was 20% for all categories, and absence of bias was validated by a permutation test. The BSPP model outperformed both a Naïve Bayes classifier (mean accuracy 35%) and a nonlinear support-vector-machine classifier (mean accuracy 33%; see Supplementary Methods for details), confirming its utility in distinguishing different emotion categories.
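Under flat priors, the model inversion reduces to normalizing the per-category likelihoods. A minimal sketch of that step (the category names come from the text; the log-likelihood values for the held-out study are hypothetical):

```python
import numpy as np

def classify_emotion(log_likelihoods, categories):
    """Invert a generative model with Bayes' rule under flat priors.

    log_likelihoods[c] = log p(observed peaks | category c) from the
    fitted model; with flat priors, the posterior is proportional to
    the likelihood.
    """
    ll = np.array([log_likelihoods[c] for c in categories])
    ll -= ll.max()                 # stabilize before exponentiating
    post = np.exp(ll)
    post /= post.sum()             # normalize to posterior probabilities
    return dict(zip(categories, post))

cats = ["fear", "anger", "disgust", "sadness", "happiness"]
# Hypothetical log-likelihoods for one held-out study
post = classify_emotion({"fear": -100.0, "anger": -103.0, "disgust": -105.0,
                         "sadness": -104.0, "happiness": -106.0}, cats)
best = max(post, key=post.get)     # predicted category for this study
```

In leave-one-study-out cross-validation, this step is repeated with the model re-fit on all studies except the one being classified.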
For instance, 23% of the studies used recall and 50% used visual images to induce sadness, whereas 2% of studies used recall and 90% used visual images to induce fear. Thus, patterns for sadness vs. fear might be differentiable because the different stimuli elicit different brain responses. We verified that classification results were essentially unaffected by controlling for induction method (40–83% accuracy across the five emotion categories, and 61% on average; S1 Table). We also attempted to predict the emotion category using several methodological variables, including the method of elicitation (the most common were visual, auditory, imagery, and memory recall), stimulus type (faces, pictures, films, words, and others), participant sex, control condition, and imaging technique (PET or fMRI). Several of these variables accurately classified some emotion categories in the five-way classification (S2 Table), but no methods variable performed as well as the original BSPP model in accuracy. Stimulus type, task type, and imaging technique predicted emotion significantly above chance, at 32%, 26%, and 26% accuracy, respectively. Elicitation method, participant sex, and control condition for the emotion contrasts were at 24%, 21%, and 18%, respectively (all nonsignificant). Thus, although there are some dependencies between the methods used and the emotion categories studied, the emotion category patterns that we identified with our BSPP approach appeared to generalize across the different methods (at least as represented in our sample of studies).
Fig. 1C shows the intensity maps associated with each emotion category. The distinctiveness of each emotion category was distributed across all major regions of the cortex, as well as in subcortical regions, as supported by additional analyses described below. Notably, limbic and paralimbic regions such as the amygdala, ventral striatum, orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), brainstem, and insula were likely to be active in all emotion categories, though with different and meaningful distributions within each region (as shown in analyses below). In addition, regions typically labeled as 'cognitive' or 'perceptual' were also engaged, and potentially differentially engaged, across categories, including ventromedial, dorsomedial, and ventrolateral prefrontal cortices (vmPFC, dmPFC, and vlPFC), posterior cingulate (PCC), hippocampus and medial temporal lobes, and occipital regions.
We summarized each emotion category's activity within a set of anatomically defined regions and networks (Fig. 2A). For cortical, basal ganglia, and cerebellar networks, we used results from Buckner and colleagues [42–44], who identified seven networks with coherent resting-state connectivity across 1,000 participants. Each of the seven included a cortical network and correlated areas within the basal ganglia (BG) and cerebellum. We supplemented these networks with anatomical subregions within the amygdala, hippocampus, thalamus, and the brainstem and hypothalamus. We tested (a) whether broad anatomical divisions (e.g., cortex, amygdala) showed different overall intensity values across the five emotion categories, and (b) whether the 'signature' of activity across networks within each division differed significantly across emotions (S3 Table). Our broad goal, however, was not to exhaustively test all emotion differences in all regions, but to provide a broad characterization of each emotion category and of which brain divisions are important in diagnosing them.
Overall, different emotion categories involved reliably different patterns of activation across these anatomically circumscribed zones (Fig. 2B, S3 Fig, and S3 Table), illustrating how the BSPP model can be used to draw inferences across multiple anatomical levels of analysis.
First, there were few differences among emotion categories in the overall level of cortical engagement (summarized in S3 Table). Second, no emotion category mapped to any single cortical network, but emotion categories could be distinguished by significant differences in their profiles across networks (Fig. 2B; p < .001 overall; S3 Table). To identify patterns across the seven cortical networks, we used nonnegative matrix factorization to decompose these intensity values into two distinct profiles (Fig. 2C). These profiles were differentially expressed by different emotions (p < 0.01) and showed significant differences in 8 of the 10 pairwise comparisons across emotions (q < 0.05 FDR corrected; S3 Table). Adopting Buckner et al.'s terminology for the networks, we found that the anger and fear categories were characterized by a profile that mainly involved the 'dorsal attention,' 'visual' (occipital), 'frontoparietal,' 'limbic,' and 'default mode' networks. Patterns for the disgust, sadness, and happiness categories were characterized by moderate activation of this profile and high intensity in a second profile that included the 'ventral attention,' 'somatomotor,' and 'visual' networks (Fig. 2C). When combined, these two profiles differentiated all five emotion categories to some degree, as can be seen in the nearly non-overlapping probability density functions (colored regions in Fig. 2C). Third, the grouping of emotion categories in terms of cortical activity profiles did not match folk conceptions of emotions or the dimensions identified in behavioral emotion research. For example, the happiness and disgust categories (one 'positive' and one 'negative' emotion) produced very similar profiles, but the disgust and fear categories (both high-arousal negative emotions) produced very different profiles.
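A decomposition of this kind can be sketched with standard Lee-Seung multiplicative updates. This is an illustrative implementation, not the authors' code, and the 5 x 7 intensity matrix below is random placeholder data standing in for the emotion-by-network intensity values.

```python
import numpy as np

def nnmf(V, k, n_iter=500, seed=0):
    """Nonnegative matrix factorization minimizing ||V - W H||_F^2
    via multiplicative updates.

    V: (n_emotions x n_networks) nonnegative intensity matrix.
    Returns W (n_emotions x k profile weights) and H (k x n_networks
    profiles), both nonnegative.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update profiles
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update weights
    return W, H

# Placeholder data: 5 emotion categories x 7 cortical networks
V = np.random.default_rng(1).random((5, 7))
W, H = nnmf(V, k=2)            # two profiles, as in the text
err = np.linalg.norm(V - W @ H)
```

Each row of H is one canonical profile across the seven networks, and each row of W gives an emotion category's loading on the two profiles.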
Different patterns across emotion categories were also discernible in subcortical zones including the amygdala, thalamus, brainstem/cerebellum, and basal ganglia (Fig. 2B). Nonnegative matrix factorization again produced distinct profiles for each emotion category (S3 Fig), revealing several additional characteristics over and above the cortical profiles. First, as with cortical networks, the largest differences were not overall intensity differences across emotion categories, but rather the profile of differences across subregions. Even the zones most differentially engaged in terms of average intensity, such as the amygdala (Fig. 2B), showed appreciable intensity in all five emotion categories (see also Fig. 1), consistent with previous meta-analyses. Second, whereas cortical networks discriminated the fear and anger categories from the other emotions, hippocampal and cerebellar/brainstem profiles discriminated fear from anger (q < 0.05 FDR; S3 Fig and S3 Table). Third, the relationships between cortical, BG, and cerebellar networks varied across emotion categories. For example, the anger category produced the highest intensity in the cortical 'dorsal attention' network, whereas in the BG 'dorsal attention' zone, the disgust category was high and the anger category was low (Fig. 2B). This suggests that the network coupling observed in task-independent (i.e., 'resting-state') data is not preserved when emotional experiences are induced. Thalamic areas connected with motor and premotor cortices were most activated in the fear and disgust categories, as were 'somatomotor' BG regions, but the 'somatomotor' cortical network was low in the disgust and fear categories. These patterns suggest that simple characterizations such as more vs. less motor activity are insufficient to characterize the brain representations of emotion categories.
By saving the average intensity values for each region/network from each MCMC iteration, we were able to estimate the co-activation intensity as the correlation between average intensity values for each pair of regions. We used a permutation test to threshold the co-activation estimates (using the most stringent of the q < .05 FDR-corrected thresholds across categories).
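The logic of this estimate can be sketched as follows, on simulated MCMC draws for four regions. This is a toy example: it uses a max-statistic permutation threshold rather than the FDR correction applied in the actual analysis, and the built-in region 0-1 coupling is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MCMC draws: 1,000 iterations x 4 regions/networks of
# average intensity (the paper used 49 regions/networks).
samples = rng.normal(size=(1000, 4))
samples[:, 1] += 0.8 * samples[:, 0]   # build in a region 0-1 co-activation

# Co-activation: correlation of average intensities across MCMC iterations
coact = np.corrcoef(samples, rowvar=False)

def permutation_threshold(samples, n_perm=200, alpha=0.05, seed=1):
    """Null distribution of the maximum |correlation| after independently
    permuting each region's samples, which breaks any co-activation."""
    rng = np.random.default_rng(seed)
    maxima = []
    for _ in range(n_perm):
        perm = np.column_stack([rng.permutation(col) for col in samples.T])
        r = np.corrcoef(perm, rowvar=False)
        np.fill_diagonal(r, 0.0)
        maxima.append(np.abs(r).max())
    return float(np.quantile(maxima, 1 - alpha))

thr = permutation_threshold(samples)
significant = np.abs(coact) > thr      # thresholded co-activation edges
```

The surviving entries of `significant` correspond to the edges drawn in the co-activation graphs described next.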
Fig. 3 shows that each emotion category was associated with a qualitatively different configuration of co-activation between cortical networks and subcortical brain regions. In Fig. 3A, force-directed graphs of the relationships among the 49 anatomical regions/networks demonstrate very different topological configurations for the five emotion categories. In these graphs, regions and networks are represented by circles (nodes), with significant co-activations (edges) represented as lines. The size of each circle reflects the region/network's betweenness centrality [48,49], a graph-theoretic measure of the degree to which a region/network is a 'connector' of multiple other regions. Colors reflect membership in six cerebral zones: cortex, basal ganglia, cerebellum/brainstem, thalamus, amygdala, and hippocampus. Fig. 3B shows the same graph relationships in anatomical brain space. Finally, Fig. 3C shows estimates of average co-activation within (diagonals) and between (off-diagonals) the six cerebral zones. This co-activation metric reflects global efficiency, a graph-theoretic measure based on the shortest path lengths connecting regions in the co-activation graphs, calculated as the average of (1/path length) over pairs of regions/networks.
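Global efficiency as described here—the average of (1/shortest path length) over node pairs—can be computed with a simple breadth-first search. The sketch below uses a toy 3-node graph, not the paper's 49-node co-activation graphs.

```python
from collections import deque

def global_efficiency(adj):
    """Average inverse shortest-path length over ordered node pairs
    of an unweighted, undirected graph.

    adj: dict mapping node -> set of neighbor nodes.
    Unreachable pairs contribute 0 (infinite path length).
    """
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        dist = {s: 0}                    # BFS shortest paths from s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s and t in dist:
                total += 1.0 / dist[t]
    return total / (n * (n - 1))

# Toy path graph a - b - c: distances ab=1, bc=1, ac=2
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
eff = global_efficiency(adj)   # (1 + 1 + 1/2) / 3
```

Higher efficiency indicates shorter co-activation paths between zones; a fragmented graph, as described below for sadness, yields low efficiency.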
For the anger category, connectors (>90th percentile in betweenness centrality) include the cortical 'frontoparietal' network, right amygdala, and brainstem. (Visual cortex was a connector for all emotion categories except sadness.) In disgust, by contrast, cortical networks connect to basal ganglia regions and serve as a bridge to an otherwise isolated cerebellum; connectors include the somatomotor basal ganglia network and brainstem. The fear category is marked by reduced co-activation among cortical networks and between cortex and other structures, but the basal ganglia are tightly integrated with the amygdala and thalamus. In happiness, intra-cortical co-activation is higher, but cortical-subcortical co-activation is low, and connectors include the 'limbic' cortical network, motor thalamus, and visual basal ganglia and cerebellum. Sadness is characterized by dramatically reduced co-activation within the cortex, between cortex and other regions, and between the cerebellum and other regions. Intra-thalamic, intra-basal ganglia, and intra-cerebellar co-activation are relatively preserved, but large-scale connections among systems are largely absent. Connectors include the limbic cerebellum, brainstem, two hippocampal regions, and the left centromedial amygdala.
The results of our BSPP model indicate that emotion categories are associated with distinct patterns of activity and co-activation distributed across the brain, such that there is a reliable brain basis for diagnosing instances of each emotion category across the variety of studies within our meta-analytic database. The brain patterns are sufficient to predict the emotion category targeted in a study with moderate to high accuracy, depending on the category, in spite of substantial heterogeneity in the paradigms, imaging methods, and subject populations used.
In addition, the results have much greater specificity than the two-choice classifications that are most common in fMRI studies. For example, though anger has only a 43% sensitivity, it has 99% specificity. In addition, the positive predictive value is above 60% and the negative predictive value above 89% for all categories (see Table 1). This means that if an emotion is classified as an instance of a particular category, there is at least a 60% chance that it truly belongs to the category; and if it is not classified as an instance of a category, there is at least a ~90% chance that it truly is not an instance. However, the major value of the model is not merely in inferring the emotion category from brain data in new studies, but in characterizing a canonical, population-level representation of each emotion category that can constrain the development of theories of emotion and brain-based modeling and prediction in individual studies. Whereas emotion-predictive features developed by multivariate pattern analyses (MVPA) within individual studies can be driven by task- or subject-related idiosyncrasies and fail to generalize, a strength of the representations we have identified is that, because they were trained across heterogeneous instances, they are likely to reflect generalizable features. Below, we discuss the value of the generative BSPP model as a computational technique for characterizing emotion, and the implications for theories of emotion and brain network science.
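These diagnostic quantities follow directly from one-vs-rest confusion counts. A brief sketch (the counts are hypothetical, chosen so that sensitivity and specificity match the 43% and 99% reported for anger; they are not the paper's actual tallies):

```python
def diagnostics(tp, fp, fn, tn):
    """Per-category classification diagnostics from one-vs-rest
    confusion counts (tp/fp/fn/tn = true/false positives/negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # recall for the category
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts reproducing anger's 43% sensitivity / 99% specificity
d = diagnostics(tp=43, fn=57, fp=3, tn=297)
```

Note how a category can have low sensitivity yet high specificity and PPV: few anger studies are caught, but those that are classified as anger are rarely wrong.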
Because it is a generative model, the BSPP model of emotion categories is capable of making predictions about new instances. Other methods—such as the MKDA, ALE, and bivariate co-activation analyses that we and others have developed—are not generative models and would not be expected to perform well in classification.
For example, when using Support Vector Machines to discriminate the five categories, ten separate classifier maps (5-choose-2) are required to predict the category, rather than relying on a single representation of each category and the likelihood that a particular study belongs to it. In addition, the nonlinear SVM model we adapted for meta-analytic classification performs substantially more poorly in classification.
However, unlike data-driven pattern classification models, this model can be queried flexibly—i.e., here we present graphs of bivariate (2-region) co-activation strengths, but other, more comprehensive summaries can be used, including those that were not explicitly used in model training. For example, we demonstrate this by using non-negative matrix factorization (NNMF) to derive canonical profiles of activation across cortical and subcortical systems. We then recalculate the model likelihood according to the new metric of canonical profile activation, without re-fitting the model, and are able to make statistical inferences about the differences among emotion categories in that new feature space. This flexibility is a hallmark of generative Bayesian models: it allows researchers to test new metrics, features, and patterns rather than being limited to a fixed set of features such as pairwise correlations. Beyond these considerations of methodology and broad interpretation, the present findings bear on theories of emotion, and on the ways in which studies look for the hallmarks of particular emotional experiences, in novel and specific ways. We elaborate on some of these below.
The present findings constitute a brain-based description of emotion categories that does not conform to emotion theories based on the phenomenology of emotion. Our findings do not support basic emotion theories, which are inspired by our phenomenology of distinct experiences of anger, sadness, fear, disgust, and happiness that should be mirrored in distinct modules that cause each emotion. According to such theories, each emotion type arises from a dedicated population of neurons that is (1) architecturally separate, (2) homologous with other animals, and (3) largely subcortical or paralimbic (e.g., infralimbic-amygdala-PAG). Many theories also assume that the signature for an emotion type should correspond to activation in a specific brain region or anatomically modular circuit, usually within subcortical tissue. In a recent review of basic emotion theories, Tracy wrote that the "agreed-upon gold standard is the presence of neurons dedicated to an emotion's activation" (p. 398).
The areas of the brain sufficient to represent and classify emotion category in our results are not architecturally separate, and they include cortical networks that may not have any direct homologue in nonhuman primates. Our results suggest these cortical networks act as a bridge between subcortical systems in different ways depending on the emotion category, which is consistent with the anatomy and neurophysiology of cortical-subcortical circuits. Though we do not have the resolution to examine small, isolated populations of cells (see below for more discussion), we are not aware of findings that identify single neurons dedicated to one specific type of emotion within prefrontal, somatosensory, and other networks. Thus, the weight of evidence suggests that cortical networks are centrally and differentially involved in emotion generation and that they are not conserved across species.
Valence, arousal, and approach-avoid orientation are descriptors that are fundamental at the phenomenological level, but not necessarily at the level of brain architecture that we studied here. Thus, theories of emotion have been underconstrained at the neurophysiological level, with an absence of specific human brain data on the necessary and sufficient conditions to differentiate emotion categories, and our findings can inform the evolution of emotion theories in specific ways.
The happiness, sadness, and disgust categories belong to another, distinct group, with preferential activity in the 'somatomotor' and 'ventral attention' (or 'salience') networks. This distinction is pronounced in this dataset, and it cannot be explained by the methodological variations across studies that we examined. Importantly for emotion theories, neither can it be explained by traditional emotion concepts: the 'ventral attention' group includes two negative emotions (disgust and sadness) and one positive one (happiness), and one most often labeled as high-arousal (disgust) and two as low-arousal (happiness and sadness), at least with respect to the in-scanner paradigms typically used in these studies (that is, sadness can be high-arousal, but in-scanner sadness manipulations are typically low-arousal). One emotion is traditionally categorized as approach-related (happiness), and two as avoidance-related (sadness and disgust). The 'dorsal attention' group contains two negative emotions, one traditionally categorized as approach-related (anger) and one as avoidance-related (fear). Thus, the structure that emerges from examining the cortical patterns of activity across emotion categories is not likely to be explainable in terms of any of the traditional phenomenological dimensions used in emotion theories.
The 'dorsal attention' and 'fronto-parietal' networks are consistently engaged by tasks that require the allocation of attentional resources to the external world, particularly as guided by task goals [57,58]. By contrast, the ventral attention network (which largely spatially overlaps with the so-called salience network) includes (a) more ventral frontoparietal regions consistently engaged during exogenously cued, more automatic processing of events, and (b) cingulate, insular, and somatosensory regions (e.g., SII) that are targets of interoceptive pathways carrying information about pain, itch, and other visceral sensations (for reviews, see [30,60]). The default mode network may provide a bridge between conceptual cortical processes and visceromotor, homeostatic, and neuroendocrine processes commonly associated with affect, including the shaping of learning and affective responses based on expectations. It is consistently engaged during conceptual processes such as semantic memory, person perception [64,65], and prospection about future events, as well as in emotion [36,67], valuation, and conceptually driven autonomic and neuroendocrine responses [68–70] and their effects on cognition. Thus, the modal patterns we observed suggest that the anger and fear categories preferentially engage cortical processes that support an 'external orientation/object-focused' schema, characterized by goal-driven responses in which objects and events in the world are in the foreground. By contrast, sadness, happiness, and disgust engage cortical patterns that support an 'internal orientation/homeostasis-focused' schema, characterized by orientation to immediate somatic or visceral experience, which prioritizes processing of interoceptive and homeostatic events. In sum, the new dimension of goal-driven/external object focused vs.
reactive/internal homeostasis-focused, rather than traditional phenomenological dimensions, may be important for capturing distinctions between emotion categories that are respected by gross anatomical brain organization.
The importance of the external/object versus internal/interoceptive dimension is also reflected in the surprising observation that our attempts to classify positive versus negative valence across the entire set largely failed. The finding that emotion categories are a better descriptor than valence categories provides new information about how emotion categories are represented in brain systems. We are in no way claiming that positive and negative valence is unimportant. At the microscopic level, separate populations of neurons within the same gross anatomical structures appear to preferentially encode positively versus negatively valenced events [72–74]. Valence may be an aspect of emotional responses that is particularly important subjectively, but it is not the principal determinant of which brain regions are engaged during emotional experience at an architectural level. By analogy, the loudness of a sound has important subjective and behavioral consequences, but the brain does not contain a separate "loud sound system" and "soft sound system." Because valence is important, it has been widely assumed that the brain must contain separate systems for positive and negative valence; our results suggest that positive and negative valence may instead be aspects of processing within emotion systems. In support of this view, recent work has demonstrated that emotions frequently thought of as univalent, such as sadness, can be experienced as either positive or negative, depending on the context.
Though we focus mainly on the cortex in our interpretations above, we are not claiming that cortical patterns alone are sufficient to fully characterize differences across emotion categories. Cortical-subcortical interactions have been central to theories of emotion since the term 'limbic system' was coined. Subcortical responses are likely equally or more important, and show different organizational patterns. The pattern of cortical-subcortical co-activation differs markedly across emotion categories (see Table 2): in anger, fronto-parietal cortex is co-activated positively with amygdala and cerebellum, and the dorsal attention network is negatively associated with cerebellar activation; in disgust, somatomotor cortex associations with basal ganglia dominate; in fear, visual-subcortical (esp. amygdala) co-activation dominates. And, perhaps most prominently, sadness is characterized by a profound lack of co-activation between cortical and subcortical cerebellar/brainstem networks, and a strong, preserved co-activation of hindbrain (cerebellar/brainstem) systems.
This pattern might provide hints as to why psychopathology, and depression in particular, frequently involves impairments in the ability to describe emotional experience in a fine-grained, contextualized manner (e.g., alexithymia), which is a risk factor for multiple psychiatric conditions (e.g., [79–83]). These patterns also provide a new way of thinking about the reasons for the observed benefits of subgenual cingulate cortical stimulation for depression, as the subgenual cingulate and surrounding ventromedial prefrontal cortex have the densest projections to the brainstem of any cortical region [85,86].
In spite of the existence of topographically mapped prefrontal-cerebellar circuits that play a prominent role in emotion as revealed by human brain stimulation and lesions, the prefrontal-cerebellar-brainstem axis has not been a major focus of recent theoretical and MVPA-based studies of depression (e.g., [17,89]) or emotion more generally (e.g., [14,15]), and is often specifically excluded from analysis. Yet cerebellar connectivity plays a central role in some of the most discriminative whole-brain studies of depressed patients vs. controls to date [19,21], although these latter studies omitted the brainstem (and note that "the functional connectivity of the brainstem should be investigated in the future"). In our results, the brainstem is also critical: in sadness, it is co-activated with the cerebellum only, whereas in other emotions it is much more integrated with the thalamus, basal ganglia, and cortex. Thus, our results can help inform and guide future studies on this system in specific ways.
Our findings place an important constraint on emotion theories that identify emotions with discrete brain regions or circuits. Since the inception of the 'limbic system' concept by Paul MacLean, it has been widely assumed that there is an 'emotional' brain system that encodes experience and is dissociable from systems for memory, perception, attention, etc. Our results provide a compelling and specific refutation of that view. Single regions are not sufficient for characterizing emotions: amygdala responsivity is not sufficient for characterizing fear; the insula is not sufficient for characterizing disgust; and the subgenual cingulate is not sufficient for characterizing sadness. While other meta-analyses have reached this broad conclusion for individual brain regions, we also found that no single network (at least as currently defined from resting-state connectivity; such networks are widely used in inference and classification) is sufficient for characterizing an emotion category, either. Rather, the activity patterns sufficient to characterize an emotion category spanned multiple cortical and subcortical systems associated with perception, memory, homeostasis and visceromotor control, interoception, etc.
But, even beyond the examples we discuss, the rich patterns that emerge from our model can inform future studies of other emotion-specific interaction patterns. For example: the amygdala has often been discussed in relation to fear, but our results demonstrate that the preference for fear is limited mainly to the basolateral amygdalar complex, and other subregions closer to the basal forebrain may play consistent roles across multiple emotions. Happiness is characterized by low cortical co-activation with amygdala, thalamus, and basal ganglia, tight basal ganglia-thalamic integration, and a novel left-hemisphere dominance in the hippocampus that may be related to theories of lateralized valence processing [90,91] (Table 2). In some cases, the results are counterintuitive based on previous emotion theories, and in other cases they are consistent. For example, anger is widely thought to involve increased action tendencies, and based on this one might predict increased activation in somatomotor networks. However, our results paint a picture more consistent with prefrontally mediated, goal-directed regulation of motor systems. Conversely, sadness is associated with reduced feelings of somatomotor activity in the limbs, consistent with overall 'de-energization' and internal focus. Consistent with this, both major motor-control systems (basal ganglia and cerebellum) are more isolated from the cortex and limbic system in sadness than in any other emotion.
The broad architecture of these networks was predicted a priori by the Conceptual Act Theory, part of a new family of constructionist theories hypothesizing that anger, sadness, fear, disgust, and happiness are not biological types arising from dedicated brain modules, but arise from interactions of anatomically distributed core systems [30,31,94–96]; however, the specific patterns and interrelationships involved are just beginning to be discovered. Even the broad principles of this architecture do not conform to predictions of basic emotion theories, nor of appraisal theories, which imply that there is one brain system corresponding to specific aspects of cognitive appraisal (e.g., valence, novelty, control, etc.). We believe that previous theories of emotion have been underconstrained by brain data, and the present findings constitute a specific set of constraints that may be integrated into future theories of emotion. In addition, the multi-network emotion representations we identify here paint a picture of emotion that underscores the importance of NIMH's recent RDoC approach, as well as recent papers taking a network approach to psychopathology [19,97–100], and they provide a specific template for testing network-topological predictions about the ingredients of emotion and the category-level responses that emerge from their interactions.
Fear was the most accurately classified category overall, at 86%, whereas anger was the least accurate, at 43%. These findings could indicate heterogeneity in the categories themselves; however, they could also reflect the signal detection properties of the test itself, as we explain below. Thus, it is premature to make strong claims about the diversity/heterogeneity of the emotion categories based on these results. Heterogeneity in the representation across categories occurs both because some of the methods used to elicit emotion are more diverse (S2 Table) and because the categories are likely to be inherently psychologically and neurally diverse.
Thus, there may be multiple types of ‘anger’ that activate different subsets of regions and networks. What we observe is the population average across these potentially disparate features. This is analogous to dwellings containing disparate architectural features (e.g., an igloo vs. a castle) being grouped into a common category (‘dwelling’) because of their cultural functions rather than their architectures. On a more mundane level, the ways in which researchers choose to study emotion categories can also contribute to the observed diversity (and reduced classification accuracy); researchers studying fear, for example, tend to sample very similar instances by using a small range of fear-inducing methods, whereas anger is elicited in more diverse ways.
Sensitivity and specificity can always be traded off by changing the decision threshold for labeling an instance as 'anger,' 'fear,' etc., and accuracy in 5-way classification is more closely related to sensitivity than specificity. Here, anger has the lowest sensitivity (43%) but the highest specificity (99%, Table 1): studies that are not anger are almost never categorized as anger. Such differences in threshold preclude strong claims about the diversity/heterogeneity of the emotion categories themselves based on these results. However, we should not be too quick to attribute all differences in decoding accuracy to methodological artifacts; true differences in category heterogeneity may exist as well.
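The sensitivity/specificity distinction above can be made concrete with a small calculation on a confusion matrix. The counts below are illustrative placeholders (only the row totals and the anger and fear recall values are matched to numbers reported in this paper), not the study's actual confusion matrix:

```python
import numpy as np

# Hypothetical 5-way confusion matrix (rows = true category, cols = predicted).
labels = ["anger", "disgust", "fear", "happy", "sad"]
conf = np.array([
    [30, 10, 12,  9,  8],   # anger   (row sums to 69 maps)
    [ 5, 45,  8,  6,  5],   # disgust (69 maps)
    [ 2,  4, 83,  4,  4],   # fear    (97 maps)
    [ 6,  7,  5, 52,  7],   # happy   (77 maps)
    [ 4,  6,  6,  5, 44],   # sad     (65 maps)
])

for i, name in enumerate(labels):
    tp = conf[i, i]
    fn = conf[i].sum() - tp        # true cases missed
    fp = conf[:, i].sum() - tp     # other cases mislabeled as this category
    tn = conf.sum() - tp - fn - fp
    sensitivity = tp / (tp + fn)   # recall for this category
    specificity = tn / (tn + fp)   # correct rejections of other categories
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Note that a category can have low sensitivity and high specificity at the same time, which is exactly the pattern described for anger above.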
First, our analyses reflect the composition of the studies available in the literature, and are subject to testing and reporting biases on the part of authors. This is particularly true for the amygdala (e.g., the activation intensity for negative emotions may be overrepresented in the amygdala given the theoretical focus on fear and related negative states). However, the separation of emotion categories in the amygdala was largely redundant with information contained in cortical patterns, which may not be subject to the same biases. Likewise, other interesting distinctions were encoded in the thalamus and cerebellum, which have not received the theoretical attention that the amygdala has and are likely to be relatively free of such biases.
Some regions, particularly the brainstem, are likely to be much more important for understanding and diagnosing emotion than is apparent in our findings, because neuroimaging methods are only now beginning to focus on the brainstem with sufficient spatial resolution and artifact-suppression techniques (Satpute et al., 2013). Other areas that are likely to be important, such as the ventromedial prefrontal cortex (e.g., BA 25 and posterior portions of medial OFC), are subject to signal loss and distortion, and are likely to be underrepresented.
A meta-analytic result is only as good as the data from which it is derived, and a brief look at S1 Fig indicates that there are some systematic differences in the ways researchers have studied (and evoked instances of) different emotion categories. We have tried to systematically assess the influence of methodological differences in this paper, but our ability to do so is imperfect. However, though we cannot rule out all possible methodological differences, we should not be too quick to dismiss findings in 'sensory processing' areas, etc., as methodological artifacts. Emotional responses may be inherently linked to changes in sensory and motor cortical processes that contribute to the emotional response. This is a central feature of both early and modern embodiment-based theories of emotion [92,102–104]. In addition, most major theories of emotion suggest that there are systematic differences in cognitive, perceptual, and motor processes across emotion categories; and in some theories, such as the appraisal theories, those differences are inherently linked to or part of the emotional response. Finally, the results we present here provide a co-activation based view of emotion representation that can inform models of functional connectivity. However, co-activation is not the same as functional connectivity. The gold-standard measures of direct neural connectivity use multiple single-unit recording or optogenetics combined with single-unit electrophysiology to identify direct neural connections with appropriate latencies (e.g., < 20 msec). Much of the information processing in the brain that creates co-activation may not relate to direct neural connectivity at all, but rather to diffuse modulatory actions (e.g., dopamine and neuropeptide release, much of which is extrasynaptic and results in volume transmission).
Thus, the present results do not imply direct neural connectivity, and may be related to diffuse neuromodulatory actions as well as direct neural communication. However, these forms of brain information processing may be important in their own right.
Activation foci are coordinates reported in Montreal Neurological Institute standard anatomical space (or transformed from Talairach space). Foci are nested within study activation maps: maps of group comparisons between an emotion or affect-related condition and a less intense or affectively neutral comparison condition. We used the foci associated with study activation maps to predict each map's associated emotion category. Studies were all peer-reviewed and were identified in journal databases (PubMed, Google Scholar, and MEDLINE) and in reference lists from other studies. A subset of studies that focused specifically on the most frequently studied emotion categories was selected (148 studies, 377 maps, 2519 participants). Categories included anger (69 maps), disgust (69 maps), fear (97 maps), happiness (77 maps), and sadness (65 maps).
We model the foci (peak activation locations) as the offspring of a latent study center process associated with a study activation map. The study centers are in turn offspring of a latent population center process. The posterior intensity function of the population center process provides inference on the location of population centers, as well as the inter-study variability of foci about the population centers.
At level 1, for each study, we assume the foci are a realization of an independent cluster process driven by a random intensity function. These processes are independent across studies. The study-level foci are of two types: singly reported foci and multiply reported foci. For a given activation area in the brain, some authors report only a single focus, while others report multiple foci; this information is rarely provided explicitly in the literature. These differences are attributable to how different software packages report results and to author preference. We assume that multiply reported foci cluster about a latent study activation center, while the singly reported foci either cluster about a latent population center or are uniformly distributed in the brain. At level 2, we model the latent study activation center process as an independent cluster process. We assume that the latent study activation centers either cluster about the latent population center or are uniformly distributed in the brain. At level 3, we model the latent population center process as driven by a homogeneous random intensity (a homogeneous Poisson process). The points that may cluster about the population centers are singly reported foci from level 1 and study activation centers from level 2.
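The three-level generative structure described above can be sketched as a toy forward simulation. All rates, spreads, and the box-shaped "brain" are hypothetical, and the uniform-noise components and the singly/multiply reported distinction are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 3: population centers from a homogeneous Poisson process in a toy
# 3-D "brain" box (the rate and box size are illustrative, not the paper's).
box = np.array([100.0, 100.0, 100.0])
n_pop = rng.poisson(5)
pop_centers = rng.uniform(0, box, size=(n_pop, 3))

def sample_offspring(parents, sd, mean_count):
    """Gaussian offspring clustered around each parent point."""
    pts = []
    for p in parents:
        k = rng.poisson(mean_count)
        pts.append(p + rng.normal(0, sd, size=(k, 3)))
    return np.vstack(pts) if pts else np.empty((0, 3))

# Level 2: study activation centers cluster around population centers.
study_centers = sample_offspring(pop_centers, sd=5.0, mean_count=3)

# Level 1: reported foci cluster around their study activation center.
foci = sample_offspring(study_centers, sd=2.0, mean_count=2)

print(f"{n_pop} population centers -> {len(study_centers)} study centers "
      f"-> {len(foci)} foci")
```

Inference in the actual model runs in the opposite direction: given observed foci, it recovers the posterior over the latent study and population centers.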
In particular, we use spatial birth and death processes nested within a Markov chain Monte Carlo simulation algorithm. The details of the algorithm and pseudocode are provided in [].
Foci reported in each emotion category can be modeled as an emotion-specific spatial point process. This leads to a joint model for foci from studies with different categories of emotions, and it can be used to classify the emotion category of studies given their observed foci, by choosing the emotion category that maximizes the posterior predictive probability. More specifically, suppose we have $n$ studies; let $F_i$ and $E_i$ denote the foci and the emotion category for study $i$, respectively, for $i = 1, \ldots, n$. The BSPP model specifies the probability $\pi(F_i \mid E_i, \theta)$, where $\theta$ represents the collection of all the parameters in the BSPP model. The posterior predictive probability of emotion category $e$ for a new study $n+1$ is then

$$\Pr\!\big(E_{n+1} = e \mid (F_i, E_i)_{i=1}^{n}, F_{n+1}\big) \;\propto\; \pi\!\big(F_{n+1} \mid E_{n+1} = e, (F_i, E_i)_{i=1}^{n}\big)\,\Pr(E_{n+1} = e).$$
We conduct Bayesian learning of the model parameters on the foci reported from a set of training studies consisting of all studies except a left-out study k. We then make a prediction for study k based on its reported brain foci. We repeat the procedure for each study k = 1, ..., K and compute the classification rate across all studies. This procedure can be very computationally expensive for a Bayesian model, since it involves multiple posterior simulations; we employ an importance sampling method to substantially reduce the computation. See [] for details.
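A minimal sketch of this leave-one-study-out procedure, using a kernel-density stand-in for the BSPP predictive probability (the actual model and its importance-sampling shortcut are far more involved; all functions and toy data here are hypothetical):

```python
import numpy as np

CATEGORIES = ["anger", "disgust", "fear", "happy", "sad"]

def fit_model(train_studies, train_labels):
    """Stand-in for BSPP fitting: pool each category's training foci."""
    out = {}
    for c in CATEGORIES:
        foci = [s for s, l in zip(train_studies, train_labels) if l == c]
        out[c] = np.vstack(foci) if foci else np.empty((0, 3))
    return out

def log_predictive(model, study_foci, category, bw=10.0):
    """Toy Gaussian-kernel predictive log-score of a study under one category."""
    pool = model[category]
    if len(pool) == 0:
        return -np.inf
    score = 0.0
    for f in study_foci:
        d2 = ((pool - f) ** 2).sum(axis=1)
        score += np.log(np.exp(-d2 / (2 * bw**2)).mean() + 1e-300)
    return score

def loo_accuracy(studies, labels):
    """Leave one study out, refit, classify it by maximum predictive score."""
    hits = 0
    for k in range(len(studies)):
        model = fit_model(studies[:k] + studies[k+1:], labels[:k] + labels[k+1:])
        pred = max(CATEGORIES, key=lambda c: log_predictive(model, studies[k], c))
        hits += (pred == labels[k])
    return hits / len(studies)

# Toy data: 6 studies per category, 8 foci each, clustered near a random center.
rng = np.random.default_rng(1)
centers = {c: rng.uniform(0, 100, 3) for c in CATEGORIES}
studies, labels = [], []
for c in CATEGORIES:
    for _ in range(6):
        studies.append(centers[c] + rng.normal(0, 3, size=(8, 3)))
        labels.append(c)

acc = loo_accuracy(studies, labels)
print(f"LOO accuracy on toy data: {acc:.2f}")
```

On well-separated toy clusters the accuracy is near ceiling; the point of the sketch is the cross-validation structure, not the score function.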
These networks covered the entire cerebrum, excluding white matter and ventricles. Within each of the 49 regions, we calculated the average BSPP intensity value across voxels for each emotion category. Analyses of the mean intensity across regions/networks are visualized in Figs. 2 and S3 and S1 Text. Calculation of region/network mean intensity: for Markov chain Monte Carlo (MCMC) iterations $t = 1, \ldots, T$ ($T = 10{,}000$ in this analysis), regions/networks $r = 1, \ldots, R$, and emotion categories $c = 1, \ldots, C$ ($C = 5$ in this analysis), let $M_c$ be a $T \times R$ matrix of mean intensity values in each region for each iteration for emotion $c$. We calculated the mean intensity 'signature across regions' for each emotion category as the vector of column means,

$$\bar{M}_c = \left[\tfrac{1}{T}\textstyle\sum_{t=1}^{T} M_c(t,1), \;\ldots,\; \tfrac{1}{T}\textstyle\sum_{t=1}^{T} M_c(t,R)\right],$$

which are shown for subsets of regions in Figs. 2 and S3. In addition, the matrix $M_c$ contains samples from the joint posterior distribution of regional intensity values that can be used for visualization and statistical inference. Mean intensity values for each region/network served as nodes in co-activation analyses.
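As a sketch, the signature computation reduces to column means over MCMC samples (dimensions as in the text, except a smaller T for speed; the intensity values below are random placeholders, not model output):

```python
import numpy as np

# Toy dimensions: the paper uses T = 10,000 MCMC iterations, R = 49
# regions/networks, and C = 5 emotion categories.
T, R, C = 1000, 49, 5
rng = np.random.default_rng(2)
M = rng.gamma(2.0, 1.0, size=(C, T, R))  # M[c] stands in for the T x R matrix M_c

# Mean intensity 'signature across regions': column means of each M_c.
signatures = M.mean(axis=1)              # shape (C, R), one signature per emotion
print(signatures.shape)
```

Keeping the full `M[c]` rather than only the means is what enables the permutation and difference tests described next, since it preserves the joint posterior samples.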
Visualizations of the configuration of regions associated with each emotion (Figs. 4, S4) were performed by estimating the pairwise correlations among all regions $r = 1, \ldots, R$ across MCMC iterations (e.g., for regions $i$ and $j$, the correlation between the columns $M_c(\cdot, i)$ and $M_c(\cdot, j)$). Thresholded correlations served as edges in co-activation analyses. The correlations were thresholded using a permutation test, as follows: we permuted the rows of each vector $M_c(\cdot, r)$, for $r = 1, \ldots, R$, independently, and calculated the maximum correlation coefficient across the $R \times R$ correlation matrix for each of 1000 iterations. The 95th percentile of this distribution was used as the threshold, which controlled for matrix-wise false positives at $P < .05$, family-wise error rate corrected. The most stringent threshold across emotion categories ($r > 0.0945$, for sadness) was used for all emotion categories, to maintain a consistent threshold across graphs. The location of each node (region/network) on the graph was determined by applying the Fruchterman-Reingold force-directed layout algorithm (as implemented in the MatlabBGL toolbox by David Gleich) to the thresholded correlation matrix.
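The permutation-based thresholding can be sketched as follows (toy posterior samples; the paper uses 1000 permutations and T = 10,000 samples, reduced here for speed):

```python
import numpy as np

rng = np.random.default_rng(3)
T, R = 1000, 49
Mc = rng.normal(size=(T, R))             # stand-in posterior samples, one emotion

# Observed co-activation: pairwise correlations across MCMC iterations.
obs_corr = np.corrcoef(Mc, rowvar=False)

# Permutation null: independently shuffle each region's samples, then take
# the maximum off-diagonal correlation per iteration (max statistic controls
# the matrix-wise, family-wise error rate).
n_perm = 200
max_r = np.empty(n_perm)
for p in range(n_perm):
    perm = np.column_stack([rng.permutation(Mc[:, r]) for r in range(R)])
    cc = np.corrcoef(perm, rowvar=False)
    np.fill_diagonal(cc, 0.0)
    max_r[p] = cc.max()

threshold = np.percentile(max_r, 95)     # FWE-corrected at P < .05
edges = obs_corr > threshold             # adjacency for the co-activation graph
print(f"threshold r > {threshold:.4f}")
```

Because every region's samples are shuffled independently, the null distribution reflects chance correlations only, so surviving edges indicate genuine co-activation in the joint posterior.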
The MCMC sampling scheme also provided a means of making statistical inferences on whether the multivariate pattern of regional intensity differed across pairs of emotion categories (S1 Table). For any given pair of emotion categories $i$ and $j$, the difference between the intensity fingerprints is given by the vector $\bar{M}_d = \bar{M}_i - \bar{M}_j$. As the elements of $M$ are samples from the joint posterior distribution of intensity values, statistical inference on the difference $\bar{M}_d$ depends on its statistical distance from the origin, which is assessed by examining the proportion $P$ of the samples that lie on the opposite side of the origin from $\bar{M}_d$, adjusting for the fact that the mean $\bar{M}_d$ could occur in any of the $2^R$ quadrants of the space defined by the regions. This yields a nonparametric P-value for the difference in posterior intensity profiles across regions; it is an analogue to the multivariate F-test in parametric statistics. The test can be conducted across profiles within a set of regions/networks (e.g., cortical networks shown in Fig 4A), across all regions, or across the intensity scores on nonnegative components from the MCMC algorithm, and it is subjected to false discovery rate control at $q < .05$ across the pairwise comparisons of activation patterns across regions.
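One plausible reading of this posterior test, sketched on toy samples: the projection-based "opposite side of the origin" criterion and the simple two-sided adjustment below are assumptions, since the exact quadrant correction is not fully recoverable from the text.

```python
import numpy as np

rng = np.random.default_rng(4)
T, R = 10000, 8                          # toy: 8 regions instead of 49
Mi = rng.normal(0.5, 1.0, size=(T, R))   # stand-in posterior samples, emotion i
Mj = rng.normal(0.0, 1.0, size=(T, R))   # stand-in posterior samples, emotion j

Md = Mi - Mj                             # samples of the difference profile
mean_d = Md.mean(axis=0)                 # the mean difference vector

# Count samples whose projection onto the mean difference direction is
# negative, i.e., that fall on the opposite side of the origin from mean_d.
opposite = (Md @ mean_d) < 0
p_value = 2 * opposite.mean()            # crude two-sided-style adjustment
print(f"nonparametric P = {p_value:.4f}")
```

As with the permutation threshold above, the test uses the joint posterior samples directly, so no parametric distributional assumptions are needed.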
Here, we used it to decompose the matrix of activation intensities for each of the five emotions across subgroups of 49 regions into simpler, additive 'profiles' of activation, shown in polar plots in Figs. 2 and S3. The activation matrix A is decomposed into two component matrices, W (n × k) and H (k × m), whose elements are nonnegative, such that A ≈ WH, with the number of components k selected a priori (here, k = 2 for interpretability and visualization). The squared error between A and WH was minimized via an alternating least squares algorithm with multiple starting points. The rows of H constitute the canonical profiles shown in the figures, and emotion-specific activation intensity values from the BSPP model are plotted in the 2-dimensional space of the two recovered canonical activation profiles. NNMF is a particularly appropriate and useful decomposition technique here because activation intensity is intrinsically nonnegative [110,111]. In such cases, NNMF identifies components that are more compact and interpretable than principal components analysis (PCA) or independent components analysis (ICA), and better reflect human intuitions about identifying parts that can be additively combined into wholes. Here, the parts reflect interpretable, canonical activation profiles, and the whole is the observed activation profile for each emotion category across multiple brain systems.
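A minimal alternating-least-squares NNMF in the spirit described above. Nonnegativity is enforced here by clipping after each least-squares step, a simplification of a properly constrained, multi-start ALS; the input matrix is a random placeholder, not the paper's intensity estimates.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy activation matrix: 5 emotions x 49 regions, nonnegative intensities.
A = rng.gamma(2.0, 1.0, size=(5, 49))

def nmf_als(A, k=2, n_iter=200, seed=0):
    """Minimal alternating least squares NNMF with clipping to nonnegative."""
    rg = np.random.default_rng(seed)
    n, m = A.shape
    W = rg.random((n, k)) + 0.1
    H = rg.random((k, m)) + 0.1
    for _ in range(n_iter):
        # Solve W @ H = A for H, then H.T @ W.T = A.T for W, clipping each time.
        H = np.clip(np.linalg.lstsq(W, A, rcond=None)[0], 1e-9, None)
        W = np.clip(np.linalg.lstsq(H.T, A.T, rcond=None)[0].T, 1e-9, None)
    return W, H

W, H = nmf_als(A)          # W: 5 x 2 loadings; H: 2 x 49 canonical profiles
rel_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
print(f"relative reconstruction error: {rel_err:.2f}")
```

The rows of H play the role of the canonical activation profiles, and each emotion's row of W gives its coordinates in the 2-dimensional profile space used for plotting.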
Support Vector Machine analyses. The Bayesian Spatial Point Process Model classification results are compared against the support vector machine-based classification described here. (PDF)
Classification accuracy tables and confusion matrices for five-way emotion classification for several methods. Results from the Bayesian Spatial Point Process Model [38,106] are the focus of this paper, and other methods are included for comparison purposes. Row labels reflect the true category, and column labels the predicted category. Diagonals (red) indicate accuracy or recall proportions. Off-diagonals indicate error proportions. *: Accuracy is significantly above chance based on a binomial test. (PDF)
A summary of statistical tests on co-activation patterns in the network analysis. (PDF)
Emotion classification based on methodological variables. (PDF)
A) Map of relationships between emotion categories and methodological variables, including emotion elicitation method, stimulus type used, participant sex, and imaging technique. Colors represent proportions of studies in a given emotion category that involved each method variable; columns each sum to 1 (100% of studies) within each method variable. Ideally, stimulus category and other methodological variables would be evenly distributed across the five emotion categories (i.e., each row would be approximately evenly colored across emotion categories), although this is rarely achieved in practice, and the distribution depends on how investigators have chosen to conduct studies. See S2 Table for additional information about classification of emotion type from these methodological variables. (PDF)
Intensity maps for each of the five emotion categories. Intensity maps reflect the distribution of study activation centers (Level 2 in the Bayesian model) across brain space. They are continuously valued across space, though they are sampled in voxels (2 × 2 × 2 mm), and the integral of the intensity map over any area of space reflects the expected number of study-level centers for that emotion category. Brighter colors indicate higher intensity, and the maps are thresholded at a value of 0.001 for display.
Subcortical zones and activation intensity profiles within each. A) Left: basal ganglia regions of interest based on the Buckner Lab's 1000-person resting-state connectivity analyses (9–11), along with amygdala and parahippocampal/hippocampal regions of interest based on the probabilistic atlas of Amunts et al. (12) (see Main Text). Network labels follow the conventions used in the Buckner Lab papers. Right: intensity maps for each emotion category displayed on surface images of the basal ganglia, amygdala, and hippocampus. B) Intensity profiles for each emotion category (colored lines, colors as in (A)) in subcortical zones. Values towards the perimeter of the circle indicate higher activation intensity. In the amygdala, LB: basolateral complex; CM: corticomedial division; SF: superficial division. L: left; R: right hemisphere. In the hippocampus: FD, dentate; CA, Cornu Ammonis; SUB, subicular zone. C) Intensity distribution for each emotion (colors) in the space of the first two factors from nonnegative matrix factorization of the intensity profiles. The colors correspond to the emotion category labels in (A), and the colored areas show the 95% confidence region for the activation intensity profiles. (PDF)
Connectivity graphs for each emotion category, as in Fig. 3, but with labels for all regions/networks. A) anger; B) disgust; C) fear; D) happy; E) sad. The layouts are force-directed graphs for each emotion category, based on the Fruchterman-Reingold spring algorithm. The nodes (circles) are regions or networks, color-coded by anatomical system. The edges (lines) reflect co-activation between pairs of regions or networks, assessed based on the joint distribution of activation intensity in the Bayesian model at P < .05, corrected based on a permutation test. The size of each circle reflects its betweenness-centrality (ref), a measure of how strongly it connects disparate networks. Region labels are as in Figs. 2 and S3. Colors: cortex, red; basal ganglia, green; cerebellum/brainstem, blue; thalamus, yellow; amygdala, magenta; hippocampus, cyan/light blue. Network names follow the convention used in the Buckner Lab's resting-state connectivity papers: V, visual network/zone (in cortex and connected regions in basal ganglia and cerebellum); dA, dorsal attention; vA, ventral attention; FP, fronto-parietal; S, somatosensory; DM, default mode; L, limbic. Thalamic regions are from the connectivity-based atlas of Behrens et al. (13). Tem, temporal; Som, somatosensory; Mot, motor; Pmot, premotor; Occ, occipital; PFC, prefrontal cortex. (PDF)
S5 Fig. Average correlation values within and between region groups, and relationship with global network efficiency and regional activation intensity. Average co-activation within and between each region/network grouping, for comparison to global network efficiency values based on path length in Fig. 3. Top: average correlation in regional intensity across 10,000 MCMC samples in the Bayesian model. These correlations provide a measure of co-activation across disparate brain networks. The overall pattern is similar to Fig. 3; however, the average correlation does not reflect some of the structure captured in global efficiency and reflected in the graphs in Fig. 3. Bottom left: average correlation is related to global efficiency across network groups and emotion categories (r = 0.76). Each point reflects an element of the matrices in the top panel. Bottom right: global efficiency is unrelated to average activation intensity within the regions being correlated (r = 0.02), indicating that the efficiency metric used in the main manuscript provides information independent of the marginal activation intensity.