The aim of the present study was to investigate the influence of contextual cues on our evaluation of facial expressions of emotion. To do so, we connected the field of research on emotion perception with the field of research on the perception of films. Specifically, we aimed to replicate the Kuleshov effect. In order to study the context-sensitivity of emotions under more ecologically valid conditions, we used dynamic scenes as contextual stimuli.
Participants were shown 18 film sequences of neutral faces across three emotional context conditions: Neutral, Happiness, and Fear. We adopted both a dimensional and a categorical approach to emotion. Our results confirmed the presence of a significant effect, in terms of both valence and arousal, for the Fear context only.
More specifically, participants rated neutral faces in fearful contexts as significantly more negative and more arousing than neutral faces in either neutral or happy contexts. Hence, from a dimensional point of view our results indicate a significant effect only when neutral faces were paired with fearful contexts, whereas from a categorical point of view our participants tended to choose the emotion categories congruent with the preceding context even when neutral faces were paired with happy contexts.
On the basis of the affective prediction hypothesis (Barrett and Bar; Barrett et al.), one could argue that the emotional context generates affective predictions that shape the perception of the neutral face itself. In our view, however, a more suitable explanation for the Kuleshov effect is that the context triggers the arousal and the emotional reaction in the observer, who then attributes an emotional value to a neutral face.
Our results also differ from those of Barratt et al. in the choice of emotional contexts. More specifically, we adopted fear as a negative emotion because, from an evolutionary point of view, it is capable of directing our attention to potentially dangerous stimuli such as the scenarios depicted in our fearful contexts.
In this regard, an interesting explanation is provided by the motivated attention theory (Lang et al.), according to which emotional stimuli engage defensive or appetitive motivational circuits that in turn capture and direct attention. The activation of these motivational circuits can be elicited also by pictures, and the resulting aversive response is defined by modulations in self-report, physiological, and behavioral systems. We suggest that the same mechanisms are elicited when using fear-related videos, thus explaining our results.
For these reasons, future studies aiming to assess this effect using fearful and phobic contexts should include an evaluation of phobic traits by means of dedicated questionnaires. The absence of a significant modulation of valence and arousal ratings when neutral faces were paired with happy contexts could be ascribed to the kind of positive scenarios we proposed to our participants. Indeed, among stimuli rated as pleasant, erotic materials elicit the strongest affective reactions (Bradley et al.).
Of note, Barratt et al. found their strongest effects for the desire contexts. In our opinion, altogether these results seem to suggest that this kind of contextual effect emerges more clearly when strongly arousing emotional contexts are employed as stimuli.
Future studies should further clarify this aspect. Taken together, our results again highlight the context-sensitivity of emotions and the importance of studying them under ecologically valid conditions.
A goal for future studies will be to investigate this effect in different modalities, creating auditory emotional contexts in order to compare the capacity of the visual and auditory modalities to influence the comprehension of facial expressions.
As far as we know, there has been only one previous study dedicated to investigating the role of sound in the evaluation of facial expressions in films using Kuleshov-type experimental sequences (Baranowski and Hecht). They asked participants to rate the emotional state of the actor on the six basic emotions, thus adopting a categorical approach only.
Moreover, they employed an experimental design suited to investigating the multisensory integration of music and facial expressions, which for this reason differed from the original Kuleshov sequences. Thus, despite their encouraging results, future studies should further assess the role of the auditory modality in the comprehension of facial expressions. Moreover, since little has been done to explore such contextual modulations of emotion processing at the physiological level, it would be important to use time-sensitive measures, such as electroencephalography (EEG; Wieser and Brosch), to further investigate the interaction between contextual cues and the comprehension of facial expressions. We think that our advanced and more ecological design will be of great help in developing new studies to better understand emotion processing in humans.
MC wrote the paper. All authors have contributed to, seen and approved the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This research was supported by a grant from the Chiesi Foundation to VG.
This article was submitted to Theoretical and Philosophical Psychology, a section of the journal Frontiers in Psychology.
Abstract: Facial expressions are of major importance in understanding the mental and emotional states of others. Keywords: facial expressions, emotion, contexts, film editing, Kuleshov effect. Introduction: Although there are many theories related to emotions and their comprehension, the present study focuses on the idea that facial expressions are of major importance in understanding the mental and emotional states of others.
More specifically, my aim is to show that the salience of different features and dimensions will vary with context.
Where context can be controlled, such features should influence product comparisons in predictable directions. After outlining a psychological theory that describes similarity judgments as the result of a feature matching process, I test the generality of the theory in a consumer products context. The results support such a feature-based approach to product perception and attest to the sensitivity of perception to changes in context.
A dimensionally based technique such as multidimensional scaling (MDS) presumes a full analogy between the cognitive concepts of similarity and dissimilarity on the one hand, and the Euclidean geometry of spatial proximity and distance on the other (Cunningham and Shepard). Specifically, MDS postulates that perceived similarity among objects is a monotonically decreasing function of the distances between those objects represented as points in an n-dimensional metric space.
Therefore, such a method is based on both dimensional and metric assumptions. However, Tversky has demonstrated that these assumptions may be inappropriate in describing people's perception of similarity. The metric assumptions require that similarity be symmetric. That is, the similarity of a, the subject, to b, the referent, is necessarily identical to the similarity of b to a. The metric models also assume that perceived similarities among objects should be perfectly negatively correlated with the corresponding perceived dissimilarities.
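To make these dimensional and metric assumptions concrete, the following is a minimal illustrative sketch (not taken from the paper): it embeds a made-up dissimilarity matrix for four hypothetical products with metric MDS and then recovers the inter-point distances, which are symmetric by construction; this is exactly the property that Tversky questioned for judged similarity.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Made-up dissimilarity ratings for four hypothetical products (A-D).
dissim = np.array([
    [0.0, 2.0, 5.0, 6.0],
    [2.0, 0.0, 4.0, 5.5],
    [5.0, 4.0, 0.0, 1.5],
    [6.0, 5.5, 1.5, 0.0],
])

# Metric MDS places each product as a point in a low-dimensional space such
# that inter-point distances approximate the input dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
points = mds.fit_transform(dissim)

# The modeled dissimilarities are Euclidean distances, so they are forced to
# be symmetric: modeled[i, j] == modeled[j, i] for every pair of products.
modeled = squareform(pdist(points))
print(np.round(modeled, 2))
```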
In many cases, however, neither of these assumptions holds. He also showed that when two stimuli in a group appeared to have both more common and more distinctive features than any other pair of stimuli in the group, these two were often judged to be both the most similar and the most dissimilar pair in the group. For instance, the USSR and the United States received both the highest similarity and the highest dissimilarity ratings among the countries judged by Tversky's subjects.
Such findings lead one to question the general applicability of both the dimensional and metric assumptions of MDS. Indeed, dimensional interpretations may be inappropriate for a large class of product categories and, if applied, will produce misleading product spaces.
Tversky offers an alternative description of how people make similarity judgments. Judging similarity is a feature matching process. It seems more appropriate to represent faces, countries, or personalities in terms of many qualitative features than in terms of a few quantitative dimensions. The assessment of similarity between such stimuli, therefore, may be better described as a comparison of features rather than as the computation of metric distance between points (Tversky).
When faced with a similarity task, people extract and compile from remembered information a limited list of relevant features on the basis of which they perform the required task.
As formally stated by Tversky, the similarity between two objects, s(a, b), where a and b are associated with feature sets A and B respectively, is:

s(a, b) = θf(A ∩ B) - αf(A - B) - βf(B - A),

where f measures the salience of the relevant feature set. That is, the similarity between two objects is a function of their common features (A ∩ B), the features of a but not of b (A - B), and the features of b but not of a (B - A). Similarity can be a function of just the common features, just the distinctive features, or both, depending on the values of the parameters θ, α, and β. Because format can influence the value of these parameters, Tversky's theory has the power to account for diverse empirical observations.
In other words, when one stimulus is the subject and the other the referent, the distinctive features of the referent do not receive as much weight in the overall similarity judgment as the distinctive features of the subject. When, in addition, one stimulus has more distinctive features than the other, an asymmetry occurs.
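As an illustration of the contrast model and of this asymmetry, here is a minimal Python sketch; the feature sets, the parameter values, and the choice of f as simple set cardinality are assumptions made for the example, not material from the study.

```python
def tversky_similarity(A, B, theta=1.0, alpha=0.8, beta=0.2, f=len):
    """Contrast model: s(a, b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A),
    where A is the subject's feature set and B the referent's."""
    return theta * f(A & B) - alpha * f(A - B) - beta * f(B - A)

# Two hypothetical products: b has more distinctive features than a.
a = {"compact", "affordable"}
b = {"compact", "affordable", "touchscreen", "long battery life", "well-known brand"}

# Because the referent's distinctive features are weighted less (beta < alpha),
# the product with fewer distinctive features seems more similar to the richer
# one than the other way around.
print(tversky_similarity(a, b))  # a compared to b: 1.0*2 - 0.8*0 - 0.2*3 = 1.4
print(tversky_similarity(b, a))  # b compared to a: 1.0*2 - 0.8*3 - 0.2*0 = -0.4
```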
In the same vein, Wieser and Brosch highlighted how faces and facial expressions are always perceived in a wider context, involving not only within-face features but also cues external to the face, such as the surrounding scene. This is also in line with the behavioral ecology view of facial expressions. Already in the early twentieth century, the Soviet filmmaker Kuleshov argued that such situational context could significantly change our interpretation of facial expressions. The story has been passed on as a demonstration of contextual priming in movies, also known as the Kuleshov effect (Carroll; Barratt et al.).
In film-editing terms, a Kuleshov-type sequence can be regarded more precisely as an instance of point-of-view (POV) editing. To our knowledge, there have been only three previous attempts at replicating the original Kuleshov experiment. More recently, Mobbs et al. addressed the effect with functional magnetic resonance imaging (fMRI). Participants were asked to rate the emotional expression and mental state of a still image of a face, crosscut with an emotional image, using a two-dimensional rating scale (see the dimensional approach).
Behavioral and fMRI results substantiated the Kuleshov effect with higher ratings of valence and arousal for faces paired with positive and negative contexts than for those paired with neutral contexts, and enhanced BOLD responses in several brain regions including the amygdala.
However, as stressed by Barratt et al., this design differed in several respects from the original Kuleshov-type sequences. Bearing in mind these limitations, Barratt et al. designed an experiment that more closely reproduced the original paradigm, pairing neutral faces with emotional contexts. As the contexts could be either static or dynamic objects, the authors used either a photograph with a slow zoom-in effect or a video clip. During the experiment, eye movements were recorded.
Results showed significant behavioral effects pointing in the expected direction from both a categorical and a dimensional point of view. Specifically, neutral faces paired with sad contexts were rated as the most negative and the least aroused, while neutral faces paired with desire contexts were perceived as the most positive and the most aroused (Barratt et al.). With the present study, we aimed to investigate and explore further Barratt et al.'s findings, while introducing some variations to their original paradigm.
Furthermore, we aimed at verifying that the effect persists despite these variations, in order to employ the same experimental paradigm in a future electroencephalographic study exploring contextual modulations of emotion processing at both the physiological and cortical levels.
Participants were shown 18 film sequences of neutral faces crosscut with scenes evoking two different emotions (happiness and fear), plus neutral stimuli as a baseline condition. Hence, from a dimensional point of view, participants rated the valence and arousal of each neutral face, and from a categorical point of view they also selected the emotion category that best described it. We employed only two emotional contexts (happy and fearful) in order to keep the design as simple as possible, and to highlight the differences between opposite emotional conditions in terms of valence.
In particular, we adopted fear as a negative emotion because, from an evolutionary point of view, it is capable of directing our attention to potentially dangerous stimuli, activating one of the two major motivational circuits (defensive vs. appetitive). Since we focused on both a dimensional and a categorical approach to emotion, we chose happiness, rather than desire (which is capable of activating the appetitive motivational system; Bradley et al.), as the positive emotional context.
As contextual stimuli, we employed dynamic scenes in order to study the context-sensitivity of emotions under more ecologically valid conditions. We expected to find a significant difference between the ratings of valence, arousal, and category attributed to neutral faces paired with emotional contexts (both fearful and happy) and those attributed to neutral faces in neutral contexts. More specifically, we expected neutral faces in fearful contexts to be rated with more negative valence and higher arousal scores than neutral faces in neutral contexts, and neutral faces in happy contexts to be rated with more positive valence and higher arousal scores than neutral faces in neutral contexts.
Twenty-eight adult volunteers of Italian nationality (14 female) took part in the study. All participants had normal or corrected-to-normal visual acuity. All participants provided written informed consent to participate in the study, which was approved by the Institutional Review Board of the University of Parma and conducted according to the principles expressed in the Declaration of Helsinki.
To create the film sequences, we used the 24 neutral faces (12 female) selected and digitally manipulated by Barratt et al. In contrast to the original study of Barratt et al., we then divided each face shot in the middle, resulting in two shorter shots of equal duration. In this way, as recommended by Barratt et al., the face shots presented before and after the contextual scene were similar but not identical footage. All of the faces were gray-scaled and presented in three-quarter profile in order to avoid a direct gaze into the camera and to facilitate the illusion that the person was looking at an object in an off-screen space (Barratt et al.).
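As a purely illustrative sketch of how such a face-context-face sequence could be assembled, the snippet below uses moviepy (1.x API); the editing tools actually used by the authors, the file names, and the helper name build_sequence are assumptions, not details from the paper.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

def build_sequence(face_path, context_path, output_path):
    face = VideoFileClip(face_path)
    context = VideoFileClip(context_path)

    # Divide the neutral face shot in the middle, so the two halves shown
    # before and after the context are similar but not identical footage.
    midpoint = face.duration / 2.0
    first_half = face.subclip(0, midpoint)
    second_half = face.subclip(midpoint, face.duration)

    # Glance shot -> object shot -> reaction shot (point-of-view structure).
    sequence = concatenate_videoclips([first_half, context, second_half])
    sequence.write_videofile(output_path, audio=False)

# Hypothetical file names, for illustration only.
build_sequence("face_f01_neutral.mp4", "context_fearful_03.mp4", "sequence_01.mp4")
```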
The scenes were previously validated with regard to their emotional content. For the happy condition, they comprised contents such as puppies, kittens, or newborns. For the fearful condition, they included contents such as potentially dangerous animals.
The neutral contexts were mostly provided by city and country views (Figure 1; for details regarding the validation procedure and selection criteria, please see the Supplementary Materials). Figure 1. Examples of scenes: (A) neutral condition, (B) fearful condition, (C) happy condition. For each participant, we created a list of 18 film sequences in total, six per emotional condition (in accordance with the emotion evoked by the object shot), taking into account a few basic rules: each facial identity had to be shown only once, and both the gender and the orientation of the faces had to be balanced.
Hence, the 18 experimental trials comprised nine trials with female faces (six looking to the left and three looking to the right) and nine trials with male faces (three looking to the left and six looking to the right).
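To make these counterbalancing rules concrete, here is a minimal Python sketch of how such a trial list could be assembled; the identity labels, the function name build_trial_list, and the random assignment of contexts to faces are illustrative assumptions rather than the authors' actual materials or script, and the sketch does not enforce any additional cross-constraints (for example, balancing gender or orientation within each context) that the authors may have applied.

```python
import random

CONDITIONS = ["neutral", "fearful", "happy"]
FEMALE_IDS = [f"f{i}" for i in range(1, 13)]  # 12 female identities (assumed labels)
MALE_IDS = [f"m{i}" for i in range(1, 13)]    # 12 male identities (assumed labels)

def build_trial_list(seed=0):
    rng = random.Random(seed)

    # Each facial identity appears only once: sample 9 female and 9 male faces.
    females = rng.sample(FEMALE_IDS, 9)
    males = rng.sample(MALE_IDS, 9)

    # Orientation counts reported in the text:
    # female faces: 6 left / 3 right; male faces: 3 left / 6 right.
    orientations = (["left"] * 6 + ["right"] * 3,
                    ["left"] * 3 + ["right"] * 6)

    trials = []
    for ids, oris in zip((females, males), orientations):
        rng.shuffle(oris)
        for identity, orientation in zip(ids, oris):
            trials.append({"face": identity, "orientation": orientation})

    # Six trials per emotional context, randomly assigned to the 18 faces.
    contexts = CONDITIONS * 6
    rng.shuffle(contexts)
    for trial, context in zip(trials, contexts):
        trial["context"] = context

    rng.shuffle(trials)  # final presentation order
    return trials

if __name__ == "__main__":
    for trial in build_trial_list():
        print(trial)
```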
In sum, we asked participants to fill in these questionnaires to exclude the possibility that personality traits or deficits in emotion recognition and empathic abilities could influence task performance. The experimental procedure included two blocks. Each trial consisted of a black fixation cross on a gray background, shown for one of two possible durations, followed by the film sequence presented for 6 s.
A green background was used as the inter-trial interval (ITI), again with one of two possible durations. Participants articulated their choice by using the keyboard positioned in front of them, and no time limit was given. Experimental paradigm: (A) valence and arousal rating, (B) categorization. The experimental session was preceded by a training session that included four trials, showing film sequences edited using scenes excluded at the end of the validation process (two neutral, one happy, and one fearful) and four other facial identities (two female) taken from the KDEF, half of them looking to the left and the other half to the right.
At the end of the procedure, the participants were asked to answer five open questions via Google Forms to assess their experience and their familiarity with the stimuli, the first of which was: (1) Have you ever seen any of these videos before?
In sum, in contrast to the original paradigm developed by Barratt et al., we introduced several variations to the stimuli and procedure, as described above. In accordance with the previous study by Barratt et al., for each participant we subtracted the overall mean rating from each condition mean, separately for valence and arousal. This was done in order to evaluate whether, for each participant, a condition mean was higher (positive value) or lower (negative value) than the overall mean in terms of valence and arousal.
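A minimal sketch of this centering step is given below, assuming a long-format table with hypothetical column names (subject, context, measure, rating); the toy values are invented for illustration.

```python
import pandas as pd

# Toy ratings in long format: one row per subject x measure x context.
ratings = pd.DataFrame({
    "subject": [1, 1, 1, 1, 1, 1],
    "context": ["Neutral", "Fearful", "Happy", "Neutral", "Fearful", "Happy"],
    "measure": ["valence"] * 3 + ["arousal"] * 3,
    "rating":  [5.0, 3.5, 5.5, 4.0, 6.5, 5.0],
})

def centered_condition_means(df):
    """For each subject and measure, subtract the subject's overall mean from
    each condition mean, so positive values mean 'above that subject's average'."""
    cond_means = (df.groupby(["subject", "measure", "context"])["rating"]
                    .mean().rename("condition_mean").reset_index())
    overall = (df.groupby(["subject", "measure"])["rating"]
                 .mean().rename("overall_mean").reset_index())
    out = cond_means.merge(overall, on=["subject", "measure"])
    out["centered"] = out["condition_mean"] - out["overall_mean"]
    return out

print(centered_condition_means(ratings))
```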
In order to investigate the modulation of ratings by context condition, we performed a linear mixed effects analysis. We entered the rating score as the dependent variable, and Measure (2 levels: Arousal and Valence) and Context (3 levels: Neutral, Fearful, and Happy) as fixed effects.
As random effects, we entered intercepts for stimuli and subjects, as well as by-subject random slopes for the effect of Context. Visual inspection of residual plots did not reveal any obvious deviations from homoscedasticity or normality.
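The following is a simplified sketch of such a model using Python's statsmodels on simulated data; the column names and toy data are assumptions, and because statsmodels most naturally handles a single grouping factor, the sketch fits only by-subject random intercepts and slopes, while the full crossed specification with random intercepts for stimuli is indicated in a comment using lme4-style notation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated ratings: 28 subjects x 18 sequences x 2 measures (toy data only).
rng = np.random.default_rng(0)
rows = []
for subject in range(28):
    for stimulus in range(18):
        context = ["Neutral", "Fearful", "Happy"][stimulus % 3]
        for measure in ("valence", "arousal"):
            rows.append({
                "subject": subject, "stimulus": stimulus,
                "measure": measure, "context": context,
                "rating": rng.normal(loc=5 + (context == "Fearful"), scale=1.0),
            })
data = pd.DataFrame(rows)

# Fixed effects: Measure x Context; random effects: by-subject intercepts and
# Context slopes. The full crossed model with stimulus intercepts would be,
# in lme4 notation: rating ~ measure * context + (context | subject) + (1 | stimulus).
model = smf.mixedlm(
    "rating ~ C(measure) * C(context)",
    data,
    groups=data["subject"],
    re_formula="~C(context)",
)
result = model.fit()
print(result.summary())
```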