Monday, 18 May 2009

Unconscious Facial Reactions to Emotional Facial Expressions
Dimberg, U., Thunberg, M., & Elmehed, K. (2000)
Psychological Science, Vol 11, No 1, 86-89

note: this researcher looked at zygomatic only and not orbicularis oculi? therefore ? was accepting the Duchenne smile as the genuine smile although did not mention it.

Dimberg, 1997: 'humans are predisposed to react emotionally to facial expressions'

'Neural activity in the human amygdala differs when people are exposed to different facial stimuli' ( Whalen et al, 1998)

'damage to amygdala impairs the recognition of facial expression'

'when people are exposed to pictures of emotional expressions, they spontaneously and rapidly react with distinct facial EMG displays ( Dimberg 1982,1990)'

' a critical characteristic of an automatic reaction, besides being spontaneous and rapid, is that it can occur without attention or conscious awareness'

unconscious responses can be investigated using the backward masking technique, which leads to:

RQ: if distinct facial reactions can be unconsciously elicited, then the masked happy target face would evoke larger zygomatic major muscle activity (elevates the lip to form a smile) and lower corrugator supercilii activity (knits the eyebrows during a frown)

design
120 students into 3 groups:
happy-neutral
neutral-neutral
angry-neutral
groups differed only in respect of the type of stimuli to which they were unconsciously exposed (30 ms), with 5 sec exposure for the neutral masking stimuli.
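The masking schedule described above can be sketched as a simple trial list. The group labels and the 30 ms / 5 sec durations come from the notes; the trial-list structure itself is a hypothetical illustration, not the authors' actual procedure.

```python
# Sketch of the backward-masking design: a 30 ms emotional target is
# immediately followed by a 5000 ms neutral mask, so participants do not
# consciously detect the target. Group names and durations are from the
# notes; everything else is an assumed illustration.

GROUPS = ["happy-neutral", "neutral-neutral", "angry-neutral"]
TARGET_MS = 30      # masked target exposure
MASK_MS = 5000      # neutral masking stimulus exposure

def make_trial(group):
    """Return one target-mask presentation for the given group."""
    target, mask = group.split("-")
    return [
        {"stimulus": target, "duration_ms": TARGET_MS},
        {"stimulus": mask, "duration_ms": MASK_MS},
    ]

trials = {g: make_trial(g) for g in GROUPS}
```

The point the sketch makes explicit is that the three groups differ only in the first 30 ms of each presentation; everything after that is identical.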

Participant verbal reports when closely questioned indicated that they had not 'consciously' detected the target stimulus.

Directly after the experiment each subject was exposed to one presentation of the target-mask complex and asked to rate it for angry/happy. These ratings did not differ between groups.

Although EMG results indicated a difference between groups, presumably reflecting the target, the participants' overall reported experience of the target-mask stimulus was relatively neutral.

results
Scored EMG amplitude every 100 ms, giving 10 data points over a 1-second period.
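The scoring just described (mean amplitude per 100 ms bin, then averaging the five bins covering 0.5-1.0 s for the group comparison) can be sketched roughly as follows. The 1000 Hz sampling rate and the rectified-signal input are my assumptions; only the 100 ms bins, 10 points, and the 0.5-1.0 s window are from the notes.

```python
# Sketch of the EMG scoring: reduce a rectified 1-second EMG trace to ten
# 100 ms mean-amplitude bins, then average bins 6-10 (0.5-1.0 s), the
# window over which the group differences were measured.
# Assumes a 1000 Hz sampling rate -- not stated in the notes.

SAMPLES_PER_SEC = 1000
BIN_MS = 100

def score_emg(trace):
    """Reduce a 1 s rectified EMG trace to ten 100 ms mean-amplitude bins."""
    samples_per_bin = SAMPLES_PER_SEC * BIN_MS // 1000  # 100 samples per bin
    return [
        sum(trace[i:i + samples_per_bin]) / samples_per_bin
        for i in range(0, SAMPLES_PER_SEC, samples_per_bin)
    ]

def late_window_mean(bins):
    """Mean of the five bins covering 0.5-1.0 s (data points 6-10)."""
    return sum(bins[5:10]) / 5

# Example: a flat trace of amplitude 1.0 scores 1.0 in every bin.
flat = [1.0] * SAMPLES_PER_SEC
bins = score_emg(flat)
```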

zygomatic: happy-neutral (up to 1.7 mV) > neutral-neutral (up to 1 mV) > angry-neutral (0 mV) during 0.5-1.0 secs (measured over 5 points) - see amplitude diagrams

at 300 ms there was a small (0.5) positive response - what does this mean? Dimberg suggests a startle response.

corrugator happy-neutral <>

at approx 220 ms there was a positive (2.5-3.0 mV) response

'it is not evident from the present study to what degree the different facial reactions originate in unconscious mimicking behavior or to what degree the facial reactions initially are read-outs of underlying emotional states'

refers to Buck, R. (1980) Journal of Personality and Social Psychology, 38, 811-824: 'facial muscle activity is essential for the occurrence of emotional experience'

Friday, 15 May 2009


Investigating the production of emotional facial expressions: a combined electroencephalographic (EEG) and electromyographic (EMG) approach.

Korb, S., Grandjean, D., & Scherer, K.R. (2008)
Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008), September 17-19, 2008, Amsterdam, the Netherlands, ISBN 978-1-4244-2154, 1-6.

Describes the dichotomy between spontaneous & posed expression, and blends of these. A blend may be the result of emotion regulation, for example decreasing intensity or duration. Ekman (refs 7 & 8 from this paper) suggests that spontaneous emotional facial expressions show fewer irregularities (pauses & stepwise intensity changes)

facial mimicry - Dimberg paper. EMG & EEG used to demonstrate muscle involvement in mimicry (Achaibou et al., downloaded; McIntosh et al., printed)

neural basis 
innervation of facial muscles

temporalis, masseter, internal & external pterygoid are innervated by the 5th cranial nerve (trigeminal)
eye muscles are innervated by the 3rd, 4th & 6th
the facial nerve (7th) innervates most of the facial muscles.

'facial nucleus ( pons)  similar across man, non-human primates, and lower animals with the difference that it contains more neurons innervating muscles of the mouth and lower face in man, allowing for a high degree of fine and controlled movement as required for speech, and more neurons innervate the upper face and auricular muscles in lower animals'

motor control of facial muscle

voluntary movements of the lower face muscles (allowing for mouth movements) originate in the contralateral motor areas (primary motor cortex (M1) & lateral premotor cortex (LPMC)) and descend along the corticobulbar tract to the facial muscles.
voluntary movements of the upper face originate in the same areas but project ipsilaterally as well as contralaterally.

spontaneous facial expressions project from subcortical structures including the basal ganglia, innervating both ipsi & contralateral facial nuclei via extrapyramidal tracts passing through the brainstem reticular formation  ( ref 12 Gazzaniga et al 1990 & ref 29 Purves et al, 1999)

'in addition to the direct pyramidal tract, cortical motor areas also project bilaterally onto interneurons of the reticular formation' (? KRO: the mechanism that can modulate spontaneous expression). 'Interestingly these indirect pathways influence mainly motor neurons of the facial muscles that control the upper face'

recent tracing studies in non-humans (ref 25) show at least 5 cortical motor areas projecting directly onto the facial nuclei:
M1 (primary motor cortex)
LPMC (lateral premotor cortex)
SMA (supplementary motor area)
caudal cingulate motor area (CMCc) & rostral cingulate motor area (CMCr)
M1, LPMC & CMCc project mainly to the contralateral facial nucleus via the direct corticobulbar tract, and mainly to the motor neurons of the lower face muscles.
SMA & CMCr synapse mainly onto the ipsi- and contralateral facial nuclei, targeting the upper face muscles.
nb these are the main projections rather than the only projections.
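The projection pattern listed above can be summarised as a small lookup table. This is a sketch of my reading of these notes (main projections only, as the notes stress), not a structure taken from the paper itself.

```python
# Summary of the main cortico-facial projections from the notes:
#   M1, LPMC & CMCc -> mainly contralateral facial nucleus, lower face
#   SMA & CMCr      -> ipsi- and contralateral facial nuclei, upper face
# These are the main projections rather than the only ones.

PROJECTIONS = {
    "M1":   {"laterality": "contralateral", "face_region": "lower"},
    "LPMC": {"laterality": "contralateral", "face_region": "lower"},
    "CMCc": {"laterality": "contralateral", "face_region": "lower"},
    "SMA":  {"laterality": "bilateral", "face_region": "upper"},
    "CMCr": {"laterality": "bilateral", "face_region": "upper"},
}

def areas_controlling(face_region):
    """Cortical motor areas whose main projection targets the given region."""
    return sorted(a for a, p in PROJECTIONS.items()
                  if p["face_region"] == face_region)
```

Laid out this way, the upper/lower face split lines up with the voluntary/emotional paresis dissociation discussed below: areas with bilateral upper-face projections (SMA, cingulate) are the ones implicated in emotional expression.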

Refs 8, 11 & 12 concerned with hemisphere involvement  e.g. right hemisphere as the source for production of at least posed facial expression.

Neurological evidence

Implanted electrodes/ depth electrodes  in epilepsy (refs 10, 20 location of seizure focus) 
note : only the left side stimulated 
left area rostral to the supplementary motor area (SMA): smiling and laughing were repeatedly elicited in this specific area. Smiling was produced by lower currents than laughter, suggesting that smiling and laughter 'might involve similar mechanisms and are closely related on a single continuum' (ref 10).

Double dissociation between emotional facial paresis and voluntary facial paresis (ref 16 Hopf et al 1992 for spontaneous; also refs 17 & 25). Emotional (spontaneous) paresis involves lesions of the thalamus, striatocapsular area, frontal subcortical white matter, insula, and medial frontal lobe including the supplementary motor area (SMA)

refs to the article in Brain Topography on the Bereitschaftspotential

Tuesday, 28 April 2009

Immersion?

Place Illusion & Plausibility. Realistic behaviour in Immersive Environments.
Mel Slater @ The Royal Society


? plausibility given by physical reality ( will need to check)

immersion as an illusion of physical reality.
  • display in all sensory systems
  • tracks head, neck, etc so that display is determined as a function of head tracking

immersive systems 

characterised by the sensori-motor contingencies that they support (KRO - ? need for synchrony). When these are satisfactory they give an illusion of physical reality. It is difficult to turn off a strong illusion by knowing that it is an illusion. The more you probe, the more likely you are to break the effect of physical reality.

Once plausibility is broken then it is always broken (  if it is an illusion then is this part of the illusional effect?)

Place illusion

necessary to have self-reference, contingent events, credibility. Place illusion, once broken, can be reconstructed (so what is the effect of illusion in this case?)

Illusions - Slater makes reference to the brain as a Bayesian self-referencing system.

Carr, D., and Oliver, M. (in press 2009)
'Second Life, Immersion and Learning' in Social Computing and Virtual Communities, edited by Panayiotis Zaphiris and Chee Siang Ang, published by Taylor and Francis.

describes how the introduction of new technology, e.g. voice, into SL led to members being classified as immersionists or augmentationists.

Immersionists - SL is a substitute for real life; they tend not to disclose any real-life information. Augmentationists see SL as an extension of RL.

Reviewed use of immersion :

  • mostly to imply a 3D environment
  • but sometimes vaguely defined and 'yet assumed to be good for learning', 'approached as something that can be measured'
  • immersion as reality ? presence (Slater used the term presence). Looks to game simulation where a 'less is more' approach is accepted: 'Wright, the creator of SimCity, understands the value of leaving things out'. Bartle (2003): 'if you introduce reality into a virtual world, it's no longer a virtual world: it's just an adjunct to the real world. It ceases to be a place, and reverts to being a medium.'

immersion and engagement are different - McMahan (2003), in the context of computer games. In gaming McMahan sees immersion as the fantasy part whilst engagement is linked to the challenges of the game. Carr (2006): 'engagement and immersion are not linked to particular aspects of a game or a text. Instead they are imagined as attentive states that sit along a continuum and that suggest a particular stance towards the game at a particular moment'; 'immersion can be countered when the participant is required to give attention to engagement'; the game 'allows the player to constantly move between the two' (Carr 2006, p.55)

There are things in SL that can act as triggers, and 'a task that is initially engaging might become a more immersive pleasure once the user attains competence'

Frasca's work on computer/video games describes how some can be passive pleasures. In this case immersion would not be transformative. 'It is the interruptions that facilitate critique'

the authors suggest 'players simultaneously operate from two perspectives - in-world and out-world'

Pedagogy

The authors describe how the different states of consciousness , immersion and engagement , could be targeted as part of the approach to teaching and learning in gaming & virtual world environments.



Bartle, R. (2003) Not Yet, You Fools! At Game Girl Advance (28.7.2003) http://www.gamegirladvance.com/archives/2003/07/28/not_yet_you_fools.html, accessed November 2008

Carr, D. (2006) ‘Play and Pleasure’ in Computer Games: Text, Narrative and Play, Carr, D, Buckingham, D, Burn, A and Schott, G (eds) Cambridge: Polity pp 45-58

McMahan, A. (2003) ‘Immersion, Engagement and Presence: A Method for Analyzing 3-D Video Games’, in Wolf, M and Perron, B (eds) The Video Game Theory Reader. New York: Routledge pp 67-86


Tuesday, 14 April 2009

Processing Faces and Facial Expressions
Mette T. Posamentier and Hervé Abdi
Neuropsychology Review, Vol. 13, No. 3, September 2003

Bruce and Young model

for

Casting findings from behavioral and neuropsychological observations and the experimental approach into a unified framework, Bruce and Young (1986) developed a now classic model of face recognition expressed in terms of processing pathways and modules for recognition of familiar faces (see Fig. 1). They suggested that seven types of information can be derived from faces: pictorial, structural, visually derived semantics (age and sex), identity specific semantics, name, expression, and facial speech (movements of the lips during speech production) codes.

against

Endo et al, (1992) presented subjects with familiar and unfamiliar faces with three different facial expressions (neutral, happy, and angry). Familiar faces were recognized faster when displaying “neutral” expression than when displaying a “happy” or an “angry” expression. In another experiment, faces of well-known people with “neutral” and “happy” expressions were used as familiar faces.

Neurophysiological evidence

animal studies: Single-cell recordings have also shed further light on the presumed independence of facial identity and expressions. Hasselmo et al. (1989) investigated the role of expression and identity in face-selective responses of neurons in the temporal visual cortex of macaque monkeys. Hasselmo et al. recorded the responses of 45 neurons to a stimulus set of pictures of three monkeys displaying three different facial expressions (calm, slight threat, and full threat). Fifteen neurons responded to identity independently of expression, and nine neurons responded to the different expressions independently of identity. Hasselmo et al. further found a differential response to the different expressions, with a stronger response to expressions of full threat. The neurons responsive to expressions were found primarily in the superior temporal sulcus, whereas neurons responding to identity were located in the inferior temporal gyrus. Although most single-cell responses have been recorded in the visual cortex, this is not the only area that has shown specific responses to faces. Leonard et al. (1985) found a population of cells in the amygdala of macaque monkeys to be also responsive to faces.

human studies: The first single-cell recordings of face processing in humans were conducted by Ojemann et al. (1992) on 11 patients. These findings parallel Hasselmo et al.'s findings of differential activation patterns (Hasselmo et al., 1989) to identity and expressions in primates.

ERP & Experimental/behavioral studies

Event-related potentials have also been used to study processing of facial expressions. In a study designed to examine both processing of facial identity and facial expressions in normal subjects, Münte et al. (1998) recorded ERPs from multiple scalp locations to assess the timing and distribution of effects related to the processing of identity and expressions. The earliest ERP differences in an identity matching task were found around 200 ms. The earliest ERP effects in an expression matching task came in later, around 450 ms. The tasks also differed in their localization or scalp distribution: Identity was associated with a fronto-central effect, whereas expression was associated with a centroparietal effect. Streit et al. (2000) evaluated differences in ERPs in emotional and structural face processing. In the emotional processing task, subjects were asked to identify which one of six facial expressions was present on the face. In the structural encoding task, blurred images of faces and five other categories were presented and subjects had to identify the category of each stimulus. Pictures of facial expressions and blurred facial images evoked similar event potentials around 170 ms, which should be specifically related to processing of facial stimuli and thus replicating previous findings. However, the facial expressions decoding task evoked a peak around 240 ms, whereas such a response was absent for the blurred facial images. According to the authors, this peak latency of 240 ms might then represent specific processes underlying the decoding of facial expressions. This 240-ms peak is different from the time sequence reported by Münte et al. for expression matching. The discrepancy may be due to task differences: a matching task is probably more effortful than an encoding task and would therefore take more time. Herrmann et al. (2002) replicated these findings.

Neuropsychological evidence

As we have previously seen, lesion studies suggest that some specific brain structures or different neural substrates are involved in processing different types of face-related information. In fact, we are looking at a double dissociation: prosopagnosic patients are impaired at face recognition but have intact facial expression processing, and other subjects are impaired at facial expression processing but can still recognize faces. Results from studies of facial expression processing suggest that different emotions are also processed by different regions of the brain. For example, Adolphs et al. (1996) investigated facial expression recognition in a large number of subjects with focal brain damage. The authors hypothesized that cortical systems primarily responsible for recognition of facial expressions would involve discrete regions of higher-order sensory cortices. Recognition of specific emotions would depend on the existence of partially distinct systems. This predicts that different patterns of expression recognition deficits should depend upon the lesion site. In general, none of the subjects showed impairment in processing happy facial expressions, but several subjects displayed difficulty in recognizing negative emotions (especially fear and sadness). The authors propose that deficits in processing negative emotions can be due to the fact that the number of negative emotions is larger than the number of positive emotions. In fact, there is only one positive emotion, namely happiness. As such, a happy smile should be easily recognizable. It is also possible that negative emotions show a deficit because they can be easily confused, such as mistaking fear for surprise, or anger for disgust.

Two routes?

The study by Gorno-Tempini et al. (2001), already referred to in the discussion on processing of disgust, used both explicit and incidental tasks, expression recognition and gender decision respectively, in processing of disgusted, happy, and neutral expressions. Regions of activation varied by both task and facial expression: the right neostriatum and the left amygdala showed higher activation in explicit processing than in incidental processing. The bilateral orbitofrontal cortex activated to explicit recognition of happy expressions. Common activation for all conditions were found in the right temporal occipital junctions, which would be indicative of visual and perceptual processing, and the left temporal and left inferior frontal cortex, which would be involved in semantic processing.

Clearly then, activation is both distributed and task-modulated. The observed difference in activation patterns between explicit and implicit processing, or what could also be viewed as conscious and unconscious processing, falls in line with LeDoux's concept of the "low and high roads" for processing emotional stimuli (LeDoux, 1996). Recall that the "low road" provides a crude representation of the stimuli to the amygdala, whereas the "high road" involves elaborated processing in the sensory cortex. By adding factors such as social context, prosody, gaze, and body language to the relatively simple visual perception of facial expressions, the conscious processing of facial expression can then be also subsumed under the concept of social cognition (see Adolphs, 1999, 2001, for recent reviews of the field of social cognition). For example, damage to the amygdala goes beyond pure impairments in facial expression recognition and appears to play a role in social decision making. This is illustrated by a study of three subjects with bilateral damage to the amygdala who were impaired at facial expression recognition and who deviated from normal controls in social judgments involving facial stimuli. The subjects with amygdala damage judged faces that normal subjects had deemed most untrustworthy and unapproachable as trustworthy and approachable (Adolphs, 1999; Adolphs et al., 1998). Adolphs et al. (2001) recently extended this finding to a group of high functioning autistic subjects who were tested on the same material as the bilateral-amygdala-damaged patients. The autistic subjects showed normal social judgments from lexical stimuli, but, as the amygdala patients, showed abnormal social judgments regarding the trustworthiness of faces. These findings support the role of the amygdala in linking visual perception of socially relevant stimuli with the retrieval of social knowledge and subsequent social behaviors.

Summary: Facial Expressions

The goal of imaging studies of the perception of facial expressions has been to evaluate whether there are distinct neural substrates dedicated to processing emotions as displayed by different facial expressions. Evidence from behavioral and lesion studies do suggest that different structures are activated by different emotions. The role of the amygdala in processing fearful stimuli has been well established. Recall the patients who presented lesions of the amygdala and were impaired at processing negative emotions, with fear being most strongly affected. Patients with Huntington's disease display loss of amygdala and basal ganglia tissue, associated with impaired processing of fear, anger, and disgust in particular. However, no subject groups displayed any difficulties in processing happy facial expression. This suggests differential processing of positive and negative emotions. So far, a number of neuroimaging studies have shown differential activation patterns in response to five of the six basic emotions displayed by facial expression. No studies have examined activation by surprised facial expression. Activation of the amygdala by fearful expressions should come as no surprise as reported by Morris et al. (1996, 1998) and Breiter et al. (1996). But, note that the facial expressions of sadness and happiness also activated the amygdala. Amygdala activation has been also reported in a categorization task of unknown faces (Dubois et al., 1999). Thus, it is quite likely that the amygdala also responds to faces in general with the purpose of assessing the possible threatening valence of a stimulus (e.g., "Is this a friend or foe?"). Further, the results from the imaging studies of disgust implicate the basal ganglia structure as well as the insular cortex in the processing of this emotion (Gorno-Tempini et al., 2001; Phillips et al., 1997; Sprengelmeyer et al., 1997). Interestingly, the two facial expressions for which consistent patterns of activation have been established are fear and disgust. These are emotions that are evoked in direct threats to the system. The majority of studies that have examined activation in response to different facial expressions also have found activation in other areas (e.g., prefrontal cortex, inferior frontal cortex, right medial temporal region, anterior cingulate cortex, inferior temporal cortex, and orbitofrontal cortex) in addition to the regions of particular interest.

The inferior frontal cortex is commonly activated in response to different facial expressions, and may serve as an area for integration or semantic processing of information contained in facial expressions. Additionally, activation in the orbitofrontal cortex in response to facial expressions is quite interesting because this region is implicated in numerous cognitive activities, including decision making, response selection and reward value, behavior control, as well as judgments about olfactory stimuli (Abdi, 2002; Blair and Cipolotti, 2000; Rolls, 2000). Attractive faces produced activation in the medial orbitofrontal cortex, and the response was enhanced if the face was smiling in addition to being attractive (O’Doherty et al., 2003). However, the areas mentioned above are also activated by other face-processing tasks, such as encoding and recognition. Therefore, the functionality or exact role of these additional areas of activation remains unclear.

CONCLUSION

We began this review by reporting a dissociation in the processing of facial identity and facial emotions as evidenced in certain patient populations. For example, prosopagnosic patients fail to recognize faces, but some show no impairment in processing facial expressions, whereas patients with amygdaloid lesions display problems in processing facial expressions but not facial identity. Although the Bruce and Young model states that identity processing follows a different route than facial expression processing, findings from experimental and behavioral studies alone failed to a certain degree to establish functional independence between the two subsystems, because the testing paradigms employed often confounded facial expression and facial identification tasks. From the field of neurophysiology, single-cell recordings in primates have identified differential neuronal responses to facial identity and expressions, as well as different brain areas. Event-related potential studies have established a different time course as well as different foci for identity and expression processing. Taken together, the findings from experimental, neuropsychological, and neurophysiological approaches strongly support the existence of dissociable systems for processing facial identity and facial expressions, and thus validate the Bruce and Young model, which postulates independent processing of identity and expression.

How successful has the newer brain imaging approach been in supporting the existence of separate systems dedicated to the processing of facial identity and facial expression? So far, activation of the fusiform gyrus has been well established in a number of face-processing tasks. The activation patterns in the fusiform gyrus in response to face perception do correlate with lesion sites resulting in an inability to recognize faces, as seen in prosopagnosic patients. However, the fusiform gyrus is also activated when facial expressions are processed. But because the majority of studies of facial expression processing used an ROI approach, additional areas of activation were largely not evaluated or at least not reported. The role of the amygdala in processing fearful facial expressions is also well established, but the amygdala has also been found to show a generalized response to faces. Thus, it becomes quite difficult to tell whether observed activation patterns are in response to facial identity or facial expression processing. Because the areas of neural activation in response to both face-processing tasks are quite extensive and show considerable overlap, we are still faced with the task of interpreting and assessing the functionality of these areas of activation. Going beyond the fusiform gyrus, the activation patterns in response to different face-processing tasks reveal to a large extent a task-dependent as well as distributed network.

Although neuroimaging studies of face perception are answering questions related to basic research into face processing, the results from neuroimaging studies of facial expression processing in patient populations show perhaps greater promise of having direct applications. As we have seen, deficits in facial expression processing have been found in a number of different disorders, ranging from Huntington’s disease, schizophrenia, depression, mania, OCD, sociopathy to autism. Gaining an understanding of the neural networks involved in the processing of affective stimuli may provide further insights into the spectrum of deficits associated with these disorders. For example, Sheline et al. (2001) reported normalized amygdala activation in a group of depressed patients with antidepressant treatment.

Wednesday, 1 April 2009

CMC as democratic - theory, practical, culture

Issues for democracy and social identity in Computer Mediated Communication and Networked Learning (2002)
Hodgson, V.
in Networked Learning: Perspectives and Issues, Steeples, C., and Jones, C. (Eds) Springer-Verlag, London Ltd

Claims made for CMC as a context for learning include practical, theoretical and cultural.

theoretical 
critical pedagogy school of educational thinking
  • p230 'the proponents of this school advocate that higher levels of learning and knowing are achieved through critical reflexivity and dialogic processes'
  • ideal discourse (Boyd (1987), Boshier (1990))
  • Mezirow, 1988 has 'is accepting of others as equal partners' - ? no visual clues to reflect status (whether appearance or other)
practical

Exploring ideas of equality further.

culture or cultural artefact?

but there is the question of what part does social presence  &/or social identity play ?

idea of text carrying markers  (Simeon Yates, 1997)


need to sort out whether online communities are a separate culture or a cultural artefact. If it is a cultural artefact then the internet can be viewed as p232 'an agent of change' and the way it is used is the central question. With this view any online community is seen as an extension of existing social practices and patterns of interaction.

p233 ' We claim that it is important to both recognise the intrinsic interplay between what the users bring with them from their own context of practice and experience, and what can be created/developed in the virtual environment and then in turn transferred back into the non-virtual world.' 
p233  the author argues that ' at any moment in time in an online discussion what you see/read is what is 'real' '


Tuesday, 31 March 2009

Emotion in CMC Review article (Derks et al, 2008)

The role of emotion in computer-mediated communication: A review
Derks, D., Fischer, A.H., and Bos, A. (2008)
Computers in Human Behavior, Vol 24, Issue 3, 766-785.

Method
Reviewed PsycINFO, Medline, Google Scholar & a professional network.
Key terms: emoticons, flaming, uninhibited behaviour, anonymity, emotion and mood in combination with CMC, F2F, social sharing, internet, online, self-disclosure, anonymity, gender differences, display rules, mimicry and anonymity.
Studies with a social interaction setting; restricted to text based CMC.
Romantic relationships and non English language papers were excluded

Definition of emotion communication:
the recognition, expression and sharing of emotions or moods between two or more individuals. Explicit emotion communication involves references to discrete emotions through verbal labels ('I am very angry'), appraisals ('this is scary') and tendencies to act ('I would like to hit you') or emblems (emoticons). Implicit emotional communication includes the emotional style of the message, as can be inferred from the degree of personal involvement, self-disclosure, language use, etc.

Comparing f-to-f and CMC for emotions, sociality is the most important aspect to consider.
First
Literature review in terms of social presence
1. The difference is the impact of context on social presence (sociality) (Short, Williams & Christie, 1976).

2. Manstead (in press) proposes 2 dimensions: the physical and the social.
The physical means there is no bodily contact and no visibility.
The lack of visibility links with the social to contribute to a reduced relational salience.
Furthermore, the other person may be unknown, and together with reduced salience this increases the anonymity of the situation, i.e. visibility and knowing the person are additive when it comes to the social presence equation.

Derks et al : What does this mean for emotional experience?
  • bodily contact - probably more relevant to intimate relationships
  • visibility -implications for the decoding and recognition of others' emotions, also the expression of one's own emotion is less visible
3. interaction between social norms & social presence (Postmes, Spears etc)
No social cues at the outset; SIDE sees this as significant for increasing ingroup identity (? therefore the salience of the situation) but social cues tend to 'leak out'.
KRO what is the implication of this - ? need to view from a developmental perspective.

Three points of focus for the literature review and as far as possible compared f-to-f and CMC
  1. Emotion talk as part of content
  2. expression of emotion
  3. recognition of emotion ( but little research)
1. Emotion as part of content
f-to-f
need to talk about emotions - social sharing- a general manifestation of f-to-f ( Christophe & Rime, 1997) , the more intense the feeling the more inclined to talk about the event.

once exposed to social sharing it is then common to share with a third person i.e. non anonymity of source

'In a meta-analytic review, Collins & Miller (1994) found that people who engage in intimate disclosures tend to be liked more than those who disclose less', i.e. sharing emotions is a useful tool. Disclosing emotions is 'healthy and good for well-being'.
CMC
use of MSN (KRO: and ? Facebook)
online dating
CMC mediated therapies
studies of self disclosure
(Savicki (1991), Savicki & Kelly (2000), Herring.) Men ignore the socio-emotional, are more task-orientated and less satisfied with the medium. Women in female-only groups self-disclose and attempt to reduce tension; they are more likely to use 'I' statements and to directly address group members, more likely to thank, appreciate and apologise, and more likely to be upset by violations of politeness.

2. Expression of emotion

F-to-f

studies into the effect of co-presence
Fridlund compared
a) f-to-f
b) imaginary present
c) alone
There was more smiling in conditions a) and b) than c). More likely to cry when alone.

display rules and the identity of interactants
Hess, Fischer: power relations and the activation of different display rules. Social position prescribes what emotions to display.

relationships - friendships accentuate facial expression, i.e. relationships as a mediator of expression. (KRO anonymity and stranger status may operate together in CMC)

gender - some evidence for gender differences in display rules

CMC (almost total focus on flaming!)
Siegel et al (1986) compared groups engaged on identical tasks. Flaming was more common in CMC. No difference between synchronous and asynchronous CMC.

3. recognition of emotion
Lack of visibility, therefore no NVC.

Function of NVC in f-to-f
  • reduce ambiguity
  • tone down or intensify emotional expression (Lee & Wagner)
  • animate and/or clarify interaction
  • elicit mimicry - particularly important for establishing positive relationships

Comparing f-to-f and CMC
Sasaki & Ohbuchi (1999)
compared interaction via CMC and f-to-f ( vocal)
The task was to interact with a confederate in two hypothetical conflict situations in which confederate had to accept an unreasonable request. Didn't see each other in either situation. Confederates voice manipulated to produce either a positive or a negative tone. P's asked to rate emotions and intentions of the confederate. Emotions equally intense for each ' in vocal condition, however, angry emotions and perceived negative intents prompted aggressive responses, whilst such effects were absent in CMC' , p8.

Consider whether the lack of NVC in CMC can lead to either overestimation or underestimation of emotional state, and therefore inappropriate reactions or judgements of others.

CMC
emoticons
Like NVC, emoticons can serve to accentuate, emphasise and clarify.
Derks et al (2007) manipulated the social context of chat (task or socio-emotional) and valence (positive or negative). Ps could respond with text, an emoticon or a combination. Social contexts tended to attract emoticons: positive emoticons in contexts with positive valence, negative emoticons in contexts with negative valence.
BUT
when task-orientated, Ps used the fewest emoticons (p. 9): 'individuals have to be more accurate, they have more explaining to do, and if possible, they are required to present alternatives.'
AND
use of emoticons is deliberate (voluntary)


The authors conclude that the absence of NVC is compensated for (p. 10): individuals more explicitly describe or label their emotions in CMC compared to F2F. There is no research in which this is directly compared, however.

The authors claim that MIMICRY cannot be achieved in CMC (KRO: but presentational style may be a way).

Questions
Is emotional embodiment reduced in CMC?
Is emotional reaction easier to control in CMC?
Is spontaneity reduced (i.e. asynchronous - time to reflect)?

Monday, 16 March 2009

Time & Photo & Walther

Is a picture worth a thousand words? Photographic Images in Long-Term and Short-Term Computer Mediated Communication (2001)
Walther, J.B., Slovacek, C.L., and Tidwell, L.C.

Communication Research, 28, 105.
DOI: 10.1177/009365001028001004

  • widespread adoption of text-based conferencing
  • what are the advantages, if any, of online environments that try to emulate FTF levels of presence?

REVIEW
Social presence theory ( Short, Williams, & Christie, 1976).  Originally a theory of teleconferencing.
'Conceives social presence as a communicator's subjective sense about the salience of an interaction partner.' 'This feeling is a function of a number of cues that the medium offers', i.e. the theory focuses on the quantity rather than the quality of transmitted cues. The effect of reduced social presence is a reduction in interpersonal warmth and affection.
(KRO this idea of social presence does not consider the effect of asynchrony - ? different to Garrison & Anderson)

Whereas users rate video higher than CMC and audio on social presence (refs), users tend only to use video to check the attention levels of others: is it access to the visual appearance of others that leads to higher ratings of such environments?

Uncertainty reduction theory (Berger & Calabrese, 1975): 'Greater amounts of information about partners reduce discomfort, increase predictability, and raise the level of affection towards others.' Berger & Douglas (1981) 'found that seeing photographs did have uncertainty reducing effects'.

Social Information Processing (SIP) theory of relational communication (Walther, 1992)

When required to interact, individuals are motivated to affiliate and therefore use whatever cue systems are available. In CMC this is usually content. Walther (1996) suggests that 'negative effects are confined to zero history groups with no longevity'. Level of motivation can depend on a number of factors, but anticipated future interaction is very high on the list of motivators. Anticipated future interaction 'promotes more personal questions and self-disclosures ... and influences intimacy levels more than media cues'.

The Hyperpersonal communication framework - a variant (? development) of SIP,
considered in terms of receivers, senders & the channel.

Receivers - this part of the theory is influenced by/draws on SIDE (social identification/deindividuation; Spears & Lea, 1992).
'When there are no cues available to identify others then any bit of social information transferred by the context is subject to overattribution by receivers. Under certain circumstances of CMC users construct hyperbolic and idealized constructions of their virtual partners.'
'Furthermore, when partners experience a salient group identity rather than an individual orientation, these attributions accentuate similarities and shared norms and therefore social evaluations are more positive.'

Senders and impression management
Senders are able to engage in selective self-presentation.
  1. Messages can be revised and timed
  2. No accidental transmission of unintended NVC
  3. No cues to physical appearance
  4. Some of the cognitive-behavioral resources that are required in FTF (backchanneling, attention checking, etc.) are freed up
  5. Channel - senders have time to attend to both the task and the social dimensions of the exchange
therefore
'the hyperpersonal perspective depicts how senders select, receivers magnify, channels promote and feedback increases enhanced and selective communication behaviors in CMC'
'by subsuming SIP principles, CMC users anticipate a long-term commitment with their partners, they initiate affiliative behaviours and, as time accrues, these experiences affect communication patterns positively'

Method
8 groups of virtual work-teams ( USA & UK students) - 24 participants
task was 'plausible and relevant and required high levels of involvement  over a realistic period of time'
2 (long-term/short-term) x 2 (photograph/no photograph) design

DVs  (cf DZX when survey focused on student experience of course rather than specifics about others)
Self-administered questionnaire post-course, 5-point Likert scale. Where appropriate each participant assessed two others.
  1. Intimacy/affection - Burgoon & Hale's (1987) scale
  2. Relational communication - Walther & Burgoon, 1992 - adjusted for groups
  3. attractiveness - subset of McCroskey & McCain (1974) (task - 5 items, social - 5 items, physical - 5 items)
  4. self presentation success was assessed using two original items
' the computer-mediated communication allowed me to present myself in a favourable way'
'I think I made a good impression on the others through the use of the computer system'
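The 2 x 2 design with Likert-scale DVs above amounts to comparing mean ratings per cell. A minimal sketch of that cell-means computation in Python; the records and ratings are invented for illustration only, not the study's data:

```python
# Hypothetical sketch of a 2 (duration) x 2 (photograph) cell-means
# computation. Ratings are made-up 5-point Likert responses.
from statistics import mean

# Each record: (duration condition, photograph condition, intimacy rating)
responses = [
    ("short", "photo", 3), ("short", "photo", 4),
    ("short", "none", 2), ("short", "none", 3),
    ("long", "photo", 3), ("long", "photo", 3),
    ("long", "none", 4), ("long", "none", 5),
]

def cell_means(records):
    """Group ratings by (duration, photograph) cell and average them."""
    cells = {}
    for duration, photo, rating in records:
        cells.setdefault((duration, photo), []).append(rating)
    return {cell: mean(vals) for cell, vals in cells.items()}

for cell, m in sorted(cell_means(responses).items()):
    print(cell, round(m, 2))
```

With real data, the interaction of interest (e.g. photographs helping short-term but dampening long-term intimacy) would show up as a crossover between these cell means.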

Results

Intimacy/affection & relational: 'the same photographs that help defeat impersonal conditions (short-term contexts) also dampen hyperpersonal ones (that form in long-term online encounters)'.
Projected attractiveness: only overattributed for males rating females; 'suggests that social categorical judgements might arise regardless of physical appearance' (see also Postmes et al.).
Success of self-presentation: users felt that CMC facilitated better self-presentation when there was no photo.