Tuesday 28 April 2009

Immersion?

Place Illusion & Plausibility: Realistic Behaviour in Immersive Environments.
Mel Slater @ The Royal Society


? plausibility given by physical reality (will need to check)

immersion as an illusion of physical reality.
  • display in all sensory systems
  • tracks head, neck, etc. so that the display is determined as a function of head tracking (a minimal sketch of this loop follows the list)
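
As a rough illustration of that second point, here is a minimal sketch of a head-tracked display loop. All names (read_head_pose, view_from_pose, render) are hypothetical stand-ins, not anything from Slater's talk; the point is only that the displayed image is recomputed every frame as a function of the tracked head pose.

```python
import time

def read_head_pose():
    """Stand-in for a tracker: returns (position, orientation)."""
    t = time.time()
    return (0.0, 1.7, 0.0), (0.0, t % 6.28, 0.0)  # fake slow head turn

def view_from_pose(position, orientation):
    """Build a view description from the tracked pose."""
    return {"eye": position, "rotation": orientation}

def render(view):
    """Stand-in for the graphics pipeline."""
    print(f"rendering frame for view {view}")

# The sensori-motor contingency: what is displayed is a function of
# head movement, updated every frame with as little latency as possible.
for _ in range(3):
    pos, rot = read_head_pose()
    render(view_from_pose(pos, rot))
```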

Immersive systems are characterised by the sensori-motor contingencies that they support (KRO - ? need for synchrony). When these are satisfactory they give an illusion of physical reality. It is difficult to turn off a strong illusion by knowing that it is an illusion: the more you probe, the more likely you are to break the effect of physical reality.

Once plausibility is broken it is always broken (if it is an illusion, then is this part of the illusory effect?)

Place illusion

Necessary to have self-reference, contingent events, and credibility. Place illusion, once broken, can be reconstructed (so what is the effect of the illusion in this case?)

Illusions - Slater makes reference to the brain as a Bayesian self-referencing system (a worked sketch of Bayesian updating follows).
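
To unpack the phrase, here is a minimal worked sketch of Bayesian updating: a prior belief over two hypothetical world states is combined with the likelihood of the current sensory evidence to give a posterior. The numbers are made up for illustration, and this is not Slater's model.

```python
# Two hypothetical world states and a prior belief over them.
prior = {"I am in the lab": 0.9, "I am on a virtual precipice": 0.1}

# Hypothetical likelihoods: how probable the current sensory input
# (a coherent visual drop-off, tracked head movement) is under each state.
likelihood = {"I am in the lab": 0.05, "I am on a virtual precipice": 0.8}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalise.
unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(posterior)
# With these made-up numbers the "virtual precipice" hypothesis wins,
# which is one way to read the illusion of physical reality.
```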
Carr, D., and Oliver, M. (in press 2009) 'Second Life, Immersion and Learning' in Social Computing and Virtual Communities, edited by Panayiotis Zaphiris and Chee Siang Ang, published by Taylor and Francis.

Describes how the introduction of new technology (e.g. voice) into SL led to members being classified as immersionists or augmentationists.

Immersionists - SL is a substitute for real life, and they tend not to disclose any real-life information. Augmentationists see SL as an extension of RL.

Reviewed use of immersion:

  • mostly to imply a 3D environment
  • but sometimes vaguely defined and 'yet assumed to be good for learning', 'approached as something that can be measured'
  • immersion as reality ? presence (Slater used the term presence). Looks to game simulation, where a 'less is more' approach is accepted: 'Wright, the creator of SimCity, understands the value of leaving things out'. Bartle (2003): 'if you introduce reality into a virtual world, it's no longer a virtual world: it's just an adjunct to the real world. It ceases to be a place, and reverts to being a medium.'

Immersion and engagement are different (McMahan, 2003, in the context of computer games). In gaming, McMahan sees immersion as the fantasy part, whilst engagement is linked to the challenges of the game. Carr (2006): 'engagement and immersion are not linked to particular aspects of a game or a text. Instead they are imagined as attentive states that sit along a continuum and that suggest a particular stance towards the game at a particular moment'. 'Immersion can be countered when the participant is required to give attention to engagement'; the game 'allows the player to constantly move between the two' (Carr 2006, p.55).

There are things in SL that can act as triggers, and 'a task that is initially engaging might become a more immersive pleasure once the user attains competence'.

Frasca's work on computer video games describes how some pleasures can be passive. In this case immersion would not be transformative: 'It is the interruptions that facilitate critique'.

The authors suggest 'players simultaneously operate from two perspectives - in-world and out-world'.

Pedagogy

The authors describe how the different states of consciousness, immersion and engagement, could be targeted as part of the approach to teaching and learning in gaming & virtual world environments.



Bartle, R. (2003) 'Not Yet, You Fools!' at Game Girl Advance (28.7.2003), http://www.gamegirladvance.com/archives/2003/07/28/not_yet_you_fools.html, accessed November 2008.

Carr, D. (2006) 'Play and Pleasure' in Computer Games: Text, Narrative and Play, Carr, D., Buckingham, D., Burn, A. and Schott, G. (eds), Cambridge: Polity, pp. 45-58.

McMahan, A. (2003) 'Immersion, Engagement and Presence: A Method for Analyzing 3-D Video Games', in Wolf, M. and Perron, B. (eds) The Video Game Theory Reader, New York: Routledge, pp. 67-86.


Tuesday 14 April 2009

Posamentier & Hervé Abdi

Neuropsychology Review, Vol. 13, No. 3, September 2003

Processing Faces and Facial Expressions

Mette T. Posamentier and Hervé Abdi

Bruce and Young model

for

Casting findings from behavioral and neuropsychological observations and the experimental approach into a unified framework, Bruce and Young (1986) developed a now classic model of face recognition expressed in terms of processing pathways and modules for recognition of familiar faces (see Fig. 1). They suggested that seven types of information can be derived from faces: pictorial, structural, visually derived semantics (age and sex), identity specific semantics, name, expression, and facial speech (movements of the lips during speech production) codes.

against

Endo et al. (1992) presented subjects with familiar and unfamiliar faces with three different facial expressions (neutral, happy, and angry). Familiar faces were recognized faster when displaying a "neutral" expression than when displaying a "happy" or an "angry" expression. In another experiment, faces of well-known people with "neutral" and "happy" expressions were used as familiar faces.

Neurophysiological evidence

animal studies

Single-cell recordings have also shed further light on the presumed independence of facial identity and expressions. Hasselmo et al. (1989) investigated the role of expression and identity in face-selective responses of neurons in the temporal visual cortex of macaque monkeys. Hasselmo et al. recorded the responses of 45 neurons to a stimulus set of pictures of three monkeys displaying three different facial expressions (calm, slight threat, and full threat). Fifteen neurons responded to identity independently of expression, and nine neurons responded to the different expressions independently of identity. Hasselmo et al. further found a differential response to the different expressions, with a stronger response to expressions of full threat. The neurons responsive to expressions were found primarily in the superior temporal sulcus, whereas neurons responding to identity were located in the inferior temporal gyrus. Although most single-cell responses have been recorded in the visual cortex, this is not the only area that has shown specific responses to faces. Leonard et al. (1985) found a population of cells in the amygdala of macaque monkeys to be responsive to faces as well.

human studies

The first single-cell recordings of face processing in humans were conducted by Ojemann et al. (1992) on 11 patients. These findings parallel Hasselmo et al.'s findings of differential activation patterns to identity and expressions in primates (Hasselmo et al., 1989).

ERP & Experimental/behavioral studies

Event-related potentials have also been used to study processing of facial expressions. In a study designed to examine both processing of facial identity and facial expressions in normal subjects, Münte et al. (1998) recorded ERPs from multiple scalp locations to assess the timing and distribution of effects related to the processing of identity and expressions. The earliest ERP differences in an identity matching task were found around 200 ms. The earliest ERP effects in an expression matching task came later, around 450 ms. The tasks also differed in their localization or scalp distribution: identity was associated with a fronto-central effect, whereas expression was associated with a centroparietal effect. Streit et al. (2000) evaluated differences in ERPs in emotional and structural face processing. In the emotional processing task, subjects were asked to identify which one of six facial expressions was present on the face. In the structural encoding task, blurred images of faces and five other categories were presented and subjects had to identify the category of each stimulus. Pictures of facial expressions and blurred facial images evoked similar event-related potentials around 170 ms, which should be specifically related to the processing of facial stimuli, thus replicating previous findings. However, the facial expression decoding task evoked a peak around 240 ms, whereas such a response was absent for the blurred facial images. According to the authors, this peak latency of 240 ms might then represent specific processes underlying the decoding of facial expressions. This 240-ms peak is different from the time sequence reported by Münte et al. for expression matching. The discrepancy may be due to task differences: a matching task is probably more effortful than an encoding task and would therefore take more time. Herrmann et al. (2002) replicated these findings.
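
For readers unfamiliar with the method, the sketch below shows in outline how an ERP of the kind discussed above is obtained: single-trial EEG is far too noisy to reveal the response, so many stimulus-locked epochs are averaged and peak latencies are read off the average. The array shapes, amplitudes, and the 170 ms peak position are invented for illustration; nothing here is taken from Münte et al. or Streit et al.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 600        # 600 samples at 1 kHz = 600 ms epoch
t = np.arange(n_samples)              # time in ms after stimulus onset

# Fake single trials: a small evoked peak near 170 ms buried in noise.
signal = 2.0 * np.exp(-((t - 170) ** 2) / (2 * 15 ** 2))
trials = signal + rng.normal(0, 5.0, size=(n_trials, n_samples))

# The ERP is the average across trials, time-locked to the stimulus;
# averaging cancels the noise and leaves the stimulus-locked response.
erp = trials.mean(axis=0)

# Peak latency: the kind of quantity compared across tasks above.
print(f"peak at ~{t[np.argmax(erp)]} ms")
```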

Neuropsychological evidence

As we have previously seen, lesion studies suggest that some specific brain structures or different neural substrates are involved in processing different types of face-related information. In fact, we are looking at a double dissociation: prosopagnosic patients are impaired at face recognition but have intact facial expression processing, and other subjects are impaired at facial expression processing but can still recognize faces. Results from studies of facial expression processing suggest that different emotions are also processed by different regions of the brain. For example, Adolphs et al. (1996) investigated facial expression recognition in a large number of subjects with focal brain damage. The authors hypothesized that cortical systems primarily responsible for recognition of facial expressions would involve discrete regions of higher-order sensory cortices. Recognition of specific emotions would depend on the existence of partially distinct systems. This predicts that different patterns of expression recognition deficits should depend upon the lesion site. In general, none of the subjects showed impairment in processing happy facial expressions, but several subjects displayed difficulty in recognizing negative emotions (especially fear and sadness). The authors propose that deficits in processing negative emotions can be due to the fact that the number of negative emotions is larger than the number of positive emotions. In fact, there is only one positive emotion, namely happiness. As such, a happy smile should be easily recognizable. It is also possible that negative emotions show a deficit because they can be easily confused, such as mistaking fear for surprise, or anger for disgust.

Two routes?

The study by Gorno-Tempini et al. (2001), already referred to in the discussion on processing of disgust, used both explicit and incidental tasks (expression recognition and gender decision, respectively) in processing of disgusted, happy, and neutral expressions. Regions of activation varied by both task and facial expression: the right neostriatum and the left amygdala showed higher activation in explicit processing than in incidental processing. The bilateral orbitofrontal cortex activated to explicit recognition of happy expressions. Common activation for all conditions was found in the right temporal occipital junctions, which would be indicative of visual and perceptual processing, and the left temporal and left inferior frontal cortex, which would be involved in semantic processing.

Clearly then, activation is both distributed and task-modulated. The observed difference in activation patterns between explicit and implicit processing, or what could also be viewed as conscious and unconscious processing, falls in line with LeDoux's concept of the "low and high roads" for processing emotional stimuli (LeDoux, 1996). Recall that the "low road" provides a crude representation of the stimuli to the amygdala, whereas the "high road" involves elaborated processing in the sensory cortex. By adding factors such as social context, prosody, gaze, and body language to the relatively simple visual perception of facial expressions, the conscious processing of facial expression can then be also subsumed under the concept of social cognition (see Adolphs, 1999, 2001, for recent reviews of the field of social cognition). For example, damage to the amygdala goes beyond pure impairments in facial expression recognition and appears to play a role in social decision making. This is illustrated by a study of three subjects with bilateral damage to the amygdala who were impaired at facial expression recognition and who deviated from normal controls in social judgments involving facial stimuli. The subjects with amygdala damage judged faces that normal subjects had deemed most untrustworthy and unapproachable as trustworthy and approachable (Adolphs, 1999; Adolphs et al., 1998). Adolphs et al. (2001) recently extended this finding to a group of high-functioning autistic subjects who were tested on the same material as the bilateral-amygdala-damaged patients. The autistic subjects showed normal social judgments from lexical stimuli, but, like the amygdala patients, showed abnormal social judgments regarding the trustworthiness of faces. These findings support the role of the amygdala in linking visual perception of socially relevant stimuli with the retrieval of social knowledge and subsequent social behaviors.

Summary: Facial Expressions

The goal of imaging studies of the perception of facial expressions has been to evaluate whether there are distinct neural substrates dedicated to processing emotions as displayed by different facial expressions. Evidence from behavioral and lesion studies does suggest that different structures are activated by different emotions. The role of the amygdala in processing fearful stimuli has been well established. Recall that the patients who presented lesions of the amygdala were impaired at processing negative emotions, with fear being most strongly affected. Patients with Huntington's disease display loss of amygdala and basal ganglia tissue, associated with impaired processing of fear, anger, and disgust in particular. However, no subject groups displayed any difficulties in processing happy facial expressions. This suggests differential processing of positive and negative emotions. So far, a number of neuroimaging studies have shown differential activation patterns in response to five of the six basic emotions displayed by facial expression. No studies have examined activation by surprised facial expressions. Activation of the amygdala by fearful expressions should come as no surprise, as reported by Morris et al. (1996, 1998) and Breiter et al. (1996). But note that the facial expressions of sadness and happiness also activated the amygdala. Amygdala activation has also been reported in a categorization task of unknown faces (Dubois et al., 1999). Thus, it is quite likely that the amygdala also responds to faces in general with the purpose of assessing the possible threatening valence of a stimulus (e.g., "Is this a friend or foe?"). Further, the results from the imaging studies of disgust implicate the basal ganglia structure as well as the insular cortex in the processing of this emotion (Gorno-Tempini et al., 2001; Phillips et al., 1997; Sprengelmeyer et al., 1997).

Interestingly, the two facial expressions for which consistent patterns of activation have been established are fear and disgust. These are emotions that are evoked by direct threats to the system. The majority of studies that have examined activation in response to different facial expressions have also found activation in other areas (e.g., prefrontal cortex, inferior frontal cortex, right medial temporal region, anterior cingulate cortex, inferior temporal cortex, and orbitofrontal cortex) in addition to the regions of particular interest.

The inferior frontal cortex is commonly activated in response to different facial expressions, and may serve as an area for integration or semantic processing of information contained in facial expressions. Additionally, activation in the orbitofrontal cortex in response to facial expressions is quite interesting because this region is implicated in numerous cognitive activities, including decision making, response selection and reward value, behavior control, as well as judgments about olfactory stimuli (Abdi, 2002; Blair and Cipolotti, 2000; Rolls, 2000). Attractive faces produced activation in the medial orbitofrontal cortex, and the response was enhanced if the face was smiling in addition to being attractive (O’Doherty et al., 2003). However, the areas mentioned above are also activated by other face-processing tasks, such as encoding and recognition. Therefore, the functionality or exact role of these additional areas of activation remains unclear.

CONCLUSION

We began this review by reporting a dissociation in the processing of facial identity and facial emotions as evidenced in certain patient populations. For example, prosopagnosic patients fail to recognize faces, but some show no impairment in processing facial expressions, whereas patients with amygdaloid lesions display problems in processing facial expressions but not facial identity. Although the Bruce and Young model states that identity processing follows a different route than facial expression processing, findings from experimental and behavioral studies alone failed to a certain degree to establish functional independence between the two subsystems, because the testing paradigms employed often confounded facial expression and facial identification tasks. From the field of neurophysiology, single-cell recordings in primates have identified differential neuronal responses to facial identity and expressions, as well as different brain areas. Event-related potential studies have established a different time course as well as different foci for identity and expression processing. Taken together, the findings from experimental, neuropsychological, and neurophysiological approaches strongly support the existence of dissociable systems for processing facial identity and facial expressions, and thus validate the Bruce and Young model, which postulates independent processing of identity and expression.

How successful has the newer brain imaging approach been in supporting the existence of separate systems dedicated to the processing of facial identity and facial expression? So far, activation of the fusiform gyrus has been well established in a number of face-processing tasks. The activation patterns in the fusiform gyrus in response to face perception do correlate with lesion sites resulting in an inability to recognize faces, as seen in prosopagnosic patients. However, the fusiform gyrus is also activated when facial expressions are processed. But because the majority of studies of facial expression processing used an ROI approach, additional areas of activation were largely not evaluated or at least not reported. The role of the amygdala in processing fearful facial expressions is also well established, but the amygdala has also been found to show a generalized response to faces. Thus, it becomes quite difficult to tell whether observed activation patterns are in response to facial identity or facial expression processing. Because the areas of neural activation in response to both face-processing tasks are quite extensive and show considerable overlap, we are still faced with the task of interpreting and assessing the functionality of these areas of activation. Going beyond the fusiform gyrus, the activation patterns in response to different face-processing tasks reveal to a large extent a task-dependent as well as distributed network.

Although neuroimaging studies of face perception are answering questions related to basic research into face processing, the results from neuroimaging studies of facial expression processing in patient populations show perhaps greater promise of having direct applications. As we have seen, deficits in facial expression processing have been found in a number of different disorders, ranging from Huntington’s disease, schizophrenia, depression, mania, OCD, sociopathy to autism. Gaining an understanding of the neural networks involved in the processing of affective stimuli may provide further insights into the spectrum of deficits associated with these disorders. For example, Sheline et al. (2001) reported normalized amygdala activation in a group of depressed patients with antidepressant treatment.

Wednesday 1 April 2009

CMC as democratic - theory, practical, culture

Issues for democracy and social identity in Computer Mediated Communication and Networked Learning (2002)
Hodgson, V.
in Networked Learning: Perspectives and Issues, Steeples, C., and Jones, C. (Eds), Springer-Verlag London Ltd

Claims made for CMC as a context for learning include practical, theoretical and cultural ones.

theoretical
critical pedagogy school of educational thinking
  • p230 'the proponents of this school advocate that higher levels of learning and knowing are achieved through critical reflexivity and dialogic processes'
  • ideal discourse (Boyd, 1987; Boshier, 1990)
  • Mezirow, 1988: 'is accepting of others as equal partners' - ? no visual cues to reflect status (whether appearance or other)
practical

Exploring ideas of equality further.

culture or cultural artefact?

But there is the question of what part social presence and/or social identity play.

idea of text carrying markers (Simeon Yates, 1997)


Need to sort out whether online communities are a separate culture or a cultural artefact. If they are a cultural artefact then the internet can be viewed as p232 'an agent of change', and the way it is used becomes the central question. With this view any online community is seen as an extension of existing social practices and patterns of interaction.

p233 'We claim that it is important to both recognise the intrinsic interplay between what the users bring with them from their own context of practice and experience, and what can be created/developed in the virtual environment and then in turn transferred back into the non-virtual world.'
p233 the author argues that 'at any moment in time in an online discussion what you see/read is what is "real"'