The face recognition model developed by Bruce and Young has eight key components and suggests how we process familiar and unfamiliar faces, including facial expressions. The diagram below shows how these components are interconnected. Structural encoding is the stage at which facial features and expressions are encoded. This information is then transmitted simultaneously along two different pathways to various units. One is expression analysis, where a person's emotional state is inferred from their facial features.
Facial speech analysis allows us to process auditory information alongside lip movements. This was demonstrated by McGurk (1976), who created two video clips, one with lip movements indicating ‘Ba’ and the other indicating ‘Fa’.
Both clips had the sound ‘Ba’ played over them. However, participants heard two different sounds: those watching the ‘Fa’ lip movements heard ‘Fa’, while the others heard ‘Ba’. This suggests that visual and auditory information are processed together. Other units include Face Recognition Units (FRUs) and Person Identity Nodes (PINs), where our previous knowledge of faces is stored. The cognitive system contains all additional information; for example, it takes into account your surroundings and who you are likely to see there.
fMRI scans carried out by Kanwisher et al. (1997) showed that the fusiform gyrus was more active during face recognition than during object recognition, which supports the idea that face recognition involves a separate processing mechanism. The model suggests that we process familiar and unfamiliar faces differently: familiar faces are processed using structural encoding, FRUs, PINs and name generation, whereas unfamiliar faces are processed using structural encoding, expression analysis, facial speech analysis and directed visual processing.
However, there is evidence from Young et al. suggesting that support for a double dissociation is weak. They studied 34 brain-damaged men and found only weak evidence for any difference between recognising familiar and unfamiliar faces. An issue with this study, and with the model itself, is the reliance on brain-damaged patients as evidence: the sample size is small, so it is hard to generalise to the wider population, and it is unclear whether the brain injury itself causes the result or whether the same would hold for healthy people.
Young, Hay, and Ellis (1985) conducted a study using participants with no medical issues. They asked people to keep a diary record of the problems they experienced in face recognition. Participants never reported putting a name to a face while knowing nothing else about that person. This supports the model, as it suggests that we cannot retrieve a person's name unless we first access other identity information about them.
Prosopagnosia is a condition in which a person cannot recognise familiar faces as wholes, only their individual features. The condition challenges the model: most patients had severe problems with facial expression as well as facial identity, which suggests these processes are not, in fact, separate.
The model can also be seen as reductionist, as it gives only a vague description of what the cognitive system does. However, there is research supporting the idea that there are two separate pathways for processing facial identity and facial expression. Humphreys, Avidan, and Behrmann (2007) studied three participants with developmental prosopagnosia. All three had a poor ability to recognise faces, but their ability to recognise facial expressions was similar to that of healthy individuals.
A study suggesting that the units of face recognition are separate is Bruyer et al. (1983), who investigated a patient unable to recognise familiar faces but able to understand their facial expressions. This implies that facial expression analysis and facial identity recognition are processed separately, supporting Bruce and Young's idea of separate units. Further support for the idea of separate components of face recognition comes from Campbell et al. (1986).
They found a prosopagnosic who could neither recognise familiar faces nor identify their facial expressions, yet who could still perform facial speech analysis. This suggests that facial speech analysis is a separate unit of face recognition.