A Reflection on Leslie Zebrowitz’s talk
Leslie Zebrowitz's talk brought her work, which deals with how deep-seated tendencies to overgeneralize distort our assessment of people, to bear on the issue of justice in the courtroom.
The premise of Zebrowitz’s work on overgeneralization is that the adaptive, useful reactions we have to features such as a baby’s cute face, the disfiguration caused by disease or genetic malady, or the familiarity of our family members, also distort our impression of people whose faces happen to have those characteristics.
It is essential for the survival of the species that we perceive babies as attractive, innocent, needing and deserving our care and protection. Babies' faces are round, with little noses and chins, big eyes, and high foreheads. Some adults, through no action of their own but simply due to the genetics of their bone structure, happen to have faces with baby-like features. Adults with "baby faces" are perceived as having the qualities of a baby: naive, guileless, and so on. Adults with mature faces (strong square chin, heavy brow, lower forehead) are perceived to be more aggressive, but also more responsible and intelligent. Although these perceived qualities have no basis in the subject's actions or character, arising instead from how their bones, hair, and skin developed, they may strongly affect the observer's impressions. These misapprehensions, which can result in costly errors for the perceiver, the subject, or both, persist because the evolutionary value of reacting with nurturance, acceptance, and patience to an actual baby outweighs the disadvantage of sometimes misperceiving adult characteristics because of the look of a face.
There are times, however, when this misapprehension can be extremely costly. One of them is in the court system. Dr. Zebrowitz, with her colleagues and students, has looked at "baby-face" perceptions and other overgeneralizations and how they affect justice in court. They found that these effects do influence court outcomes; for example, babyfaced defendants are more likely to be found innocent in cases involving intentional actions, and more likely to be found guilty in cases involving negligence. Justice is not, in fact, blind. But these findings highlight why the ideal of justice is portrayed as blind, that is, unbiased. Seeing the participants in a case introduces errors in person perception, errors which can be serious yet subtle and hard to address.
But can we make justice blind? What, ideally, should juries and judges be able to see of the defendant, plaintiff, or witnesses? Many feel that they need to see the accused in order to assess their words and gauge how believable they are. Yet if that judgment is based on distorted character perception, then perhaps justice is better served by being blind, by not seeing the participants in a case, no matter how important seeing them feels.
Would we be better off with a purely audio court? This is an intriguing idea. Studies show that people are better at distinguishing truth from deception when only listening to audio recordings, rather than watching video (with audio). Yet for other assessments, such as understanding the rapport and relationship between people, visual cues are more important. The issue is further complicated because the existence of cues to deception in a particular modality does not necessarily match the observer's use of those cues; indeed, the problem Dr. Zebrowitz's work addresses is people's erroneous assessments.
These are not just hypothetical questions. Virtual courts are being developed. As we design the technology for these computer-mediated justice systems, we must think about the impact of different ways of representing people, and how those representations might affect our ability to assess honesty, character, and relationships. The first impulse might be to use the most richly detailed media available; however, if one channel tends to make us less accurate in our perceptions, is its absence better?
Intriguingly, we can also investigate the possibility of excising only the problematic aspects of visual appearance. It can be very useful to see a person’s expressions and their gestures. A subject who sits calmly and attentively gives off a different impression than one who fidgets constantly and seems uninterested, distracted or annoyed, which is different again from the one who glares menacingly at the jury. These gestures and expressions, unlike the features that lead to overgeneralization problems, are not an artifact of genetic inheritance, but actions generated by the person, and are useful and relevant in assessing him or her. Could we create a court technology where all the key participants were given the same neutral appearance, but where gestures and expressions were still visible?
It is possible to use the actions of a person to drive the animation of an avatar. Would this be desirable in a future virtual court? It would eliminate many of the facial overgeneralization issues. It retains the voice, which is useful for detecting deception. The voice would usually reveal gender, and sometimes race and socioeconomic position. Such an experiment raises very interesting questions about which aspects of a person's physique are relevant to the court. For instance, we could make all these avatars equal in size. But in a case where much depends on knowing whether the plaintiff had reason to be scared of the defendant, seeing that the plaintiff is five feet tall and frail while the defendant is a hulking six feet plus makes a difference. Here, basic physical shape would need to be conveyed.
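To make the idea concrete, one standard technique for driving an avatar from a person's actions is a linear blendshape model: every participant is rendered from the same neutral base face, and only tracked expression weights animate it, so identity cues such as bone structure are stripped away while gestures and expressions remain. The Python sketch below is a hypothetical illustration; the mesh data, the expression set, and the tracker supplying the weights are all assumptions, not details of any actual virtual-court system.

```python
# Sketch of a linear blendshape model. Every participant is rendered from
# the SAME neutral base mesh, stripping away identity cues such as bone
# structure, while per-frame expression weights from a face tracker still
# animate gestures and expressions. All data here is invented toy data.
import numpy as np

def animate_neutral_avatar(base_vertices, expression_deltas, weights):
    """base_vertices:     (V, 3) neutral face mesh shared by everyone.
    expression_deltas: (E, V, 3) vertex offsets, one per expression
                       (smile, frown, raised brow, ...).
    weights:           (E,) activation values in [0, 1] from a tracker.
    Returns the deformed (V, 3) mesh for the current frame:
        v = base + sum_e w_e * delta_e
    """
    return base_vertices + np.tensordot(weights, expression_deltas, axes=1)

# Toy example: a 4-vertex "face" with two expressions.
rng = np.random.default_rng(0)
base = np.zeros((4, 3))
deltas = rng.normal(size=(2, 4, 3))  # stand-in expression shapes
frame = animate_neutral_avatar(base, deltas, np.array([0.8, 0.1]))
```

Because the base mesh carries all the identity information, swapping in a single shared base for every participant is exactly the "same neutral appearance" proposed above.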
This opens other questions. The underlying thought experiment is as follows: in the context of a trial, if we can control which aspects of a person's physical appearance are introduced into the proceedings, what elements should we include, and what should we omit? If a defendant is very baby-faced and innocent-looking, and is charged with, say, fraud or deceptive sales practices, is it not relevant to see what he or she looks like, in order to have insight into how the plaintiffs could have been taken in?
Another issue is familiarity. If someone looks like you, or like many people you know, you will believe them to be more trustworthy. This is an artifact of your perception, not a fact based on any trait of theirs. Similarly, if someone resembles an individual about whom you have strong feelings, you are likely to transfer those feelings to that person. Thus, the defendant who has the luck of looking like a juror's beloved uncle may be warmly defended by that juror. (See also Bailenson's work showing that making another person look more like you, by blending photographs, makes them seem more trustworthy.)
Zebrowitz notes that the effect of facial features is different in determining guilt and in sentencing: attractive individuals have been found to receive lighter sentences, though attractiveness did not affect whether they were found guilty. One could address this problem without high-tech solutions, by maintaining face-to-face court proceedings but having a separate judge determine sentencing. The sentencing judge would be given a written summary of the case, with all the relevant information, but would not see or hear the participants, thus working from all, and only, the information deemed legally relevant for that decision.
What, ideally, would we see of others in a given situation in order to have all the information we need about them, but without having the information that leads us to make erroneous assumptions? The court scenario shows vividly and with serious consequences the effects of our overgeneralizations and other error-prone impression formation processes – but this question has relevance far beyond the legal system. It has implications in how we perceive politicians, in how we determine who to trust. It makes us think about what aspects of a person’s appearance are germane to a discussion.
A Reflection on Jeremy Bailenson’s talk
A Reflection on Jeremy Bailenson's talk, "Transformed Social Interaction in Virtual Reality."
In virtual worlds, people appear in the guise of avatars. These graphical representations can closely resemble the user – but they can also be radically or subtly transformed. These transformations can be apparent to all inhabitants of the virtual world, or they can be tailored to individual perspectives. With a series of ingenious experiments, Jeremy Bailenson has been studying the social and psychological effects of transforming avatar behavior and appearance.
In one experiment, Bailenson and colleagues blended the face of a viewer with that of a presidential candidate. The blend was subtle enough that the viewer did not detect it, yet the induced resemblance was effective: candidates thus transformed were perceived to be more familiar, and therefore more desirable, than candidates who were not altered.
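For a sense of the mechanics, the simplest form of such blending is a weighted average of two aligned face photographs. The sketch below, using Python's Pillow library, is a minimal illustration under the assumption that the images are pre-aligned; the file names and the 40% viewer weight are invented for the example, and actual morphing studies use more sophisticated landmark-based warping.

```python
# Minimal sketch of photographic face blending: a pixel-wise weighted
# average of two pre-aligned, same-size face images. The file names and
# the 40% viewer weight are hypothetical; this illustrates the idea,
# not the morphing pipeline used in the actual study.
from PIL import Image

def blend_faces(candidate_path, viewer_path, viewer_weight=0.4):
    candidate = Image.open(candidate_path).convert("RGB")
    viewer = Image.open(viewer_path).convert("RGB")
    # Image.blend requires matching sizes; a real morph would align
    # facial landmarks rather than simply resizing.
    viewer = viewer.resize(candidate.size)
    # result = (1 - w) * candidate + w * viewer, applied per pixel
    return Image.blend(candidate, viewer, viewer_weight)

blended = blend_faces("candidate.jpg", "viewer.jpg")
blended.save("blended.jpg")
```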
In another, an avatar that had been programmed to maintain constant eye gaze spoke with the subject. Such persistent scrutiny is almost unheard-of in the real world: we typically look at the person we are talking to only about 40% of the time while speaking, and about 70% of the time while listening. The intense gaze discomfited the subjects but was, at the same time, persuasive.
Other experiments focused on how one’s avatar affected one’s own behavior and perceptions. Subjects with attractive avatars felt and acted friendlier than did those who saw themselves portrayed by ugly ones. Such effects occurred even when only the subject saw the transformation: people negotiated harder and more successfully when they saw their own avatar as taller than another, even though their negotiating partner did not see the transformed height.
This work raises many ethical questions and forces us to articulate what, exactly, we mean by an “honest” representation – and when we actually want it.
During an election, candidates play different roles in front of different audiences. They may appear in plaid shirts and jeans to address a group of farmers, and in jackets and ties for a dinner with corporate executives. They may even shift the cadences of their speech, e.g., adding a drawl in the South. Is this mimicry dishonest, or is it a reasonable way of expressing comradeship with the audience?
Mimicry is integral to our social interactions. In face-to-face conversation, we subconsciously express empathy and solidarity by mimicking each other's verbal cadences and movements. This mirroring not only reflects the empathy between the parties, it also helps form it. However, this ordinarily subconscious and socially beneficial behavior can be deliberately exploited by someone who wants to seem amicably like-minded, but who actually has ulterior, if not predatory, motives.
One of Bailenson’s experiments showed that avatars programmed to mimic the subject’s gestures were more persuasive and well-liked than avatars using naturalistic but non-mimicking gestures. Is mimicry carried out via avatar simply an extension of the same social adaptability, or is it fundamentally different?
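It is worth noting how little machinery such mimicry requires. A minimal sketch of one plausible implementation appears below: buffer the subject's tracked head pose and replay it on the avatar after a short delay, so the copying is not perceived as copying. The delay length, frame rate, and pose format here are assumptions for illustration, not details taken from the experiment.

```python
# Sketch of delayed-playback mimicry: the avatar echoes the subject's
# tracked head pose after a fixed delay. Delay, frame rate, and the
# (yaw, pitch, roll) pose format are assumptions for illustration.
from collections import deque

class DelayedMimic:
    """Replays the subject's tracked head pose on the avatar after a
    fixed delay, so the mimicry is not noticed as immediate copying."""

    def __init__(self, delay_frames=120):
        # 120 frames is roughly 4 seconds at an assumed 30 fps tracker.
        self.delay_frames = delay_frames
        self.buffer = deque()

    def update(self, subject_pose):
        """Call once per frame with the subject's current pose. Returns
        the pose the avatar should adopt this frame, or None while the
        buffer is still filling (avatar holds a neutral pose)."""
        self.buffer.append(subject_pose)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None

# Example: the avatar starts echoing the subject's nods after ~4 seconds.
mimic = DelayedMimic()
for pose in [(0.0, 0.1, 0.0)] * 200:
    avatar_pose = mimic.update(pose)
```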
I would argue that the automatic simulation of mimicry is fundamentally different, even from the most deliberate and calculated of face-to-face imitations. The candidate who copies the clothes and cadences of his or her potential voters, or the empathy-faking listener, must at least pay close attention to the actions of their audience and experience acting like them. When the mimicry is transposed to the virtual world, the person behind the avatar experiences no such affinity. The intimacy is purely illusory.
Yet before we relegate such socially smooth avatar behaviors to the category of inherently dishonest depictions, it is worth thinking about the alternative. If we are to have embodied online interactions (and the massive popularity of avatar-based places and games indicates they will be of growing importance), the avatars need to have some level of automatic behavior. If you want your avatar to move, you don't want to laboriously animate each step of its gait; you want it to have a walking algorithm. And, arguably, if you want your avatar to be social, you don't want to laboriously animate each nod and gesture; you want it to have social interaction algorithms. The question becomes: where do we want to draw the line? Where does an algorithm help make the avatar experience come alive, and where do we want the active engagement of the participants to control this behavior?
Part of what makes Bailenson’s research so thought-provoking is that in reacting to the prospect of automated persuasion, we are forced to confront our beliefs and practices around simulated empathy in our everyday life. From the cashier’s cheery “have a nice day” to the waiter’s praise of our discerning menu choices, we enjoy the warmth of virtual friendliness. Much of the vast “service industry” is built on imitation camaraderie, and we complain bitterly when it is absent. For society to function, much faking is needed.
Bailenson’s experiments also touch on the illusions inherent in our relationship with our self. People who saw their own avatar as taller than others did better in negotiations – even though only they saw the height differential. People who saw their own avatar as attractive were more confident and friendly. People who saw their avatar get visibly fatter when eating were more successful dieters. These have fascinating implications, both exciting and disturbing, for our increasingly simulated lives.
A Reflection on Stephen Kosslyn’s talk
A Reflection on Stephen Kosslyn's talk, "Brain Bases of Deception: Why We Probably Will Never Have a Perfect Lie Detector."
The premise of lie detection is that there is some perceivable physical sign when someone is lying. We have many beliefs about what these signs may be. For instance, we may want someone to look us in the eye when recounting a suspect tale, because we believe that direct eye contact is difficult for liars. We are confident of our ability to spot a lie, but in practice it is difficult: we are not nearly as good as we think we are (indeed, some studies show that many people do little better than chance). Being deceived can be quite harmful, so this is a serious problem.
People have long sought ways to determine who is lying. In the Middle Ages, suspects were put through ordeals, such as dipping an arm in boiling water; if the arm did not blister, they were considered innocent. In ancient China, suspects were made to chew dry rice and spit it out; if it remained dry, they were convicted.
While today we do not look for immunity from injury as a sign of innocence, modern polygraphs work on the same principle as chewing dried rice: they measure physical responses that are believed to accompany lying and to be beyond the liar's control. The rice test sought to detect dry mouth, a sign of nervousness; today's polygraphs often measure heart rate, respiration, and how sweaty one's hands are (galvanic skin response, or GSR).
Polygraphs are widely used in the intelligence community and in private companies, and they are well embedded in the popular imagination as a truth-telling mechanism. However, they are notoriously unreliable, and many jurisdictions now forbid or limit their use as evidence.
They are unreliable because they are based on side effects of the phenomenon they seek to measure. The investigator wants to know if the subject is telling a lie. However, what the polygraph measures are physiological symptoms of emotions that may accompany lying, i.e., stress, nervousness, and fear. It does not measure the lying itself, that is, the creation of a false narrative, but instead how one responds to the act of lying: not the thought process, but the symptoms that accompany a state of aroused feeling.
When using these measurements, false positives are a clear possibility: some people respond nervously because of the situation. And there are numerous false negatives. If someone does not feel guilt or fear about lying – at the extreme, the most pathological of liars – they will appear on a polygraph test to be truthful. In addition, there are many techniques for fooling the polygraph, such as putting a nail in your shoe and pressing on it to experience pain with each answer, including ones where you are known to be telling the truth, in order to alter your response profile.
Looking at these attempts to measure physical corollaries of deception, Stephen Kosslyn and his colleagues asked: Why look at the side effects of lying? Why not go right to the source – what is the brain doing? What can we see in brain activity that enables us to distinguish lying from truth-telling?
One of the main themes running through Professor Kosslyn's research has been the idea that processes we think of as a "single activity" are, when you see what the brain is doing, composed of multiple functions. For example, we think of identifying a thing (that's a chair, that's a bluejay) as a singular activity. But it turns out that identifying something at a general level (there's a bird) uses a different part of the brain than identifying it at a more specific level (there's a robin).
Lying, too, is a complex mental process, and different types of lies use different parts of the brain. For instance, Kosslyn and colleagues looked at the difference between spontaneous lies and lies that had been rehearsed, and found that the pattern of neural activity was quite different for each. Rehearsed lies, for example, elicit more activation in the right anterior frontal cortex, a part of the brain used in recalling episodic memory. And there are many other features that distinguish different types of lies. There are lies about yourself and lies about other people. There are lies that you think are justified and lies you think are wrong. All of these would not only have a different neural activation pattern than the corresponding truth; they might also all be distinct from one another. One of Kosslyn's main conclusions is that a reliable neural-imaging lie detector is unlikely: too many factors go into lying for it to have a single recognizable signature.
Significant individual differences further complicate the situation. Although a particular brain region may be activated in most people when they make a certain kind of lie, that pattern is not universal. Differences may be caused by differences in how people perceive a certain kind of lie. (Indeed, even in neutral tasks such as bird recognition, there are significant individual differences resulting from differing levels of expertise: for me, recognizing a bird as a robin is pretty specific, but for a serious bird watcher, robin is a general category.) Differences may also be the result of individual variations in brain function.
Kosslyn entitled his talk “Why We Probably Will Never Have a Perfect Lie Detector.” It’s a provocative title, in light of all the research now being done on finding the neural correlates of deception.
Is our understanding of the brain simply at an early stage now, and as it advances, will we indeed be able to look into someone's head and know if they are lying? There are people who are exceptional human lie detectors, able to distinguish truth-telling from deception at a remarkably high rate without the benefit of technological tools. What do they see in another's demeanor that reveals the lie? And if the lie can be read via its physical manifestations, should it not be equally if not more possible to read it from the brain itself?
Or is the problem with the concept of "lie" itself? Our folk understanding of lies distinguishes between social lies and harmful lies; research such as Kosslyn's reveals further cognitive complexity behind deception.
Finally, scientists are usually confident – if not over-confident – that the problem they are pursuing with their research will be solved. But here, with the claim that “we probably never will have a perfect lie detector” are we hearing a frank assessment of the difficulties facing a research agenda — or a note of hope? Social lies aside, deception is destructive—it is the tool of criminals and cheaters. But deception is also tied to privacy. If I can lie, I am in control of the contents of my mind. It is how we keep secrets. The perfect lie detector is the end of privacy.
Judith Donath is a Berkman Faculty Fellow and was the founding director of the Sociable Media Group at the MIT Media Lab. She is leading the Berkman Center for Internet & Society's Law Lab Spring 2010 Speaker Series: The Psychology and Economics of Trust and Honesty. Judith's work focuses on the social side of computing, synthesizing knowledge from fields such as graphic design, urban studies, and cognitive science to build innovative interfaces for online communities and virtual identities. She is known internationally for pioneering research in social visualization, interface design, and computer-mediated interaction.