Imagine you are on a jury: would you trust the testimony of a drunk eyewitness? In a surprising new study, Angelica Hagsand and her colleagues report that drunk witnesses performed just as reliably as sober witnesses at recognising a criminal in a line-up.
One hundred and twenty-three students (60 per cent women; average age 25) were split into three groups: one third drank plain orange juice for 15 minutes; another group spent the same time drinking enough orange juice mixed with vodka to reach a blood alcohol concentration (BAC) of .04 per cent; the final group drank enough of the vodka mix to reach a BAC of .07 per cent. This last value is just below the legal drink-driving limit in the UK and USA, and is roughly equivalent to an average-sized man drinking two or three shots of vodka in that time.
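For a rough sense of the arithmetic behind such dosing, the standard Widmark formula estimates how much alcohol a given body needs to reach a target BAC. The sketch below is illustrative only - the body weight, shot size and constants are assumptions, not the study's actual dosing protocol.

```python
# Rough Widmark estimate of the vodka needed to reach a target BAC.
# Illustrative assumptions only -- not the study's actual dosing protocol.

def vodka_shots_for_bac(target_bac_pct, body_weight_kg, sex="m"):
    """Estimate 4 cl shots of 40% ABV vodka needed to reach target_bac_pct (e.g. 0.07)."""
    r = 0.68 if sex == "m" else 0.55          # Widmark body-water constant
    grams_alcohol = (target_bac_pct / 100) * body_weight_kg * 1000 * r
    grams_per_shot = 40 * 0.40 * 0.789        # 40 ml x 40% ABV x density of ethanol (g/ml)
    return grams_alcohol / grams_per_shot

# An average-sized (80 kg) man aiming for .07 per cent BAC:
print(round(vodka_shots_for_bac(0.07, 80), 1))  # ~3.0 shots, ignoring metabolism over time
```

Reassuringly, that back-of-envelope figure lands on the same "two or three shots" estimate quoted above.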
Five minutes after they'd finished drinking, the participants watched a five-minute video of a man kidnapping two women at a bus stop, shot from the perspective of a witness. Close views of the man's face were available for a total of 31 seconds during the film.
A week later, the participants were invited back and completed a surprise identification task. In a sober state, they saw an 8-man line-up on a computer screen that either did, or did not, feature the kidnapper they'd seen in the film. The test administrator didn't know which condition participants were in, nor whether the culprit was present. Each participant had to say whether the culprit was in the line-up, answering "yes", "no, the culprit is not present" or "do not remember".
Although better than chance, overall performance was poor, consistent with a great deal of past research showing the limited accuracy of eyewitness memory. Crucially, for both the culprit-present and culprit-absent conditions, there was no difference in accuracy across the different participant groups. This result held even after excluding participants who answered that they could not remember.
In fact, although the difference was not statistically significant, the most intoxicated (.07 per cent BAC) participants actually achieved higher accuracy than the controls in both the culprit-present (47.1 per cent vs. 38.5 per cent) and culprit-absent (56.3 per cent vs. 41.7 per cent) line-up conditions. These results contradicted the researchers' expectations. Based on alcohol myopia theory (the idea that intoxication narrows attention, impairing memory for peripheral details), they predicted that the intoxicated participants would match the controls when the culprit was present, but would make more incorrect identifications when he was absent.
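To see why a gap like 47.1 vs. 38.5 per cent carries no statistical weight with 123 participants spread across three groups and two line-up conditions, consider a quick two-proportion test. The cell counts below are hypothetical numbers back-fitted to the reported percentages, not the paper's raw data.

```python
# Fisher's exact test on hypothetical counts consistent with the reported
# culprit-present percentages (47.1 vs. 38.5 per cent correct).
from scipy.stats import fisher_exact

high_bac = [8, 9]   # 8/17 correct ~ 47.1 per cent (assumed counts)
control = [5, 8]    # 5/13 correct ~ 38.5 per cent (assumed counts)

odds_ratio, p = fisher_exact([high_bac, control])
print(f"p = {p:.2f}")  # far above .05 -- a difference this size is within sampling noise
```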
The results also clash with the common sense beliefs of the general public that drunk witnesses will be less reliable than sober witnesses. Given how common it is for witnesses to crimes to be intoxicated, there's been surprisingly little research on how alcohol affects eyewitness performance. Sure, this study has its limitations - the alcohol levels used were only moderate and the crime wasn't a real event - but it makes a welcome contribution to a neglected research area.
_________________________________
Hagsand, A., Roos-af-Hjelmsäter, E., Granhag, P.A., Fahlke, C., and Söderpalm-Gordh, A. (2013). Do sober eyewitnesses outperform alcohol intoxicated eyewitnesses in a lineup? The European Journal of Psychology Applied to Legal Context: http://www.usc.es/sepjf/images/documentos/Volumen_5/hagsand%20et%20al.pdf
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Thursday, 31 January 2013
Wednesday, 30 January 2013
Extras
Eye-catching studies that didn't make the final cut:
A psychopath, a narcissist & a Machiavellian enter a room, who was perceived more favourably?
What is the most interesting part of the brain? (open access)
"Spending time on others increases one’s feeling of time affluence"
"Individuals can perceive suicidality from facial appearance with accuracy"
"Male sexual victimisation by women should be taken as seriously as that of women by men"
Brown-eyed men perceived as more trustworthy, mainly because of their rounder face-shape (see also)
Restless Legs Syndrome: a qualitative analysis of psychosocial suffering
Children's and parents' stereotypical views of scientists could put the children off science
About half the benefit of popular "z-drugs" for insomnia is due to the placebo effect
"Medical training is not an optimal environment for the psychological health of medical students"
Mediterranean diet associated with experiencing more positive/ fewer negative moods (cross-sectional study)
Police uniform colour makes no difference to aggression by, or against, police
_________________________________
Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Tuesday, 29 January 2013
Can you will yourself happier?
"Happiness is as a butterfly, which, when pursued, is always beyond our grasp, but which, if you will sit down quietly, may alight upon you." (Nathaniel Hawthorne)A key question for people hoping to improve their well-being is whether it is counter-productive to focus too hard on the end goal of being happier. Philosophers like John Stuart Mill have proposed that it is - he wrote that happiness comes to those who "have their minds fixed on some object other than their own happiness." A pertinent study published in 2003 by Jonathan Schooler and his colleagues (pdf) supported this idea: participants who listened to music with the intention of feeling happier actually ended up feeling less happy than others who merely listened to the music with no happiness goal.
But now a new study has come along which purports to show that trying deliberately to be happier is beneficial after all. Yuna Ferguson and Kennon Sheldon criticise the Schooler study on the basis that the music used - Stravinsky's Rite of Spring - is not conducive to happiness, and that's why it interfered with deliberate attempts to feel happier.
Ferguson and Sheldon had 167 participants spend 12 minutes listening either to Rite of Spring or an upbeat section from Rodeo by Copland. Crucially, half the participants were instructed to relax and observe their natural reactions to the music. "It is important that you do not try to consciously improve your mood," they were told. The other participants received the opposite instructions - "really focus on improving your mood".
Afterwards, two measures of mood were taken - one based on six words like "joyful"; the other a continuous measure of positive feelings. The participants who'd listened to the cheery music, and simultaneously tried to improve their mood, reported feeling in a more positive mood than the participants who'd merely listened to the upbeat music, and the participants who'd listened to the down-beat music, whether they strived to feel happier or not. This was despite the fact that the groups did not differ in how much they'd enjoyed the activity, or how "pressured" they'd felt to complete it.
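The logic of the design is a 2 (music: upbeat vs. downbeat) x 2 (instruction: try to feel happier vs. just listen) comparison, with the prediction riding on the interaction. Here is a minimal sketch of how such an analysis might run, on simulated data rather than the authors' dataset - the means and group sizes are invented.

```python
# 2x2 between-subjects design: only "upbeat music + trying to feel happier"
# is assumed to lift mood. Simulated data, not Ferguson and Sheldon's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for music in ("upbeat", "downbeat"):
    for instruction in ("try_to_feel_happier", "just_listen"):
        mean = 6.0 if (music == "upbeat" and instruction == "try_to_feel_happier") else 5.0
        for score in rng.normal(mean, 1.0, 40):
            rows.append({"music": music, "instruction": instruction, "mood": score})

df = pd.DataFrame(rows)
model = smf.ols("mood ~ music * instruction", data=df).fit()
print(anova_lm(model, typ=2))  # the music x instruction interaction carries the key prediction
```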
A second study was similar, but this time 68 participants visited a psych lab five times over two weeks to spend 15 minutes each time listening to music they'd chosen from a pre-selected list covering various genres from folk to hip-hop. Again, half the participants were instructed to focus on the music and not their own happiness (they were told that doing so could backfire); the other half were told to think a lot about their happiness and to try to feel happier (they were told that doing so is beneficial).
At the end of the two weeks, the group who'd deliberately tried to feel happier showed an improvement in their happiness levels compared with baseline; in contrast, the participants who'd merely focused on the music did not enjoy this benefit. This was despite both groups believing to the same degree that the intervention would make them happier, and both groups enjoying their music the same amount.
"The results suggest that without trying, individuals may not experience higher positive changes in their well-being," Ferguson and Sheldon concluded. "Thus practitioners and individuals interested in happiness interventions might consider the motivational mindset as an important facet of improving well-being."
Sceptical readers may not be so easily persuaded. Because there was no attempt to measure the participants' thought-processes, it's difficult to know how they interpreted and acted on the two forms of instruction. In the second study in particular, even though they were told there was no need, how do we know the participants didn't go to lengths outside of the lab to boost their happiness? From a statistical point of view, the first study lacks any measure of change in mood.
The second study is also complicated by the music-focus group starting out with, and ending up with, a slightly higher average happiness score than the happiness-focus group (albeit these differences were not statistically significant) - see graph. This raises the possibility of a ceiling effect for the music-focus group - perhaps they were already too happy for the intervention to make a difference.
_________________________________
Ferguson, Y., and Sheldon, K. (2013). Trying to be happier really can work: Two experimental studies. The Journal of Positive Psychology, 8 (1), 23-33 DOI: 10.1080/17439760.2012.747000
--Further reading--
Rare, profound positive events won't make you happy, but lots of little ones will
How happiness campaigns could end up making us sadder
How being happy can bring success
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Monday, 28 January 2013
Boost your memory for names by making a game of it
People often apologise for being useless at remembering names, as if it's some idiosyncratic quirk of theirs. In fact, with the exception of memory champs and their fancy mnemonics, plenty of research shows that most of the rest of us are especially hopeless at remembering people's names, as compared with other items of information, such as professions.
Names are arbitrary tags, and so we struggle to embed them in a web of meaningful connections. Research has even shown that people are poorer at remembering names than occupations when the same word is used (e.g. "Mr Carpenter" vs. "a carpenter"), presumably because treating the word as a name robs it of its wider semantic associations.
But that's not to say we can't do a better job of remembering names if we make more effort. And a new study suggests a way to tap this potential - turn the task of memorising names into a game.
Say you're off to a business lunch. You and a colleague could allocate points for any names you remember successfully afterwards. For example, you get 10 points for the boss, 5 points for her assistant, and a point apiece for the remainder of her team. The new research suggests that incentivising the memory challenge in this way will give you a far better chance of recalling the most important names. This could prove handy, helping you make a good impression in future meetings.
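As a toy illustration of the game itself (the names and stakes below are invented, not taken from the study):

```python
# The business-lunch name game as a trivial score-keeper.
# Higher stakes on the names that matter most.

stakes = {"Ms Boss": 10, "Assistant Ahmed": 5, "Colleague Cara": 1, "Colleague Carl": 1}

def score(recalled_names):
    """Total points for the names you managed to recall afterwards."""
    return sum(points for name, points in stakes.items() if name in recalled_names)

print(score({"Ms Boss", "Colleague Cara"}))  # 11 -- the high-value name dominates the score
```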
Sara Festini and her colleagues put this idea to the test in a study with 32 undergrads. Participants were presented with pictures of 28 male faces, each paired either with a name (e.g. "Mr Fisher") or an occupation ("fisher"). Each face-word pair had a designated point value - either 10 points or 1 point - and participants had two chances to study the series of faces and their attached information. A 3-minute filler task came next before the memory test began. The participants were shown the faces and had to recall the relevant name or occupation.
Overall, participants were much better at recalling occupations than names (47 per cent correct vs. 27 per cent), consistent with past research. But crucially, participants did a superior job at remembering high-value (10-point) names than low-value names (33 per cent vs. 21 per cent). It's as if the extra incentive prompted participants to go to greater lengths to process the names and encode them more deeply. In contrast, point values made no difference to success with recalling occupations, perhaps because they had already been embedded automatically into a web of semantic connections.
When the experiment was repeated with nonsense words used for names and occupations (e.g. "monid" for occupation and "Mr Monid" for a name), performance was equivalent for names and occupations because the occupations had now been stripped of their automatic meaningful connotations. This time, higher point values improved people's memory for both names and occupations, presumably because both were now able to benefit from the effort of extra processing and encoding.
For a third and final experiment, faces were again paired with standard names and occupations (carpenter / Mr Carpenter) but this time participants were required to rehearse the information for each face out loud, eight times. This was intended to interfere with any attempts at deeper processing, to see if that was the mechanism by which higher points led to better memory. And that's exactly what happened, with high-value names now recalled no more effectively than low-value names.
The researchers said their study revealed "a method to improve proper name learning", although they were cautious about how it might be applied in real life. "Future experiments are needed to determine if deliberately assigning high value to important names in everyday situations similarly boosts name recall as it did in a controlled lab setting."
But their main message remains upbeat: "Although names are difficult to remember," the researchers concluded, "actions can be taken to facilitate their recall."
_________________________________
Festini, S., Hartley, A., Tauber, S., and Rhodes, M. (2012). Assigned value improves memory of proper names. Memory, 1-11 DOI: 10.1080/09658211.2012.747613
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
5 chances to win The Myth of Martyrdom
This competition is now closed and the winners contacted. We have five copies to give away of The Myth of Martyrdom: What Really Drives Suicide Bombers, Rampage Shooters, and Other Self-Destructive Killers by Adam Lankford, kindly donated to us by Palgrave Macmillan.
_________________________________
From the publishers: "Drawing on an array of primary sources, including suicide notes, love letters, diary entries, and martyrdom videos, Lankford reveals the important parallels that exist between suicide bombers, airplane hijackers, cult members, and rampage shooters. The result is an astonishing account of rage and shame that will transform the way we think of terrorism forever." (Check out this review from Scientific American Mind.)

For your chance to win a copy, simply post a comment to this blog entry telling us which was your favourite Research Digest blog item of 2012 and why (there's a drop-down archive menu in the right-hand column). We'll pick five winners at close of play on Friday. Please leave a way for us to contact you by email. Good luck!
_________________________________
Friday, 25 January 2013
Link feast
In case you missed them - 10 of the best psychology links from the past week:
1. Rituals as social glue - In a fascinating piece for Nature, Dan Jones wrote about the role of different kinds of ritual in binding together small and large social groups.
2. Is TV better for babies than a book? Leading developmental psychologist (and former student of Piaget) Annette Karmiloff-Smith appeared on this week's The Life Scientific on BBC Radio 4.
3. The BBC published a wonderful short history of the ground-breaking Shenley (mental health) Hospital in Hertfordshire.
4. New Scientist reviewed The Face of Emotion: How botox affects our mood and relationships by Eric Finzi: "The scientific debate about the regulation of the emotions is as lively as ever, and this is a provocative and insightful contribution."
5. As usual lots of psychologists have answered the Edge annual question, which this year is: "What should we be worried about?".
6. Action for Happiness has published a new report: "Increasing Happiness by Understanding What People Want"
7. A recent Wellcome Collection event on pain is now available to watch on YouTube.
8. Fundamentally speaking, are humans good or bad? asks Tom Stafford in his latest column for BBC Future.
9. Stop everything else you are doing and concentrate on reading this: People who think they're great at multi-tasking are often unusually bad at it.
10. The Guardian published an edited extract from January First: A Child's Descent Into Madness And Her Father's Struggle To Save Her, by Michael Schofield.
_________________________________
Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Thursday, 24 January 2013
Glimpsed at last - the life of neuropsychology's most important patient
[Image: Leborgne's brain]
Broca was far from being the first person to propose that speech function is located in the frontal lobes, but crucially, the evidence from Leborgne helped him persuade the academic community. For centuries experts had believed mental functions were located in the brain's hollows; that the cortex ("husk" in Latin) was little more than a rind of tissue and blood vessels. Today, problems producing language are still termed Broca's aphasia in recognition of Broca's landmark contribution, although Broca in fact named Leborgne's problems aphémie (meaning “without speech”). The Greek term “aphasia” (also meaning “speechlessness”), adopted by medicine, was coined in Broca's day by the physician Armand Trousseau.
[Image: far more is known about Gage's life]
In contrast, Broca was careful to save Leborgne's brain for posterity. He decided against a full dissection, performing a surface examination only. Today the preserved organ is housed at the Musée Dupuytren in Paris, where Broca placed it. The brain has been scanned numerous times using modern methods (e.g. PDF), allowing detailed analysis of the location and nature of any lesions. We now know that the frontal lobe damage to Leborgne's brain was more extensive and deeper than Broca had realised based on his superficial examinations. But, contra the situation with Gage, while we are well-informed about Leborgne's brain, until now his identity and life story have remained largely mysterious. Broca's medical notes revealed little.
Thankfully, in a new paper, Cezary Domanski at Maria Curie-Sklodowska University in Poland has used archive registers in France to uncover hitherto unknown biographical detail about Monsieur Leborgne. Born in Moret-sur-Loing - the picturesque town that inspired Monet and other impressionists - the patient known as "Tan" (after the syllable that was virtually all he could utter) was in fact Louis Victor Leborgne. He was the son of Pierre Christophe Leborgne, a school teacher, and Margueritte Savard. He had three older siblings, Lucille, Pierre and Anne, and two younger siblings, Arsene and Louise.
Epileptic since his youth, Leborgne was hospitalised at age 30 when he lost the ability to speak. Unmarried, he ended up spending the remaining 21 years of his life in hospital. Before this incapacitation through illness, Domanski tells us, Leborgne worked as a "formier" in Paris - a skilled craftsman who made the wooden forms used by shoemakers in their work. Together with the information on Leborgne's family, this news corrects at least one historical myth. The oft-told idea that Leborgne "was an uneducated illiterate from the lower social class should once and for all be deemed erroneous," writes Domanski.
Based on his inquiries, the Polish historian offers an intriguing speculation - given that Leborgne's birthplace of Moret was home to several tanneries, Domanski wonders if his repeated utterance of Tan was somehow connected to childhood memories of the pretty town.
"One thing remains certain," Domanski concludes, "The memory of the disease and cause of death of 'Monsieur Leborgne' proved far more enduring than the story of his life, which was deemed irrelevant even when the patient was still alive. It is time for Louis Victor Leborgne to regain his identity ...".
In 2009, out of the blue, a photograph was discovered of Phineas Gage. I wonder if we will ever look upon an image of Leborgne?
_________________________________
Domanski CW (2013). Mysterious "Monsieur Leborgne": The Mystery of the Famous Patient in the History of Neuropsychology is Explained. Journal of the history of the neurosciences, 22 (1), 47-52 PMID: 23323531
--Further reading--
500 Francs Says Language Is Housed in the Frontal Lobes!
Speaking without Broca's area
Broca’s area: Nomenclature, anatomy, typology and asymmetry
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Wednesday, 23 January 2013
Can you fake your personality in a photo?
Say you wanted your Facebook pic or Twitter avatar to convey to the world that you have a particular personality type, different from how you really are. Would you be able to pose in such a way as to achieve this?
A new Finnish study led by Sointu Leikas has explored this idea by asking 60 participants (average age 27; 30 men) to pose for 11 photographs, from waist up, against a white background. The first photo was simply taken as they posed freely. Next, they posed the extremes of each of the Big Five personality traits, described to them as: stable and calm (low neuroticism); anxious and distressed (high neuroticism); extraverted and enthusiastic (high extraversion); reserved and quiet (low extraversion); intellectually curious / daydreamer (high openness); conventional / does not like change (low openness); empathic and warm (high agreeableness); critical and quarrelsome (low agreeableness); dependable and self-disciplined (high conscientiousness); and unorganised and careless (low conscientiousness). The participants weren't allowed to change their clothing or hair to create these various impressions.
The photos were subsequently shown to 401 observer participants (average age 26, 343 women). Each observer rated the personality of the person depicted in 11 photos, each showing a different posing participant in one of the various posing conditions. Attractiveness of the posers was controlled for in the analysis, given that attractiveness is known to influence perceptions of personality.
In many cases the participants succeeded in conveying specific personality impressions, even when different from their true personality scores, but this varied with the particular personality traits in question. They were most effective at portraying either high or low extraversion. Openness was also conveyed quite successfully. Past research has shown that high-scorers on Openness tend to look away from the camera, so it's possible the posing participants in the current study realised this, perhaps subconsciously.
The posers also had partial success with neuroticism and conscientiousness: they were rated as less conscientious when attempting to appear unorganised, compared with their neutral photo; and they were rated as more neurotic when they attempted to appear anxious, compared with their neutral photo. Attempts to appear stable or dependable and self-disciplined did not work so well. Another striking finding was the posers' complete failure to convey reliably either high or low agreeableness. Observer ratings were all over the place for this trait, perhaps due to a reluctance to score strangers on this dimension on the basis of such limited evidence. "From an applied perspective, this can be considered fortunate," the researchers said, "because it suggests that it is difficult to convey a false image of high Agreeableness."
Past research has largely focused on how much surprisingly accurate information we're able to garner from the briefest glimpses of other people's appearance. This new study is an interesting departure, turning the focus to how much we can control the perceptions we create. "With everyone a Google search away, first impressions of potentially important others are increasingly likely to be based on impressions of personality in photographs," Leikas and her colleagues said. "The results suggested that it is possible to control the impressions of personality in photographs. However, success ... depends on the particular trait in question."
A weakness of the study is the fact the posing participants were unable to modify their clothing or hairstyle, or use props or backgrounds. In real life, people hoping to look friendly on a Facebook profile, or entrepreneurial on LinkedIn, would surely alter their clothes and backdrop to help achieve their desired image.
_________________________________
Leikas, S., Verkasalo, M., and Lönnqvist, J. (2013). Posing personality: Is it possible to enact the Big Five traits in photographs? Journal of Research in Personality, 47 (1), 15-21 DOI: 10.1016/j.jrp.2012.10.012
Further reading: What your Facebook picture says about your cultural background.
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Tuesday, 22 January 2013
Lying becomes automatic with practice
Forget shifty eyes or fidgety fingers, psychology research has shown that these supposed signs of lying are unreliable. Liars easily learn to make eye contact, and anxiety can make honest people squirm nervously.
A more useful foundation for lie-detection is the simple fact that lying is more cognitively demanding than telling the truth. False answers therefore usually take slightly longer than honest responses, especially when a suspect is burdened with an extra mental challenge, such as telling their story backwards.
However, a new study suggests that the cognitive demands of lying can be reduced with practice. Xiaoqing Hu and his team presented 48 participants with dates, place names and other information and asked them to indicate with one of two button presses whether the information was self-relevant or not. In real life this would be equivalent to a suspect posing as a different person. Instructed to lie, the participants took longer to respond than when they told the truth, consistent with the well-established idea that lying is cognitively demanding.
Next, a third of the participants were told about the reaction time difference and given extensive practice (360 trials) at lying more quickly about the self-relevance of information. The requirement to get faster was made explicit because past research found lying practice without such an instruction was ineffective. On retesting, the trained participants in the current study no longer took more time to answer dishonestly compared with telling the truth. "Deception is malleable and its performance index can be voluntarily controlled to be more automatic," the researchers said.
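The underlying index throughout is the lie-minus-truth reaction-time gap, and the training effect amounts to that gap shrinking to nothing. A sketch with simulated reaction times chosen to mimic the reported pattern - these are not the study's data:

```python
# Lie-minus-truth RT gap before and after speed training.
# Simulated numbers mirroring the reported pattern, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 16  # roughly a third of the 48 participants (assumed)
truth_rt = rng.normal(650, 60, n)                    # ms, assumed baseline
lie_rt_before = truth_rt + rng.normal(120, 40, n)    # lying starts out slower
lie_rt_after = truth_rt + rng.normal(0, 40, n)       # practice erases the gap

print(ttest_rel(lie_rt_before, truth_rt).pvalue)  # reliably slower pre-training
print(ttest_rel(lie_rt_after, truth_rt).pvalue)   # no reliable difference after training
```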
Another group had no training but were told about the reaction time difference between lying and truth telling, and encouraged to answer faster when lying. They got faster at lying compared with a control group, but still they were speedier when being honest.
The researchers admitted that their sample size was small and a replication is needed with more participants. However, they said their results suggest that to be more realistic, lie-detection research based around the cognitive demands of lying should incorporate the effects of practice.
_________________________________
Hu, X., Chen, H., and Fu, G. (2012). A Repeated Lie Becomes a Truth? The Effect of Intentional Control and Training on Deception. Frontiers in Psychology, 3 DOI: 10.3389/fpsyg.2012.00488
Also published recently: Sorting the Liars from the Truth Tellers: The Benefits of Asking Unanticipated Questions on Lie Detection.
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Thursday, 17 January 2013
Time travel study shows my years take up more space than yours
We think about time in terms of space, as revealed in the way we talk about it ("looking ahead to our future"; "looking back at our past") and in the results from psychology experiments. For instance, people from countries that write left-to-right find it easier to associate future events with the right-hand side of space. This raises the question - how much space do we think time takes up, and is it always constant, or does it vary with how richly we represent particular episodes?
Brittany Christian and her colleagues have explored this question with a pair of fascinating studies. The first involved 60 participants (aged 18 to 32 years) marking the position of various birthdays on a 36cm horizontal line. The middle of the line was marked as "now". Some participants were asked to draw a mark to show the position of their 8th and 9th birthdays, their previous and next birthdays (relative to now), and their 58th and 59th birthdays, representing past, present and future periods of time, respectively. Other participants did the same for the equivalent birthdays of a best friend; others did it for a stranger who shared the same birth date as them.
The key result here was that participants indicating their own birthdays tended to leave a larger gap between their previous and next birthdays, as compared with participants who marked the birthdays of a best friend. In turn, those marking the birthdays of a best friend left a larger gap between previous and next birthdays than did participants who marked the birthday positions of a stranger. No contrasts emerged for gaps between birthdays in the further past or future (8th and 9th or 58th and 59th), perhaps because we represent such distant time more generically. The main result suggests that the more richly we encode past and future events in our minds, the more physical space we allocate to our mental representation of those periods.
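Put plainly, the dependent measure is nothing more than the distance in centimetres between two marks. The numbers below are invented to illustrate the reported ordering (self > friend > stranger), not taken from the paper:

```python
# Positions (cm from the left edge) of the "previous" and "next" birthday
# marks on a 36 cm line centred on "now" (18 cm). Illustrative values only.

marks = {
    "self":     {"previous": 14.0, "next": 22.5},
    "friend":   {"previous": 15.5, "next": 21.0},
    "stranger": {"previous": 16.5, "next": 19.5},
}

for target, m in marks.items():
    print(f"{target:8s} gap = {m['next'] - m['previous']:.1f} cm")  # self gets the widest gap
```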
A second study was similar but this time 63 participants (aged 18 to 32) controlled their passage backwards or forwards through time, an experience that was created using the optic flow of white dots on a computer screen. The contraction of the dots towards the centre creates the sensation of moving backwards, the expansion of dots outwards gives the feeling of travelling forwards. Using a keypad to control their motion, the participants were asked to move forward or backwards through time until they reached various birthdays up to ten years in the past or future. As in the first study, they did this either for their own birthdays, the birthdays of a friend, or a stranger.
Participants chose to travel through more space to reach birthday events in their own lives, compared with the space they travelled when journeying towards a friend's same birthdays. Participants traversed the least amount of space to reach those birthday dates in the life of a stranger. These differences were true for past events and future events, and they held across the full span of time that was investigated (i.e. a birthday up to ten years in the past or future).
Christian and her colleagues said their finding was consistent with construal level theory: "more space is allocated to events that feature self-relevant and episodically rich (i.e. more concrete) mental representations." Future research is needed to see if other factors also affect the amount of space allocated to temporal representations, such as factual knowledge or emotional salience. Would we allocate more space to time in the life of someone who we like but don't know well, or to someone we know well, but don't like?
The researchers said the "behavioural implications of these findings remains an important challenge for future". It's speculative for now, but they surmised that their results could help make sense of the planning fallacy - our tendency to underestimate how long things will take us, compared with others. The fact that we represent our own time with more space could tempt us to feel like we can get more done in a given period.
_________________________________
Christian BM, Miles LK, and Macrae CN (2012). Your space or mine? Mapping self in time. PloS one, 7 (11) PMID: 23166617
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Wednesday, 16 January 2013
Self-esteem is catching
You've probably heard about research showing how people take their work stress home with them, upsetting their partner's mood. Well, the good news is there's a positive equivalent. Angela Neff and her colleagues monitored the self-esteem of 102 working couples (mostly German academics) over five days, getting them to answer questions about their work-related self-esteem when they got home, and then again at bed-time (e.g. they rated their agreement with statements like "I feel as smart as others"). They also completed a measure of their general self-esteem levels and their empathy.
The key finding was that a person's after-work self-esteem was positively related to their partner's self-esteem at bed-time that same day. In other words, when one person came home with a spring in their step, feeling confident about their ability at work, this seemed to infect their partner, so that by bed-time, the partner too was feeling more confident about their own work-related abilities. This transfer of positive self-esteem was more pronounced when the receiving partner tended to be of lower self-esteem more generally and was more empathic.
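In analytic terms, a crossover effect like this amounts to one partner's after-work score predicting the other's bedtime score over and above the other's own after-work level. Here is a simulated single-level sketch of that idea - the authors' actual multilevel model is not reproduced, and all coefficients are assumptions:

```python
# Does partner A's after-work self-esteem predict partner B's bedtime
# self-esteem, controlling for B's own after-work level? Simulated couple-days.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500  # couple-days (assumed)
a_afterwork = rng.normal(0, 1, n)
b_afterwork = rng.normal(0, 1, n)
b_bedtime = 0.5 * b_afterwork + 0.2 * a_afterwork + rng.normal(0, 1, n)  # assumed crossover of 0.2

X = sm.add_constant(np.column_stack([b_afterwork, a_afterwork]))
print(sm.OLS(b_bedtime, X).fit().params)  # the last coefficient is the crossover effect
```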
"This finding supports the notion that work and family do not necessarily have to be conflicting domains," the researchers said, "but can also be mutually enriching." That said, there's a negative interpretation of these results. If one person comes to depend on their partner's after-work positivity, this can backfire on those occasions that the partner has a bad day.
Neff and her colleagues said their finding was important because research shows that high work-related self-esteem tends to go hand in hand with better job performance and satisfaction. In this sense, the psychological effect of one person's success at work can filter its way through to their partner, in turn boosting his or her work performance the next day. From a practical perspective, this shows managers how important and far-reaching the effects can be of providing their employees with positive (self-esteem enhancing) feedback.
On a more sober note, the study has several limitations, as the authors realise. This includes the fact that they have no idea of the mechanism by which self-esteem passes from one person to the other. And the sample was narrow, made up of highly-educated people who nearly all came from one field of work. It remains to be seen if the same result will be found with more diverse participants.
_________________________________
Neff, A., Sonnentag, S., Niessen, C., and Unger, D. (2012). What's mine is yours: The crossover of day-specific self-esteem Journal of Vocational Behavior, 81 (3), 385-394 DOI: 10.1016/j.jvb.2012.10.002
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Monday, 14 January 2013
"The end of history illusion" illusion
An intriguing study was published in Science recently with the eye-catching title "The End of History Illusion". It spawned plentiful uncritical media coverage, including the You Won't Stay The Same article in the NYT, and already the effect has its own Wikipedia entry. But does it really exist, or have hype and stylish presentation generated an illusory illusion?
Jordi Quoidbach, Dan Gilbert and Timothy Wilson claimed to have shown that people of all ages underestimate how much their personalities, preferences and values will change in the future. Ideally the psychologists would have asked people to predict the amount they'd change, and then followed them up later to compare the actual change with the prediction. They didn't do that.
Instead, their favoured approach was to survey over 7,000 people aged 18 to 68 and to ask some of them - "the reporters" - to recall their personality ten years ago, using a personality questionnaire, and to ask others - "the predictors" - to estimate their expected personality ten years hence. All participants also answered questions about their current personality. The key contrast was the amount of difference between recalled and current personality (documented by the reporters) and the amount of difference between predicted and current personality (documented by the predictors). Across every decade of life, the former was much larger. People thought their personality had changed more than they thought it would change.
But there are some serious problems here. First off, there's no actual data on real change. How do we know this is a delusion of prediction? Perhaps the reporters are wrong about how much they've changed. Quoidbach's team realised this, so they looked at data from a separate longitudinal study that really had followed people over time (once in 1995-96 and again in 2004-06), allowing examination of actual personality change.
Unfortunately, this MacArthur Foundation Survey of MidLife Development in the United States (MIDUS) used a completely different measure of personality. "Direct comparison of the data was not possible," Quoidbach et al confessed. Nonetheless, after comparing the amount of personality change in the MIDUS survey with amount of change estimated by reporters in their own study, the researchers said the change was "almost identical", and "substantially larger" than the change predicted by the predictors in their study.
But there's another serious issue with the new research, which was highlighted in a recent blog post: that is, the predictors might well have believed that their personality will change, but they didn't know which direction it will change in. Take extraversion. Perhaps they will become more shy, perhaps more gregarious? Without knowing the direction of change, the most accurate prediction is to report no change in extraversion from their current score.
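A quick simulation makes the point concrete: if change over a decade is equally likely in either direction, then predicting "no change" minimises expected error, even for a predictor who is certain that change will happen. All parameters below are arbitrary assumptions for illustration.

```python
# With symmetric, direction-unknown change, "no change" is the error-minimising
# prediction of a future trait score. Arbitrary illustrative parameters.
import numpy as np

rng = np.random.default_rng(2)
current = 3.5                                            # current extraversion, 1-5 scale
delta = rng.choice([-1, 1], 10_000) * rng.uniform(0.2, 1.0, 10_000)
future = current + delta                                 # symmetric change over a decade

print(np.mean(np.abs(future - current)))                 # error if you predict no change: ~0.6
print(np.mean(np.abs(future - (current + 0.6))))         # error if you predict +0.6 change: ~0.7
```

So honest predictors could fully expect to change and still, quite rationally, report scores close to their current ones.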
Quoidbach's group hinted at this problem, acknowledging that participants may not "feel confident predicting specific changes". To get around it, they asked over a thousand people a non-specific question: to estimate how much they felt they'd changed as a person, or how much they would change. This doesn't entirely deal with the direction-of-change issue, but again, "reporters" estimated more change than "predictors" predicted.
Yet another survey, with thousands more participants, asked them to recall or predict changes in their values over ten years - things like hedonism and security. Again, people reported more change in their values than they predicted. Recalled change was more modest in older participants, but the difference between recall and prediction occurred at every decade. However, this survey had the same problems as before - the issues of memory distortion and predicting bi-directional change - and in this case they went unaddressed.
Gathering yet more data, the researchers surveyed thousands of people about their preferences in the past compared with now, and asked others about their likely preferences in the future, compared with now - things like favoured holidays, taste in music and food. The idea of this study was to eliminate memory bias. Quoidbach et al reasoned that people have an accurate sense of their past preferences, although they didn't reference any data to support this claim. People recalling the past again appeared to have changed more than was anticipated by those looking ahead.
A final, far smaller study attempted to address the practical implications of our failure to anticipate how much we will change. One hundred and seventy adults were split into two groups. One stated their current favourite band and said how much they'd pay to see them in ten years' time. The other reported their favourite band from ten years ago and said how much they'd pay to see them today. There was a big difference: those looking ahead said they'd pay 61 per cent more to see their current favourite band than the retrospective group said they'd pay to see their former favourite band today. "Participants substantially overpaid for a further opportunity to indulge a current preference," the researchers said.
But was this a fair comparison? The retrospective group knew things about their former favourite band that the future group couldn't possibly know about their current favourite band. Perhaps members of the retrospective group didn't like their chosen band's follow-up albums; perhaps they had already seen them in concert many times over the previous ten years. Maybe the future group were optimistic about the future creative output of their current favourite band. In short, many other factors are at play here besides participants' beliefs about the stability of their own preferences.
Convinced by their own demonstrations of the End of History Illusion, Quoidbach et al speculated about its possible causes. One explanation, they suggested, is that the effect is a manifestation of our gilded view of ourselves: "most people believe that their personalities are attractive ...," the researchers wrote, "having reached that exalted state, they may be reluctant to entertain the possibility of change." But this explanation seems to ignore the self-doubt and pessimism that blight many people's lives. Quoidbach's other proposal is that the End of History Illusion is a manifestation of the fluency heuristic - because it's tricky to imagine change in the future, we infer that change is unlikely.
These speculations are premature. It would be easier to believe in the End of History Illusion if there were data on actual change, rather than a reliance on participants' memories of themselves in the past. Even if the effect is real, it's not clear whether this is a general bias about the future that extends beyond our beliefs about ourselves. What predictions would we make about the future change of other people? Or about human culture in general? Here's one thing that surely won't change - slick psychology papers with eye-catching titles getting lots of attention.
_________________________________
Quoidbach, J., Gilbert, D. T., and Wilson, T. D. (2013). The end of history illusion. Science, 339(6115), 96-98. PMID: 23288539
[thanks Thom Baguley for help with understanding some of the methodological issues]
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Friday, 11 January 2013
Link feast
In case you missed them - 10 of the best psychology links from the past week:
1. There's More to Life Than Being Happy - by Emily Esfahani Smith for the Atlantic. "Leading a happy life, the psychologists found, is associated with being a 'taker' while leading a meaningful life corresponds with being a 'giver'."
2. The latest issue of the Wellcome Trust's free Big Picture magazine is devoted to the brain and brain scanning techniques.
3. Psychologists discuss the cocktail party effect - BBC Radio 4.
4. How switching tasks maximises creative thinking.
5. The Examined Life by Stephen Grosz - stories and case studies from 25 years as a London psychoanalyst - was BBC Radio 4's Book of the Week. The book is "already something of a literary sensation", says the Guardian.
6. The 12 cognitive biases that prevent you from being rational.
7. Psychological insights into human attention from the skills of a pick-pocket - by Adam Green for the New Yorker.
8. The jobs with the most psychopaths.
9. Psychologists discuss disgust - BBC Radio 4.
10. New book that's definitely worth a look - The World Until Yesterday in which Jared Diamond explores what we can learn from traditional societies. Tom Payne, the Telegraph reviewer, said it left him "riveted and thinking hard". But Wade Davis for The Guardian was less enthusiastic: "the lessons [Diamond] draws from his sweeping examination of culture are for the most part uninspired and self-evident."
_________________________________
Post compiled by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Thursday, 10 January 2013
Social disapproval leads to longer lasting behaviour change than cash fines
If you want to influence people's behaviour by hitting them where it hurts, the wallet seems like a great place to aim. If a local authority began fining litter-bugs on the spot, you can bet the streets would soon be cleaner. But there's a downside. People begin to see the behaviour in terms of a cost-benefit analysis. They stop littering not because it's wrong, but because avoiding the fine makes financial sense. This approach can also encourage would-be litterers to perceive other people's tidy behaviour as a financial rather than a moral choice. None of this matters too much until the litter wardens go home. Absent the financial threat, litterers are quick to start dumping their junk again.
It's not realistic to have a constant method of enforcement in place. So what approach will be more effective than the time-limited influence of fines? A new study by Rob Nelissen and Laetitia Mulder suggests that social disapproval is more effective than financial sanctions because the effects linger on even after the threat of disapproval is lifted.
The researchers invited 84 participants to sit alone in computer cubicles and play several rounds of a public goods game in groups of four. Players started with 4 Euros each, and each round they chose how much to place into a group kitty. At the end of each round the kitty was multiplied by 1.5 and shared equally among the four players. The anti-social temptation is to free-load - to enjoy the proceeds from the group payout without contributing a fair share - as the payoff sketch below makes plain.
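To see why free-loading tempts, note that each Euro a player pays into the kitty comes back to that player as only 1.5/4 = 0.375 Euros. Here's a minimal illustrative sketch of one round's payoffs (the 1.5 multiplier and four-player groups are from the study; treating the 4 Euros as a per-round stake, and the example contributions, are simplifying assumptions):

MULTIPLIER = 1.5   # kitty multiplier, from the study
N_PLAYERS = 4      # group size, from the study
ENDOWMENT = 4.0    # Euros per player (assumed here to be a per-round stake)

def payoffs(contributions):
    # Each player's earnings: whatever they kept, plus an equal share of the multiplied kitty.
    pot = sum(contributions) * MULTIPLIER
    share = pot / N_PLAYERS
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([4, 4, 4, 0]))   # [4.5, 4.5, 4.5, 8.5] - the free-loader earns most
print(payoffs([4, 4, 4, 4]))   # [6.0, 6.0, 6.0, 6.0] - yet full cooperation is best for the group

Contributing nothing always pays best individually, while everyone contributing everything pays best collectively - exactly the tension the fines and frowns were introduced to resolve.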
One third of the groups played under threat of financial sanction. Each round, these participants saw the contributions of the other players and could choose to fine others one Euro. Players were also told about any fines they'd received. Another third of the groups played under threat of social disapproval. Each round participants could choose to direct their disapproval at other players. They also learned how many players had frowned on their tactics. There was also a control group with no sanction system in place.
For the first seven rounds, both financial threats and social disapproval threats increased fair play (compared with the control condition), but the effect of fines was greater. Crucially, after the seventh round, players in the sanction conditions were told there had been a computer malfunction and that the final three rounds would be played without any fining or disapproval system in place. The key test was how they'd behave once the threat of sanction was lifted.
With the sanctions gone, the cooperative play of participants in the financial condition fell away quickly, more so than in the social disapproval condition. Indeed, by the tenth and final round, players in the financial condition were playing in the same selfish style as players in the control condition. In contrast, players in the social disapproval condition continued to show signs of increased fair play.
"Clearly this has important implications for public policy," Nelissen and Mulder concluded. "Our results suggest that successful norm induction requires public communication of social (dis)approval, not only because it increases the salience and thus the effectiveness of norms in guiding behaviour, but also because it makes them stick even if people are not consistently punished for their violations."
_________________________________
Nelissen, R., and Mulder, L. (2013). What makes a sanction “stick”? The effects of financial and social sanctions on norm compliance. Social Influence, 8(1), 70-80. DOI: 10.1080/15534510.2012.729493
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.
Tuesday, 8 January 2013
The children of securely attached mothers think that God is close
Children's sense of God's closeness is apparently related, not to their mother's religiosity, but to their mother's attachment style - that is, whether the mother is calm and confident in her relationships or anxious and uncertain. Specifically, Rosalinda Cassibba and her colleagues have shown that the children of securely attached mothers (religious or not) tend to think that God is closer, as compared with the children of insecurely attached mothers.
The new finding builds on claims made last century by the British psychoanalyst John Bowlby that attachment style is transmitted from generation to generation (via non-genetic means). The new result suggests that a mother's attachment style affects the kind of attachment her child forms not just with her, but with other potential caring figures, even non-corporeal ones.
Seventy-one Italian mothers were classified as having a secure or insecure attachment style based on a short interview. They also answered questions about their religious faith and attachment to God. Meanwhile, their children (average age 7; 29 boys, 42 girls) were presented with a felt board depicting a child and were told six stories involving that child: some were neutral (e.g. he sits at a table and reads), others were more distressing (e.g. his dog died). For each story, the children were asked to place a felt character to show where God was located. The children were able to choose from 10 possible felt figures to represent God - most chose a man or a heart.
The children of securely attached mothers tended to place God nearer to the child in both the neutral and distressing stories. By contrast, the children's placement of God was unrelated to their mother's religiosity. Cassibba and her colleagues aren't certain of the mechanism underlying the relationship between mothers' attachment and children's sense of God's closeness, but they think it probably has to do with the mothers' care-giving style, or possibly a personality style shared between parent and child.
The study has a number of shortcomings, including the fact that the children were locating God's closeness to a fictional child, not to themselves. Also, we don't know how specific the effect is - would these children, for instance, have placed a teddy bear just as near? Notwithstanding these issues, the researchers said their finding "is important both for attachment research in developmental psychology and the psychology of religion."
Somewhat strangely for an article published in a psychology journal, Cassibba and her colleagues ended with the following advice for the pious: "A caregiver who desires his or her children to come to view God as a close relational partner may do well in placing a high priority on the children's own needs for support and closeness. The caregiver's implicit teachings about relationships is likely to be far more important than his or her explicit preaching about God."
_________________________________
Cassibba, R., Granqvist, P., and Costantini, A. (2013). Mothers' attachment security predicts their children's sense of God's closeness. Attachment & Human Development, 15(1), 51-64. PMID: 23216392
Post written by Christian Jarrett (@psych_writer) for the BPS Research Digest.