The Psychology of 'Making A Murderer'

Roughly ten hours into Making a Murderer, a Netflix documentary about the murder trial of Steven Avery, his defense lawyer Dean Strang delivers the basic thesis of the show:

“The forces that caused that [the conviction of Brendan Dassey and Steven Avery]…I don’t think they are driven by malice, they’re just expressions of ordinary human failing. But the consequences are what are so sad and awful.”

Strang then goes on to elaborate on these “ordinary human failing[s]”:

“Most of what ails our criminal justice system lies in unwarranted certitude among police officers and prosecutors and defense lawyers and judges and jurors that they’re getting it right, that they simply are right. Just a tragic lack of humility of everyone who participates in our criminal justice system.”

Strang is making a psychological diagnosis. He is arguing that at the root of injustice is a cognitive error, an “unwarranted certitude” that our version is the truth, the whole truth, and nothing but the truth. In the Avery case, this certitude is most relevant when it comes to forensic evidence, as his lawyers (Dean Strang and Jerry Buting) argue that the police planted keys and blood to ensure a conviction. And then, after the evidence was discovered, Strang and Buting insist that forensic scientists working for the state distorted their analysis to fit the beliefs of the prosecution. Because they needed to find the victim’s DNA on a bullet in Avery’s garage – that was the best way to connect him to the crime – the scientists bent protocol and procedure to make a positive match.

Regardless of how you feel about the details of the Avery case, or even about the narrative techniques of Making A Murderer, the documentary raises important questions about the limitations of forensics. As such, it’s a useful antidote to all those omniscient detectives in the CSI pantheon, solving crimes with threads of hair and fragments of fingerprints. In real life, the evidence is usually imperfect and incomplete. In real life, our judgments are marred by emotions, mental short-cuts and the desire to be right. What we see is through a glass darkly.

One of the scientists who has done the most to illuminate the potential flaws of forensic science is Itiel Dror, a cognitive psychologist at University College London. Consider an experiment conducted by Dror that featured five fingerprint experts with more than ten years of experience working in the field. Dror asked these experts to examine a set of prints from Brandon Mayfield, an American attorney who’d been falsely accused of being involved with the Madrid terror attacks. The experts were instructed to assess the accuracy of the FBI’s final analysis, which concluded that Mayfield's prints were not a match. (The failures of forensic science in the Mayfield case led to a searing 2009 report from the National Academy of Sciences. I wrote about Mayfield and forensics here.)

Dror was playing a trick. In reality, each set of prints was from one of the experts’ past cases, and had been successfully matched to a suspect. Nevertheless, Dror found that the new context – telling the forensic analysts that the prints came from the exonerated Mayfield - strongly influenced their judgment, as four out of five now concluded that there was insufficient evidence to link the prints. While Dror was careful to note that his data did “not necessarily indicate basic flaws” in the science of fingerprint identification – those ridges of skin remain a valid way to link suspects to a crime scene – he did question the reliability of forensic analysis, especially when the evidence gathered from the field is ambiguous. Here's an example of an ambiguous set of prints, which might give you a sense of just how difficult forensic analysis can be:

Similar results have emerged from other experiments. When Dror gave forensic analysts new contextual stories about fingerprints they’d already reviewed, such as informing them that a suspect had already confessed, the new context led two-thirds of the analysts to reverse their previous conclusions at least once. In an email, Dror noted that the FBI has replicated this basic finding, showing that in roughly 10 percent of cases examiners reverse their findings even when given the exact same prints. They are consistently inconsistent.

If the flaws of forensics were limited to fingerprints and other forms of evidence requiring visual interpretation, such as bite marks and hair samples, that would still be extremely worrying. (Fingerprints have been a crucial police tool since the French criminologist Alphonse Bertillon used a bloody print left behind on a pane of glass to secure a murder conviction in 1902.) But Dror and colleagues have shown that these same basic failings can even afflict the gold standard of forensic evidence: DNA.

The experiment went like this: Dror and Greg Hampikian presented DNA evidence from a 2002 Georgia gang rape case to 17 professional DNA examiners working in an accredited government lab. Although the suspect in question (Kerry Robinson) had pleaded not guilty, the forensic analysts in the original case concluded that he could not be excluded based on the genetic data. This testimony, write the scientists, was “critical to the prosecution.”

But was it the best interpretation of the evidence? After all, the DNA gathered from the rape victim was part of a genetic mixture, containing samples from multiple individuals. In such instances, the genetics become increasingly complicated and unclear, making forensic analysts more likely to be swayed by their presumptions and prejudices. And because crime labs are typically part of a police department, these biases are almost always tilted in the direction of the prosecution. For instance, in the Avery case, the analyst who identified the victim’s DNA on a bullet fragment had been explicitly instructed by a detective to find evidence that the victim had been “in his house or his garage.”

To explore the impact of this potentially biasing information, Dror and Hampikian had their 17 examiners analyze the DNA evidence from the Georgia gang rape case blind – the forensic scientists weren’t told about the grisly crime, or the corroborating testimony, or the prior criminal history of the defendants. Of these 17 experts, only one concurred with the original conclusion. Twelve directly contradicted the finding presented during the trial – they said Robinson could be excluded – and four said the sample itself was insufficient.

These inconsistencies are not an indictment of DNA evidence. Genetic data remains, by far, the most reliable form of forensic proof. And yet, when the sample contains biological material from multiple individuals, or when it’s so degraded that it cannot be easily sequenced, or when low numbers of template molecules are amplified, the visual readout provided by the DNA processing software must be actively interpreted by the forensic scientists. They are no longer passive observers – they have become the instrument of analysis, forced to fill in the blanks and make sense of what they see. And that’s when things can go astray.

The errors of forensic analysts can have tragic consequences. In Convicting the Innocent, Brandon Garrett’s investigation of more than 150 wrongful convictions, he found that “in 61 percent of the trials where a forensic analyst testified for the prosecution, the analyst gave invalid testimony.” While these mistakes occurred most frequently with less reliable forms of forensic evidence, such as hair samples, 17 percent of cases involving DNA testing also featured misleading or incorrect evidence. “All of this invalid testimony had something in common,” Garrett writes. “All of it made the forensic evidence seem like stronger evidence of guilt than it really was.”

So what can be done? In a recent article in the Journal of Applied Research in Memory and Cognition, Saul Kassin, Itiel Dror and Jeff Kukucka propose several simple ways to improve the reliability of forensic evidence. While their suggestions might seem obvious, they would represent a radical overhaul of typical forensic procedure. Here are the psychologists’ top five recommendations:

1) Forensic examiners should work in a linear fashion, analyzing the evidence (and documenting their analysis) before they compare it to the evidence taken from the target/suspect. If their initial analysis is later revised, the revisions should be documented and justified.

2) Whenever possible, forensic analysts should be shielded from potentially biasing contextual information from the police and prosecution. Here are the psychologists: “We recommend, as much as possible, that forensic examiners be isolated from undue influences such as direct contact with the investigating officer, the victims and their families, and other irrelevant information—such as whether the suspect had confessed.”

3) When attempting to match evidence from the field to that taken from a target/suspect, forensic analysts should be given multiple samples to test, and not just a single sample taken from the suspect. This recommendation is analogous to the eyewitness lineup, in which eyewitnesses are asked to identify a suspect among a pool of six other individuals. Previous research looking at the use of an "evidence lineup" with hair samples found that introducing additional samples reduced the false positive error rate from 30.4 percent to 3.8 percent.  

4) When a second forensic examiner is asked to verify a judgment, the verification should be done blindly. The “verifier” should not be told about the initial conclusion or given the identity of the first examiner.

5) Forensic training should also include lessons in basic psychology relevant to forensic work. Examiners should be introduced to the principles of perception (the mind is not a camera), judgment and decision-making (we are vulnerable to a long list of biases and foibles) and social influence (it’s potent).

The good news is that change is occurring, albeit at a slow pace. Many major police forces – including the NYPD, SFPD, and FBI – have started introducing these psychological concepts to their forensic examiners. In addition, leading forensic organizations, such as the US National Commission on Forensic Science, have endorsed Dror’s work and recommendations.

But fixing the practice of forensics isn’t enough: Kassin, Dror and Kukucka also recommend changes to the way scientific evidence is treated in the courtroom. “We believe it is important that legal decision makers be educated with regard to the procedures by which forensic examiners reached their conclusions and the information that was available to them at that time,” they write. The psychologists also call for a reconsideration of the “harmless error doctrine,” which holds that a conviction can stand despite trial errors, so long as those errors were not decisive to the verdict. Kassin, Dror and Kukucka point out that this doctrine assumes that all evidence is analyzed independently. Unfortunately, such independence is often compromised, as a false confession or other erroneous “facts” can easily influence the forensic analysis. (This is a possible issue in the Avery case, as Brendan Dassey’s confession – which contains clearly false elements and was elicited using very troubling police techniques – might have tainted conclusions about the other evidence. I've written about the science of false confessions here.) And so error begets error; our beliefs become a kind of blindness.

It’s important to stress that, in most instances, these failures of forensics don't require intentionality. When Strang observes that injustice is not necessarily driven by malice, he's pointing out all the sly and subtle ways that the mind can trick itself, slouching towards deceit while convinced it's pursuing the truth. These failures are part of life, a basic feature of human nature, but when they occur in the courtroom the stakes are too great to ignore. One man’s slip can take away another man’s freedom.

Dror, Itiel E., David Charlton, and Ailsa E. Péron. "Contextual information renders experts vulnerable to making erroneous identifications." Forensic Science International 156.1 (2006): 74-78.

Ulery, Bradford T., et al. "Repeatability and reproducibility of decisions by latent fingerprint examiners." PLoS ONE 7.3 (2012): e32800.

Dror, Itiel E., and Greg Hampikian. "Subjectivity and bias in forensic DNA mixture interpretation." Science & Justice 51.4 (2011): 204-208.

Kassin, Saul M., Itiel E. Dror, and Jeff Kukucka. "The forensic confirmation bias: Problems, perspectives, and proposed solutions." Journal of Applied Research in Memory and Cognition 2.1 (2013): 42-52.

The Danger of Safety Equipment

My car is a safety braggart. When I glance at the dashboard, there’s a cluster of glowing orange lights, reminding me of all the smart technology designed to save me from my stupid mistakes. Airbags, check. Anti-lock brakes, check. Traction control, check. Collision Alert system, check.

It’s a comforting sight. It might also be a dangerous one. In fact, if you follow the science, all of these safety reminders could turn me into a more dangerous driver. This is known as the risk compensation effect, and it refers to the fact that people tend to take increased risks when using protective equipment. It’s been found among bicycle riders (people go faster when wearing helmets), taxi drivers and children running an obstacle course (safety gear leads kids to run more “recklessly”). It’s why football players probably hit harder when playing with helmets, and why the fatality rate for skydivers has remained constant despite significant improvements in safety equipment. (When given better parachute technology, people tend to open their parachutes closer to the ground, leading to a sharp increase in landing deaths.) It’s why improved treatments for HIV can lead to riskier sexual behaviors, why childproof aspirin caps don’t reduce poisoning rates (parents are more likely to leave the caps off bottles) and why countries with mandatory seat belt laws shift the risk from drivers to pedestrians and cyclists. As John Adams, professor of geography at University College London, notes, “Protecting car occupants from the consequences of bad driving encourages bad driving.”

However, despite this surfeit of field data, the precise psychological mechanisms of risk compensation remain unclear. One of the lingering mysteries involves the narrowness of the effect. For instance, when people drive a car loaded with safety equipment, it’s clear that they often drive faster. But are they also more likely to ignore parking regulations? Similarly, a football player wearing an advanced helmet is probably more likely to deliver a dangerous hit with their head. But are they also more willing to commit a penalty? Safety equipment makes us take risks, but what kind of risks?

To explore this mystery, the psychologists Tim Gamble and Ian Walker at the University of Bath came up with a very clever experimental design. They recruited 80 subjects to play a computer game in which they had to inflate an animated balloon until it burst. The bigger the balloon, the bigger the payout, but every additional pump came with a risk: the balloon could pop, and then the player would get nothing.
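To make the risk-reward structure of the task concrete, here is a minimal sketch of a balloon-pumping game of this general kind. It is my own toy version, not the researchers' actual task, and the payout and burst parameters are invented for illustration.

```python
import random

def play_balloon_round(pumps, payout_per_pump=0.05, burst_range=20):
    """Simulate one round of a balloon-pumping task.

    The balloon bursts at a threshold drawn uniformly from 1..burst_range;
    if the player stops pumping before that point, they bank
    payout_per_pump for every pump. (All numbers are illustrative,
    not parameters from the study.)
    """
    burst_point = random.randint(1, burst_range)
    if pumps >= burst_point:
        return 0.0                      # balloon popped: no payout
    return pumps * payout_per_pump      # banked winnings

# A cautious player (5 pumps) vs. a risk-taker (15 pumps),
# averaged over many rounds:
for strategy in (5, 15):
    mean = sum(play_balloon_round(strategy) for _ in range(10_000)) / 10_000
    print(f"{strategy} pumps -> average payout ${mean:.3f}")
```

The point of the toy version is simply that every extra pump raises both the potential payout and the chance of losing everything, so how far someone pumps is a direct index of their appetite for risk.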

Here’s the twist: Before the subjects played the game, they were given one of two pieces of headgear to wear. Some were given a baseball hat, while others were given a bicycle helmet. They were told that the gear was a necessary part of the study, since the scientists had to track their eye movements. You can see the equipment below:

In reality, the headgear was a test of risk compensation. Gamble and Walker wanted to know how wearing a bike helmet, as opposed to a baseball hat, influenced risk-taking behavior on a totally unrelated task. (Obviously, a bike helmet won’t protect you from an exploding balloon on a computer screen.) Sure enough, those subjects randomly assigned to wear the helmet inflated the balloon to a much greater extent, receiving risk-taking scores that were roughly 30 percent higher. They also were more likely to admit to various forms of “sensation-seeking,” such as saying they “wish they could be a mountain climber,” or that they “enjoy the company of real ‘swingers.’” In short, the mere act of wearing a helmet that provided no actual protection still led people to act as if they were protected from all sorts of risks.

This lab research has practical implications. If using safety gear induces a general increase in risky behavior - and not just behavior directly linked to the equipment - then it might also lead to unanticipated dangers for which we are ill prepared. “This is not to suggest that the safety equipment will necessarily have its specific utility nullified,” write Gamble and Walker, “but rather that there could be changes in behavior wider than previously envisaged.” If anti-lock brakes lead us to drive faster in the rain, that’s too bad, but at least it’s a danger the technology is designed to mitigate. However, if the presence of the safety equipment also makes us more likely to text on the phone, then it might be responsible for a net reduction in overall safety, at least in some cases. Anti-lock brakes are no match for a distracted driver.

This doesn’t mean we're better off without air bags or behind the wheel of a Ford Pinto. But perhaps we should think of ways to lessen the salience of our safety gear. (At the very least, we should get rid of all those indicators on the dashboard.) Given the risk compensation effect, the safest car just might be the one that never tells you how safe it really is.

Gamble, Tim and Walker, Ian. “Wearing a Bicycle Helmet Can Increase Risk Taking and Sensation Seeking in Adults,” Psychological Science, 2016.

Do Genes Predict Intelligence? In America, It Depends on Your Class

There’s a longstanding academic debate about the genetics of intelligence. On the one side is the “hereditarian” camp, which cites a vast amount of research showing a strong link between genes and intelligence. This group can point to persuasive twin studies showing that, by the time children are 17 years old, their genetics explain approximately 66 percent of the variation in intelligence. To the extent we can measure smarts, what we measure is a factor largely dictated by the double helices in our cells.

On the other side is the “sociological” camp. These scientists tend to view differences in intelligence as primarily rooted in environmental factors, whether it’s the number of books in the home or the quality of the classroom. They cite research showing that many children suffering from severe IQ deficits can recover when placed in more enriching environments. Their genes haven’t changed, but their cognitive scores have soared.

These seem like contradictory positions, irreconcilable descriptions of the mind. However, when science provides evidence of two opposing theories, it’s usually a sign that something more subtle is going on. And this leads us to the Scarr-Rowe hypothesis, an idea developed by Sandra Scarr in the early 1970s and replicated by David Rowe in 1999. It’s a simple conjecture, at least in outline: according to the Scarr-Rowe hypothesis, the influence of genetics on intelligence depends on the socioeconomic status of the child. In particular, the genetic influence is suppressed in conditions of privation – say, a stressed home without a lot of books – and enhanced in conditions of enrichment. These differences have a tragic cause: when children grow up in poor environments, they are unable to reach their full genetic potential. The lack of nurture holds back their nature.
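To see what such a gene-by-environment interaction means in statistical terms, here is a toy simulation. It is entirely my own illustration, with invented effect sizes, not the model from any of the studies discussed here: test scores are generated so that the weight on a child's "genetic potential" grows with socioeconomic status, and the genetic share of the variance is then estimated separately for low-SES and high-SES children.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ses = rng.uniform(0, 1, n)      # socioeconomic status, 0 (poorest) to 1 (richest)
genes = rng.normal(0, 1, n)     # a stand-in "genetic potential" score
noise = rng.normal(0, 1, n)     # environment and measurement error, lumped together

# Toy Scarr-Rowe interaction: the genetic signal is expressed more
# fully as SES rises (coefficients invented for illustration).
score = (0.3 + 0.7 * ses) * genes + noise

for label, mask in [("low SES", ses < 0.25), ("high SES", ses > 0.75)]:
    r = np.corrcoef(genes[mask], score[mask])[0, 1]
    print(f"{label}: genes explain ~{r**2:.0%} of score variance")
```

Run it and the genetic share of the variance is several times larger in the high-SES group than in the low-SES group, even though every child's "genes" were drawn from the same distribution; the interaction term alone produces the pattern.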

You can see this relationship in the chart below. As socioeconomic status increases on the x-axis, the amount of variance in cognitive-test performance explained by genes nearly triples. Meanwhile, nurture generates diminishing returns. Although upper class parents tend to fret over the details of their parenting — Is it better to play the piano or the violin? Should I be a Tiger Mom or imitate those chill Parisian parents?— these details of enrichment become increasingly insignificant. Their children are ultimately held back by their genetics.

It’s a compelling theory, with significant empirical support. However, a number of studies have failed to replicate the Scarr-Rowe hypothesis, including a 2012 paper that looked at 8716 pairs of twins in the United Kingdom. This inconsistency has two possible explanations. The first is that the Scarr-Rowe hypothesis is false, a by-product of underpowered studies and publication bias. The second possibility, however, is that different societies might vary in how socioeconomic status interacts with genetics. In particular, places with a more generous social welfare system – and an educational system less stratified by income - might show less support for the Scarr-Rowe hypothesis, since their poor children are less likely to be cognitively limited by their environment.

These cross-country differences are the subject of a new meta-analysis in Psychological Science by Elliot Tucker-Drob and Timothy Bates. In total, the scientists looked at 14 studies drawn from nearly 25,000 pairs of twins and siblings, split rather evenly between the United States and other developed countries in Western Europe and Australia. The goal of their study was threefold: 1) measure the power of the Scarr-Rowe hypothesis in the United States 2) measure the power of the Scarr-Rowe hypothesis outside of the United States, in countries with stronger social-welfare systems and 3) compare these measurements.

The results should depress every American: we are the great bastion of socioeconomic inequality, the only rich country where many poor children grow up in conditions so stifling they fail to reach their full genetic potential. The economic numbers echo this inequality, showing how these differences in opportunity persist over time. Although America likes to celebrate its upward mobility, the income numbers suggest that such mobility is mostly a myth, with only 4 percent of people born into the bottom quintile moving into the top quintile as adults. As Michael Harrington wrote in 1962, “The real explanation of why the poor are where they are is that they made the mistake of being born to the wrong parents.” 

Life isn’t fair. Some children will be born into poor households. Some children will inherit genes that make it harder for them to succeed. Nevertheless, we have a duty to ensure that every child has a chance to learn what his or her brain is capable of. We should be ashamed that, in 21st century America, the effects of inequality are so pervasive that people on different ends of the socioeconomic spectrum have minds shaped by fundamentally different forces. Rich kids are shaped by the genes they have. Poor kids are shaped by the support they lack.

Tucker-Drob, Elliot and Bates, Timothy. “Large cross-national differences in gene x socioeconomic status interaction on intelligence,” Psychological Science. 2015.        

The Louis-Schmeling Paradox


Why do we go to sporting events?

The reasons to stay home are obvious. Here’s my list, in mostly random order: a beer costs $12, the view is better from my couch, die-hard fans can be scary, the price of parking, post-game traffic.

That’s a pretty persuasive list. And yet, as I stare into my high-resolution television, I still find myself hankering for the live event, jealous of all those people eating bad nachos in the bleachers, or struggling to see the basketball from the last row. It’s an irrational desire - I realize I should stay home, save money, avoid the hassle – but I still want to be there, at the game, complaining about the cost of beer.

In a classic 1964 paper, “The Peculiar Economics of Professional Sports,” Walter Neale came up with an elegant explanation for the allure of live sporting events. He began his discussion with what he called the Louis-Schmeling Paradox, after the epic pair of fights between heavyweights Joe Louis and Max Schmeling. (Louis lost the first fight, but won the second.) According to Neale, the boxers perfectly illustrate the “peculiar economics” of sports. Although normal business firms seek out monopolies – they want to minimize competition and maximize profits – such a situation would be disastrous for a heavyweight fighter. If Joe Louis had a boxing monopoly, then he’d have “no one to fight and therefore no income,” for “doubt about the competition is what arouses interest.” Louis needed a Schmeling, the Lakers needed the Celtics and the Patriots benefit from a healthy Peyton Manning. It’s the uncertainty that’s entertaining.

Professional sports leagues closely follow Neale’s advice. They construct elaborate structures to smooth out the differences between teams, instituting salary caps, revenue sharing and lottery-style drafts. The goal is to make every game a roughly equal match, just like a Louis-Schmeling fight. Because sports monopolies are bad for business, Neale writes that the secret prayer of every team owner should be: “Oh Lord, make us good, but not that good.”

It’s an alluring theory. It’s also just that: a theory, devoid of proof. Apart from a few scattered anecdotes – when the San Diego Chargers ran roughshod over the AFL in 1961, “fans stayed away” – Neale’s paper is all conjecture.

Enter a new study by the economists Brad Humphreys and Li Zhou, which puts the Louis-Schmeling paradox to the empirical test. Humphreys and Zhou decided to delve into the actual numbers, looking at the relationship between league competition, team performance and game attendance. Their data was drawn from the home games of every Major League Baseball team between 2006 and 2010, as they sought to identify the variables that actually made people want to buy expensive tickets and overpay for crappy food.

What did they find? In “Peculiar Economics,” Neale made a clear prediction: “The closer the standings, and within any range of standings, the more frequently the standings change, the larger will be the gate receipts.” (Neale called this the “League Standing Effect,” arguing that the flux of brute competition was a “kind of advertising.”) However, Humphreys and Zhou reject this hypothesis, as they find that changes in the standings, and the overall closeness of team win percentages, have absolutely no impact on game attendance. Uncertainty is overrated.

But the study isn’t all null results. After looking at more than 12,000 baseball games, Humphreys and Zhou found that two variables were particularly important in determining attendance. The first variable was win preference, which isn’t exactly shocking: fans are more likely to attend games in which the home team is more likely to win. If we’re going to invest time and money in a live performance, then we want the investment to pay off; we don’t want to be stuck in post-game traffic after a defeat, thinking ruefully of all the better ways we could have spent our cash.

The second variable driving ticket sales is loss aversion, an emotional quirk of the mind in which losses hurt more than gains feel good. According to Humphreys and Zhou, loss aversion compounds the pain of a team’s defeat, especially when we expected a win. This suggests that the impact of an upset is asymmetric, with surprising losses packing a far greater emotional punch than surprising wins. The end result is that the pursuit of competitive balance – a league in which upsets are common – is ultimately a losing proposition for teams trying to sell more tickets. Instead of seeking out parity, greedy owners should focus on avoiding home losses, as that tends to discourage attendance at games.*
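As a back-of-the-envelope illustration of that asymmetry, consider the sketch below. It is my own toy calculation, not a model from Humphreys and Zhou's paper; the loss-aversion coefficient of 2 is a conventional ballpark figure from the behavioral-economics literature, not an estimate from their data.

```python
def expected_enjoyment(p_home_win, gain=1.0, loss_aversion=2.0):
    """Expected value of attending a game under simple loss aversion.

    A home win feels like +gain, a home loss like -loss_aversion * gain.
    The 2.0 coefficient is a generic ballpark value, not a parameter
    estimated by Humphreys and Zhou.
    """
    return p_home_win * gain - (1 - p_home_win) * loss_aversion * gain

# A heavy home favorite vs. a perfectly balanced, "uncertain" matchup:
for p in (0.70, 0.50):
    print(f"P(home win) = {p:.2f} -> expected enjoyment {expected_enjoyment(p):+.2f}")
```

Under these toy numbers the coin-flip game actually has a negative expected payoff for the fan, while the lopsided home favorite comes out ahead, which is the intuition behind why competitive balance can be bad for the box office.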

And so a familiar tension is revealed in the world of sports. On the one hand, there are the collective benefits of equality, which is why sports leagues aggressively redistribute wealth and draft picks. (The NFL is a bastion of socialism.) However, the individual team owners have a much narrower set of interests – they just want to win, especially at home, because that's what sells tickets.

The fans are stuck somewhere in between. While Neale might have been mistaken about the short-term motives of attendance – we want Louis to knock the shit out of Schmeling, not witness a close boxing match – he was almost certainly correct about the long-term impact of a league with a severe competitive imbalance. (It’s exciting when the Warriors are 23-0; it’s a travesty if they go undefeated for an entire season.) Sports fans might not be drawn to uncertainty, but they sure as hell need hope. Just ask those poor folks packed into Wrigley Field.

*Baseball owners should also invest in pitching: teams that give up more runs at home also exhibit lower attendance.

Neale, Walter C. "The peculiar economics of professional sports: A contribution to the theory of the firm in sporting competition and in market competition." The Quarterly Journal of Economics (1964): 1-14.

Humphreys, Brad, and Li Zhou. "The Louis-Schmelling Paradox and the League Standing Effect Reconsidered." Journal of Sports Economics 16 (2015): 835-852.

When Should Children Start Kindergarten?

One of the fundamental challenges of parenting is that the practice is also the performance; childcare is all about learning on the job. The baby is born, a lump of need, and we’re expected to keep her warm, nourished and free of diaper rash. (Happy, too.) A few random instincts kick in, but mostly we just muddle our way through, stumbling from nap to nap, meal to meal. Or at least that’s how it feels to me.

Given the steep learning curve of parenting, it’s not surprising that many of us yearn for the reassurance of science. I want my sleep training to have an empirical basis; I’m a sucker for fatty acids and probiotics and the latest overhyped ingredient; my bookshelf groans with tomes on the emotionally intelligent toddler.

Occasionally, I find a little clarity in the research. The science has taught me about the power of emotional control and the importance of secure attachments. But mostly I find that the studies complicate and unsettle, leaving me questioning choices that, only a generation or two ago, were barely even a consideration. I’m searching for answers. I end up with anxiety.

Consider the kindergartner. Once upon a time, a child started kindergarten whenever they were old enough to make the age cutoff, which was usually after they turned five. (Different states had slightly different requirements.) However, over the last decade roughly 20 percent of children have been held back from formal schooling until the age of six, a process known as “redshirting.” The numbers are even higher for children in “socioeconomically advantaged families.”

What’s behind the redshirting trend? There are many causes, but one of the main factors has been research suggesting that delaying a child’s entry into a competitive process offers lasting benefits. Most famously, researchers have demonstrated that Canadian hockey players and European soccer stars are far more likely to have birthdays at the beginning of the year. The explanation is straightforward: because these children are slightly older than their teammates born later in the year, they get more playing time and better coaching. Over time, this creates a feedback loop of success.

However, the data has been much more muddled when it comes to the classroom. Athletes might benefit from a later start date, but the case for kindergartners isn’t nearly as clear. One study of Norwegian students concluded that the academic benefits were a statistical illusion: older children score slightly higher on various tests because they’re older, not because they entered kindergarten at a later date. Other studies have found associations between delayed kindergarten and educational attainment – starting school later makes us stay in school longer – but no correlation with lifetime earnings. To make matters even more complicated, starting school late seems to have adverse consequences for boys from poorer households, who are more likely to drop out of high school once they reach the legal age of school exit.

Are you confused? Me too, and I’ve got a kid on the cusp of kindergarten. To help settle this debate, Thomas Dee of Stanford University and Hans Henrik Sievertsen at the Danish National Centre for Social Research decided to study Danish schoolchildren. They chose Denmark for two reasons: 1) the country had high quality longitudinal data on the mental health of its students and 2) children in Denmark are supposed to begin formal schooling in the calendar year in which they turn six. This rule allowed Dee and Sievertsen to compare children born at the start of January with children born just a few days before in December. Although these kids are essentially the same age, they ended up in different grades, making them an ideal population to study the impact of a delayed start to school.
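A minimal sketch of that school-entry rule shows why two children born only days apart end up a full school year apart. This is my own illustration; the August 1 start date is an assumption made for the example, not a detail taken from the paper.

```python
from datetime import date

def school_start_year(birthdate):
    """Danish-style rule, as described above: children begin formal
    schooling in the calendar year in which they turn six."""
    return birthdate.year + 6

def age_at_entry(birthdate):
    # Assume the school year begins on August 1 (illustrative only).
    start = date(school_start_year(birthdate), 8, 1)
    return (start - birthdate).days / 365.25

# Two children born four days apart land in different cohorts:
for kid in (date(2009, 12, 29), date(2010, 1, 2)):
    print(f"born {kid} -> starts school in {school_start_year(kid)} "
          f"at age {age_at_entry(kid):.1f}")
```

The late-December child starts school at roughly five and a half, while the early-January child starts at roughly six and a half, even though the two are essentially the same age.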

After comparing these two groups of students, Dee and Sievertsen found a surprisingly large difference in their mental health. According to the scientists, children who were older when they started kindergarten – they fell on the January side of the calendar – displayed significant improvements in mental health, at both age 7 and age 11. In particular, the late starters showed much lower levels of inattention and hyperactivity, with a one-year delay leading to a 73 percent decrease in reported problems.

As the scientists note, these results jibe with a large body of research in developmental psychology suggesting that children benefit from an extended period of play and unstructured learning. When a child is busy pretending – when they turn a banana into a phone or a rock into a spaceship – they are practicing crucial mental skills. They are learning how to lose themselves in an activity and sustain their own interest. They are discovering the power of emotion and the tricks of emotional control. “To become mature,” Nietzsche once said, “is to recover that sense of seriousness which one had as a child at play.” But it takes time to develop that seriousness; the imagination cannot be rushed.

Does this mean I should hold back my daughter? Is it always better to start kindergarten at a later date? Probably not. Dee and Sievertsen are careful to note that the benefits of a later start to school were distributed unevenly among the Danish children. As a result, the scientists emphasize the importance of taking the individual child into account when making decisions about when to start school. Where is he on the developmental spectrum? Has she had a chance to develop her play skills? What is the alternative to kindergarten? As Dee noted in The Guardian, “the benefits of delays are unlikely to exist for children in preschools that lack the resources to provide well-trained staff and a developmentally rich environment.”

And so we’re left with the usual uncertainty. The data is compelling in aggregate – more years of play leads to better attention skills – but every child is an n of 1, a potential exception to the rule. (Parents are also forced to juggle more mundane concerns, like money; not every family can afford the luxury of redshirting.) The public policy implications are equally complicated. Starting kindergarten at the age of six might reduce attention problems, but only if we can replace the academic year with high-quality alternatives. (And that’s really hard to do.)

The takeaway, then, is that there really isn’t one. We keep looking to science for easy answers to the dilemmas of parenting, but mostly what we learn is that such answers don’t exist. Childcare is a humbling art. Practice, performance, repeat.

Dee, Thomas and Hans Henrik Sievertsen. "The Gift of Time? School Starting Age and Mental Health,” NBER Working Paper No. 21610, October 2015.

The Root of Wisdom: Why Old People Learn Better

In Plato’s Apology, Socrates defines the essence of wisdom. He makes his case by comparison, arguing that wisdom is ultimately an awareness of ignorance. The wise man is not the one who always gets it right. He’s the one who notices when he gets it wrong: 

I am wiser than this man, for neither of us appears to know anything great and good; but he fancies he knows something, although he knows nothing; whereas I, as I do not know anything, so I do not fancy I do. In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know. 

I was thinking of Socrates while reading a new paper in Psychological Science by Janet Metcalfe, Lindsey Casal-Roscum, Arielle Radin and David Friedman. The paper addresses a deeply practical question, which is how the mind changes as it gets older. It’s easy to complain about the lapses of age: the lost keys, the vanished names, the forgotten numbers. But perhaps these shortcomings come with a consolation. 

The study focused on how well people learn from their factual errors. The scientists gave 44 young adults (mean age = 24.2 years) and 45 older adults (mean age = 73.7 years) more than 400 general-information questions. Subjects were asked, for instance, to name the ancient city with the hanging gardens, or to remember the name of the woman who founded the American Red Cross. After answering each question, they were asked to rate, on a 7-point scale, their “confidence in the correctness of their response.” They were then shown the correct answer. (Babylon, Clara Barton.) This phase of the experiment was done while subjects were fitted with an EEG cap, a device able to measure the waves of electrical activity generated by the brain.

The second part of the experiment consisted of a short retest. The subjects were asked, once again, to answer 20 of their high-confidence errors – questions they thought they got right but actually got wrong – and 20 low-confidence errors, or those questions they always suspected they didn’t know.

The first thing to note is that older adults did a lot better on the test overall. While the young group only got 26 percent of questions correct, the aged subjects got 41 percent. This is to be expected: the mind accumulates facts over time, slowly filling up with stray bits of knowledge.

What’s more surprising, however, is how the older adults performed on the retest, after they were given the answers to the questions they got wrong. Although current theory assumes that older adults have a harder time learning new material - their semantic memory has become rigid, or “crystallized” – the scientists found that the older subjects performed much better than younger ones during the second round of questioning. In short, they were far more likely to correct their errors, especially when it came to low-confidence questions:

Why did older adults score so much higher on the retest? The answer is straightforward: they paid more attention to what they got wrong. They were more interested in their ignorance, more likely to notice what they didn’t know. While younger subjects were most focused on their high-confidence errors – those mistakes that catch us by surprise – older subjects were more likely to consider every error, which allowed them to remember more of the corrections. Socrates would be proud.

You can see these age-related differences in the EEG data. When older subjects were shown the correct answer in red, they exhibited a much larger P3a amplitude, a signature of brain activity associated with the engagement of attention and the encoding of memory.

Towards the end of their paper, the scientists try to make sense of these results in light of research documenting the shortcomings of the older brain. For instance, previous studies have shown that older adults have a harder time learning new (and incorrect) answers to math problems, remembering arbitrary word pairs, and learning “deviant variations” to well-known fairy tales. Although these results are often used as evidence of our inevitable mental decline – the hippocampus falls apart, etc. – Metcalfe and colleagues speculate that something else is going on, and that older adults are simply “unwilling or unable to recruit their efforts to learn irrelevant mumbo jumbo.” In short, they have less patience for silly lab tasks. However, when senior citizens are given factually correct information to remember – when they are asked to learn the truth – they can rally their attention and memory. The old dog can still learn new tricks. The tricks just have to be worth learning.

Metcalfe, Janet, Lindsey Casal-Roscum, Arielle Radin, and David Friedman. "On Teaching Old Dogs New Tricks." Psychological Science (2015)

The Upside of a Stressful Childhood


Everyone knows that chronic stress is dangerous, especially for the developing brain. It prunes dendrites and inhibits the birth of new neurons. It shrinks the hippocampus, swells the amygdala and can lead to an elevated risk for heart disease, depression and diabetes. At times, these persuasive studies can make it seem as if the ideal childhood is an extended vacation, shielded from the struggles of life.

But the human brain resists such simple prescriptions. It is a machine of tradeoffs, a fleshy computer designed to adapt to its surroundings. This opens the possibility that childhood stress - even of the chronic sort - might actually prove useful, altering the mind in valuable ways. Stress is a curse. Except when it's a benefit.

A new paper by a team of scientists at the University of Minnesota (Chiraag Mittal, Vladas Griskevicius, Jeffry Simpson, Sooyeon Sung and Ethan Young) investigates this surprising hypothesis. The researchers focused on two distinct aspects of cognition: inhibition and shifting.

Inhibition involves the exertion of mental control, as the mind overrides its own urges and interruptions. When you stay on task, or resist the marshmallow, or talk yourself out of a tantrum, you are relying on your inhibitory talents. Shifting, meanwhile, involves switching between different trains of thought. “People who are good at shifting are better at allowing their responses to be guided by the current situation rather than by an internal goal,” write the scientists. These people notice what’s happening around them and are able to adjust their mind accordingly. Several studies have found a correlation between such cognitive flexibility and academic achievement.

The researchers focused on these two cognitive functions because they seemed particularly relevant in stressful childhoods. Let’s start with inhibition. If you grow up in an impoverished environment, you probably learn the advantages of not waiting, as delaying a reward often means the reward will disappear. In such contexts, write the scientists, “a preference for immediate over delayed rewards…is actually more adaptive.” Self-control is for suckers.

However, the opposite logic might apply to shifting. If an environment is always changing – if it’s full of unpredictable people and intermittent comforts – then a child might become more sensitive to new patterns. They could learn how to cope by increasing their mental flexibility. 

It’s a nice theory, but is it true? To test the idea, the Minnesota scientists conducted a series of experiments and replications. The first study featured 103 subjects, randomly divided into two groups. The first group was given a news article about the recent recession and the lingering economic malaise. The second group was given an article about a person looking for his lost keys. The purpose of this brief intervention was to induce a state of uncertainty, at least among those reading about the economy.

Then, all of the subjects were given standard tasks designed to measure their inhibition and shifting skills. The inhibition test asked subjects to ignore an attention-grabbing flash and look instead at the opposite side of the screen, where an arrow was displayed for 150 milliseconds. Their score was based on how accurately they were able to say which way the arrow was pointing. Switching, meanwhile, was assessed based on how quickly subjects were able to identify colors or shapes after switching between the categories.  

Once these basic cognitive tests were over, the scientists asked everyone a few questions about the unpredictability of their childhood. Did their house often feel chaotic? Did people move in and out on a seemingly random basis? Did they have a hard time knowing what their parents or other people were going to say or do from day-to-day?

The results confirmed the speculation. If people were primed to feel uncertain – they read that news article about the middling economy – those who reported more unpredictable childhoods were significantly worse at inhibition but much better at shifting. “To our knowledge, these experiments are the first to document that stressful childhood environments do not universally impair mental functioning, but may actually enhance certain cognitive functions in the face of uncertainty,” write the scientists. “These findings, therefore, suggest that the cognitive functioning of adults reared in more unpredictable environments may be better conceptualized as adapted rather than impaired.”

After replicating these results, the scientists turned to a sample of subjects from the Minnesota Longitudinal Study of Risk and Adaptation. (I’ve written about this incredible project before.) Begun in the mid-1970s, the Minnesota study has been following the children born to 267 women living in poverty in the Minneapolis area for nearly four decades. As a result, the scientists had detailed data on their childhoods, and were able to independently assess them for levels of stress and unpredictability.

The scientists gave these middle-aged subjects the same shifting task. Once again, those adults with the most chaotic childhoods performed better on the test, at least after being made to feel uncertain. This suggests that those young children forced to deal with erratic environments – they don’t know where they might be living next month, or who they might be living with – tend to develop compensatory skills in response; the stress is turned into a kind of nimbleness. This difference is significant, as confirmed by a meta-analysis:

The triangle-dotted line represents those subjects with an unpredictable childhood

Of course, this doesn’t mean poverty is a blessing, or that we should wish a chaotic childhood upon our children. “We are not in any way suggesting or implying that stressful childhoods are positive or good for people,” write Mittal, et al. However, by paying attention to the adaptations of the mind, we might learn to help people take advantage of their talents, even if they stem from disadvantage. If nothing else, this research is a reminder that Nietzsche had a point: what doesn’t kill us can make us stronger. We just have to look for the strength.

Mittal, Chiraag, Vladas Griskevicius, Jeffry A. Simpson, Sooyeon Sung, and Ethan S. Young. "Cognitive adaptations to stressful environments: When childhood adversity enhances adult executive function." Journal of Personality and Social Psychology 109, no. 4 (2015): 604.

Why You Should Hold It

When Prime Minister David Cameron is giving an important speech, or in the midst of difficult negotiations, he relies on a simple mental trick known as the full bladder technique. The name is not a metaphor. Cameron drinks lots of liquid and then deliberately refrains from urinating, so that he is “desperate for a pee.” The Prime Minister believes that such desperation comes with benefits, including enhanced clarity and improved focus.

Cameron heard about the technique from Enoch Powell, a Conservative politician in the 1960s. Powell once explained why he never peed before a big speech. "You should do nothing to decrease the tension before making a big speech," he said. "If anything, you should seek to increase it."

Are the politicians right? Does the full bladder technique work? Is the mind boosted by the intense urge to urinate? These questions are the subject of a new paper by Elise Fenn, Iris Blandon-Gitlin, Jennifer Coons, Catherine Pineda and Reinalyn Echon in the journal Consciousness and Cognition.

The study featured a predictable design. In one condition, people were asked to drink five glasses of water, for a total of 700 milliliters. In the second condition, they only took five sips of water, or roughly 50 milliliters. The subjects were then forced to wait 45 minutes, a “timeframe that ensured a full bladder.” (The scientists are careful to point out that their study was approved by two institutional review boards; nobody wet their pants.) While waiting, the subjects were asked their opinions on various social and moral issues, such as the death penalty, gun control and gay rights.

Once the waiting period was over, and those who drank lots of water began to feel a little discomfort, the scientists asked some of the subjects to lie to an interviewer about their strongest political opinions. If they were pro-death penalty, they had to argue for the opposite; if they believed in gay marriage, they had to pretend they weren’t. (To give the liars an incentive, they were told that those who “successfully fooled the interviewer” would receive a gift card.) Those in the truth telling condition, meanwhile, were told to simply speak their mind.

All of the conversations were videotaped and shown to a panel of seventy-five students, who were asked to rate the answers across ten different variables related to truth and deception. Does the subject appear anxious? How hard does he or she appear to be thinking? In essence, the scientists wanted to know if being in a “state of high urination urgency” turned people into better liars, more adept at suppressing the symptoms of dishonesty.

That’s exactly what happened. When people had to pee, they exhibited more “cognitive control” and “their behaviors appeared more confident and convincing when lying than when telling the truth.” They were less anxious and fidgety and gave more detailed answers.

In a second analysis, the scientists asked a panel of 118 students to review clips of the conversation and assess whether or not people were telling the truth. As expected, the viewers were far more likely to believe that people who had to pee were being honest, even when they were lying about their beliefs.

There is a larger mystery here, and it goes far beyond bladders and bathrooms. One of the lingering debates in the self-control literature is the discrepancy between studies documenting an ego-depletion effect  – in which exerting self-control makes it harder to control ourselves later on – and the inhibitory spillover effect, in which performance on one self-control task (such as trying not to piss our pants) makes us better at exerting control on another task (such as hiding our dishonesty). In a 2011 paper, Tuk, et al. found that people in a state of urination urgency were more disciplined in other domains, such as financial decision-making. They were less impulsive because they had to pee.

This new study suggests that the discrepancy depends on timing. When self-control tasks are performed sequentially, such as resisting the urge to eat a cookie and then working on a tedious puzzle, the ego gets depleted; the will is crippled by its own volition. However, when two self-control tasks are performed at the same time, our willpower is boosted: we perform better at both tasks. “We often do not realize the many situations in our daily lives where we’re constantly exerting self control and how that affects our capacity,” wrote co-author Iris Blandón-Gitlin in an email. “It is nice to know that we can also find situations where our mental resources can be facilitated by self-control acts.” 

There are some obvious takeaways. If you’re trying to skip dessert, then you should also skip the bathroom; keep your bladder full. The same goes for focusing on a difficult activity  – you’re better off with pressure in your pants, as the overlapping acts of discipline will give you even more discipline. That said, don’t expect the mental boost of having to pee to last after you relieve yourself. If anything, the ego depletion effect might then leave you drained: All that willpower you spent holding it in will now hold you back.

Fenn, Elise, Iris Blandón-Gitlin, Jennifer Coons, Catherine Pineda, and Reinalyn Echon. "The inhibitory spillover effect: Controlling the bladder makes better liars." Consciousness and Cognition 37 (2015): 112-122.

Quality, Quantity, Creativity

There are two competing narratives about the current television landscape. The first is that we’re living through a golden age of scripted shows. From The Sopranos to Transparent, Breaking Bad to The Americans: the art form is at its apogee.

The second narrative is that there’s way too much television - more than 400 scripted shows! - and that consumers are overwhelmed by the glut. John Landgraf, the CEO of FX Networks, recently summarized the problem: "I long ago lost the ability to keep track of every scripted TV series,” he said last month at the Television Critics Association. “But this year, I finally lost the ability to keep track of every programmer who is in the scripted programming business…This is simply too much television.” [Emphasis mine.] Landgraf doesn’t see a golden age – he sees a “content bubble.”

Both of these narratives are true. And they’re both true for a simple reason: when it comes to creativity, high levels of creative output are often a prerequisite for creative success. Put another way, throwing shit at the wall is how you figure out what sticks. More shit, more sticks.

This is a recurring theme of the entertainment business. The Golden Age of Hollywood – a period beginning with The Jazz Singer (1927) and ending with the downfall of the studio system in the 1950s – gave rise to countless classics, from Casablanca to The Searchers. It also led to a surplus of dreck. In the late 1930s, the major studios were releasing nearly 400 movies per year. By 1985, that number had fallen to around 100. It’s no accident that, as Quentin Tarantino points out, the movies of the 80s “sucked.”

The psychological data supports these cultural trends. The classic work in this area has been done by Dean Keith Simonton at UC-Davis. In 1997, after decades spent studying the creative careers of scientists, Simonton proposed the “equal odds rule,” which argues that “the relationship between the number of hits (i.e., creative successes) and the total number of works produced in a given time period is positive, linear, stochastic, and stable.” In other words, the people with the best ideas also have the most ideas. (They also have some of the worst ideas. As Simonton notes, “scientists who publish the most highly cited works also publish the most poorly cited works.”) Here’s Simonton, rattling off some biographical snapshots of geniuses:

"Albert Einstein had around 248 publications to his credit, Charles Darwin had 119, and Sigmund Freud had 330,while Thomas Edison held 1,093 patents—still the record granted to any one person by the U.S. Patent Office. Similarly, Pablo Picasso executed more than 20,000 paintings, drawings, and pieces of sculpture, while Johann Sebastian Bach composed over 1,000 works, enough to require a lifetime of 40-hr weeks for a copyist just to write out the parts by hand."

Simonton’s model was largely theoretical: he tried to find the equations that fit the historical data. But his theories now have some new experimental proof. In a paper published this summer in Frontiers in Psychology, a team of psychologists and neuroscientists at the University of New Mexico extended Simonton’s work in some important ways. The scientists began by giving 246 subjects a version of the Foresight test, in which people are shown a graphic design (say, a zig-zag) and told to “write down as many things as you can that the drawing makes you think of, looks like, reminds you of, or suggests to you.” These answers were then scored by a panel of independent judges on a five-point scale of creativity. In addition, the subjects were given a variety of intelligence and personality tests and put into a brain scanner, where the thickness of the cortex was measured.

The results were a convincing affirmation of Simonton’s theory: a high ideation rate - throwing shit at the wall - remains an extremely effective creativity strategy. According to the scientists, “the quantity of ideas was related to the judged quality or creativity of ideas to a very high degree,” with a statistical correlation of 0.73. While we might assume that rushing to fill up the page with random thoughts might lead to worse output, the opposite seemed to occur: those who produced more ideas also produced much better ideas. Rex Jung, the lead author on the paper, points out that this “is the first time that this relationship [the equal odds rule] has been demonstrated in a cohort of ‘low creative’ subjects as opposed to the likes of Picasso, Beethoven, or Curie.” You can see the linear relationship in the chart below:
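The logic behind that linear relationship can be reproduced in a toy simulation. This is my own sketch of the equal odds rule, not code from the paper; the quality distribution and the "hit" threshold are arbitrary choices made for illustration.

```python
import random

random.seed(1)
HIT_THRESHOLD = 2.0   # an idea this many s.d. above the mean counts as a "hit" (arbitrary cutoff)

def career(n_ideas):
    """Draw n_ideas quality scores and count the hits and the duds."""
    ideas = [random.gauss(0, 1) for _ in range(n_ideas)]
    hits = sum(q > HIT_THRESHOLD for q in ideas)
    duds = sum(q < -HIT_THRESHOLD for q in ideas)
    return hits, duds, max(ideas)

for n in (50, 500, 5000):
    hits, duds, best = career(n)
    print(f"{n:>5} ideas -> {hits:>3} hits, {duds:>3} duds, best idea quality {best:.2f}")
```

Even though every idea is drawn from the same distribution, the prolific "careers" end up with more hits (roughly in proportion to output), more duds, and a better best idea, which is the equal odds rule in miniature: quantity buys quality without any per-idea advantage.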

The scientists also looked for correlations between the cortical thickness data and the performance of the subjects. While such results are prone to false positives (Type 1 errors) – the brain is a complicated knot – the researchers found that both the quantity and quality of creative ideas were associated with a thicker left frontal pole, a brain area associated with "thinking about one's own future" and "extracting future prospects."

Of course, it’s a long way from riffing on a zig-zag in a lab to producing quality scripted television. Nevertheless, the basic lesson of this research is that expanding the quantity of creative output will also lead to higher quality, just as Simonton predicted. It would be nice (especially for networks) if we could get Breaking Bad without dozens of failed pilots, or if there were a way to greenlight Deadwood without buying into John from Cincinnati. (Call it the David Milch conundrum.) But there is no shortcut; failure is an essential inefficiency. In his writing, Simonton repeatedly compares the creative process to Darwinian evolution, in which every successful adaptation emerges from a litter of dead-end mutations and genetic mistakes. The same dismal logic applies to culture.* The only way to get a golden age is to pay for the glut.

Jung, Rex E., et al. "Quantity yields quality when it comes to creativity: a brain and behavioral test of the equal-odds rule." Frontiers in Psychology (2015).

*Why? Because William Goldman was right: "NOBODY KNOWS ANYTHING. Not one person in the entire motion picture field knows for a certainty what's going to work. Every time out it's a guess..."

The Best Way To Increase Voter Turnout

“Nothing is more wonderful than the art of being free, but nothing is harder to learn how to use than freedom.”

-Alexis de Tocqueville

Why don’t more Americans vote? In the last midterm election, only 36.4 percent of eligible voters cast a ballot, the lowest turnout since 1942.

To understand the causes of low turnout, the Census Bureau regularly asks citizens why they chose not to exercise their constitutional right. The number one reason is always the same: “too busy.” (That was the reason given by 28 percent of non-voters in 2014.) The second most popular excuse is “not interested,” followed by a series of other obstacles, such as forgetting about the election or not liking any of the candidates.

What’s telling about this survey is that the reasons for not voting are almost entirely self-created. They are psychological, not institutional. (Only 2 percent of non-voters blame “registration problems.”) It’s not that people can’t vote – it’s that they don’t want to. They are unwilling to make time, even if it only takes a few minutes.

Alas, most interventions designed to mobilize these non-voters – to help them deal with their harried schedules and political apathy – are not very effective. One recent paper, for instance, reviewed evidence from more than 200 get-out-the-vote (GOTV) projects to show that, on average, door-to-door canvassing increased turnout by 1 percentage point, direct mail increased turnout by 0.7 percentage points, and phone calls increased turnout by 0.4 percentage points.* Television advertising, meanwhile, appears to have little to no effect. (Nevertheless, it’s estimated that 2016 political campaigns will spend more than $4.4 billion on broadcast commercials.)

However, a new working paper, by John Holbein of Duke University, proposes a very different approach to increase voter turnout. While most get-out-the-vote operations are fixated on the next election, trying to churn out partisans in the same Ohio/Florida/Nevada zip codes, Holbein’s proposal focuses on children, not adults. It also comes with potentially huge side benefits.

In his paper, “Marshmallows and Votes?” Holbein looks at the impact of the Fast Track Intervention, one of the first large scale programs designed to improve children’s non-cognitive skills, such as self-control, emotional awareness and grit. (“Non-cognitive” remains a controversial description, since it implies that these skills don’t require high-level thinking. They do.) Fast Track worked. Follow-up surveys with hundreds of students enrolled in the program showed that, relative to a control group, those given an education in non-cognitive skills were less aggressive at school, better at identifying their emotions and more willing to work through difficult problems. As teenagers, those in the treatment group “manifested reduced conduct problems in the home, school, and community, with decreases in involvement with deviant peers, hyperactivity, delinquent behavior, and conduct disorders.” As adults, they had lower conviction rates for violent and drug-related crimes.

Holbein wanted to expand on this analysis by looking at voter behavior when the Fast Track subjects were in their mid to late twenties. After matching people to their voter files, he found a clear difference in political participation rates. According to Holbein’s data, “individuals exposed to Fast Track turned out to vote in at least one of the federal elections held during 2004-2012 at a rate 11.1 percentage points higher than the control group.” That represents a 40 percent increase over baseline levels. Take that, television ads.

There are two important takeaways. The first is that a childhood intervention designed to improve non-cognitive skills can have “large and long-lasting impacts on political behavior in adulthood.” In his paper, Holbein emphasizes the boost provided by self-regulation, noting that the ability to "persevere, delay gratification, see others' perspectives, and properly target emotion and behavior" can help people overcome the costs of participating in an election, whether it's waiting in a line at a polling place or not getting turned off by negative ads. And since the health of a democracy depends on the involvement of its citizens – we must learn how to use our freedom, as Tocqueville put it – it seems clear that attempts to improve political participation should begin as early as possible, and not just with lectures about civics.

The second lesson involves the downstream benefits of an education in non-cognitive skills. We’ve been so focused for so long on the power of intelligence that we’ve largely neglected to teach children about self-control and emotional awareness. (That’s supposed to be the job of parents, right?) However, an impressive body of research over the last decade or so has made it clear that non-cognitive skills are extremely important. In a recent review paper, the Nobel Laureate James Heckman and the economist Tim Kautz summarize the evidence: “The larger message of this paper is that soft skills [e.g., non-cognitive skills] predict success in life, that they causally produce that success, and that programs that enhance soft skills have an important place in an effective portfolio of public policies.”

Democracies are not self-sustaining – they have to invest in their continued existence, which means developing citizens willing to cast a ballot. In Making Democracy Work, Robert Putnam and colleagues analyzed the regional governments of Italy. On paper, all of these governments looked identical, having been created as part of the same national reform. However, Putnam found that their effectiveness varied widely, largely as a result of differing levels of civic engagement among their citizens. When people were more engaged with their community – when they voted in elections, read newspapers, etc. – they had governments that were more responsive and successful. According to Putnam, civic engagement is not a by-product of good governance. It’s a precondition for it.

This new paper suggests that the road to a better democracy begins in the classroom. By teaching young students basic non-cognitive skills – something we should be doing anyway – we might also improve the long-term effectiveness of our government.

Holbein, John. "Marshmallows and Votes? Childhood Non-Cognitive Skill Development and Adult Political Participation." Working Paper.

*There is evidence, however, that modern campaigns have become more effective at turning out the vote, largely because their ground games can now target supporters with far higher levels of precision. According to a recent paper by Ryan Enos and Anthony Fowler, the Romney and Obama campaigns were able to increase voter participation in the 2012 election by 7-8 percentage points in “highly targeted” areas. Unfortunately, these additional supporters were also quite expensive, as Enos and Fowler conclude that “the average cost of generating a single vote is about 87 dollars."

Can Tetris Help Us Cope With Traumatic Memories?

In a famous passage in Theaetetus, Plato compares human memory to a wax tablet:

Whenever we wish to remember anything we see or hear or think of in our own minds, we hold this wax under the perceptions and thoughts and imprint them upon it, just as we make impressions from seal rings; and whatever is imprinted we remember and know as long as its image lasts, but whatever is rubbed out or cannot be imprinted we forget and do not know.

The appeal of Plato’s metaphor is that it fits our intuitions about memory. Like the ancient philosopher, we’re convinced that our recollections are literal copies of experience, snapshots of sensation written into our wiring. And while our metaphors of memory now reflect modern technology – the brain is often compared to a vast computer hard drive – we still focus on comparisons that imply accurate recall. Once a memory is formed, it’s supposed to stay the same, an immutable file locked away inside the head.

But this model is false. In recent years, neuroscientists have increasingly settled on a model of memory that represents a dramatic break from the old Platonic metaphors. It turns out that our memories are not fixed like an impression in a wax tablet or a code made of zeroes and ones. Instead, the act of remembering changes the memory itself, a process known as memory reconsolidation. This means that every memory is less like a movie, a collection of unchanging scenes, and more like a theatrical play, subtly different each time it’s performed. 

On the one hand, the constant reconsolidation of memory is unsettling. It means that our version of history is fickle and untrustworthy; we are all unreliable narrators. And yet, the plasticity of memory also offers a form of hope, since it means that even the worst memories can be remade. This was Freud’s grand ambition. He insisted that the impossibility of repression was the “corner-stone on which the whole structure of psychoanalysis rests.” Because people could not choose what to forget, they had to find ways to live with what they remembered, which is what the talking cure was all about. 

If only Freud knew about video games. That, at least, is the message of a new paper in Psychological Science from a team of researchers at Cambridge, Oxford and the Karolinska Institute. (The first author is Ella James; the corresponding author is Emily Holmes.) While previous research has used beta-blockers to weaken traumatic memories during the reconsolidation process – subjects are given calming drugs and then asked to remember their traumas – these scientists wanted to explore interventions that didn’t involve pharmaceuticals. Their logic was straightforward. Since the brain has strict computational limits, and memories become malleable when they’re recalled, distracting people during the recall process should leave them with fewer cognitive resources to form a solid memory trace of the bad stuff. “Intrusive memories of trauma consist of mental images such as visual scenes from the event, for example, the sight of a red car moments before a crash,” write the scientists. “Therefore, a visuospatial task performed when memory is labile (during consolidation or reconsolidation) should interfere with visual memory storage (as well as restorage) and reduce subsequent intrusions.” In short, we’ll be so distracted that we’ll forget the pain. 

The scientists began by inducing some experimental trauma. In the first experiment, the “trauma” consisted of a 12-minute film containing 11 different scenes involving “actual or threatened death, as well as serious injury.” There was a young girl hit by a car, a man drowning in the sea, and a teenager, staring at his phone, who gets struck by a van while crossing the street. The subjects were shown these tragic clips in a dark room and asked to imagine themselves “as a bystander at the scene.”

The following day, the subjects returned to the lab and were randomly assigned to two groups. Those in the first group were shown still pictures drawn from the video, all of which were designed to make them remember the traumatic video. There was a photo of the young girl just before she was hit and a snapshot of the man in the ocean, moments before he slipped below the surface. Then, after a brief “music filler task” – a break designed to let the chemistry of reconsolidation unfold – the subjects were told to play Tetris on a computer.  Twelve minutes later, they were sent home with a diary and asked to record any “intrusive memories” of the traumatic film over the following week.  

Subjects in the control group underwent a simpler procedure. After returning to the lab, they were given the music filler task and told to sit quietly in a room, where they were allowed to “think about anything.” They were then sent home with the same diary and the same instructions.

As you can see in the charts below, that twelve minute session of Tetris significantly reduced the number of times people remembered those awful scenes, both during the week and on their final return to the lab:

A second experiment repeated this basic paradigm, except with the addition of two more control groups. The first new group played Tetris but was not given the reactivation task first, which meant their memories never became malleable. The second new control group was given the reactivation task but without the Tetris cure. The results were again compelling:

It’s important to note that the benefits of the Tetris treatment only existed when the distraction was combined with a carefully timed recall sequence. It’s not enough to play a video game after a trauma, or to reflect on the trauma in a calming space. Rather, the digital diversion is only therapeutic within a short temporal window, soon after we’ve been reminded of what we’re trying to forget. "Our findings suggest that, although people may wish to forget traumatic memories, they may benefit from bringing them back to mind, at least under certain conditions - those which render them less intrusive," said Ella James, in an interview with Psychological Science.

The virtue of this treatment, of course, is that it doesn’t involve any mood-altering drugs, most of which come with drawbacks and side-effects. (MDMA might be useful for PTSD, but it can also be a dangerous compound.) The crucial question is whether these results will hold up among people exposed to real traumas, and not just a cinematic compilation of death and injury. If they do, then Tetris just might become an extremely useful psychiatric tool.  

Last point: Given the power of Tetris to interfere with the reconsolidation process, I couldn’t help but wonder about how video games might be altering our memory of more ordinary events. What happens to those recollections we think about shortly before disappearing into a marathon session of Call of Duty or Grand Theft Auto V? Are they diminished, too? It’s a dystopia ripped from the pages of Infinite Jest: an entertainment so consuming that it induces a form of amnesia. The past is still there. We just forget to remember it.

James, Ella L., et al. "Computer game play reduces intrusive memories of experimental trauma via reconsolidation-update mechanisms." Psychological Science (2015).


How Does Mindfulness Work?

In the summer of 1978, Ellen Langer published a radical sentence in the Journal of Personality and Social Psychology. It’s a line that’s easy to overlook, as it appears in the middle of the first page, sandwiched between a few dense paragraphs about the nature of information processing. But the sentence is actually a sly attack on one of the pillars of Western thought. “Social psychology is replete with theories that take for granted the ‘fact’ that people think,” Langer wrote. With her usual audacity, Langer then went on to suggest that most of those theories were false, and that much of our behavior is “accomplished…without paying attention.” The “fact” of our thinking is not really a fact at all.

Langer backed up these bold claims with a series of clever studies. In one experiment, she approached a student at a copy machine in the CUNY library. As the subject was about to insert his coins into the copier, Langer asked if she could use the machine first. The question came in three different forms. The first was a simple request: “Excuse me, I have 5 pages. May I use the Xerox machine?” The second was a request that included a meaningless reason, or what Langer called “placebic information”: “Excuse me, I have 5 pages. May I use the Xerox machine, because I have to make copies?” Finally, there was a condition that contained an actual excuse, if not an explanation: “Excuse me, I have 5 pages. May I use the Xerox machine, because I’m in a rush?”

If people were thoughtful creatures, then we’d be far more likely to let a person with a valid reason (“I’m in a rush”) cut in line. But that’s not what Langer found. Instead, she discovered that offering people any reason at all, even an utterly meaningless one (“I have to make copies”) led to near universal submission. It’s not that people aren’t listening, Langer says – it’s that they’re not thinking. While mindlessness had previously been studied in situations of “overlearned motoric behavior,” such as typing on a keyboard, Langer showed that the same logic applied to many other situations.

After mapping out the epidemic of mindlessness, Langer decided to devote the rest of her career to its antidote, which she refers to as mindfulness. In essence, mindfulness is about learning how to control the one thing in this world that we can control: our attention. But this isn’t the sterile control of the classroom, in which being attentive means “holding something still.” Instead, Langer came to see mindfulness as a way to realize that reality is never still, and that what we perceive is only a small sliver of what there is. “When you notice new things, that puts you in the present, but it also reminds you that you don’t know nearly as much as you think you know,” Langer told me. “We tend to confuse the stability of our attitudes and mindsets with the stability of the world. But the world outside isn’t stable – it’s always changing.” Mindfulness helps us see the change. 

This probably sounds obvious. But then the best advice usually is. In recent years, Langer and others have documented the far-reaching benefits of mindfulness, showing how teaching people basic mindfulness techniques can help them live longer, improve eyesight, alleviate stress, lose weight, increase happiness and empathy, decrease cognitive biases and even enhance memory in old age. Most recently, Langer has shown, along with her Harvard colleagues, that mindfulness can attenuate the progress of ALS, a disease that is believed to be “almost solely biologically driven.”

However, it’s one thing to know that mindfulness can work. It’s something else to know how it works, to understand the fundamental mechanisms by which mindfulness training can alter the ways we engage with the world. That’s where a recent paper by Esther Papies, Mike Keesman, Tila Pronk and Lawrence Barsalou in JPSP can provide some important insights. The researchers developed a short form of mindfulness training for amateurs; it takes roughly twelve minutes to complete. Participants view a series of pictures and are told to “simply observe” their reactions, which are just “passing mental events.” These reactions might include liking a picture, disliking it, and so on. The goal is to go meta, to notice what you notice, and to do all this without judging yourself.  

At first glance, such training can seem rather impractical. Unlike most programs that aim to improve our behavior, there is no mention of goals or health benefits or self-improvement. The scientists don’t tell people which thoughts to avoid, or how to avoid them; they offer no useful tips for becoming a better person.

Nevertheless, these short training sessions altered a basic engine of behavior, which is the relationship between motivational states – I want that and I want it now – and our ensuing choices. At any given moment, people are besieged with sundry desires: for donuts, gossip, naps, sex. It’s easy to mindlessly submit to these urges. But a little mindfulness training (12 minutes!) seems to help us say no. We have more control over the self because we realize the self is a fickle ghost, and that its craving for donuts will disappear soon enough. We can wait it out.

The first experiment by Papies, et al. involved pictures of opposite-sex strangers. The subjects were asked to rate the attractiveness of each person and whether or not he or she was a potential partner.  In addition, they were asked questions about their “sexual motivation,” such as “How often do you fantasize about sex?” and “How many sexual partners have you had in the last year?” In the control condition – these people were given no mindfulness training – those who were more sexually motivated were also more likely to see strangers as attractive and as suitable partners. That’s not very surprising: if you’re in a randy mood, motivated to seek out casual sex, then you see the world through a very particular lens. (To quote Woody Allen: “My brain? That’s my second favorite organ.”) Mindfulness training, however, all but erased the correlation between sexual motivation and the tendency to see other people as sexual objects. “As one learns to perceive spontaneous pleasurable reactions to opposite-sex others as mere mental events,” write the scientists, “their effect on choice behavior…no longer occurs.” One might be in the mood, but the mood doesn’t win.

And it’s not just sex: the same logic can be applied to every appetite. In another experiment, the scientists showed how mindfulness training can help people make better eating choices in the real world. Subjects were recruited as they walked into a university cafeteria. A third of subjects were assigned to a short training session in mindful attention; everyone else was part of a control group. Then, they were allowed to choose their lunch as usual, selecting food from a large buffet. Some of the meal options were healthy (leafy salads and other green things) and some were not (cheese puff pastries, sweet muffins, etc.).

The brief mindfulness training generated impressive results, especially among students who were very hungry. Although the training only lasted a few minutes, it led them to choose a meal with roughly 30 percent fewer calories. They were also much more likely to choose a healthy salad, at least when compared to those in the control groups (76 percent versus 49 percent), and less likely to choose an unhealthy snack (45 percent versus 63 percent).

What makes this research important is that it begins to reveal the mechanisms of mindfulness, how an appreciation for the transitory nature of consciousness can lead to practical changes in behavior. When subjects were trained to see their desires as passing thoughts, squirts of chemistry inside the head, the stimuli became less alluring. A new fMRI study from Papies and colleagues reveals the neural roots of this change. After being given a little mindfulness training, subjects showed reduced activity in brain areas associated with bodily states and increased activity in areas associated with "perspective shifting and effortful attention." In short, they were better able to tune their flesh out. Because really: how happy will a donut make us? How long will the sugary pleasure last? Not long enough. We might as well get the salad.

The Buddhist literature makes an important distinction between “responding” and “reacting.” Too often, we are locked in loops of reaction, the puppets of our most primal parts. This is obviously a mistake. Instead, we should try to respond to the body and the mind, inserting a brief pause between emotion and action, the itch and the scratch. Do I want to obey this impulse? What are its causes? What are the consequences? Mindfulness doesn’t give us the answers. It just helps us ask the questions.

This doesn’t mean we should all take up meditation or get a mantra; there are many paths to mindfulness. Langer herself no longer meditates: “The people I know won’t sit still for five minutes, let alone forty,” she told Harvard Magazine in 2010. Instead, Langer credits her art – she’s a successful painter in her spare time – with helping her maintain a more mindful attitude. “It’s not until you try to make a painting that you’re forced to really figure out what you’re looking at,” she says. “I see a tree and I say that tree is green. Fine. It is green. But then when I go to paint it, I have to figure out exactly what shade of green. And then I realize that these greens are always changing, and that as the sun moves across the sky the colors change, too. So here I am, trying to make a picture of a tree, and all of a sudden I’m thinking about how nothing is certain and everything changes. I don’t even know what a tree looks like.” That’s a mindful epiphany, and for Langer it’s built into the artistic process.

The beauty of mindfulness is that it’s ultimately an attitude towards the world that anyone can adopt. Pay attention to your thoughts and experiences. Notice their transient nature. Don’t be so mindless. These are simple ideas, which is why they can be taught to nearly anyone in a few minutes. They are also powerful ideas, which is why they can change your life.

Langer, Ellen J., Arthur Blank, and Benzion Chanowitz. "The mindlessness of ostensibly thoughtful action: The role of 'placebic' information in interpersonal interaction." Journal of Personality and Social Psychology 36.6 (1978): 635.

Papies, E. K., Pronk, T. M., Keesman, M., & Barsalou, L. W. (2015). The benefits of simply observing: Mindful attention modulates the link between motivation and behavior. Journal of Personality and Social Psychology, 108(1), 148.

The Corrupting Comforts of Power

The psychology of power is defined by two long-standing mysteries.

The first is why people are so desperate to become powerful. Although power comes with plenty of spoils, it’s also extremely stressful. In a study of wild baboons in Kenya, a team of biologists found that those alpha males in charge of the troop exhibited extremely high levels of stress hormone. In fact, the stress levels of primates at the top were often higher than those at the very bottom. There are easier ways to pass on your genes.

The second mystery is why power corrupts. History is pockmarked with examples of cruel despots, leaders slowly ruined by the act of leading. It's surprisingly easy to initiate this corrupting process in the lab. A 2006 experiment by Adam Galinsky and colleagues asked subjects to either describe an experience in which they had lots of power or a time when they felt powerless. The subjects were then asked to draw the letter E on their foreheads. Those primed with feelings of power were two to three times more likely to draw the letter in a “self-oriented direction,” which meant it would appear backwards to everyone else. (Those primed to feel powerless were more likely to draw the letter so others could easily read it.) According to the psychologists, this is because feelings of power make us less interested in thinking about the perspectives of other people. We draw the letter backwards because we don’t care what they see.

A forthcoming paper by the psychologists Adam Waytz, Eileen Chou, Joe Magee and Adam Galinsky gives us new insights into these paradoxes of power. The paper consists of eight separate experiments. After randomly assigning subjects to either a high or low power condition – participants were asked to write about all the ways in which people have power over them, or in which they have power over other people – they were given a short loneliness survey. As expected, those primed to feel powerful also felt less lonely. The same correlation held when people were allowed to dictate the assignments and payouts of a subordinate. Being a boss, even briefly, helps protect us from feeling left out.*

Why is power a buffer against loneliness? The key mechanism involves the desire to connect with others, or what psychologists call the “need to belong.” In another online study, 607 subjects were assigned to be either the boss or a subordinate in an online game. The boss got to decide how $10 was divided; the subordinate was simply told the outcome.

Then, everyone was given two surveys: one about their “need to belong” and another about their current feelings of loneliness. Those in the “boss” condition were less lonely than their “subordinates,” largely because they felt less need to belong. Simply put, those in charge don’t feel a desire to fit in, which makes them less sensitive to being left out. Why cater to the group when you command it?

This research begins to explain the allure of power. While power might actually make us more isolated – Machiavelli urged the prince to reject the trap of friendship and loyalty – it seems to reduce our subjective feelings of loneliness. (That powerful prince won’t miss his former friends.) And since we are wired to abhor loneliness, perhaps we lust after power as a substitute for more genuine connection. Instead of companions we collect underlings. Being feared by others can compensate for a lack of love. 

The scientists also speculate on how these new findings might explain the large body of research linking the acquisition of power to reduced levels of empathy and compassion. Because power makes us less motivated to connect with others - it's rather comfortable being alone at the top - we might also become less interested in considering their feelings, or making them feel better.

There are, of course, the usual social science caveats. These are temporary interventions, done in the lab and with online subjects. The effects of power in real life, where people rise to the top over time, and where their authority has real consequences, might be different. Perhaps Machiavelli’s prince realized that he missed his friends.

After reading this paper, it’s hard not to see power as a risky mental state; climbing the ladder often sows the seeds of our downfall. (Lincoln said it best: “Nearly all men can stand adversity, but if you want to test a man’s character, give him power.”) The essential problem of power is that it leaves us socially isolated even as it masks our isolation. We feel less lonely, so we don’t realize how disconnected we’ve become. Even worse, being in charge of others makes us less interested in understanding them. We shrug off their social norms; we ignore their complaints; we are free to listen to that selfish voice telling us to take what we need.

And so the leader becomes detached from those below. He has no idea how much they hate him.

*Michael Scott was a notable exception.

Waytz, Adam, et al. "Not so lonely at the top: The relationship between power and loneliness." Organizational Behavior and Human Decision Processes 130 (2015): 69-78.

Is This Why Love Increases Lifespan?

In 1858, William Farr, a physician working for the General Register Office in the British government, conducted an analysis of the health benefits of marriage. After reviewing the death statistics of French adults, Farr concluded that married people had a roughly 40 percent lower mortality rate than those in the “celibate” category. “Marriage is a healthy estate,” wrote Farr. “The single individual is more likely to be wrecked on his voyage than the lives joined together in matrimony.”

Farr was prescient. In recent decades, dozens of epidemiological studies have demonstrated that married people are significantly less likely to suffer from viral infections, migraines, mental illness, pneumonia and dementia. They have fewer surgeries, car accidents and heart attacks. One meta-analysis concluded that the health benefits of marriage are roughly equivalent to the gains achieved by quitting smoking. (These studies are primarily concerned with married couples, but there is little reason to believe similar correlations don’t apply to any committed long-term relationship.) The effects are even more profound when the attachment is secure. For instance, one study of patients with congestive heart failure sorted their marriages into high and low quality brackets. The researchers concluded that marital quality was as predictive of survival as the severity of the illness, with people in poor marriages dying at a much faster rate.

But if the correlations are clear, their underlying cause is not. Why does a good love affair lead to a longer life?

A new study in Psychological Science, by Richard Slatcher, Emre Selcuk and Anthony Ong, helps outline at least one of the biological pathways through which love influences our long-term health. The scientists used data from 1078 adults participating in the Midlife in the United States (MIDUS) project, a longitudinal study that has been following subjects since 1995. While MIDUS looks at many different aspects of middle age, Slatcher et al. focused on the correlation between the quality of long-term romantic relationships and cortisol, a hormone with far reaching effects on the body, especially during times of stress.

In particular, the scientists investigated changes in cortisol levels over the course of a day. Cortisol levels typically peak shortly after we wake up, and then decrease steadily as the hours pass, reaching a low point before bedtime. While nearly everyone exhibits this basic hormonal arc, the slope of the drop varies from person to person. Some people have steep slopes – they begin the day with higher initial levels of cortisol, leading to sharper declines during the day – while others have flatter slopes, characterized by lower cortisol levels in the morning and a smaller drop off before sleep. In general, flatter slopes have been associated with serious health problems, including diabetes, depression, heart disease and a higher risk of death.
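
For readers wondering how a “slope” is actually computed, here is a bare-bones sketch: fit a line to cortisol level against hours since waking and keep the slope. It is a generic illustration with made-up numbers, not the MIDUS scoring protocol.

    def diurnal_cortisol_slope(samples):
        # samples: list of (hours_since_waking, cortisol_level) pairs for one day.
        # Returns the ordinary least-squares slope; more negative = steeper decline.
        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_c = sum(c for _, c in samples) / n
        num = sum((t - mean_t) * (c - mean_c) for t, c in samples)
        den = sum((t - mean_t) ** 2 for t, _ in samples)
        return num / den

    # Hypothetical days of saliva samples (hours since waking, cortisol level):
    steep = [(0.5, 18.0), (4, 11.0), (9, 6.0), (15, 3.0)]
    flat = [(0.5, 11.0), (4, 9.5), (9, 8.0), (15, 7.0)]
    print(diurnal_cortisol_slope(steep))  # about -1.0: a sharp daily decline
    print(diurnal_cortisol_slope(flat))   # about -0.27: the flatter, riskier profile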

What determines the shape of our cortisol slope? There’s suggestive evidence that social relationships play a role, with several studies showing a connection between interpersonal problems and flatter slopes, both among adults and young children. This led Slatcher and colleagues to look at measurements of perceived partner responsiveness among MIDUS subjects. (Such responsiveness is defined as the “extent to which people feel understood, cared for, and appreciated by their romantic partners.”) After collecting this relationship data at two time points, roughly a decade apart, the scientists were able to look for correlations with the cortisol profiles of subjects.

Here’s the punchline: more responsive partners led to steeper (and healthier) cortisol slopes. Furthermore, these changes in hormone production were triggered, at least in part, by a general decline in the amount of negative emotion experienced by the subjects. This study is only a first step, and it needs to be replicated with other populations, but it begins to define the virtuous cycle set in motion by loving relationships. When we have a more responsive partner, we get better at dealing with our most unpleasant feelings, which leads to lasting changes in the way we process cortisol. The end result is a longer life.

While reading this new paper, I couldn’t help but think about the pioneering work of Michael Meaney, a neuroscientist at McGill University. In the 1990s, Meaney began studying the link between the amount of licking and grooming experienced by rat pups and their subsequent performance on a variety of stress and intelligence tests. Again and again, he found that those pups who experienced high levels of licking and grooming were less scared in new cages and less aggressive with their peers. They released fewer stress hormones when handled. They solved mazes more quickly.

More recently, Meaney and colleagues have shown how these feelings of affection alter the rat brain. High-LG pups have fewer receptors for stress hormone and more receptors for the chemicals that attenuate the stress response; they show less activity in parts of the brain, such as the amygdala, closely associated with fear and anxiety; even their DNA is read differently, as all that maternal care activates an epigenetic switch that protects rats against chronic stress.

A similar logic probably extends to human beings. The feeling of love is not just a source of pleasure. It’s also a kind of protection.

Slatcher, Richard B., Emre Selcuk, and Anthony D. Ong. "Perceived Partner Responsiveness Predicts Diurnal Cortisol Profiles 10 Years Later." Psychological Science (2015).

The Triumph of Defensive Strategy

One of the most useful measurements of modern sabermetrics is Wins Above Replacement (WAR). Pioneered in baseball, the statistic attempts to calculate the total number of additional wins generated by a given player, at least compared to a “replacement level player” of ordinary talent. In general, a player who generates more than eight wins per season is of MVP quality. Five wins is All Star level, while most starters hover around the two wins mark.

Baseball, of course, is an ideal sport for the WAR approach, since the performance of its players is largely independent. Every hitter hits alone; every pitcher is on the mound by himself.  In recent years, however, the WAR stat has moved beyond baseball. Bill Gerard developed a similar model for soccer players in the Premier League, while John Hollinger created “Estimated Wins Added” for the NBA. All of these statistics rely on the same basic strategy as WAR in the major leagues: they compare each player’s performance to a hypothetical “replacement,” and convert the difference into the only measurement that matters: winning.

Football, for the most part, has missed out on the WAR revolution. This is for an obvious reason: it’s really hard to disentangle individual statistics from team performance. (There’s also a shortage of individual stats in football.) Take a running back who scores on a two-yard touchdown run. Was the touchdown really due to his talent? Or was it triggered by the excellent blocking of his offensive line? And what if the play was set up by a long pass, or a fifty-yard punt return? The running back gets most of the glory, but it’s unclear if he deserves it; a replacement level player might have scored just as easily.

To get around these issues, most football metrics make a problematic but necessary assumption: that all player statistics are independent of teammates. The wide-receiver doesn’t depend on his quarterback and the quarterback doesn’t need a good offensive line.* But that’s beginning to change. A new paper, by Andrew Hughes, Cory Koedel and Joshua Price in the Journal of Sports Economics, outlines a WAR statistic at the position level of the NFL, rather than at the level of individual players. While this new stat won’t tell you who the MVP is in a given year, it will tell you which positions on the field are most valuable, and thus probably deserve the most salary cap space.

The logic of positional WAR, as outlined by Hughes et al., is quite elegant. In essence, the economists look at what happens to a team in the aftermath of an injury or suspension to a starter. As they note, such events – while all too common in the brutish NFL – are also largely random and exogenous. This means that they provide an ideal means of investigating the difference between a starter and a replacement player at each position. By comparing the performance of teams before and after the injury, the economists can see which starters matter the most, and which positions are the hardest to replace.
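
A stripped-down version of that before-and-after comparison might look like the sketch below: pool games across teams, split them by whether the usual starter at a position played, and scale the gap in win rate to a four-game absence. It leaves out the controls (team quality, opponents, season effects) that the actual econometric model would need, and the field names are hypothetical.

    from collections import defaultdict

    def replacement_cost_per_four_games(games):
        # games: list of dicts like
        #   {"position": "QB", "starter_played": True, "team_won": False}
        # pooled across teams and seasons.
        tallies = defaultdict(lambda: {"with": [0, 0], "without": [0, 0]})  # [wins, games]
        for g in games:
            key = "with" if g["starter_played"] else "without"
            tallies[g["position"]][key][0] += int(g["team_won"])
            tallies[g["position"]][key][1] += 1
        costs = {}
        for pos, t in tallies.items():
            rate_with = t["with"][0] / t["with"][1]
            rate_without = t["without"][0] / t["without"][1]
            costs[pos] = 4 * (rate_with - rate_without)  # wins lost over a 4-game absence
        return costs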

So what did they find? On the offensive side, there are few surprises. Quarterback is, by far, the most valuable position: a starting QB that misses four games due to injury will cost his team an average of 1.3 wins. That’s followed by tight ends, fullbacks, wide-receivers and outside offensive linemen, all of whom cost their team roughly half a win for every four missed games. Interestingly, running backs appear to be rather interchangeable: when a starting back goes down, there is no impact on the team’s overall performance. Such data would not surprise Bill Belichick.

The defensive side is where things get strange. According to Hughes et al., teams are not hurt by the loss of defensive starters at any position. Put another way, the positional WAR for every defensive player is essentially zero. If a starting cornerback, safety, linebacker or defensive lineman goes down, a team should expect to win just as many games as before.

At first glance, this data makes no sense. Could a team really win just as many games with second-tier defensive players? The answer is almost certainly no. Instead, the low WAR of defensive positions is probably a testament to the power of defensive strategy. As Hughes et al. note, “defensive schemes can be adjusted to account for replacement players more easily than offensive schemes.” If a cornerback goes down, the safeties can help out, or the linebackers can drop back. The same pattern applies across the entire defense, as coaches and coordinators find ways to compensate for the loss of any single starter. Of course, if multiple starters go down, or if a defense has a lower level of overall talent, then it’s that much harder to compensate. Schemes and scheming have their limits.

That said, this data does suggest that the typical NFL team could improve its performance by spending less on defensive superstars. If you look at the average salary for the ten highest paid players at each position, it becomes clear that NFL teams treat defensive starters as far more valuable than their replacements: only quarterbacks and wide-receivers make more money than defensive ends, with linebackers, cornerbacks and defensive tackles close behind. Again: it’s not that these positions aren’t valuable or that Richard Sherman isn’t supremely talented. It’s that a smart defensive plan might be able to make up for a missing defensive star. Russell Wilson, on the other hand, is probably worth every penny.

And so a statistical tool designed to reveal the most important players on the field ends up, largely by accident, revealing the unexpected importance of the coaches on the sideline. While there are numerous measurements of bad coaching – one study concluded that suboptimal 4th down decisions cost the average team roughly ¾ of a win per season – this new paper highlights the impact of good coaching, especially on the defensive side.  If you have a successful system, then you might not need the best available safety or lineman. Perhaps you should save your money for the passing game. 

At its best, the sabermetric revolution is not about neat answers, or the reduction of talent to a few mathematical formulas. Rather, it’s about revealing the deep complexity of human competition, all the subtleties the eye cannot see. Sometimes, these subtleties lead us to unexpected places, like discovering that you can’t effectively evaluate half the men on the field without taking their group strategy into account. The whole precedes the parts.

* Total QBR is a possible exception to this rule. Alas, its equations are proprietary and thus impossible to evaluate.

Hughes, Andrew, Cory Koedel, and Joshua A. Price. "Positional WAR in the National Football League." Journal of Sports Economics (2015).

Is Talk Therapy Getting Less Effective?

In the late 1950s, when Aaron Beck was a young psychoanalyst at the University of Pennsylvania, he practiced classic Freudian analysis. The goal of therapy, he believed, was to give voice to the repressed urges of the id, revealing those inner conflicts – most of which involved sex - that we hid from ourselves.

But then Beck began treating a patient named Lucy. In a 1997 essay, Beck describes one of their early sessions:

“She was on the couch and we were doing classical analysis. She was presumably following the ‘fundamental rule’ that the patient must report everything that comes into her mind. During this session, she was regaling me with descriptions of her various sexual adventures. At the end of the session, I did what I usually do. I asked her, ‘Now, how have you been feeling during this session?’ She said, ‘I’ve been feeling terribly anxious, doctor.’”

When Beck asked Lucy why she felt so anxious, she gave him an answer that would reshape the future of talk therapy: “Well, actually, I thought that maybe I was boring you. I was thinking that all during the session.”

This offhand remark triggered Beck’s lifelong interest in what he called “unreported thoughts." Although these thoughts are rarely expressed, Beck believed that they shaped our experiences and influenced our emotions. After Beck taught Lucy how to evaluate her negative thoughts – and how to dismiss the incorrect ones – she began feeling better. For Beck, it was proof that talk therapy could work, but only if we talked about the right things.

In many respects, Lucy was patient zero of cognitive behavioral therapy, or CBT. Since that session, CBT has become one of the most widely practiced forms of psychotherapy. This is largely because it works: hundreds of studies have confirmed the effectiveness of CBT at treating a wide range of mental illnesses, from anxiety to schizophrenia. However, CBT is most closely associated with the treatment of depression. In part, this is because Beck himself focused on depression. But it’s also because CBT is a remarkably good treatment, at least when it comes to mild and moderate forms of the illness. While direct comparisons with anti-depressant medication are difficult, numerous studies have demonstrated that CBT is about as effective as the latest pills, and might even come with longer-lasting benefits.

That said, many questions about CBT remain. One unknown involves refinements to the practice of CBT, including the introduction of new concepts (schema theory, etc.) and new techniques (mindfulness based CBT and related offshoots). Are these revisions improving CBT or making it worse? Another unknown involves the rapid growth of CBT as a treatment, and the impact of this growth on the quality of CBT therapists. Given these changes, it makes sense to investigate the healing power of CBT over time.

The hope, of course, is that CBT has been getting more effective. Given all that we’ve learned about the mind and mental illness since Beck began studying automatic thoughts, it seems reasonable to expect a little progress. This, after all, is the usual arc of modern medicine: there are very few healthcare interventions that have not improved over the last 60 years.

Alas, the initial evidence does not support the hope. That, at least, is the conclusion of a new paper in Psychological Bulletin by the researchers Tom Johnsen and Oddgeir Friborg. They collected 70 studies of CBT used as a treatment for depression, published between 1977 and 2014. Then, Johnsen and Friborg tracked the fluctuation of CBT’s effectiveness – measured as its ability to reduce depressive symptoms – over the decades. The resulting chart is a picture of decline, as the effect size of the treatment (as measured by the Beck Depression Inventory) has fallen by nearly 50 percent over the last thirty years:

The same basic pattern also applies to studies using a different measure of depression, the Hamilton Rating Scale for Depression:

These are distressing and humbling charts. If nothing else, the decline they document is a reminder that it’s incredibly hard to heal the mind, and that our attempts at progress often backfire.  Decades after Beck pioneered CBT, we’re still struggling to make it better. 

So what’s causing this decline in efficacy? Johnsen and Friborg dismiss many of the obvious suspects. Publication bias, for instance, is the tendency of scientific journals to favor positive results, at least initially. (Once a positive result is established, null or inconclusive results often become easier to publish.) While Johnsen and Friborg do find evidence of publication bias in CBT research, it doesn’t seem to be responsible for the decline in efficacy.  They also find little evidence that variables such as patient health or demographics are responsible.

Instead, Johnsen and Friborg focus on two likely factors. The first factor concerns the growing popularity of the treatment, which has led many inexperienced therapists to begin using it. And since there’s a correlation between the experience of therapists and the recovery of their patients – more experience leads to a greater reduction in depressive symptoms – an influx of CBT novices might dilute its power. As Johnsen and Friborg note, CBT can seem like it’s easy to learn, since it has relatively straightforward treatment objectives. However, the effective use of CBT actually requires “proper training, considerable practice and competent supervision.” There is nothing easy about it.

The second factor is the placebo effect. In general, new medical treatments generate stronger placebo responses from patients. Everyone is excited; the breakthrough is celebrated; the intervention seems full of potential.  But then, as the hype gives way to reality, the placebo effect starts to fade. This phenomenon has been used to explain the diminished potency of various pharmaceuticals, from atypical antipsychotics to anti-depressants.  Because we are less likely to believe in the effectiveness of these pills, they actually become less effective. Our skepticism turns into a self-fulfilling prophecy.

Johnsen and Friborg speculate that a similar trend might also apply to CBT:

“In the initial phase of the cognitive era, CBT was frequently portrayed as the gold standard for the treatment of many disorders. In recent times, however, an increasing number of studies have not found this method to be superior to other techniques. Coupled with the increasing availability of information to the public, including the internet, it is not inconceivable that patients’ hope and faith in the efficacy of CBT has decreased somewhat…Moreover, whether widespread knowledge of the present results might worsen the situation remains an open question.”

It’s an unusually meta note for a scientific paper, as Johnsen and Friborg realize that their discovery might influence the very facts they describe. After all, if patients believe that CBT is no longer the "gold standard" then its decline will accelerate. And so we are stuck with a paradox: we cannot study the power of mental health treatments without impacting future results. Belief is part of the cure.

Johnsen, Tom J., and Oddgeir Friborg. "The Effects of Cognitive Behavioral Therapy as an Anti-Depressive Treatment is Falling: A Meta-Analysis." Psychological Bulletin (2015).

The Strange Allure of Almost Winning


In Addiction by Design: Machine Gambling in Las Vegas, the cultural anthropologist Natasha Dow Schüll describes the extensive use of the “near miss” effect in slot machines. The effect exists when game designers engineer the reels to stop next to winning symbols far more often than predicted by random chance. Consider the tricks used by slot machine manufacturer Universal, which developed a two-stage process after each spin. The first stage determined whether or not the player won. If he lost – and most spins are losers – the second stage initiated the near miss effect, setting up the player to believe he had come exceedingly close to a real payout. For instance, there might be two 7s on the main payline, and then a third 7 just below.* Although near misses cost the casinos nothing, they provide gamblers with motivational juice, persuading people to stick with a game that’s stacked against them. And so players keep losing money, because they almost won.
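
The two-stage logic is easy to sketch in code. In the toy version below, the first stage settles the actual outcome and the second stage only decides what the reels display on a loss; the probabilities and symbols are invented for illustration, not taken from any real machine.

    import random

    def spin(win_prob=0.02, near_miss_prob=0.30):
        # Stage 1: decide the true outcome. Payouts depend only on this step.
        if random.random() < win_prob:
            return ("7", "7", "7"), True  # a genuine jackpot
        # Stage 2: the spin is a loser, so choose what the player sees.
        # Some losses are dressed up as near misses: two 7s on the payline,
        # with the third 7 stopping just off the line. The payout is still zero.
        if random.random() < near_miss_prob:
            return ("7", "7", "BAR"), False
        return tuple(random.sample(["BAR", "CHERRY", "LEMON"], 3)), False

    reels, won = spin()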

While the psychological power of near misses is an old idea – B.F. Skinner celebrated their influence in the early 1950s – scientists have only begun to glimpse their strange mechanics. In a 2009 Neuron paper, scientists found that near misses in a slot machine game recruited the mesolimbic reward machinery of the brain, just like actual wins. Other research has shown that people with a gambling addiction – roughly 1-5 percent of the population - show a larger than normal response in those same reward areas when exposed to near misses. In essence, their brains fail to differentiate between near misses and wins, which might play a role in their inability to step away from the casino.

What remains unclear, however, is why near misses are so influential, even among people without gambling issues. One possibility, explored in a new paper by Monica Wadhwa and JeeHye Christine Kim in Psychological Science, is that near misses trigger a particularly intense motivational state, in which people are determined to get what they want. In fact, according to Wadhwa and Kim, coming close to winning a reward can be more motivating than a real win. This suggests, rather perversely, that a gambler who keeps losing money with near misses will stay with the game longer than a gambler who actually wins some cash.

The scientists began by building their own digital game. Players were shown a grid containing sixteen tiles and were told to click on the tiles one at a time. Half of the tiles concealed a rock, while the other half concealed a diamond. The goal of the players was to uncover eight diamonds in a row. (Needless to say, the odds of this happening by chance are vanishingly slim.) Players were randomly assigned to one of three rigged conditions: a clear loss condition, in which they uncovered a rock on the very first click; a near miss without anticipation condition, in which they found a rock on the second trial but went on to find seven diamonds in total; and a near miss with anticipation condition, in which players uncovered seven diamonds in a row before uncovering a rock on the very last trial.
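
Here is roughly what the rigging amounts to, expressed as the reveal order a player would see in each condition (“D” for diamond, “R” for rock). It is a sketch of the design described above, not the researchers’ actual game code.

    def rigged_reveal_order(condition):
        if condition == "clear_loss":
            return ["R"]  # a rock on the very first click
        if condition == "near_miss_no_anticipation":
            return ["D", "R"] + ["D"] * 6  # rock on click two, seven diamonds in total
        if condition == "near_miss_with_anticipation":
            return ["D"] * 7 + ["R"]  # one tile short of eight in a row
        raise ValueError(f"unknown condition: {condition}")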

After the game was over, the scientists measured the impact of these various forms of losing.  In the first study, they clocked the speed of subjects as they walked down the hallway to collect a chocolate bar. As expected, those in the near miss with anticipation condition – the ones who came within a single tile of winning – walked much faster (up to 20 percent faster) than those in the other conditions. (In a separate experiment, people in this condition also salivated more when shown pictures of money.) According to the scientists, these differences in speed and salivation were triggered by the increased motivation of almost winning the game, which spilled over to an unrelated task. The Vegas equivalent would be running over to the poker tables, because you barely lost at blackjack.

The last experiment featured a scratch lottery ticket with a 6x6 grid. If the ticket contained six 8s in a row, the player won. Once again, the game was rigged: some tickets were clear losers, others were near winners (they contained five adjacent 8s) and some were winners. The scientists gave people these lottery tickets as they entered a fashion accessory store. Those given near miss lottery tickets showed higher levels of motivation, and went on to spend significantly more money while shopping. Maybe this is why Las Vegas is so overstuffed with luxury boutiques.

In the context of slot machines and lottery tickets, the near miss effect can seem like a programming bug, a quirk of dopaminergic wiring that leads us to lose cash on stupid games of chance. We are highly motivated, but that motivation is squandered on random number generators, dice and roulette wheels.

And yet, if you zoom out a bit, there’s a more uplifting explanation for the motivational oomph of near misses. One possibility is that the effect is actually an essential part of the learning process. Education, after all, is entwined with mistakes and disappointment; we learn how to get it right by getting it wrong. This is true whether we’re practicing jump shots or trying to write the Great American Novel – the process will be full of bricks and airballs and terrible drafts. But if every failure made us quit, then we’d never get good at anything. So the human brain had to learn how to enjoy the slow process of self-improvement, which is really a never-ending sequence of near misses. Pseudo-wins. Two sevens and a bell.

But here’s the poignant punchline of this new study: in some situations, those near misses turn out to be more motivating than real wins. Although we assume success is the ultimate goal, the bittersweet flavor of almost success is what makes us persist. It’s the ball that rims out; the sentence that works followed by one that doesn’t; the slot reel that stopped an inch short of a jackpot. And so we find ourselves drawn to those frustrating pursuits where victory is close at hand, but always just out of reach.

Such a frustrating way to live. Such an effective way to learn.

*This particular bit of slot machine programming was later deemed illegal, although similar techniques are still widely practiced.

Schüll, Natasha Dow. Addiction By Design: Machine Gambling in Las Vegas. Princeton University Press, 2012.

Wadhwa, Monica, and JeeHye Christine Kim. "Can a Near Win Kindle Motivation? The Impact of Nearly Winning on Motivation for Unrelated Rewards." Psychological Science (2015).

 

Why Do Married Men Make More Money?

In 1979, Martha Hill, a researcher at the University of Michigan, observed a strange fact about married men: they make a lot more money, at least compared to their unmarried peers. (According to Hill’s numbers, marriage led to a roughly 25 percent boost in pay.) What’s more, the effect remained even after Hill controlled for most of the relevant variables, including work experience and training. 

In the decades since, this correlation has been repeatedly confirmed, with dozens of studies showing that married men earn between 10 and 50 percent more than their unmarried peers. (Because the world is so unfair, women get hit with a marriage penalty, as married women earn roughly 10 percent less than unmarried women.) What’s more, these income differences among men don’t seem to depend on any of the obvious demographic variables, including age, education, race and IQ scores. For whatever strange reason, companies seem to find married men more valuable. Their human capital is worth more capital.

But why? What does marriage have to do with work performance? A number of competing explanations for the male marriage premium have been proposed. There is the discrimination hypothesis – employers are biased against bachelors – and the selection explanation, which posits that men who are more likely to get married are also more likely to have the character traits required for career success. (If you can get along with your spouse, then you can probably get along with your colleagues.) Another possibility is that married men benefit from the specialization of their labor: because they don’t have to worry about the dishes or other household chores – that’s the job of their partners – they’re more productive at work. Lastly, there is the marriage-as-education hypothesis, which suggests that men might learn valuable skills from marriage itself. In the midst of a long-term relationship, men might get trained in things like commitment and self-control, which are also useful at the office. If true, this means that the male marriage premium is rooted in something real, and that marriage has a causal effect on productivity. Companies are right to pay extra for men with wedding rings.

There have been a number of clever attempts to test these different possibilities. Economists have looked at the effect of shotgun weddings and the impact of a working wife on a husband’s earnings. They’ve asked whether the gains of the marriage premium arrive all at once or accrue over time – most conclude they accrue gradually – and found that the premium dissipates as couples approach divorce.

And yet, despite this bevy of research, the literature remains full of uncertainty. In a new paper, published last month in Labour Economics, the economists Megan de Linde Leonard and T.D. Stanley summarize the current confusion. “Researchers report estimates of the marriage-wage premium that range from 100% to a negative 39% of average wage,” they write. “In fact, over the past four years, we found 258 estimates, nine percent of which are statistically negative, 40% are significantly positive,” and many of the rest are statistically indistinguishable from zero. So the effect is either positive, negative, or non-existent. So much for consensus.

To help parse the disagreement, de Linde Leonard and Stanley use a technique called meta-regression analysis. The statistical equations are way over my head, but de Linde Leonard was kind enough to describe the basic methodology by email:

"To do a meta-analysis, we search for all papers, published and unpublished, that have estimates of the phenomenon that we are interested in. We then record those estimates into a spreadsheet along with other important factors about the study. (Was it published? What data set was used? Was the data from the US?, etc.) Once that is complete, we use statistical analysis to draw out the signal from the noise. We don't only use studies that we consider to be best practice; we use all the studies we can find and let the data tell us what is true. That is the beauty of the technique. We don't have to rely on our (almost always biased) professional judgment to decide what is real and important. We let the body of research do the talking."

So what did the body of research say? After combing through 661 estimates of the male marriage premium, de Linde Leonard and Stanley settled on a 9.4 percent premium among male workers in the United States. (The effect seems to be less potent in other countries.) Interestingly, the male marriage premium seems to be getting more powerful in the 21st century, as a review of the most recent studies finds an average premium of 12.8 percent.

What’s more, the meta-regression technique allowed the economists to assess the likelihood of various explanations for the marriage premium. Since the premium is increasing among men even as the percentage of women in the workforce continues to rise, it seems unlikely that labor specialization plays a large role. (In other words, not doing the dishes doesn’t make you more productive at the office.*) De Linde Leonard and Stanley are also skeptical of the selection hypothesis, which suggests that married men only make more money because the men who tend to marry already possess the traits associated with high salaries. While the selection effect is real, the economists conclude that it’s not the main driver of the marriage premium, probably accounting for a bump of about 2 percent in wages – roughly a fifth of the total premium.

We’re left, then, with the marriage-as-education explanation. According to this theory, matrimony is a kind of college for the emotions, instilling partners with a very valuable set of non-cognitive traits. As de Linde Leonard and Stanley point out, marriage might cause men to “‘settle down,’ be more stable, and focused on work and career.” While we often draw a sharp distinction between the worlds of work and love, and assume that the traits and skills that are essential in one domain are irrelevant in the other, the marriage premium is a reminder that such distinctions are blurry at best. In fact, the talents that men learn from marriage are roughly equivalent, at least in monetary value, to the income boost the average worker gets from attending college but not graduating. (A bachelor’s degree gives people a much bigger salary boost.) Of course, women also probably pick up useful mental skills from matrimony, which makes the existence of the female marriage penalty – even if it’s just a penalty against having kids – that much more unjust.

And yet, despite the plausibility of the marriage-as-education theory, we know remarkably little about what’s learned from our closest romantic relationships. There’s some scattered evidence: men who score higher in grit are also more likely to stay married, and those with secure romantic attachments are also happier employees. But these are just glimpses and glances of what remains a mostly mysterious schooling. Besides, the greatest “skills” we learn from marriage (or really any committed relationship) might not be measurable, at least not in the psychometric sense. This is rampant speculation, rooted in my own n of 1 experience, but it seems that marriage can provide us with a valuable sense of perspective, stretching out the timescale of our expectations. We learn that moods pass, fights get forgotten, forgiveness is possible. We realize that everything worthwhile requires years of struggle (even love!), and that success is mixed up with the salty residue of sweat and tears. I have no idea how much that wisdom is worth at the office, but I damned sure know it helps with the rest of life. 

*I’m actually partial to what might be called the non-specialization-of-labor hypothesis, which is that spouses often add tremendous value to one’s work. Call it the Vera effect.

de Linde Leonard, Megan, and T. D. Stanley. "Married with children: What remains when observable biases are removed from the reported male marriage wage premium." Labour Economics 33 (2015): 72-80.

Does the Science of Self-Control Diminish Our Self-Control?

In 1998, the psychologist Roy Baumeister introduced the “strength” model of self-control. It’s a slightly misleading name, since the model attempts to describe the weakness of the will, why people so easily succumb to temptation and impulse. In Baumeister’s influential paper – it’s since been cited more than 2500 times – he and colleagues describe several simple experiments that expose our mental frailties. In one trial, subjects forced to eat radishes instead of chocolate candies gave up far sooner when asked to solve an impossible puzzle. In another trial, people told to suppress their emotions while watching a tragic scene from Terms of Endearment solved significantly fewer anagrams than those who watched a funny video instead. The lesson, write Baumeister et al., is that the ego is easily depleted, a limited resource quickly exhausted by minor acts of self-control.

It’s a compelling theory, as it seems to explain many of our human imperfections. It’s why a long day at work often leads to a pint of ice cream on the couch and why we get grumpy and distracted whenever we miss a meal. Because the will is so feeble, we must learn to pick our battles, exerting power only when it counts. In a particularly clever follow-up experiment, published in 2007, Baumeister and colleagues showed that a variety of typical self-control tasks led to lower glucose levels in the blood. (The mind, it seems, consumes more energy when attempting to restrain itself.) Not surprisingly, giving “depleted” subjects a glass of sweet lemonade improved their subsequent performance on yet another self-control task. However, depleted subjects given lemonade sweetened with fake sugar experienced no benefits. Saccharin might trick the tongue, but it can’t help your frontal lobes.

So far, so depressing: as described by Baumeister, self-control is a Sisyphean struggle, since the very act of exerting control makes it harder to control ourselves in the near future. We can diet in the morning, but that only makes us more likely to gorge in the afternoon. The id always wins.

But what if the will isn’t so fragile? In recent years, several papers have complicated and critiqued the strength (aka ego depletion) model of self-control. In a 2012 paper, Miller et al. pointed out that only people who believed in the impotence of willpower – they agreed that “after a strenuous activity, your energy is depleted and you must rest to get it refueled again” – performed worse on repeated tests of the will. In contrast, subjects who believed that self-control was essentially inexhaustible – “After a strenuous mental activity, you feel energized for further challenging activities” – showed no depletion effects at all. This suggests that the exhaustion of willpower is caused by a belief about our mental resources, and not by an actual shortage of resources. We think we’re weak, and so we are. The science becomes a self-fulfilling prophecy.

That’s a long introduction to the latest volley in the ego-depletion debate. In a new paper published in the Journal of Personality and Social Psychology, Veronika Job, Gregory Walton, Katharina Bernecker and Carol Dweck left the lab and tracked more than 100 students at a selective American university. The assessment began with a survey about their willpower beliefs. Is willpower depleted by strenuous mental activity? Or does “mental stamina fuel itself”? Then, the students were sent weekly questionnaires about their self-control failures in a variety of domains, from academics (“How often did you watch TV instead of studying?”) to emotional control (“How often did you have trouble controlling your temper?”). Finally, Job et al. asked students to anticipate the amount of self-control they’d need to exert over the next week. Did they have a big exam coming up? A class presentation? Were they having problems with friends or professors? In addition to these surveys, the scientists got access to the students’ GPAs.

When the demand for self-control was low, the students’ beliefs about willpower had no effect on their self-control performance. However, when the semester got stressful, and the students felt a greater need to resist temptation, the scientists observed a significant difference: those who believed willpower was nonlimited were better able to control themselves. Here are the scientists: “Far from conserving their resources and showing strong self-regulation when needed, students who endorsed the limited theory [of self-control] and who dealt with high demands over the term, procrastinated more (e.g., watching TV instead of studying), ate more junk food, and reported more excessive spending as compared to students with a nonlimited theory about willpower.” (This relationship held even after controlling for trait levels of self-control.) What’s more, these beliefs had a tangible impact on college grades, as students with a nonlimited view of self-control got a significantly higher GPA when taking heavy course loads.
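For readers who want to see the shape of that moderation analysis, here is a toy version in Python: self-control failures predicted by willpower beliefs, weekly demands, and their interaction, controlling for trait self-control. The data are simulated, and the real study relied on repeated weekly reports analyzed with multilevel models, so treat this strictly as a sketch of the idea rather than the authors’ actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: failures rise with demand mainly for students who hold a
# "limited willpower" belief. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 300
limited_belief = rng.integers(0, 2, n)        # 1 = believes willpower is limited
demand = rng.normal(0, 1, n)                  # how taxing the week was
trait_sc = rng.normal(0, 1, n)                # trait self-control (covariate)

failures = (0.2 * demand + 0.6 * limited_belief * demand
            - 0.4 * trait_sc + rng.normal(0, 1, n))

df = pd.DataFrame(dict(failures=failures, belief=limited_belief,
                       demand=demand, trait_sc=trait_sc))

# The belief-by-demand interaction term is the quantity of interest.
model = smf.ols("failures ~ belief * demand + trait_sc", data=df).fit()
print(model.summary().tables[1])
```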

It’s a fascinating paper, limited mostly by its reliance on self-reports. (The GPA data is the lone exception.) I’m not sure how much we should trust a college student’s retrospective summary of his or her self-control performance, or how those reports might be shaped by their implicit beliefs. Are you more likely to notice and remember your failures of willpower if you believe the will is bound to fail? I have no idea. But it would be nice to see future studies track our lapses in a more objective fashion, and hopefully over a longer period of time.

That said, these quibbles shouldn’t obscure a bigger point. We are constantly being besieged with bodily urges that we’re trying to resist. Maybe it’s a rumbling belly, or a twitchy attention, or a leg muscle pooling with lactic acid. Although we know what we’re supposed to do – not eat a candy bar, stay on task, keep working out – it’s hard for the mind to persist. And so we give in, and tell ourselves we didn’t have a choice. The flesh can’t be denied.

But here’s the good news: we’re probably tougher than we think. In one paper, published last year in Frontiers in Human Neuroscience, subjects who were flashed happy faces for 16 milliseconds at a time – that’s way too fast for conscious awareness – pedaled a bike at an intense pace for 25 minutes and 19 seconds. Those flashed sad faces only made it for 22 minutes and 22 seconds. (An even bigger boost was observed after some cyclists were primed with action words, such as GO and ENERGY.) What caused the difference? The subliminal faces didn’t strengthen their muscles, or slow down their heart rate, or mute the pain in their quads. Instead, the visuals provided a subtle motivational boost, which helped the cyclists resist the pain for 12 percent longer.

These results suggest, like the recent Job et al. paper, that our failures of self-control are not primarily about the physical limits of the brain and body. Those limits exist, of course – the legs will eventually give out and the frontal lobes need glucose. But in the course of an ordinary day, those brute limits are far away, which means that the constraining variable on self-control is often psychological, tangled up with our motivations and expectations. And that’s why our implicit beliefs about self-control and the mind can be so important. (And also why we need to ensure our kids are given the most useful beliefs, which Dweck refers to as a “growth mindset.”*) If you believe the self is weak or that the mind is fixed – say, if you’ve read all those ego depletion papers – then you might doubt your ability to stay strong; the lapse becomes inevitable. “A nonlimited theory does not turn people into self-control super heroes who never give in to temptations,” write Job et al. “However, they lean in when demands on self-regulation are high.” The self they believe in does not wilt after choosing a radish; it is not undone by a long day; it can skip the lemonade and still keep it together.

We are not perfect. Not even close. But maybe we’re less bound to our imperfections than we think.

*In a new paper, Paunesku et al. show that offering high-school students a brief growth mindset intervention – teaching them that “struggle is an opportunity for growth,” and not an indicator of failure – led to significantly better grades among those at risk of dropping out. While this result itself isn’t new, Paunesku showed that it was scalable, and could be delivered to thousands of students at low cost using online instruction.

Job, Veronika, et al. "Implicit theories about willpower predict self-regulation and grades in everyday life." Journal of Personality and Social Psychology (April 2015).

Can People Change? The Case of Don Draper

Can people change? That is the question, it seems to me, at the dark heart of Mad Men.  We’ve spent eight years watching Don Draper try to become a better man. He wants to drink less and swim more. He wants to get close to his kids and stay faithful to his wife.

But little has changed; Don remains mostly the same. The world around him is now wearing orange plaid and bushy sideburns, but Don still looks like an astronaut, clad in crisp suits and pomaded hair. He’s still sipping bourbon in bars, still sleeping around, still most alive when selling himself to strangers. What we’ve learned from the passing of time is that Don has learned nothing.

I have no idea how Mad Men will end. Perhaps Don will have a midlife epiphany and move to California. Maybe he’ll find true love in the arms of a waitress from Racine. But given the pace of the show so far, I’d bet on a far less dramatic finale. If anything, the turbulence of the sixties only highlights the brittleness of human character. Fashions change. Politics change. A man can walk on the moon. But we are stuck with ourselves. 

Is this true? Are we really stuck? Do people ever change? Put another way: is Don Draper the exception or the rule? 

Obviously, the empirical answers are irrelevant to the success of Mad Men; the art doesn’t need to obey the facts of social science. But let’s admit that these are interesting mysteries, and that our capacity for change isn’t just relevant on cable television. It’s also a recurring plot point of real life.  

The best way to grapple with these scientific questions is to follow people over time, measuring them within the context of a longitudinal study. And since Mad Men is basically a longitudinal study of a single man over a decade, it might be worth comparing its basic conclusion – most people don’t change – with the results of actual longitudinal research.

The most fitting comparison is the Grant Study of Adult Development, which has been tracking more than 200 men who were sophomores at Harvard between 1939 and 1944. Every few years, the subjects submit to a lengthy interview and a physical exam; their wives and children are sent questionnaires; they are analyzed using the latest medical tests, whether it’s a Rorschach blot or an fMRI. The oldest subjects are now in their mid-nineties, making them a few years older than Don Draper.

George Vaillant led the Grant study for more than thirty years, and has written extensively about its basic findings. His first survey of the project, Adaptation to Life, is a classic; his most recent book, Triumphs of Experience, provides a snapshot of the men as they approach the end of life. And while Vaillant’s writing is full of timeless wisdom and surprising correlations – alcoholism is the leading cause of divorce; a loving childhood is more predictive of income than IQ scores; loneliness dramatically increases the risk of chronic disease – its most enduring contribution to the scientific literature involves the reality of adult development.

Because people change. Or rather: we never stop changing, not even as old men. In fact, the persistence of personality change is one of the great themes of the Grant study. The early books are full of bold claims. But then, as the years pass, the stories of the men become more complicated, subtle, human. 

Take divorce. Vaillant initially assumed, based on his interviews with the Grant subjects, that “divorce was a serious indicator of poor mental health.” It signaled an unwillingness to commit, or perhaps an inability to deal with intimacy. These marriages didn’t fail because they were bad marriages. They failed because the men were bad partners, just like Don.

But time is the great falsifier. When the subjects were in their seventies and eighties, Vaillant conducted extensive interviews with them about their marriages. As expected, more than 90 percent of those in consistently happy first marriages were still happy. The same pattern applied to those stuck in poor relationships – they were still miserable, and probably should have gotten divorced. However, Vaillant was startled by what happened to those men who divorced and later remarried: roughly 85 percent of them said “their current marriages were happy - and had been for an average length of thirty-three years.” This data forced Vaillant to reconsider his beliefs about divorce. Instead of seeing marital failure as an innate character flaw, he came to believe that it was “often a symptom of something else,” and that these men had learned how to become good husbands. They changed.

The same idea returns again and again, both in the statistics and the individual case studies. The man raised in a loveless home becomes a loving father; the alcoholic stops drinking while another one starts; some gain wisdom, others grow bitter. The self is a verb, always becoming. As Vaillant writes, “Our journeys through this world are filled with discontinuities.” 

Of course, longitudinal studies are not the only way to measure adult development. In a recent Science paper, the psychologists Jordi Quoidbach, Daniel Gilbert and Timothy Wilson came up with a novel way to measure our inner changes. The survey itself was simple: they asked more than 19,000 adults, ranging in age from 18 to 68 years, questions about how much they’d changed during the previous ten years and how much they expected to change over the next ten. By comparing the predictions of subjects to the self-reports of those who were older, the scientists were able to measure the mismatch between how much we actually changed (a significant amount) and how much we expected to change in the future (not very much at all).
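The clever part is the alignment: compare what people of a given age predict about their next decade with what people ten years older report about the decade they just finished. Here is a toy Python sketch of that comparison; every number in it is simulated, and only the alignment logic mirrors the study’s design.

```python
import numpy as np
import pandas as pd

# Simulate 5,000 adults aged 18-68 who report how much they changed over the
# past decade and predict how much they will change over the next one. The
# "prediction" is built to understate future change, mimicking the illusion.
rng = np.random.default_rng(1)
n = 5000
age = rng.integers(18, 69, n)
reported = rng.normal(1.0 - 0.01 * (age - 18), 0.2)      # change over past decade
predicted = 0.6 * reported + rng.normal(0, 0.1, n)        # forecast for next decade

df = pd.DataFrame(dict(age=age, reported=reported, predicted=predicted))
predicted_by_age = df.groupby("age")["predicted"].mean()  # age A, looking ahead
reported_by_age = df.groupby("age")["reported"].mean()    # age A + 10, looking back

gaps = [reported_by_age[a + 10] - predicted_by_age[a]
        for a in range(18, 59)
        if a in predicted_by_age.index and (a + 10) in reported_by_age.index]
print(f"mean shortfall in predicted change: {np.mean(gaps):.2f}")
```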

The scientists refer to this as the “end of history illusion,” noting that people continually dismiss the possibility that their personalities, values, and preferences will evolve over time. As the scientists write, “People, it seems, regard the present as a watershed moment at which they have finally become the person they will be for the rest of their lives.” But no such moment exists. History is never over, and we never stop changing.

In his writing, Vaillant repeatedly quotes the famous line of Heraclitus: “No man ever steps in the same river twice; for it is not the same river, and he is not the same man.” Mad Men shows us the changes of the river. It shows us a society disrupted by the pill and the civil rights movement and Vietnam. But Don remains the same, forever stuck in his own status quo. For a show obsessed with verisimilitude – every surface is faithful to the period – this might be the most unrealistic thing on the screen.

Vaillant, George E. Adaptation to Life. Harvard University Press, 1977.

Vaillant, George E. Triumphs of Experience. Harvard University Press, 2012.

Quoidbach, Jordi, Daniel T. Gilbert, and Timothy D. Wilson. "The end of history illusion." Science 339.6115 (2013): 96-98.