The Sabermetrics of Effort

The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players who possess them tend to toil in obscurity – they can be identified using sophisticated statistical techniques, a.k.a. sabermetrics. Home runs are fun. On-base percentage is crucial.

The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll, the Dodgers just hired Andrew Friedman, and baseball fans now speak in clumps of acronyms. (“His DICE and DIPS are solid, but I’m worried he’ll regress to the mean given his extremely high BABIP.”)

However, the triumph of moneyball creates a paradox: its success depends on market inefficiencies that, once exposed, soon disappear. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.

And this brings me to a new paper in the Journal of Sports Economics by Daniel Weimar and Pamela Wicker, economists at the University of Duisburg-Essen and the German Sport University Cologne. They focused on a variable of athletic performance that has long been neglected, if only because it’s so difficult to measure: effort. Intuitively, it’s obvious that player effort is important. Fans complain about basketball players who are slow to get back on defense; analysts gossip about pitchers who return to spring training carrying a few extra pounds; it’s what coaches are always yelling about on the sidelines. Furthermore, there's some preliminary evidence that these beliefs are rooted in reality: One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.

And yet, despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimar and Wicker set out to fix this oversight. Using data gathered from three seasons and 1,514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance, just like shots on goal or pass accuracy. They did this in two ways: 1) measuring the total distance run by each player during a game and 2) measuring the number of “intensive runs” – short sprints at high speed – by the players on the field.

The first thing to note is that the typical soccer player runs a lot. On average, players in the Bundesliga run 11.1 km per game and perform 58 intensive sprints. That said, there were still significant differences in running totals among players. Christoph Kramer averaged 13.1 km per game during the 2013-2014 season, while Carlos Zambrano ran less than 9 km; some players engaged in more than 70 sprints, while others executed fewer than 45. According to the economists, these differences reflect levels of effort, and not athletic ability, since “every professional soccer player should have the ability to run a certain distance per match.” If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.

So did these differences in levels of effort matter? The answer is an emphatic yes: teams whose players run longer distances are more likely to win the game, even after accounting for a bevy of confounding variables. According to the calculations, if a team increases the average running distance of its players by 1 km (relative to the opponent), it will also increase its winning probability by 26 to 28 percent. Furthermore, the advantages of effort are magnified when the team difference is driven by extreme amounts of effort put forth by a few select players. As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
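To make that estimate concrete, here is a minimal sketch, in Python, of how a running-distance advantage might be mapped onto a win probability under a simple logistic model. The baseline probability and the per-kilometer coefficient are illustrative assumptions, not the authors' estimates, which come from a regression with many control variables; the point is only to show the mechanics of the conversion.

```python
import math

def win_probability(distance_gap_km: float,
                    baseline_p: float = 0.37,
                    beta_per_km: float = 1.15) -> float:
    """Illustrative logistic mapping from a team's running-distance advantage
    (average km per player, relative to the opponent) to a win probability.
    Both parameters are assumptions for illustration, not estimates from
    Weimar & Wicker (2014)."""
    log_odds = math.log(baseline_p / (1 - baseline_p)) + beta_per_km * distance_gap_km
    return 1 / (1 + math.exp(-log_odds))

evenly_matched = win_probability(0.0)  # no effort gap between the teams
one_km_harder = win_probability(1.0)   # players run 1 km more, on average
print(f"baseline {evenly_matched:.2f} -> with +1 km {one_km_harder:.2f} "
      f"(a gain of {one_km_harder - evenly_matched:.2f})")
```

With these made-up parameters, a 1 km advantage lifts the win probability from about 0.37 to about 0.65; plug in different assumptions and the size of the jump changes, but the shape of the relationship does not.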

Taken together, these results suggest that finding new ways to measure player effort can lead to moneyball opportunities for astute soccer teams. Since previous research demonstrates that a player’s effort has an “insignificant or negative impact” on his market value, it seems likely that teams would benefit from snapping up those players who run the most. Their extra effort isn’t appreciated or rewarded, but it will still help you win.

The same principle almost certainly applies to other sports, even if the metrics of effort aren’t quite as obvious as total running distance in soccer. How should one measure hustle in basketball? Number of loose balls chased? Time it takes to get back on defense? Or what about football? Can the same metrics of effort be used to assess linemen and wide receivers? These questions don’t have easy answers, but given the role of effort in shaping player performance, it seems worthwhile to start asking them.

There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports. The psychologist Paul Sackett frames the issue nicely in his work on maximum tests versus typical performance. Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.

Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested. (That, presumably, is what we really care about.) As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed on the measure of typical performance, and vice versa.

What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort. In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.

And that’s why I find this soccer data so interesting. Sports teams, after all, have massive financial incentives to improve their assessments of human capital; tens of millions of dollars depend on the wisdom of their personnel decisions. Given the importance of effort in player performance, I’m hopeful they’ll get more serious about finding ways to track it. With any luck, these sabermetric innovations will trickle down to education, which is still mired in maximum high-stakes tests that fail to directly measure or improve the levels of effort put forth by students. As the data from German soccer remind us, knowing how to measure and increase effort is extremely valuable. After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.

Old-fashioned effort just might be the next on-base percentage.

Weimar, D., & Wicker, P. (2014). Moneyball Revisited: Effort and Team Performance in Professional Soccer. Journal of Sports Economics, 1527002514561789.

 

What Your Mother Has To Do With Your Lover

"They fuck you up, your mum and dad.   

    They may not mean to, but they do.   

They fill you with the faults they had

    And add some extra, just for you."

-Philip Larkin, “This Be The Verse”

The poem has the structure and simplicity of a nursery rhyme, which makes its tragic message that much harder to take. In three short verses, Larkin paints the bleakest possible view of human nature, insisting that our flaws are predestined by our birth, for children are ruined by their parents. “Man hands on misery to man,” Larkin writes; the only escape is to “get out as early as you can, and don’t have any kids yourself.” 

Larkin, of course, was exaggerating for effect - not every parent-child relationship is a story of decay. Not every family is a litany of inherited faults. In most cases, the people who love us first don’t just fuck us up - they also fix us. They cure us of the faults we’d have if left alone.

And yet, Larkin’s short verse does describe a difficult truth, which is that poor parenting can leave lasting scars. And so the terrible cycle repeats and repeats, as we inflict upon others the same sins and errors that were inflicted upon us. The sadness, Larkin writes, “deepens like a coastal shelf.”

But why? What are the mechanics of this process? A bad mum and dad might fuck us up, but what, exactly, are they fucking up?

A new paper in Psychological Science by the psychologists Lee Raby, Glenn Roisman, Jeffry Simpson, Andrew Collins and Ryan Steele gives us a glimpse of some possible answers.* By drawing on the epic Minnesota Longitudinal Study of Risk and Adaptation, Raby et al. were able to show how a particular kind of poor parenting - insensitivity to the child’s signals - can have lasting effects. If we don’t feel close to our caregivers, then we struggle to stay close to other people later in life. In this sense, learning to love is like learning anything else: it requires a good teacher.

First, a little history about the Minnesota Longitudinal Study. In the mid-1970s, Alan Sroufe and Byron Egeland began recruiting 267 pregnant women living in poverty in the Minneapolis area. What makes the Minnesota study so unique is its time scale: the researchers have been tracking and testing the children born to these women for nearly 40 years. They made home visits during infancy and tested the children in the lab as toddlers. They set up a preschool and a summer camp. They watched the children interact with their mothers as teenagers and kept track of their grades and test scores. The subjects were interviewed at length, repeatedly, about nearly everything in their lives.

The point of all this data - and it’s a staggering amount of data - is to reveal the stark correlations between the quality of the early parent-child relationship and the ensuing trajectory of the child. Because the correlations are everywhere. According to the Minnesota study, children who are more securely attached to their mother exhibit more self-control and independence in preschool. They score higher on intelligence tests, get better grades and are far more likely to graduate from high school. As adults, those who experienced more supportive parenting are more supportive with their own children; they also have better romantic relationships. In their masterful summary of the study, The Development of the Person, Sroufe, Egeland and Elizabeth Carlson compare our early attachment experiences to the foundation of a house. While the foundation itself is not sufficient for shelter - you also need solid beams and a sturdy roof – the psychologists note that “a house cannot be stronger than its foundation.” That’s what we get as young children: the beginnings of a structure on which everything else is built.

And this brings us back to the latest follow-up study, conducted when the Minnesota subjects were between 33 and 37 years old. Raby et al. began by asking the subjects and their long-term romantic partners a series of questions about their relationship, including the top three sources of conflict. Then, the couples were instructed to seek a resolution to one of these major disagreements.

While the subjects were having these difficult conversations, the scientists were measuring the “electrodermal reactivity” of their hands. It’s long been known that certain types of emotional experiences, such as fear and nervous arousal, trigger increased skin reactivity, opening up the glands of the palm. (Lie detectors depend on this principle; suppressing our true feelings makes us sweat.) Not surprisingly, the couples experienced higher electrodermal reactivity when talking about their relationship problems than when doing a simple breathing exercise. These were not fun conversations.

Here’s where the longitudinal data proved essential. By comparing the changes in skin response triggered by the conflict discussion to the early childhood experiences of the subjects, the scientists were able to document a troubling correlation. In general, those who “experienced less sensitive, responsive and supportive caregiving” during childhood and adolescence displayed a significantly higher skin conductivity response when talking to their partners about their relationship problems as thirtysomethings. This correlation held even after correcting for a bevy of other variables, including the quality of the current romantic relationship, gender, ethnicity, and socioeconomic status.

What explains this finding? Why does a less sensitive parent lead to sweatier palms in middle age? One possibility - and it’s only a possibility - is that the elevated skin conductance is a marker of “behavioral inhibition,” a sign that the subjects are holding their feelings back. Because these adults had parents who struggled to respond to their emotional needs, they learned to hide their worries away. (Why express yourself if nobody’s listening?) This might explain why these same individuals also seem to have a tougher time discussing relationship problems with their adult partner, at least based on the spike in skin reactivity.

As the years pass, this inability to discuss relationship issues can itself become a serious issue. For instance, research by John Gottman and colleagues at the University of Washington found that, once the honeymoon period was over, couples who experienced more “verbal conflict” were actually more likely to stay together. “For a marriage to have real staying power, couples need to air their differences,” Gottman writes. “Rather than being destructive, occasional anger can be a resource that helps the marriage improve over time.” Intimacy requires candor and vulnerability, not inhibition and nerves.

This new study from the Minnesota subjects comes with all the usual caveats. It has a relatively small sample size - only 37 couples participated - and correlation does not prove causation. Nevertheless, it’s powerful evidence that the shadow of that first loving relationship - the one we have with our parents - follows us through life, shaping every love thereafter.

Raby, K. Lee, et al. "Greater Maternal Insensitivity in Childhood Predicts Greater Electrodermal Reactivity During Conflict Discussions With Romantic Partners in Adulthood." Psychological Science (2015)

Raby, K. Lee, et al. "The interpersonal antecedents of supportive parenting: A prospective, longitudinal study from infancy to adulthood." Developmental Psychology 51.1 (2015)

Sroufe, L. A., Egeland, B., Carlson, E. A., & Collins, W. A. (2009). The Development of the Person: The Minnesota Study of Risk and Adaptation from Birth to Adulthood. Guilford Press.

*Just a reminder that this research on poor parenting has massive public policy implications. According to a 2013 report from the Center on Children and Families by Richard Reeves and Kimberly Howard, if the “emotional support skills” of the weakest parents are merely boosted to an average level, the result would be a 12.5 percent decrease in teen pregnancy, a 9 percent increase in high school graduation rates and an 8.3 percent decrease in criminal convictions before the age of 19.

How To Convince People They're Criminals

In November 1988, Christopher Ochoa was interrogated by police about the brutal rape and murder of Nancy DePriest at a Pizza Hut in Austin, Texas. He was questioned for nearly twelve hours. The cops told him that his best friend, Richard Danziger, had already linked him to the crime scene. They said that Ochoa would be given the death penalty - and showed him where on his arm the needle would go - unless he confessed and pled guilty. 

And so that’s what Ochoa did. He testified that he and Danziger had planned to rob the Pizza Hut, and then tied up the victim with her bra before raping her; they only shot the victim in the head after she recognized them. (Ochoa and his friend worked at another Pizza Hut in the area.) During the trial, Ochoa testified against Danziger – who had maintained his innocence – and both men were sentenced to life in prison.

In 1996, a convict named Achim Josef Marino, who was serving three life sentences, wrote letters to various Texas officials insisting that he had raped and killed DePriest, and that Ochoa and Danziger were both innocent. Marino said that evidence linking him to the crime scene, including the victim’s keys, could be found at his parents’ home. After recovering this evidence, Austin police re-interviewed Ochoa. His story, however, remained the same: he had committed the crime. He was a guilty man.

In fact, it would take another three years before students at the Innocence Project at the University of Wisconsin Law School in Madison were able to test semen recovered from the crime scene. The genetic tests proved that neither Ochoa nor Danziger had any involvement with DePriest’s rape and murder. On February 6, 2002, both men were exonerated.

There is no more potent form of legal evidence than a confession. To know that someone confessed is to assume they must have done it: why else would they submit a guilty plea? And yet, the tragic files of the Innocence Project demonstrate that roughly 25 percent of wrongful convictions involve false confessions, as many people take responsibility for violent crimes they didn’t commit.

These false confessions have multiple causes. Most often, they seem to be associated with devious interrogation techniques (telling Ochoa that Danziger was about to implicate him) and the use of violence and intimidation during the interrogation process (insisting that Ochoa would be sentenced to death unless he pled guilty).

And yet, false confessions are not simply a matter of police officers scaring suspects into admissions of guilt. In many instances, they also involve the generation of false memories, as suspects come to believe - typically after hours of intense and repetitive interrogation – that they committed the crimes in question. In the scientific literature, these are sometimes referred to as “honest lies,” or “phantom recollective experiences.”

What’s so unsettling is how easy it is to implant false memories in someone else’s head. This ease is old news: in the mid-1990s, Elizabeth Loftus and colleagues famously showed how a few suggestive interviews could convince people they’d been lost in a shopping mall at the age of six. Subsequent studies have extended her findings, persuading subjects that they’d been rescued by a lifeguard after nearly drowning or had tea with Prince Charles. You can trick people into misremembering details from a car accident and get them to insist that they shook hands with Bugs Bunny at Disneyland.

However, a new study by the psychologists Julia Shaw and Stephen Porter takes this false memory paradigm in a most disturbing direction, revealing a clear connection between false memories in the lab and false confessions in the legal system. In their paper, Shaw and Porter demonstrate that a majority of people can also be persuaded that they committed serious crimes. Their memories were rich, detailed and convincing. They were also complete fictions.

Shaw and Porter began the study by contacting the primary caregivers of 126 undergraduates, asking them to report “in some detail on at least one highly emotional event” experienced by the student during childhood. Then, sixty of these students were questioned three times, for about forty minutes each, with the interviews occurring a week apart. The interviews followed a technique proven to elicit false memories, as the scientists described two events from the subject’s childhood. The first event was true, at least as described by the caregiver. The second was not.

The novelty of this study involved the nature of the false event. Half of the subjects were randomly assigned to a “criminal condition,” told that they had committed a crime resulting in police contact. The crimes themselves varied, with a third told they had committed assault, another third that they had committed assault with a weapon, and the final third that they had committed theft. Those in the non-criminal condition, meanwhile, were assigned one of the following false memories: they’d been attacked by a dog, injured during a powerful emotional experience, or lost a large sum of money and gotten into a lot of trouble with their parents. 

During the interview process, the subjects were asked to recall both the true and false events. Not surprisingly, the subjects had trouble recalling the fictional event they’d never experienced. The scientists encouraged them to try anyway. To make the false memories feel more believable, they seeded their questions about the event with accurate details, such as the city the subject had lived in at the time, or the name of a friend from his or her childhood. They also relied on a collection of interrogation strategies that have been consistently associated with the generation of false confessions. Here are the scientists describing their devious method:

"The tactics that were scripted into all three interviews included incontrovertible false evidence (“In the questionnaire, your parents/ caregivers said. . .”), social pressure (“Most people are able to retrieve lost memories if they try hard enough”), and suggestive retrieval techniques (including the scripted guided imagery). Other tactics that were consistently applied included building rapport with participants (e.g., asking “How has your semester been?” when they entered the lab), using facilitators (e.g., “Good,” nodding, smiling), using pauses and silence to allow participants to respond (longer pauses seemed to often result in participants providing additional details to cut the silence), and using the open-ended prompt “what else?” when probing for additional memory details."

In the two follow-up interviews, the subjects were, once again, asked to describe their false memories. In addition, they were asked a number of questions about the nature of these memories, such as how vivid they seemed, and whether or not they felt true.

The results were shocking. Of the thirty people assigned to the criminal condition, twenty-one of them (70 percent) now reported a false memory of being involved in a serious felony that resulted in police contact. What’s more, these “honest lies” were saturated with particulars, as the subjects reported an average of more than 71 details from the non-existent event, including 12 details about their interactions with the police officers. “This study provides evidence that people can come to visualize and recall detailed false memories of engaging in criminal behavior,” write Shaw and Porter. “Not only could the young adults in our sample be led to generate such memories, but their rate of false recollection was high, and the memories themselves were richly detailed.” While the subjects’ true memories were slightly more detailed than their false memories, and they were a bit more confident that the true events had happened, there were no obvious distinctions in form or content between their real and imagined recollections.

The study, then, is yet another reminder that our memory takes a post-modern approach to the truth, recklessly blurring together the genres of autobiography and fiction. Although our recollections tend to feel accurate and immutable, the reality is that they are undergoing constant revision: we rewrite our stories of the past in light of the present. (This is known as reconsolidation theory.) The end result is that the act of remembering is inseparable from misremembering; the memoirs we carry around in our heads are overstuffed with bullshit.

What’s most disturbing, of course, is that we believe most of it anyway, which is why Shaw and Porter were able to make people remember crimes they’d never committed. When the experiment was over, after three weeks of interviews, the scientists told the subjects the truth: There was no assault, no weapon, no theft. They had been innocent all along.

It took nearly fourteen years for Christopher Ochoa to be told the same thing.

Shaw, Julia, and Stephen Porter. "Constructing Rich False Memories of Committing Crime." Psychological Science (2015).

 

Why Dieting Is So Hard

New year, new you. For many people, a new you really means a new diet, shorn of white carbs, fried foods and ice cream. (Losing weight is, by far, the most popular New Year’s resolution.) Alas, the new you has to struggle against the habits of the old you, which knows perfectly well how delicious French fries taste. Most diets fail because the old you wins.

Why is the new you so weak? A recent study in Psychological Science by Deborah Tang, Lesley Fellows and Alain Dagher at McGill University helps reveal the profound challenges faced by the typical dieter, struggling for a slimmer waistline. We were not designed to diet; the mind does not crave celery. We were designed to gorge.

The study began by asking 29 people to evaluate pictures of fifty different foods, some of which were healthy (fruits and vegetables) and some of which were not (chocolate bars, potato chips, etc.). The subjects were asked two questions about each picture: 1) how much they wanted to eat it, on a twenty-point scale, and 2) how many calories they thought it contained.

The first thing the scientists found is that people are terrible at guessing the number of calories in a given food. In fact, there was no correlation between subjects’ estimates of calories and the actual number of calories. This failure of dietary intuition means that even when we try to eat healthy we often end up eating the wrong thing. A Jamba Juice smoothie might seem like a responsible choice, but it’s actually a speedball of energy, with a large serving of the Orange Dream Machine clocking in at 750 calories. That’s 35 percent more calories than a Big Mac.

But here's the fascinating twist: although our conscious assessments of calories are not to be trusted, the brain seems to contain a calorie counter of its own, which is pretty reliable. (This calorie counter learns through personal experience, not nutritional labels.) In short, part of you knows that the low-fat smoothie contains more calories than the double burger, even if the rest of you is in sweet denial.

The scientists revealed this internal calorie counter in two ways. First, they showed that the amount people were willing to bid in an auction for a familiar food was closely related to its true caloric content, and not their liking ratings or the number of calories they thought the food had. In short, people were willing to pay larger amounts for food with more energy, even if they didn’t particularly like the taste of it.

The second source of evidence featured fMRI data. After showing the subjects the food photos in a brain scanner, the scientists found that activity in a part of the brain called the ventromedial prefrontal cortex (vmPFC) was closely correlated with the actual number of calories, and not individual preferences or the estimated number of calories. And given previous scanning research linking the vmPFC to assessments of subjective value - it helps determine the worth of alternatives - this suggests that, for certain parts of the brain, “the reward value of a familiar food is dependent on implicit knowledge of its caloric content.” Kale juice is for suckers.

This research comes with a few unsettling implications. The first is a sobering reminder that the mind is a calorie-seeking machine. Although we live in a world of cheap glucose and abundant fats, part of us is still terrified of going hungry. That, presumably, is why we assiduously track the amount of energy in certain foods.

But wait - it gets worse. Not only does the brain ascribe high value to calorically dense foods, but it also seems to get a lot of pleasure from their consumption, regardless of how the food actually tastes. A 2008 study by researchers at Duke, for instance, showed that mutant mice who can’t taste sweet things still prefer to drink sugar water, simply because their gut enjoyed the fuel. (The ingestion of calories triggers a release of dopamine regardless of how the calories taste.) This suggests that we’d still crave that Jamba Juice smoothie even if it wasn’t loaded with fruit sugars; energy makes us happy. 

There are no easy fixes here, which is why losing weight is so hard. This is true at the individual level - the cravings of the old you are difficult to resist - and at the societal level, as the government seeks to persuade people to make healthier eating choices. In fact, this study helps explain why calorie labeling on menus doesn’t seem to work very well, at least in some early trials. Although the labels attempt to educate consumers about the true caloric content of foods, the brain is already tracking calories, which makes it hard for the fine print on menus to compete. And even if we did notice the energetic heft of the smoothie, it’s not clear how much we’d care. Simply put, we are wired to prefer those foods with the most fuel, even when that fuel makes us fat.

The old you wins again.

Tang, Deborah W., Lesley K. Fellows, and Alain Dagher. "Behavioral and Neural Valuation of Foods Is Driven by Implicit Knowledge of Caloric Content." Psychological Science 25.12 (2014): 2168-2176.

 

Are Toddlers Noble Savages?

The bluestreak cleaner wrasse is a trusting fish. When a large predator swims into its cleaning station, the tiny wrasse will often enter the gills and mouth of the “client,” picking off ectoparasites, dead skin and stray bits of mucus. The wrasse gets a meal; the client gets cleaned; everyone wins, provided nobody bites.

This is a story of direct reciprocity. Nature is full of such stories, from the grooming of Sri Lankan macaques to the sharing of blood by vampire bats. In fact, such reciprocity is an essential component of biological altruism, or the ability to show concern for the wellbeing of others. Despite our reputation for selfishness, human beings are big believers in altruism, at least when it's rooted in reciprocity. If somebody gives us something, then we tend to give something back, just like those fish and bats. 

But where does this belief in reciprocity come from? One possibility is that we're hard-wired for it, and that altruism emerges automatically in early childhood. This theory has been bolstered by evidence showing that kids as young as eighteen months don't hesitate to help a stranger in need. In fact, human toddlers seem especially altruistic, at least when compared to our chimp relatives. As Michael Tomasello, the co-director of the Max Planck Institute for Evolutionary Anthropology, writes in his recent book Why We Cooperate: "From around their first birthday - when they begin to walk and talk and become truly cultural beings - human children are already cooperative and helpful in many, though obviously not all, situations. And they do not learn this from adults; it comes naturally."

It's an uplifting hypothesis, since it suggests that niceness requires no education, and that parents don't have to teach their kids how to be kind; all we have to do is not fuck them up. As Tomasello writes, “There is very little evidence in any of these cases…that the altruism children display is a result of acculturation, parental intervention or any other form of socialization.” If true, then Rousseau was mostly right: every toddler is a noble savage.

However, a new paper in PNAS by Rodolfo Cortes Barragan and Carol Dweck at Stanford University suggests that the reality of children’s altruism is a little more complicated. Their study provides powerful evidence that young kids do like to help and share, but only when they feel like they're part of a sharing culture. They want to give, but the giving is contingent on getting something back.

The experiments were straightforward. In the first study, thirty-four 1- and 2-year-olds were randomly assigned to either a "reciprocal play" or "parallel play" warm-up session. In the reciprocal play setup, the scientist shared a single set of toys with the child, taking turns rolling a ball, pushing buttons on a musical toy and passing plastic rings back and forth. The parallel play condition featured the same toys, only the scientist and child each had their own set. In both conditions, the scientist sat three feet away from the toddler and flashed a smile every thirty seconds.

Then, six minutes after play began, the scientist removed the toys and began testing the willingness of the children to offer assistance, demonstrating a need for help in reaching four different objects: a block, bottle, clothespin and pencil. The children were given thirty seconds to help, as the scientist continued to reach out for the object.

The differences were stark. When children were first exposed to the reciprocal play condition, they offered help on roughly three of the four trials. However, when they first played in parallel, the rate of assistance plummeted to an average of 1.23 out of four.

In the second study, the scientists replicated these results with a stranger. Instead of having the children help out the same person they'd been playing with, they introduced an unknown adult, who entered the room at the end of playtime. Once again, the children in reciprocal play were far more likely to help out, even though they'd never met the person before. As Barragan and Dweck note, these are "striking" shifts in behavior. While children in the parallel play condition tended to ignore the needs of a new person, those in the "reciprocal play condition responded by helping time and time again, despite the fact that this new person had previously done nothing for them and now gave them nothing in return."

The last two studies extended these results to 3- and 4-year-old children. Once again, the young subjects were randomly assigned to either a reciprocal or parallel play condition. After a few minutes of play, they were given the chance to allocate stickers to themselves or the adult. Those in the reciprocal play condition shared far more stickers. The last study explored the cause of this increased altruism, showing that children were more likely to say that a reciprocal play partner would provide help or share a toy, at least when compared to a parallel play partner.

I have a selfish interest in this subject. As a parent of two young kids, a significant portion of my day is spent engaged in negotiations over scarce resources (aka toys). In my small sample size, appeals to pure altruism rarely work: nobody wants to share their Elsa doll to cheer up another toddler. However, if that Elsa doll is part of a group activity – we can dress her up together! – then an exchange of Disney characters just might be possible. As this paper demonstrates, the key is to make sharing feel like a non-zero sum game, or one in which cooperation leaves everyone better off.

And this is where parents come in. As Barragan and Dweck note, their data contradicts “the notion that socialization has little or no part to play in early occurring altruism.”  Instead, their work demonstrates how the modeling of adults – the mechanics of our playing - strongly shapes the sharing instincts of children. I’ve made the mistake of believing that my kids will share once they’ve got enough toys, that altruism depends on a sense of abundance. (Ergo: my many trips to the Disney store.) But this appears to be an expensive mistake. After all, the parallel play condition offered kids the same playthings in greater amounts, since they didn’t have to share the ball or rings with the grown-up. But that surplus didn’t make them generous. Rather, their generosity depended on being exposed to an engaged and attentive adult, willing to get down on the ground and roll a ball back and forth, back and forth. My takeaway? Buy less, play more.

The wrasse and the bat seem to be born knowing all about reciprocity; those species have quid pro quo in their bones. But human kindness is more subtle than that. Unless we are exposed to the right conditions – unless someone shares their toys with us - then we never learn how much fun sharing our toys can be.

Barragan, Rodolfo Cortes, and Carol S. Dweck. "Rethinking natural altruism: Simple reciprocal interactions trigger children’s benevolence." Proceedings of the National Academy of Sciences 111.48 (2014): 17071-17074.

The Educational Benefits of Purpose

What are the biggest impediments for teachers in the classroom? According to a recent national survey, the most frequently cited problem was “students’ lack of interest in learning.” (Among teachers in high-poverty schools, 76 percent said this was a serious issue.) These kids know what they need to do - they just don’t want to do it.

One solution to this problem is to make classroom activities less tedious. Students might be bored by the periodic table, but get excited about the chemistry of cooking. Statistics is dry; the statistics of baseball is not. In other words, the same student who appears unmotivated when staring at a textbook might be extremely motivated when the material is brought to life by a charismatic teacher.

But this approach has its limitations. For one thing, the interests of students are idiosyncratic; the spin that appeals to one child is tiresome to another. In addition, some academic tasks are inherently difficult, requiring large doses of self-control. It shouldn't be too surprising, then, that 44 percent of middle-school students would rather take out the trash than do their math homework. Not every subject can be gamified. Not everything in life is fun.

So how do we help students cope with these "boring but important" tasks? That question is the subject of a fascinating new paper in the Journal of Personality and Social Psychology by David Yeager, Marlone Henderson, David Paunesku, Gregory Walton, Sidney D'Mello, Brian Spitzer and Angela Duckworth. The researchers began with the observation that, when adolescents are asked about their reasons for doing schoolwork, they often describe motives that are surprisingly selfless, or what the scientists call self-transcendent. If a student wants to become a doctor, she doesn't just want to do it for the money; she probably wants to save lives, too.

While previous research has documented the benefits of self-transcendent motives among employees in unpleasant jobs – hospital orderlies, sanitation workers and telemarketers all perform better when focused on the noble purpose of their work – Yeager et al. wanted to extend this logic to the classroom. It was not an obvious move. "It's easy to say 'cleaning up this trash helps people,'" wrote first author David Yeager in an email. "It's harder to say that learning fractions helps people... It wasn't clear that any kid would say that, or that it would be motivating."

The first study involved 1,364 high-school seniors at ten urban public high schools, scattered across the country. The students were asked to rate, on a five-point scale, whether or not they agreed with a series of statements about their motives for going to college. Some of the motives were self-transcendent ("I want to learn things that will help me make a positive impact on the world"), while others were more self-oriented ("I want to learn more about my interests.")

After the students completed a bevy of self-assessment surveys, it became clear that self-transcendent motives were correlated with a variety of other mental variables, such as self-control and grit. As the scientists note, an important element of self-regulation is the ability to abstract up a level, so that one understands the larger purpose of a trying task. (If you don't want to eat the marshmallow, think about your diet; if you're trying to stay focused on your homework, contemplate your future career goals.) What's more, this boost in self-regulation had real consequences, allowing the scientists to find a strong link between measures of purpose and college enrollment. Among those students with the least self-transcendent purpose, only 30 percent were actively enrolled in a college the following school year. That percentage more than doubled, to 64 percent, among students with the most purpose.

In addition to these survey questions, the scientists gave the students a new behavioral test called the "diligence task." What makes the task so clever is the way it mirrors the real-world temptations of the digital age, as students struggle to balance the demands of homework against the lure of YouTube. In the task, students were given the choice of completing tedious math problems or watching viral videos and playing Tetris. While the students were free to do whatever they preferred, they were also reminded that successfully completing the math tasks could help them stay prepared for their future careers. Not surprisingly, those who reported higher levels of self-transcendent purpose were more diligent, less likely to be tempted by mindless distraction. As the psychologists note, these results contradict conventional stereotypes about the best way to motivate low-income students. "Telling students to focus on how they can make more money if they go to college may not give them the motives they need to actually make it to college graduation," they write. Instead, these students seem to benefit the most from having selfless motives.

This research raises the obvious question: can self-transcendent purpose be taught? In their second study, Yeager et al. conducted an intervention, attempting to instill students with a more meaningful set of motives. They asked 338 ninth graders at a suburban high school in the San Francisco Bay Area to complete a reading and writing exercise during an elective period. Half of the students were assigned to the self-transcendent purpose condition, which was designed to get them to think about their selfless motives for learning. One student wrote about wanting to become a geneticist, so they could "help improve the world by possibly engineering crops to produce more food," while another student wanted to become an environmental engineer "to be able to solve our energy problems."

The remaining students were assigned to a control condition. Instead of thinking about how to make the world a better place, these students were asked to read and write about how high school was different from middle school.

The intervention worked. After three months, those students with lower math and science grade point averages who were exposed to the purpose intervention saw their GPAs go up by a significant 0.2 points. (Higher achieving students also saw a slight boost in GPA, but it wasn't statistically significant.) Although the intervention only lasted for part of a single class period, it nevertheless led to a lasting boost in academic performance.

The last two studies tried to unpack this effect. After priming undergraduates to think about the self-transcendent purpose of their schoolwork, the students were asked to engage in a tedious academic exercise. They were given 100 review questions for an upcoming psychology test and encouraged to learn deeply from the activity, which meant spending plenty of time working through each question. The results were clear: students exposed to a self-transcendent purpose intervention spent nearly twice as long (49 seconds versus 25 seconds) on each review question. “Importantly, this was done in a naturalistic setting,” write the scientists. “That is, [it involved] looking at real world student behavior on an authentic examination review, when students were unaware that they were in a random-assignment experiment.” Not surprisingly, this additional effort led to higher grades on the ensuing exam.

In a final experiment, the scientists demonstrated that a purpose intervention could increase performance on the diligence task, in which students are asked to choose between a tedious math exercise and vapid viral videos. Once again, a sense of purpose proved useful, as those primed to think of selfless reasons for schooling were better at persisting at the math task, even when it was most boring. “We just don’t often ask young people to do things that matter,” wrote David Yeager by email. “We say, ‘Be selfish for now, later when you’re an adult then you can do something important.’ But kids are yearning right now to have meaning in life.”

In the paper, the scientists quote Viktor Frankl, the psychiatrist and pioneer of logotherapy, on the importance of having a meaning in life. (I wrote about Frankl here.) “Ever more people have the means to live, but no meaning to live for,” Frankl wrote, in a critique of modern life. Society excelled at satisfying our physical wants, but it tended to ignore those spiritual needs that couldn’t be measured in a lab or sold at a store.  This was a tragic error, Frankl said, for it led us to misunderstand our most fundamental nature. “A human being…doesn’t care primarily for pleasure, happiness, or for the condition within himself,” Frankl wrote. “The true sign and signature of being human is that being human always points to and is directed towards something other than itself.”

I have a feeling Frankl would have enjoyed this paper. His critics frequently accused him of deliberate ambiguity, of remaining obscure about what “meaning” actually meant. And the critics had a point: there is no pill that can give us purpose, and it’s often unclear what a therapist can do to help a patient discover his or her reason for being. In the absence of empirical evidence – his own life was his best proof - Frankl was forced to rely on aphorisms, such as this one from Nietzsche: “He who has a why to live can bear with almost any how.”

And that’s why I think Frankl would have found these new experiments and interventions so interesting. They are a reminder that meaning matters and that its impact can be measured; an intangible sense of purpose comes with tangible benefits. Again and again, we underestimate ourselves, assuming we are selfish and shallow, driven to succeed by the fruits of success. But this research proves otherwise, showing that teenagers are capable of working for selfless goals. In fact, such goals are what make them work the hardest. Because they have a why, the how takes care of itself.

Yeager, David S., et al. "Boring but Important: A Self-Transcendent Purpose for Learning Fosters Academic Self-Regulation." Journal of Personality and Social Psychology. October 2014

The Virtues of Hunger

My kitchen cupboards are filled with Trader Joe’s snacks that I bought while shopping on an empty stomach. Chocolate edamame. Pumpkin spiced pumpkin seeds. Kale chips. Lentil chips. Veggie puffs. A medley of pretzels. A collection of trail mixes. You don’t have to be Daniel Kahneman to realize that shopping while hungry is a hazardous habit, since everything looks so damned delicious. Because we are in a so-called “hot” emotional state, we end up making impulsive decisions, buying stuff that we’ll eat on the car ride home and then never again.

And it’s not just the grocery store. Dan Ariely and George Loewenstein famously demonstrated that making male subjects sexually aroused – they showed them an assortment of erotic images – sharply increased their willingness to engage in “morally questionable behavior,” such as “encouraging a date to drink to increase the chance that she would have sex with you.” It also made them less interested in using a condom.

So the science seems clear: hot emotional states are dangerous. They make us eat the marshmallow, forgo the condom, take out the subprime loan. When making a decision, it’s always better to be calm, cool and sated.

Or not.

A new paper by Denise de Ridder, Floor Kroese, Marieke Adriaanse and Catharine Evers at Utrecht University concludes that, for a certain kind of difficult strategic decision, it’s actually better to be hungry.  One possible explanation for this effect is that hunger triggers a “hot” emotional state, making us more dependent on the urges of instinct. We are less reasonable and rational, and that’s a good thing.

The Dutch researchers describe three separate experiments, all of which had relatively small sample sizes. The first experiment features the Iowa Gambling Task (IGT), a game in which subjects are given four separate decks of cards. Each card leads to a monetary gain or loss of varying size. The subjects were told to draw from the decks and to make as much money as possible.

But here’s the catch – not all of the decks are created equal. Two of the decks (A and B) are full of high-risk cards. They contain larger gains ($100), but also some very punishing losses (between $150 and $1,250). In contrast, decks C and D are relatively conservative. They have smaller payoffs, but also smaller punishments. The end result is a striking contrast in the total value of the decks: while A and B lead to an average negative return of $250 for every ten drawn cards, C and D lead to an average positive return of $250. The question of the IGT is how long it takes players to figure this out.
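For readers who want to see the deck arithmetic spelled out, here is a minimal sketch in Python. Only the $100 gain on the bad decks and the net values per ten cards (a loss of $250 for decks A and B, a gain of $250 for C and D) come from the description above; the $50 gain on the good decks and the lumping of all punishments into a single per-ten-cards total are simplifying assumptions drawn from the standard version of the task, not details taken from the Dutch paper.

```python
# Simplified payoff structure for the Iowa Gambling Task decks described above.
# The $50 gain on decks C/D and the single loss total per ten cards are
# assumptions (standard-task values); the net figures of -$250 and +$250 per
# ten cards are the ones stated in the text.
decks = {
    "A": {"gain_per_card": 100, "losses_per_10_cards": 1250},  # high-risk deck
    "B": {"gain_per_card": 100, "losses_per_10_cards": 1250},  # high-risk deck
    "C": {"gain_per_card": 50,  "losses_per_10_cards": 250},   # conservative deck
    "D": {"gain_per_card": 50,  "losses_per_10_cards": 250},   # conservative deck
}

for name, deck in decks.items():
    net = deck["gain_per_card"] * 10 - deck["losses_per_10_cards"]
    print(f"Deck {name}: net value per 10 cards = ${net:+d}")
# Decks A and B net -$250 per ten cards, C and D net +$250, which is why the
# whole task boils down to learning to avoid the flashy decks.
```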

The novelty of this study was the introduction of the hunger variable. While all of the subjects were told not to eat or drink anything (except water) from 11 PM until the experiment the following morning, those in the sated condition were offered a nice breakfast before playing the card game.

The results were surprising, as hungry subjects performed significantly better on the IGT. Among the final sixty trials, those with an empty stomach drew approximately 30 percent more cards from the “advantageous decks” than those who’d just eaten. According to the scientists, the advantage of hunger is that it makes us more sensitive to the urges of emotion. As Antoine Bechara, Antonio Damasio and colleagues demonstrated in their initial studies of the IGT, it only took about ten cards before the hands of subjects started getting “nervous” – their palms began to sweat – whenever they reached for the bad decks. (The scientists refer to this as the “pre-hunch” phase.) However, it took about eighty cards before the subjects could explain the nervousness of their hands, and “conceptualize” the differences between the decks. In other words, the feelings generated by the body preceded their conscious decisions. The hand led the mind.

And that’s why hunger might be useful, at least when it comes to the IGT. “We argue that these benefits from being in a hot state result from a greater reliance on emotions that allow for a better recognition of risks that go hand in hand with big rewards,” write de Ridder et al. “This would imply that insofar [as] hot states make people more impulsive, impulsivity means that they act swiftly and without explicit deliberation.”

In a follow-up experiment, the Dutch scientists engaged in a more subtle manipulation of hunger. Instead of not feeding subjects, they randomly divided fifty students into two groups. The first group was asked to evaluate a series of snack foods according to their desire to eat it: “To what extent do you feel like having [snack food] at this moment?” The second group, meanwhile, was asked to evaluate the snacks in terms of their price, or whether they seemed cheap or expensive. Once again, those primed to feel hot emotions – the subjects asked to think about their appetites – performed significantly better on the IGT.

The last study investigated a different sort of decision. Instead of playing cards, subjects were given a series of questions about whether they wanted a small reward right away or a larger reward at a later date. (“Would you prefer $27 today, or $50 in 21 days?”) This is known as a delay-discounting task, and it’s a standard tool for measuring the impulsivity of people. Previous work has shown that hot-emotional states lead to less self-control, which is why I bought chocolate edamame at Trader Joe’s and those aroused undergrads were more willing to have unprotected sex. However, the Dutch psychologists found that those students not given breakfast – they were still hungry – were actually better at choosing long-term profit over immediate gratification. Their hot emotional state made them more patient and reasoned, at least when it came to finding the optimal level of delay.
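A quick calculation shows how steep that particular trade-off is. The sketch below is a worked example rather than anything from the paper: it finds the daily rate of return at which $27 today and $50 in 21 days become equally attractive under simple exponential discounting, so anyone who still takes the $27 is discounting the future even more heavily than that.

```python
# Worked example for the delay-discounting question quoted above.
# Exponential discounting is a textbook simplification, not the analysis
# used by de Ridder et al.
immediate, delayed, days = 27.0, 50.0, 21

# Solve immediate * (1 + r) ** days == delayed for the daily rate r.
daily_rate = (delayed / immediate) ** (1 / days) - 1
print(f"Indifference point: roughly {daily_rate:.1%} per day")  # about 3% per day
```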

This doesn’t mean that we can walk around the world looking at pornography and expect instant wisdom. Nor will a skipped breakfast turn us into Warren Buffett. However, when we are faced with a difficult and overwhelming decision – one in which our feelings know more than we do - then mental states that make us more sensitive to our feelings might lead to better choices. In short, it’s not the simple stuff, like shopping in a grocery store, that benefits from our hottest emotions – it’s the hard stuff. It’s drawing from decks of cards we barely understand, or playing chess, or trying to figure out what we most want from life. That's when you want to be listening to the urges of your body. That’s when the hunger helps.

de Ridder, Denise, et al. "Always Gamble on an Empty Stomach: Hunger Is Associated with Advantageous Decision Making." PloS one 9.10 (2014): e111081.

Learning To Be Alone

By any reasonable standard, human beings are born way too soon, thrust into a world for which we are not ready. Not even close.

The strange timing of our birth reflects the tradeoffs of biology. Humans have a big brain. This big brain comes with obvious advantages. But it also leads to a serious design problem: the female birth canal, which shrank during the shift to bipedalism, is too narrow for such a large skull.

This is known as the obstetrical dilemma. Natural selection solved this dilemma in typically ingenious fashion: it simply had human babies enter the world before they were ready, when the immature central nervous system was still unable to control the body. (As the developmental psychologist David Bjorklund notes, if human infants “were born with the same degree of neurological maturity as our ape relatives, pregnancy would last for 21 months.”) The good news is that such premature births reduce the risk to the mother and child. The bad news is that it means our offspring require constant care for more than a decade, which is roughly twice as long as any other primate.

Such care is grueling; there’s no use pretending otherwise. Hillard Kaplan, an anthropologist at the University of New Mexico, estimates that it takes approximately 13 million calories to raise a child from birth to independence. That’s a lot of food and a lot of diapers.

But childcare is not just about the feeding and shitting and sleeping. In fact, taking care of the physical stuff ends up being the easy part. As every parent knows, what’s much harder is dealing with the emotional stuff, that whirligig of moods, desires and tantrums that define the immature mind. The world fills us with feelings, but kids don’t know how to cope with these feelings. We have to show them how.

In a new paper published in Psychological Science, a team of researchers led by Dylan Gee and Laurel Gabard-Durnam (lead authors) and Nim Tottenham (senior author) outlined the neural circuits underlying this emotional education. Although there is a vast amount of research documenting the importance of the parent-child bond – secure attachments in childhood are associated with everything from high school graduation rates to a lower risk of heart disease as an adult – the wiring behind these differences has remained unclear.

The main experiment involved putting 53 children and teenagers, ranging in age from four to seventeen, into an fMRI scanner. (To help the younger kids tolerate the confined space, the scientists had them participate in a mock session before the experiment. They also secured their heads with a bevy of padded air pillows.) While in the scanner, the children were shown a series of photographs. Some of the pictures were of their mother, while other pictures were of an “ethnicity matched” stranger. The subjects were instructed to press a button whenever they saw a smiling face, regardless of who it was.

When analyzing the fMRI data, the scientists focused on the connection between the right side of the amygdala and the medial prefrontal cortex (mPFC). Both of these are promiscuous brain areas, “lighting up” in all sorts of studies and all kinds of tasks. However, the scientists point out that the right amygdala is generally activated by stress and threats; it’s a warehouse of negative emotion. The mPFC, in contrast, helps to modulate these unfortunate feelings, allowing us to calm ourselves down and keep things in perspective. When a toddler dissolves into a tantrum because she doesn’t want to wear shoes, or go to bed, or eat her broccoli, you can blame her immature frontal lobes, which are still learning how to control her emotions. Kids are mostly id: this is why.

Here’s where things get interesting. For children older than ten, there was no significant difference in right amygdala/mPFC activity when they were flashed pictures of their mother versus a stranger. For younger children, however, the pictures of the mother made a big difference, allowing them to exhibit the same inverse connection between the amygdala and the mPFC that is generally a sign of a more developed mind. The scientists argue that these changes are evidence of “maternal buffering,” as the mere presence of a loving parent can markedly alter the ways in which children deal with their feelings. Furthermore, these shifts in brain activity were influenced by individual differences in the parent-child relationship, so that children with more secure attachments to their mother were more likely to exhibit mature emotional regulation in her presence. As the First Epistle of John puts it, “Perfect love casts out fear.” Put more precisely, perfect love (and what’s more perfect than parental love?) allows kids to modulate the activity in the right amygdala, and thus achieve an emotional maturity that they are not yet capable of on their own.

While Gee et al. provide new clarity on the wiring of this developmental process, scientists have known for decades that the process itself is exceedingly important. Although we tend to think of the human body as a closed-loop system, able to regulate its own homeostatic needs, the intricacies of the parent-child relationship reveal that we’re actually open loops, designed to be influenced by the emotions of others. Children, in fact, are an extreme example of this open-loop system, which is why the absence of parental buffering in the first few years of life can be so crippling. Born helpless, we require an education in everything, and that includes learning to tamp down the shouts of the subterranean brain.

The child psychiatrist Donald Winnicott once observed that the goal of a parent should be to raise a child capable of being alone in their presence. That might seem like a paradox, but Winnicott was pointing out that one of the greatest gifts of love is the ability to take it for granted, to trust that it is always there, even when it goes unacknowledged. In Winnicott’s view, the process of maturity is the process of internalizing our attachments, so that the child can “forgo the actual presence of a mother or mother-figure.”

This study is a first step to understanding how this internalization happens. It shows us how the right kind of love marks the brain, how being attached to someone else endows children with a newfound maturity, a sudden strength that helps them handle a world full of scary things.

Gee, Dylan G., et al. "Maternal Buffering of Human Amygdala-Prefrontal Circuitry During Childhood but Not During Adolescence." Psychological Science (2014): 0956797614550878.

 

 

The Spell of Art

In the preface to his 2000 memoir, A Heartbreaking Work of Staggering Genius, Dave Eggers makes the reader a generous offer. If we are bothered by the dark truth of the work - it is a book set in motion by the near simultaneous death of his parents - then we are free to pretend it's not true at all. In fact, Eggers will even help us out:

"If you are bothered by the idea of this being real, you are invited to do what the author should have done, and what authors and readers have been doing since the beginning of time: PRETEND IT’S FICTION. As a matter of fact, the author would like to make an offer...If you send in your copy of this book, in hardcover or paperback, he will send you, in exchange, a 3.5” floppy disk, on which will be a complete digital manuscript of this work, albeit with all names and locations changed, in such a way that the only people who will know who is who are those whose lives have been included, though thinly disguised. Voila! Fiction!"

It's a literary joke rooted in an old idea. The reason we believe that fiction is easier to take than the truth is that fiction requires, as Coleridge famously put it, a willing suspension of disbelief. This means, of course, that we can always suspend our suspension, return to reality, break the spell. Fiction is safer because it gives us an exit - all we have to do is remember that it's fiction.

Such intuitions about the emotional impotence of fiction (and the greater impact of The Truth) underpin a vast amount of culture. It's why there's something extra serious about movies that begin with the words "based on a true story," and why fantasy novels and comic books are considered such escapist fare. It's why horror movies need camp - we have to be reminded that it's fake, or else we'd be too scared - and why we take pulp fiction to the beach. (The truth is less relaxing.) Even my three-year-old daughter gets it: when she's frightened by a My Little Pony monster, she tells herself that it's all pretend. Just a cartoon. The artifice of the art is her comfort.

It's an intuition that makes sense. It sounds right. It feels right.

But it's wrong.

That, at least, is the conclusion of a new study published in the Journal of Consumer Research by Jane Ebert at Brandeis University and Tom Meyvis at NYU that tested the emotional impact of fiction versus non-fiction. In one experiment, the scientists gave several dozen undergraduates a tragic story to read about a young girl who died from meningitis. Some of the subjects were randomly assigned to the "real" condition - they were told the story was true - while others were told it was a work of fiction. Then, they were asked to rate, on a nine-point scale, the extent to which the story made them feel sad and distressed. Although people expected the true story to have a greater emotional impact, that wasn't what happened. Instead, those assigned to the fictional condition - they were told the death was pretend - actually felt slightly more negative emotion. The difference wasn't statistically significant (a mean of 5.79 versus 6.18), but the aesthetic expectations of the subjects were still incorrect. In short, we are much better at suspending our disbelief than we believe.

Ebert and Meyvis confirmed this in a follow-up study. Two hundred and seventy undergraduates were shown the last eight minutes of The Champ, a "movie about an ex-boxer who fights one last fight to give his young son a better future." (Spoiler alert: the boxer dies, and his son weeps over his body.) Once again, they were randomly assigned to a fictional story condition - "none of the events depicted in the movie actually happened" - or a true story condition, in which they were told that the movie was a dramatized version of a real life. As expected, there was no significant difference between the emotional reactions of those who thought the movie was pretend and those who thought it was true. However, there was one condition in which believing The Champ was fiction made a difference: when the viewing of the movie was briefly interrupted - the subjects were told, in advance, that the movie needed to be downloaded from a remote server - those who believed it was all make-believe felt significantly less sad. (Breaks didn't affect the experience of those told it was true.) According to the scientists, the brief interruptions shattered the illusion of the art, giving viewers a chance to remind themselves that it was only art.

Of course, we often watch emotional shows filled with breaks - they're called commercials. Given the data, it's interesting to think about the toll of these breaks. One possibility is that watching television shows without commercials - as happens on Netflix or HBO - provides viewers with a far more affecting experience. But the researchers speculate that the reality of viewing is a bit more complicated. “While we don't test this in our research, we speculate that the effects of commercials will depend on what consumers do during them,” wrote Professor Ebert in an email. “If viewers are distracted by the commercials, then they may not be able to incorporate the real/fictional information while watching the movie - i.e., they won't be able to remind themselves it is only fictional. However, if viewers pay little attention to the ads they may be able to incorporate this information.” If true, this would imply that the problem isn’t commercials per se - the problem is bad commercials, since they’re the ones that interrupt the emotional spell. (I assume the same goes for DVR viewing, which requires us to fast-forward through several minutes of blurry ads.)

The larger lesson is that people are not very good at predicting their emotional reactions to aesthetic experiences. Despite a lifetime of practice, we still falsely assume that fiction won't touch us deep, that we'll be less moved by whatever isn't real. But we're wrong. And so we're gripped by Tolstoy and cry to Nicholas Sparks; we're wrecked by Game of Thrones and scared by Spider-Man. We underestimate the power of art, but the art doesn't care - it will make us feel anyway.

Ebert, Jane, and Tom Meyvis. "Reading Fictional Stories and Winning Delayed Prizes: The Surprising Emotional Impact of Distant Events.” Journal of Consumer Research. October 2014.

Are You Paying Attention?

Thank you for participating in my psychology experiment on decision-making. Please read the instructions below:

Most modern theories of decision-making recognize the fact that decisions do not take place in a vacuum. Individual preferences and knowledge, along with situational variables, can greatly impact the decision process. In order to facilitate our research on decision-making we are interested in knowing certain factors about you, the decision maker. Specifically, we are interested in whether you actually take the time to read the directions; if not, then some of our manipulations that rely on changes in the instructions will be ineffective. So, in order to demonstrate that you have read the instructions, please ignore the sports items below. Instead, simply continue reading after the options. Thank you very much.

Which of these activities do you engage in regularly? (write down all that apply)

1)    Basketball

2)    Soccer

3)    Running

4)    Hockey

5)    Football

6)    Swimming

7)    Tennis

Did you answer the question? Then you failed the test.

The procedure above is known as an Instructional Manipulation Check, or IMC. It was first outlined in a 2009 paper, published in the Journal of Experimental Social Psychology, by the psychologists Daniel Oppenheimer, Tom Meyvis and Nicolas Davidenko. While scientists have always excluded those people who blatantly violate procedure – these are the outliers whose responses are incoherent, or fall many standard deviations from the mean – it’s been much harder to identify subjects whose negligence is less overt. The IMC is designed to filter these people out.

The first thing to note about the IMC is that a lot of subjects fail. In a variety of different contexts, Oppenheimer et al. found that anywhere from 14 to 46 percent of participants taking a survey on a computer did not read the instructions carefully, if at all.

Think, for a moment, about what this means. These subjects are almost certainly getting compensated for their participation, paid in money or course credit. And yet, up to nearly half of them are skimming the instructions, skipping straight ahead to the question they’re not supposed to answer.

This lack of diligence can be a serious scientific problem, introducing a large amount of noise to surveys conducted on screens. Consider what happened when the psychologists tried to replicate a classic experiment in the decision-making literature. The study, done by Richard Thaler in 1985, goes like this:

You are on the beach on a hot day. For the last hour you have been thinking about how much you would enjoy an ice-cold can of soda. Your companion needs to make a phone call and offers to bring back a soda from the only nearby place where drinks are sold, which happens to be a [run-down grocery store] [fancy resort]. Your companion asks how much you are willing to pay for the soda and will only buy it if it is below the price you state. How much are you willing to pay?

The results of Thaler’s original experiment showed that people were willing to pay substantially more for a drink from a fancy resort ($2.65) than from a shabby grocery store ($1.50), even though their experience of the drink on the beach would be identical.  It’s not a rational response, but then we’re not rational creatures. (Thaler explained this result in terms of “transaction utility,” or the tendency of people to make consumption decisions based on “perceived merits of the deal,” and not some absolute measure of value.)

As Oppenheimer et al. point out, the IMC is particularly relevant in experiments like this, since the manipulation involves a small change in the text-heavy instructions (i.e., getting a drink from a resort or a grocery store). When Oppenheimer et al. first attempted to replicate the survey, they couldn’t do it; there was no price difference between the two groups. However, after the scientists restricted the data set so that only those participants who passed the IMC were included, they were able to detect a large shift in preferences: people really were willing to pay significantly more for a drink from a resort. The replication of Thaler’s classic paper isn’t newsworthy, of course; it’s already been cited more than 3,800 times. What is interesting, however, is that the online replication of an offline experiment required weeding out less attentive subjects.
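To make the filtering step concrete, here is a minimal sketch in Python of the kind of analysis described above: drop the subjects who failed the attention check, then compare willingness-to-pay across the two framings. The file and column names are hypothetical, invented for illustration – this is not the authors' actual code.

```python
# Minimal sketch (hypothetical file and column names): exclude subjects who failed the
# attention check, then compare willingness-to-pay across the resort and grocery framings.
import pandas as pd
from scipy import stats

df = pd.read_csv("beach_survey.csv")              # one row per respondent (hypothetical)

attentive = df[df["passed_imc"]]                  # keep only subjects who passed the IMC

resort = attentive.loc[attentive["condition"] == "resort", "wtp"]
grocery = attentive.loc[attentive["condition"] == "grocery", "wtp"]

t, p = stats.ttest_ind(resort, grocery, equal_var=False)
print(f"resort mean ${resort.mean():.2f} vs. grocery mean ${grocery.mean():.2f} "
      f"(t = {t:.2f}, p = {p:.3f})")
```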

The results of the IMC give us a glimpse into the struggles of modern social science: it’s not easy finding subjects who care, or who can stifle their boredom while completing a survey. If nothing else, it’s a reminder of our natural inattentiveness, how the mind is tilted towards distraction. As such, it’s a cautionary tale for all those scientists, pollsters and market researchers who assume people are paying careful attention to their questions. As Jon Krosnick first pointed out in 1991, most surveys require cognitive effort. And since we’re cognitive misers – always searching for the easy way out – we tend to engage in satisficing, or going with the first acceptable alternative, even when it’s incorrect.

This is a methodological limitation that’s becoming more relevant. In recent years, scientists have increasingly turned to online subjects to increase their n, recruiting participants on Amazon’s Mechanical Turk and related sites. This approach comes with real upside: for one thing, it can get psychology beyond its reliance on Western, Educated subjects from Industrialized, Rich and Democratic countries. (This is known as the W.E.I.R.D. problem. It’s a problem because the vast majority of psychological research is conducted on a small, and highly unusual, segment of the human population.)

The failure rates of the IMC, however, are a reminder that this online approach comes with a potential downside. In a recent paper, a team of psychologists led by Joseph Goodman at Washington University tested the IMC on 207 online subjects recruited on Mechanical Turk. The scientists then compared their performance to 131 university students, who had been given the IMC on a computer or on paper. While only 66.2 percent of Mechanical Turk subjects passed the IMC, more than 90 percent of students taking the test on paper did. (That's slightly higher than the percentage of students who passed on computers.) Such results lead Goodman et al. to recommend that “researchers use screening procedures to measure participants’ attention levels,” especially when conducting lengthy or complicated online surveys.

We are a distractible species. It’s possible we are even more distracted on screens, and thus less likely to carefully read the instructions. And that’s why the IMC is so necessary: unless we filter out the least attentive among us, then we’ll end up collecting data limited by their noise. Such carelessness, of course, is a fundamental part of human nature. We just don’t want it to be the subject of every study.

Oppenheimer, Daniel M., Tom Meyvis, and Nicolas Davidenko. "Instructional manipulation checks: Detecting satisficing to increase statistical power." Journal of Experimental Social Psychology 45.4 (2009): 867-872.

Goodman, Joseph K., Cynthia E. Cryder, and Amar Cheema. "Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples." Journal of Behavioral Decision Making 26.3 (2013): 213-224.

 

The Draw-A-Person Test

Imagine a world where intelligence is measured like this:

A child sits down at a desk. She is given a piece of paper and a crayon. Then, she is asked to draw a picture of a boy or girl. “Do the best that you can,” she is told. “Make sure that you draw all of him or her.” If the child hesitates, or asks for help, she is gently encouraged: “You draw it all on your own, and I’ll watch you. Draw the picture any way you like, just do the best picture you can.”

When the child is done drawing, the picture is scored. It’s a simple process, with little ambiguity. One point is awarded for the “presence and correct quantity” of various body parts, such as head, eyes, mouth, ears, arms and feet. (Clothing gets another point.) The prettiness of the picture is irrelevant. Here are six drawings from four-year-olds:

The Draw-A-Person test was originally developed by Florence Goodenough, a psychologist at the University of Minnesota. Based on her work with Lewis Terman – she helped revise and validate the Stanford-Binet I.Q. test – Goodenough became interested in coming up with a new measure of intelligence that could be given to younger children. And so, in 1926, she published a short book called The Measurement of Intelligence by Drawings which described the Draw-A-Person test.* Although the test only takes a few minutes, Goodenough argued that it provided a window into the child mind, and that “the nature and content of children’s drawings are dependent primarily upon intellectual development.” In other words, those scrawls and scribbles were not meaningless marks. Rather, they reflected something fundamental about the ways in which we made sense of the world. The act of expression was an act of intelligence, and should be treated as such.

In her book, Goodenough described the obvious benefits of her intelligence test. It was fast, cheap and fun. What’s more, it seemed to be measuring something real, as children tended to generate a consistent set of scores over time. (In other words, the test was reliable.) And yet, despite these advantages, the Draw-A-Person test largely fell out of favor by the 1970s. One explanation is that it was lumped in with other “projective” techniques, such as the Rorschach Test, that were repeatedly shown to be inaccurate, too tangled up with psychoanalytic speculation.

However, a new study by Rosalind Arden and colleagues at King’s College London suggests that Goodenough’s test still has its uses, and that it manages to quantify something important about the developing mind in less than ten minutes. “Goodenough’s genius was to take a common childhood product and see its potential as an indicator of cognitive ability,” they write. “Our data show that the capacity to realize on paper the salient features of a person, in a schema, is an intelligent behavior at age 4. Performance of this drawing task relies on various cognitive, motoric, perceptual, attentional, and motivational capacities.”

How’d the scientists show this? By giving the test to 7,752 pairs of British twins, the scientists were able to compare the drawing performance of identical twins, who share all of their genetic material, with that of non-identical twins, who only share about half. This allowed them to tease out the relative importance of genetics in determining scores on the Draw-A-Person test. (All of the twin pairs were raised in the same household, at least until age 4, so they presumably had a similar home environment.) The results were interesting, as the drawings of identical twins were much more similar than those of non-identical twins. There is no drawing gene, of course, but this result does suggest that the sketches of little kids are shaped by their genetic inheritance. In fact, the results from a single drawing were as heritable among the twin pairs as their scores on more traditional intelligence tests.
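The logic of the comparison fits in a few lines. Falconer's formula – the classic back-of-the-envelope estimate, which doubles the gap between identical-twin and fraternal-twin similarity – captures the idea, though the paper itself fits more formal twin models. The correlations below are invented purely for illustration.

```python
# Falconer's classic formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the
# within-pair correlations for identical and fraternal twins. The correlations below
# are invented for illustration; the paper fits more formal twin models.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Double the gap between identical-twin and fraternal-twin similarity."""
    return 2 * (r_mz - r_dz)

r_mz, r_dz = 0.60, 0.45   # hypothetical within-pair correlations of drawing scores
print(f"estimated heritability: {falconer_heritability(r_mz, r_dz):.2f}")   # 0.30
```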

Furthermore, because the researchers had scores from these intelligence tests they were able to compare performance on the Draw-A-Person test with a subject’s g factor, or general intelligence. The correlations were statistically significant but relatively modest, which is in line with previous studies. This means that one shouldn’t try to predict IQ scores based on the scribbles of a toddler; the two variables are related, but in weak ways.

However, a more interesting result emerged over time, as the scientists looked at the relationship between drawing scores at the age of 4 and measures of intelligence a decade later, when the twins were 14. According to the data, the children’s pictures were just as predictive of their intelligence scores at the age of 14 as various intelligence tests given at the age of 4. "This study does not explain artistic talent,” write the scientists. “But our results do show that whatever conflicting theories adults have about the value of verisimilitude in early figure drawing, children who express it to a greater extent are somewhat brighter than those who do not." 

Such studies trigger a predictable reaction in parents. I've got a three-year-old daughter - I couldn't help but inspect her latest drawings, counting up the body parts. (There's even an app that will help you make an assessment.) But it's important to note that this is all nonsense; the science does not support my anxieties. "I too fossicked around in old drawers to look for body-parts among the fridge-magnet scrawls of my former 4-year old," Dr. Arden wrote in an email. "I realised quickly the key question was not 'is she bright?', but 'did we have fun? Did I treasure that wonderful, lightspeed flashing childhood properly?'" In a recent article put out by King's College, Arden expands on this idea, observing that while her "findings are interesting, it does not mean that parents should worry if their child draws badly. Drawing ability does not determine intelligence, there are countless factors, both genetic and environmental, which affect intelligence in later life.”

I find this study most interesting as a history-of-science counterfactual, a reminder that there are countless ways to measure human intelligence, whatever that is. We've settled on a particular concept of intelligence defined by a short list of measurable mental talents. (Modern IQ tests tend to focus on abilities such as mental control, processing speed and quantitative reasoning.) But Goodenough’s tool is proof that the mystery of smarts has no single solution. The IQ test could have been a drawing test.

This sounds like a silly conjecture. But it shouldn’t. As the scientists note, figurative art is an ancient skill. Before there were written alphabets, or counting systems, humans were drawing on the walls of caves. (There’s evidence that children participated in these rituals as well, dragging their tiny fingers through the wet clay and soft cave walls.) "This long history endows the drawing test with ecological validity and relevance to an extent that is unusual in psychometrics," write the scientists. After all, the Draw-A-Person test measures one of the most uniquely human talents there is: the ability to express the mind on the page, to re-describe the world until life becomes art, or at least a crayon stick figure.

*Goodenough originally called it the Draw-A-Man test, but later realized that the gendered instruction put young girls at a disadvantage.

Arden, Rosalind, et al. "Genes Influence Young Children’s Human Figure Drawings and Their Association With Intelligence a Decade Later." Psychological Science (2014)

The Tragedy of Leaded Gas

In December 1973, the EPA issued new regulations governing the use of lead in gasoline. These rules, authorized as part of the Clean Air Act and signed into law by President Nixon, were subject to years of political and legal wrangling. Automobile manufacturers insisted the regulations would damage car engines; oil companies warned about a spike in gasoline prices; politicians worried about the negative economic impact. In 1975, a consortium of lead producers led by the Ethyl Corporation and DuPont sued the EPA in an attempt to stop the regulations from taking effect. They argued that “lead is naturally present in the environment” and that the health impact of atmospheric lead remained unclear.

The EPA won the lawsuit. In a March 1976 opinion, the U.S. Court of Appeals for the District of Columbia Circuit established the so-called precautionary principle, noting that the potential for harm – even if it has not been proven as fact – still leaves society with an obligation to act. “Man’s ability to alter his environment,” wrote the judges, “has developed far more rapidly than his ability to foresee with certainty the effects of his alterations.” And so the phaseout of leaded gasoline took hold: by 1990, the amount of lead in gasoline had been reduced by 99 percent.*

This federal regulation is one of the most important achievements of the American government in the post WWII era. That it’s a largely unanticipated achievement only makes it more remarkable. According to the latest data, the removal of lead from gasoline is not simply a story of clean air and blue skies. Rather, it has become a tale of sweeping social impact, a case-study in how the removal of a single environmental toxin can influence everything from IQ scores to teenage pregnancy to rates of violent crime.

For the last several years, Jessica Wolpaw Reyes, an economist at Amherst College, has been studying the surprising impact of this environmental success. Her studies take advantage of a natural experiment: for a variety of “mostly random” reasons, including the distribution network of petroleum pipelines, the number of pumps available at gas stations and the local assortment of cars, the phaseout of leaded gasoline didn’t happen at a uniform rate across the country. Rather, different states showed large variation in their consumption of leaded gasoline well into the 1980s. If lead poisoning was largely responsible for the spike in criminal behavior – rates of violent crime in America quadrupled between 1960 and 1991 - then the removal of lead should predict the pace of its subsequent decline. (In many American cities, crime has returned to pre-1965 levels.) In other words, the first states to transition fully to unleaded gasoline should also be the first to experience the benefits.
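For readers who like to see the machinery, here is a minimal sketch of that natural-experiment logic: regress state-level crime rates on lagged lead exposure, with state and year fixed effects absorbing stable differences between states and nationwide trends. It illustrates the design only – it is not Reyes' actual specification, and the file and column names are hypothetical.

```python
# Minimal sketch of the natural-experiment design (not Reyes' actual specification):
# crime today regressed on childhood lead exposure roughly two decades earlier, with
# state and year fixed effects. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_panel.csv")   # one row per state per year (hypothetical)

model = smf.ols(
    "violent_crime_rate ~ lead_exposure_lag20 + C(state) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

# A positive coefficient would mean more childhood lead, more violent crime ~20 years later.
print(model.params["lead_exposure_lag20"])
```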

That’s exactly what Reyes found. In a 2007 study, Reyes concluded that “the phase-out of lead from gasoline was responsible for approximately a 56 percent decline in violent crime” in the 1990s. What’s more, Reyes predicted that the Clean Air Act would continue to generate massive societal benefits in the future, “up to a 70 percent drop in violent crime by the year 2020.” And so a law designed to get rid of smog ended up getting rid of crime. It’s not the prison-industrial complex that keeps us safe. It’s the EPA.

As Reyes herself noted, these correlations raise far more questions than they answer. She concluded her 2007 paper, for instance, by noting that if the causal relationship between lead and crime were real, and not a statistical accident, then the rate of lead removal should also be linked to other behavioral problems, including substance abuse, teenage pregnancy, and childhood aggression. Violent crime, after all, does not exist in a vacuum.

In an important new working paper, Reyes has expanded on her previous research, showing that exposure to lead in early childhood has far-reaching negative effects. By employing data on more than eleven thousand children from the National Longitudinal Survey of Youth (NLSY), she has revealed the relationship between levels of lead in the blood and impulsive behavior in a number of domains. Consider the steep decline in teenage pregnancy in the 1990s, which has proved difficult to explain. According to Reyes, changes in lead levels caused by the Clean Air Act have played a very significant role:

“To be specific, we can consider the change in probability associated with a change in blood lead from 15 µg/dl to 5 µg/dl, a change that approximates the population-wide reduction that resulted from the phaseout of lead from gasoline. This calculation yields a predicted 12 percentage point decrease in the likelihood of pregnancy by age 17, and a 24 percentage point decrease in the likelihood of pregnancy by age 19 (from a 40% chance to a 16% chance). This is undoubtedly large: the lead decrease reduces the likelihood of teen pregnancy by more than half.”

Similar patterns held for aggressive behavior and criminal behavior among teenagers. In both cases, the rise and fall of these social problems appears to be closely correlated with the rise and fall of leaded gasoline. In short, says Reyes, exposure to lead “triggers an unfolding series of adverse behavioral outcomes.” It makes it harder for children to resist their most risky impulses, whether having unprotected sex or getting into a violent fight. (Other research shows that lead is closely linked to lower IQ scores: the typical increase in lead levels caused by leaded gasoline decreases IQ scores, on average, by roughly six points.) Placed in this context, the correlation with crime rates is no longer so surprising. Rather, it’s the natural outgrowth of a poisoned generation of children, unable to fully control themselves.

There’s one last interesting conclusion in Reyes’ new study. Because the NLSY survey contained information about parental income and education, she was able to see how leaded gasoline impacted kids across the socioeconomic spectrum. While most environmental toxins disproportionately harm poor families – they can’t afford to live in less polluted places – leaded gasoline was, in the words of Dr. Herbert Needleman, an “equal opportunity pollutant…not limited to poor African-American children.” In fact, as Reyes points out, atmospheric lead was one of the few adverse environmental influences that wealthier families could not escape, as “it was in the very air children breathed.” As a result, Reyes’ analysis shows that the children of higher-income parents were, on average, more harmed by leaded gasoline, showing a steeper drop-off across a range of negative behavioral outcomes. “In a way, the advantaged children had more to lose,” Reyes writes. “Consequently, gasoline lead may have been an equalizer of sorts.”

There are, of course, inherent limitations to these sorts of econometric studies. There might be hidden confounds, or systematic differences between generations of children that are unaccounted for by the statistical model. As Jim Manzi has pointed out, the variation in the state-by-state adoption rates of unleaded gasoline might not be quite as random as it seems, but instead be linked to subtle “differences in political economy that in turn will affect changes in crime rates.” Society is more complicated than our statistics.

But it’s important to note that the link between lead and societal problems is not merely a statistical story. Rather, it is rooted in decades of neurological evidence, which tell the same causal tale at a cellular level. Lead has long been recognized as a neurotoxin, interfering with the release of transmitters in the brain. (The chemical seems to have a particular affinity for the NMDA receptor, a pathway essential for learning and memory.) Other studies have shown that high levels of lead trigger apoptosis, a fancy word for the mass suicide of brain cells. And then there’s the Cincinnati Lead Study, which has been tracking 376 children born between 1979 and 1984 in the poorer parts of the city. While the study has shown a strong link between lead exposure and violent crime – for every 5 µg/dl increase in blood levels at the age of six, the risk of arrest for a violent crime as a young adult increases by nearly 50 percent – it has also investigated the impact of this exposure to lead on the brain. In a 2008 paper published in PLOS Medicine, a team of researchers led by Kim Cecil used MRI scans to measure the brain volume of enrolled subjects who are now between the ages of 19 and 24. The scientists found a clear link between lead levels in early childhood and the loss of brain volume in adulthood. Most telling was where the loss occurred, as the scientists found the greatest damage in the prefrontal cortex, a region closely associated with impulse control, emotional regulation and goal planning. (The correlations were strongest among male subjects, which might explain why men with lead exposure are more prone to antisocial behavior.)

At the end of her new working paper, Reyes makes an argument for “strengthening the threads” between disparate disciplines, closing the explanatory gap between policy-makers, public health professionals, environmentalists and social scientists. As she notes, it’s becoming increasingly clear that the boundaries of these fields overlap, and that any complete explanation of a complex social phenomenon (say, the fall in crime rates) must also concern itself with leaded gasoline, the prefrontal cortex and economic inequality. “The foregoing results suggest that lead – and other environmental toxicants that impair behavior – may be missing links in social scientists’ explanations of social behavior,” Reyes writes. “Social problems may be, to some degree, rooted in environmental problems.”

*Despite the legal decision, the lead industry continued to fight the implementation of the EPA regulations. As Gerald Markowitz and David Rosner argue in Lead Wars, the main impetus for the removal of lead from gasoline was not the new rules themselves but rather the introduction of catalytic converters, which were installed to combat sulfur emissions. Because lead damaged the platinum catalyst in the converter, General Motors and other car manufacturers were eventually forced to call for the end of leaded gasoline. 

Via: Marginal Revolution

Reyes, Jessica Wolpaw. "Lead exposure and behavior: Effects on antisocial and risky behavior among children and adolescents." NBER Working Paper, August 2014

Reyes, Jessica Wolpaw. "Environmental policy as social policy? The impact of childhood lead exposure on crime." The BE Journal of Economic Analysis & Policy 7.1 (2007).

Markowitz, Gerald, and David Rosner. Lead Wars: The Politics of Science and the Fate of America's Children. University of California Press, 2013. pp. 77-80.

Communism, Inequality, Dishonesty

Dan Ariely has been trying, for years, to find evidence that different cultures give rise to different levels of dishonesty. It's an attractive hypothesis – “It seems like it should be true,” Ariely told me - and would add to the growing literature on the cultural influences of human nature. No man is an island, etc.

Unfortunately, Ariely and his collaborators have been unable to find any solid evidence that such differences in dishonesty exist. He's run experiments in the United States, Italy, England, Canada, Turkey, China, Portugal, South Africa and Kenya, but every culture looks basically the same. Bullshit appears to be a behavioral constant.

Until now.

A new study by Ariely, Ximena Garcia-Rada, and Heather Mann at the Duke University Center for Advanced Hindsight and Lars Hornuf at the University of Munich has found a significant difference in levels of dishonesty among German citizens. But here’s the catch – these differences exist within the sample, between people with East German and West German roots.

The experiment went like this. A subject was given a standard six-sided die and asked to throw it forty times. Before the throwing began, he or she was told to pick one side of the die (top or bottom) to focus on. After each throw, the subject wrote down the score from their chosen side. Reporting higher scores made them more likely to get a bigger monetary payout at the end of the experiment.

What does this have to do with lying? Because the subjects never told the scientist which side of the die they selected, they could cheat by writing down the higher number, switching between the top and bottom of the die depending on the roll. For instance, if they rolled a one, they could pretend they had selected the bottom side and report a six instead.

Not surprisingly, people took advantage of the wiggle-room, reporting numbers that were higher than expected given the laws of chance. What was a bit more surprising, at least given Ariely’s history of null results, was that East Germans were significantly more dishonest. While those with roots in the West reported high rolls (4, 5 or 6) on 55 percent of their throws, those from the East reported high rolls 60 percent of the time. “Since the scale of possible cheating ranges from 50 percent high rolls to 100 percent high rolls, cheating by West Germans corresponds to 10 percent and cheating by East Germans to 20 percent of what had been feasible,” write the scientists. “Thus, East Germans cheated twice as much as West Germans overall.”
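The "twice as much" arithmetic is easy to check with a toy simulation. Honest reporting should produce high rolls about half the time; a subject who always reports the better side of the die would hit 100 percent. The sketch below, which assumes an invented "cheat on 20 percent of throws" strategy, is only meant to make the feasible-cheating calculation concrete – it is not the authors' model of how subjects behaved.

```python
# Toy simulation of the die-reporting task (illustrative assumptions, not the study's model).
import random

def run_subject(n_throws=40, cheat_prob=0.0):
    """Fraction of throws reported as 'high' (4, 5 or 6). With probability cheat_prob,
    the subject reports whichever side of the die is higher; otherwise they report the
    side they committed to. Opposite faces of a die sum to 7."""
    high = 0
    for _ in range(n_throws):
        top = random.randint(1, 6)
        bottom = 7 - top
        committed = random.choice([top, bottom])
        reported = max(top, bottom) if random.random() < cheat_prob else committed
        high += reported >= 4
    return high / n_throws

def share_of_feasible_cheating(high_rate):
    """Honest reporting yields ~50% high rolls; always taking the better side yields 100%.
    Express the observed inflation as a share of that feasible range."""
    return (high_rate - 0.5) / 0.5

random.seed(1)
honest = sum(run_subject() for _ in range(2000)) / 2000
partial = sum(run_subject(cheat_prob=0.2) for _ in range(2000)) / 2000
print(f"honest high-roll rate: {honest:.2f}")                                    # ~0.50
print(f"cheat-on-20%-of-throws rate: {partial:.2f}")                             # ~0.60
print(f"share of feasible cheating: {share_of_feasible_cheating(partial):.0%}")  # ~20%
```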

There are a few possible explanations here. The first is that the communist experience of East Germans undermined their sense of honesty. As the scientists note, life in East Germany was defined by layers of deceit. “In many instances, socialism pressured or forced people to work around official laws,” they write. And then there was the Stasi intelligence bureaucracy, which spied on more than a third of all East German citizens. “Unlike in democratic societies, freedom of speech did not represent a virtue in socialist regimes,” write Ariely et al. “It was therefore often necessary to misrepresent your thoughts to avoid repressions from the regime.” And so lying became an East German habit, a means of survival, a way of coping with the scarcity and repression. This helps explain why older East Germans – they spent more time under the communist regime – were also more likely to cheat. “Socialistic regimes in general are corrupt, but I don’t think that has to be the case,” Ariely told me. “Personally, I think that in a small socialist society, like a kibbutz, socialism could prosper without corruption.”

But there’s another possible explanation, which is less about the ideological struggle of the Cold War and more about the particular politics of Germany. According to this account, the primary cause of East German dishonesty is not the crooked influence of socialism but rather the hazards of social comparison. East Germans aren’t more dishonest because of their communist experience – they’re more dishonest because of their post-communist existence.

A little history might be helpful. In the run-up to unification, German Chancellor Helmut Kohl famously declared that the five states of Eastern Germany would quickly become “blooming landscapes” under the capitalist system. That didn’t happen. Instead, East Germany was defined by a surge of bankruptcies, chronic unemployment and mass migration. While the situation has certainly improved in recent years – the unemployment rate is “only” a third higher in the East – German income still shows a sharp geographic split, with East Germans making 30 percent less money on average. “If you were born in the East, unification came with lots of promises,” Ariely says. “These promises did not come to full fruition. And I think if you’re an East German then you’re reminded every day of these broken promises…Even generations later there’s still a financial gap.”

Such resentments have real consequences. Previous research has shown that exposing people to abundant wealth, such as a large pile of cash, leads to higher levels of cheating. The same pattern exists when people feel underpaid and when they believe that they’ve been treated unfairly. In short, there appears to be something contagious about ethical lapses. In an unjust world, anything goes; since nothing can make it right, we might as well do wrong.

While both explanations might contribute to the observed result, it’s worth noting that these explanations come with contradictory implications. If communism itself is the problem, then the admirable goal of social equality is inherently flawed, since it’s bound up with increased levels of dishonesty. “To ensure that everyone gets the same thing, you need to give some people less than they deserve, or they think they deserve,” Ariely says. “And when people feel life has treated them unfairly, maybe they feel more okay with cheating and lying.”

However, if the main cause of East German dishonesty is social comparison – those feelings of inferiority generated by being a poor person in a rich country – then the problem isn’t the political quest for equality: it’s current levels of inequality in wealthy capitalist societies. (Remember that Chinese citizens did not show higher levels of dishonesty, which suggests that communism is not solely responsible for the effect.) “It’s getting to the point where there are very few places where the rich and poor really interact,” Ariely says, in reference to the United States. “The contrast is getting more obvious, and that’s a painful daily reminder if you’re not well off.” These reminders seem to make us less honest, or at least more willing to cheat.

So there is no obvious cure. The noble ethos of Marx – “From each according to his ability, to each according to his need” – seems just as problematic as the unequal outcomes of modern capitalism, in which some mixture of ability and luck determine all. Every political system has flaws that make us dishonest, which is another way of saying that maybe the problem isn’t the system at all.

Ariely, Dan, et al. "The (True) Legacy of Two Really Existing Economic Systems." (2014).

The Purpose Driven Life

Viktor Frankl was trained as a psychiatrist in Vienna in the early 1930s, during the peak of Freud’s influence. He internalized the great man’s theories, writing at one point that “all spiritual creations turn out to be mere sublimations of the libido.” The human mind, powered by its id engine, wanted primal things. Mostly, it just wanted sex.

Unfortunately, Frankl didn’t find this therapeutic framework very useful. While working as a doctor in the so-called “suicide pavilion” at the Steinhof hospital – he treated more than 1200 at-risk women over four years - Frankl began to question his training. The pleasure principle, he came to believe, was not the main motive of existence; the despair of these women was about more than a thwarted id.

So what were these women missing? Why were they suicidal? Frankl’s simple answer was that their depression was caused by a lack of meaning. The noun is deliberately vague, for there is no universal fix; every person’s meaning will be different. For some people, it was another person to care for, or a lasting relationship. For others, it was an artistic skill, or a religious belief, or an unwritten novel. But the point was that meaning was at the center of things, for “life can be pulled by goals as surely as it can be pushed by drives.” What we craved wasn’t happiness for its own sake, Frankl said, but something to be happy about.

And so, inspired by this insight, Frankl began developing his own school of psychotherapy, which he called logotherapy. (Logos is Greek for meaning; therapeuo means “to heal or make whole.” Logotherapy, then, literally translates as “healing through meaning.”)  As a clinician, Frankl’s goal was not the elimination of pain or worry. Rather, it was showing patients how to locate a sense of purpose in their lives. As Nietzsche put it, “He who has a why to live can bear with almost any how.” Frankl wanted to help people find their why.

Logotherapy now survives primarily as a work of literature, closely associated with Frankl’s best-selling Holocaust memoir, Man’s Search for Meaning. Amid the horrors of Auschwitz and Dachau, Frankl explored the practical utility of logotherapy. In the book he explains, again and again, how a sense of meaning helped his fellow prisoners survive in such a hellish place. He describes two men on the verge of suicide. Both of the inmates used the same argument: “They had nothing more to expect from life,” so they might as well stop living in pain. Frankl, however, used his therapeutic training to convince the men that “life was still expecting something from them.” For one man, that meant thinking about his adored child, waiting for him in a foreign country. For the other man, it was his scientific research, which he wanted to finish after the war. Because these prisoners remembered that their life still had meaning, they were able to resist the temptation of suicide. 

I was thinking of Frankl while reading a new paper in Psychological Science by Patrick Hill and Nicholas Turiano. The research explores one of Frankl’s essential themes: the link between finding a purpose in life and staying alive. The new study picks up where several recent longitudinal studies have left off. While prior research has found a consistent relationship between a sense of purpose and “diminished mortality risk” in older adults, this new paper looks at the association across the entire lifespan. Hill and Turiano assessed life purpose with three questions, asking their 6163 subjects to say, on a scale from 1 to 7, how strongly they disagreed or agreed with the following statements:

  1. Some people wander aimlessly through life, but I am not one of them.
  2. I live life one day at a time and don’t really think about the future.
  3. I sometimes feel as if I’ve done all there is to do in life.

Then the scientists waited. For 14 years. After counting up the number of deaths in their sample (569 people), the scientists looked to see if there was any relationship between the people who died and their sense of purpose in life.

Frankl would not be surprised by the results, as the scientists found that purpose was significantly correlated with reduced mortality. (For every standard deviation increase in life purpose, the risk of dying during the study period decreased by 15 percent. That’s roughly equivalent to the reduction in mortality that comes from engaging in a modest amount of exercise.) This statistical relationship held even after Hill and Turiano corrected for other markers of psychological well-being, such as having a positive disposition. Meaning still mattered. A sense of purpose – regardless of what the purpose was – kept us from death. “These findings suggest the importance of establishing a direction for life as early as possible,” write the scientists.
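For the statistically curious, associations like this one are typically estimated with survival models. Here is a minimal sketch using the lifelines library and invented column names; the study's actual analysis also adjusts for other covariates, so treat this as an illustration of the general approach rather than a reproduction of it.

```python
# Minimal sketch with the lifelines library and invented column names; the published
# analysis also adjusts for other covariates, so this is only an illustration.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("purpose_followup.csv")   # hypothetical: one row per subject

# Standardize purpose so the hazard ratio is "per standard deviation"
df["purpose_z"] = (df["purpose"] - df["purpose"].mean()) / df["purpose"].std()

cph = CoxPHFitter()
cph.fit(df[["purpose_z", "years_observed", "died"]],
        duration_col="years_observed", event_col="died")

# A hazard ratio of ~0.85 per standard deviation would correspond to the reported
# 15 percent reduction in mortality risk.
print(cph.summary[["exp(coef)"]])
```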

Of course, these correlations cannot reveal their cause. One hypothesis, which is currently being explored by Hill and Turiano, is that people with a sense of purpose are also more likely to engage in healthier behaviors, if only because they have a reason to eat their kale and go to the gym. (Nihilism leads to hedonism.) But that’s only a guess. Frankl himself remained metaphysical to the end. The closest he ever got to a testable explanation was to insist that man was wired for “self-transcendence,” which Frankl defined as being in a relationship with “someone or something other than oneself.” While Freud stressed the inherent selfishness of man, Frankl believed that we needed a purpose as surely as we needed sex and water and food. We are material machines driven by immaterial desires.

Frankl, Viktor E. Man's Search for Meaning. Simon and Schuster, 1985.

Haddon Klingberg, Jr. When Life Calls Out To Us: The love and lifework of Viktor and Elly Frankl. Random House, 2012.

Hill, Patrick L., and Nicholas A. Turiano. "Purpose in Life as a Predictor of Mortality Across Adulthood." Psychological Science (2014): 0956797614531799.

The Too-Much-Talent Effect

A few years ago, the psychologists Adam Galinsky and Roderick Swaab began working on a study that looked at the relationship between national levels of egalitarianism – the belief that everyone deserves equal rights and opportunities – and the performance of national soccer teams in international competitions like the World Cup. It was an admittedly speculative hypothesis, an attempt to find a link between a vague cultural ethos and success on the field. But their logic went something like this: because talented athletes often come from impoverished communities, the most successful countries in the highly competitive World Cup would find a way to draw from the biggest pools of human talent. Think here of the great Pele, who was too poor to afford a soccer ball so he practiced his kicks with a grapefruit instead. Or the famous Diego Maradona, born in a shantytown on the outskirts of Buenos Aires. These men had talent but little else. It is a testament to egalitarianism that they were still able to get the opportunities to succeed.

It’s a nice theory, but is it true? After controlling for a number of variables, including GDP, population size, length of national soccer history and climate, Galinsky and Swaab found that egalitarianism was, indeed, “strongly linked” to better performance in international competition. It also predicted the quantity of talent on each team, with more egalitarian countries producing more players under contract with elite European clubs. In short, the most successful soccer countries don’t necessarily have the most innately talented populations. Instead, they do a better job of not squandering the talent they already have. 

It’s a fascinating study with broad implications. It suggests, for one thing, that much of the national variation in performance – and it doesn’t matter if we’re talking about the soccer pitch or 8th grade math scores – has to do with how well countries utilize their available human capital. What T.S. Eliot said about the excess of literary geniuses during the Elizabethan age (Shakespeare, Marlowe, Spenser, Donne, etc.) turns out to be a far more general truth. “The great ages did not perhaps produce much more talent than ours,” Eliot wrote, “but less talent was wasted.”

So far, so interesting. But as often happens in science, answers have a slippery way of inspiring new questions; the scientific process is a perpetual mystery generating machine. And it’s this next mystery – one utterly unrelated to egalitarianism – that most interests me.

While analyzing the soccer data, Galinsky and Swaab noticed something very peculiar – at a certain point, having more highly talented players on a national team led to worse performance. It was an unsettling finding, since people generally assume that talent exists in a linear relationship with success. (More talent is always better.) Such logic underpins the frenzy of NBA free-agency – every team is begging for superstars – and the predictions of bookies and commentators, who believe that the most gifted teams are the most likely to win. It’s why an already loaded Barcelona team just spent more than $100 million to acquire Luis Suarez, a player who has become as famous for biting as he has for striking.

And so, armed with this anomaly, Galinsky, Swaab and colleagues at INSEAD, Columbia University and VU University Amsterdam, decided to continue the investigation. After confirming the result among soccer teams competing at the 2010 and 2014 World Cup – too much talent appeared to be a burden, making national teams less likely to win – the scientists decided to see if their findings could be extended to other sports.

They turned first to basketball, looking at the impact of top talent on NBA team performance between 2002 and 2012. They coded talent by looking at the Estimated Wins Added (EWA) statistic, a measure that reflects the approximate number of wins a given player adds to a team’s season total. (In the 2013-2014 season, Kevin Durant led the league with an EWA of 30.1. LeBron was second with 27.3.) Once again, talent exhibited a tipping point: NBA teams benefited from having the best players unless they had too many of them. While most general managers assume the link between talent and performance is linear – a straight line with an upward slope – the scientists found that it was actually curved, and teams with more than 60 percent top talent did worse than their less skilled competition. Swaab and Galinsky call this the “too-much-talent” effect.
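The statistical signature of a tipping point is simple: add a squared term to the model and see whether it comes back negative. Here is a minimal sketch with invented numbers – not the study's data or model – showing how a quadratic fit exposes an inverted-U relationship between top talent and winning.

```python
# Invented numbers, not the study's data: fit win percentage as a quadratic function of
# the share of "top talent" on the roster and look for a negative squared term.
import numpy as np

talent_share = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
win_pct = np.array([0.35, 0.42, 0.50, 0.56, 0.60, 0.61, 0.58, 0.53, 0.47])  # hypothetical

b2, b1, b0 = np.polyfit(talent_share, win_pct, 2)   # highest-order coefficient first
tipping_point = -b1 / (2 * b2)                      # vertex: where more talent stops helping
print(f"squared term: {b2:.2f} (negative implies an inverted U), "
      f"tipping point near {tipping_point:.2f}")
```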

The relationship between team talent levels and team performance in the NBA

What accounts for the negative returns of excessive talent? The problem isn’t talent itself; there’s nothing inherently wrong with gifted players. Rather, Galinsky and Swaab argue that too much talent can disrupt the dynamics required for effective teamwork. “Too much talent is really a metaphor for having ineffective coordination among players,” Galinsky says. “Sometimes, you need a hierarchy on a team. You need to have different roles. But if everyone thinks they should be the one with the ball, then you’re going to run into problems.” Galinsky et al. documented this drop-off in coordination by tracking various measures of “intra-team coordination,” such as the number of assists and defensive rebounds per game. (Both stats require teammates to work together.) Sure enough, the too-much-talent effect was mediated by a drop-off in effective coordination, as teams with too many top-flight athletes also struggled with their chemistry. The egos didn’t gel; the players competed for the spotlight; all the talent became a curse.

When I asked Galinsky for an example of a team undone by its surfeit of talent, he cited a 2013 quote from Mike D’Antoni, the head coach of a gifted Lakers team that woefully underperformed. (The starting five featured four probable Hall of Famers: Kobe Bryant, Steve Nash, Dwight Howard and Pau Gasol.) “Have you ever watched an All-Star game? It's god-awful,” D’Antoni said to reporters. “Everybody gets the ball and goes one on one and then they play no defense. That’s our team. That’s us. We’re an All-Star team.” The 2012-13 Lakers were swept by the Spurs in the first round of the playoffs.

Likewise, the LeBron era Miami Heat only succeeded once their talented stars learned how to work together. “When Dwyane Wade got hurt [in 2012], the Heat became a less talented team,” Galinsky says. “But I think his injury also made it clear that he was subordinate to James, and that James was the true leader of the team. That helped them play together. Having less pure talent actually increased their performance.” This suggests that the too-much-talent effect might explain a bit of the Ewing Theory, which occurs when a team performs better after the loss of one of its stars.

Of course, if athletic talent exists in tension with teamwork, then the effect should not exist in sports, such as baseball, that require less coordination. “If you have five starting pitchers, those pitchers don’t need to like each other, because they all start on different days,” Galinsky says. “Too much talent shouldn’t be a big problem.” (The scientists quote Bill Simmons in their paper, noting that baseball is “an individual sport masquerading as a team sport.”) To test this hypothesis, Galinsky et al. used the Wins Above Replacement stat, or WAR, to assess the talent level of every MLB player. Then, they looked to see how different levels of team talent were related to team performance. As predicted, the relationship never turned negative: for baseball clubs, having more highly skilled players was always better. “These results suggest that people’s lay beliefs about the relationship between talent and performance are accurate, but only for tasks low in interdependence,” write the scientists.

The relationship between team talent levels and team performance in MLB

These findings aren’t just relevant for sports teams. Rather, the scientists insist that the too-much-talent effect should apply to many different kinds of collective activity. While organizations place a big emphasis on acquiring top talent – it’s often their top HR priority – the importance of talent depends on the nature of the task. If success depends on the accumulation of individual performances – think of a sales team, or hedge fund traders – then more talent will lead to better outcomes. However, if success requires a high level of coordination among colleagues, then more talent can backfire, especially if the group lacks a clear hierarchy or well-defined roles.  And that’s why the best basketball teams, Galinsky argues, feature talented athletes who focus on different aspects of the game. “No one would argue that the Jordan era Bulls teams weren’t incredibly gifted,” he says. “But Jordan, Pippen and Rodman all understood their roles.  They knew what they needed to do.”

There is, I think, one final implication of this paper. In a world of moneyball GMs and SportVU tracking, it’s easy to dismiss the importance of team chemistry as yet another myth of the small data age, an intangible factor in a time of measurable facts. But this paper provides fans and coaches with a useful way of thinking about the importance of player chemistry, even if we still can’t reliably quantify it.* We’ve always known that team coordination matters, that a group of talented athletes can become more (or less) than the sum of their parts. But now we have empirical proof – a lack of chemistry is the one problem that more talent cannot solve.

*We might not be able to quantify player chemistry, but there does seem to be some consensus among players as to who has it. Talented athletes take big pay cuts to play with LeBron – he makes his teammates better - but Houston couldn't convince any superstars to play with Dwight Howard and James Harden. 

Swaab, Roderick I., and Adam D. Galinsky. "Egalitarianism Makes Organizations Stronger: Cross-National Variation in Institutional and Psychological Equality Predicts Talent Levels and the Performance of National Teams." Organizational Behavior & Human Decision Processes (forthcoming)

Swaab, Roderick I., et al. "The Too-Much-Talent Effect: Team Interdependence Determines When More Talent Is Too Much or Not Enough." Psychological Science (2014)

"A Wandering Mind Is An Unhappy Mind"

Last year, in an appearance on the Conan O’Brien show, the comedian Louis C.K. riffed on smartphones and the burden of human consciousness:

"That's what the phones are taking away, is the ability to just sit there. That's being a person...Because underneath everything in your life there is that thing, that empty—forever empty. That knowledge that it's all for nothing and you're alone. It's down there.

And sometimes when things clear away, you're not watching anything, you're in your car, and you start going, 'Oh no, here it comes. That I'm alone.' It starts to visit on you. Just this sadness. Life is tremendously sad, just by being in it...

That's why we text and drive. I look around, pretty much 100 percent of the people driving are texting. And they're killing, everybody's murdering each other with their cars. But people are willing to risk taking a life and ruining their own because they don't want to be alone for a second because it's so hard."

The punchline stings because it’s mostly true. People really hate just sitting there. We need distractions to distract us from ourselves. That, at least, is the conclusion of a new paper published in Science by the psychologist Timothy Wilson and colleagues. The study consists of 11 distinct experiments, all of which revolved around the same theme: forcing subjects to be alone with themselves for up to 15 minutes. Not alone with a phone. Alone with themselves.

The point of these experiments was to study the experience of mind-wandering, which is what we do when we have nothing to do at all. When the subjects were surveyed after their session of enforced boredom – they were shorn of all gadgets, reading materials and writing implements - they reported feelings of intense unpleasantness. One of Wilson’s experimental conditions consisted of giving subjects access to a nine-volt battery capable of administering an unpleasant shock. To Wilson’s surprise, 12 out of 18 male subjects (and 6 out of 24 female subjects) chose to shock themselves repeatedly. “What is striking,” Wilson et al. write, “is that simply being alone with their own thoughts for 15 minutes was apparently so aversive that it drove many participants to self-administer an electrical shock that they had earlier said they would pay to avoid…Most people seem to prefer doing something rather than nothing, even if that something is negative.”

These lab results build on a 2010 experience-sampling study by Matthew Killingsworth and Daniel Gilbert that contacted 2250 adults at random intervals via their iPhones. The subjects were asked about their current level of happiness, their current activity and whether or not they were thinking about their current activity. On average, subjects reported that their minds were wandering – thinking about something besides what they were doing – in 46.9 percent of the samples. (Sex was the only activity during which people did not report high levels of mind-wandering.) Here’s where things get disturbing: all this mind-wandering made people unhappy, even when they were daydreaming about happy things. “In conclusion,” write Killingsworth and Gilbert, “a human mind is a wandering mind, and a wandering mind is an unhappy mind.” Although we typically use mind-wandering to reflect on the past and plan for the future, these useful thoughts deny us our best shot at happiness, which is losing ourselves in the present moment. As Killingsworth and Gilbert put it: “The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost.”

Given these dismal results, it’s easy to understand the appeal of the digital world, with its constant froth of new information. To carry a smartphone is to never be alone; a swipe of the fingers turns on a screen that keeps us mindlessly entertained, the brain lost in the glow. It’s important to note, however, that Wilson et al. didn’t find any correlation between time spent on smartphones and the ability to enjoy mind-wandering. Contrary to what Louis C.K. argued, there’s little reason to think that our gadgets are the cause of our inability to be alone. They distract us from ourselves, but we’ve always sought distractions, whether it’s television, novels or a comic on a stage. We seek these distractions because, as Wilson et al. write, "it is hard to steer our thoughts in pleasant directions and keep them there." And so our daydreams often end up in dark places, as we ruminate on our errors and regrets. (It shouldn't be too surprising, then, that there's a consistent relationship between mind-wandering and dysphoria.) Here's Louis C.K. once again:

"The thing is, because we don't want that first bit of sad, we push it away with a little phone or a jack-off or the food...You never feel completely sad or completely happy, you just feel kinda satisfied with your product, and then you die. So that's why I don't want to get a phone for my kids.”

One last point. It's interesting to think of this new research in light of religious traditions that emphasize both the struggle of existence and the importance of living in the moment. According to the Buddha, the first noble truth of the world is dukkha, which roughly translates as “suffering.” This pain can't be escaped - everyone dies - but it can be assuaged, at least if we learn to think properly. (The Buddhist term for such thinking, sati, is often translated as mindfulness, or "attentiveness to the present.") Instead of letting the mind disengage, Buddhism emphasizes the importance of using meditative practice to stay tethered to the here and now. Because once you admit the big-picture sadness, once you accept the inevitability of sorrow and despair, then a wandering mind keeps wandering back to that brutal truth. The only escape is to embrace what's actually happening, even if it means sitting in a bare room, noticing the waves of boredom and sadness that wash over the mind. "Let the sadness hit you like a truck," Louis C.K. says, sounding a little bit like a foul-mouthed Buddha. "You're lucky to live sad moments."

The Skin Is A Social Organ

Your body is covered in hairy skin.* Below the surface of this skin are wispy sensory nerves known as C-fiber tactile afferents, or CTs. These nerves are designed to respond to gentle contact - even the slightest of indentations can turn them on, starting a cascade of electrical signals that ends with a feeling of touch. For a long time, the most notable fact about these nerves was their lack of speed: because CTs had no myelin insulation, they were about 50 times slower at transmitting sensory signals to the brain than myelinated A-fiber nerves.

And so a simple model of the touch system emerged: we had a fast pathway, modulated by A-fibers, which gave us quick and precise information about the surface of the body. Such a system had an obvious function, allowing us to touch the world, manipulate objects and monitor the body in space.

But if we have this fast sensory system, then why are the vast majority of nerves in hairy skin slow CT fibers? It’s like a customer with a broadband connection keeping a dial-up modem around, just in case.

In recent years, however, it’s become clear that CT fibers are not merely an archaic back-up or useless redundancy. Rather, they are endowed with their own unique purpose, which is just as essential as the speedy transmission of A-fibers. In a new Perspective published in Neuron, the neuroscientists Francis McGlone, Johan Wessberg and Håkan Olausson lay out the argument. They suggest that a particular kind of C-fiber nerve is largely responsible for the emotional quality of touch, passing along crucial information about the “affective and rewarding properties” of the most tender contact. When we talk about the power of touch – say, the healing properties of a hug, or a gentle caress – we are talking about the powers of these slow nerves.

There are multiple strands of evidence. The first comes from neurological patients with selective damage to A-fibers, leaving them with a touch pathway composed exclusively of C-fibers. These people are mostly numb. However, this numbness comes with a strange loophole – if their skin is brushed gently at a low velocity (between 1 and 10 centimeters per second), their bodies can be filled with pleasurable sensations. The feeling is vague – some patients couldn’t even identify the body quadrant that was being stroked - but everyone felt it.

The second piece of evidence is the inverse situation: patients with a rare genetic mutation that wipes out their C-fiber pathway, so that only A-fibers remain. While these patients have primarily been studied for their inability to feel pain – they are often oblivious to severe wounds, such as that from a broken bone – it turns out that they’re also less likely to experience pleasure from a soft touch.

These differences in the function of A and C fibers are echoed in the brain. While skin stroking in normal subjects triggers activation in the somatosensory cortex – the part of the brain that tells us where the sensation is coming from – patients with only C-fibers show a selective activation in the posterior insular cortex and other limbic areas. According to McGlone et al., this suggests that a class of touch-sensitive C-fibers has “excitatory projections mainly to emotion-related” systems in the brain. They are designed to fill us with feeling, not to tell us where in the flesh these feelings are coming from.

This all makes sense, if you think about it. We are creatures of touch, naked apes that still enjoy getting groomed. We soothe children with soft strokes and kiss the limbs of lovers; the skin is a social organ. While neuroscience tends to focus on vision and hearing as conduits for social information, McGlone et al. point out that the epidermis is also “the site of events and processes crucial to the way we think about, feel about, and interact with one another.”

These touches are most important during development. As Harry Harlow first observed, the absence of comforting contact is deeply stressful for young monkeys, leaving them with a wound from which they never recover. More recent studies have found that separating infant monkeys from their mother with a transparent screen – they could still hear, smell and see her – led to chronic activation of stress pathways in the brain. The stress was only diminished if the young monkeys were allowed to form “peer touch relationships,” suggesting that physical contact is required for normal brain development. Michael Meaney, meanwhile, has shown that rat pups born to mothers that engaged in lots of licking and grooming were much better at coping with stressful situations, such as the open-field test. They solved mazes more quickly, were less aggressive with their peers and lived longer lives. Meaney argues that these differences are driven by differences in the brain, as rat pups exposed to a surfeit of tender contact have fewer receptors for stress hormone and more receptors for the chemicals that attenuate the stress response.

And then there’s the tragic evidence from early 20th century orphanages and foundling hospitals. In these childcare institutions, there was an intense focus on cleanliness and efficiency. As the psychologist Robert Karen notes, this meant that babies were “typically prop-fed, the bottle propped up for them so that they wouldn’t have to be held during feeding. This was considered ideally antiseptic, and it was labor-saving as well.”

Unfortunately, such routines proved deadly. Although these hospitals supplied infants with adequate nutrition and warmth, they struggled to keep them alive. A 1915 review of ten infant foundling hospitals in the Eastern United States, for instance, concluded that up to 75 percent of the children died before their second birthday. (The best hospital in the study had a 31.7 percent mortality rate.) In fact, it wasn’t until the early 1930s, when pediatricians like Harry Bakwin began insisting that nurses touch the babies, that mortality rates declined. The soft touches, carried along by those CT nerves, were a kind of sustenance.

Of course, the newfound recognition of C-fibers doesn’t mean the mystery of emotional touch has been solved. The pleasure of contact isn’t just a bottom-up phenomenon, triggered by some peripheral nerves in the flesh. Rather, it’s entangled with all sorts of higher order variables, from the context of touch to the “relationship of the touchee with the toucher.” If anything, the fact that we’re only now beginning to outline the mechanics of the caress is a reminder that the nervous system is full of unknowns, threaded with wires we don’t understand. Somehow, in the milliseconds after the skin is stroked, we turn that mechanical twitch into a powerful feeling, which eases our anxiety and reminds us why it’s good to be alive.

*The only non-hairy parts of the skin - so-called glabrous skin - are found on the soles of the feet and the palms of the hands. 

McGlone, Francis, Johan Wessberg, and Håkan Olausson. "Discriminative and Affective Touch: Sensing and Feeling." Neuron 82.4 (2014): 737-755.

 

Pity the Fish

Consider the lobster; pity the fish. In his justly celebrated Gourmet essay, David Foster Wallace argued that the lobster was not a mindless invertebrate, but rather a creature capable of feeling, especially pain. Wallace made his case with the brute facts of comparative neurology - lobsters have plenty of pain receptors - but also with anecdotes of the kitchen, as the crustacean resists its boiling death.  "After all the abstract intellection," Wallace writes, "there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience."

I was thinking of Wallace's essay while reading a new paper in Animal Cognition by Culum Brown, a biologist at Macquarie University in Australia. Brown does for the fish what Wallace did for the lobster, calmly reviewing the neurological data and insisting that our undersea cousins deserve far more dignity and compassion than we currently give them. Brown does not mince words:

"All evidence suggests that fish are, in fact, far more intelligent than we give them credit. Recent reviews of fish cognition suggest fish show a rich array of sophisticated behaviours. For example, they have excellent long-term memories, develop complex traditions, show signs of Machiavellian intelligence, cooperate with and recognise one another and are even capable of tool use. Emerging evidence also suggests that, despite appearances, the fish brain is also more similar to our own than we previously thought. There is every reason to believe that they might also be conscious and thus capable of suffering."

What makes this review article so necessary is that, as Brown notes, fish are afforded virtually no protections against human cruelty. They are the most consumed animal; the most popular pet; the only creature for which it’s an acceptable leisure activity to hook them with a metal barb and then reel them, against their frantic wishes, into an environment in which they will slowly suffocate to death, drowning in air.

Such suffering is ignored because we assume it doesn't exist; fish are supposed to be primitive beasts, cold-blooded and unconscious. But Brown gathers a persuasive range of evidence highlighting our error: 

  • Fish are exquisitely sensitive creatures, with perceptual abilities that track (or exceed) those of mammals. 
  • Fish can learn a simple Pavlovian conditioning task - light paired with food - significantly faster than rats and dogs. They also exhibit one-trial learning: pike that have been hooked often become "hook shy" for over a year.
  • Fish have incredible spatial memories. Gobies, for instance, sometimes leap from rock pool to rock pool. "Even after being removed from their home pools for 40 days, the fish could still remember the location of surrounding pools," writes Brown. "This astonishing ability makes use of a cognitive map built-up during the high tide when the fish are free to roam over the rock platform."
  • Fish exhibit social learning. Salmon born in a hatchery can be taught to recognize unknown live prey by pairing them with fish that already feed off the prey. Guppies can pass along foraging routes; some scientists speculate that the recent shift in cod spawning grounds reflects the "systematic removal of older, knowledgeable individuals by commercial fishing."
  • Fish know each other. Guppies can easily recognize up to 15 individuals. If allowed to choose, fish prefer to shoal with fish they have met before.
  • Fish exhibit a high degree of social intelligence.  "If a pair of fish inspects a predator," Brown writes, “they glide back and forth as they advance towards the predator each taking it in turn to lead. If a partner should defect or cheat in any way, perhaps by hanging back, the other fish will refuse to cooperate with that individual on future encounters." Or look at the cleaner wrasse, which removes parasites and dead skin from the surface of "client fish." Each wrasse has a large set of regular customers, who they seek to please in order to ensure return business. "If the cleaner should accidentally bite the client, then the client will rapidly swim away. But the cleaner has a mode of reconciliation; they chase after the distraught client and give them a back rub, thus enticing them to come again." Interestingly, the wrasse are far less likely to nip predatory fish, suggesting that they are able to categorize clients according to their aggressive potential. 
  • Fish build nests and use tools. At least 9000 species of fish construct nests, either for eggs or shelter. Wrasse species often use rocks to crush sea urchin shells; they use anvils to break open shellfish. Meanwhile, cod in the laboratory figured out how to use tiny metal tags embedded in their backs to operate a feeder.
  • Fish rely on the same basic circuitry of nerves to process pain as mammals. This shouldn’t be too surprising: the pain receptors in all vertebrates are descended from an early fishlike ancestor. Furthermore, there’s evidence that fish also respond to pain in a “cognitive sense” – they have an experience of suffering. Brown cites a study showing that fish injected with acetic acid display “attention deficits,” and lose their fear of novel objects. Presumably, he writes, this is because “the cognitive experience of pain is dominant over or overshadows other processes.”

Brown concludes his review by arguing that fish deserve to be included in our “moral circle.” The vertebrate taxa are worthy of the same protections against wanton suffering that we offer to most land based mammals. And yet, Brown readily admits that, given current fishing practices, the “ramifications for such animal welfare legislation…is perhaps too daunting to consider.” Billions of humans depend on fish for sustenance, but there is no way to catch a fish without being cruel. 

My own dietary decisions are harder to defend. I don't fish, but I love to eat them. Wild salmon is my favorite. Brown’s paper reminded me of a wonderful Stanley Kunitz poem, “King of the River.” The poem describes the heroic journey of a Pacific salmon, as it returns to the fresh water of its birth to spawn and die. If Brown makes the empirical case for fish – they know more than we think, they feel more than we want them to - then Kunitz takes us inside the strange mind of the orange-fleshed vertebrate, swimming madly upriver, its suicidal trip driven by a familiar mixture of “nostalgia and desire.”

"A dry fire eats at you.

Fat drips from your bones.

The flutes of your gills discolor.

You have become a ship for parasites.

The great clock of your life

is slowing down,

and the small clocks run wild.

For this you were born."

Brown, Culum. "Fish intelligence, sentience and ethics." Animal Cognition (2014).

The Violence of the Pass

Football is going to change. That much is clear. The correlation between the impacts sustained on the football field and the brain damage of players is no longer just a correlation: it’s starting to look like a tragic cause.

But how is the sport going to change? There will be better helmets, of course, and stricter rules about helmet-to-helmet contact, and more accurate monitoring of head trauma.  We’ll start tracking the linear acceleration (g) of skulls as carefully as we track the stats of quarterbacks.

However, it’s also worth considering the ways in which the concussion crisis will interact with pre-existing football trends. Over the last decade, the single most notable shift within the sport has been the rise of the passing offense, with passing yardage increasing by roughly twenty percent. In 2003, as Ty Schalter notes, only Indianapolis used the shotgun offense more than 30 percent of the time. (Three teams never used the shotgun at all.) By 2012, most teams were approaching a shotgun usage rate of 40 percent or higher.

At first glance, this shift towards passing might seem like an effective response to the concussion crisis. Studies relying on head telemetry data – they use special helmets outfitted with a network of sensors – show that linemen and linebackers sustain, by far, the most sub-concussive hits over the course of a game. (Running backs take the hardest hits.) Fewer running plays, then, should translate to less wear and tear on the brains of those players brawling at the line of scrimmage. (Pass routes are the only part of the game in which, after five yards, no meaningful contact is allowed; it’s football pretending to be basketball.) When a pass-dominant offense takes the field, the game is still violent, but the violence seems contained. More spread equals less smash mouth.

Alas, a new paper by Douglas Martini, James Eckner, Jeffrey Kutcher and Steven Broglio at the University of Michigan, Ann Arbor, suggests that the rise of the passing offense will do little to quell the concussion crisis. In fact, it might even be making the problem worse. In their study, Martini et al. tracked 83 high school football athletes using the Head Impact Telemetry System (HITS). While most public attention has focused on the brains of NFL players, these highly paid athletes actually represent a very small sliver of those at risk. There are, give or take, a few thousand players on NFL payrolls. There are approximately 68,000 football players at the college level. And there are 1.2 million football players at the high-school level.

The question investigated by these researchers was whether or not offensive style influenced the amount and distribution of head impacts. One team utilized a run-first offense (RFO); the other used a pass-first offense (PFO). The RFO team passed, on average, 8.8 times a game and ran the ball 32.9 times, while the PFO passed 25.6 times and ran 26.3 times. 

So what did they find? The first thing to state is the obvious: football is a contact sport. These 83 teenagers endured 35,681 head impacts over the course of the season; at least six of these impacts resulted in serious concussions. 

What’s more, the different offensive styles resulted in significantly different patterns of impact. The running offense generated about 1.5 times as many total head blows as the passing offense – many of these occurred during practice – while the passing offense generated bigger average blows, especially during the games. This was true across every measure of head impact, from linear acceleration (g) to the overall hit severity profile (HITsp). In short, when teams throw the ball in the air, there are fewer total hits, but each hit is harder, especially for skill position players, such as running backs and wide receivers. The scientists speculate that the root cause of these differences is simple physics, as players in the pass offense are “able to reach higher running velocities before contacting an opponent than the equivalent RFO athletes… As such, the PFO athletes would have larger initial velocities that resulted in greater deceleration values following impact.” And it’s the deceleration that’s dangerous, as the soft brain lurches into the hard bones of the skull. This helps explain why, in 2012 and 2013, receivers and cornerbacks sustained more concussions than any other positions in the NFL. Their speed across the field more than makes up for their lack of mass.
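
The physics behind that speculation is easy to check with a back-of-the-envelope calculation. The sketch below uses the standard kinematics shortcut – if the head goes from velocity v to rest over a short stopping distance d, its average deceleration is v²/(2d) – with assumed collision speeds and an assumed few centimeters of “give” at impact; none of these numbers come from the Martini study.

```python
# Back-of-the-envelope sketch: why higher closing speeds mean harder hits.
# All inputs below are illustrative assumptions, not measured values.
G = 9.81  # standard gravity, m/s^2

def impact_deceleration_g(velocity_ms, stopping_distance_m):
    """Average deceleration, in g, for a stop from velocity_ms over stopping_distance_m."""
    return (velocity_ms ** 2) / (2 * stopping_distance_m) / G

stop = 0.05  # assumed "give" at impact (padding, neck flexion), in meters

for label, v in [("slower, run-play collision (~4 m/s)", 4.0),
                 ("open-field, pass-play collision (~8 m/s)", 8.0)]:
    print(label, "->", round(impact_deceleration_g(v, stop), 1), "g")

# Because deceleration scales with velocity squared, doubling the closing
# speed quadruples the jolt delivered to the brain.
```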

The larger lesson is that there appears to be a fundamental tradeoff between the frequency of hits in a football game and their magnitude. (More research on this subject is desperately needed - the NFL should install head telemetry units in every helmet.) The passing attack might look less aggressive, but appearances can be deceiving; the elegant throws still end with a cloud of dust. If nothing else, this study is yet another reminder that head violence is an intrinsic part of football, and not a by-product of a particular style of play.  

Martini, Douglas, et al. "Subconcussive head impact biomechanics: comparing differing offensive schemes." Medicine and Science in Sports and Exercise 45.4 (2013): 755-761.


Cohesion, PTSD and War

I’ve been reading Head Strong, an excellent new book by Michael D. Matthews, a professor of engineering psychology at West Point. The book describes the history and future of military psychology, from the birth of intelligence testing during WWI to the next generation of immersive battlefield simulations.

Not surprisingly, the problem of Post-Traumatic Stress Disorder (PTSD) is a recurring theme, as Matthews discusses recent attempts by the Armed Forces to promote resilience. (“The military does a good job of teaching its soldiers to kill. But it does not do a good job of teaching them to cope with it,” he writes.)  Matthews details the Comprehensive Soldier Fitness (CSF) program, based on the work of Martin Seligman, and the unintended consequences of creating weapons systems so effective that they “give the individual soldier the firepower of a traditional squad or platoon.” One potential downside of these new systems, Matthews argues, is that American soldiers will gain the ability to control a large territory by themselves, and thus end up isolated from their comrades. “Soldiers fight for their buddies, who traditionally they could literally reach out and touch,” he writes. While technology makes the dispersal of troops possible, Matthews suggests there will be no substitute for the “physical presence of others,” especially when soldiers are “placed in situations of mortal danger.”

Interesting stuff. But there was one data point in the book that I couldn’t stop thinking about, even though Matthews mentions it almost as an aside. While pointing out that PTSD rates vary widely between military units – the overall rate for deployed soldiers hovers between 10 and 25 percent – Matthews notes that “highly trained and specialized units including SEAL teams, Rangers, and other elite organizations” have proven far more resistant to the disorder. (Their PTSD rates are typically less than five percent.) What makes this statistic even more surprising is that these elite units tend to see frequent and intense combat – in objective terms, they have experienced the most trauma. And yet, they seem the least troubled by its aftermath.

Why are elite units so resilient? There are many variables at work here; PTSD is triggered by a multitude of risk factors. For starters, elite units tend to be better educated and in better physical condition, both of which are correlated with a reduced incidence of PTSD. Self-selection also plays a role: anyone tough enough to become a Ranger or SEAL has learned how to handle stress and hardship.

Matthews, however, mentions a protective factor that is often overlooked, at least in popular discussions of PTSD: unit cohesion. According to Matthews, elite units are “highly cohesive"; the soldiers form close relationships, built out of their shared experiences. In Pentagon surveys, they are more likely to agree with statements such as “my unit is like family to me,” or “members of my unit understand me.” 

A series of recent studies backs up Matthews’ argument, highlighting the protective effects of unit cohesion. One analysis of 705 Air Force medical personnel deployed as part of Operation Iraqi Freedom found a “significant linear interaction…such that greater cohesion was associated with lower levels of PTSD symptom severity.” When stress exposure was high, for instance, medics in the most cohesive units reported PTSD symptoms that were approximately 25 percent less severe, at least as measured by the military’s PTSD checklist. Another study of 4901 male personnel from the UK armed services (Royal Navy, Royal Marines, British Army and Royal Air Force) concluded that unit cohesion was associated with significantly lower levels of PTSD and other mental disorders, such as depression. The British scientists end their paper by stressing the importance of fostering unit cohesion among soldiers, given "that so many other factors which have a positive association with higher levels of mental health problems are un-modifiable (for example, family background and exposures on deployment)." When it comes to PTSD, cohesion isn't just an incredibly important variable - it's a variable the Armed Forces can influence.
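
For readers who want to see what a “significant linear interaction” looks like in practice, here is a toy version in Python. The data and coefficients are simulated – invented for illustration, not taken from the Dickstein study – but the structure is the same: symptom severity is regressed on stress exposure, cohesion, and their product, and a negative interaction term means cohesion blunts the effect of stress.

```python
# Simulated sketch of a stress-by-cohesion interaction on PTSD symptom severity.
# The data-generating coefficients below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 705
stress = rng.uniform(0, 1, n)      # deployment stress exposure
cohesion = rng.uniform(0, 1, n)    # perceived unit cohesion
severity = (20 + 30 * stress - 5 * cohesion
            - 15 * stress * cohesion + rng.normal(0, 2, n))

# Fit severity ~ stress + cohesion + stress:cohesion by least squares.
X = np.column_stack([np.ones(n), stress, cohesion, stress * cohesion])
coefs, *_ = np.linalg.lstsq(X, severity, rcond=None)
print("interaction coefficient:", round(coefs[3], 1))  # negative, as expected

def predicted_severity(s, c):
    """Model prediction at stress level s and cohesion level c."""
    return coefs @ np.array([1.0, s, c, s * c])

print("high stress, low cohesion: ", round(predicted_severity(0.9, 0.2), 1))
print("high stress, high cohesion:", round(predicted_severity(0.9, 0.8), 1))
```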

The explanation for these results is straightforward: in the aftermath of a terrible life event, other people are the best medicine. It doesn’t matter if we’re being helped by another soldier or a loving spouse - it’s really hard to get over the trauma alone. According to a highly cited meta-analysis of the risk factors associated with PTSD, a lack of social support is incredibly dangerous for those dealing with an acute stressor. (Among military subjects, a lack of social support was the single most important risk factor; among civilians, it placed second.) Close relationships, in this sense, are the ultimate coping mechanism, allowing us to survive the worst parts of life.

In some instances, the presence of close relationships seems to matter more than the stressor itself. Consider a natural experiment that took place during World War II, when approximately 70,000 young Finnish children were evacuated to temporary foster homes in Sweden and Denmark. For the kids who stayed behind in Finland, life was certainly filled with moments of trauma and stress — there were regular air bombardments, severe food shortages and invasions by the Soviets and the Germans. Those kids sent away, however, experienced a different kind of stress. Their wartime experiences might have featured less actual war, but the lack of social support would prove, over time, to be even more dangerous. A 2009 study found that Finnish adults who had been sent away from their parents between 1939 and 1944 were nearly twice as likely to die from cardiovascular illness as those who had stayed at home. A follow-up study found that these temporary war orphans also showed higher levels of stress hormone, stress reactivity and depression, sixty years after they’d been separated from their families. Chronic stress sucks. But chronic stress in the absence of supportive relationships can be crippling.

Perhaps this is why soldiers in elite units are so resilient. When the Armed Forces take unit cohesion seriously, they turn out to be remarkably good at it, able to create deep, emotional bonds among their members. Over time, these relationships become an essential part of how soldiers cope with the violence. While unit cohesion has traditionally been seen through the prism of combat performance – more cohesive units perform better in battle – it seems likely that the biggest benefits of cohesion come after the war.

Matthews, Michael D. Head Strong: How Psychology is Revolutionizing War. Oxford University Press, 2013.

Dickstein, Benjamin D., et al. "Unit cohesion and PTSD symptom severity in Air Force medical personnel." Military Medicine 175.7 (2010): 482-486.