Why Facebook Rules the World

One day, when historians tell the strange story of the 21st century, this age of software and smartphones, populism and Pokémon, they will focus on a fundamental shift in the way people learn about the world. Within the span of a generation, we went from watching the same news shows on television, and reading the same newspapers in print, to getting a personalized feed of everything that our social network finds interesting, as filtered by a clever algorithm. The main goal of the algorithm is to keep us staring at the screen, increasing the slight odds that we might click on an advertisement.

I’m talking, of course, about Facebook. Given the huge amount of attention Facebook commands—roughly 22 percent of Americans’ mobile internet time goes to the social network—it has inspired a relatively meager amount of empirical research. (It didn’t help that the company’s last major experiment became a silly controversy.) Furthermore, most of the research that does exist explores the network’s impact on our social lives. In general, these studies find small, mostly positive correlations between Facebook use and a range of social measures: our Facebook friends are not the death of real friendship.

What this research largely overlooks, however, is a far more basic question: why is Facebook so popular? What is it about the social network (and social media in general) that makes it so attractive to human attention? It’s a mystery at the heart of the digital economy, in which fortunes hinge on the allocation of eyeballs.

One of the best answers for the appeal of Facebook comes from a 2013 paper by a team of researchers at UCSD. (First author Laura Mickes, senior authors Christine Harris and Nicholas Christenfeld.) Their paper begins with a paradox: the content of Facebook is often mundane, full of what the scientists refer to as “trivial ephemera.” Here’s a random sampling of my current feed: there’s an endorsement of a new gluten-free pasta, a smattering of child photos, emotional thoughts on politics and a post about a broken slide at the local park. As the scientists point out, these Facebook “microblogs” are full of quickly composed comments and photos, an impulsive record of everyday life.

Such content might not sound very appealing, especially when there is so much highly polished material already competing for our attention. (Why read our crazy uncle on the election when there’s the Times?) And yet, the “microblog” format has proven irresistible: Facebook’s “news” feed is the dominant information platform of our century, with nearly half of Americans using it as a source for news.  This popularity, write the scientists, “suggests that something about such ‘microblogging’ resonates with human nature.”

To make sense of this resonance, the scientists conducted some simple memory experiments. In their first study, they compared the mnemonic power of Facebook posts to sentences from published books. (The Facebook posts were taken from the feeds of five research assistants, while the book sentences were randomly selected from new titles.) The subjects were shown 100 of these stimuli for three seconds each. Then, they were given a recognition test consisting of these stimuli along with another 100 “lures,” similar content they had not seen, and asked to assess their confidence, on a twenty-point scale, as to whether they had previously been exposed to a given stimulus.
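The mechanics of scoring such a recognition test are easy to sketch. Here is a toy example (the function name and all the numbers are mine, not the study’s) showing how confidence ratings are typically split into “old”/“new” judgments and summarized as hit and false-alarm rates:

```python
# Toy sketch of recognition-memory scoring. The actual study used
# 100 studied items, 100 lures, and a 20-point confidence scale;
# the items and threshold below are invented for illustration.

def score_recognition(responses, threshold=10):
    """Given (item_type, confidence) pairs, call anything rated above
    the threshold an 'old' judgment, then compute the hit rate
    (studied items correctly recognized) and the false-alarm rate
    (lures incorrectly 'recognized')."""
    hits = sum(1 for kind, rating in responses
               if kind == "studied" and rating > threshold)
    false_alarms = sum(1 for kind, rating in responses
                       if kind == "lure" and rating > threshold)
    n_studied = sum(1 for kind, _ in responses if kind == "studied")
    n_lures = sum(1 for kind, _ in responses if kind == "lure")
    return hits / n_studied, false_alarms / n_lures

# A miniature session: two studied items, two lures.
session = [("studied", 18), ("studied", 7), ("lure", 3), ("lure", 12)]
hit_rate, fa_rate = score_recognition(session)
print(hit_rate, fa_rate)  # 0.5 0.5
```

Memory researchers typically go further, using the full confidence scale rather than a single cutoff, but the hit-versus-false-alarm comparison is the core of the measure: a stimulus class is “memorable” when it produces many hits without a matching rise in false alarms.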

According to the data, the Facebook posts were much more memorable than the published sentences. (This effect held even after controlling for sentence length and the use of “irregular typography,” such as emoticons.) But this wasn’t because people couldn’t remember the sentences extracted from books – their performance here was on par with other studies of textual memory. Rather, it was largely due to the “remarkable memorability” of the Facebook posts. Their content was trivial. It was also unforgettable.

In a follow-up condition, the scientists replaced the book sentences with photographs of human faces. (They also gathered a new collection of Facebook posts, to make sure their first set wasn’t an anomaly.) Although it’s long been argued that the human brain is “specially designed to process and store facial information,” the scientists found that the Facebook posts were still far easier to remember.

This is not a minor effect: the difference in memory performance between Facebook posts and these other stimuli is roughly equivalent to the difference between patients with amnesia due to brain damage and people with normal memories. What’s more, the effect holds even when the Facebook content comes from people we don’t know. Just imagine how memorable the feed is when it’s drawn from our actual friends.

To better understand the mnemonic advantage of microblogs, the scientists ran several additional experiments. In one study, they culled text from CNN.com, drawing from both the news and entertainment sections. The text came in three forms: headlines, sentences from the articles, and reader comments. As you can probably guess, the reader comments were much more likely to be remembered, especially when compared to sentences from the articles. Subjects were also better at remembering content from the entertainment section, at least compared to news content.

Based on this data, the scientists argue that the extreme memorability of Facebook posts is being driven by at least two factors. The first is that people are drawn to “unfiltered, largely unconsidered postings,” whether it’s a Facebook microblog or a blog comment. When it comes to text, we don’t want polish and reflection. We want gut and fervor. We want Trump’s tweets.

The second factor is the personal filter of Facebook, which seems to take advantage of our social nature.  We remember random updates from our news feed for the same reason we remember all the names of the Pitt-Jolie children: we are gossipy creatures, perpetually interested in the lives of others.

This research helps explain the value of Facebook, which is currently the 7th most valuable company in the world. The success of the company, which sells ads against our attention, ultimately depends on our willingness to read the haphazard content produced by other people for free. This might seem like a bug, but it’s actually an essential feature of the social network. “These especially memorable Facebook posts,” write the scientists, “may be far closer than professionally crafted sentences to tapping into the basic language capacities of our minds. Perhaps the very sentences that are so effortlessly generated are, for that reason, the same ones that are readily remembered.” While traditional media companies assume people want clean and professional prose, it turns out that we’re compelled to remember the casual and flippant. The problem, of course, is that the Facebook news algorithm is tuned to maximize attention, not truth, which can lead to the spread of sticky lies. When our private feed is full of memorable falsehoods, what happens to public discourse?

And it’s not just Facebook: the rise of the smartphone has encouraged a parallel rise in informal messaging. (We’ve gone from email to emojis in a few short years.) Consider Snapchat, the social network du jour. Its entire business model depends on the eagerness of users to consume raw visual content, produced by friends in the grip of System 1. In a universe overflowing with professional video content, it might seem perverse that we spend so much time watching grainy videos of random events. But this is what we care about. This is what we remember.

The creation of content used to be a professional activity. It used to require moveable type and a printing press and a film crew. But digital technology democratized the tools. And once that happened, once anyone could post anything, we discovered an entirely new form of text and video. We learned that the most powerful publishing platform is social, because it embeds the information in a social context. (And we are social animals.) But we also learned about our preferred style, which is the absence of style: the writing that sticks around longest in our memory is what seems to take the least amount of time to create. All art aspires to the condition of the Facebook post. 

Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., & Christenfeld, N. J. (2013). Major memory for microblogs. Memory & Cognition, 41(4), 481-489.

The Psychology of the Serenity Prayer

One of the essential techniques of Cognitive-Behavioral Therapy (CBT) is reappraisal. It’s a simple enough process: when you are awash in negative emotion, you should reappraise the stimulus to make yourself feel better.

Let’s say, for instance, that you are stuck in traffic and are running late to your best friend’s birthday party. You feel guilty and regretful; you are imagining all the mean things people are saying about you. “She’s always late!” “He’s so thoughtless.” “If he were a good friend, he’d be here already.”

To deal with this loop of negativity, CBT suggests that you think of new perspectives that lessen the stress. The traffic isn’t your fault. Nobody will notice. Now you get to finish this interesting podcast.

It’s an appealing approach, rooted in CBT’s larger philosophy that the way an individual perceives a situation is often more predictive of his or her feelings than the situation itself. 

There’s only one problem with reappraisal: it might not work. For instance, a recent meta-analysis showed that the technique is only modestly useful at modulating negative emotions. What’s worse, there’s suggestive evidence that, in some contexts, reappraisal may actually backfire. According to a 2013 paper by Allison Troy et al., among people who were stressed about a controllable situation—say, being fired because of poor work performance—better reappraisal ability was associated with higher levels of depression.

Why doesn’t reappraisal always work? One possible answer involves an old hypothesis known as strategy-situation fit, first outlined by Richard Lazarus and Susan Folkman in the late 1980s. This approach assumes that there is no universal fix for anxiety and depression, no single tactic that always grants us peace of mind. Instead, we must choose our coping strategies carefully, as their effectiveness will depend on the larger context.

A new paper by Simon Haines et al. (senior author Peter Koval) in Psychological Science provides new evidence for the strategy-situation fit model. While previous research has suggested that the success of reappraisal depends on the nature of the stressor—it’s only useful when we can’t control the source of the stress—these Australian researchers wanted to measure the relevant variables in the real world, and not just in the lab. To do this, they designed a new smartphone app that pushed out surveys at random moments. Each survey asked their participants a few questions about their use of reappraisal and the controllability of their situation. These responses were then correlated with several questionnaires measuring well-being and mental health.

The results confirmed the importance of strategy-situation fit. According to the data, people with lower levels of well-being (they had more depressive symptoms and/or stress) used reappraisal in the wrong contexts, increasing their use of the technique when they were in situations they perceived as controllable. For example, instead of leaving the house earlier, or trying to perform better at work, people with poorer “strategy-situation fit” might spend time trying to talk themselves into a better mood. People with higher levels of well-being, in contrast, were more likely to use reappraisal at the right time, when they were confronted with situations they felt they could not control. (Bad weather, mass layoffs, etc.) This leads Haines et al. to conclude that, “rather than being a panacea, reappraisal may be adaptive only in relatively uncontrollable situations.”

Why doesn’t reappraisal help when we can influence the situation? One possibility is that focusing on our reaction might make us less likely to take our emotions seriously. We’re so focused on changing our thoughts—think positive!—that we forget to seek an effective solution. 

Now for the caveats. The most obvious limitation of this paper is that the researchers relied on subjects to assess the controllability of a given situation; there were no objective measurements. The second limitation is the lack of causal data. Because this was not a longitudinal study, it’s still unclear whether higher levels of well-being are a consequence or a precursor of more strategic reappraisal use. How best to deal with our emotions is an ancient question. It won’t be solved anytime soon.

That said, this study does offer some useful advice for practitioners and patients using CBT. As I noted in an earlier blog, there is worrying evidence that CBT has gotten less effective over time, at least as measured by its ability to reduce depressive symptoms. (One of the leading suspects behind this trend is the growing popularity of the treatment, which has led more inexperienced therapists to begin using it.) While more study is clearly needed, this research suggests ways in which standard CBT might be improved. It all comes down to an insight summarized by the great Reinhold Niebuhr in the Serenity Prayer:

God, grant me the serenity to accept the things I cannot change,

Courage to change the things I can,

And wisdom to know the difference.                                         

That’s wisdom: tailoring our response based on what we can and cannot control. Serenity is a noble goal, but sometimes the best way to fix ourselves is to first fix the world.

Haines, Simon J., et al. "The Wisdom to Know the Difference: Strategy-Situation Fit in Emotion Regulation in Daily Life Is Associated With Well-Being." Psychological Science (2016): 0956797616669086.

How Southwest Airlines Is Changing Modern Science

The history of science is largely the history of individual genius. From Galileo to Einstein, Isaac Newton to Charles Darwin, we tend to celebrate the breakthroughs achieved by a mind working by itself, seeing more reality than anyone has ever seen before.

It’s a romantic narrative. It’s also obsolete. As documented in a pair of Science papers by Stefan Wuchty, Benjamin Jones and Brian Uzzi, modern science is increasingly a team sport: more than 80 percent of science papers are now co-authored. These teams are also producing the most influential research, as papers with multiple authors are 6.3 times more likely to get at least 1000 citations. The era of the lone genius is over.

What’s causing the dramatic increase in scientific collaboration? One possibility is that the rise of teams is a response to the increasing complexity of modern science. To advance knowledge in the 21st century, one has to master an astonishing amount of information and experimental know-how; because we have discovered so much, it’s harder to discover something new. (In other words, the mysteries that remain often exceed the capabilities of the individual mind.) This means that the most important contributions now require collaboration, as people from different specialties work together to solve extremely difficult problems.

But this might not be the only reason scientists are working together more frequently. Another possibility is that the rise of teams is less about shifts in knowledge and more about the increasing ease of interacting with other researchers. It’s not about science getting hard. It’s about collaboration getting easy.

While it seems likely that both of these explanations are true—the trend is probably being driven by multiple factors—a new paper emphasizes the changes that have reduced the costs of academic collaboration. To test this, the economists Christian Catalini, Christian Fons-Rosen and Patrick Gaulé looked at what happens to scientific teams after Southwest Airlines enters a metropolitan market. (On average, the entrance of Southwest leads to a roughly 20 percent reduction in fares and a 44 percent increase in passengers.) If these research partnerships are held back by practical obstacles—money, time, distance, etc.—then the arrival of Southwest should lead to a spike in teamwork.

That’s exactly what they found. According to the researchers, after Southwest begins a new route, collaborations among scientists increase across every scientific discipline. (Physicists increase their collaborations by 26 percent, while biologists seem to really love cheap airfare: their collaborations increase by 85 percent.) To better understand these trends, and to rule out some possible confounds, Catalini et al. zoomed in on collaborations among chemists. They tracked the research produced by 819 pairs of chemists between 1993 and 2012. Once again, they found that the entry of Southwest into a new market leads to an approximately 30 percent spike in collaboration among chemists living near the new routes. What’s more, this trend toward teamwork showed no signs of existing before the arrival of the low-cost airline.

At first glance, it seems likely that these new collaborations triggered by Southwest will produce research of lower quality. After all, the fact that the scientists waited to work together until airfares were slightly cheaper suggests that they didn’t think their new partnership would create a lot of value. (A really enticing collaboration should have been worth a more expensive flight, especially since the arrival of Southwest didn’t significantly increase the number of direct routes.) But that isn’t what Catalini et al. found. Instead, they discovered that Southwest’s entry into a market led to an increase in higher quality publications, at least as measured by the number of citations. Taken together, these results suggest that cheaper air travel is not only redrawing the map of scientific collaboration, but fundamentally improving the quality of research.

There is one last fascinating implication of this dataset. The spread of Southwest paralleled the rise of the Internet, as it became far easier to communicate and collaborate using digital tools, such as email and Skype. In theory, these virtual interactions should make face-to-face conversations unnecessary. Why put up with the hassle of air travel when there’s FaceTime? Why meet in person when there’s Google Docs? The Death of Distance and all that.

But this new paper is a reminder that face-to-face interactions are still uniquely valuable. I’ve written before about the research of Isaac Kohane, a professor at Harvard Medical School. A few years ago, he published a study that looked at the influence of physical proximity on the quality of research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Geography turned out to be a crucial variable: when co-authors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were located within ten meters of each other, while the least cited papers tended to emerge from collaborators who were a kilometer or more apart.

Even in the 21st century, the best way to work together is to be together. The digital world is full of collaborative tools, but these tools are still not a substitute for meetings that take place in person.* That’s why we get on a plane.

Never change, Southwest.

Catalini, Christian, Christian Fons-Rosen, and Patrick Gaulé. "Did cheaper flights change the geography of scientific collaboration?" SSRN Working Paper (2016). 

* Consider a study that looked at the spread of Bitnet, a precursor to the internet. As one might expect, the computer network significantly increased collaboration among electrical engineers at connected universities. However, the boost in collaboration was far larger among engineers who were within driving distance of each other. Yet more evidence for the power of in-person interactions comes from a 2015 paper by Catalini, which looked at the relocation of scientists following the removal of asbestos from Paris Jussieu, the largest science university in France. He found that science labs that had been randomly relocated to the same area were 3.4 to 5 times more likely to collaborate. Meatspace matters.

Do Social Scientists Know What They're Talking About?

The world is lousy with experts. They are everywhere: opining in op-eds, prognosticating on television, tweeting out their predictions. These experts have currency because their opinions are, at least in theory, grounded in their expertise. Unlike the rest of us, they know what they’re talking about.

But do they really? The most famous study of political experts, led by Philip Tetlock at the University of Pennsylvania, concluded that the vast majority of pundits barely beat random chance when it came to predicting future events, such as the winner of the next presidential election. They spun out confident predictions but were never held accountable when their predictions proved wrong. The end result was a public sphere that rewarded overconfident blowhards. Cable news, Q.E.D.

While the thinking sins identified by Tetlock are universal - we’re all vulnerable to overconfidence and confirmation bias - it’s not clear that the flaws of political experts can be generalized to other forms of expertise. For one thing, predicting geopolitics is famously fraught: there are countless variables to consider, interacting in unknowable ways. It’s possible, then, that experts might perform better in a narrower setting, attempting to predict the outcomes of experiments in their own field.

A new study, by Stefano DellaVigna at UC Berkeley and Devin Pope at the University of Chicago, aims to put academic experts to this more stringent test. They assembled 208 experts from the fields of economics, behavioral economics and psychology and asked them to forecast the impact of different motivators on the performance of subjects completing an extremely tedious task. (They had to press the “a” and “b” buttons on their keyboard as quickly as possible for ten minutes.) The experimental conditions ranged from the obvious - paying for better performance - to the subtle, as DellaVigna and Pope also looked at the influence of peer comparisons, charity and loss aversion. What makes these questions interesting is that DellaVigna and Pope already knew the answers: they’d run these motivational studies on nearly 10,000 subjects. The mystery was whether or not the experts could predict the actual results.

To make the forecasting easier, the experts were given three benchmark conditions and told the average number of presses, or “points,” in each condition. For instance, when subjects were told that their performance would not affect their payment, they only averaged 1521 points. However, when they were paid 10 cents for every 100 points, they averaged 2175 total points. The experts were asked to predict the number of points in fifteen additional experimental conditions.
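To make the piece-rate incentive concrete, here is a quick back-of-the-envelope calculation (the function name is mine, not the paper’s):

```python
# The benchmark condition described above: 10 cents for every 100 points.
# A subject averaging 2175 points therefore earns about $2.18 in bonus.

def bonus_cents(points, cents_per_100=10):
    """Bonus earned, in cents, under a simple piece-rate scheme."""
    return points / 100 * cents_per_100

print(bonus_cents(2175))  # 217.5 cents, i.e. about $2.18
print(bonus_cents(1521))  # 152.1 cents, had the no-incentive group been paid
```

Small stakes, in other words - which is part of what makes the forecasting problem interesting: the experts had to predict how much effort such modest incentives would actually buy.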

The good news for experts is that these academics did far better than Tetlock’s pundits. When asked to predict the average points in each condition, they demonstrated the wisdom of crowds: their predictions were off by only 5 percent. If you’re a policy maker, trying to anticipate the impact of a motivational nudge, you’d be well served by asking a bunch of academics for their opinions. 

The bad news is that, on an individual level, these academics still weren’t very good. They might have looked prescient when their answers were pooled together, but the results were far less impressive if you looked at the accuracy of experts in isolation. Perhaps most distressing, at least for the egos of experts, is that non-scientists were much better at ranking the treatments against each other, forecasting which conditions would be most and least effective. (As DellaVigna pointed out in an email, this is less a consequence of expert failure and more a tribute to the fact that non-experts did “amazingly well” at the task.) The takeaway is straightforward: there might be predictive value in a diverse group of academics, but you’d be foolish to trust the forecast of a single one.

Furthermore, there was shockingly little relationship between academic credentials and forecasting performance. Full professors tended to underperform assistant professors, while having more Google Scholar citations was correlated with lower levels of accuracy. (PhD students were “at least as good” as their bosses.) Academic experience clearly has virtues. But making better predictions about experiments does not seem to be one of them.

Since Tetlock published his damning critique of political pundits, he has gone on to study so-called “superforecasters,” those amateurs whose predictions of world events are consistently more accurate than those of intelligence analysts with access to classified information. (In general, these superforecasters share a particular temperament: they’re willing to learn from their mistakes, quick to update their beliefs and tend to think in shades of gray.) After mining the data, DellaVigna and Pope were able to identify their own superforecasters. As a group, these non-experts significantly outperformed the academics, improving on the average error rate of the professors by more than 20 percent. These people had no background in behavioral research. They were paid $1.50 for 10 minutes of their time. And yet, they were better than the experts at predicting research outcomes.

The limitations of expertise are best revealed by the failure of the experts to foresee their own shortcomings. When the academics were surveyed by DellaVigna and Pope, they predicted that high-citation experts would be significantly more accurate. (The opposite turned out to be true.) They also expected PhD students to underperform the professors - that didn’t happen, either - and predicted that academics with training in psychology would perform best. (The data points in the opposite direction.)

It’s a poignant lapse. These experts have been trained in human behavior. They have studied our biases and flaws. And yet, when it comes to their own performance, they are blind to their own blindspots. The hardest thing to know is what we don’t.

DellaVigna, Stefano, and Devin Pope. "Predicting Experimental Results: Who Knows What?" NBER Working Paper, 2016.

The Power of Family Memory

In a famous series of studies conducted in the 1980s, the psychologists Betty Hart and Todd Risley gave parents a new variable to worry about: the number of words they speak to their children. According to Hart and Risley, the quantity of spoken language in a household is predictive of IQ scores, vocabulary size and overall academic success. The language gap even begins to explain socio-economic disparities in educational outcomes, as upper-class parents speak, on average, about 3.5 times more to their kids than their poorer peers. Hart and Risley referred to the lack of spoken words in poor households as "the early catastrophe."

In recent years, however, it’s become clear that it’s not just the amount of language that counts. Rather, researchers have found that some kinds of conversations are far more effective at promoting mental and emotional development than others. While all parents engage in roughly similar amounts of so-called “business talk” – interactions in which the parent is issuing instructions, such as “Hold out your hands,” or “Stop whining!” – there is far more variation when it comes to what Hart and Risley called “language dancing,” conversations in which the parent and child are engaged in a genuine dialogue. According to a 2009 study by researchers at the UCLA School of Public Health, such parent-child dialogues were six times as effective at promoting the development of language skills as interactions in which the adult did all the talking.

So conversation is better than instruction; dialogues over monologues. But this only leads to the next practical question: What’s the best kind of conversation to have with children? If we only have a limited amount of “language dancing” time every day - my kids usually start negotiating for dessert roughly five minutes into dinner - then what should we choose to chat about? And this isn’t just a concern for precious helicopter parents. Rather, it’s a relevant topic for researchers trying to design interventions for at-risk children, as they attempt to give caregivers the tools to ensure successful development.

A new answer is emerging. According to a recent paper by the psychologists Karen Salmon and Elaine Reese, one of the best subjects of parent-child conversation is the past, or what they refer to as “elaborative reminiscing.” As evidence, Salmon and Reese cite a wide variety of studies, drawn from more than three decades of research on children between the ages of 18 months and 5 years, all of which converge on a similar theme: discussing our memories is an extremely effective way to promote cognitive and emotional growth. Maybe it’s a scene from our last family vacation, or an accounting of what happened at school that day, or that time I locked my keys in the car - the details of the memory don’t seem to matter that much. What does is that we remember together.

Here’s an example of the everyday reminiscing the scientists recommend:

Mother: “What was the first thing he [the barber] did?”

Child: “Bzzzz.” (running his hand over his head)

Mother: “He used the clippers, and I think you liked the clippers. And you know how I know? Because you were smiling.”

Child: “Because they were tickling.”

Mother: “They were tickling, is that how they felt? Did they feel scratchy?”

Child: “No.”

Mother: “And after the clippers, what did he use then?”

Child: “The spray.”

Mother: “Yes. Why did he use the spray?”

Child: (silent)

Mother: “He used the spray to tidy your hair. And I noticed that you closed your eyes, and I thought ‘Jesse’s feeling a little bit scared,’ but you didn’t move or cry and I thought you were being very brave.”

It’s such an ordinary conversation, but Salmon and Reese point out its many virtues. For one thing, the questions are leading the child through his recent haircut experience. He is learning how to remember, what it takes to unpack a scene, the mechanics of turning the past into a story. Over time, these skills play a huge role in language development, which is why children who engage in more elaborative reminiscing with their parents tend to have more advanced vocabularies, better early literacy scores and improved narrative skills. In fact, one study found that teaching low-income mothers to “reminisce in more elaborative ways” led to bigger improvements in narrative skills and story comprehension than an interactive book-reading program.

But talking about the past isn’t just about turning our kids into better storytellers. It’s also about boosting their emotional intelligence, teaching them how to handle those feelings they’d rather forget. In A Book About Love, I wrote about research showing that children raised in households that engage in the most shared recollection report higher levels of emotional well-being and a stronger sense of personal identity. The family unit also becomes stronger, as children and parents who know more about their past also score higher on a widely used measure of “reported family functioning.” Salmon and Reese expand on these findings, citing research showing that emotional reminiscing is linked to long-term improvements in the ability of children to regulate their negative emotions, handle difficult situations and identify their own feelings and those of others.

Consider the haircut conversation above. Notice how the mother identifies the feelings felt by the child: enjoyment, tickling, fear. She suggests triggers for these emotions - the clippers, the water spray - and helps her son understand their fleeting nature. (Because the feelings are no longer present, they can be discussed calmly. That’s why talking about remembered emotions is often more useful than talking about emotions in the heat of the moment.) The virtue of such dialogues is that they teach children how to cope with their feelings, even when what they feel is fury and fear. As Salmon and Reese note, these are particularly important skills for mothers who have been exposed to adverse or traumatic experiences, such as drug abuse or domestic violence. Studies show that these at-risk parents are much less likely to incorporate “emotion words” when talking with their children. And when they do discuss their memories, Salmon and Reese write, they often “remain stuck in anger.” Their past isn’t past yet.

Perhaps this is another benefit of elaborative reminiscing. When we talk about our memories with loved ones, we translate the event into language, giving that swirl of emotion a narrative arc. (As the psychologist James Pennebaker has written, "Once it [a painful memory] is language based, people can better understand the experience and ultimately put it behind them.") And so the conversation becomes a moment of therapy, allowing us to make sense of what happened and move on. 

It was just a haircut, but you were so brave.   

Salmon, Karen, and Elaine Reese. "The Benefits of Reminiscing With Young Children." Current Directions in Psychological Science 25.4 (2016): 233-238.       


The Overview Effect

After six weeks in orbit, circling the earth in a claustrophobic space station, the three-person crew of Skylab 4 decided to go on strike. For 24 hours, the astronauts refused to work, and even turned off the radio that linked them to Earth. While NASA was confused by the space revolt—mission control was concerned the astronauts were depressed—the men up in space insisted they just wanted more time to admire their view of the earth. As the NASA flight director later put it, the astronauts were asserting “their needs to reflect, to observe, to find their place amid these baffling, fascinating, unprecedented experiences.”

The Skylab 4 crew was experiencing a phenomenon known as the overview effect, which refers to the intense emotional reaction that can be triggered by the sight of the earth from beyond its atmosphere. Sam Durrance, who flew on two shuttle missions, described the feeling like this: “You’ve seen pictures and you’ve heard people talk about it. But nothing can prepare you for what it actually looks like. The Earth is dramatically beautiful when you see it from orbit, more beautiful than any picture you’ve ever seen. It’s an emotional experience because you’re removed from the Earth but at the same time you feel this incredible connection to the Earth like nothing I’d ever felt before.”

The Caribbean Sea, as seen from ISS Expedition 40


What’s most remarkable about the overview effect is that the effect lasts: the experience of awe often leaves a permanent mark on the lives of astronauts. A new paper by a team of scientists (the lead author is David Yaden at the University of Pennsylvania) investigates the overview effect in detail, with a particular focus on how this vision of earth can “settle into long-term changes in personal outlook and attitude involving the individual’s relationship to Earth and its inhabitants.” For many astronauts, this is the view they never get over.

How does this happen? How does a short-lived perception alter one’s identity? There is no easy answer. In this paper, the scientists focus on how the sight of the distant earth is so contrary to our usual perspective that it forces our “self-schema” to accommodate an entirely new point of view. We might conceptually understand that the earth is a lonely speck floating in space, a dot of blue amid so much black. But it’s an entirely different thing to bear witness to this reality, to see our fragile planet from hundreds of miles away. The end result is a changed self: this new view of earth alters one’s perspective on life, with the typical astronaut reporting “a greater affiliation with humanity as a whole.” Here’s Ed Gibson, the science pilot on Skylab 4: “You see how diminutive your life and concerns are compared to other things in the universe. Your life and concerns are important to you, of course. But you can see that a lot of the things you worry about do not make much difference in an overall sense.”

There are two interesting takeaways. The first one, emphasized in the paper, is that the overview effect might serve as a crucial coping mechanism for the challenges of space travel. Astronauts live a grueling existence: they are stressed, isolated and exhausted. They live in cramped quarters, eat terrible food and never stop working. If we are going to get people to Mars, then we need to give astronauts tools to endure their time on a spaceship. As the crew of Skylab 4 understood, one of the best ways to withstand space travel is to appreciate its strange beauty.

The second takeaway has to do with the power of awe and wonder. When you read old treatises on human nature, these lofty emotions are often celebrated. Aristotle argued that all inquiry began with the feeling of awe, that “it is owing to their wonder that men both now begin and at first began to philosophize.” Rene Descartes, meanwhile, referred to wonder as the first of the passions, “a sudden surprise of the soul that brings it to focus on things that strike it as unusual and extraordinary.” In short, these thinkers saw the experience of awe as a fundamental human state, a feeling so strong it could shape our lives.

But now? We have little time for awe in the 21st century; wonder is for the young and unsophisticated. To the extent we consider these feelings, it’s for a few brief moments on a hike in a national park, or to marvel at a child’s face when they first enter Disneyland. (And then we get out our phones and take a picture.) Instead of cultivating awe, we treat it as just another fleeting feeling; wonder is for those who don’t know any better.

The overview effect, however, is a reminder that these emotions can have a lasting impact. Like the Skylab 4 astronauts, we can push back against our hectic schedules, insisting that we find some time to stare out the window.  

Who knows? The view just might change your life.

Yaden, David B., et al. "The overview effect: Awe and self-transcendent experience in space flight." Psychology of Consciousness: Theory, Research, and Practice 3.1 (2016): 1.


How Magicians Make You Stupid

The egg bag magic trick is simple enough. A magician produces an egg and places it in a cloth bag. Then, the magician uses some poor sleight of hand, pretending to hide the egg in his armpit. When the bag is revealed as empty, the audience assumes it knows where the egg really is.

But the egg isn’t there. The armpit was a false solution, distracting the crowd from the real trick: the bag contains a secret compartment. When the magician finally lifts his arm, the audience is impressed by the vanishing. How did he remove the egg from his armpit? It never occurs to them that the egg never left the bag.

Magicians are intuitive psychologists, reverse-engineering the mind and preying on all its weak spots. They build illusions out of our frailties, hiding rabbits in our attentional blind spots and distracting the eyes with hand waves and wands. And while people in the audience might be aware of their perceptual shortcomings (those fingers move so fast!), they are often blind to a crucial cognitive limitation, which allows magicians to keep us from deciphering the trick. In short, magicians know that people tend to fixate on particular answers (the egg is in the armpit), and thus ignore alternative ones (it’s a trick bag), even when the alternatives are easier to execute.

When it comes to problem-solving, this phenomenon is known as the Einstellung effect. (Einstellung is German for “setting” or “attitude.”) First identified by the psychologist Abraham Luchins in the early 1940s, the effect has since been replicated in numerous domains. Consider a study that gave chess experts a series of difficult chess problems, each of which contained two solutions. The players were asked to find the shortest possible way to win. The first solution was obvious and took five moves to execute. The second solution was less familiar, but could be achieved in only three moves. As expected, these expert players found the first solution right away. Unfortunately, most of them then failed to identify the second one, even though it was more efficient. The good answer blinded them to the better one.

Back to magic tricks. A new paper in Cognition, by Cyril Thomas and André Didierjean, extends the reach of the Einstellung effect by showing that it limits our problem-solving abilities even when the false solution is unfamiliar and unlikely. Put another way, preposterous explanations can also become mental blocks, preventing us from finding answers that should be obvious. To demonstrate this, the scientists showed 90 students one of three versions of a card trick. The first version went like this: a performer showed the subject a brown-backed card surrounded by six red-backed cards. After randomly touching the back of the red cards, he asked the subject to choose one of the six, which was turned face up. It was a jack of hearts. The magician then flipped over the brown-backed card at the center, which was also a jack of hearts. The experiment concluded with the magician asking the subject to guess the secret of the trick. In this version, 83 percent of subjects quickly figured it out: all of the cards were the same.

The second version featured the same trick, except that the magician slyly introduced a false solution. Before a card was picked, he explained that he was able to influence other people’s choices through physical suggestions. He then touched the back of the red cards, acting as if these touches could sway the subject’s mind. After the trick was complete, these subjects were also asked to identify the secret. However, most of these subjects couldn’t figure it out: only 17 percent of people realized that every card was the jack of hearts. Their confusion persisted even after the magician encouraged them to keep thinking of alternative explanations.

This is a remarkable mental failure. It’s a reminder that our beliefs are not a mirror to the world, but rather bound up with the limits of the human mind. In this particular case, our inability to see the obvious trick seems to be a side-effect of our feeble working memory, which can only focus on a few bits of information at any given moment. (In an email, Thomas notes that it is more “economical to focus on one solution, and to not lose time…searching for a hypothetical alternative one.”) And so we fixate on the most salient answer, even when it makes no sense. As Thomas points out, a similar lapse explains the success of most mind-reading performances: we are so seduced by the false explanation (parapsychology!) that we neglect the obvious trick, which is that the magician gathered personal information about us from Facebook. The performance works because we lack the bandwidth to think of a far more reasonable explanation.

Thomas and Didierjean end their paper with a disturbing thought. “If a complete stranger (the magician) can fix spectators’ minds by convincing them that he/she can control their individual choice with his own gesture,” they write, “to what extent can an authority figure (e.g., policeman) or someone that we trust (e.g., doctors, politicians) fix our mind with unsuitable ideas?” They don’t answer the question, but they don’t need to. Just turn on the news.

Thomas, Cyril, and André Didierjean. "Magicians fix your mind: How unlikely solutions block obvious ones." Cognition 154 (2016): 169-173.

What Can Toilet Paper Teach Us About Poverty?

“Costco is where you go broke saving money.”

-My Uncle

The fundamental paradox of big box stores is that the only way to save money is to spend lots of it. Want to get a discount on that shampoo? Here's a liter. That’s a great price for chapstick – now you have 32 of them. The same logic applies to most staples of modern life, from diapers to Pellegrino, Uni-ball pens to laundry detergent.

For consumers, this buy-in-bulk strategy can lead to real savings, especially if the alternative is a bodega or Whole Foods. (Brand-name diapers, for instance, cost nearly twice as much at my local grocery store compared to Costco.) However, not every American is equally likely to seek out these discounts. In particular, some studies have found that lower-income households – the ones who could benefit the most from that huge bottle of Kirkland shampoo – pay higher prices because they don’t make bulk purchases.

A new paper, “Frugality is Hard to Afford,” by A. Yesim Orhun and Mike Palazzolo investigates why this phenomenon exists. Their data set featured the toilet paper purchases of more than 100,000 American families over seven years. Orhun and Palazzolo focused on toilet paper for several reasons. First, consumption of toilet paper is relatively constant. Second, toilet paper is easy to store – it doesn’t spoil – making it an ideal product to purchase in bulk, at least if you’re trying to get a discount. Third, the differences between brands of toilet paper are rather small, at least when compared to other consumer products such as detergent and toothpaste.

So what did Orhun and Palazzolo find? As expected, lower-income households were far less likely to take advantage of the lower unit prices that come with bulk purchases. Over time, these shopping habits add up, as the poorest families end up paying, on average, 5.9 percent more per sheet of toilet paper.
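The logic behind these unit-price gaps is simple arithmetic. Here is a minimal sketch, with invented package prices (not figures from the study), showing how per-sheet cost diverges between a small pack and a bulk pack:

```python
def unit_price(price: float, sheets: int) -> float:
    """Cost per sheet for a given package."""
    return price / sheets

# Hypothetical packages, for illustration only.
small_pack = unit_price(1.29, 500)    # e.g., a single roll at a corner store
bulk_pack = unit_price(18.99, 9000)   # e.g., a 30-roll warehouse bundle

# Percentage premium paid by the household that can only buy small.
premium = (small_pack / bulk_pack - 1) * 100
print(f"small pack: {small_pack:.5f} per sheet")
print(f"bulk pack:  {bulk_pack:.5f} per sheet")
print(f"premium for buying small: {premium:.0f}%")
```

With these made-up numbers the small-pack shopper pays roughly a fifth more per sheet; the study’s 5.9 percent figure is an average across all purchases, since even lower-income households sometimes buy in bulk.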

The question, of course, is why this behavior exists. Shouldn’t poor households be the most determined to shop around for cheap rolls? The most obvious explanation is what Orhun and Palazzolo refer to as a liquidity constraint: the poor simply lack the cash to “invest” in a big package of toilet paper. As a result, they are forced to buy basic household supplies on an as-needed basis, which makes it much harder to find the best possible price.

But this is not the only constraint imposed by poverty. In a 2013 Science paper, the behavioral scientists Anandi Mani, Sendhil Mullainathan, Eldar Shafir and Jiaying Zhao argued that not having money also imposes a mental burden, as our budgetary worries consume scarce attentional resources. This makes it harder for low-income households to plan for the future, whether it’s buying toilet paper in bulk or saving for retirement. “The poor, in this view, are less capable not because of inherent traits,” write the scientists, “but because the very context of poverty imposes load and impedes cognitive capacity.”

Consider a clever experiment conducted by Mani et al. at a New Jersey mall. They asked shoppers about various hypothetical scenarios involving a financial problem. For instance, they might be told that their “car is having some trouble and requires $[X] to be fixed.” Some subjects were told that their repair was extremely expensive ($1500), while others were told it was relatively cheap ($150). Then, all participants were given a series of challenging cognitive tasks, including some questions from an intelligence test and a measure of impulse control.

The results were startling. Among rich subjects, it didn’t really matter how much the car cost to fix – they performed equally well when the repair estimate was $150 or $1500. Poor subjects, however, showed a troubling difference. When the repair estimate was low, they performed about as well as rich subjects. But when the repair estimate was high, they suddenly showed a steep drop off in performance on both tests, comparable in magnitude to the mental deficit associated with losing a full night of sleep or becoming an alcoholic.

This new toilet paper study provides some additional evidence that poverty takes a toll on our choices. In one analysis, Orhun and Palazzolo looked at how purchase behavior was altered at the start of the month, when low-income households are more likely to receive paychecks and food stamps. As the researchers note, this influx of money should temporarily ease the stress of being poor, thus making it easier to buy in bulk.

That’s exactly what they found. When the poorest households were freed from their most pressing liquidity constraints, they made much more cost-effective toilet paper decisions. (This also suggests that poorer households are not simply buying smaller bundles due to a lack of storage space or transportation, as these factors are not likely to exhibit week-by-week fluctuation.) Of course, the money didn't last long; the following week, these households reverted to their old habits, overpaying for household products. And so those with the least end up with even less.

Orhun, A. Yesim, and Mike Palazzolo. "Frugality is hard to afford." University of Michigan Working Paper (2016).

Mani, Anandi, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao. "Poverty impedes cognitive function." Science 341 (2013): 976-980.

The Nordic Paradox

By virtually every measure, the Nordic countries – Denmark, Finland, Iceland, Norway and Sweden – are a paragon of gender equality. It doesn’t matter if you’re looking at the wage gap or political participation or educational attainment: the Nordic region is the most gender-equal place in the world.

But this equality comes with a disturbing exception: Nordic women also suffer from intimate partner violence (IPV) at extremely high rates. (IPV is defined by the CDC as the experience of “physical violence, sexual violence, stalking and psychological aggression by a current or former intimate partner.”) While the average lifetime prevalence for intimate partner violence for women living in Europe is 22 percent – a horrifyingly high number by itself – Nordic countries perform even worse. In fact, Denmark has the highest rate of IPV in the EU at 32 percent, closely followed by Finland (30 percent) and Sweden (28 percent). And it’s not just violence from partners: other surveys have looked at violence against women in general. Once again, the Nordic countries had some of the highest rates of violence in the EU, as measured by reports of sexual assault, physical abuse or emotional abuse.

A new paper in Social Science & Medicine by Enrique Gracia and Juan Merlo refers to the existence of these two realities – gender equality and high rates of violence against women – as the Nordic paradox. It’s a paradox because a high risk of IPV for women is generally associated with lower levels of gender equality, particularly in poorer countries. (For example, 71 percent of Ethiopian women have suffered from IPV.) This makes intuitive sense: a country that disregards the rights of women, or fails to treat them as equals, also seems more likely to tolerate their abuse.

And yet, the same logic doesn’t seem to apply at the other extreme of gender equality. As Gracia and Merlo note, European countries with lower levels of gender equality, such as Italy and Greece, also report much lower levels of IPV (roughly 30 percent lower) than Nordic nations.

What explains this paradox? Why hasn’t the gender equality of Nordic countries reduced violence against women? That’s the tragic mystery investigated by Gracia and Merlo.

One possibility is that the paradox is caused by differences in reporting, as women in Nordic countries might feel more free to disclose the abuse. This also makes intuitive sense: if you live in a country with higher levels of gender equality, then you might be less likely to fear retribution when accusing a partner, or telling the police about a sex crime. (In Saudi Arabia, only 3.3 percent of women who suffered from IPV told the police or a judge.) However, Gracia and Merlo cast doubt on this explanation, noting that the available evidence suggests lower levels of disclosure of IPV among women in the Nordic countries. For instance, while 20 percent of women in Europe said that the most serious incident of IPV they’d experienced was brought to the attention of the police, only 10 percent of women in Denmark and Finland could say the same thing. The same trend is supported by other data, including rape statistics and “victim blaming” surveys. Finally, even if part of the Nordic paradox was a reporting issue, this would only reinforce the real mystery, which is that gender equal societies still suffer from epidemic levels of violence against women.

The main hypothesis advanced by Gracia and Merlo – and it’s only a hypothesis – is that high gender equality might create a backlash effect among men, triggering high levels of violence against women. Because gender equality disrupts traditional gender norms, it might also reinforce “victim-blaming attitudes,” in which the violence is excused or justified. Gracia and Merlo cite related studies showing that women with “higher economic status relative to their partners can be at greater IPV risk depending on whether their partners hold more traditional gender beliefs.” For these backwards men, the success of women is perceived as a threat, an undermining of their identity. This backlash is further exacerbated by women becoming more independent and competitive in gender-equal societies, thus increasing the potential for conflict with partners who insist on control and subservience. Progress leaves some people behind, and those people tend to get angry.

At best, the backlash effect is only a partial explanation for the Nordic Paradox. Gracia and Merlo argue that a real understanding of the prevalence of IPV – why is it still so common, even in developed countries? – will require looking beyond national differences and instead investigating the risk factors that affect the individual. How much does he drink? What is her employment status? Do they live together? What is the neighborhood like? Even brutish behaviors have complicated roots; we need a thick description of life to understand them.  

On the one hand, the Nordic paradox is a testament to liberal values, a reminder that thousands of years of gender inequality can be reversed in a few short decades. The progress is real. But it’s also a reminder that progress is difficult, full of strange backlashes and reversals. Two steps forward, one step back. Or is it the other way around? We can see the moral universe bending, but goddamn is it slow.

Gracia, Enrique, and Juan Merlo. "Intimate partner violence against women and the Nordic paradox." Social Science & Medicine 157 (2016): 27-30.

via MR

Did "Clean" Water Increase the Murder Rate?

The construction of public waterworks across the United States in the late 19th and early 20th centuries was one of the great infrastructure investments in American history. As David Cutler and Grant Miller have demonstrated, these waterworks accounted for “nearly half of the total mortality reduction in major cities, three-quarters of the infant mortality reduction, and nearly two-thirds of the child mortality reduction.” Within a generation, the scourge of waterborne infectious diseases – from cholera to typhoid fever – was largely eliminated. Moving to a city no longer took years off your life, a sociological trend that unleashed untold amounts of human innovation.

However, not all urban waterworks were created equal. Some systems were built with metal pipes containing large amounts of lead. (At the time, lead pipes were considered superior to iron pipes, as they were more durable and easier to bend.) Unfortunately, these pipes leached lead particulates into the water, exposing city dwellers to water that tasted clean but was actually a poison.

Over the last few decades, researchers have amassed an impressive body of evidence linking lead exposure in childhood to a tragic list of symptoms, including higher rates of violent crime and lower scores on the IQ test. (One study found that lead levels are four times higher among convicted juvenile offenders than among non-delinquent high school students.) In 2014, I wrote about a paper by Jessica Wolpaw Reyes that documented the association between leaded gasoline and violent crime:

Reyes concluded that “the phase-out of lead from gasoline was responsible for approximately a 56 percent decline in violent crime” in the 1990s. What’s more, Reyes predicted that the Clean Air Act would continue to generate massive societal benefits in the future, “up to a 70 percent drop in violent crime by the year 2020.” And so a law designed to get rid of smog ended up getting rid of crime. It’s not the prison-industrial complex that keeps us safe. It’s the EPA.

But these studies have their limitations. For one thing, the pace at which states reduced their use of leaded gas might be related to other social or political variables that influence the crime rate. It’s also possible that those neighborhoods with the highest risk of lead poisoning might suffer from additional maladies linked to crime, such as poverty and poor schools. To convincingly demonstrate that lead causes crime, researchers need to find a credible source of variation in lead exposure that is completely independent (aka exogenous) to the factors that might shape criminal behavior.

That is the goal of a new paper by James Feigenbaum and Christopher Muller. Their study mines the historical record, drawing from homicide data between 1921 and 1936 (when the first generation of children exposed to lead pipes had reached adulthood) and the materials used to construct each urban water system. If lead was responsible for higher crime rates, then those cities with higher lead content in their pipes (and also more acidic water, which leaches out the lead) should also experience larger spikes in crime decades later.

What makes this research strategy especially useful is that the decision to use lead pipes in a city’s water system was based in part on its proximity to a lead refinery. (Cities that were closer to a refinery were more likely to invest in lead pipes, as the lower transportation costs made the “superior” option more affordable.) In addition, Feigenbaum and Muller were able to look at how the lead content of pipes interacted with the acidity of a city’s water supply, thus allowing them to further isolate the causal role of lead.

The results were clear: cities that used lead pipes had homicide rates that were between 14 and 36 percent higher than cities that opted for cheaper iron pipes.

These violent crime increases are especially striking given that those cities using lead pipes tended to be wealthier, better educated and more “health conscious” than those that did not. All things being equal, one might expect these places to have lower rates of violent crime. But because of a little-noticed engineering decision, the water of these cities contained a neurotoxin, which interfered with brain development and made it harder for their residents to rein in their emotions.

The brain is a plastic machine, molded by its environment. When we introduce a new technology – and it doesn’t matter if it’s an urban water system or the smartphone – it’s often impossible to predict the long-term consequences. Who would have guessed that the more expensive lead pipes would lead to spikes in crime decades later? Or that the heavy use of road salt in the winter would lead to a 21st century water crisis in Flint, as the chloride ions pull lead out of the old pipes?

One day, the scientists of the future will study our own blind spots, as we invest in technologies that mess with the mind in all sorts of subtle ways. History reminds us that these tradeoffs are often unexpected. After all, it took decades before we realized that, for some unlucky cities, even clean water came with a terrible cost.

Feigenbaum, James, and Christopher Muller. "Lead Exposure and Violent Crime in the Early Twentieth Century." Explorations in Economic History (2016).

The Importance of Learning How to Fail

“An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field.” -Niels Bohr

Carol Dweck has devoted her career to studying how our beliefs about intelligence influence the way we learn. In general, she finds that people subscribe to one of two different theories of mental ability. The first theory is known as the fixed mindset – it holds that intelligence is a fixed quantity, and that each of us is allotted a certain amount of smarts we cannot change. The second theory is known as the growth mindset. It’s more optimistic, holding that our intelligence and talents can be developed through hard work and practice. "Do people with this mindset believe that anyone can be anything?" Dweck asks. "No, but they believe that a person's true potential is unknown (and unknowable); that it's impossible to foresee what can be accomplished with years of passion, toil, and training."

You can probably guess which mindset is more useful for learning. As Dweck and colleagues have repeatedly demonstrated, children with a fixed mindset tend to wilt in the face of challenges. For them, struggle and failure are a clear sign they aren’t smart enough for the task; they should quit before things get embarrassing. Those with a growth mindset, in contrast, respond to difficulty by working harder. Their faith in growth becomes a self-fulfilling prophecy; they get smarter because they believe they can.

The question, of course, is how to instill a growth mindset in our children. While Dweck is perhaps best known for her research on praise – it’s better to compliment a child for her effort than for her intelligence, as telling a kid she’s smart can lead to a fixed mindset – it remains unclear how children develop their own theories about intelligence. What makes this mystery even more puzzling is that, according to multiple studies, the mindsets of parents are surprisingly disconnected from the mindsets of their children. In other words, believing in the plasticity of intelligence is no guarantee that our kids will feel the same way.

What explains this disconnect? One possibility is that parents are accidental hypocrites. We might subscribe to the growth mindset for ourselves, but routinely praise our kids for being smart. Or perhaps we tell them to practice, practice, practice, but then get frustrated when they can’t master fractions, or free throws, or riding without training wheels. (I’m guilty of both these sins.) The end result is a muddled message about the mind’s potential.

However, in an important new paper, Kyla Haimovitz and Carol Dweck reveal the real influence behind the mindsets of our children. It turns out that the crucial variable is not what we think about intelligence – it’s how we react to failure.

Consider the following scenario: a child comes home with a bad grade on a math quiz. How do you respond? Do you try to comfort the child and tell him that it’s okay if he isn’t the most talented? Do you worry that he isn’t good at math? Or do you encourage him to describe what he learned from doing poorly on the test?

Parents with a failure-is-debilitating attitude tend to focus on the importance of performance: doing well on the quiz, succeeding at school, getting praise from other people. When confronted with the specter of failure, these parents get anxious and worried. Over time, their children internalize these negative reactions, concluding that failure is a dead-end, to be avoided at all costs. If at first you don’t succeed, then don’t try again.

In contrast, those parents who see failure as part of the learning process are more likely to see the bad grade as an impetus for extra effort, whether it’s asking the teacher for help or trying a new studying strategy. They realize that success is a marathon, requiring some pain along the way. You only learn how to get it right by getting it wrong.

According to the scientists, this is how our failure mindsets get inherited – our children either learn to focus on the appearance of success or on the long-term rewards of learning. Over time, these attitudes towards failure shape their other mindsets, influencing how they feel about their own potential. If they work harder, can they get good at math? Or is algebra simply beyond their reach?

Although the scientists found that children were bad at guessing the intelligence mindsets of their parents – they don’t know if we’re in the growth or fixed category – the kids were surprisingly good at predicting their parents’ relationship to failure. This suggests that our failure mindsets are much more “visible” than our beliefs about intelligence. Our children might forget what happened after the home run, but they damn sure remember what we said after the strikeout.

This study helps clarify the forces that shape our children. What matters most is not what we say after a triumph or achievement – it’s how we deal with their disappointments. Do we pity our kids when they struggle? (Sympathy is a natural reaction; it also sends the wrong message.) Do we steer them away from potential defeats? Or do we remind them that failure is an inescapable part of life, a state that cannot be avoided, only endured? Most worthy things are hard.

Haimovitz, K., and C. S. Dweck. "What Predicts Children's Fixed and Growth Intelligence Mind-Sets? Not Their Parents' Views of Intelligence but Their Parents' Views of Failure." Psychological Science (2016).


Is Tanking an Effective Strategy in the NBA?

In his farewell manifesto, former Philadelphia 76ers General Manager Sam Hinkie spends 13 pages explaining away the dismal performance of his team, which has gone 47-199 over the last three seasons. Hinkie’s main justification for all the losses involves the consolation of draft picks, which are the NBA’s way of rewarding the worst teams in the league. Here’s Hinkie:

"In the first 26 months on the job we added more than one draft pick (or pick swap) per month to our coffers. That’s more than 26 new picks or options to swap picks over and above the two per year the NBA allots each club. That’s not any official record, because no one keeps track of such records. But it is the most ever. And it’s not close. And we kick ourselves for not adding another handful."

This is the tanking strategy. While the 76ers have been widely criticized for their consistent indifference to winning games, Hinkie argues that the losing was a necessary by-product of their competitive position in 2013, when he took over as GM. (According to a 2013 ESPN ranking of each NBA team’s three-year winning potential, the 76ers ranked 24th out of 30.) And so Hinkie, with his self-described “reverence for disruption” and “contrarian mindset,” set out to take a “long view” of basketball success. The best way for the 76ers to win in the future was to keep on losing in the present.

Hinkie is a smart guy. At the very least, he was taking advantage of the NBA’s warped incentive structure, which can lead to a “treadmill of mediocrity” among teams too good for the lottery but too bad to succeed in the playoffs. However, Hinkie’s devotion to tanking – and his inability to improve the team’s performance – does raise an interesting set of empirical questions. Simply put: is tanking in the NBA an effective strategy? (I’m a Lakers fan, so it would be nice to know.) And if tanking doesn’t work, why not?

A new study in the Journal of Sports Economics, published six days before Hinkie’s resignation, provides some tentative answers. In the paper, Akira Motomura, Kelsey Roberts, Daniel Leeds and Michael Leeds set out to determine whether or not it “pays to build through the draft in the National Basketball Association.” (The alternative, of course, is to build through free agency and trades.) Motomura et al. rely on two statistical tests to make this determination. The first test is whether teams with more high draft picks (presumably because they tanked) improve at a faster rate than teams with fewer such picks. The second test is whether teams that rely more on players they have drafted for themselves win more games than teams that acquire players in other ways. The researchers analyzed data from the 1995 to 2013 NBA seasons.
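The first of those tests amounts to a regression: does a team’s three-year change in wins fall as its count of recent high draft picks rises? Here is a minimal sketch of that idea in Python. The data below are entirely made up for illustration (the real study uses actual season records and additional controls), and the helper function is my own, not from the paper.

```python
# Illustrative sketch of the paper's first test: regress a team's
# three-year change in wins on its number of recent top-10 draft picks.
# The data points here are SYNTHETIC, invented only to mirror the
# direction of the published result (more high picks, less improvement).

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Each row: (top-10 picks in recent drafts, change in wins 3 years later)
teams = [(0, 8), (0, 5), (1, 2), (1, -1), (2, -5), (3, -9)]

picks = [t[0] for t in teams]
delta_wins = [t[1] for t in teams]

slope = ols_slope(picks, delta_wins)
print(f"change in wins per extra top-10 pick: {slope:.1f}")
```

A negative slope in this kind of regression is what the authors report: on their real data, an extra pick in the 4-to-10 range was associated with roughly 6 to 9 fewer wins three years later.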

What did they find? The punchline is clear: building through the draft is not a good idea. Based on the data, Motomura et al. conclude that “recent high draft picks do not help and often reduce improvement,” as teams with one additional draft pick between 4 and 10 can be expected to lose an additional 6 to 9 games three years later. Meanwhile, those teams lucky enough to have one of the first three picks should limit their expectations, as those picks tend to have “little or no impact” on team performance. The researchers are blunt: “Overall, having more picks in the Top 17 slots of the draft does not help and tends to be associated with less improvement.”

There are a few possible explanations for why the draft doesn’t rescue bad teams. The most likely source of failure is the sheer difficulty of selecting college players, even when you’re selecting first. (One study found that draft order predicts only about 5 percent of a player’s performance in the NBA.) For every Durant there are countless Greg Odens; Hinkie’s own draft record is a testament to the intrinsic uncertainty of picking professional athletes.

That said, some general managers appear to be far better at evaluating players. “While more and higher picks do not generally help teams, having better pickers does,” write the scientists. They find, for instance, that R.C. Buford, the GM of the Spurs, is worth an additional 23 to 29 wins per season. Compare that to the “Wins Over Replacement” generated by Stephen Curry, who has just finished one of the best regular season performances in NBA history. According to Basketball Reference, Curry was worth an additional 26.4 wins during the 2015-2016 regular season. If you believe these numbers, R.C. Buford is one of the most valuable (and underpaid) men in the NBA.

So it’s important to hire the best GM. But this new study also finds franchise effects that exist independently of the general manager, as certain organizations are simply more likely to squeeze wins from their draft picks. The researchers credit these franchise differences largely to player development, especially when it comes to “developing players who might not have been highly regarded entering the NBA.” This is evidence that “winning cultures” are a real thing, and that a select few NBA teams are able to consistently instill the habits required to maximize the talent of their players. Draft picks are nice. Organizations win championships. And tanking is no way to build an organization.

In his manifesto, Hinkie writes at length about the importance of bringing the rigors of science to the uncertainties of sport: “If you’re not sure, test it,” Hinkie writes. “Measure it. Do it again. See if it repeats.” Although previous research by the sports economist Dave Berri has cast doubt on the effectiveness of tanking, this new paper should remind every basketball GM that the best way to win over the long term is to develop a culture that doesn’t try to lose.

Motomura, Akira, et al. “Does It Pay to Build Through the Draft in the National Basketball Association?” Journal of Sports Economics, March 2016.