Is Facebook Bad for Democracy?

We are living in an era of extreme partisanship. As documented by the Pew Research Center, majorities of people in both parties now express “very unfavorable” views of the other side, with most concluding that the policies of the opposition “are so misguided that they threaten the nation’s well-being.” Seventy-nine percent of Republicans approve of Trump’s performance as president, while 79 percent of Democrats disapprove. In many respects, party affiliation has become the lens through which we see the world; even the Super Bowl can’t escape the stink of politics.

There are two ways of understanding these divisions.

The first is to look at the historical parallels. Partisanship, after all, is as American as apple pie and SUVs. George Washington, in his farewell address, warned that the rise of political parties might lead to a form of “alternate domination,” as the parties would gradually “incline the minds of men to seek security... in the absolute power of an individual.” In the election of 1800, his prophecy almost came true, as several states were preparing to summon their militias if Jefferson lost. Our democracy has always been a contact sport.

But there’s another way of explaining the political splintering of the 21st century. Instead of seeing our current divide as a continuation of old historical trends, this version focuses on the impact of new social media. Donald Trump is not the latest face of our factional republic—he’s the first political figure to fully take advantage of these new information technologies.

Needless to say, this second hypothesis is far more depressing. We know our democracy can handle partisan passions. It’s less clear it can survive Facebook.

Why might technology be cratering our public discourse? To answer this question, a new paper in PLOS ONE by a team of Italian researchers at the IMT School for Advanced Studies Lucca and Brian Uzzi at Northwestern looked at 12 million users of Facebook and YouTube. They began by identifying 413 different Facebook pages that could be sorted into one of two categories: Conspiracy or Science. Conspiracy pages were those that featured, in the delicate wording of the scientists, “alternative information sources and myth narratives—pages which disseminate controversial information, usually lacking supporting evidence and most often contradictory of the official news.” (Examples include Infowars, the Fluoride Action Network and the ironically named I Fucking Love Truth.) Science pages, meanwhile, were defined as those having “the main mission of diffusing scientific knowledge.” (Examples include Nature, Astronomy Magazine and EurekAlert!)

The researchers then looked at how users interacted with videos appearing on these sites on both Facebook and YouTube, tracking comments, shares and likes between January 2010 and December 2014. As you can probably guess, many users began the study watching videos from only one of the two categories. (These people are analogous to voters with entrenched party affiliations.) The researchers, however, were most interested in those users who interacted with both categories; these folks liked Neil deGrasse Tyson and Alex Jones. Think of them as analogous to registered Democrats who voted for Trump, or Republicans who might vote for a Democratic congressperson in the 2018 midterms.

Here’s where things get unsettling. After just fifty interactions on YouTube and Facebook, most of these “independents” started watching videos exclusively from one side. Their diversity of opinions gave way to uniformity, their quirkiness subsumed by polarization. The filter bubble won. And it won fast.

Why does the online world encourage polarization? The scientists focus on two frequently cited forces. The first, and most powerful, is confirmation bias, the tendency to seek out information that confirms our pre-existing beliefs. It’s much more fun to learn about why we’re right (Fluoride = cancer) than to consider the possibility we might be wrong (Fluoride is a safe and easy way to prevent tooth decay). Entire media empires have been built on this depressing insight.

The second force driving online polarization is the echo chamber effect. Most online platforms (such as the Facebook News Feed) are controlled by algorithms designed to give us a steady drip of content we want to see. That’s a benign aspiration, but what it often means in practice is that the software filters out dissent and dissonance. If you liked an Infowars video about the evils of vaccines, then Facebook thinks you might also like their videos about fluoride. (This helps explain why previous research has found that more active Facebook users tend to get their information from a smaller number of news sources.) “Inside an echo chamber, the thing that makes people’s thinking evolve is the even more extreme point of view,” Uzzi said in a recent interview with Anne Ford. “So you become even more left-wing or even more right-wing.” The end result is an ironic affliction: we are more certain than ever, but we understand less about the world.
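To get a feel for how these two forces feed on each other, consider a toy simulation of a single “independent” user, written in Python. This is my own illustrative sketch, not the researchers’ model: the feed serves more of whichever category the user has already engaged with (the echo chamber), and the user engages more readily with the familiar side (confirmation bias).

```python
import random

def simulate_user(n_interactions=50, bias=0.7, seed=None):
    """One hypothetical 'independent' user. `bias` is the probability of
    engaging with belief-consistent content; the feed weights each category
    by the user's past engagement. Illustrative only."""
    rng = random.Random(seed)
    counts = {"science": 1, "conspiracy": 1}  # starts open to both sides
    for _ in range(n_interactions):
        total = counts["science"] + counts["conspiracy"]
        # the algorithm serves more of whatever the user engaged with before
        shown = "science" if rng.random() < counts["science"] / total else "conspiracy"
        # confirmation bias: the familiar side is more likely to get a click
        leaning = max(counts, key=counts.get)
        p_engage = bias if shown == leaning else 1 - bias
        if rng.random() < p_engage:
            counts[shown] += 1
    return max(counts.values()) / (counts["science"] + counts["conspiracy"])

shares = [simulate_user(seed=i) for i in range(1000)]
print(f"average dominant-category share after 50 interactions: {sum(shares)/len(shares):.2f}")
```

Even with these mild assumptions, the positive feedback loop pushes most simulated users toward a single category within a few dozen interactions, which is roughly the dynamic the researchers observed.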

This finding jibes nicely with another new paper that directly tested the impact of filtered newsfeeds. In a clever lab experiment, Ivan Dylko and colleagues showed that feeds similar to those on Facebook led people to spend far less time reading articles that contradicted their political beliefs. Dylko et al. end on a somber note: “Taken together, these findings show that customizability technology can undermine important foundations of deliberative democracy. If this technology becomes even more popular, we can expect these detrimental effects to increase.”

The obvious solution to these problems is to engage in more debunking. If people are seeking out fake news and false conspiracies, then we should confront them with real facts. (This is what Facebook is trying to do, as it now appends fact-checking links to debunked articles in the News Feed.) Alas, the evidence suggests that this strategy might backfire. A previous paper by several of the Italian scientists found that Facebook users prone to conspiracy thinking react to contradictory information by “increasing their engagement within the conspiracy echo chamber.” In other words, when people are told they’re wrong, they don’t revise their beliefs. They just work harder to prove themselves right. It’s cognitive dissonance all the way down.

It was only a few generations ago that most Americans got their news from a few old white men on television. We could choose between Walter Cronkite (CBS), John Chancellor (NBC) and Harry Reasoner (ABC). It was easy to assume that Americans wanted this shared public discourse, or at least a fact-checked voice of authority, which is why nearly 30 million people watched Cronkite every night.* But now it’s clear that we only watched these shows because we had no choice—their appeal depended on the monopoly of network television. Once this monopoly disappeared, and technology gave us the ability to curate our own news, we flocked to what we really wanted: a platform catering to our biases and beliefs.

Tell me I’m right, but call it the truth.

Bessi, Alessandro, Fabiana Zollo, Michela Del Vicario, Michelangelo Puliga, Antonio Scala, Guido Caldarelli, Brian Uzzi, and Walter Quattrociocchi. "Users Polarization on Facebook and Youtube." PLOS ONE 11, no. 8 (2016): e0159641.

*The shared public discourse reduced political partisanship. In the 1950s, the American Political Science Association published a report fretting about the lack of ideological distinction between the two parties. The lack of overt partisanship, it said, might be undermining voter participation.

 

Nobody Knows Anything (NFL Draft Edition)

Pity the Cleveland Browns fan. Seemingly every year, the poor performance of the team leads to a high first-round pick: in this year’s draft, the Browns are making the first selection. And every year the team squanders the high pick, either by trading down and missing a superstar (Julio Jones in 2011) or trading up for a pick that didn’t pan out (Johnny Manziel in 2014, Trent Richardson in 2012, Brady Quinn in 2007, et al.). The draft is supposed to be a source of hope, a consolation prize for all the failures of the past. But for the hapless Browns, it has become yet another reminder of their chronic struggles.

This blog is not another critique of a pitiful team. The Browns might have a terrible track record in the draft, but I’m here to tell you that it’s not their fault. And that’s for a simple reason: picking college players is largely a crapshoot, a game of dice played with young athletes. The Browns might not know how to identify the college players with the most potential, but there’s little evidence that anybody else does, either. 

It’s not for lack of trying. Every year, professional football teams invest a huge amount of time and effort into choosing which college players to take with their draft picks. This is for the obvious reason: picks are extremely valuable. (Because the NFL has a strict cap on rookie salaries, new players are significantly underpaid, at least compared to their veteran colleagues.) Given the high stakes involved, it seems reasonable to assume that teams would have developed effective methods of identifying those players most likely to succeed in the pros. 

But they haven’t. That, at least, is the conclusion of a 2013 analysis of the NFL draft by Cade Massey and Richard Thaler. Consider one of their damning pieces of evidence, which involves the likelihood that a given player performs better in the NFL than the next player chosen in the draft at his position. As Massey and Thaler note, this is the practical question that teams continually face in the draft, as they debate the advantages of trading up to acquire a specific athlete.

Unfortunately, there is virtually no evidence that teams know what they’re doing: only 52 percent of picks outperform those players chosen next at the same position. “Across all rounds, all positions, all years, the chance that a player proves to be better than the next-best alternative is only slightly better than a coin flip,” write the economists. Or consider this statistic, which should strike fear into the heart of every NFL general manager: over their first five years in the league, draft picks from the first round have more seasons with zero starts (15.3 percent) than seasons that end with a selection to the Pro Bowl (12.8 percent). While draft order is roughly correlated with talent – players taken early tend to have better professional careers – Massey writes in an email that he “considers differences between team performance in the draft to be, effectively, all chance.” The Browns aren’t stupid, just unlucky.

If teams admitted their ignorance, they could adjust their strategy accordingly. They could discount their scouting analysis and remember that college performance is only weakly correlated with NFL output. They might even explore new player assessment strategies, as the old ones don't seem to work very well. 

Alas, teams routinely act as if they can identify the best players, which is what leads them to trade up for more valuable picks. But this is precisely the wrong approach. As proof, Massey and Thaler compute a statistic they call “surplus value,” which reflects the worth of a player’s performance (as calculated by the pay scale of NFL veterans) minus his actual compensation. “If picks are valued by the surplus they produce, then the first pick in the first round is the worst pick in the round, not the best,” write the economists. “In paying a steep price to trade up, teams are paying a lot to acquire a pick that is worth less than the ones they are giving up.”
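To make the surplus-value arithmetic concrete, here is a back-of-the-envelope sketch with invented dollar figures (the actual paper prices performance using what NFL veterans earn for comparable production). The point is the mechanism: if rookie compensation falls off faster down the draft board than expected performance does, later first-round picks deliver more surplus than the first overall pick.

```python
# Toy illustration of Massey and Thaler's surplus-value logic.
# All figures are hypothetical, in millions of dollars per year.
picks = [
    # (pick number, expected performance value, rookie compensation)
    (1, 6.0, 5.0),
    (5, 5.0, 3.5),
    (10, 4.4, 2.5),
    (20, 3.6, 1.5),
    (32, 3.0, 1.0),
]

for number, value, cost in picks:
    surplus = value - cost  # what the pick is actually worth to the team
    print(f"pick {number:>2}: value {value:.1f}M, cost {cost:.1f}M, surplus {surplus:.1f}M")
```

With these made-up numbers, the first pick yields the smallest surplus in the round, exactly the pattern the economists report.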

Why are most NFL teams so bad at the draft? The main culprit is what Massey and Thaler refer to as “overconfidence exacerbated by information.” Teams assume their judgments about prospective players are more accurate than they are, especially when they amass large amounts of data and analytics. What they fail to realize is that much of this information isn’t predictive, and that it’s almost certainly framed by the same biases and blind spots that limit our assessments of other people in everyday life. As Massey and Thaler write: “The problem is not that future performance is difficult to predict, but that decision makers do not appreciate how difficult it is.” 

There is something deeply sobering about the limits of draft intelligence among NFL teams. These are athletes, after all, whose performance has been measured by a dizzying array of advanced stats; they have been scouted for years and run through a gauntlet of psychological and physical assessments. (As the economists write, “football teams almost certainly are in a better position to predict performance than most employers choosing workers.”) However, even in this rarefied domain, the mystery of human beings still dominates. We live in the age of big data and sabermetrics, which means that it’s harder than ever to know what we don’t know. But this paper is an important reminder that such meta-knowledge is essential—when we ignore the error bars, we’re much more likely to make a very big mistake.

Bill Belichick, the coach of the New England Patriots (and former coach of the Cleveland Browns!), has won lots of games by pushing back against the curse of overconfidence. If Belichick has a signature move in the draft, it’s trading down, swapping a high pick for multiple less valuable ones. (Under Belichick, the Patriots have gained more than 25 extra draft picks this way.) If teams could reliably assess talent, this strategy would make little sense, since it would mean giving up on superstars. However, given the near impossibility of predicting elite player performance, gaining more picks is an astute move. Since nobody knows whom to choose, the only way to play is to make a lot of bets.

Massey, Cade, and Richard H. Thaler. "The loser's curse: Decision making and market efficiency in the National Football League draft." Management Science 59.7 (2013): 1479-1495.

When Is Ignorance Bliss?

Aristotle’s Metaphysics begins with a seemingly obvious truth: “All men by nature desire to know.” According to Aristotle, this desire for knowledge is our defining instinct, the quality that sets our mind apart. As the cognitive psychologist George Miller put it, we are informavores, blessed with a boundless appetite for information.

It’s a comforting vision. However, like all dictums about human nature, it also comes with plenty of caveats and exceptions. Take spoiler alerts. It’s hard to read an article about a work of entertainment that doesn’t contain a spoiler warning. The assumption of these warnings, of course, is that people don’t want to know, at least when it comes to narratives.

And it’s not just the latest twists in Scandal that we’re trying to avoid. Twenty percent of Malawian adults at risk for HIV decline to get the results of their HIV test, even when offered cash incentives; approximately 10 percent of Canadians with a family history of Huntington’s disease choose not to undergo genetic testing. (Even James Watson declined to have his risk of Alzheimer’s revealed.) These are just specific examples of a larger phenomenon. Given the advances in genetic testing and biomarkers, the Aristotelian model would predict that we’d all become subscribers to 23andMe. But that’s not happening.

A new paper in Psychological Review by Gerd Gigerenzer and Rocio Garcia-Retamero explores the motives of our willful ignorance. They begin by establishing its prevalence, surveying more than 2000 German and Spanish adults about various forms of future knowledge. Their results are clear proof that most of us want spoiler alerts for real life: between 85 and 90 percent of subjects say they don’t want to know when or why their partner will die. (They feel the same way about their own death.) They also don’t want to know if their marriage will eventually end in divorce. This preference for ignorance even applies to positive events: between 40 and 70 percent of subjects don't want to know about their future Christmas gifts, or who won the big soccer match, or the gender of their next child.

To understand our reasons for ignorance, Gigerenzer and Garcia-Retamero asked subjects about their risk attitudes. They found that people who are more risk-averse (as measured by their insurance purchases and their choices playing a simple lottery game) are more likely to prefer not knowing. While this might appear counterintuitive—learning how you will die might help reduce the risk of dying—Gigerenzer and Garcia-Retamero explain these results in terms of anticipatory regret. People avoid risks because they don’t want to regret those losing gambles. They avoid life spoilers for a similar reason, as they’re trying to avoid regretting the decision to know.

On the one hand, this intuition has a logical sheen. It’s not that ignorance is bliss—it’s just better than knowing that life can be shitty and full of suffering. Knowing exactly how we’ll suffer might only make it worse. The same principle also applies to the good stuff: we think we'll be less happy if we know about our happiness in advance. Life is like a joke—it's not so funny if we get the punchline first.

But there’s also some compelling evidence that our intuitions about regretting future knowledge are wrong. For one thing, it’s not clear that spoilers spoil anything. Consider a 2011 study by Jonathan Leavitt and Nicholas Christenfeld. The scientists gave several dozen undergraduates twelve different short stories. The stories came in three different flavors: ironic-twist stories (such as Chekhov’s “The Bet”), straight-up mysteries (“A Chess Problem” by Agatha Christie) and “literary stories” by writers like Updike and Carver. Some subjects read the story as is, without a spoiler. Some read the story with a spoiler carefully embedded in the actual text, as if Chekhov himself had given away the end. And some read the story with a spoiler disclaimer in the preface.

Here’s the shocking twist: the scientists found that almost every single story, regardless of genre, was more pleasurable when prefaced with some sort of spoiler. It doesn’t matter if it’s Harry Potter or Hamlet: an easy way to make a good story even better is to spoil it at the start. As the scientists write, “Erroneous intuitions about the nature of spoilers may persist because individual readers are unable to compare spoiled and unspoiled experiences of a novel story. Other intuitions about suspense may be similarly wrong: Perhaps birthday presents are better when wrapped in cellophane, and engagement rings when not concealed in chocolate mousse.”

In fiction as in life: we assume our pleasure depends on ignorance. However, Leavitt and Christenfeld argue that spoilers enhance narrative pleasure by letting readers pay more attention to developments along the way. Because we know the destination, we’re better able to enjoy the journey. 

There's more to life than how it ends.

Gigerenzer, Gerd, and Rocio Garcia-Retamero. "Cassandra’s regret: The psychology of not wanting to know." Psychological Review 124.2 (2017): 179-196.

Why College Should Become A Lottery

Barry Schwartz, a psychologist at UC Berkeley and Swarthmore, does not think much of the college admissions process. In a new paper, he tells a story about a friend who spent an afternoon with a high-school student. His friend was impressed by the student and, for the first time in thirty years of teaching, decided to send a note to the dean of admissions. Despite the note, the student did not get in. Schwartz describes what happened next:

“Curious, my friend asked the dean why. ‘No reason,’ said the dean. ‘No reason?,’ replied my friend, somewhat incredulous. ‘Yes, no reason. I can’t tell you how many applicants we reject for no reason.’”

For Schwartz, such stories are a sign of a broken system. Although colleges pretend to be paragons of meritocracy, their selection methods are rife with randomness. “Despite their very best efforts to make the selection process rational and reasonable, admissions people are, in effect, running a lottery,” Schwartz writes. “To get into Harvard (or Stanford, or Yale, or Swarthmore), you need to be good...and you need to be lucky.”

Schwartz devotes much of his article to the severe negative consequences inflicted by this capricious selection process. He begins by lamenting the ways in which it discourages students from experimenting, both inside and outside the classroom. Because teenagers are so terrified of failure—Harvard requires perfection!—they refuse to take classes that might end with the crushing disappointment of a B+. Over time, this can lead to high-school students who “may look better than ever before” but are probably learning less.

But wait: it gets worse. Much worse. Suniya Luthar, a professor of psychology at Arizona State University, has spent the last several years documenting the emotional toll of the college competition on upper-middle-class children. Although these affluent kids lead enviable lives on paper—they have educated white-collar parents and high test scores, and they attend elite high schools—they are roughly twice as likely to suffer from the symptoms of depression and anxiety as the national average. They are also far more likely to have eating disorders and meet the diagnostic criteria for substance abuse.

There are, of course, countless variables driving this epidemic of mental health issues among affluent teenagers. (Maybe it’s Snapchat’s fault? Or a side-effect of helicopter parenting?) However, Luthar argues that one of the main causes is what she calls the “pressure to achieve.” The problem with the pressure is that it’s a double-edged sword. If a student’s achievements fall short, then he feels inadequate. However, even if a student gets straight As, she probably still lives in what Luthar calls “a state of fear of not achieving.” Over time, that chronic sense of fear can lead to anxiety disorders and depression; kids are burned out on stress before they even leave their childhood homes.

How can we fix this competitive morass? Schwartz offers a provocative solution. (In an email, he observes that he first offered this proposal a decade ago. In the years since, it’s only gotten more necessary.) The first phase of his plan involves filtering applicants using the same academic standards currently in place. Schwartz estimates that these standards—GPA, SAT scores, extracurricular activities, etc.—could cut the applicant pool by up to two-thirds. But here’s the crucial twist: after this initial culling, all of the acceptable students would be entered into an admissions lottery. The winners would be drawn at random.
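The mechanics of the proposal are simple enough to sketch in a few lines of Python. This is my own minimal rendering, with an invented screening rule standing in for whatever criteria a school might define: a filter for “good enough,” followed by a random draw.

```python
import random

def lottery_admissions(applicants, is_good_enough, n_slots, seed=None):
    """Schwartz's two-stage proposal, sketched: screen out applicants who
    don't meet the bar, then admit the rest purely at random."""
    pool = [a for a in applicants if is_good_enough(a)]
    rng = random.Random(seed)
    return rng.sample(pool, min(n_slots, len(pool)))

# toy example: 30,000 applicants, a composite-score cutoff, 2,000 slots
applicants = [{"id": i, "score": random.gauss(70, 10)} for i in range(30000)]
admits = lottery_admissions(applicants, lambda a: a["score"] >= 80, 2000, seed=1)
print(f"{len(admits)} admitted from a qualified pool of "
      f"{sum(a['score'] >= 80 for a in applicants)}")
```

Note that nothing in the sketch requires the screening rule to be a single number; `is_good_enough` could weigh essays, oboes, or Alaska residency just as easily.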

Such a lottery system, Schwartz writes, would offer multiple advantages over our current fake meritocracy. For one thing, it would be much less stressful for teenagers to strive to be “good enough” rather than the best; high-achieving students wouldn’t have to be the highest achieving. This, in turn, would “free students up to do the things they were really passionate about.” Instead of chasing extrinsic rewards—does Stanford need an oboe player?—adolescents would be free to follow their sense of intrinsic motivation.* By becoming less selective, Schwartz says, selective colleges would end up with happier and more well-rounded students.

The hybrid lottery system would also force colleges to be more transparent about their selection methods. Right now, the admissions process is a black box; such secrecy is what allows colleges to accept legacies and reject otherwise qualified students for no particular reason. However, if the schools were forced to define their lottery cut-off, they would have to reflect on the measurements that actually predict academic success. And this doesn’t mean the criteria must be quantitative. As Schwartz notes, “criteria for ‘good enough’ can be sufficiently flexible that applicants who are athletes, violinists, minorities, or from Alaska get ‘credit’ for these characteristics,” just as in the current system.

The most obvious objection to Schwartz’s lottery system is ethical. For many people, it just seems wrong to base a major life decision on a roll of the dice. But here’s the thing: the college application process is already a crapshoot. (The differences used to differentiate applicants—say, 10 points on the SAT—are often smaller than the amount of error in the assessments.) By making the lottery explicit, students and schools would at least be forced to have a candid conversation about the role of luck in life. Instead of taking full credit for our admission, or blaming ourselves for our rejection, we’d admit that much of success is random chance and pure contingency. Perhaps, Schwartz writes, this might make students a little “more empathic when they encounter people who may be just as deserving as they are, but less lucky.”

Schwartz is best known for his research on the pitfalls of the maximizing decision-making strategy, in which people obsess over finding the best possible alternative. The problem with this approach, Schwartz and colleagues have repeatedly found, is that it ends up making us miserable. Instead of being satisfied with a perfectly acceptable option, we get stressed about finding a better one. And then, once we make a choice, studies show that maximizers end up drenched in regret, fixated on their foregone options. We’re trained to be maximizers by consumer culture—who wants to settle for the second best laundry detergent?—but it’s usually a shortcut to a sad life.

This new paper extends the maximizing critique to higher education. In Schwartz’s telling, the college application process is a particularly powerful example of how the maximizing approach can lead us astray. Given the inherent uncertainty of matching students and colleges, Schwartz argues that it’s foolish to try to find the ideal school. Rather, we should practice an approach that Herbert Simon called satisficing, in which we search for colleges that are good enough. After all, the evidence suggests we can be equally happy at a multitude of places.

This, perhaps, is the greatest virtue of the lottery proposal: by making it impossible for students to act like maximizers—chance chooses for them—it gives them a life lesson in the power of satisficing. Instead of wasting their dreams on a dream school, they can follow their adolescent passions and embrace the chanciness of life. You can’t always get exactly what you want. But if you practice satisficing, you just might get what you need.

*The danger of replacing intrinsic motivation with extrinsic rewards was first demonstrated in a classic study of preschoolers. Some of the young children were told they would get a reward for drawing with pens. You might think this would encourage the kids to draw even more. It didn’t. Instead, those children given an “expected reward” were less likely to use the pens in the future. (And when they did use the pens, they spent less time drawing.) The extrinsic rewards, said the scientists, had turned “play into work.”

Schwartz, Barry (2016) “Why Selective Colleges Should Become Less Selective—And Get Better Students,” Capitalism and Society: Vol. 11: Iss. 2.

The Headwinds Paradox (Or Why We All Feel Like Victims)

When you are running into the wind, the air feels like a powerful force. It’s blowing you back, slowing you down, an annoying obstacle making your run that much harder.

And then you turn around and the headwind becomes a tailwind. The air that had been pushing you back is now propelling you forward. But here’s the question: do you still notice it?

Probably not. Simply put, headwinds are far more salient than tailwinds. When it comes to exercise, we fixate on the barrier and ignore the boost.

In a new paper, the psychologists Shai Davidai and Thomas Gilovich show that this same asymmetry is present across many aspects of life, and not just when we’re running on a windy day.

As evidence, Davidai and Gilovich conducted a number of clever studies. In the first experiment, they asked people which political party was advantaged or disadvantaged by the rules of American democracy, such as the electoral college. As expected, partisans on both sides believed their side suffered from the headwinds: Democrats were convinced the political system favored Republicans, and Republicans believed it favored Democrats. Interestingly, the size of the effect was moderated by the level of political engagement, with more engagement leading to a stronger sense of unfairness. In short, the more you think about American politics the more convinced you are that the system is stacked against you. (In fairness to Democrats, recent history suggests they might be right.)

A similar effect was also observed among football fans, who were much more likely to notice the difficult games on their team’s upcoming schedule than the easy ones. The headwinds/tailwinds asymmetry even shaped the career beliefs of academics, as people in a given sub-discipline believed they faced more hurdles than those in other sub-disciplines.

And then there’s family life, that rich vein of grievance. When the psychologists asked siblings if their parents had been harder on the older or younger child, their answers depended largely on their own position in the family. Older children were convinced that their parents had gone easy on their little siblings, while younger siblings insisted the discipline had been evenly distributed. Mom always loves someone else the most.

According to Davidai and Gilovich, the underlying cause of the headwind effect is the availability heuristic, in which our judgment is distorted by the ease with which relevant examples come to mind. First described by Kahneman and Tversky, the availability heuristic is why people think tornadoes are deadlier than asthma—tornadoes generate headlines, even though asthma takes 20 times more lives—and why spouses tend to overestimate their share of household chores. (We remember that time we took out the garbage; we don’t remember all those times we didn’t.) As Timur Kuran and Cass Sunstein point out, the availability bias might be “the most fundamental heuristic” of them all, constantly distorting our judgments of frequency and probability. We see through a glass, darkly; the availability heuristic is often what makes the glass so dark.

This new paper shows how the availability bias can even warp our life narratives. We think our memory reflects the truth; it feels like a fair accounting of events. In reality, though, it’s a story tilted towards resentment, since it’s so much easier for us to remember every slight, wound and obstacle.

Why does this matter? Didn’t we already know that our memory is mostly bullshit? Davidai and Gilovich argue that this particular mnemonic flaw comes with serious practical consequences. For one thing, the headwind effect makes it harder for us to experience gratitude, which research shows is associated with higher levels of happiness, fewer hospitalizations and a more generous approach towards others. Because we take the tailwinds of life for granted—the headwinds consume all our attention—we have to work to notice our blessings. We easily remember who hurt us; we soon forget who helped us.

This effect can even shape public policy, limiting our interest in helping the less fortunate. We’re so biased towards our adversities that we can’t empathize with the adversities of others, even when they might be far more challenging. And since we tend to neglect our God-given advantages—good parents, silver spoons, etc.—we discount the role they played in our success. The end result is a series of false beliefs about what it takes to succeed.

In a recent interview, Rob Lowe lamented the obstacles that had limited his early career opportunities. Handsome actors like himself, he said, are subject to “an unbelievable bias and prejudice against quote-unquote good-looking people.”

We’re all victims. Even beauty is a headwind.

Davidai, Shai, and Thomas Gilovich. "The headwinds/tailwinds asymmetry: An availability bias in assessments of barriers and blessings." Journal of Personality and Social Psychology 111.6 (2016): 835. 

Fewer Friends, Better Marriages: The Modern American Social Network

In A Book About Love, I wrote about research showing that the social networks of Americans have been shrinking for decades. Miller McPherson, a sociologist at the University of Arizona and Duke University, has helped document the decline. In 1985, 26.1 percent of respondents reported discussing important matters with a “comember of a group,” such as a church congregant. In 2004, McPherson found that the percentage had fallen to 11.8. In 1985, 18.5 percent of subjects had important conversations with their neighbors. That number shrank to 7.9 percent two decades later. Other studies have reached similar conclusions. Robert Putnam, for instance, has used the DDB Needham Life Style Surveys to show that the average married couple entertained friends at home approximately fifteen times per year in the 1970s. By the late 1990s, that number was down to eight, “a decline of 45 percent in barely two decades.”

These surveys raise the obvious question: If we’re no longer socializing with our neighbors, or having dinner parties with our friends, then what the hell are we doing? 

One possibility is screens. Conversation is hard; it’s much easier to chill with Netflix and the cable box. According to this depressing speculation, technology is an enabler of loneliness, allowing us to forget how isolated we’ve become. 

But there’s another possibility. While it seems clear that we’re spending less time with our friends and acquaintances (texting doesn’t count), we might be spending more time with our spouses and children. (McPherson found, for instance, that the percentage of Americans who said their spouse was their “only confidant” nearly doubled between 1985 and 2004.) If true, this would suggest that our social network isn’t fraying so much as it’s gradually becoming more focused and intimate.

A new paper by Katie Genadek, Sarah Flood and Joan Garcia Roman at the University of Minnesota, drawing from time-use survey data collected between 1965 and 2012, aims to answer these questions. Their data provides a fascinating portrait of the social trends shaping the lives of American families.

I’ll start with the punchline: on average, spouses are spending more time with each other than they did in 1965. This trend is particularly visible among married couples with children. Here are the scientists: “In 1965, individuals with children spent about two hours per day with both their spouse and child(ren); by 2012 this had increased 50 minutes to almost three hours.” Instead of bowling with neighbors, we’re taking our kids to soccer practice.

Of course, when it comes to togetherness time, quality matters more than quantity. One cynical explanation for the increase in family time is that much of it might involve screens. Maybe we’re not hanging out—we’re just sharing a wifi network. But the data doesn’t seem to show that. In 1975, couples spent 79 minutes watching television together. In 2012, that number had increased by only 13 minutes. What’s more, spouses are still making time for shared activities that don’t involve TV. Although our total amount of leisure time has remained remarkably constant – Keynes’ leisure society has not come to pass – we are more likely to spend this free time with our spouse.

This is particularly true among couples with children. The big news buried in this time-use data is that parents are doing a lot more parenting. In 1965, parents spent 41 minutes engaged in “primary care” for their little ones. That number had more than doubled, to 88 minutes, in 2012. We’re also far more likely to parent together, with the number of minutes spent as a family unit more than quadrupling from 6 minutes in 1965 to 27 minutes in 2012. This increase in family time comes despite the sharp increase in women working outside the home.

It’s so easy to despair about the state of the world. What’s important to remember, however, is that these more intimate benchmarks of life are trending in the right direction. Amid all the calls to make America great again, we’re liable to forget that the greatest generations spent staggeringly little time with their families. The nuclear family is supposed to be disintegrating, but these time diaries show us the opposite, as Americans are choosing to spend an increasing percentage of their time with their partner and children.

What makes this survey data more compelling is that it jibes with recent research showing the growing role played by our spouses in determining our own life happiness. In a separate study based on data from 47,000 couples, Genadek and Flood found that individuals are nearly twice as happy when they are with their spouse as when they’re not. Meanwhile, a recent meta-analysis of ninety-three studies by the psychologist Christine Proulx found that the rewards of a good marriage have surged in recent decades, with the most loving couples providing a bigger lift to the “personal well-being” of the partners. In fact, the influence of a good marriage on overall levels of life satisfaction has nearly doubled since the late 1970s. Given this happiness boost, it shouldn’t be too surprising that we’re spending more time with our spouses. If we’re lucky, we already live with the people who make us happiest.

Genadek, Katie R., Sarah M. Flood and Joan Garcia Roman. “Trends in Spouses’ Shared Time in the United States, 1965-2012.” Demography (2016)  

Why Facebook Rules the World

One day, when historians tell the strange story of the 21st century, this age of software and smartphones, populism and Pokemon, they will focus on a fundamental shift in the way people learn about the world. Within the span of a generation, we went from watching the same news shows on television, and reading the same newspapers in print, to getting a personalized feed of everything that our social network finds interesting, as filtered by a clever algorithm. The main goal of the algorithm is to keep us staring at the screen, increasing the slight odds that we might click on an advertisement.

I’m talking, of course, about Facebook. Given the huge amount of attention Facebook commands—roughly 22 percent of the internet time Americans spend on their mobile devices is spent on the social network—it has generated a relatively meager amount of empirical research. (It didn't help that the company’s last major experiment became a silly controversy.) Furthermore, most of the research that does exist explores the network’s impact on our social lives. In general, these studies find small, mostly positive correlations between Facebook use and a range of social measures: our Facebook friends are not the death of real friendship.

What this research largely overlooks, however, is a far more basic question: why is Facebook so popular? What is it about the social network (and social media in general) that makes it so attractive to human attention? It’s a mystery at the heart of the digital economy, in which fortunes hinge on the allocation of eyeballs.

One of the best answers for the appeal of Facebook comes from a 2013 paper by a team of researchers at UCSD. (First author Laura Mickes, senior authors Christine Harris and Nicholas Christenfeld.) Their paper begins with a paradox: the content of Facebook is often mundane, full of what the scientists refer to as “trivial ephemera.” Here’s a random sampling of my current feed: there’s an endorsement of a new gluten-free pasta, a smattering of child photos, emotional thoughts on politics and a post about a broken slide at the local park. As the scientists point out, these Facebook “microblogs” are full of quickly composed comments and photos, an impulsive record of everyday life.

Such content might not sound very appealing, especially when there is so much highly polished material already competing for our attention. (Why read our crazy uncle on the election when there’s the Times?) And yet, the “microblog” format has proven irresistible: Facebook’s “news” feed is the dominant information platform of our century, with nearly half of Americans using it as a source for news.  This popularity, write the scientists, “suggests that something about such ‘microblogging’ resonates with human nature.”

To make sense of this resonance, the scientists conducted some simple memory experiments. In their first study, they compared the mnemonic power of Facebook posts to sentences from published books. (The Facebook posts were taken from the feeds of five research assistants, while the book sentences were randomly selected from new titles.) The subjects were shown 100 of these stimuli for three seconds each. Then they were given a recognition test consisting of these stimuli along with another 100 “lures” (similar content they had not seen) and asked to assess their confidence, on a twenty-point scale, as to whether they had previously been exposed to a given stimulus.

According to the data, the Facebook posts were much more memorable than the published sentences. (This effect held even after controlling for sentence length and the use of “irregular typography,” such as emoticons.) But this wasn’t because people couldn’t remember the sentences extracted from books – their performance here was on par with other studies of textual memory. Rather, it was largely due to the “remarkable memorability” of the Facebook posts. Their content was trivial. It was also unforgettable.
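A quick note on how “memorability” gets scored in a recognition test like this one. A standard measure is d′ (sensitivity): the gap between the hit rate on old items and the false-alarm rate on lures, expressed in z-units. Here is a minimal sketch with invented counts, purely to show the computation; the real values are in the paper.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Recognition sensitivity: z(hit rate) minus z(false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# hypothetical counts for 100 old items and 100 lures per condition
print(f"Facebook posts: d' = {d_prime(85, 15, 20, 80):.2f}")   # ~1.88
print(f"book sentences: d' = {d_prime(65, 35, 30, 70):.2f}")   # ~0.91
```

A higher d′ means subjects were better at telling old items from lures, which is the sense in which the Facebook posts were “more memorable” than polished prose.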

In a follow-up condition, the scientists replaced the book sentences with photographs of human faces. (They also gathered a new collection of Facebook posts, to make sure their first set wasn’t an anomaly.) Although it’s long been argued that the human brain is “specially designed to process and store facial information,” the scientists found that the Facebook posts were still far easier to remember.

This is not a minor effect: the difference in memory performance between Facebook posts and these other stimuli is roughly equivalent to the difference between people with amnesia due to brain damage and those with a normal memory. What’s more, this effect exists even when the Facebook content is about people we don’t even know. Just imagine how memorable it is when the feed is drawn from our actual friends.

To better understand the mnemonic advantage of microblogs, the scientists ran several additional experiments. In one study, they culled text from CNN.com, drawing from both the news and entertainment sections. The text came in three forms: headlines, sentences from the articles, and reader comments. As you can probably guess, the reader comments were much more likely to be remembered, especially when compared to sentences from the articles. Subjects were also better at remembering content from the entertainment section, at least compared to news content.

Based on this data, the scientists argue that the extreme memorability of Facebook posts is being driven by at least two factors. The first is that people are drawn to “unfiltered, largely unconsidered postings,” whether it’s a Facebook microblog or a blog comment. When it comes to text, we don’t want polish and reflection. We want gut and fervor. We want Trump’s tweets.

The second factor is the personal filter of Facebook, which seems to take advantage of our social nature.  We remember random updates from our news feed for the same reason we remember all the names of the Pitt-Jolie children: we are gossipy creatures, perpetually interested in the lives of others.

This research helps explain the value of Facebook, which is currently the 7th most valuable company in the world. The success of the company, which sells ads against our attention, is ultimately dependent on our willingness to read the haphazard content produced by other people for free. This might seem like a bug, but it’s actually an essential feature of the social network. “These especially memorable Facebook posts,” write the scientists, “may be far closer than professionally crafted sentences to tapping into the basic language capacities of our minds. Perhaps the very sentences that are so effortlessly generated are, for that reason, the same ones that are readily remembered.” While traditional media companies assume people want clean and professional prose, it turns out that we’re compelled to remember the casual and flippant. The problem, of course, is that the Facebook news algorithm is tuned to maximize attention, not truth, which can lead to the spread of sticky lies. When our private feed is full of memorable falsehoods, what happens to public discourse?

And it’s not just Facebook: the rise of the smartphone has encouraged a parallel rise in informal messaging. (We’ve gone from email to emojis in a few short years.) Consider Snapchat, the social network du jour. Its entire business model depends on the eagerness of users to consume raw visual content, produced by friends in the grip of System 1. In a universe overflowing with professional video content, it might seem perverse that we spend so much time watching grainy videos of random events. But this is what we care about. This is what we remember.

The creation of content used to be a professional activity. It used to require moveable type and a printing press and a film crew. But digital technology democratized the tools. And once that happened, once anyone could post anything, we discovered an entirely new form of text and video. We learned that the most powerful publishing platform is social, because it embeds the information in a social context. (And we are social animals.) But we also learned about our preferred style, which is the absence of style: the writing that sticks around longest in our memory is what seems to take the least amount of time to create. All art aspires to the condition of the Facebook post. 

Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., & Christenfeld, N. J. (2013). Major memory for microblogs. Memory & Cognition, 41(4), 481-489.

The Psychology of the Serenity Prayer

One of the essential techniques of Cognitive-Behavioral Therapy (CBT) is reappraisal. It’s a simple enough process: when you are awash in negative emotion, you should reappraise the stimulus to make yourself feel better.

Let’s say, for instance, that you are stuck in traffic and are running late to your best friend’s birthday party. You feel guilty and regretful; you are imagining all the mean things people are saying about you. “She’s always late!” “He’s so thoughtless.” “If he were a good friend, he’d be here already.”

To deal with this loop of negativity, CBT suggests that you think of new perspectives that lessen the stress. The traffic isn’t your fault. Nobody will notice. Now you get to finish this interesting podcast.

It’s an appealing approach, rooted in CBT’s larger philosophy that the way an individual perceives a situation is often more predictive of his or her feelings than the situation itself. 

There’s only one problem with reappraisal: it might not work. For instance, a recent meta-analysis showed that the technique is only modestly useful at modulating negative emotions. What’s worse, there’s suggestive evidence that, in some contexts, reappraisal may actually backfire. According to a 2013 paper by Allison Troy et al., among people who were stressed about a controllable situation—say, being fired because of poor work performance—better reappraisal ability was associated with higher levels of depression.

Why doesn’t reappraisal always work? One possible answer involves an old hypothesis known as strategy-situation fit, first outlined by Richard Lazarus and Susan Folkman in the late 1980s. This approach assumes that there is no universal fix for anxiety and depression, no single tactic that always grants us peace of mind. Instead, we must match our coping strategy to the situation, as its effectiveness will depend on the larger context.

A new paper by Simon Haines et al. (senior author Peter Koval) in Psychological Science provides new evidence for the strategy-situation fit model. While previous research has suggested that the success of reappraisal depends on the nature of the stressor—it’s only useful when we can’t control the source of the stress—these Australian researchers wanted to measure the relevant variables in the real world, and not just in the lab. To do this, they designed a new smartphone app that pushed out surveys at random moments. Each survey asked their participants a few questions about their use of reappraisal and the controllability of their situation. These responses were then correlated with several questionnaires measuring well-being and mental health.

The results confirmed the importance of strategy-situation fit. According to the data, people with lower levels of well-being (they had more depressive symptoms and/or stress) used reappraisal in the wrong contexts, increasing their use of the technique when they were in situations they perceived as controllable. For example, instead of leaving the house earlier, or trying to perform better at work, people with poorer “strategy-situation fit” might spend time trying to talk themselves into a better mood. People with higher levels of well-being, in contrast, were more likely to use reappraisal at the right time, when they were confronted with situations they felt they could not control. (Bad weather, mass layoffs, etc.) This leads Haines et al. to conclude that, “rather than being a panacea, reappraisal may be adaptive only in relatively uncontrollable situations.”

Why doesn’t reappraisal help when we can influence the situation? One possibility is that focusing on our reaction might make us less likely to take our emotions seriously. We’re so focused on changing our thoughts—think positive!—that we forget to seek an effective solution. 

Now for the caveats. The most obvious limitation of this paper is that the researchers relied on subjects to assess the controllability of a given situation; there were no objective measurements. The second limitation is the lack of causal data. Because this was not a longitudinal study, it’s still unclear if higher levels of well-being are a consequence or a precursor of more strategic reappraisal use. How best to deal with our emotions is an ancient question. It won’t be solved anytime soon.

That said, this study does offer some useful advice for practitioners and patients using CBT. As I noted in an earlier blog, there is worrying evidence that CBT has gotten less effective over time, at least as measured by its ability to reduce depressive symptoms. (One of the leading suspects behind this trend is the growing popularity of the treatment, which has led more inexperienced therapists to begin using it.) While more study is clearly needed, this research suggests ways in which standard CBT might be improved. It all comes down to an insight summarized by the great Reinhold Niebuhr in the Serenity Prayer:

God, grant me the serenity to accept the things I cannot change,

Courage to change the things I can,

And wisdom to know the difference.                                         

That’s wisdom: tailoring our response based on what we can and cannot control. Serenity is a noble goal, but sometimes the best way to fix ourselves is to first fix the world.

Haines, Simon J., et al. "The Wisdom to Know the Difference: Strategy-Situation Fit in Emotion Regulation in Daily Life Is Associated With Well-Being." Psychological Science (2016): 0956797616669086.

How Southwest Airlines Is Changing Modern Science

The history of science is largely the history of individual genius. From Galileo to Einstein, Isaac Newton to Charles Darwin, we tend to celebrate the breakthroughs achieved by a mind working by itself, seeing more reality than anyone has ever seen before.

It’s a romantic narrative. It’s also obsolete. As documented in a pair of Science papers by Stefan Wuchty, Benjamin Jones and Brian Uzzi, modern science is increasingly a team sport: more than 80 percent of science papers are now co-authored. These teams are also producing the most influential research, as papers with multiple authors are 6.3 times more likely to get at least 1000 citations. The era of the lone genius is over.

What’s causing the dramatic increase in scientific collaboration? One possibility is that the rise of teams is a response to the increasing complexity of modern science. To advance knowledge in the 21st century, one has to master an astonishing amount of information and experimental know-how; because we have discovered so much, it’s harder to discover something new. (In other words, the mysteries that remain often exceed the capabilities of the individual mind.) This means that the most important contributions now require collaboration, as people from different specialties work together to solve extremely difficult problems.

But this might not be the only reason scientists are working together more frequently. Another possibility is that the rise of teams is less about shifts in knowledge and more about the increasing ease of interacting with other researchers. It’s not about science getting hard. It’s about collaboration getting easy.

While it seems likely that both of these explanations are true—the trend is probably being driven by multiple factors—a new paper emphasizes the changes that have reduced the costs of academic collaboration. To do this, the economists Christian Catalini, Christian Fons-Rosen and Patrick Gaule looked at what happens to scientific teams after Southwest Airlines enters a metropolitan market. (On average, the entrance of Southwest leads to a roughly 20 percent reduction in fares and a 44 percent increase in passengers.) If these research partnerships are held back by practical obstacles—money, time, distance, etc.—then the arrival of Southwest should lead to a spike in teamwork.

That’s exactly what they found. According to the researchers, after Southwest begins a new route, collaborations among scientists increase across every scientific discipline. (Physicists increase their collaborations by 26 percent, while biologists seem to really love cheap airfare: their collaborations increase by 85 percent.) To better understand these trends, and to rule out some possible confounds, Catalini et al. zoomed in on collaborations among chemists. They tracked the research produced by 819 pairs of chemists between 1993 and 2012. Once again, they found that the entry of Southwest into a new market leads to an approximately 30 percent spike in collaboration among chemists living near the new routes. What’s more, this trend towards teamwork showed no signs of existing before the arrival of the low-cost airline.

At first glance, it seems likely that these new collaborations triggered by Southwest will produce research of lower quality. After all, the fact that the scientists waited to work together until airfares were slightly cheaper suggests that they didn’t think their new partnership would create a lot of value. (A really enticing collaboration should have been worth a more expensive flight, especially since the arrival of Southwest didn’t significantly increase the number of direct routes.) But that isn’t what Catalini et al. found. Instead, they discovered that Southwest’s entry into a market led to an increase in higher quality publications, at least as measured by the number of citations. Taken together, these results suggest that cheaper air travel is not only redrawing the map of scientific collaboration, but fundamentally improving the quality of research.

There is one last fascinating implication of this dataset. The spread of Southwest paralleled the rise of the Internet, as it became far easier to communicate and collaborate using digital tools, such as email and Skype. In theory, these virtual interactions should make face-to-face conversations unnecessary. Why put up with the hassle of air travel when there’s FaceTime? Why meet in person when there’s Google Docs? The Death of Distance and all that.

But this new paper is a reminder that face-to-face interactions are still uniquely valuable. I’ve written before about the research of Isaac Kohane, a professor at Harvard Medical School. A few years ago, he published a study that looked at the influence of physical proximity on the quality of research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Geography turned out to be a crucial variable: when coauthors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were located within ten meters of each other, while the least cited papers tended to emerge from collaborators who were a kilometer or more apart.

Even in the 21st century, the best way to work together is to be together. The digital world is full of collaborative tools, but these tools are still not a substitute for meetings that take place in person.* That’s why we get on a plane.

Never change, Southwest.

Catalini, Christian, Christian Fons-Rosen, and Patrick Gaulé. "Did cheaper flights change the geography of scientific collaboration?" SSRN Working Paper (2016). 

* Consider a study that looked at the spread of Bitnet, a precursor to the internet. As one might expect, the computer network significantly increased collaboration among electrical engineers at connected universities. However, the boost in collaboration was far larger among engineers who were within driving distance of each other. Yet more evidence for the power of in-person interactions comes from a 2015 paper by Catalini, which looked at the relocation of scientists following the removal of asbestos from Paris Jussieu, the largest science university in France. He found that science labs that had been randomly relocated to the same area were 3.4 to 5 times more likely to collaborate. Meatspace matters.

Do Social Scientists Know What They're Talking About?

The world is lousy with experts. They are everywhere: opining in op-eds, prognosticating on television, tweeting out their predictions. These experts have currency because their opinions are, at least in theory, grounded in their expertise. Unlike the rest of us, they know what they’re talking about.

But do they really? The most famous study of political experts, led by Philip Tetlock at the University of Pennsylvania, concluded that the vast majority of pundits barely beat random chance when it came to predicting future events, such as the winner of the next presidential election. They spun out confident predictions but were rarely held accountable when those predictions proved wrong. The end result was a public sphere that rewarded overconfident blowhards. Cable news, Q.E.D.

While the thinking sins identified by Tetlock are universal – we’re all vulnerable to overconfidence and confirmation bias – it’s not clear that the flaws of political experts can be generalized to other forms of expertise. For one thing, predicting geopolitics is famously fraught: there are countless variables to consider, interacting in unknowable ways. It’s possible, then, that experts might perform better in a narrower setting, attempting to predict the outcomes of experiments in their own field.

A new study, by Stefano DellaVigna at UC Berkeley and Devin Pope at the University of Chicago, aims to put academic experts to this more stringent test. They assembled 208 experts from the fields of economics, behavioral economics and psychology and asked them to forecast the impact of different motivators on the performance of subjects doing an extremely tedious task. (They had to press the “a” and “b” buttons on their keyboard as quickly as possible for ten minutes.) The experimental conditions ranged from the obvious – paying for better performance – to the subtle, as DellaVigna and Pope also looked at the influence of peer comparisons, charity and loss aversion. What makes these questions interesting is that DellaVigna and Pope already knew the answers: they’d run these motivational studies on nearly 10,000 subjects. The mystery was whether or not the experts could predict the actual results.

To make the forecasting easier, the experts were given three benchmark conditions and told the average number of presses, or “points,” in each condition. For instance, when subjects were told that their performance would not affect their payment, they averaged only 1,521 points. However, when they were paid 10 cents for every 100 points, they averaged 2,175 total points. The experts were asked to predict the number of points in fifteen additional experimental conditions.

The good news for experts is that these academics did far better than Tetlock’s pundits. When asked to predict the average points in each condition, they demonstrated the wisdom of crowds: their predictions were off by only 5 percent. If you’re a policy maker, trying to anticipate the impact of a motivational nudge, you’d be well served by asking a bunch of academics for their opinions. 
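Why does pooling help so much? Individual forecasts are noisy in different directions, and averaging cancels the noise. Here is a toy demonstration in Python, with invented forecasts rather than the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented setup: the true average score in some condition is 2,000
# points, and 200 hypothetical experts forecast it with noisy,
# slightly biased judgment.
truth = 2000
forecasts = truth + rng.normal(loc=50, scale=300, size=200)

individual_error = np.abs(forecasts - truth) / truth   # each expert alone
crowd_error = abs(forecasts.mean() - truth) / truth    # pooled forecast

print(f"median individual error: {np.median(individual_error):.1%}")
print(f"pooled (crowd) error:    {crowd_error:.1%}")
# Idiosyncratic noise cancels out in the average; only the shared
# bias survives, so the crowd beats the typical individual expert.
```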

The bad news is that, on an individual level, these academics still weren’t very good. They might have looked prescient when their answers were pooled together, but the results were far less impressive if you looked at the accuracy of experts in isolation. Perhaps most distressing, at least for the egos of experts, is that non-scientists were better than the academics at ranking the treatments against each other, forecasting which conditions would be most and least effective. (As DellaVigna pointed out in an email, this is less a consequence of expert failure and more a tribute to the fact that non-experts did “amazingly well” at the task.) The takeaway is straightforward: there might be predictive value in a diverse group of academics, but you’d be foolish to trust the forecast of a single one.

Furthermore, there was shockingly little relationship between academic credentials and forecasting performance. Full professors tended to underperform assistant professors, while having more Google Scholar citations was correlated with lower levels of accuracy. (PhD students were “at least as good” as their bosses.) Academic experience clearly has virtues. But making better predictions about experiments does not seem to be one of them.

Since Tetlock published his damning critique of political pundits, he has gone on to study so-called “superforecasters,” those amateurs whose predictions of world events are consistently more accurate than those of intelligence analysts with access to classified information. (In general, these superforecasters share a particular temperament: they’re willing to learn from their mistakes, quick to update their beliefs and tend to think in shades of gray.) After mining the data, DellaVigna and Pope were able to identify their own superforecasters. As a group, these non-experts significantly outperformed the academics, improving on the average error rate of the professors by more than 20 percent. These people had no background in behavioral research. They were paid $1.50 for 10 minutes of their time. And yet, they were better than the experts at predicting research outcomes.

The limitations of expertise are best revealed by the failure of the experts to foresee their own shortcomings. When the academics were surveyed by DellaVigna and Pope, they predicted that high-citation experts would be significantly more accurate. (The opposite turned out to be true.) They also expected PhD students to underperform the professors – that didn’t happen, either – and predicted that academics with training in psychology would perform best. (The data points in the opposite direction.)

It’s a poignant lapse. These experts have been trained in human behavior. They have studied our biases and flaws. And yet, when it comes to their own performance, they are blind to their own blind spots. The hardest thing to know is what we don’t.

DellaVigna, Stefano, and Devin Pope. "Predicting Experimental Results: Who Knows What?" NBER Working Paper (2016).

The Power of Family Memory

In a famous series of studies conducted in the 1980s, the psychologists Betty Hart and Todd Risley gave parents a new variable to worry about: the number of words they speak to their children. According to Hart and Risley, the quantity of spoken language in a household is predictive of IQ scores, vocabulary size and overall academic success. The language gap even begins to explain socio-economic disparities in educational outcomes, as upper-class parents speak, on average, about 3.5 times more to their kids than their poorer peers. Hart and Risley referred to the lack of spoken words in poor households as "the early catastrophe."

In recent years, however, it’s become clear that it’s not just the amount of language that counts. Rather, researchers have found that some kinds of conversations are far more effective at promoting mental and emotional development than others. While all parents engage in roughly similar amounts of so-called “business talk” – these are interactions in which the parent is offering instructions, such as “Hold out your hands,” or “Stop whining!” – there is far more variation when it comes to what Hart and Risley called “language dancing,” or conversations in which the parent and child are engaged in a genuine dialogue. According to a 2009 study by researchers at the UCLA School of Public Health, parent-child dialogues were six times as effective in promoting the development of language skills as those in which the adult did all the talking.

So conversation is better than instruction; dialogues over monologues. But this only leads to the next practical question: What’s the best kind of conversation to have with children? If we only have a limited amount of “language dancing” time every day – my kids usually start negotiating for dessert roughly five minutes into dinner – then what should we choose to chat about? And this isn’t just a concern for precious helicopter parents. Rather, it’s a relevant topic for researchers trying to design interventions for at-risk children, as they attempt to give caregivers the tools to ensure successful development.

A new answer is emerging. According to a recent paper by the psychologists Karen Salmon and Elaine Reese, one of the best subjects of parent-child conversation is the past, or what they refer to as “elaborative reminiscing.” As evidence, Salmon and Reese cite a wide variety of studies, drawn from more than three decades of research on children between the ages of 18 months and 5 years, all of which converge on a similar theme: discussing our memories is an extremely effective way to promote cognitive and emotional growth. Maybe it’s a scene from our last family vacation, or an accounting of what happened at school that day, or that time I locked my keys in the car – the details of the memory don’t seem to matter that much. What does matter is that we remember together.

Here’s an example of the everyday reminiscing the scientists recommend:

Mother: “What was the first thing he [the barber] did?”

Child: “Bzzzz.” (running his hand over his head)

Mother: “He used the clippers, and I think you liked the clippers. And you know how I know? Because you were smiling.”

Child: “Because they were tickling.”

Mother: “They were tickling, is that how they felt? Did they feel scratchy?”

Child: “No.”

Mother: “And after the clippers, what did he use then?”

Child: “The spray.”

Mother: “Yes. Why did he use the spray?”

Child: (silent)

Mother: “He used the spray to tidy your hair. And I noticed that you closed your eyes, and I thought ‘Jesse’s feeling a little bit scared,’ but you didn’t move or cry and I thought you were being very brave.”

It’s such an ordinary conversation, but Salmon and Reese point out its many virtues. For one thing, the questions are leading the child through his recent haircut experience. He is learning how to remember, what it takes to unpack a scene, the mechanics of turning the past into a story. Over time, these skills play a huge role in language development, which is why children who engage in more elaborative reminiscing with their parents tend to have more advanced vocabularies, better early literacy scores and improved narrative skills. In fact, one study found that teaching low-income mothers to “reminisce in more elaborative ways” led to bigger improvements in narrative skills and story comprehension than an interactive book-reading program.

But talking about the past isn’t just about turning our kids into better storytellers. It’s also about boosting their emotional intelligence, teaching them how to handle those feelings they’d rather forget. In A Book About Love, I wrote about research showing that children raised in households that engage in the most shared recollection report higher levels of emotional well-being and a stronger sense of personal identity. The family unit also becomes stronger: those children and parents who know more about the past also score higher on a widely used measure of “reported family functioning.” Salmon and Reese expand on these findings, citing research showing that emotional reminiscing is linked to long-term improvements in the ability of children to regulate their negative emotions, handle difficult situations and identify their own feelings and those of others.

Consider the haircut conversation above. Notice how the mother identifies the feelings felt by the child: enjoyment, tickling, fear. She suggests triggers for these emotions – the clippers, the water spray – and helps her son understand their fleeting nature. (Because the feelings are no longer present, they can be discussed calmly. That’s why talking about remembered emotions is often more useful than talking about emotions in the heat of the moment.) The virtue of such dialogues is that they teach children how to cope with their feelings, even when what they feel is fury and fear. As Salmon and Reese note, these are particularly important skills for mothers who have been exposed to adverse or traumatic experiences, such as drug abuse or domestic violence. Studies show that these at-risk parents are much less likely to incorporate “emotion words” when talking with their children. And when they do discuss their memories, Salmon and Reese write, they often “remain stuck in anger.” Their past isn’t past yet.

Perhaps this is another benefit of elaborative reminiscing. When we talk about our memories with loved ones, we translate the event into language, giving that swirl of emotion a narrative arc. (As the psychologist James Pennebaker has written, "Once it [a painful memory] is language based, people can better understand the experience and ultimately put it behind them.") And so the conversation becomes a moment of therapy, allowing us to make sense of what happened and move on. 

It was just a haircut, but you were so brave.   

Salmon, Karen, and Elaine Reese. "The Benefits of Reminiscing With Young Children." Current Directions in Psychological Science 25.4 (2016): 233-238.       

 

The Overview Effect

After six weeks in orbit, circling the earth in a claustrophobic space station, the three-person crew of Skylab 4 decided to go on strike. For 24 hours, the astronauts refused to work, and even turned off the radio linking them to Earth. While NASA was confused by the space revolt—mission control was concerned the astronauts were depressed—the men up in space insisted they just wanted more time to admire their view of the earth. As the NASA flight director later put it, the astronauts were asserting “their needs to reflect, to observe, to find their place amid these baffling, fascinating, unprecedented experiences.”

The Skylab 4 crew was experiencing a phenomenon known as the overview effect, which refers to the intense emotional reaction that can be triggered by the sight of the earth from beyond its atmosphere. Sam Durrance, who flew on two shuttle missions, described the feeling like this: “You’ve seen pictures and you’ve heard people talk about it. But nothing can prepare you for what it actually looks like. The Earth is dramatically beautiful when you see it from orbit, more beautiful than any picture you’ve ever seen. It’s an emotional experience because you’re removed from the Earth but at the same time you feel this incredible connection to the Earth like nothing I’d ever felt before.”

The Caribbean Sea, as seen from ISS Expedition 40

What’s most remarkable about the overview effect is that it lasts: the experience of awe often leaves a permanent mark on the lives of astronauts. A new paper by a team of scientists (the lead author is David Yaden at the University of Pennsylvania) investigates the overview effect in detail, with a particular focus on how this vision of earth can “settle into long-term changes in personal outlook and attitude involving the individual’s relationship to Earth and its inhabitants.” For many astronauts, this is the view they never get over.

How does this happen? How does a short-lived perception alter one’s identity? There is no easy answer. In this paper, the scientists focus on how the sight of the distant earth is so contrary to our usual perspective that it forces our “self-schema” to accommodate an entirely new point of view. We might conceptually understand that the earth is a lonely speck floating in space, a dot of blue amid so much black. But it’s an entirely different thing to bear witness to this reality, to see our fragile planet from hundreds of miles away. The end result is that the self itself is changed; this new view of earth alters one’s perspective on life, with the typical astronaut reporting “a greater affiliation with humanity as a whole.” Here’s Ed Gibson, the science pilot on Skylab 4: “You see how diminutive your life and concerns are compared to other things in the universe. Your life and concerns are important to you, of course. But you can see that a lot of the things you worry about do not make much difference in an overall sense.”

There are two interesting takeaways. The first one, emphasized in the paper, is that the overview effect might serve as a crucial coping mechanism for the challenges of space travel. Astronauts live a grueling existence: they are stressed, isolated and exhausted. They live in cramped quarters, eat terrible food and never stop working. If we are going to get people to Mars, then we need to give astronauts tools to endure their time on a spaceship. As the crew of Skylab 4 understood, one of the best ways to withstand space travel is to appreciate its strange beauty.

The second takeaway has to do with the power of awe and wonder. When you read old treatises on human nature, these lofty emotions are often celebrated. Aristotle argued that all inquiry began with the feeling of awe, that “it is owing to their wonder that men both now begin and at first began to philosophize.” Rene Descartes, meanwhile, referred to wonder as the first of the passions, “a sudden surprise of the soul that brings it to focus on things that strike it as unusual and extraordinary.” In short, these thinkers saw the experience of awe as a fundamental human state, a feeling so strong it could shape our lives.

But now? We have little time for awe in the 21st century; wonder is for the young and unsophisticated. To the extent we consider these feelings, it’s during a few brief moments on a hike in a national park, or while marveling at a child’s face when they first enter Disneyland. (And then we get out our phones and take a picture.) Instead of cultivating awe, we treat it as just another fleeting feeling; wonder is for those who don’t know any better.

The overview effect, however, is a reminder that these emotions can have a lasting impact. Like the Skylab 4 astronauts, we can push back against our hectic schedules, insisting that we find some time to stare out the window.  

Who knows? The view just might change your life.

Yaden, David B., et al. "The overview effect: Awe and self-transcendent experience in space flight." Psychology of Consciousness: Theory, Research, and Practice 3.1 (2016): 1.

 

How Magicians Make You Stupid

The egg bag magic trick is simple enough. A magician produces an egg and places it in a cloth bag. Then, the magician uses some poor sleight of hand, pretending to hide the egg in his armpit. When the bag is revealed as empty, the audience assumes it knows where the egg really is.

But the egg isn’t there. The armpit was a false solution, distracting the crowd from the real trick: the bag contains a secret compartment. When the magician finally lifts his arm, the audience is impressed by the vanishing. How did he remove the egg from his armpit? It never occurs to them that the egg never left the bag.

Magicians are intuitive psychologists, reverse-engineering the mind and preying on all its weak spots. They build illusions out of our frailties, hiding rabbits in our attentional blind spots and distracting the eyes with hand waves and wands. And while people in the audience might be aware of their perceptual shortcomings – those fingers move so fast! – they are often blind to a crucial cognitive limitation, which allows magicians to keep us from deciphering the trick. In short, magicians know that people tend to fixate on particular answers (the egg is in the armpit), and thus ignore alternative ones (it’s a trick bag), even when the alternatives are easier to execute.

When it comes to problem-solving, this phenomenon is known as the Einstellung effect. (Einstellung is German for “setting” or “attitude.”) First identified by the psychologist Abraham Luchins in the early 1940s, the effect has since been replicated in numerous domains. Consider a study that gave expert chess players a series of difficult problems, each of which contained two solutions. The players were asked to find the shortest possible way to win. The first solution was obvious and took five moves to execute. The second solution was less familiar, but could be achieved in only three moves. As expected, these expert players found the first solution right away. Unfortunately, most of them then failed to identify the second one, even though it was more efficient. The good answer blinded them to the better one.

Back to magic tricks. A new paper in Cognition, by Cyril Thomas and André Didierjean, extends the reach of the Einstellung effect by showing that it limits our problem-solving abilities even when the false solution is unfamiliar and unlikely. Put another way, preposterous explanations can also become mental blocks, preventing us from finding answers that should be obvious. To demonstrate this, the scientists showed 90 students one of three versions of a card trick. The first version went like this: a performer showed the subject a brown-backed card surrounded by six red-backed cards. After randomly touching the back of the red cards, he asked the subject to choose one of the six, which was turned face up. It was a jack of hearts. The magician then flipped over the brown-backed card at the center, which was also a jack of hearts. The experiment concluded with the magician asking the subject to guess the secret of the trick. In this version, 83 percent of subjects quickly figured it out: all of the cards were the same.

The second version featured the same trick, except that the magician slyly introduced a false solution. Before a card was picked, he explained that he was able to influence other people’s choices through physical suggestions. He then touched the back of the red cards, acting as if these touches could sway the subject’s mind. After the trick was complete, these subjects were also asked to identify the secret. However, most of these subjects couldn’t figure it out: only 17 percent of people realized that every card was the jack of hearts. Their confusion persisted even after the magician encouraged them to keep thinking of alternative explanations.

This is a remarkable mental failure. It’s a reminder that our beliefs are not a mirror to the world, but rather bound up with the limits of the human mind. In this particular case, our inability to see the obvious trick seems to be a side-effect of our feeble working memory, which can only focus on a few bits of information at any given moment. (In an email, Thomas notes that it is more “economical to focus on one solution, and to not lose time…searching for a hypothetical alternative one.”) And so we fixate on the most salient answer, even when it makes no sense. As Thomas points out, a similar lapse explains the success of most mind-reading performances: we are so seduced by the false explanation (parapsychology!) that we neglect the obvious trick, which is that the magician gathered personal information about us from Facebook. The performance works because we lack the bandwidth to think of a far more reasonable explanation.

Thomas and Didierjean end their paper with a disturbing thought. “If a complete stranger (the magician) can fix spectators’ minds by convincing them that he/she can control their individual choice with his own gesture,” they write, “to what extent can an authority figure (e.g., policeman) or someone that we trust (e.g., doctors, politicians) fix our mind with unsuitable ideas?” They don’t answer the question, but they don’t need to. Just turn on the news.

Thomas, Cyril, and André Didierjean. "Magicians fix your mind: How unlikely solutions block obvious ones." Cognition 154 (2016): 169-173.

What Can Toilet Paper Teach Us About Poverty?

“Costco is where you go broke saving money.”

-My Uncle

The fundamental paradox of big box stores is that the only way to save money is to spend lots of it. Want to get a discount on that shampoo? Here's a liter. That’s a great price for chapstick – now you have 32 of them. The same logic applies to most staples of modern life, from diapers to Pellegrino, Uni-ball pens to laundry detergent.

For consumers, this buy-in-bulk strategy can lead to real savings, especially if the alternative is a bodega or Whole Foods. (Brand name diapers, for instance, cost nearly twice as much at my local grocery store compared to Costco.) However, not every American is equally likely to seek out these discounts. In particular, some studies have found that lower-income households  – the ones who could benefit the most from that huge bottle of Kirkland shampoo – pay higher prices because they don’t make bulk purchases.

A new paper, “Frugality is Hard to Afford,” by A. Yesim Orhun and Mike Palazzolo, investigates why this phenomenon exists. Their data set featured the toilet paper purchases of more than 100,000 American families over seven years. Orhun and Palazzolo focused on toilet paper for several reasons. First, consumption of toilet paper is relatively constant. Second, toilet paper is easy to store – it doesn’t spoil – making it an ideal product to purchase in bulk, at least if you’re trying to get a discount. Third, the range of differences between brands of toilet paper is rather small, at least when compared to other consumer products such as detergent and toothpaste.

So what did Orhun and Palazzolo find? As expected, lower-income households were far less likely to take advantage of the lower unit prices that come with bulk purchases. Over time, these shopping habits add up, as the poorest families end up paying, on average, 5.9 percent more per sheet of toilet paper.
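A small per-sheet premium compounds into real money. Here is some back-of-the-envelope arithmetic in Python; every price and quantity below is an assumption invented for illustration, and only the 5.9 percent figure comes from the paper:

```python
# Hypothetical numbers; only the 5.9% premium is from Orhun and Palazzolo.
PREMIUM = 0.059                # extra cost per sheet without bulk buying
BULK_PRICE_PER_SHEET = 0.01    # assumed warehouse-club unit price, in dollars
SHEETS_PER_YEAR = 60_000       # assumed annual use for a family of four

bulk_cost = BULK_PRICE_PER_SHEET * SHEETS_PER_YEAR
as_needed_cost = bulk_cost * (1 + PREMIUM)

print(f"buying in bulk:   ${bulk_cost:,.2f} per year")
print(f"buying as needed: ${as_needed_cost:,.2f} per year")
print(f"poverty premium:  ${as_needed_cost - bulk_cost:,.2f}, on one staple alone")
```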

The question, of course, is why this behavior exists. Shouldn’t poor households be the most determined to shop around for cheap rolls? The most obvious explanation is what Orhun and Palazzolo refer to as a liquidity constraint: the poor simply lack the cash to “invest” in a big package of toilet paper. As a result, they are forced to buy basic household supplies on an as-needed basis, which makes it much harder to find the best possible price.

But this is not the only constraint imposed by poverty. In a 2013 Science paper, the behavioral scientists Anandi Mani, Sendhil Mullainathan, Eldar Shafir and Jiaying Zhao argued that not having money also imposes a mental burden, as our budgetary worries consume scarce attentional resources. This makes it harder for low-income households to plan for the future, whether it’s buying toilet paper in bulk or saving for retirement. “The poor, in this view, are less capable not because of inherent traits,” write the scientists, “but because the very context of poverty imposes load and impedes cognitive capacity.”

Consider a clever experiment conducted by Mani et al. at a New Jersey mall. They asked shoppers about various hypothetical scenarios involving a financial problem. For instance, they might be told that their “car is having some trouble and requires $[X] to be fixed.” Some subjects were told that their repair was extremely expensive ($1,500), while others were told it was relatively cheap ($150). Then, all participants were given a series of challenging cognitive tasks, including some questions from an intelligence test and a measure of impulse control.

The results were startling. Among rich subjects, it didn’t really matter how much the car cost to fix – they performed equally well when the repair estimate was $150 or $1,500. Poor subjects, however, showed a troubling difference. When the repair estimate was low, they performed roughly as well as the rich subjects. But when the repair estimate was high, they suddenly showed a steep drop-off in performance on both tests, comparable in magnitude to the mental deficit associated with losing a full night of sleep or chronic alcoholism.

This new toilet paper study provides some additional evidence that poverty takes a toll on our choices. In one analysis, Orhun and Palazzolo looked at how purchase behavior changed at the start of the month, when low-income households are more likely to receive paychecks and food stamps. As the researchers note, this influx of money should temporarily ease the stress of being poor, thus making it easier to buy in bulk.

That’s exactly what they found. When the poorest households were freed from their most pressing liquidity constraints, they made much more cost-effective toilet paper decisions. (This also suggests that poorer households are not simply buying smaller bundles due to a lack of storage space or transportation, as these factors are not likely to fluctuate week by week.) Of course, the money didn't last long; the following week, these households reverted to their old habits, overpaying for household products. And so those with the least end up with even less.

Orhun, A. Yesim, and Mike Palazzolo. "Frugality is hard to afford." University of Michigan Working Paper (2016).

Mani, Anandi, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao. "Poverty impedes cognitive function." Science 341 (2013): 976-980.

The Nordic Paradox

By virtually every measure, the Nordic countries – Denmark, Finland, Iceland, Norway and Sweden – are a paragon of gender equality. It doesn’t matter if you’re looking at the wage gap or political participation or educational attainment: the Nordic region is the most gender equal place in the world.

But this equality comes with a disturbing exception: Nordic women also suffer from intimate partner violence (IPV) at extremely high rates. (IPV is defined by the CDC as the experience of “physical violence, sexual violence, stalking and psychological aggression by a current or former intimate partner.”) While the average lifetime prevalence for intimate partner violence for women living in Europe is 22 percent – a horrifyingly high number by itself – Nordic countries perform even worse. In fact, Denmark has the highest rate of IPV in the EU at 32 percent, closely followed by Finland (30 percent) and Sweden (28 percent). And it’s not just violence from partners: other surveys have looked at violence against women in general. Once again, the Nordic countries had some of the highest rates of violence in the EU, as measured by reports of sexual assault, physical abuse or emotional abuse.

A new paper in Social Science & Medicine by Enrique Gracia and Juan Merlo refers to the existence of these two realities – gender equality and high rates of violence against women – as the Nordic paradox. It’s a paradox because a high risk of IPV for women is generally associated with lower levels of gender equality, particularly in poorer countries. (For example, 71 percent of Ethiopian women have suffered from IPV.) This makes intuitive sense: a country that disregards the rights of women, or fails to treat them as equals, also seems more likely to tolerate their abuse.

And yet, the same logic doesn’t seem to apply at the other extreme of gender equality. As Gracia and Merlo note, European countries with lower levels of gender equality, such as Italy and Greece, also report much lower levels of IPV (roughly 30 percent lower) than Nordic nations.

What explains this paradox? Why hasn’t the gender equality of Nordic countries reduced violence against women? That’s the tragic mystery investigated by Gracia and Merlo.

One possibility is that the paradox is caused by differences in reporting, as women in Nordic countries might feel more free to disclose the abuse. This also makes intuitive sense: if you live in a country with higher levels of gender equality, then you might be less likely to fear retribution when accusing a partner, or telling the police about a sex crime. (In Saudi Arabia, only 3.3 percent of women who suffered from IPV told the police or a judge.) However, Gracia and Merlo cast doubt on this explanation, noting that the available evidence suggests lower levels of disclosure of IPV among women in the Nordic countries. For instance, while 20 percent of women in Europe said that the most serious incident of IPV they’d experienced was brought to the attention of the police, only 10 percent of women in Denmark and Finland could say the same thing. The same trend is supported by other data, including rape statistics and “victim blaming” surveys. Finally, even if part of the Nordic paradox were a reporting issue, this would only reinforce the real mystery, which is that gender-equal societies still suffer from epidemic levels of violence against women.

The main hypothesis advanced by Gracia and Merlo – and it’s only a hypothesis – is that high gender equality might create a backlash effect among men, triggering high levels of violence against women.  Because gender equality disrupts traditional gender norms, it might also reinforce “victim-blaming attitudes,” in which the violence is excused or justified. Gracia and Merlo cite related studies showing that women with “higher economic status relative to their partners can be at greater IPV risk depending on whether their partners hold more traditional gender beliefs.” For these backwards men, the success of women is perceived as a threat, an undermining of their identity. This backlash is further exacerbated by women becoming more independent and competitive in gender equal societies, thus increasing the potential for conflict with partners who insist on control and subservience. Progress leaves some people behind, and those people tend to get angry.

At best, the backlash effect is only a partial explanation for the Nordic Paradox. Gracia and Merlo argue that a real understanding of the prevalence of IPV – why is it still so common, even in developed countries? – will require looking beyond national differences and instead investigating the risk factors that affect the individual. How much does he drink? What is her employment status? Do they live together? What is the neighborhood like? Even brutish behaviors have complicated roots; we need a thick description of life to understand them.  

On the one hand, the Nordic paradox is a testament to liberal values, a reminder that thousands of years of gender inequality can be reversed in a few short decades. The progress is real. But it’s also a reminder that progress is difficult, full of strange backlashes and reversals. Two steps forward, one step back. Or is it the other way around? We can see the moral universe bending, but goddamn is it slow.

Gracia, Enrique, and Juan Merlo. "Intimate partner violence against women and the Nordic paradox." Social Science & Medicine 157 (2016): 27-30.

via MR

Did "Clean" Water Increase the Murder Rate?

The construction of public waterworks across the United States in the late 19th and early 20th centuries was one of the great infrastructure investments in American history. As David Cutler and Grant Miller have demonstrated, these waterworks accounted for “nearly half of the total mortality reduction in major cities, three-quarters of the infant mortality reduction, and nearly two-thirds of the child mortality reduction.” Within a generation, the scourge of waterborne infectious diseases – from cholera to typhoid fever – was largely eliminated. Moving to a city no longer took years off your life, a sociological trend that unleashed untold amounts of human innovation.

However, not all urban waterworks were created equal. Some systems were built with metal pipes containing large amounts of lead. (At the time, lead pipes were considered superior to iron pipes, as they were more durable and easier to bend.) Unfortunately, these pipes leached lead particulates into the water, exposing city dwellers to water that tasted clean but was actually a poison.

Over the last few decades, researchers have amassed an impressive body of evidence linking lead exposure in childhood to a tragic list of symptoms, including higher rates of violent crime and lower scores on the IQ test. (One study found that lead levels are four times higher among convicted juvenile offenders than among non-delinquent high school students.) In 2014, I wrote about a paper by Jessica Wolpaw Reyes that documented the association between leaded gasoline and violent crime:

Reyes concluded that “the phase-out of lead from gasoline was responsible for approximately a 56 percent decline in violent crime” in the 1990s. What’s more, Reyes predicted that the Clean Air Act would continue to generate massive societal benefits in the future, “up to a 70 percent drop in violent crime by the year 2020.” And so a law designed to get rid of smog ended up getting rid of crime. It’s not the prison-industrial complex that keeps us safe. It’s the EPA.

But these studies have their limitations. For one thing, the pace at which states reduced their use of leaded gas might be related to other social or political variables that influence the crime rate. It’s also possible that those neighborhoods with the highest risk of lead poisoning might suffer from additional maladies linked to crime, such as poverty and poor schools. To convincingly demonstrate that lead causes crime, researchers need to find a credible source of variation in lead exposure that is completely independent of (aka exogenous to) the factors that might shape criminal behavior.

That is the goal of a new paper by James Feigenbaum and Christopher Muller. Their study mines the historical record, drawing from homicide data between 1921 and 1936 (when the first generation of children exposed to lead pipes were adults) and the materials used to construct each urban water system. If lead was responsible for higher crime rates, then those cities with higher lead content in their pipes (and also more acidic water, which leaches out the lead) should also experience larger spikes in crime decades later.

What makes this research strategy especially useful is that the decision to use lead pipes in a city’s water system was based in part on its proximity to a lead refinery. (Cities that were closer to a refinery were more likely to invest in lead pipes, as the lower transportation costs made the “superior” option more affordable.) In addition, Feigenbaum and Muller were able to look at how the lead content of pipes interacted with the acidity of a city’s water supply, thus allowing them to further isolate the causal role of lead.
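In spirit, this strategy boils down to testing an interaction: lead pipes should predict homicide mainly where the water chemistry let the lead escape. Here is a stylized sketch of that test in Python, with invented data and coefficients; it is not the authors' actual specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data for 500 hypothetical cities. The identifying idea:
# lead pipes should raise homicide rates mainly where water is acidic,
# because acidic water leaches more lead out of the pipes.
n = 500
lead_pipes = (rng.random(n) < 0.5).astype(float)  # 1 if the city used lead pipes
acidity = rng.uniform(0, 1, n)                    # water acidity index

# Assumed data-generating process: lead matters only via the interaction.
homicides = 5 + 2.0 * lead_pipes * acidity + rng.normal(0, 1, n)

# OLS with an interaction term, fit by least squares.
X = np.column_stack([np.ones(n), lead_pipes, acidity, lead_pipes * acidity])
beta, *_ = np.linalg.lstsq(X, homicides, rcond=None)
print(f"lead x acidity coefficient: {beta[3]:.2f} (true value: 2.0)")
# A large interaction term alongside small main effects is the signature
# the paper looks for: lead only bites when the chemistry lets it.
```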

The results were clear: cities that used lead pipes had homicide rates that were between 14 and 36 percent higher than cities that opted for cheaper iron pipes.

These violent crime increases are especially striking given that those cities using lead pipes tended to be wealthier, better educated and more “health conscious” than those that did not. All things being equal, one might expect these places to have lower rates of violent crime. But because of a little-noticed engineering decision, the water of these cities contained a neurotoxin, which interfered with brain development and made it harder for their residents to rein in their emotions.

The brain is a plastic machine, molded by its environment. When we introduce a new technology – and it doesn’t matter if it’s an urban water system or the smartphone – it’s often impossible to predict the long-term consequences. Who would have guessed that the more expensive lead pipes would lead to spikes in crime decades later? Or that the heavy use of road salt in the winter would lead to a 21st century water crisis in Flint, as the chloride ions pull lead out of the old pipes?

One day, the scientists of the future will study our own blind spots, as we invest in technologies that mess with the mind in all sorts of subtle ways. History reminds us that these tradeoffs are often unexpected. After all, it took decades before we realized that, for some unlucky cities, even clean water came with a terrible cost.

Feigenbaum, James, and Christopher Muller. "Lead Exposure and Violent Crime in the Early Twentieth Century." Explorations in Economic History (2016).

The Importance of Learning How to Fail

“An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field.” -Niels Bohr

Carol Dweck has devoted her career to studying how our beliefs about intelligence influence the way we learn. In general, she finds that people subscribe to one of two different theories of mental ability. The first theory is known as the fixed mindset – it holds that intelligence is a fixed quantity, and that each of us is allotted a certain amount of smarts we cannot change. The second theory is known as the growth mindset. It’s more optimistic, holding that our intelligence and talents can be developed through hard work and practice. "Do people with this mindset believe that anyone can be anything?" Dweck asks. "No, but they believe that a person's true potential is unknown (and unknowable); that it's impossible to foresee what can be accomplished with years of passion, toil, and training."

You can probably guess which mindset is more useful for learning. As Dweck and colleagues have repeatedly demonstrated, children with a fixed mindset tend to wilt in the face of challenges. For them, struggle and failure are a clear sign they aren’t smart enough for the task; they should quit before things get embarrassing. Those with a growth mindset, in contrast, respond to difficulty by working harder. Their faith in growth becomes a self-fulfilling prophecy; they get smarter because they believe they can.

The question, of course, is how to instill a growth mindset in our children. While Dweck is perhaps best known for her research on praise – it’s better to compliment a child for her effort than for her intelligence, as telling a kid she’s smart can lead to a fixed mindset – it remains unclear how children develop their own theories about intelligence. What makes this mystery even more puzzling is that, according to multiple studies, parents’ mindsets are surprisingly disconnected from the mindsets of their children. In other words, believing in the plasticity of intelligence is no guarantee that our kids will feel the same way.

What explains this disconnect? One possibility is that parents are accidental hypocrites. We might subscribe to the growth mindset for ourselves, but routinely praise our kids for being smart. Or perhaps we tell them to practice, practice, practice, but then get frustrated when they can’t master fractions, or free throws, or riding without training wheels. (I’m guilty of both these sins.) The end result is a muddled message about the mind’s potential.

However, in an important new paper, Kyla Haimovitz and Carol Dweck reveal the real influence behind the mindsets of our children. It turns out that the crucial variable is not what we think about intelligence – it’s how we react to failure.

Consider the following scenario: a child comes home with a bad grade on a math quiz. How do you respond? Do you try to comfort the child and tell him that it’s okay if he isn’t the most talented? Do you worry that he isn’t good at math? Or do you encourage him to describe what he learned from doing poorly on the test?

Parents with a failure-is-debilitating attitude tend to focus on the importance of performance: doing well on the quiz, succeeding at school, getting praise from other people. When confronted with the specter of failure, these parents get anxious and worried. Over time, their children internalize these negative reactions, concluding that failure is a dead-end, to be avoided at all costs. If at first you don’t succeed, then don’t try again.

In contrast, those parents who see failure as part of the learning process are more likely to see the bad grade as an impetus for extra effort, whether it’s asking the teacher for help or trying a new studying strategy. They realize that success is a marathon, requiring some pain along the way. You only learn how to get it right by getting it wrong.

According to the scientists, this is how our failure mindsets get inherited – our children either learn to focus on the appearance of success or on the long-term rewards of learning. Over time, these attitudes towards failure shape their other mindsets, influencing how they feel about their own potential. If they work harder, can they get good at math? Or is algebra simply beyond their reach?

Although the scientists found that children were bad at guessing the intelligence mindsets of their parents – they don’t know if we’re in the growth or fixed category – the kids were surprisingly good at predicting their parents’ relationship to failure. This suggests that our failure mindsets are much more “visible” than our beliefs about intelligence. Our children might forget what happened after the home run, but they damn sure remember what we said after the strikeout.

This study helps clarify the forces that shape our children. What matters most is not what we say after a triumph or achievement – it’s how we deal with their disappointments. Do we pity our kids when they struggle? (Sympathy is a natural reaction; it also sends the wrong message.) Do we steer them away from potential defeats? Or do we remind them that failure is an inescapable part of life, a state that cannot be avoided, only endured? Most worthy things are hard.

Haimovitz, K., and C. S. Dweck. "What Predicts Children's Fixed and Growth Intelligence Mind-Sets? Not Their Parents' Views of Intelligence but Their Parents' Views of Failure." Psychological Science (2016).

 

Is Tanking An Effective Strategy in the NBA?

In his farewell manifesto, former Philadelphia 76ers General Manager Sam Hinkie spends 13 pages explaining away the dismal performance of his team, which has gone 47-199 over the last three seasons. Hinkie’s main justification for all the losses involves the consolation of draft picks, which are the NBA’s way of rewarding the worst teams in the league. Here’s Hinkie:

"In the first 26 months on the job we added more than one draft pick (or pick swap) per month to our coffers. That’s more than 26 new picks or options to swap picks over and above the two per year the NBA allots each club. That’s not any official record, because no one keeps track of such records. But it is the most ever. And it’s not close. And we kick ourselves for not adding another handful."

This is the tanking strategy. While the 76ers have been widely criticized for their consistent indifference to winning games, Hinkie argues that it was a necessary by-product of their competitive position in 2013, when he took over as GM. (According to a 2013 ESPN ranking of each NBA team’s three-year winning potential, the 76ers ranked 24th out of 30.) And so Hinkie, with his self-described “reverence for disruption” and “contrarian mindset,” set out to take a “long view” of basketball success. The best way for the 76ers to win in the future was to keep on losing in the present.

Hinkie is a smart guy. At the very least, he was taking advantage of the NBA’s warped incentive structure, which can lead to a “treadmill of mediocrity” among teams too good for the lottery but too bad to succeed in the playoffs. However, Hinkie’s devotion to tanking – and his inability to improve the team’s performance – does raise an interesting set of empirical questions. Simply put: is tanking in the NBA an effective strategy? (I’m a Lakers fan, so it would be nice to know.) And if tanking doesn't work, why doesn't it?

A new study in the Journal of Sports Economics, published six days before Hinkie’s resignation, provides some tentative answers. In the paper, Akira Motomura, Kelsey Roberts, Daniel Leeds and Michael Leeds set out to determine whether or not it “pays to build through the draft in the National Basketball Association.” (The alternative, of course, is to build through free agency and trades.) Motomura et al. rely on two statistical tests to make this determination. The first test is whether teams with more high draft picks (presumably because they tanked) improve at a faster rate than teams with fewer such picks. The second test is whether teams that rely more on players they have drafted for themselves win more games than teams that acquire players in other ways. The researchers analyzed data from the 1995 to 2013 NBA seasons.
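In spirit, the first test is a regression of win improvement on high draft picks. Here is a toy sketch in Python with invented data; the effect size below is an assumption chosen only to mirror the direction of the paper's finding, not its estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the first test, with invented data: regress a team's
# three-year change in wins on its count of recent top-10 draft picks.
n = 300                                     # hypothetical team-season observations
top10_picks = rng.integers(0, 4, n).astype(float)

# Assumed relationship for illustration (the paper, too, finds that
# extra high picks predict less improvement, not more).
win_change = 5.0 - 2.0 * top10_picks + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), top10_picks])
beta, *_ = np.linalg.lstsq(X, win_change, rcond=None)
print(f"wins of improvement per extra top-10 pick: {beta[1]:.2f}")
```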

What did they find? The punchline is clear: building through the draft is not a good idea. Based on the data, Motomura et al. conclude that “recent high draft picks do not help and often reduce improvement,” as teams with one additional draft pick between 4 and 10 can be expected to lose an additional 6 to 9 games three years later. Meanwhile, those teams lucky enough to have one of the first three picks should limit their expectations, as those picks tend to have “little or no impact” on team performance. The researchers are blunt: “Overall, having more picks in the Top 17 slots of the draft does not help and tends to be associated with less improvement.”

There are a few possible explanations for why the draft doesn’t rescue bad teams. The most likely source of failure is the sheer difficulty of selecting college players, even when you’re selecting first. (One study found that draft order predicts only about 5 percent of a player’s performance in the NBA.) For every Durant there are countless Greg Odens; Hinkie’s own draft record is a testament to the intrinsic uncertainty of picking professional athletes.

That said, some general managers appear to be far better at evaluating players. “While more and higher picks do not generally help teams, having better pickers does,” write the scientists. They find, for instance, that R.C. Buford, the GM of the Spurs, is worth an additional 23 to 29 wins per season. Compare that to the “Wins Over Replacement” generated by Stephen Curry, who has just finished one of the best regular season performances in NBA history. According to Basketball Reference, Curry was worth an additional 26.4 wins during the 2015-2016 regular season. If you believe these numbers, R.C. Buford is one of the most valuable (and underpaid) men in the NBA.

So it’s important to hire the best GM. But this new study also finds franchise effects that exist independently of the general manager, as certain organizations are simply more likely to squeeze wins from their draft picks. The researchers credit these franchise differences largely to player development, especially when it comes to “developing players who might not have been highly regarded entering the NBA.” This is evidence that “winning cultures” are a real thing, and that a select few NBA teams are able to consistently instill the habits required to maximize the talent of their players. Draft picks are nice. Organizations win championships. And tanking is no way to build an organization.

In his manifesto, Hinkie writes at length about the importance of bringing the rigors of science to the uncertainties of sport: “If you’re not sure, test it,” Hinkie writes. “Measure it. Do it again. See if it repeats.” Although previous research by the sports economist Dave Berri has cast doubt on the effectiveness of tanking, this new paper should remind every basketball GM that the best way to win over the long term is to develop a culture that doesn’t try to lose.

Motomura, Akira, et al. “Does It Pay to Build Through the Draft in the National Basketball Association?” Journal of Sports Economics, March 2016.

Does Stress Cause Early Puberty?

The arrival of puberty is a bodily event influenced by psychological forces. The most potent of these forces is stress: decades of research have demonstrated that a stressful childhood accelerates reproductive development, at least as measured by menarche, or the first menstrual cycle. For instance, girls growing up with fathers who have a history of socially deviant behavior tend to undergo puberty a year earlier than those with more stable fathers, while girls who have been maltreated (primarily because of physical or sexual abuse) begin menarche before those who have not. One study even found that Finnish girls evacuated from their homeland during WWII – they had to endure the trauma of separation from their parents – reached puberty at a younger age and had more children than those who stayed behind. 

There’s a cold logic behind these correlations. When times are stressful, living things tend to devote more resources to reproductive development, as they want to increase the probability of passing on their genes before death. This leads to earlier puberty and reduced investment in developmental processes less directly related to sex and mating. If nothing else, the data is yet another reminder that early childhood stress has lasting effects, establishing developmental trajectories that are hard to undo.

But these unsettling findings leave many questions unanswered. For starters, what kind of stress is the most likely to speed up reproductive development? Scientists often divide early life stressors into two broad categories: harshness and unpredictability. Harshness is strongly related to a lack of money, and is typically measured by looking at how a family’s income relates to the federal poverty line. Unpredictability, in contrast, is linked to factors such as the consistency of father figures inside the house and the number of changes in residence. Are both of these forms of stress equally important in triggering the onset of reproductive maturation? Or do they have different impacts on human development?

Another key question is how this stress can be buffered. If a child is going to endure a difficult beginning, then what is the best way to minimize the damage?

These questions get compelling answers in a new study by a team of researchers from four different universities. (The lead author is Sooyeon Sung at the University of Minnesota.) Their subjects were 492 females born in 1991 at ten different hospitals across the United States. Because these girls were part of a larger study led by the National Institute of Child Health and Human Development, Sung et al. were able to draw on a vast amount of relevant data, from a child’s attachment to her mother at 15 months to the fluctuating income of her family. These factors were then tested against the age of menarche, as the scientists attempted to figure out the psychological variables that determine the onset of puberty.

The first thing they found is that environmental harshness (but not unpredictability) predicts the timing of the first menstrual cycle. While this correlation is limited by the relatively small number of impoverished families in the sample, it does suggest that not all stress is created equal, at least when it comes to the acceleration of reproductive development. It’s also evidence that poverty itself is stressful, and that children raised in the poorest households are marked by their scarcities.

But the news isn’t all terrible. The most significant result to emerge from this new paper is that the effects of childhood stress on reproductive development can be minimized by a secure mother-daughter relationship. When the subjects were 15 months old, they were classified using the Strange Situation procedure, a task pioneered by Mary Ainsworth in the mid-1960s. The experiment is a carefully scripted melodrama, as a child is repeatedly separated from and reunited with his or her mother. The key variable is how the child responds to these reunions. Securely attached infants get upset when their mothers leave, but are excited by her return; they greet her with affectionate hugs and are quickly soothed. Insecure infants, on the other hand, are difficult to calm down, either because they feign indifference to their parent or because they react with anger when she comes back.

Countless studies have confirmed the power of these attachment categories: Securely attached infants get better grades in high school, have more satisfying marriages and are more likely to be sensitive parents to their own children, to cite just a few consistent findings. However, this new study shows that having a secure attachment can also dramatically minimize the developmental effects of stress and poverty, at least when measured by the onset of puberty. 

Love is easy to dismiss as a scientific variable. It’s an intangible feeling, a fiction invented by randy poets and medieval troubadours. How could love matter when life is sex and death and selfish genes?

And yet, even within the unsparing framework of evolution we can still measure the sweeping influence of love. For these children growing up in the harshest environments, the security of attachment is not just a source of pleasure. It’s their shield.

Sung, Sooyeon, Jeffry A. Simpson, Vladas Griskevicius, I. Sally Chun Kuo, Gabriel L. Schlomer, and Jay Belsky. "Secure infant-mother attachment buffers the effect of early-life stress on age of menarche." Psychological Science, 2016.

The Curious Robot

Curiosity is the strangest mental state. The mind usually craves certainty; being right feels nice; mystery is frustrating. But curiosity pushes back against these lesser wants, compelling us to seek out the unknown and unclear. To be curious is to feel the pleasure of learning, even when what we learn is that we’re wrong.

One of my favorite theories of curiosity is the so-called “information gap” model, first developed by George Loewenstein of Carnegie Mellon in the early 90s. According to Loewenstein, curiosity is what happens when we experience a gap “between what we know and what we want to know…It is the feeling of deprivation that results from an awareness of the gap.” As such, curiosity is a mostly aversive state, an intellectual itch begging to be scratched. It occurs when we know just enough to know how little we understand.

The abstract nature of curiosity – it’s a motivational state unlinked to any specific stimulus or reward – has made it difficult to study, especially in the lab. There is no standard test for measuring curiosity, nor an easy way to assess its benefits in the real world. Curiosity seems important – “Curiosity is, in great and generous minds, the first passion and the last,” wrote Samuel Johnson – but at times this importance verges on the intangible.

Enter a new paper by the scientists Pierre-Yves Oudeyer and Linda Smith that explores curiosity in robots and its implications for human nature. The paper is based on a series of experiments led by Oudeyer, Frederic Kaplan and colleagues in which a pair of adorable quadruped machines – they look like dogs from the 22nd century – were set loose on an infant play mat. One of these robots is the “learner,” while the other is the “teacher.”

The learner robot begins with a set of “primitives,” or simple pre-programmed instincts. It can, for instance, turn its head, kick its legs and make sounds of various pitches. These primitives begin as just that: crude scripts of being, patterns of actions that are not very impressive. The robot looks about as useful as a newborn.

But these primitives have a magic trick: they are bootstrapped to a curious creature, as the robot has been programmed to seek out those experiences that are the most educational. Consider a simple leg movement. The robot begins by predicting what will happen after the movement. Will the toy move to the left? Will the teacher respond with a sound? Then, after the leg kick, the robot measures the gap between its predictions and reality. This feedback leads to a new set of predictions, which leads to another leg kick and another measurement of the gap. A shrinking gap is evidence of its learning.
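Stripped of its robot body, the loop described above is easy to sketch. Here is a toy version in Python; the simulator and its numbers are invented for illustration, not taken from the Oudeyer experiments.

```python
import random

def simulate_leg_kick(strength):
    """Invented stand-in for the world: the toy's noisy response to a kick."""
    return 2.0 * strength + random.gauss(0, 0.1)

prediction = 0.0       # the robot's current guess about what a kick will do
learning_rate = 0.3
gaps = []

for step in range(20):
    outcome = simulate_leg_kick(strength=1.0)
    gap = outcome - prediction          # the gap between prediction and reality
    gaps.append(abs(gap))
    prediction += learning_rate * gap   # nudge the guess toward what happened

# A shrinking gap over time is the robot's evidence that it is learning.
print(f"first gaps: {gaps[:3]}, last gaps: {gaps[-3:]}")
```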

Here’s where curiosity proves essential. As the scientists note, the robot is tuned to explore “activities where the estimated reward from learning progress is high,” where the gap between what it predicts and what actually happens decreases most quickly. Let’s say, for instance, that the robot has four possible activities to pursue, each with its own trajectory of prediction errors over time.

A robot driven by curiosity will avoid activity 4 – too easy, no improvement – and also activity 1, which is too hard. Instead, it will first focus on activity 3, as investing in that experience leads to a sharp drop in prediction errors. Once that curve starts to flatten – the robot has begun learning at a slower rate – it will shift to activity 2, as that activity now generates the biggest educational reward.
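In code, that selection rule is just a comparison of recent learning curves. Here is a hedged sketch, with prediction-error histories invented to mirror the four activities just described:

```python
def learning_progress(errors, window=3):
    """Recent drop in prediction error; higher means faster learning."""
    older = sum(errors[-2 * window:-window]) / window
    recent = sum(errors[-window:]) / window
    return older - recent

# Invented prediction-error histories for the four activities above.
error_history = {
    "activity 1": [0.9, 0.9, 0.9, 0.9, 0.9, 0.9],  # too hard: no progress
    "activity 2": [0.8, 0.8, 0.7, 0.7, 0.6, 0.5],  # slow but steady gains
    "activity 3": [0.9, 0.8, 0.6, 0.5, 0.3, 0.2],  # steep drop: learnable now
    "activity 4": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1],  # too easy: nothing left
}

# Curiosity picks whichever activity is currently being learned the fastest.
chosen = max(error_history, key=lambda a: learning_progress(error_history[a]))
print(chosen)  # "activity 3" -- the biggest educational reward right now
```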

This simple model of curiosity – it leads us to the biggest knowledge gaps that can be closed in the least amount of time – generates consistent patterns of development, at least among these robots. In Oudeyer’s experiments, the curious machines typically followed the same sequence of pursuits. The first phase involved “unorganized body babbling,” which led to the exploration of each “motor primitive.” These primitives were then applied to the external environment, often with poor results: the robot might vocalize towards the elephant toy (which can’t talk back), or try to hit the teacher. The fourth phase featured more effective interactions, such as talking to the teacher robot (rather than hitting it), or grasping the elephant. “None of these specific objectives were pre-programmed,” write the scientists. “Instead, they self-organized through the dynamic interaction between curiosity-driven exploration, statistical inference, the properties of the body, and the properties of the environment.”

It’s an impressive achievement for a mindless machine. It’s also a clear demonstration of the power of curiosity, at least when unleashed in the right situation. As Oudeyer and Smith note, many species are locked in a brutal struggle to survive; they have to prioritize risk avoidance over unbridled interest. (Curiosity killed the cat and all that.) Humans, however, are “highly protected for a long period” in childhood, a condition of safety that allows us, at least in theory, to engage in reckless exploration of the world. Because we have such a secure beginning, our minds are free to enjoy learning with little consideration of its downside. Curiosity is the faith that education is all upside.*

The implication, of course, is that curiosity is a defining feature of human development, allowing us to develop “domain-specific” talents – speech, tool use, literacy, chess, etc. – that require huge investments of time and attention. When it comes to complex skills, failure is often a prerequisite for success; we only learn how to get it right by getting it wrong again and again. Curiosity is what draws us to these useful errors. It’s the mental quirk that lets us enjoy the steepest learning curves, those moments when we become all too aware of the endless gaps in our knowledge. The point of curiosity is not to make those gaps disappear – it’s to help us realize they never will.

*A new paper by Christopher Hsee and Bowen Ruan in Psychological Science demonstrates that even curiosity can have negative consequences. Across a series of studies, they show that our "inherent desire" to resolve uncertainty can lead people to endure aversive stimuli, such as electric shocks, even when the curiosity comes with no apparent benefit. They refer to this as the Pandora Effect. I'd argue, however, that the occasional perversities of curiosity are far outweighed by the curse of being incurious, as that can lead to confirmation bias, overconfidence, filter bubbles and all sorts of errors with massive consequences, both at the individual and societal level.

Oudeyer, Pierre-Yves, and Linda B. Smith. "How evolution may work through curiosity-driven developmental process." Topics in Cognitive Science, 2016.