When Is Ignorance Bliss?

Aristotle’s Metaphysics opens with a seemingly obvious truth: “All men by nature desire to know.” According to Aristotle, this desire for knowledge is our defining instinct, the quality that sets our minds apart. As the cognitive psychologist George Miller put it, we are informavores, blessed with a boundless appetite for information.

It’s a comforting vision. However, like all dictums about human nature, it comes with plenty of caveats and exceptions. Take spoiler alerts. It’s hard to read an article about a work of entertainment that doesn’t warn readers before giving away a plot point. The assumption behind these warnings, of course, is that people don’t want to know, at least when it comes to narratives.

And it’s not just the latest twists in Scandal that we’re trying to avoid. Twenty percent of Malawian adults at risk for HIV decline to get the results of their HIV test, even when offered cash incentives; approximately 10 percent of Canadians with a family history of Huntington’s disease choose not to undergo genetic testing. (Even James Watson declined to have his risk of Alzheimer’s revealed.) These are just specific examples of a larger phenomenon. Given the advances in genetic testing and biomarkers, the Aristotelian model would predict that we’d all become subscribers to 23andMe. But that’s not happening.

A new paper in Psychological Review by Gerd Gigerenzer and Rocio Garcia-Retamero explores the motives behind our willful ignorance. They begin by establishing its prevalence, surveying more than 2,000 German and Spanish adults about various forms of future knowledge. The results make it clear that most of us want spoiler alerts for real life: between 85 and 90 percent of subjects say they don’t want to know when or why their partner will die. (They feel the same way about their own death.) They also don’t want to know if their marriage will eventually end in divorce. This preference for ignorance even applies to positive events: between 40 and 70 percent of subjects don’t want to know about their future Christmas gifts, or who won the big soccer match, or the gender of their next child.

To understand our reasons for ignorance, Gigerenzer and Garcia-Retamero asked subjects about their risk attitudes. They found that people who are more risk-averse (as measured by their insurance purchases and their choices in a simple lottery game) are more likely to prefer not knowing. While this might appear counterintuitive—learning how you will die might help reduce the risk of dying—Gigerenzer and Garcia-Retamero explain these results in terms of anticipatory regret. People avoid risks because they don’t want to regret those losing gambles. They avoid life spoilers for a similar reason: they’re trying to avoid regretting the decision to know.

On the one hand, this intuition has a logical sheen. It’s not that ignorance is bliss—it’s just better than knowing that life can be shitty and full of suffering. Knowing exactly how we’ll suffer might only make it worse. The same principle also applies to the good stuff: we think we'll be less happy if we know about our happiness in advance. Life is like a joke—it's not so funny if we get the punchline first.

But there’s also some compelling evidence that our intuitions about regretting future knowledge are wrong. For one thing, it’s not clear that spoilers spoil anything. Consider a 2011 study by Jonathan Leavitt and Nicholas Christenfeld. The scientists gave several dozen undergraduates twelve short stories in three flavors: ironic-twist stories (such as Chekhov’s “The Bet”), straight-up mysteries (“A Chess Problem” by Agatha Christie) and “literary stories” by writers like Updike and Carver. Some subjects read a story as is, without a spoiler. Some read it with a spoiler carefully embedded in the actual text, as if Chekhov himself had given away the ending. And some read it with a spoiler disclaimer in the preface.

Here’s the shocking twist: the scientists found that almost every single story, regardless of genre, was more pleasurable when prefaced with some sort of spoiler. It doesn’t matter if it’s Harry Potter or Hamlet: an easy way to make a good story even better is to spoil it at the start. As the scientists write, “Erroneous intuitions about the nature of spoilers may persist because individual readers are unable to compare spoiled and unspoiled experiences of a novel story. Other intuitions about suspense may be similarly wrong: Perhaps birthday presents are better when wrapped in cellophane, and engagement rings when not concealed in chocolate mousse.”

In fiction as in life: we assume our pleasure depends on ignorance. However, Leavitt and Christenfeld argue that spoilers enhance narrative pleasure by letting readers pay more attention to developments along the way. Because we know the destination, we’re better able to enjoy the journey. 

There's more to life than how it ends.

Gigerenzer, Gerd, and Rocio Garcia-Retamero. "Cassandra’s regret: The psychology of not wanting to know." Psychological Review 124.2 (2017): 179.

Why College Should Become A Lottery

Barry Schwartz, a psychologist at UC-Berkeley and Swarthmore, does not think much of the college admissions process. In a new paper, he tells a story about a friend who spent an afternoon with a high-school student. His friend was impressed by the student and, for the first time in thirty years of teaching, decided to send a note to the dean of admissions. Despite the note, the student did not get in. Schwartz describes what happened next:

“Curious, my friend asked the dean why. ‘No reason,’ said the dean. ‘No reason?,’ replied my friend, somewhat incredulous. ‘Yes, no reason. I can’t tell you how many applicants we reject for no reason.’”

For Schwartz, such stories are a sign of a broken system. Although colleges pretend to be paragons of meritocracy, their selection methods are rife with randomness. “Despite their very best efforts to make the selection process rational and reasonable, admissions people are, in effect, running a lottery,” Schwartz writes. “To get into Harvard (or Stanford, or Yale, or Swarthmore), you need to be good...and you need to be lucky.”

Schwartz devotes much of his article to the severe negative consequences of this capricious selection process. He begins by lamenting the ways in which it discourages students from experimenting, both inside and outside the classroom. Because teenagers are so terrified of failure—Harvard requires perfection!—they refuse to take classes that might end with the crushing disappointment of a B+. Over time, this produces high-school students who “may look better than ever before” but are probably learning less.

But wait: it gets worse. Much worse. Suniya Luthar, a professor of psychology at Arizona State University, has spent the last several years documenting the emotional toll of the college competition on upper-middle-class children. Although these affluent kids lead enviable lives on paper—they have educated white-collar parents, high test scores and spots at elite high schools—they are roughly twice as likely as the national average to suffer from symptoms of depression and anxiety. They are also far more likely to have eating disorders and meet the diagnostic criteria for substance abuse.

There are, of course, countless variables driving this epidemic of mental-health issues among affluent teenagers. (Maybe it’s Snapchat’s fault? Or a side-effect of helicopter parenting?) However, Luthar argues that one of the main causes is what she calls the “pressure to achieve.” The problem with this pressure is that it punishes failure and success alike. If a student’s achievements fall short, then he feels inadequate. But even if a student gets straight As, she probably still lives in what Luthar calls “a state of fear of not achieving.” Over time, that chronic fear can lead to anxiety disorders and depression; kids are burned out on stress before they even leave their childhood homes.

How can we fix this competitive morass? Schwartz offers a provocative solution. (In an email, he observes that he first offered this proposal a decade ago. In the years since, it’s only gotten more necessary.) The first phase of his plan involves filtering applicants using the same academic standards currently in place. Schwartz estimates that these standards—GPA, SAT scores, extracurricular activities, etc.—could cut the applicant pool by up to two-thirds. But here’s the crucial twist: after this initial culling, all of the acceptable students would be entered into an admissions lottery. The winners would be drawn at random.
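For the sake of concreteness, here’s a minimal sketch of how such a two-stage lottery might work. The GPA and SAT cutoffs below are hypothetical placeholders, not Schwartz’s numbers; he leaves the “good enough” criteria up to each school, and notes they need not even be numeric.

```python
import random

def admissions_lottery(applicants, is_good_enough, n_slots, seed=None):
    """Two-stage admissions: filter on 'good enough' criteria,
    then draw the incoming class at random from the qualified pool."""
    rng = random.Random(seed)
    qualified = [a for a in applicants if is_good_enough(a)]
    if len(qualified) <= n_slots:
        return qualified
    return rng.sample(qualified, n_slots)

# Hypothetical cutoffs, purely for illustration; each school would
# define its own (possibly non-quantitative) threshold.
def good_enough(applicant):
    return applicant["gpa"] >= 3.5 and applicant["sat"] >= 1300

pool = [{"gpa": random.uniform(2.0, 4.0), "sat": random.randrange(900, 1610, 10)}
        for _ in range(20000)]
admitted = admissions_lottery(pool, good_enough, n_slots=1600, seed=42)
print(f"{len(admitted)} admitted at random from the qualified pool")
```

The point of the design is visible in the last line: once an applicant clears the bar, credentials stop mattering and chance takes over.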

Such a lottery system, Schwartz writes, would offer multiple advantages over our current fake meritocracy. For one thing, it would be much less stressful for teenagers to strive to be “good enough” rather than the best; high-achieving students wouldn’t have to be the highest achieving. This, in turn, would “free students up to do the things they were really passionate about.” Instead of chasing extrinsic rewards—does Stanford need an oboe player?—adolescents would be free to follow their intrinsic motivation.* By making selective colleges less selective, Schwartz argues, we would end up with happier and more well-rounded students.

The hybrid lottery system would also force colleges to be more transparent about their selection methods. Right now, the admissions process is a black box; such secrecy is what allows colleges to accept legacies and reject otherwise qualified students for no particular reason. However, if the schools were forced to define their lottery cut-off, they would have to reflect on the measurements that actually predict academic success. And this doesn’t mean the criteria must be quantitative. As Schwartz notes, “criteria for ‘good enough’ can be sufficiently flexible that applicants who are athletes, violinists, minorities, or from Alaska get ‘credit’ for these characteristics,” just as in the current system.

The most obvious objection to Schwartz’s lottery system is ethical. For many people, it just seems wrong to base a major life decision on a roll of the dice. But here’s the thing: the college application process is already a crapshoot. (The differences used to distinguish applicants—say, 10 points on the SAT—are often smaller than the measurement error of the assessments themselves.) By making the lottery explicit, students and schools would at least be forced to have a candid conversation about the role of luck in life. Instead of taking full credit for our admission, or blaming ourselves for our rejection, we’d admit that much of success is random chance and pure contingency. Perhaps, Schwartz writes, this might make students a little “more empathic when they encounter people who may be just as deserving as they are, but less lucky.”

Schwartz is best known for his research on the pitfalls of the maximizing decision-making strategy, in which people obsess over finding the best possible alternative. The problem with this approach, Schwartz and colleagues have repeatedly found, is that it ends up making us miserable. Instead of being satisfied with a perfectly acceptable option, we get stressed about finding a better one. And then, once we make a choice, studies show that maximizers end up drenched in regret, fixated on their foregone options. We’re trained to be maximizers by consumer culture—who wants to settle for the second best laundry detergent?—but it’s usually a shortcut to a sad life.

This new paper extends the maximizing critique to higher education. In Schwartz’s telling, the college application process is a particularly powerful example of how the maximizing approach can lead us astray. Given the inherent uncertainty of matching students and colleges, Schwartz argues that it’s foolish to try to find the ideal school. Rather, we should practice an approach that Herbert Simon called satisficing, in which we search for colleges that are good enough. After all, the evidence suggests we can be equally happy at a multitude of places.

This, perhaps, is the greatest virtue of the lottery proposal: by making it impossible for students to act like maximizers—chance chooses for them—it gives them a life lesson in the power of satisficing. Instead of wasting their dreams on a dream school, they can follow their adolescent passions and embrace the chanciness of life. You can’t always get exactly what you want. But if you practice satisficing, you just might get what you need.

*The danger of replacing intrinsic motivation with extrinsic rewards was first demonstrated in a classic study of preschoolers. Some of the young children were told they would get a reward for drawing with pens. You might think this would encourage the kids to draw even more. It didn’t. Instead, the children given an “expected reward” were less likely to use the pens in the future. (And when they did use the pens, they spent less time drawing.) The extrinsic rewards, said the scientists, had turned “play into work.”

Schwartz, Barry. "Why Selective Colleges Should Become Less Selective—And Get Better Students." Capitalism and Society 11.2 (2016).

The Headwinds Paradox (Or Why We All Feel Like Victims)

When you are running into the wind, the air feels like a powerful force. It’s blowing you back, slowing you down, an annoying obstacle making your run that much harder.

And then you turn around and the headwind becomes a tailwind. The air that had been pushing you back is now propelling you forward. But here’s the question: do you still notice it?

Probably not. Simply put, headwinds are far more salient than tailwinds. When it comes to exercise, we fixate on the barrier and ignore the boost.

In a new paper, the psychologists Shai Davidai and Thomas Gilovich show that this same asymmetry is present across many aspects of life, and not just when we’re running on a windy day.

As evidence, Davidai and Gilovich conducted a number of clever studies. In the first experiment, they asked people which political party was advantaged or disadvantaged by the rules of American democracy, such as the electoral college. As expected, partisans on both sides believed their side suffered from the headwinds: Democrats were convinced the political system favored Republicans, and Republicans believed it favored Democrats. Interestingly, the size of the effect was moderated by the level of political engagement, with more engagement leading to a stronger sense of unfairness. In short, the more you think about American politics, the more convinced you are that the system is stacked against you. (In fairness to Democrats, recent history suggests they might be right.)

A similar effect was also observed among football fans, who were much more likely to notice the difficult games on their team’s upcoming schedule than the easy ones. The headwinds/tailwinds asymmetry even shaped the career beliefs of academics, as people in a given sub-discipline believed they faced more hurdles than those in other sub-disciplines.

And then there’s family life, that rich vein of grievance. When the psychologists asked siblings if their parents had been harder on the older or younger child, their answers depended largely on their own position in the family. Older children were convinced that their parents had gone easy on their little siblings, while younger siblings insisted the discipline had been evenly distributed. Mom always loves someone else the most.

According to Davidai and Gilovich, the underlying cause of the headwind effect is the availability heuristic, in which our judgement is distorted by the ease with which relevant examples come to mind. First described by Kahneman and Tversky, the availability heuristic is why people think tornadoes are deadlier than asthma—tornadoes generate headlines, even though asthma takes 20 times more lives—and why spouses tend to overestimate their share of household chores. (We remember that time we took out the garbage; we don’t remember all those times we didn’t.) As Timur Kuran and Cass Sunstein point out, the availability bias might be “the most fundamental heuristic” of them all, constantly distorting our judgements of frequency and probability. We see through a glass, darkly; the availability heuristic is often what makes the glass so dark. 

This new paper shows how the availability bias can even warp our life narratives. We think our memory reflects the truth; it feels like a fair accounting of events. In reality, though, it’s a story tilted towards resentment, since it’s so much easier for us to remember every slight, wound and obstacle.

Why does this matter? Didn’t we already know that our memory is mostly bullshit? Davidai and Gilovich argue that this particular mnemonic flaw comes with serious practical consequences. For one thing, the headwind effect makes it harder for us to experience gratitude, which research shows is associated with higher levels of happiness, fewer hospitalizations and a more generous approach towards others. Because we take the tailwinds of life for granted—the headwinds consume all our attention—we have to work to notice our blessings. We easily remember who hurt us; we soon forget who helped us.

This effect can even shape public policy, limiting our interest in helping the less fortunate. We’re so biased towards our own adversities that we can’t empathize with the adversities of others, even when theirs might be far more challenging. And since we tend to neglect our God-given advantages—good parents, silver spoons, etc.—we discount the role they played in our success. The end result is a series of false beliefs about what it takes to succeed.

In a recent interview, Rob Lowe lamented the obstacles that had limited his early career opportunities. Handsome actors like himself, he said, are subject to “an unbelievable bias and prejudice against quote-unquote good-looking people.”

We’re all victims. Even beauty is a headwind.

Davidai, Shai, and Thomas Gilovich. "The headwinds/tailwinds asymmetry: An availability bias in assessments of barriers and blessings." Journal of Personality and Social Psychology 111.6 (2016): 835. 

Fewer Friends, Better Marriages: The Modern American Social Network

In A Book About Love, I wrote about research showing that the social networks of Americans have been shrinking for decades. Miller McPherson, a sociologist at the University of Arizona and Duke University, has helped document the decline. In 1985, 26.1 percent of respondents reported discussing important matters with a “comember of a group,” such as a church congregant. In 2004, McPherson found that the percentage had fallen to 11.8. In 1985, 18.5 percent of subjects had important conversations with their neighbors. That number shrank to 7.9 percent two decades later. Other studies have reached similar conclusions. Robert Putnam, for instance, has used the DDB Needham Life Style Surveys to show that the average married couple entertained friends at home approximately fifteen times per year in the 1970s. By the late 1990s, that number was down to eight, “a decline of 45 percent in barely two decades.”

These surveys raise the obvious question: If we’re no longer socializing with our neighbors, or having dinner parties with our friends, then what the hell are we doing? 

One possibility is screens. Conversation is hard; it’s much easier to chill with Netflix and the cable box. According to this depressing speculation, technology is an enabler of loneliness, allowing us to forget how isolated we’ve become. 

But there’s another possibility. While it seems clear that we’re spending less time with our friends and acquaintances (texting doesn’t count), we might be spending more time with our spouses and children. (McPherson found, for instance, that the percentage of Americans who said their spouse was their “only confidant” nearly doubled between 1985 and 2004.) If true, this would suggest that our social network isn’t fraying so much as it’s gradually becoming more focused and intimate.

A new paper by Katie Genadek, Sarah Flood and Joan Garcia Roman at the University of Minnesota, drawing on time-use survey data from 1965 to 2012, takes up these questions. Their data provide a fascinating portrait of the social trends shaping the lives of American families.

I’ll start with the punchline: on average, spouses are spending more time with each other than they did in 1965. This trend is particularly visible among married couples with children. Here are the scientists: “In 1965, individuals with children spent about two hours per day with both their spouse and child(ren); by 2012 this had increased 50 minutes to almost three hours.” Instead of bowling with neighbors, we’re taking our kids to soccer practice.

Of course, when it comes to togetherness time, quality matters more than quantity. One cynical explanation for the increase in family time is that much of it might involve screens. Maybe we’re not hanging out—we’re just sharing a wifi network. But the data doesn’t seem to show that. In 1975, couples spent 79 minutes watching television together. In 2012, that number had increased by only 13 minutes. What’s more, spouses are still making time for shared activities that don’t involve TV. Although our total amount of leisure time has remained remarkably constant – Keynes’ leisure society has not come to pass – we are more likely to spend this free time with our spouse.

This is particularly true among couples with children. The big news buried in this time-use data is that parents are doing a lot more parenting. In 1965, parents spent 41 minutes engaged in “primary care” for their little ones. That number had more than doubled, to 88 minutes, by 2012. We’re also far more likely to parent together, with the number of minutes spent as a family unit more than quadrupling, from 6 minutes in 1965 to 27 minutes in 2012. This increase in family time comes despite the sharp increase in women working outside the home.

It’s so easy to despair about the state of the world. What’s important to remember, however, is that these more intimate benchmarks of life are trending in the right direction. Amid all the calls to make America great again, we’re liable to forget that the greatest generations spent strikingly little time with their families. The nuclear family is supposed to be disintegrating, but these time diaries show us the opposite, as Americans are choosing to spend an increasing percentage of their time with their partners and children.

What makes this survey data more compelling is that it jibes with recent research showing the growing role played by our spouses in determining our own life happiness. In a separate study based on data from 47,000 couples, Genadek and Flood found that individuals are nearly twice as happy when they are with their spouse as when they’re not. Meanwhile, a recent meta-analysis of ninety-three studies by the psychologist Christine Proulx found that the rewards of a good marriage have surged in recent decades, with the most loving couples providing a bigger lift to the “personal well-being” of the partners. In fact, the influence of a good marriage on overall life satisfaction has nearly doubled since the late 1970s. Given this happiness boost, it shouldn’t be too surprising that we’re spending more time with our spouses. If we’re lucky, we already live with the people who make us happiest.

Genadek, Katie R., Sarah M. Flood and Joan Garcia Roman. “Trends in Spouses’ Shared Time in the United States, 1965-2012.” Demography (2016).

Why Facebook Rules the World

One day, when historians tell the strange story of the 21st century, this age of software and smartphones, populism and Pokemon, they will focus on a fundamental shift in the way people learn about the world. Within the span of a generation, we went from watching the same news shows on television, and reading the same newspapers in print, to getting a personalized feed of everything that our social network finds interesting, as filtered by a clever algorithm. The main goal of the algorithm is to keep us staring at the screen, increasing the slight odds that we might click on an advertisement.

I’m talking, of course, about Facebook. Given the huge amount of attention Facebook commands—roughly 22 percent of the internet time Americans spend on their mobile devices is spent on the social network—it has generated a relatively meager amount of empirical research. (It didn't help that the company’s last major experiment became a silly controversy.) Furthermore, most of the research that does exist explores the network’s impact on our social lives. In general, these studies find small, mostly positive correlations between Facebook use and a range of social measures: our Facebook friends are not the death of real friendship.

What this research largely overlooks, however, is a far more basic question: why is Facebook so popular? What is it about the social network (and social media in general) that makes it so attractive to human attention? It’s a mystery at the heart of the digital economy, in which fortunes hinge on the allocation of eyeballs.

One of the best answers for the appeal of Facebook comes from a 2013 paper by a team of researchers at UCSD. (First author Laura Mickes, senior authors Christine Harris and Nicholas Christenfeld.) Their paper begins with a paradox: the content of Facebook is often mundane, full of what the scientists refer to as “trivial ephemera.” Here’s a random sampling of my current feed: there’s an endorsement of a new gluten-free pasta, a smattering of child photos, emotional thoughts on politics and a post about a broken slide at the local park. As the scientists point out, these Facebook “microblogs” are full of quickly composed comments and photos, an impulsive record of everyday life.

Such content might not sound very appealing, especially when there is so much highly polished material already competing for our attention. (Why read our crazy uncle on the election when there’s the Times?) And yet, the “microblog” format has proven irresistible: Facebook’s “news” feed is the dominant information platform of our century, with nearly half of Americans using it as a source for news. This popularity, write the scientists, “suggests that something about such ‘microblogging’ resonates with human nature.”

To make sense of this resonance, the scientists conducted some simple memory experiments. In their first study, they compared the mnemonic power of Facebook posts to sentences from published books. (The Facebook posts were taken from the feeds of five research assistants, while the book sentences were randomly selected from new titles.) The subjects were shown 100 of these stimuli for three seconds each. Then they were given a recognition test consisting of these stimuli along with another 100 “lures”—similar content they had not seen—and asked to rate their confidence, on a twenty-point scale, as to whether they had previously been exposed to a given stimulus.
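Recognition tests like this are usually scored with signal-detection measures that separate genuine memory from a mere bias to answer “seen it.” The paper’s own analysis works with the full confidence ratings; the sketch below shows the simpler, standard d′ statistic, computed on invented counts.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') = z(hit rate) - z(false-alarm rate), with the
    Snodgrass & Corwin (1988) correction to keep rates off 0 and 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Invented counts for 100 studied items and 100 lures per condition:
print(round(d_prime(85, 15, 20, 80), 2))  # e.g., Facebook posts
print(round(d_prime(70, 30, 30, 70), 2))  # e.g., book sentences
```

A higher d′ means subjects were better at telling old items from lures, independent of how trigger-happy they were with the “seen it” button.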

According to the data, the Facebook posts were much more memorable than the published sentences. (This effect held even after controlling for sentence length and the use of “irregular typography,” such as emoticons.) But this wasn’t because people couldn’t remember the sentences extracted from books – their performance here was on par with other studies of textual memory. Rather, it was largely due to the “remarkable memorability” of the Facebook posts. Their content was trivial. It was also unforgettable.

In a follow-up condition, the scientists replaced the book sentences with photographs of human faces. (They also gathered a new collection of Facebook posts, to make sure their first set wasn’t an anomaly.) Although it’s long been argued that the human brain is “specially designed to process and store facial information,” the scientists found that the Facebook posts were still far easier to remember.

This is not a minor effect: the difference in memory performance between Facebook posts and these other stimuli is roughly equivalent to the difference between people with amnesia due to brain damage and those with a normal memory. What’s more, this effect exists even when the Facebook content is about people we don’t even know. Just imagine how memorable it is when the feed is drawn from our actual friends.

To better understand the mnemonic advantage of microblogs, the scientists ran several additional experiments. In one study, they culled text from CNN.com, drawing from both the news and entertainment sections. The text came in three forms: headlines, sentences from the articles, and reader comments. As you can probably guess, the reader comments were much more likely to be remembered, especially when compared to sentences from the articles. Subjects were also better at remembering content from the entertainment section, at least compared to news content.

Based on this data, the scientists argue that the extreme memorability of Facebook posts is being driven by at least two factors. The first is that people are drawn to “unfiltered, largely unconsidered postings,” whether it’s a Facebook microblog or a blog comment. When it comes to text, we don’t want polish and reflection. We want gut and fervor. We want Trump’s tweets.

The second factor is the personal filter of Facebook, which seems to take advantage of our social nature. We remember random updates from our news feed for the same reason we remember all the names of the Pitt-Jolie children: we are gossipy creatures, perpetually interested in the lives of others.

This research helps explain the value of Facebook, which is currently the 7th most valuable company in the world. The success of the company, which sells ads against our attention, is ultimately dependent on our willingness to read the haphazard content produced by other people for free. This might seem like a bug, but it’s actually an essential feature of the social network. “These especially memorable Facebook posts,” write the scientists, “may be far closer than professionally crafted sentences to tapping into the basic language capacities of our minds. Perhaps the very sentences that are so effortlessly generated are, for that reason, the same ones that are readily remembered.” While traditional media companies assume people want clean and professional prose, it turns out that we’re compelled to remember the casual and flippant. The problem, of course, is that the Facebook news algorithm is filtered to maximize attention, not truth, which can lead to the spread of sticky lies. When our private feed is full of memorable falsehoods, what happens to public discourse?

And it’s not just Facebook: the rise of the smartphone has encouraged a parallel rise in informal messaging. (We’ve gone from email to emojis in a few short years.) Consider Snapchat, the social network du jour. Its entire business model depends on the eagerness of users to consume raw visual content, produced by friends in the grip of System 1. In a universe overflowing with professional video content, it might seem perverse that we spend so much time watching grainy videos of random events. But this is what we care about. This is what we remember.

The creation of content used to be a professional activity. It used to require moveable type and a printing press and a film crew. But digital technology democratized the tools. And once that happened, once anyone could post anything, we discovered an entirely new form of text and video. We learned that the most powerful publishing platform is social, because it embeds the information in a social context. (And we are social animals.) But we also learned about our preferred style, which is the absence of style: the writing that sticks around longest in our memory is what seems to take the least amount of time to create. All art aspires to the condition of the Facebook post. 

Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., & Christenfeld, N. J. (2013). Major memory for microblogs. Memory & Cognition, 41(4), 481-489.

The Psychology of the Serenity Prayer

One of the essential techniques of Cognitive-Behavioral Therapy (CBT) is reappraisal. It’s a simple enough process: when you are awash in negative emotion, you should reappraise the stimulus to make yourself feel better.

Let’s say, for instance, that you are stuck in traffic and are running late to your best friend’s birthday party. You feel guilty and regretful; you are imagining all the mean things people are saying about you. “She’s always late!” “He’s so thoughtless.” “If he were a good friend, he’d be here already.”

To deal with this loop of negativity, CBT suggests that you think of new perspectives that lessen the stress. The traffic isn’t your fault. Nobody will notice. Now you get to finish this interesting podcast.

It’s an appealing approach, rooted in CBT’s larger philosophy that the way an individual perceives a situation is often more predictive of his or her feelings than the situation itself. 

There’s only one problem with reappraisal: it might not work. A recent meta-analysis showed that the technique is only modestly useful at modulating negative emotions. What’s worse, there’s suggestive evidence that, in some contexts, reappraisal may actually backfire. According to a 2013 paper by Allison Troy and colleagues, among people who were stressed about a controllable situation—say, being fired because of poor work performance—better reappraisal ability was associated with higher levels of depression.

Why doesn’t reappraisal always work? One possible answer involves an old hypothesis known as strategy-situation fit, first outlined by Richard Lazarus and Susan Folkman in the late 1980s. This approach assumes that there is no universal fix for anxiety and depression, no single tactic that always grants us peace of mind. Instead, we must match our strategy to the situation, since a technique’s effectiveness depends on the larger context.

A new paper by Simon Haines et al. (senior author Peter Koval) in Psychological Science provides fresh evidence for the strategy-situation fit model. While previous research has suggested that the success of reappraisal depends on the nature of the stressor—it’s only useful when we can’t control the source of the stress—these Australian researchers wanted to measure the relevant variables in the real world, not just in the lab. To do this, they designed a smartphone app that pushed out surveys at random moments. Each survey asked participants a few questions about their use of reappraisal and the controllability of their situation. These responses were then correlated with several questionnaires measuring well-being and mental health.

The results confirmed the importance of strategy-situation fit. According to the data, people with lower levels of well-being (they had more depressive symptoms and/or stress) used reappraisal in the wrong contexts, increasing their use of the technique when they were in situations they perceived as controllable. For example, instead of leaving the house earlier, or trying to perform better at work, people with poorer “strategy-situation fit” might spend time trying to talk themselves into a better mood. People with higher levels of well-being, in contrast, were more likely to use reappraisal at the right time, when they were confronted with situations they felt they could not control. (Bad weather, mass layoffs, etc.) This leads Haines et al. to conclude that, “rather than being a panacea, reappraisal may be adaptive only in relatively uncontrollable situations.”
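The paper’s statistics are more elaborate than this, but the core quantity is intuitive: for each person, how strongly does reappraisal use track the uncontrollability of the moment? Here is a toy version of such a within-person fit index, with invented survey responses.

```python
from statistics import correlation  # Python 3.10+

def fit_index(surveys):
    """Within-person strategy-situation fit: how strongly momentary
    reappraisal use tracks perceived *uncontrollability*. Positive
    values mean reappraisal is saved for uncontrollable moments."""
    reappraisal = [s["reappraisal"] for s in surveys]
    uncontrollability = [8 - s["controllability"] for s in surveys]  # flip a 1-7 scale
    return correlation(reappraisal, uncontrollability)

# Invented 1-7 ratings from one participant's random prompts:
surveys = [
    {"reappraisal": 6, "controllability": 2},
    {"reappraisal": 2, "controllability": 6},
    {"reappraisal": 5, "controllability": 3},
    {"reappraisal": 1, "controllability": 7},
]
print(round(fit_index(surveys), 2))  # close to +1: good strategy-situation fit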

Why doesn’t reappraisal help when we can influence the situation? One possibility is that focusing on our reaction might make us less likely to take our emotions seriously. We’re so focused on changing our thoughts—think positive!—that we forget to seek an effective solution. 

Now for the caveats. The most obvious limitation of this paper is that the researchers relied on subjects to assess the controllability of a given situation; there were no objective measurements. The second limitation is the lack of causal data. Because this was not a longitudinal study, it’s still unclear whether higher levels of well-being are a consequence or a precursor of more strategic reappraisal use. How best to deal with our emotions is an ancient question. It won’t be solved anytime soon.

That said, this study does offer some useful advice for practitioners and patients using CBT. As I noted in an earlier blog, there is worrying evidence that CBT has gotten less effective over time, at least as measured by its ability to reduce depressive symptoms. (One of the leading suspects behind this trend is the growing popularity of the treatment, which has led more inexperienced therapists to begin using it.) While more study is clearly needed, this research suggests ways in which standard CBT might be improved. It all comes down to an insight summarized by the great Reinhold Niebuhr in the Serenity Prayer:

God, grant me the serenity to accept the things I cannot change,

Courage to change the things I can,

And wisdom to know the difference.                                         

That’s wisdom: tailoring our response based on what we can and cannot control. Serenity is a noble goal, but sometimes the best way to fix ourselves is to first fix the world.

Haines, Simon J., et al. "The Wisdom to Know the Difference: Strategy-Situation Fit in Emotion Regulation in Daily Life Is Associated With Well-Being." Psychological Science (2016): 0956797616669086.

How Southwest Airlines Is Changing Modern Science

The history of science is largely the history of individual genius. From Galileo to Einstein, Isaac Newton to Charles Darwin, we tend to celebrate the breakthroughs achieved by a mind working by itself, seeing more reality than anyone has ever seen before.

It’s a romantic narrative. It’s also obsolete. As documented in a pair of Science papers by Stefan Wuchty, Benjamin Jones and Brian Uzzi, modern science is increasingly a team sport: more than 80 percent of science papers are now co-authored. These teams are also producing the most influential research, as papers with multiple authors are 6.3 times more likely to get at least 1000 citations. The era of the lone genius is over.

What’s causing the dramatic increase in scientific collaboration? One possibility is that the rise of teams is a response to the increasing complexity of modern science. To advance knowledge in the 21st century, one has to master an astonishing amount of information and experimental know-how; because we have discovered so much, it’s harder to discover something new. (In other words, the mysteries that remain often exceed the capabilities of the individual mind.) This means that the most important contributions now require collaboration, as people from different specialties work together to solve extremely difficult problems.

But this might not be the only reason scientists are working together more frequently. Another possibility is that the rise of teams is less about shifts in knowledge and more about the increasing ease of interacting with other researchers. It’s not about science getting hard. It’s about collaboration getting easy.

While it seems likely that both of these explanations are true—the trend is probably driven by multiple factors—a new paper emphasizes the changes that have reduced the costs of academic collaboration. To test the idea, the economists Christian Catalini, Christian Fons-Rosen and Patrick Gaulé looked at what happens to scientific teams after Southwest Airlines enters a metropolitan market. (On average, the entrance of Southwest leads to a roughly 20 percent reduction in fares and a 44 percent increase in passengers.) If these research partnerships are held back by practical obstacles—money, time, distance, etc.—then the arrival of Southwest should lead to a spike in teamwork.

That’s exactly what they found. According to the researchers, after Southwest begins a new route collaborations among scientists increase across every scientific discipline. (Physicists increase their collaborations by 26 percent, while biologists seem to really love cheap airfare: their collaborations increase by 85 percent.) To better understand these trends, and to rule out some possible confounds, Catalini et al. zoomed in on collaborations among chemists. They tracked the research produced by 819 pairs of chemists between 1993 and 2012. Once again, they found that the entry of Southwest into a new market leads to an approximately 30 percent spike in collaboration among chemists living near the new routes. What’s more, this trend towards teamwork showed no signs of existing before the arrival of the low-cost airline.
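The paper’s econometrics involve fixed effects and a battery of controls, but the basic logic resembles a difference-in-differences comparison: how did collaboration change for chemist pairs whose cities gained a Southwest route, relative to pairs whose cities never did? A toy version, with invented numbers:

```python
def diff_in_diff(pre_treated, post_treated, pre_control, post_control):
    """Change in collaborations for chemist pairs whose cities gained a
    Southwest route, minus the change for pairs that never did."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Invented yearly collaboration rates per pair of chemists:
effect = diff_in_diff(pre_treated=1.00, post_treated=1.30,
                      pre_control=1.00, post_control=1.01)
print(f"estimated entry effect: {effect:+.2f} collaborations per pair")
```

Subtracting the control group’s change is what rules out explanations like “collaboration was rising everywhere anyway,” which is also why the absence of pre-existing trends matters so much.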

At first glance, it seems likely that these new collaborations triggered by Southwest will produce research of lower quality. After all, the fact that the scientists waited to work together until airfares were slightly cheaper suggests that they didn’t think their new partnership would create a lot of value. (A really enticing collaboration should have been worth a more expensive flight, especially since the arrival of Southwest didn’t significantly increase the number of direct routes.) But that isn’t what Catalini et al. found. Instead, they discovered that Southwest’s entry into a market led to an increase in higher quality publications, at least as measured by the number of citations. Taken together, these results suggest that cheaper air travel is not only redrawing the map of scientific collaboration, but fundamentally improving the quality of research.

There is one last fascinating implication of this dataset. The spread of Southwest paralleled the rise of the Internet, as it became far easier to communicate and collaborate using digital tools, such as email and Skype. In theory, these virtual interactions should make face-to-face conversations unnecessary. Why put up with the hassle of air travel when there’s Facetime? Why meet in person when there’s Google Docs? The Death of Distance and all that.

But this new paper is a reminder that face-to-face interactions are still uniquely valuable. I’ve written before about the research of Isaac Kohane, a professor at Harvard Medical School. A few years ago, he published a study that looked at the influence of physical proximity on the quality of research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Geography turned out to be a crucial variable: when coauthors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were located within ten meters of each other, while the least cited papers tended to emerge from collaborators who were a kilometer or more apart.
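As a rough illustration of that kind of proximity analysis, the sketch below bins papers by the distance between coauthors and averages citations within each band. The field names and figures are invented, not Kohane’s data.

```python
def citations_by_distance(papers):
    """Average citations by coauthor-distance band, mirroring the kind
    of proximity analysis described above. All numbers are invented."""
    bands = {"<10 m": (0, 10), "10 m-1 km": (10, 1000), ">1 km": (1000, float("inf"))}
    out = {}
    for label, (lo, hi) in bands.items():
        cites = [p["citations"] for p in papers if lo <= p["distance_m"] < hi]
        out[label] = sum(cites) / len(cites) if cites else None
    return out

papers = [
    {"distance_m": 5, "citations": 41},
    {"distance_m": 8, "citations": 37},
    {"distance_m": 250, "citations": 18},
    {"distance_m": 4000, "citations": 7},
]
print(citations_by_distance(papers))
```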

Even in the 21st century, the best way to work together is to be together. The digital world is full of collaborative tools, but these tools are still not a substitute for meetings that take place in person.* That’s why we get on a plane.

Never change, Southwest.

Catalini, Christian, Christian Fons-Rosen, and Patrick Gaulé. "Did cheaper flights change the geography of scientific collaboration?" SSRN Working Paper (2016). 

* Consider a study that looked at the spread of Bitnet, a precursor to the internet. As one might expect, the computer network significantly increased collaboration among electrical engineers at connected universities. However, the boost in collaboration was far larger among engineers who were within driving distance of each other. Yet more evidence for the power of in-person interactions comes from a 2015 paper by Catalini, which looked at the relocation of scientists following the removal of asbestos from Paris Jussieu, the largest science university in France. He found that labs randomly relocated to the same area were 3.4 to 5 times more likely to collaborate. Meatspace matters.

Do Social Scientists Know What They're Talking About?

The world is lousy with experts. They are everywhere: opining in op-eds, prognosticating on television, tweeting out their predictions. These experts have currency because their opinions are, at least in theory, grounded in their expertise. Unlike the rest of us, they know what they’re talking about.

But do they really? The most famous study of political experts, led by Philip Tetlock at the University of Pennsylvania, concluded that the vast majority of pundits barely beat random chance when it came to predicting future events, such as the winner of the next presidential election. They spun out confident predictions but were never held accountable when their predictions proved wrong. The end result was a public sphere that rewarded overconfident blowhards. Cable news, Q.E.D.

While the thinking sins identified by Tetlock are universal - we’re all vulnerable to overconfidence and confirmation bias - it’s not clear that the flaws of political experts can be generalized to other forms of expertise. For one thing, predicting geopolitics is famously fraught: there are countless variables to consider, interacting in unknowable ways. It’s possible, then, that experts might perform better in a narrower setting, attempting to predict the outcomes of experiments in their own field.

A new study, by Stefano DellaVigna at UC Berkeley and Devin Pope at the University of Chicago, aims to put academic experts to this more stringent test. They assembled 208 experts from the fields of economics, behavioral economics and psychology and asked them to forecast the impact of different motivators on the performance of subjects doing an extremely tedious task. (They had to press the “a” and “b” buttons on their keyboard as quickly as possible for ten minutes.) The experimental conditions ranged from the obvious - paying for better performance - to the subtle, as DellaVigna and Pope also looked at the influence of peer comparisons, charity and loss aversion. What makes these questions interesting is that DellaVigna and Pope already knew the answers: they’d run these motivational studies on nearly 10,000 subjects. The mystery was whether or not the experts could predict the actual results.

To make the forecasting easier, the experts were given three benchmark conditions and told the average number of presses, or “points,” in each condition. For instance, when subjects were told that their performance would not affect their payment, they only averaged 1521 points. However, when they were paid 10 cents for every 100 points, they averaged 2175 total points. The experts were asked to predict the number of points in fifteen additional experimental conditions.

The good news for experts is that these academics did far better than Tetlock’s pundits. When asked to predict the average points in each condition, they demonstrated the wisdom of crowds: their predictions were off by only 5 percent. If you’re a policy maker, trying to anticipate the impact of a motivational nudge, you’d be well served by asking a bunch of academics for their opinions. 
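That wisdom-of-crowds result is easy to see in miniature: averaging forecasts cancels out idiosyncratic errors. In the sketch below, the forecasts are invented, and the 2,175-point benchmark from above stands in as the ground truth.

```python
def crowd_vs_individuals(forecasts, truth):
    """The error of the averaged forecast versus the average error of
    the individual forecasts: pooling cancels idiosyncratic mistakes."""
    pooled = sum(forecasts) / len(forecasts)
    crowd_error = abs(pooled - truth)
    mean_individual_error = sum(abs(f - truth) for f in forecasts) / len(forecasts)
    return crowd_error, mean_individual_error

# Invented forecasts for a single condition; 2175 points stands in as
# the known result (the benchmark figure mentioned above).
crowd, individual = crowd_vs_individuals([1900, 2500, 2050, 2400, 2150], truth=2175)
print(f"crowd error: {crowd:.0f} points; mean individual error: {individual:.0f}")
```

Here the pooled forecast misses by 25 points while the average individual misses by nearly 200, which previews the bad news that follows.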

The bad news is that, on an individual level, these academics still weren’t very good. They might have looked prescient when their answers were pooled together, but the results were far less impressive if you looked at the accuracy of experts in isolation. Perhaps most distressing, at least for the egos of experts, is that non-scientists were much better at ranking the treatments against each other, forecasting which conditions would be most and least effective. (As DellaVigna pointed out in an email, this is less a consequence of expert failure and more a tribute to the fact that non-experts did “amazingly well” at the task.) The takeaway is straightforward: there might be predictive value in a diverse group of academics, but you’d be foolish to trust the forecast of a single one.

Furthermore, there was shockingly little relationship between academic credentials and forecasting performance. Full professors tended to underperform assistant professors, while having more Google Scholar citations was correlated with lower accuracy. (PhD students were “at least as good” as their bosses.) Academic experience clearly has virtues. But making better predictions about experiments does not seem to be one of them.

Since Tetlock published his damning critique of political pundits, he has gone on to study so-called “superforecasters,” those amateurs whose predictions of world events are consistently more accurate than those of intelligence analysts with access to classified information. (In general, these superforecasters share a particular temperament: they’re willing to learn from their mistakes, quick to update their beliefs and prone to thinking in shades of gray.) After mining the data, DellaVigna and Pope were able to identify their own superforecasters. As a group, these non-experts significantly outperformed the academics, improving on the average error rate of the professors by more than 20 percent. These people had no background in behavioral research. They were paid $1.50 for 10 minutes of their time. And yet, they were better than the experts at predicting research outcomes.

The limitations of expertise are best revealed by the failure of the experts to foresee their own shortcomings. When the academics were surveyed by DellaVigna and Pope, they predicted that high-citation experts would be significantly more accurate. (The opposite turned out to be true.) They also expected PhD students to underperform the professors - that didn’t happen, either - and academics with training in psychology to perform the best. (The data points in the opposite direction.)

It’s a poignant lapse. These experts have been trained in human behavior. They have studied our biases and flaws. And yet, when it comes to their own performance, they are blind to their own blindspots. The hardest thing to know is what we don’t.

DellaVigna, Stefano, and Devin Pope. "Predicting Experimental Results: Who Knows What?" NBER Working Paper (2016).

The Power of Family Memory

In a famous series of studies conducted in the 1980s, the psychologists Betty Hart and Todd Risley gave parents a new variable to worry about: the number of words they speak to their children. According to Hart and Risley, the quantity of spoken language in a household is predictive of IQ scores, vocabulary size and overall academic success. The language gap even begins to explain socio-economic disparities in educational outcomes, as upper-class parents speak, on average, about 3.5 times more to their kids than their poorer peers. Hart and Risley referred to the lack of spoken words in poor households as "the early catastrophe."

In recent years, however, it’s become clear that it’s not just the amount of language that counts. Rather, researchers have found that some kinds of conversations are far more effective at promoting mental and emotional development than others. While all parents engage in roughly similar amounts of so-called “business talk” – these are interactions in which the parent is offering instructions, such as “Hold out your hands,” or “Stop whining!” – there is far more variation when it comes to what Hart and Risley called “language dancing,” or conversations in which the parent and child are engaged in a genuine dialogue. According to a 2009 study by researchers at the UCLA School of Public Health, parent-child dialogues were six times as effective in promoting the development of language skills as those in which the adult did all the talking.

So conversation is better than instruction; dialogues over monologues. But this only leads to the next practical question: What’s the best kind of conversation to have with children? If we only have a limited amount of “language dancing” time every day - my kids usually start negotiating for dessert roughly five minutes into dinner - then what should we choose to chat about? And this isn’t just a concern for precious helicopter parents. Rather, it’s a relevant topic for researchers trying to design interventions for at-risk children, as they attempt to give caregivers the tools to ensure successful development.

A new answer is emerging. According to a recent paper by the psychologists Karen Salmon and Elaine Reese, one of the best subjects of parent-child conversation is the past, or what they refer to as “elaborative reminiscing.” As evidence, Salmon and Reese cite a wide variety of studies, drawn from more than three decades of research on children between the ages of 18 months and 5 years, all of which converge on a similar theme: discussing our memories is an extremely effective way to promote cognitive and emotional growth. Maybe it’s a scene from our last family vacation, or an accounting of what happened at school that day, or that time I locked my keys in the car - the details of the memory don’t seem to matter that much. What does is that we remember together.

Here’s an example of the everyday reminiscing the scientists recommend:

Mother: “What was the first thing he [the barber] did?”

Child: “Bzzzz.” (running his hand over his head)

Mother: “He used the clippers, and I think you liked the clippers. And you know how I know? Because you were smiling.”

Child: “Because they were tickling.”

Mother: “They were tickling, is that how they felt? Did they feel scratchy?”

Child: “No.”

Mother: “And after the clippers, what did he use then?”

Child: “The spray.”

Mother: “Yes. Why did he use the spray?”

Child: (silent)

Mother: “He used the spray to tidy your hair. And I noticed that you closed your eyes, and I thought ‘Jesse’s feeling a little bit scared,’ but you didn’t move or cry and I thought you were being very brave.”

It’s such an ordinary conversation, but Salmon and Reese point out its many virtues. For one thing, the questions lead the child through his recent haircut experience. He is learning how to remember, what it takes to unpack a scene, the mechanics of turning the past into a story. Over time, these skills play a huge role in language development, which is why children who engage in more elaborative reminiscing with their parents tend to have more advanced vocabularies, better early literacy scores and improved narrative skills. In fact, one study found that teaching low-income mothers to “reminisce in more elaborative ways” led to bigger improvements in narrative skills and story comprehension than an interactive book-reading program.

But talking about the past isn’t just about turning our kids into better storytellers. It’s also about boosting their emotional intelligence, teaching them how to handle the feelings they’d rather forget. In A Book About Love, I wrote about research showing that children raised in households that engage in the most shared recollection report higher levels of emotional well-being and a stronger sense of personal identity. The family unit also becomes stronger, as children and parents who know more about the past score higher on a widely used measure of “reported family functioning.” Salmon and Reese expand on these findings, citing research showing that emotional reminiscing is linked to long-term improvements in children’s ability to regulate negative emotions, handle difficult situations and identify feelings in themselves and others.

Consider the haircut conversation above. Notice how the mother identifies the feelings felt by the child: enjoyment, tickling, fear. She suggests triggers for these emotions - the clippers, the water spray - and helps her son understand their fleeting nature. (Because the feelings are no longer present, they can be discussed calmly. That’s why talking about remembered emotions is often more useful than talking about emotions in the heat of the moment.) The virtue of such dialogues is that they teach children how to cope with their feelings, even when what they feel is fury and fear. As Salmon and Reese note, these are particularly important skills for mothers who have been exposed to adverse or traumatic experiences, such as drug abuse or domestic violence. Studies show that these at-risk parents are much less likely to incorporate “emotion words” when talking with their children. And when they do discuss their memories, Salmon and Reese write, they often “remain stuck in anger.” Their past isn’t past yet.

Perhaps this is another benefit of elaborative reminiscing. When we talk about our memories with loved ones, we translate the event into language, giving that swirl of emotion a narrative arc. (As the psychologist James Pennebaker has written, "Once it [a painful memory] is language based, people can better understand the experience and ultimately put it behind them.") And so the conversation becomes a moment of therapy, allowing us to make sense of what happened and move on. 

It was just a haircut, but you were so brave.   

Salmon, Karen, and Elaine Reese. "The Benefits of Reminiscing With Young Children." Current Directions in Psychological Science 25.4 (2016): 233-238.       

 

The Overview Effect

After six weeks in orbit, circling the earth in a claustrophobic space station, the three-person crew of Skylab 4 decided to go on strike. For 24 hours, the astronauts refused to work, even turning off the communications radio that linked them to Earth. While NASA was confused by the space revolt (mission control worried the astronauts were depressed), the men up in space insisted they just wanted more time to admire their view of the earth. As the NASA flight director later put it, the astronauts were asserting “their needs to reflect, to observe, to find their place amid these baffling, fascinating, unprecedented experiences.”

The Skylab 4 crew was experiencing a phenomenon known as the overview effect, which refers to the intense emotional reaction that can be triggered by the sight of the earth from beyond its atmosphere. Sam Durrance, who flew on two shuttle missions, described the feeling like this: “You’ve seen pictures and you’ve heard people talk about it. But nothing can prepare you for what it actually looks like. The Earth is dramatically beautiful when you see it from orbit, more beautiful than any picture you’ve ever seen. It’s an emotional experience because you’re removed from the Earth but at the same time you feel this incredible connection to the Earth like nothing I’d ever felt before.”

The Caribbean Sea, as seen from ISS Expedition 40

What’s most remarkable about the overview effect is that it lasts: the experience of awe often leaves a permanent mark on the lives of astronauts. A new paper by a team of scientists (the lead author is David Yaden at the University of Pennsylvania) investigates the overview effect in detail, with a particular focus on how this vision of earth can “settle into long-term changes in personal outlook and attitude involving the individual’s relationship to Earth and its inhabitants.” For many astronauts, this is the view they never get over.

How does this happen? How does a short-lived perception alter one’s identity? There is no easy answer. In this paper, the scientists focus on how the sight of the distant earth is so contrary to our usual perspective that it forces our “self-schema” to accommodate an entirely new point of view. We might conceptually understand that the earth is a lonely speck floating in space, a dot of blue amid so much black. But it’s an entirely different thing to bear witness to this reality, to see our fragile planet from hundreds of miles away. The end result is that the self itself is changed; this new view of earth alters one’s perspective on life, with the typical astronaut reporting “a greater affiliation with humanity as a whole.” Here’s Ed Gibson, the science pilot on Skylab 4: “You see how diminutive your life and concerns are compared to other things in the universe. Your life and concerns are important to you, of course. But you can see that a lot of the things you worry about do not make much difference in an overall sense.”

There are two interesting takeaways. The first one, emphasized in the paper, is that the overview effect might serve as a crucial coping mechanism for the challenges of space travel. Astronauts live a grueling existence: they are stressed, isolated and exhausted. They live in cramped quarters, eat terrible food and never stop working. If we are going to get people to Mars, then we need to give astronauts tools to endure their time on a spaceship. As the crew of Skylab 4 understood, one of the best ways to withstand space travel is to appreciate its strange beauty.

The second takeaway has to do with the power of awe and wonder. When you read old treatises on human nature, these lofty emotions are often celebrated. Aristotle argued that all inquiry began with the feeling of awe, that “it is owing to their wonder that men both now begin and at first began to philosophize.” René Descartes, meanwhile, referred to wonder as the first of the passions, “a sudden surprise of the soul that brings it to focus on things that strike it as unusual and extraordinary.” In short, these thinkers saw the experience of awe as a fundamental human state, a feeling so strong it could shape our lives.

But now? We have little time for awe in the 21st century; wonder is for the young and unsophisticated. To the extent we consider these feelings at all, it’s for a few brief moments: on a hike in a national park, or watching a child’s face as she first enters Disneyland. (And then we get out our phones and take a picture.) Instead of cultivating awe, we treat it as just another fleeting feeling; wonder is for those who don’t know any better.

The overview effect, however, is a reminder that these emotions can have a lasting impact. Like the Skylab 4 astronauts, we can push back against our hectic schedules, insisting that we find some time to stare out the window.  

Who knows? The view just might change your life.

Yaden, David B., et al. "The overview effect: Awe and self-transcendent experience in space flight." Psychology of Consciousness: Theory, Research, and Practice 3.1 (2016): 1.

 

How Magicians Make You Stupid

The egg bag magic trick is simple enough. A magician produces an egg and places it in a cloth bag. Then, the magician uses some deliberately clumsy sleight of hand, pretending to sneak the egg into his armpit. When the bag is revealed as empty, the audience assumes it knows where the egg really is.

But the egg isn’t there. The armpit was a false solution, distracting the crowd from the real trick: the bag contains a secret compartment. When the magician finally lifts his arm, the audience is impressed by the vanishing. How did he remove the egg from his armpit? It never occurs to them that the egg never left the bag.

Magicians are intuitive psychologists, reverse-engineering the mind and preying on all its weak spots. They build illusions out of our frailties, hiding rabbits in our attentional blind spots and distracting the eyes with hand waves and wands. And while people in the audience might be aware of their perceptual shortcomings - those fingers move so fast! - they are often blind to a crucial cognitive limitation, which allows magicians to keep us from deciphering the trick. In short, magicians know that people tend to fixate on particular answers (the egg is in the armpit), and thus ignore alternative ones (it’s a trick bag), even when the alternatives are easier to execute.

When it comes to problem-solving, this phenomenon is known as the Einstellung effect. (Einstellung is German for “setting” or “attitude.”) First identified by the psychologist Abraham Luchins in the early 1940s, the effect has since been replicated in numerous domains. Consider a study that gave chess experts a series of difficult chess problems, each of which contained two solutions. The players were asked to find the shortest possible way to win. The first solution was obvious and took five moves to execute. The second solution was less familiar, but could be achieved in only three moves. As expected, these expert players found the first solution right away. Unfortunately, most of them then failed to identify the second one, even though it was more efficient. The good answer blinded them to the better one.

Back to magic tricks. A new paper in Cognition, by Cyril Thomas and André Didierjean, extends the reach of the Einstellung effect by showing that it limits our problem-solving abilities even when the false solution is unfamiliar and unlikely. Put another way, preposterous explanations can also become mental blocks, preventing us from finding answers that should be obvious. To demonstrate this, the scientists showed 90 students one of three versions of a card trick. The first version went like this: a performer showed the subject a brown-backed card surrounded by six red-backed cards. After randomly touching the backs of the red cards, he asked the subject to choose one of the six, which was turned face up. It was a jack of hearts. The magician then flipped over the brown-backed card at the center, which was also a jack of hearts. The experiment concluded with the magician asking the subject to guess the secret of the trick. In this version, 83 percent of subjects quickly figured it out: all of the cards were the same.

The second version featured the same trick, except that the magician slyly introduced a false solution. Before a card was picked, he explained that he was able to influence other people’s choices through physical suggestions. He then touched the back of the red cards, acting as if these touches could sway the subject’s mind. After the trick was complete, these subjects were also asked to identify the secret. However, most of these subjects couldn’t figure it out: only 17 percent of people realized that every card was the jack of hearts. Their confusion persisted even after the magician encouraged them to keep thinking of alternative explanations.

This is a remarkable mental failure. It’s a reminder that our beliefs are not a mirror to the world, but rather bound up with the limits of the human mind. In this particular case, our inability to see the obvious trick seems to be a side-effect of our feeble working memory, which can only focus on a few bits of information at any given moment. (In an email, Thomas notes that it is more “economical to focus on one solution, and to not lose time…searching for a hypothetical alternative one.”) And so we fixate on the most salient answer, even when it makes no sense. As Thomas points out, a similar lapse explains the success of most mind-reading performances: we are so seduced by the false explanation (parapsychology!) that we neglect the obvious trick, which is that the magician gathered personal information about us from Facebook. The performance works because we lack the bandwidth to think of a far more reasonable explanation.

Thomas and Didierjean end their paper with a disturbing thought. “If a complete stranger (the magician) can fix spectators’ minds by convincing them that he/she can control their individual choice with his own gesture,” they write, “to what extent can an authority figure (e.g., policeman) or someone that we trust (e.g., doctors, politicians) fix our mind with unsuitable ideas?” They don’t answer the question, but they don’t need to. Just turn on the news.

Thomas, Cyril, and André Didierjean. "Magicians fix your mind: How unlikely solutions block obvious ones." Cognition 154 (2016): 169-173.

What Can Toilet Paper Teach Us About Poverty?

“Costco is where you go broke saving money.”

-My Uncle

The fundamental paradox of big box stores is that the only way to save money is to spend lots of it. Want to get a discount on that shampoo? Here's a liter. That’s a great price for chapstick – now you have 32 of them. The same logic applies to most staples of modern life, from diapers to Pellegrino, Uni-ball pens to laundry detergent.

For consumers, this buy-in-bulk strategy can lead to real savings, especially if the alternative is a bodega or Whole Foods. (Brand name diapers, for instance, cost nearly twice as much at my local grocery store compared to Costco.) However, not every American is equally likely to seek out these discounts. In particular, some studies have found that lower-income households – the ones who could benefit the most from that huge bottle of Kirkland shampoo – pay higher prices because they don’t make bulk purchases.

A new paper, “Frugality is Hard to Afford,” by A. Yesim Orhun and Mike Palazzolo, investigates why this phenomenon exists. Their data set featured the toilet paper purchases of more than 100,000 American families over seven years. Orhun and Palazzolo focused on toilet paper for several reasons. First, consumption of toilet paper is relatively constant. Second, toilet paper is easy to store – it doesn’t spoil – making it an ideal product to purchase in bulk, at least if you’re trying to get a discount. Third, the differences between brands of toilet paper are rather small, at least when compared to other consumer products such as detergent and toothpaste.

So what did Orhun and Palazzolo find? As expected, lower-income households were far less likely to take advantage of the lower unit prices that come with bulk purchases. Over time, these shopping habits add up, as the poorest families end up paying, on average, 5.9 percent more per sheet of toilet paper.
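The arithmetic behind that premium is simple unit pricing. Here’s a minimal back-of-the-envelope sketch in Python, using hypothetical prices (my own made-up numbers, not figures from the paper) to show how buying small packages as needed translates into paying more per sheet:

    # Back-of-the-envelope unit-price comparison.
    # All prices and sheet counts are hypothetical, for illustration only;
    # they are not data from Orhun and Palazzolo's study.
    small_pack_price, small_pack_sheets = 4.99, 1000   # 4-roll pack, bought as needed
    bulk_pack_price, bulk_pack_sheets = 17.99, 4800    # 12-roll warehouse pack

    unit_small = small_pack_price / small_pack_sheets  # dollars per sheet
    unit_bulk = bulk_pack_price / bulk_pack_sheets

    premium = (unit_small / unit_bulk - 1) * 100
    print(f"Per-sheet premium for small packs: {premium:.1f}%")

With these made-up numbers, the small-pack shopper pays about 33 percent more per sheet; the 5.9 percent reported by Orhun and Palazzolo is the average gap observed across actual households.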

The question, of course, is why this behavior exists. Shouldn’t poor households be the most determined to shop around for cheap rolls? The most obvious explanation is what Orhun and Palazzolo refer to as a liquidity constraint: the poor simply lack the cash to “invest” in a big package of toilet paper. As a result, they are forced to buy basic household supplies on an as-needed basis, which makes it much harder to find the best possible price.

But this is not the only constraint imposed by poverty. In a 2013 Science paper, the behavioral scientists Anandi Mani, Sendhil Mullainathan, Eldar Shafir and Jiaying Zhao argued that not having money also imposes a mental burden, as our budgetary worries consume scarce attentional resources. This makes it harder for low-income households to plan for the future, whether it’s buying toilet paper in bulk or saving for retirement. “The poor, in this view, are less capable not because of inherent traits,” write the scientists, “but because the very context of poverty imposes load and impedes cognitive capacity.”

Consider a clever experiment conducted by Mani et al. at a New Jersey mall. They asked shoppers about various hypothetical scenarios involving a financial problem. For instance, they might be told that their “car is having some trouble and requires $[X] to be fixed.” Some subjects were told that the repair was extremely expensive ($1500), while others were told it was relatively cheap ($150). Then, all participants were given a series of challenging cognitive tasks, including some questions from an intelligence test and a measure of impulse control.

The results were startling. Among rich subjects, it didn’t really matter how much the car cost to fix – they performed equally well whether the repair estimate was $150 or $1500. Poor subjects, however, showed a troubling difference. When the repair estimate was low, they performed roughly as well as the rich subjects. But when the repair estimate was high, they suddenly showed a steep drop-off in performance on both tests, comparable in magnitude to the mental deficit associated with losing a full night of sleep or becoming an alcoholic.