How A Tired Mind Limits the Body

When it comes to our self-understanding, we have been held back by an extraordinary philosophical mistake. It’s a forgivable error, since it reflects our most basic intuitions. The mistake I’m talking about is dualism, which holds that the mind and body are fundamentally separate things.

To borrow the famous framework of René Descartes, the human mind is a “thinking thing,” composed of an immaterial substance. (Our thoughts are airy nothings, etc.) The body, in contrast, is an “extended thing,” just a mortal machine that bleeds. For Descartes, dualism was a defining feature of humanity. Every animal has a body. Only we have a mind.

The dualist faith continues to shape our lives. Like Descartes, we tend to assume that mental events have mental causes—you are sad because your brain is sad—and that physical events have physical causes. (If your back is in pain, there’s something wrong with your back.) Dualism is why we treat depression with pills (rather than exercise, which is often just as effective) and undergo so many spinal surgeries (which are often ineffective).

Dualism seems obviously true. But it’s mostly false. In recent years, modern neuroscience has demolished these old Cartesian distinctions. It has done this mostly by showing how the body is not a mere power plant for the brain, but rather shapes every aspect of conscious experience. The bacteria in your intestines, for instance, seem to influence your mood, while that feeling of fear probably began as a slightly elevated heart rate. Our memory improves when it’s connected to physical movement, and the sweat glands in your palm can anticipate your gambling mistakes long before the cortex catches up. As the neuroscientist Antonio Damasio has written, “The body contributes more than life support. It contributes a content that is part and parcel of the workings of the normal mind.”

These studies are convincing. And yet, even if one acknowledges the subtle powers of the body—the soul is surprisingly carnal—there is still one realm in which dualism is taken for granted: athletic performance. When we look at our best athletes, we appreciate them as physical specimens, blessed with better flesh than the rest of us. They must have bigger hearts and more fast-twitch muscle fibers; highly efficient lungs and lower resting pulses. We ignore their “thinking thing” and focus instead on their body, “the extended thing.”

But even here the body/mind distinction proves illusory. Consider a new paper by Daniel Longman, Jay Stock and Jonathan Wells. Their subjects were sixty-two male rowers from the University of Cambridge. They were all in excellent shape. On their first visit to the lab, the men rowed as intensely as possible for three minutes as the scientists tracked their total power output. On their second visit, the men were given an arduous mental task. Seventy-five words were briefly flashed on a screen; their job was to remember as many of them as possible.

The last visit to the lab combined these two measures. While the men worked up a sweat on the rowing machine, they were simultaneously shown a new set of words and asked to remember them. As expected, combining the tasks led to a dropoff in performance: the men remembered fewer words and generated less power on the rowing machine.

But here’s the interesting part: the decline was asymmetric, with physical performance suffering a dropoff that was roughly 25 percent greater than mental performance.
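
To make that asymmetry concrete, here is a minimal sketch of the arithmetic, using invented numbers rather than the paper’s actual data:

```python
# Hypothetical single-task baselines and dual-task scores, invented
# purely to illustrate what an "asymmetric decline" means.
power_solo, power_dual = 400.0, 360.0  # watts on the rowing machine
words_solo, words_dual = 20.0, 18.4    # words recalled

physical_drop = (power_solo - power_dual) / power_solo  # 10% decline
mental_drop = (words_solo - words_dual) / words_solo    # 8% decline

# "Roughly 25 percent greater" compares the two relative declines:
asymmetry = (physical_drop - mental_drop) / mental_drop
print(f"physical {physical_drop:.0%}, mental {mental_drop:.0%}, "
      f"asymmetry {asymmetry:.0%}")  # -> physical 10%, mental 8%, asymmetry 25%
```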

What accounts for this asymmetry? The scientists suggest that it’s rooted in the scarcity of blood sugar and oxygen, as the brain and body compete for the same finite resources. And since we are creatures of cogito—thinking is our competitive advantage—it only makes sense that we’d privilege the cortex over our quadriceps.

The larger lesson is that our thoughts and body are not separate systems—they are deeply intertwined, engaged in a constant dialectic. Those rowers didn’t perform worse because their muscles were run down. Rather, they had less physical power because their selfish brain decided to feed itself first. This means that the best athletes don’t just have better bodies – they also have minds that don’t hold them back.      

Such research adds to the evidence for the so-called Central Governor theory of physical endurance. (I wrote about this recently in Men’s Health.) Most closely associated with Timothy Noakes, now an emeritus professor at the University of Cape Town, the Central Governor theory argues that the feeling of bodily fatigue is primarily caused by the brain, and not the body. As Noakes points out, in the final stages of a race, up to 65 percent of muscle fibers in the leg remain inactive. In addition, levels of ATP—the molecule used to transport energy within our cells—almost never fall below 60 percent of their resting value. This suggests that we still have plenty of energy left, even when the body feels exhausted. The Central Governor is just too scared to use it.

It’s a simple idea with radical implications. After all, we’ve assumed for nearly a century that our physical limits were largely reducible to the laws of muscular chemistry. (In the 1920s, the British physiologist and Nobel laureate Archibald Hill began writing about the effect of “oxygen debt” and the accumulation of lactic acid during intense exercise.) Noakes, however, argues that the reality is far more complicated, and that our sense of fatigue is a subjective mental construct, based on countless variables, from the temperature of the skin to the cheers of the crowd. “I am not saying that what takes place in the muscles is irrelevant,” Noakes writes in his autobiography, Challenging Beliefs. “What I am saying is that what takes place physiologically in the muscles is not what causes fatigue.”

And this brings us back to dualism. After all, unless you admit the enormous mental component of physical performance, you won’t be able to train effectively. You’ll be focused on VO2 max and lactate concentrations—highly imperfect measures at best—when you should be building up the threshold of your Central Governor.

So how does one train the Central Governor? In my Men’s Health piece, I profiled Holden Macrae, professor of Sports Medicine at Pepperdine. As part of the Red Bull High Performance research project, he gave endurance athletes a tedious mental chore for 30 minutes. Once their brain was sufficiently run down, Macrae then had them perform a difficult cycling workout. “We found that the power output of the mentally pre-fatigued athletes was way lower than the non-fatigued,” he told me. “It didn’t matter that their bodies were fresh. Their brains were tired, and that shaped their performance.”

Macrae argues that these findings have practical implications for training. If elite athletes are looking to push the boundaries of their endurance, then they should begin their physical training after a brain workout. “Because you are stressing the mind and the body at the same time, you are forcing yourself to write a new software program,” he says. “It’s the same logic as high-altitude training, only you don’t have to go anywhere. You just have to do something boring first.”

The appeal of dualism is inseparable from the fact that it feels true; the body and mind seem like such separate entities. But one of the profound potentials of modern neuroscience is the way it can falsify our longstanding assumptions about human nature. You are not your brain, and your body is not just a body; the soul and the flesh have a very porous relationship. Once we understand that, we can find ways to get more out of both.

Or at least get in a better workout.

Longman, Daniel, Jay T. Stock, and Jonathan CK Wells. "A trade-off between cognitive and physical performance, with relative preservation of brain function." Scientific Reports 7.1 (2017): 13709.

Does Anyone Ever Change Their Mind?

Democracy is expensive. During the 2016 general election, candidates spent nearly $7 billion on their campaigns. (More than $2.5 billion was spent just on the presidential contest.) This money paid for attack ads on television and direct marketing in the mail; it went to voter outreach in swing districts, fancy consultants and targeted ads on Facebook. The goal of all this spending was simple: to persuade more Americans to vote for them.

Did it work? Were those billions well spent? According to a new paper by Joshua Kalla of UC-Berkeley and David Broockman at Stanford, the overwhelming majority of campaign activity failed to persuade voters. As they bluntly state, “We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero.” Not close to zero. Not even one or two percent. Zero.

Kalla and Broockman come to this shocking conclusion by conducting the first meta-analysis of campaign outreach and advertising. Based on a review of forty field experiments, they found that the average effect of all these professional interventions was negligible. (Or, to be exact, −0.02 percentage points.) While two of the forty studies did find a significant shift in voter behavior, Kalla and Broockman rightly note that these studies looked at interventions with limited applicability. (In one case, the candidate himself knocked on doors, while the other intervention relied on an onerous survey that most voters would never answer.)
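
For readers curious what an “average effect” means here: a meta-analysis typically pools each study’s estimate, weighted by its precision. Here is a minimal sketch of that standard inverse-variance method, with invented numbers rather than Kalla and Broockman’s data:

```python
import numpy as np

# Invented per-study persuasion effects (percentage points) and their
# standard errors; for illustration only.
effects = np.array([0.5, -0.3, 0.1, -0.4, 0.2])
errors  = np.array([0.6,  0.4, 0.5,  0.3, 0.7])

# Fixed-effect meta-analysis: weight each study by 1/SE^2, so the most
# precise experiments count the most toward the pooled estimate.
weights = 1.0 / errors**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect: {pooled:+.2f} +/- {1.96 * pooled_se:.2f} points")
```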

However, Kalla and Broockman weren’t content to re-analyze the null results of the past. Given the “dearth of statistically precise studies,” the political scientists decided to conduct nine of their own field experiments. They teamed up with Working America, the community organizing affiliate of the AFL-CIO, to study the impact of canvassing in a variety of different campaigns.

The good news, at least for the political industrial complex, is that Working America had an impact during primaries and special elections. Take the Democratic primary for the mayor of Philadelphia. Kalla and Broockman estimate that a Working America canvass conducted six weeks before election day boosted support for their endorsed candidate by approximately 11 percentage points. A similar effect was observed during a special election for a seat in the Washington State Legislature.

However, the effect size shrank to zero when Kalla and Broockman looked at attempts to influence voters during the general election. When Working America tried to persuade people in Ohio, North Carolina, Florida and Missouri to vote for their candidates for the U.S. Senate, Governor and President, the scientists consistently found no impact from the interventions. As they write, “we conclude that, on average, personal contact—such as door-to-door canvassing or phone calls—conducted within two months of a general election has no substantive effect on vote choice.”

This doesn’t mean campaigns are irrelevant. Candidates can still shape voters’ preferences by changing their policy positions and influencing the media narrative. However, Kalla and Broockman do present solid evidence that most of the stuff campaigns spend their billions on is essentially worthless, at least in the general.

Why are voters so hard to persuade? One likely cause is our hyper-partisan age, which has been exacerbated by online filter bubbles. (Republican Facebook is very different from Democratic Facebook.) As Kalla and Broockman write, “When it comes to providing voters with new arguments, frames, and information, by the time election day arrives, voters are likely to have already absorbed all the arguments and information they care to retain from the media and other sources.”

The key caveat in that sentence is “care to retain.” While voters are inundated with information about the election, they are depressingly good at ignoring dissonant facts, or those arguments that might rattle their partisan opinions. (Roughly half of Trump voters, for instance, think that he won the popular vote and that President Obama was born in Kenya.) The end result is that partisanship dominates persuasion; the vast majority of voters vote for their side, with little consideration of candidate or policy details. In a primary election, those partisan cues are less obvious, which means voters are more open-minded about the actual candidates. Persuasion stands a chance.

President Trump's success depends on these trends. His rhetoric and norm violations are consistently directed at a highly specific (and very conservative) slice of the electorate. This approach might be toxic for the body politic, but it does reflect a certain realism about the limits of persuasion. After all, if the other side can’t be reached, then moderation is for chumps. Modern politics isn’t the art of compromise – it’s the act of targeted arousal. (And Facebook makes such targeting extremely easy.)

President Trump's key insight was that all those norm violations would exact a minimal price at the ballot box. When it was time to vote in the general, he knew that partisanship would dominate, and that even those offended Republicans would hold their noses and vote for their guy.

Is there a solution? Not really. I am, however, slightly encouraged by recent research on human ignorance. In a classic study conducted on Yale undergraduates, the psychologists Leonid Rozenblit and Frank Keil asked people to rate how well they understood the objects they used every day, such as toilets, car speedometers and zippers. Then, the students were asked to write detailed descriptions of how these objects worked, before reassessing their understanding.

The quick exercise revealed that most people dramatically overestimate their understanding. We think we know how toilets work because we flush them several times a day, but almost nobody could explain the ingenious siphoning action used to purge the bowl. As Rozenblit and Keil write, “Most people feel they understand the world with far greater detail, coherence and depth than they really do.” They called this mistake the illusion of explanatory depth.

This same illusion is ruining our politics. In a 2013 study, a team of psychologists led by Philip Fernbach found that the illusion of explanatory depth led people to overestimate their understanding of political issues such as the flat tax, single-payer health care system and Iran sanctions. As with the toilet, it wasn’t until people tried to explain their knowledge, along with the impact of their chosen policies, that they realized how little they actually knew. Interestingly, acknowledging the unknown also led them to moderate their political opinions. As Steven Sloman and Philip Fernbach write in their book The Knowledge Illusion, “The attempt to explain caused their positions to converge.”

The practical lesson is that political persuasion isn’t just about slick videos and clever framing. In fact, most of that stuff doesn’t seem to work at all. (Charisma is no match for cognitive dissonance.) Rather, to the extent persuasion seems possible, it seems to be conditional on voters recognizing their own lack of knowledge, or at least grappling with the complexity of the issues. Sloman and Fernbach put it well: “A good leader must be able to help people realize their ignorance without making them feel stupid.”

There is a smidgen of hope in this research. If Trump represents the triumph of hyper-partisanship—he’s most interested in reaffirming the beliefs of his base—these findings suggest that candidates might also persuade voters by emphasizing the hard questions, and not just their partisan answers. At the very least, such rhetoric makes moderation more appealing.

This was an underappreciated part of the Obama playbook. While the former professor was often criticized for his long-winded and nuanced responses, that nuance might have been more persuasive to voters than another set of rehearsed talking points. As President Obama once observed, when asked about the challenges of the Presidency: “These are big, tough, complicated problems. Somebody noted to me that by the time something reaches my desk, that means it’s really hard. Because if it were easy, somebody else would have made the decision and somebody else would have solved it.”

Obama understood what the science reveals: If you want to change someone else’s mind, yelling out your answers won't work; facts are not convincing. Instead, try beginning with an admission of doubt. (Recent research by David Hagmann and George Loewenstein shows that "expressions of doubt and acknowledgment of opposing views increases persuasiveness," especially in the context of motivated reasoning.) We are most persuasive when we first admit we don't know everything.

Kalla, Joshua L., and David E. Broockman. "The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments." American Political Science Review (2017): 1-19.

The Increasing Value of Social Skills

The progress of technology is best measured by our obsolescence: we have a knack for creating machines that are better than us. From self-driving trucks to software that reads MRIs, many of our current jobs will soon be outsourced to our own ingenious inventions.

And yet, even as the robots take over, it’s clear that there are some skills that are still best suited for human beings. We might be irrational, distractible and bad at math, but we are also empathetic, cunning and creative. Robots are smart. We are social.

It’s easy to dismiss these soft skills. Unlike IQ scores, they are hard to quantify. However, according to a new working paper by Per-Anders Edin, Peter Fredriksson, Martin Nybom and Bjorn Ockert, these squishy interpersonal skills are precisely the sort of talents that are most in demand in the 21st century. 

What’s driving this shift? The answer is technological. Before there were computers in our pockets, the most valuable minds excelled at cognitive stuff: they were adept at abstraction and gifted with numbers. But now? Those talents are easily replaced by cheap gadgets and free software. Computation has become a commodity. As a result, so-called non-cognitive skills—a catch-all category that includes everything from teamwork to self-control—are becoming increasingly valuable.

To prove this point, the researchers took advantage of a unique data set: between 1969 and 1994, nearly every Swedish male underwent a battery of psychological tests as part of the enlistment procedure for the military draft. Their cognitive scores were based on four tests measuring reasoning ability, verbal comprehension, spatial ability and technical understanding. Their non-cognitive skills, in contrast, were assessed during a 20-minute interview with a trained psychologist. During the interview, the draftee was scored on dimensions including “social maturity,” perseverance and emotional stability.

The researchers then matched these cognitive and non-cognitive scores to wage data collected by the Swedish government. Among workers in the private sector, they found that the return to cognitive skills was relatively flat between 1992 and 2013. This jibes with related research from the United States labor market, showing that employment growth in “cognitively demanding occupations” slowed down dramatically in the 21st century.

However, Edin et al. observed the opposite trend when it came to non-cognitive skills. For these Swedish workers, being good at the interpersonal and emotional was increasingly valuable, with the partial return to non-cognitive skills roughly doubling over the same time period. It’s not that intelligence doesn’t matter. It’s that emotional intelligence matters more.
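
The underlying method is a standard wage regression: regress log wages on the two standardized test scores, year by year, and track how the slope on each score changes over time. A minimal sketch with simulated data (the variable names and coefficients are mine, not the paper’s):

```python
import numpy as np

def skill_returns(log_wage, cognitive, noncognitive):
    """OLS of log wages on standardized cognitive and non-cognitive
    scores; the slope on each score is the 'return' to that skill."""
    X = np.column_stack([np.ones_like(cognitive), cognitive, noncognitive])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    return beta[1], beta[2]

# Simulated workers whose wages load more on the non-cognitive score,
# mimicking the pattern Edin et al. report for recent years.
rng = np.random.default_rng(0)
n = 5000
cog = rng.standard_normal(n)
noncog = rng.standard_normal(n)
log_wage = 0.05 * cog + 0.10 * noncog + rng.normal(0, 0.3, n)
print(skill_returns(log_wage, cog, noncog))  # roughly (0.05, 0.10)
```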


According to the economists, one of the reasons non-cognitive skills are becoming more valuable is that they are required for managerial roles. A good manager doesn’t just issue edicts: he or she must also coordinate workers, placate egos and deal with disagreements. As the economist David Deming noted in his paper, “The Growing Importance of Social Skills in the Labor Market,” “Such non-routine interaction is at the heart of the human advantage over machines…Reading the minds of others and reacting is an unconscious process, and skill in social settings has evolved in humans over thousands of years.” (Watson might trounce us at Jeopardy!, but the supercomputer would probably be a terrible boss.) The importance of such non-cognitive skills for management helps explain why the bigger paychecks are going to those with the best social skills.


This research inevitably leads back to education. The traditional classroom, after all, has been mostly focused on building up cognitive skills. We drill students on arithmetic and pre-algebra; we ask them to memorize answers and follow the rules; the ultimate measure of one’s education is the SAT, a highly cognitive test. Such talents will always be necessary: even in the age of robots, it’s nice to know your multiplication tables.

However, it’s becoming increasingly clear that our classrooms are preparing students for a workforce that no longer exists. They are being taught the most replaceable skills, drilled on the tasks that computers already perform. (It’s a bit like teaching parchment preparation after Gutenberg.) This trend is only getting worse in the age of standardized tests, which focus classroom time on material that can be easily measured by multiple choice questions. Unfortunately, that’s often the very kind of education that technology has rendered obsolete. If you need to memorize it, then chances are a computer can do it better.

The obvious alternative is for classrooms to follow the money, at least when it comes to the wage returns on non-cognitive skills. We should invest in classrooms that teach students how to work together and handle their feelings, even if such soft skills are harder to assess. What’s more, there’s reason to believe that many of these socio-emotional skills are learned relatively early in life, suggesting that we need to invest in effective pre-school and kindergarten curriculums. (Interventions targeting at-risk parents have also proven effective.) While these enhanced socio-emotional abilities might not translate to improved academic performance, there’s evidence that they remain linked to adult outcomes such as employment, earnings and mental health.

The modern metaphor of the human mind is that it’s a biological computer, three pounds of meaty microchips. But it turns out that the real value of the mind in the 21st century depends on all the ways it’s not like a computer at all. It’s not about how much information we can process, because there’s always a machine that can process more. It’s about how we handle those feelings that only we can feel.

The future belongs to those who play well with others.

Hat tip: Marginal Revolution

Does Divorce Increase the Risk of the Common Cold, Even Decades Later?

On November 30, 1939, 450,000 Soviet troops stormed across the Finnish border, setting off nearly five years of brutal conflict. The cities of Finland were strafed by bombers; severe food rationing was put into effect; roughly 2.5 percent of the population was killed. To protect Finnish children from the war, about 70,000 of them were evacuated to temporary foster homes in Sweden and Denmark.

At first glance, it seems like evacuating children from a war zone is the responsible choice. Nevertheless, multiple studies have found that those Finnish children who were sent away have had to deal with more severe long-term consequences. They might have avoided the acute stress of war, but they had to cope with the chronic stress of separation. A 2009 study found that Finnish adults who were separated from their parents between 1939 and 1944 showed an 86 percent increase in deaths due to cardiovascular illness compared to those who had stayed at home. Although more than sixty years had passed since the war, these temporary orphans were also significantly more likely to have high blood pressure and type 2 diabetes. Other studies have documented elevated levels of stress hormones and increased risk of severe depressive symptoms among the wartime evacuees.

What explains these tragic correlations? The Finnish studies build on decades of research showing that disruptions to our early attachment relationships—such as separating young children from their parents during wartime—can have a permanent impact on our health.

The latest evidence for this link between early attachment and adult medical outcomes comes from a new paper in PNAS by the scientists Michael Murphy, Sheldon Cohen, Denise Janicki-Deverts and William Doyle. But these researchers didn’t look at wartime evacuations – they looked at divorce. While parental divorce during childhood has been statistically linked to an increased risk for various physical ailments, from asthma to cancer, these studies have tended to rely on self-reports. As a result, it’s been difficult to determine the underlying cause of the correlations.

To explore this practical mystery, Murphy et al. came up with a clever experimental design. They quarantined 201 healthy adults and gave them nasal drops containing rhinovirus 39, a virus that causes the common cold. They carefully monitored the health of the subjects over the next five days, tracking their symptoms, weighing their mucus, and collecting various markers of immune response and inflammation.

The first thing the scientists found is that not all divorces are created equal. This accords with a growing body of evidence showing that the quality of the parents’ relationship with each other after separation may be more important in predicting the adjustment of their children than the separation itself. This led the scientists to ask their subjects whose parents lived apart if their parents spoke to each other after the separation. As the scientists note, “having parents who are separated and not on speaking terms suggests high levels of acrimony in the childhood family environment.” Such conflict can be extremely stressful for children.

How did these bitter separations during childhood impact the response of the adult subjects to a cold virus? The results were clear. Those adults whose parents lived apart and never spoke during their childhood were more than three times as likely to develop a cold as adults from intact families or those whose parents separated but were still on speaking terms. What’s more, the differences persisted even after the scientists corrected for a raft of possible confounding variables, such as demographics, childhood SES, body mass index, etc.

PLANS = Parents that Lived Apart and Never Spoke
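
As a rough sketch of what “three times as likely” means in this design, compare the attack rate in the two groups. The counts below are invented for illustration; the paper’s actual models also adjust for the confounders mentioned above, which this simple comparison omits:

```python
# Invented counts, for illustration only.
plans_sick, plans_total = 21, 35    # parents lived apart and never spoke
other_sick, other_total = 33, 166   # intact families or speaking parents

risk_plans = plans_sick / plans_total   # 0.60
risk_other = other_sick / other_total   # ~0.20
relative_risk = risk_plans / risk_other
print(f"cold risk {risk_plans:.2f} vs {risk_other:.2f}; "
      f"relative risk = {relative_risk:.1f}")  # -> about 3.0
```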

There are two possible explanations for this increased risk. The first is that a bitter divorce weakens the immune system, making those subjects more vulnerable to the rhinovirus, even decades later. The second possibility is that divorce heightens the inflammatory response post-infection, thus triggering the annoying symptoms (mucus, sore throat, mild fever, etc.) associated with the common cold.

The evidence strongly favors the second explanation. For one thing, there was no statistical relationship between divorce and viral infection: everyone was equally likely to show antibodies to the virus in their blood. However, there was a large difference in how the body responded to the infection, with those from non-speaking divorced households being much more likely to exhibit symptoms of illness. (This increased risk was mediated by measurements of inflammatory cytokines.) Although it’s still unclear why an ugly divorce might alter our response to viral infection, one intriguing hypothesis is that the chronic stress of having parents who never stop fighting can cause immune cells to become desensitized to the very hormones that help suppress the inflammation response. In other words, an ugly parental separation can mark our stress response for life, an invisible wound we never get over.

This research is an important reminder of attachment’s long reach: even the most basic aspects of our physical health, like resistance to the common cold, are shaped by emotional events that happened decades before. But it’s also a demonstration that not every rupture of attachment leaves lasting scars; different kinds of divorce can have a very different impact on children. According to this data, the key element might be finding some way to constructively communicate with our former spouse, at least in matters relating to childcare. While intervention studies are needed to directly test this possibility, it seems likely that a little civility can help buffer the fallout of living apart.

Murphy, Michael LM, et al. "Offspring of parents who were separated and not speaking to one another have reduced resistance to the common cold as adults." Proceedings of the National Academy of Sciences (2017).

Can Love Help You Forget Painful Memories?

 "Perfect love casts out fear." 

-1 John 4:18

It's now a firmly established fact that loving attachments are an important component of good health. According to dozens of epidemiological studies, people in long-term relationships are significantly less likely to suffer from cancer, viral infections, mental illness, pneumonia, and dementia. They have fewer surgeries, car accidents, and heart attacks. Their wounds heal faster and they have a lower risk of auto-immune diseases. 

Consider the results from the Harvard Study of Adult Development, which has been tracking 268 Harvard men since the late 1930s. While the study set out to identify the medical measurements that could predict health outcomes—they tracked everything from the circumference of the chest to the hanging length of the scrotum—none of the data proved useful. Instead, what George Vaillant and other scientists discovered after tracking the men for nearly seven decades is that "the capacity for love turns out to be a great predictor of mortality." For instance, those men in the "loveless" category—they had the fewest attachments—were three times more likely to be diagnosed with a mental illness and five times more likely to be "unusually anxious." The loneliest men were also ten times more likely to suffer from a chronic illness before the age of fifty-two and three times more likely to become heavy users of alcohol and tranquilizers. 

Those are just a few of the tragic correlations. (I wrote more about them in A Book About Love, which is soon out in paperback.) Nevertheless, the causal mechanics of these health benefits are unclear. How, exactly, do close relationships prevent such a wide variety of serious illnesses, from alcoholism to heart disease? Why does love keep us alive?

Those practical mysteries are the subject of a new paper in PLOS One by Erica Hornstein and Naomi Eisenberger, psychologists at UCLA. The scientists began by asking subjects to identify “the individual who gives you the most support on a daily basis.” They were told that these individuals could come from any relationship: parent, friend, romantic partner, etc.  The subjects were then asked to provide a picture of this supportive figure.

The experiment itself was a classic fear learning paradigm. First, the scientists had to calibrate the proper amount of electric shock for each subject: they wanted the experience to be “extremely uncomfortable, but not painful.”  (The shock is what triggers the fear.) During the acquisition phase, the subjects were shown various neutral images, such as different clocks and stools. One of these images was paired with a picture of their social support figure while the other was paired with a stranger matched for gender, age and ethnicity. Finally, these images and faces were matched to that “extremely uncomfortable” electric shock. This pairing process was repeated six times.

It might seem unlikely that the mere picture of a loved one could keep us from remembering a fearful association. Nevertheless, when Hornstein and Eisenberger measured fear responses using the skin conductance response of the hand—when you’re scared or anxious, the hands begin to sweat—they found a dramatic difference between the images paired with support figures and strangers.

Screen Shot 2017-06-26 at 8.55.17 PM.png

What accounts for this striking difference? According to the scientists, the most plausible explanation is that the picture of a loved one can “inhibit the formation of fear associations,” preventing us from remembering those scary stimuli in the first place. This builds on related work by Erica Hornstein, Michael Fanselow and Naomi Eisenberger showing that pictures of social support figures can also enhance fear extinction, so that subjects are less likely to react to images that had previously been paired with an electric shock. In short, thinking of a loved one can serve as a useful form of amnesia, at least when it comes to fear memories.

This is pure speculation, but I wonder if studies like this might be used to develop new therapeutic tools. So much of therapy is about learning how to retell our personal history in less painful ways, reducing those triggers that send us into paroxysms of fear, anxiety and despair. One obvious approach would be to make sure our retellings of negative events are somehow told in conjunction with our support figures. Maybe it involves having a picture of a partner nearby, or asking questions about the trauma that frame the event in terms of how our attachment figures helped us through. When it comes to buffering the bad stuff, the best medicine is the people we love. A good relationship is like Xanax without the side effects.

Hornstein, Erica A., and Naomi I. Eisenberger. "Unpacking the buffering effect of social support figures: Social support attenuates fear acquisition." PLOS ONE 12.5 (2017): e0175891.

Is Facebook Bad for Democracy?

We are living in an era of extreme partisanship. As documented by the Pew Research Center, majorities of people in both parties now express “very unfavorable” views of the other side, with most concluding that the policies of the opposition “are so misguided that they threaten the nation’s well-being.” 79 percent of Republicans approve of Trump’s performance as president, while 79 percent of Democrats disapprove. In many respects, party affiliation has become the lens through which we see the world; even the Super Bowl can’t escape the stink of politics.

There are two ways of understanding these divisions.

The first is to look at the historical parallels. Partisanship, after all, is as American as apple pie and SUVs. George Washington, in his farewell address, warned that the rise of political parties might lead to a form of “alternate domination,” as the parties would gradually “incline the minds of men to seek security... in the absolute power of an individual.” In the election of 1800, his prophecy almost came true, as several states were preparing to summon their militias if Jefferson lost. Our democracy has always been a contact sport.

But there’s another way of explaining the political splintering of the 21st century. Instead of seeing our current divide as a continuation of old historical trends, this version focuses on the impact of new social media. Donald Trump is not the latest face of our factional republic—he’s the first political figure to fully take advantage of these new information technologies.

Needless to say, this second hypothesis is far more depressing. We know our democracy can handle partisan passions. It’s less clear it can survive Facebook.

Why might technology be cratering our public discourse? To answer this question, a new paper in PLOS ONE by a team of Italian researchers at the IMT School for Advanced Studies Lucca and Brian Uzzi at Northwestern looked at 12 million users of Facebook and YouTube. They began by identifying 413 different Facebook pages that could be sorted into one of two categories: Conspiracy or Science. Conspiracy pages were those that featured, in the delicate wording of the scientists, “alternative information sources and myth narratives—pages which disseminate controversial information, usually lacking supporting evidence and most often contradictory of the official news.” (Examples include Infowars, the Fluoride Action Network and the ironically named I Fucking Love Truth.) Science pages, meanwhile, were defined as those having “the main mission of diffusing scientific knowledge.” (Examples include Nature, Astronomy Magazine and Eureka Alerts.)

The researchers then looked at how users interacted with videos appearing on these sites on both Facebook and YouTube. They looked at comments, shares and likes between January 2010 and December 2014. As you can probably guess, many users began the study only watching videos from either the Conspiracy or Science categories. (These people are analogous to voters with entrenched party affiliations.) The researchers, however, were most interested in those users who interacted with both categories; these folks liked Neil deGrasse Tyson and Alex Jones. Think of them as analogous to registered Democrats who voted for Trump, or Republicans who might vote for a Democratic congressperson in the 2018 midterms.

Here’s where things get unsettling. After just fifty interactions on YouTube and Facebook, most of these “independents” started watching videos exclusively from one side. Their diversity of opinions gave way to uniformity, their quirkiness subsumed by polarization. The filter bubble won. And it won fast.
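
The bookkeeping behind that claim can be sketched in a few lines: score each user by the share of their likes that land on one side, and call them polarized once that share becomes extreme. The threshold and the toy history below are my own invention; the paper’s exact measure differs in detail:

```python
from collections import Counter

def conspiracy_share(likes):
    """likes: a sequence of 'science' / 'conspiracy' labels, one per
    interaction. Returns the fraction landing on the conspiracy side."""
    counts = Counter(likes)
    total = counts["science"] + counts["conspiracy"]
    return counts["conspiracy"] / total if total else 0.5

# A hypothetical "independent" whose early likes are mixed but whose
# later activity drifts entirely to one side.
history = ["science", "conspiracy"] * 2 + ["conspiracy"] * 46
rho = conspiracy_share(history)
print(f"share = {rho:.2f}; polarized: {rho > 0.95 or rho < 0.05}")  # 0.96, True
```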

Why does the online world encourage polarization? The scientists focus on two frequently cited forces. The most powerful force is confirmation bias, that tendency to seek out information that confirms our pre-existing beliefs. It’s much more fun to learn about why we’re right (Fluoride = cancer) than consider the possibility we might be wrong (Fluoride is a safe and easy way to prevent tooth decay). Entire media empires have been built on this depressing insight.

The second force driving online polarization is the echo chamber effect. Most online platforms (such as the Facebook News Feed) are controlled by algorithms designed to give us a steady drip of content we want to see. That’s a benign aspiration, but what it often means in practice is that the software filters out dissent and dissonance. If you liked an Infowars video about the evils of vaccines, then Facebook thinks you might also like their videos about fluoride. (This helps explain why previous research has found that more active Facebook users tend to get their information from a smaller number of news sources.) “Inside an echo chamber, the thing that makes people’s thinking evolve is the even more extreme point of view,” Uzzi said in a recent interview with Anne Ford. “So you become even more left-wing or even more right-wing.” The end result is an ironic affliction: we are more certain than ever, but we understand less about the world.

This finding jibes nicely with another new paper that directly tested the impact of filtered newsfeeds. In a clever lab experiment, Ivan Dylko and colleagues showed that feeds similar to those on Facebook led people to spend far less time reading articles that contradicted their political beliefs. Dylko et al. end on a somber note: “Taken together, these findings show that customizability technology can undermine important foundations of deliberative democracy. If this technology becomes even more popular, we can expect these detrimental effects to increase.”

The obvious solution to these problems is to engage in more debunking. If people are seeking out fake news and false conspiracies, then we should confront them with real facts. (This is what Facebook is trying to do, as they now include links to debunked articles in News Feeds.) Alas, the evidence suggests that this strategy might backfire. A previous paper by several of the Italian scientists found that Facebook users prone to conspiracy thinking react to contradictory information by “increasing their engagement within the conspiracy echo chamber.” In other words, when people are told they’re wrong, they don’t revise their beliefs. They just work harder to prove themselves right. It’s cognitive dissonance all the way down.

It was only a few generations ago that most Americans got their news from a few old white men on television. We could choose between Walter Cronkite (CBS), John Chancellor (NBC) and Harry Reasoner (ABC). It was easy to assume that Americans wanted this shared public discourse, or at least a fact-checked voice of authority, which is why nearly 30 million people watched Cronkite every night.* But now it’s clear that we only watched these shows because we had no choice—their appeal depended on the monopoly of network television. Once this monopoly disappeared, and technology gave us the ability to curate our own news, we flocked to what we really wanted: a platform catering to our biases and beliefs.

Tell me I’m right, but call it the truth.

Bessi, Alessandro, et al. "Users Polarization on Facebook and Youtube." PLOS ONE 11.8 (2016): e0159641.

*The shared public discourse reduced political partisanship. In the 1950s, the American Political Science Association published a report fretting about the lack of ideological distinction between the two parties. The lack of overt partisanship, they said, might be undermining voter participation.

 

Nobody Knows Anything (NFL Draft Edition)

Pity the Cleveland Browns fan. Seemingly every year, the poor performance of the team leads to a high first-round pick: in this year’s draft, the Browns are making the first selection. And every year the team squanders the high pick, either by trading down and missing a superstar (Julio Jones in 2011) or trading up for a pick that didn’t pan out (Johnny Manziel in 2014, Trent Richardson in 2012, Brady Quinn in 2007, et al.) The draft is supposed to be a source of hope, a consolation prize for all the failures of the past. But for the hapless Browns, it has become yet another reminder of their chronic struggles.

This blog is not another critique of a pitiful team. The Browns might have a terrible track record in the draft, but I’m here to tell you that it’s not their fault. And that’s for a simple reason: picking college players is largely a crapshoot, a game of dice played with young athletes. The Browns might not know how to identify the college players with the most potential, but there’s little evidence that anybody else does, either. 

It’s not for lack of trying. Every year, professional football teams invest a huge amount of time and effort into choosing which college players to take with their draft picks. This is for the obvious reason: picks are extremely valuable. (Because the NFL has a strict cap on rookie salaries, new players are significantly underpaid, at least compared to their veteran colleagues.) Given the high stakes involved, it seems reasonable to assume that teams would have developed effective methods of identifying those players most likely to succeed in the pros. 

But they haven’t. That, at least, is the conclusion of a 2013 analysis of the NFL draft by Cade Massey and Richard Thaler. Consider one of their damning pieces of evidence, which involves the likelihood that a given player performs better in the NFL than the next player chosen in the draft at his position. As Massey and Thaler note, this is the practical question that teams continually face in the draft, as they debate the advantages of trading up to acquire a specific athlete.

Unfortunately, there is virtually no evidence that teams know what they’re doing: only 52 percent of picks outperform those players chosen next at the same position. “Across all rounds, all positions, all years, the chance that a player proves to be better than the next-best alternative is only slightly better than a coin flip,” write the economists. Or consider this statistic, which should strike fear into the heart of every NFL general manager: over their first five years in the league, draft picks from the first round have more seasons with zero starts (15.3 percent) than seasons that end with a selection to the Pro Bowl (12.8 percent). While draft order is roughly correlated with talent – players taken early tend to have better professional careers – Massey writes in an email that he “considers differences between team performance in the draft to be, effectively, all chance.” The Browns aren’t stupid, just unlucky.

If teams admitted their ignorance, they could adjust their strategy accordingly. They could discount their scouting analysis and remember that college performance is only weakly correlated with NFL output. They might even explore new player assessment strategies, as the old ones don't seem to work very well. 

Alas, teams routinely act as if they can identify the best players, which is what leads them to trade up for more valuable picks. But this is precisely the wrong approach. As proof, Massey and Thaler compute a statistic they call “surplus value,” which reflects the worth of a player’s performance (as calculated by the pay scale of NFL veterans) minus his actual compensation. “If picks are valued by the surplus they produce, then the first pick in the first round is the worst pick in the round, not the best,” write the economists. “In paying a steep price to trade up, teams are paying a lot to acquire a pick that is worth less than the ones they are giving up.”
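
In code, surplus value is a single subtraction; the trap is that compensation rises with draft position faster than performance does. The figures below are invented for illustration, not Massey and Thaler’s estimates:

```python
def surplus_value(performance_value, compensation):
    """Worth of a player's production, priced at veteran market rates,
    minus what his rookie contract actually pays him (both $M/year)."""
    return performance_value - compensation

# Hypothetical top pick vs. a later first-rounder. The top pick performs
# better, but costs so much more that he generates LESS surplus.
print(surplus_value(performance_value=9.0, compensation=5.5))  # pick 1  -> 3.5
print(surplus_value(performance_value=6.0, compensation=1.8))  # pick 20 -> 4.2
```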

Why are most NFL teams so bad at the draft? The main culprit is what Massey and Thaler refer to as “overconfidence exacerbated by information.” Teams assume their judgments about prospective players are more accurate than they are, especially when they amass large amounts of data and analytics. What they fail to realize is that much of this information isn’t predictive, and that it’s almost certainly framed by the same biases and blind spots that limit our assessments of other people in everyday life. As Massey and Thaler write: “The problem is not that future performance is difficult to predict, but that decision makers do not appreciate how difficult it is.” 

There is something deeply sobering about the limits of draft intelligence among NFL teams. These are athletes, after all, whose performance has been measured by a dizzying array of advanced stats; they have been scouted for years and run through a gauntlet of psychological and physical assessments. (As the economists write, “football teams almost certainly are in a better position to predict performance than most employers choosing workers.”) However, even in this rarified domain, the mystery of human beings still dominates. We live in the age of big data and sabermetrics, which means that it’s harder than ever to know what we don’t. But this paper is an important reminder that such meta-knowledge is essential—when we ignore the error bars, we’re much more likely to make a very big mistake.

Bill Belichick, the coach of the New England Patriots (and former coach of the Cleveland Browns!), has won lots of games by pushing back against the curse of overconfidence. If Belichick has a signature move in the draft, it’s trading down, swapping a high pick for multiple less valuable ones. (Under Belichick, the Patriots have gained more than 25 compensatory draft picks.) If teams could reliably assess talent, this strategy would make little sense, since it would mean giving up on superstars. However, given the near impossibility of predicting elite player performance, gaining more picks is an astute move. Since nobody knows who to choose, the only way to play is to make a lot of bets.

Massey, Cade, and Richard H. Thaler. "The loser's curse: Decision making and market efficiency in the National Football League draft." Management Science 59.7 (2013): 1479-1495.

When Is Ignorance Bliss?

The first line of Aristotle’s Metaphysics states a seemingly obvious truth: “All men by nature desire to know.” According to Aristotle, this desire for knowledge is our defining instinct, the quality that sets our mind apart. As the cognitive psychologist George Miller put it, we are informavores, blessed with a boundless appetite for information.

It’s a comforting vision. However, like all dictums about human nature, it also comes with plenty of caveats and exceptions. Take spoiler alerts. It’s hard to read an article about a work of entertainment that doesn’t contain a warning to readers. The assumption of these warnings, of course, is that people don’t want to know, at least when it comes to narratives.

And it’s not just the latest twists in Scandal that we’re trying to avoid. Twenty percent of Malawian adults at risk for HIV decline to get the results of their HIV test, even when offered cash incentives; approximately 10 percent of Canadians with a family history of Huntington’s disease choose not to undergo genetic testing. (Even James Watson declined to have his risk of Alzheimer’s revealed.) These are just specific examples of a larger phenomenon. Given the advances in genetic testing and biomarkers, the Aristotelian model would predict that we’d all become subscribers to 23andMe. But that’s not happening.

A new paper in Psychological Review by Gerd Gigerenzer and Rocio Garcia-Retamero explores the motives of our willful ignorance. They begin by establishing its prevalence, surveying more than 2000 German and Spanish adults about various forms of future knowledge. Their results are clear proof that most of us want spoiler alerts for real life: between 85 and 90 percent of subjects say they don’t want to know when or why their partner will die. (They feel the same way about their own death.) They also don’t want to know if their marriage will eventually end in divorce. This preference for ignorance even applies to positive events: between 40 and 70 percent of subjects don't want to know about their future Christmas gifts, or who won the big soccer match, or the gender of their next child.

To understand our reasons for ignorance, Gigerenzer and Garcia-Retamero asked subjects about their risk attitudes. They found that people who are more risk-averse (as measured by their insurance purchases and their choices playing a simple lottery game) are more likely to prefer not knowing. While this might appear counterintuitive—learning how you will die might help reduce the risk of dying— Gigerenzer and Garcia-Retamero explain these results in terms of anticipatory regret. People avoid risks because they don’t want to regret those losing gambles. They avoid life spoilers for a similar reason, as they're trying to avoid regretting the decision to know. 

On the one hand, this intuition has a logical sheen. It’s not that ignorance is bliss—it’s just better than knowing that life can be shitty and full of suffering. Knowing exactly how we’ll suffer might only make it worse. The same principle also applies to the good stuff: we think we'll be less happy if we know about our happiness in advance. Life is like a joke—it's not so funny if we get the punchline first.

But there’s also some compelling evidence that our intuitions about regretting future knowledge are wrong. For one thing, it’s not clear that spoilers spoil anything. Consider a 2011 study by Jonathan Leavitt and Nicholas Christenfeld. The scientists gave several dozen undergraduates twelve different short stories. The stories came in three different flavors: ironic twist stories (such as Chekhov’s “The Bet”), straight up mysteries (“A Chess Problem” by Agatha Christie) and “literary stories” by writers like Updike and Carver. Some subjects read the story as is, without a spoiler. Some read the story with a spoiler carefully embedded in the actual text, as if Chekhov himself had given away the end. And some read the story with a spoiler disclaimer in the preface.

Here’s the shocking twist: the scientists found that almost every single story, regardless of genre, was more pleasurable when prefaced with some sort of spoiler. It doesn’t matter if it’s Harry Potter or Hamlet: an easy way to make a good story even better is to spoil it at the start. As the scientists write, “Erroneous intuitions about the nature of spoilers may persist because individual readers are unable to compare spoiled and unspoiled experiences of a novel story. Other intuitions about suspense may be similarly wrong: Perhaps birthday presents are better when wrapped in cellophane, and engagement rings when not concealed in chocolate mousse.”

In fiction as in life: we assume our pleasure depends on ignorance. However, Leavitt and Christenfeld argue that spoilers enhance narrative pleasure by letting readers pay more attention to developments along the way. Because we know the destination, we’re better able to enjoy the journey. 

There's more to life than how it ends.

Gigerenzer, Gerd, and Rocio Garcia-Retamero. "Cassandra’s regret: The psychology of not wanting to know." Psychological Review 124.2 (2017): 179-196.

Why College Should Become A Lottery

Barry Schwartz, a psychologist at UC-Berkeley and Swarthmore, does not think much of the college admissions process. In a new paper, he tells a story about a friend who spent an afternoon with a high-school student. His friend was impressed by the student and, for the first time in thirty years of teaching, decided to send a note to the dean of admissions. Despite the note, the student did not get in. Schwartz describes what happened next:

“Curious, my friend asked the dean why. ‘No reason,’ said the dean. ‘No reason?,’ replied my friend, somewhat incredulous. ‘Yes, no reason. I can’t tell you how many applicants we reject for no reason.’”

For Schwartz, such stories are a sign of a broken system. Although colleges pretend to be paragons of meritocracy, their selection methods are rife with randomness. “Despite their very best efforts to make the selection process rational and reasonable, admissions people are, in effect, running a lottery,” Schwartz writes. “To get into Harvard (or Stanford, or Yale, or Swarthmore), you need to be good...and you need to be lucky.”

Schwartz devotes much of his article to the severe negative consequences inflicted by this capricious selection process. He begins by lamenting the ways in which it discourages students from experimenting, both inside and outside the classroom. Because teenagers are so terrified of failure—Harvard requires perfection!—they refuse to take classes that might end with the crushing disappointment of a B+. Over time, this can lead to high-school students who “may look better than ever before” but are probably learning less.

But wait: it gets worse. Much worse. Suniya Luthar, a professor of psychology at Arizona State University, has spent the last several years documenting the emotional toll of the college competition on upper-middle class children. Although these affluent kids lead enviable lives on paper—they have educated white-collar parents and high test scores, and they attend elite high schools—they are roughly twice as likely to suffer from the symptoms of depression and anxiety as the national average. They are also far more likely to have eating disorders and meet the diagnostic criteria for substance abuse.

There are, of course, countless variables driving this epidemic of mental issues among affluent teenagers. (Maybe it’s Snapchat’s fault? Or a side-effect of helicopter parenting?) However, Luthar argues that one of the main causes is what she calls the “pressure to achieve.” The problem with the pressure is that it’s a double-edged sword. If a student’s achievements fall short, then he feels inadequate. However, even if a student gets straight As, she probably still lives in what Luthar calls “a state of fear of not achieving.” Over time, that chronic sense of fear can lead to anxiety disorders and depression; kids are burned out on stress before they even leave their childhood homes. 

How can we fix this competitive morass? Schwartz offers a provocative solution. (In an email, he observes that he first offered this proposal a decade ago. In the years since, it’s only gotten more necessary.) The first phase of his plan involves filtering applicants using the same academic standards currently in place. Schwartz estimates that these standards—GPA, SAT scores, extracurricular activities, etc.—could cut the applicant pool by up to two-thirds. But here’s the crucial twist: after this initial culling, all of the acceptable students would be entered into an admissions lottery. The winners would be drawn at random.
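
As an algorithm, the proposal is strikingly simple: screen, then draw at random. Here is a minimal sketch, with a numeric cutoff standing in for whatever “good enough” criteria a school actually adopts:

```python
import random

def admissions_lottery(applicants, is_good_enough, n_seats, seed=None):
    """Schwartz's two-stage scheme: filter applicants with ordinary
    academic criteria, then fill the class by pure chance."""
    pool = [a for a in applicants if is_good_enough(a)]
    rng = random.Random(seed)
    return rng.sample(pool, min(n_seats, len(pool)))

# Hypothetical applicant pool scored 0-100; "good enough" means 70+.
applicants = [{"name": f"student_{i}", "score": random.randint(40, 100)}
              for i in range(3000)]
admitted = admissions_lottery(applicants, lambda a: a["score"] >= 70,
                              n_seats=500, seed=42)
print(len(admitted))  # 500 names drawn at random from the qualified pool
```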

Such a lottery system, Schwartz writes, would offer multiple advantages over our current fake meritocracy. For one thing, it would be much less stressful for teenagers to strive to be “good enough” rather than the best; high-achieving students wouldn’t have to be the highest achieving. This, in turn, would “free students up to do the things they were really passionate about.” Instead of chasing extrinsic rewards—does Stanford need an oboe player?—adolescents would be free to follow their sense of intrinsic motivation.* By becoming less selective, Schwartz says, selective colleges would end up with happier and more well-rounded students.

The hybrid lottery system would also force colleges to be more transparent about their selection methods. Right now, the admissions process is a black box; such secrecy is what allows colleges to accept legacies and reject otherwise qualified students for no particular reason. However, if the schools were forced to define their lottery cut-off, they would have to reflect on the measurements that actually predict academic success. And this doesn’t mean the criteria must be quantitative. As Schwartz notes, “criteria for ‘good enough’ can be sufficiently flexible that applicants who are athletes, violinists, minorities, or from Alaska get ‘credit’ for these characteristics,” just as in the current system.

The most obvious objection to Schwartz’s lottery system is ethical. For many people, it just seems wrong to base a major life decision on a roll of the dice. But here’s the thing: the college application process is already a crapshoot. (The margins used to differentiate applicants—say, 10 points on the SAT—are often smaller than the measurement error of the assessments themselves.) By making the lottery explicit, students and schools would at least be forced to have a candid conversation about the role of luck in life. Instead of taking full credit for our admission, or blaming ourselves for our rejection, we’d admit that much of success is random chance and pure contingency. Perhaps, Schwartz writes, this might make students a little “more empathic when they encounter people who may be just as deserving as they are, but less lucky.”

Schwartz is best known for his research on the pitfalls of the maximizing decision-making strategy, in which people obsess over finding the best possible alternative. The problem with this approach, Schwartz and colleagues have repeatedly found, is that it ends up making us miserable. Instead of being satisfied with a perfectly acceptable option, we get stressed about finding a better one. And then, once we make a choice, studies show that maximizers end up drenched in regret, fixated on their foregone options. We’re trained to be maximizers by consumer culture—who wants to settle for the second best laundry detergent?—but it’s usually a shortcut to a sad life.

This new paper extends the maximizing critique to higher education. In Schwartz’s telling, the college application process is a particularly powerful example of how the maximizing approach can lead us astray. Given the inherent uncertainty of matching students and colleges, Schwartz argues that it’s foolish to try to find the ideal school. Rather, we should practice an approach that Herbert Simon called satisficing, in which we search for colleges that are good enough. After all, the evidence suggests we can be equally happy at a multitude of places.
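The difference between the two strategies is easy to state in code. Here is a toy sketch (the function names and thresholds are mine, not Simon’s or Schwartz’s): the maximizer insists on scanning every option for the single best one, while the satisficer stops at the first option that clears an aspiration level.

```python
def maximize(options, score):
    """Scan everything; accept nothing less than the best."""
    return max(options, key=score)

def satisfice(options, score, aspiration):
    """Herbert Simon's rule: take the first option that is good enough."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing cleared the bar; relax the aspiration and retry
```

The satisficer’s search cost is bounded, and, if Schwartz’s research is right, so is the regret.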

This, perhaps, is the greatest virtue of the lottery proposal: because chance makes the final pick, students can no longer act like maximizers, and the admissions process itself becomes a life lesson in the power of satisficing. Instead of wasting their dreams on a dream school, they should follow their adolescent passions and embrace the chanciness of life. You can’t always get exactly what you want. But if you practice satisficing, you just might get what you need.

*The danger of replacing intrinsic motivation with extrinsic rewards was first demonstrated in a classic study of preschoolers. Some of the young children were told they would get a reward for drawing with pens. You might think this would encourage the kids to draw even more. It didn’t. Instead, the children given an “expected reward” were less likely to use the pens in the future. (And when they did use the pens, they spent less time drawing.) The extrinsic rewards, said the scientists, had turned “play into work.”

Schwartz, Barry. "Why Selective Colleges Should Become Less Selective—And Get Better Students." Capitalism and Society 11.2 (2016).

The Headwinds Paradox (Or Why We All Feel Like Victims)

When you are running into the wind, the air feels like a powerful force. It’s blowing you back, slowing you down, an annoying obstacle making your run that much harder.

And then you turn around and the headwind becomes a tailwind. The air that had been pushing you back is now propelling you forward. But here’s the question: do you still notice it?

Probably not. Simply put, headwinds are far more salient than tailwinds. When it comes to exercise, we fixate on the barrier and ignore the boost.

In a new paper, the psychologists Shai Davidai and Thomas Gilovich show that this same asymmetry is present across many aspects of life, and not just when we’re running on a windy day.

As evidence, Davidai and Gilovich conducted a number of clever studies. In the first experiment, they asked people which political party was advantaged or disadvantaged by the rules of American democracy, such as the electoral college. As expected, partisans on both sides believed their side suffered from the headwinds, so that Democrats were convinced the political system favored Republicans and Republicans believed it favored Democrats. Interestingly, the size of the effect was moderated by the level of political engagement, with more engagement leading to a stronger sense of unfairness. In short, the more you think about American politics, the more convinced you are that the system is stacked against you. (In fairness to Democrats, recent history suggests they might be right.)

A similar effect was also observed among football fans, who were much more likely to notice the difficult games on their team’s upcoming schedule than the easy ones. The headwinds/tailwinds asymmetry even shaped the career beliefs of academics, as people in a given sub-discipline believed they faced more hurdles than those in other sub-disciplines.

And then there’s family life, that rich vein of grievance. When the psychologists asked siblings if their parents had been harder on the older or younger child, their answers depended largely on their own position in the family. Older children were convinced that their parents had gone easy on their little siblings, while younger siblings insisted the discipline had been evenly distributed. Mom always loves someone else the most.

According to Davidai and Gilovich, the underlying cause of the headwind effect is the availability heuristic, in which our judgement is distorted by the ease with which relevant examples come to mind. First described by Kahneman and Tversky, the availability heuristic is why people think tornadoes are deadlier than asthma—tornadoes generate headlines, even though asthma takes 20 times more lives—and why spouses tend to overestimate their share of household chores. (We remember that time we took out the garbage; we don’t remember all those times we didn’t.) As Timur Kuran and Cass Sunstein point out, the availability bias might be “the most fundamental heuristic” of them all, constantly distorting our judgements of frequency and probability. We see through a glass, darkly; the availability heuristic is often what makes the glass so dark. 

This new paper shows how the availability bias can even warp our life narratives. We think our memory reflects the truth; it feels like a fair accounting of events. In reality, though, it’s a story tilted towards resentment, since it’s so much easier for us to remember every slight, wound and obstacle.

Why does this matter? Didn’t we already know that our memory is mostly bullshit? Davidai and Gilovich argue that this particular mnemonic flaw comes with serious practical consequences. For one thing, the headwind effect makes it harder for us to experience gratitude, which research shows is associated with higher levels of happiness, fewer hospitalizations and a more generous approach towards others. Because we take the tailwinds of life for granted—the headwinds consume all our attention—we have to work to notice our blessings. We easily remember who hurt us; we soon forget who helped us.

This effect can even shape public policy, limiting our interest in helping the less fortunate. We’re so biased towards our adversities that we can’t empathize with the adversities of others, even when they might be far more challenging. And since we tend to neglect our God-given advantages—good parents, silver spoons, etc.—we discount the role they played in our success. The end result is a series of false beliefs about what it takes to succeed.

In a recent interview, Rob Lowe lamented the obstacles that had limited his early career opportunities. Handsome actors like himself, he said, are subject to “an unbelievable bias and prejudice against quote-unquote good-looking people.”

We’re all victims. Even beauty is a headwind.

Davidai, Shai, and Thomas Gilovich. "The headwinds/tailwinds asymmetry: An availability bias in assessments of barriers and blessings." Journal of Personality and Social Psychology 111.6 (2016): 835. 

Fewer Friends, Better Marriages: The Modern American Social Network

In A Book About Love, I wrote about research showing that the social networks of Americans have been shrinking for decades. Miller McPherson, a sociologist at the University of Arizona and Duke University, has helped document the decline. In 1985, 26.1 percent of respondents reported discussing important matters with a “comember of a group,” such as a church congregant. In 2004, McPherson found that the percentage had fallen to 11.8. In 1985, 18.5 percent of subjects had important conversations with their neighbors. That number shrank to 7.9 percent two decades later. Other studies have reached similar conclusions. Robert Putnam, for instance, has used the DDB Needham Life Style Surveys to show that the average married couple entertained friends at home approximately fifteen times per year in the 1970s. By the late 1990s, that number was down to eight, “a decline of 45 percent in barely two decades.”

These surveys raise the obvious question: If we’re no longer socializing with our neighbors, or having dinner parties with our friends, then what the hell are we doing? 

One possibility is screens. Conversation is hard; it’s much easier to chill with Netflix and the cable box. According to this depressing speculation, technology is an enabler of loneliness, allowing us to forget how isolated we’ve become. 

But there’s another possibility. While it seems clear that we’re spending less time with our friends and acquaintances (texting doesn’t count), we might be spending more time with our spouses and children. (McPherson found, for instance, that the percentage of Americans who said their spouse was their “only confidant” nearly doubled between 1985 and 2004.) If true, this would suggest that our social network isn’t fraying so much as it’s gradually becoming more focused and intimate.

A new paper by Katie Genadek, Sarah Flood and Joan Garcia Roman at the University of Minnesota, drawing from time use survey data collected between 1965 and 2012, aims to resolve these unknowns. Their data provides a fascinating portrait of the social trends shaping the lives of American families.

I’ll start with the punchline: on average, spouses are spending more time with each other than they did in 1965. This trend is particularly visible among married couples with children. Here are the scientists: “In 1965, individuals with children spent about two hours per day with both their spouse and child(ren); by 2012 this had increased 50 minutes to almost three hours.” Instead of bowling with neighbors, we’re taking our kids to soccer practice.

Of course, when it comes to togetherness time, quality matters more than quantity. One cynical explanation for the increase in family time is that much of it might involve screens. Maybe we’re not hanging out—we’re just sharing a wifi network. But the data doesn’t seem to show that. In 1975, couples spent 79 minutes watching television together. In 2012, that number had increased by only 13 minutes. What’s more, spouses are still making time for shared activities that don’t involve TV. Although our total amount of leisure time has remained remarkably constant – Keynes’ leisure society has not come to pass – we are more likely to spend this free time with our spouse.

This is particularly true among couples with children. The big news buried in this time use data is that parents are doing a lot more parenting. In 1965, parents spent 41 minutes engaged in “primary care” for their little ones. That number had more than doubled, to 88 minutes, in 2012. We’re also far more likely to parent together, with the number of minutes spent as a family unit more than quadrupling, from 6 minutes in 1965 to 27 minutes in 2012. This increase in family time comes despite the sharp increase in women working outside the home.

It’s so easy to despair about the state of the world. What’s important to remember, however, is that these more intimate benchmarks of life are trending in the right direction. Amid all the calls to make America great again, we’re liable to forget that the greatest generations spent strikingly little time with their families. The nuclear family is supposed to be disintegrating, but these time diaries show us the opposite, as Americans are choosing to spend an increasing percentage of their time with their partner and children.

What makes this survey data more compelling is that it jibes with recent research showing the growing role played by our spouses in determining our own life happiness. In a separate study based on data from 47,000 couples, Genadek and Flood found that individuals are nearly twice as happy when they are with their spouse as when they’re not. Meanwhile, a recent meta-analysis of ninety-three studies by the psychologist Christine Proulx found that the rewards of a good marriage have surged in recent decades, with the most loving couples providing a bigger lift to the “personal well-being” of the partners. In fact, the influence of a good marriage on overall levels of life satisfaction has nearly doubled since the late 1970s. Given this happiness boost, it shouldn’t be too surprising that we’re spending more time with our spouses. If we’re lucky, we already live with the people who make us happiest.

Genadek, Katie R., Sarah M. Flood and Joan Garcia Roman. "Trends in Spouses' Shared Time in the United States, 1965-2012." Demography (2016).

Why Facebook Rules the World

One day, when historians tell the strange story of the 21st century, this age of software and smartphones, populism and Pokemon, they will focus on a fundamental shift in the way people learn about the world. Within the span of a generation, we went from watching the same news shows on television, and reading the same newspapers in print, to getting a personalized feed of everything that our social network finds interesting, as filtered by a clever algorithm. The main goal of the algorithm is to keep us staring at the screen, increasing the slight odds that we might click on an advertisement.

I’m talking, of course, about Facebook. Given the huge amount of attention Facebook commands—roughly 22 percent of the internet time Americans spend on their mobile devices is spent on the social network—it has generated a relatively meager amount of empirical research. (It didn't help that the company’s last major experiment became a silly controversy.) Furthermore, most of the research that does exist explores the network’s impact on our social lives. In general, these studies find small, mostly positive correlations between Facebook use and a range of social measures: our Facebook friends are not the death of real friendship.

What this research largely overlooks, however, is a far more basic question: why is Facebook so popular? What is it about the social network (and social media in general) that makes it so attractive to human attention? It’s a mystery at the heart of the digital economy, in which fortunes hinge on the allocation of eyeballs.

One of the best answers for the appeal of Facebook comes from a 2013 paper by a team of researchers at UCSD. (First author Laura Mickes, senior authors Christine Harris and Nicholas Christenfeld.) Their paper begins with a paradox: the content of Facebook is often mundane, full of what the scientists refer to as “trivial ephemera.” Here’s a random sampling of my current feed: there’s an endorsement of a new gluten-free pasta, a smattering of child photos, emotional thoughts on politics and a post about a broken slide at the local park. As the scientists point out, these Facebook “microblogs” are full of quickly composed comments and photos, an impulsive record of everyday life.

Such content might not sound very appealing, especially when there is so much highly polished material already competing for our attention. (Why read our crazy uncle on the election when there’s the Times?) And yet, the “microblog” format has proven irresistible: Facebook’s “news” feed is the dominant information platform of our century, with nearly half of Americans using it as a source for news.  This popularity, write the scientists, “suggests that something about such ‘microblogging’ resonates with human nature.”

To make sense of this resonance, the scientists conducted some simple memory experiments. In their first study, they compared the mnemonic power of Facebook posts to sentences from published books. (The Facebook posts were taken from the feeds of five research assistants, while the book sentences were randomly selected from new titles.) The subjects were shown 100 of these stimuli for three seconds each. Then, they were given a recognition test consisting of these stimuli along with another 100 “lures” – similar content they had not seen – and asked to assess their confidence, on a twenty-point scale, as to whether they had previously been exposed to a given stimulus.

According to the data, the Facebook posts were much more memorable than the published sentences. (This effect held even after controlling for sentence length and the use of “irregular typography,” such as emoticons.) But this wasn’t because people couldn’t remember the sentences extracted from books – their performance here was on par with other studies of textual memory. Rather, it was largely due to the “remarkable memorability” of the Facebook posts. Their content was trivial. It was also unforgettable.
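For readers curious how such recognition tests are scored: a standard approach is the signal-detection statistic d-prime, which separates memory sensitivity from a subject’s bias toward answering “seen it.” The sketch below is the generic method, not the paper’s exact analysis (Mickes et al. worked from twenty-point confidence ratings), and the counts are hypothetical.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Recognition sensitivity, with a small correction that keeps
    hit and false-alarm rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A hypothetical subject: 100 studied items, 100 lures.
print(d_prime(hits=82, misses=18, false_alarms=20, correct_rejections=80))
```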

In a follow-up condition, the scientists replaced the book sentences with photographs of human faces. (They also gathered a new collection of Facebook posts, to make sure their first set wasn’t an anomaly.) Although it’s long been argued that the human brain is “specially designed to process and store facial information,” the scientists found that the Facebook posts were still far easier to remember.

This is not a minor effect: the difference in memory performance between Facebook posts and these other stimuli is roughly equivalent to the difference between people with amnesia due to brain damage and those with a normal memory. What’s more, this effect exists even when the Facebook content is about people we don’t even know. Just imagine how memorable it is when the feed is drawn from our actual friends.

To better understand the mnemonic advantage of microblogs, the scientists ran several additional experiments. In one study, they culled text from CNN.com, drawing from both the news and entertainment sections. The text came in three forms: headlines, sentences from the articles, and reader comments. As you can probably guess, the reader comments were much more likely to be remembered, especially when compared to sentences from the articles. Subjects were also better at remembering content from the entertainment section, at least compared to news content.

Based on this data, the scientists argue that the extreme memorability of Facebook posts is being driven by at least two factors. The first is that people are drawn to “unfiltered, largely unconsidered postings,” whether it’s a Facebook microblog or a blog comment. When it comes to text, we don’t want polish and reflection. We want gut and fervor. We want Trump’s tweets.

The second factor is the personal filter of Facebook, which seems to take advantage of our social nature.  We remember random updates from our news feed for the same reason we remember all the names of the Pitt-Jolie children: we are gossipy creatures, perpetually interested in the lives of others.

This research helps explain the value of Facebook, which is currently the 7th most valuable company in the world. The success of the company, which sells ads against our attention, is ultimately dependent on our willingness to read the haphazard content produced by other people for free. This might seem like a bug, but it’s actually an essential feature of the social network. “These especially memorable Facebook posts,” write the scientists, “may be far closer than professionally crafted sentences to tapping into the basic language capacities of our minds. Perhaps the very sentences that are so effortlessly generated are, for that reason, the same ones that are readily remembered.” While traditional media companies assume people want clean and professional prose, it turns out that we’re compelled to remember the casual and flippant. The problem, of course, is that the Facebook feed is filtered to maximize attention, not truth, which can lead to the spread of sticky lies. When our private feed is full of memorable falsehoods, what happens to public discourse?

And it’s not just Facebook: the rise of the smartphone has encouraged a parallel rise in informal messaging. (We've gone from email to emojis in a few short years.) Consider Snapchat, the social network du jour. Its entire business model depends on the eagerness of users to consume raw visual content, produced by friends in the grip of System 1. In a universe overflowing with professional video content, it might seem perverse that we spend so much time watching grainy videos of random events. But this is what we care about. This is what we remember.

The creation of content used to be a professional activity. It used to require moveable type and a printing press and a film crew. But digital technology democratized the tools. And once that happened, once anyone could post anything, we discovered an entirely new form of text and video. We learned that the most powerful publishing platform is social, because it embeds the information in a social context. (And we are social animals.) But we also learned about our preferred style, which is the absence of style: the writing that sticks around longest in our memory is what seems to take the least amount of time to create. All art aspires to the condition of the Facebook post. 

Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., & Christenfeld, N. J. (2013). Major memory for microblogs. Memory & Cognition, 41(4), 481-489.

The Psychology of the Serenity Prayer

One of the essential techniques of Cognitive-Behavioral Therapy (CBT) is reappraisal. It’s a simple enough process: when you are awash in negative emotion, you should reappraise the stimulus to make yourself feel better.

Let’s say, for instance, that you are stuck in traffic and are running late to your best friend’s birthday party. You feel guilty and regretful; you are imagining all the mean things people are saying about you. “She’s always late!” “He’s so thoughtless.” “If he were a good friend, he’d be here already.”

To deal with this loop of negativity, CBT suggests that you think of new perspectives that lessen the stress. The traffic isn’t your fault. Nobody will notice. Now you get to finish this interesting podcast.

It’s an appealing approach, rooted in CBT’s larger philosophy that the way an individual perceives a situation is often more predictive of his or her feelings than the situation itself. 

There’s only one problem with reappraisal: it might not work. For instance, a recent meta-analysis showed that the technique is only modestly useful at modulating negative emotions. What’s worse, there’s suggestive evidence that, in some contexts, reappraisal may actually backfire. According to a 2013 paper by Allison Troy et al., among people who were stressed about a controllable situation—say, being fired because of poor work performance—better reappraisal ability was associated with higher levels of depression.

Why doesn’t reappraisal always work? One possible answer involves an old hypothesis known as strategy-situation fit, first outlined by Richard Lazarus and Susan Folkman in the late 1980s. This approach assumes that there is no universal fix for anxiety and depression, no single tactic that always grants us peace of mind. Instead, we must match the strategy to the situation, since its effectiveness depends on the larger context.

A new paper by Simon Haines et al. (senior author Peter Koval) in Psychological Science provides new evidence for the strategy-situation fit model. While previous research has suggested that the success of reappraisal depends on the nature of the stressor—it’s only useful when we can’t control the source of the stress—these Australian researchers wanted to measure the relevant variables in the real world, and not just in the lab. To do this, they designed a new smartphone app that pushed out surveys at random moments. Each survey asked their participants a few questions about their use of reappraisal and the controllability of their situation. These responses were then correlated with several questionnaires measuring well-being and mental health.

The results confirmed the importance of strategy-situation fit. According to the data, people with lower levels of well-being (they had more depressive symptoms and/or stress) used reappraisal in the wrong contexts, increasing their use of the technique when they were in situations they perceived as controllable. For example, instead of leaving the house earlier, or trying to perform better at work, people with poorer “strategy-situation fit” might spend time trying to talk themselves into a better mood. People with higher levels of well-being, in contrast, were more likely to use reappraisal at the right time, when they were confronted with situations they felt they could not control. (Bad weather, mass layoffs, etc.) This leads Haines et al. to conclude that, “rather than being a panacea, reappraisal may be adaptive only in relatively uncontrollable situations.”
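One simple way to operationalize “strategy-situation fit” from this kind of experience-sampling data is a within-person correlation: does a given person’s reappraisal use rise as perceived controllability falls? The sketch below is my illustration of that idea, not the authors’ actual multilevel model, and the survey values are invented.

```python
from statistics import correlation  # Python 3.10+

def fit_index(surveys):
    """Positive values mean reappraisal is saved for situations the
    person feels they cannot control: good strategy-situation fit."""
    reappraisal = [s["reappraisal"] for s in surveys]
    uncontrollability = [1 - s["controllability"] for s in surveys]
    return correlation(reappraisal, uncontrollability)

# A hypothetical week of phone surveys for one participant (0-1 scales).
week = [{"reappraisal": 0.9, "controllability": 0.1},
        {"reappraisal": 0.7, "controllability": 0.2},
        {"reappraisal": 0.3, "controllability": 0.8},
        {"reappraisal": 0.2, "controllability": 0.9}]
print(fit_index(week))  # near +1: reappraisal reserved for the uncontrollable
```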

Why doesn’t reappraisal help when we can influence the situation? One possibility is that focusing on our reaction might make us less likely to take our emotions seriously. We’re so focused on changing our thoughts—think positive!—that we forget to seek an effective solution. 

Now for the caveats. The most obvious limitation of this paper is that the researchers relied on subjects to assess the controllability of a given situation; there were no objective measurements. The second limitation is the lack of causal data. Because this was not a longitudinal study, it’s still unclear if higher levels of well-being are a consequence or a precursor of more strategic reappraisal use. How best to deal with our emotions is an ancient question. It won’t be solved anytime soon.

That said, this study does offer some useful advice for practitioners and patients using CBT. As I noted in an earlier blog, there is worrying evidence that CBT has gotten less effective over time, at least as measured by its ability to reduce depressive symptoms. (One of the leading suspects behind this trend is the growing popularity of the treatment, which has led more inexperienced therapists to begin using it.) While more study is clearly needed, this research suggests ways in which standard CBT might be improved. It all comes down to an insight summarized by the great Reinhold Niebuhr in the Serenity Prayer:

God, grant me the serenity to accept the things I cannot change,

Courage to change the things I can,

And wisdom to know the difference.                                         

That’s wisdom: tailoring our response based on what we can and cannot control. Serenity is a noble goal, but sometimes the best way to fix ourselves is to first fix the world.

Haines, Simon J., et al. "The Wisdom to Know the Difference: Strategy-Situation Fit in Emotion Regulation in Daily Life Is Associated With Well-Being." Psychological Science (2016).

How Southwest Airlines Is Changing Modern Science

The history of science is largely the history of individual genius. From Galileo to Einstein, Isaac Newton to Charles Darwin, we tend to celebrate the breakthroughs achieved by a mind working by itself, seeing more reality than anyone has ever seen before.

It’s a romantic narrative. It’s also obsolete. As documented in a pair of Science papers by Stefan Wuchty, Benjamin Jones and Brian Uzzi, modern science is increasingly a team sport: more than 80 percent of science papers are now co-authored. These teams are also producing the most influential research, as papers with multiple authors are 6.3 times more likely to get at least 1000 citations. The era of the lone genius is over.

What’s causing the dramatic increase in scientific collaboration? One possibility is that the rise of teams is a response to the increasing complexity of modern science. To advance knowledge in the 21st century, one has to master an astonishing amount of information and experimental know-how; because we have discovered so much, it’s harder to discover something new. (In other words, the mysteries that remain often exceed the capabilities of the individual mind.) This means that the most important contributions now require collaboration, as people from different specialties work together to solve extremely difficult problems.

But this might not be the only reason scientists are working together more frequently. Another possibility is that the rise of teams is less about shifts in knowledge and more about the increasing ease of interacting with other researchers. It’s not about science getting hard. It’s about collaboration getting easy.

While it seems likely that both of these explanations are true—the trend is probably being driven by multiple factors—a new paper emphasizes the changes that have reduced the costs of academic collaboration. To do this, the economists Christian Catalini, Christian Fons-Rosen and Patrick Gaule looked at what happens to scientific teams after Southwest Airlines enters a metropolitan market. (On average, the entrance of Southwest leads to a roughly 20 percent reduction in fares and a 44 percent increase in passengers.) If these research partnerships are held back by practical obstacles—money, time, distance, etc.—then the arrival of Southwest should lead to a spike in teamwork.

That’s exactly what they found. According to the researchers, after Southwest begins a new route, collaborations among scientists increase across every scientific discipline. (Physicists increase their collaborations by 26 percent, while biologists seem to really love cheap airfare: their collaborations increase by 85 percent.) To better understand these trends, and to rule out some possible confounds, Catalini et al. zoomed in on collaborations among chemists. They tracked the research produced by 819 pairs of chemists between 1993 and 2012. Once again, they found that the entry of Southwest into a new market leads to an approximately 30 percent spike in collaboration among chemists living near the new routes. What’s more, this trend towards teamwork showed no signs of existing before the arrival of the low-cost airline.
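To see the shape of the analysis, here is a toy version in Python. The panel is simulated, and the crude before-and-after comparison at the end stands in for the paper’s actual event-study design, which adds pair and year fixed effects and checks for pre-trends.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = list(range(1993, 2013))
n_pairs = 819  # the number of chemist pairs in the paper

# Simulated panel: each pair's route may get Southwest entry in some year
# (9999 = never), and entry nudges up the expected co-authored papers.
entry_year = rng.choice(years + [9999], size=n_pairs)
rows = [(p, y, int(y >= entry_year[p]))
        for p in range(n_pairs) for y in years]
panel = pd.DataFrame(rows, columns=["pair", "year", "southwest"])
panel["collabs"] = rng.poisson(0.10 * (1 + 0.3 * panel["southwest"]))

# Average collaborations per pair-year, before vs. after entry.
print(panel.groupby("southwest")["collabs"].mean())
```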

At first glance, it seems likely that these new collaborations triggered by Southwest will produce research of lower quality. After all, the fact that the scientists waited to work together until airfares were slightly cheaper suggests that they didn’t think their new partnership would create a lot of value. (A really enticing collaboration should have been worth a more expensive flight, especially since the arrival of Southwest didn’t significantly increase the number of direct routes.) But that isn’t what Catalini et al. found. Instead, they discovered that Southwest’s entry into a market led to an increase in higher quality publications, at least as measured by the number of citations. Taken together, these results suggest that cheaper air travel is not only redrawing the map of scientific collaboration, but fundamentally improving the quality of research.

There is one last fascinating implication of this dataset. The spread of Southwest paralleled the rise of the Internet, as it became far easier to communicate and collaborate using digital tools, such as email and Skype. In theory, these virtual interactions should make face-to-face conversations unnecessary. Why put up with the hassle of air travel when there’s Facetime? Why meet in person when there’s Google Docs? The Death of Distance and all that.

But this new paper is a reminder that face-to-face interactions are still uniquely valuable. I’ve written before about the research of Isaac Kohane, a professor at Harvard Medical School. A few years ago, he published a study that looked at the influence of physical proximity on the quality of research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Geography turned out to be a crucial variable: when coauthors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were located within ten meters of each other, while the least cited papers tended to emerge from collaborators who were a kilometer or more apart.

Even in the 21st century, the best way to work together is to be together. The digital world is full of collaborative tools, but these tools are still not a substitute for meetings that take place in person.* That’s why we get on a plane.

Never change, Southwest.

Catalini, Christian, Christian Fons-Rosen, and Patrick Gaulé. "Did cheaper flights change the geography of scientific collaboration?" SSRN Working Paper (2016). 

* Consider a study that looked at the spread of Bitnet, a precursor to the internet. As one might expect, the computer network significantly increased collaboration among electrical engineers at connected universities. However, the boost in collaboration was far larger among engineers who were within driving distance of each other. Yet more evidence for the power of in-person interactions comes from a 2015 paper by Catalini, which looked at the relocation of scientists following the removal of asbestos from Paris Jussieu, the largest science university in France. He found that science labs that had been randomly relocated in the same area were 3.4 to 5 times more likely to collaborate. Meatspace matters.

Do Social Scientists Know What They're Talking About?

The world is lousy with experts. They are everywhere: opining in op-eds, prognosticating on television, tweeting out their predictions. These experts have currency because their opinions are, at least in theory, grounded in their expertise. Unlike the rest of us, they know what they’re talking about.

But do they really? The most famous study of political experts, led by Philip Tetlock at the University of Pennsylvania, concluded that the vast majority of pundits barely beat random chance when it came to predicting future events, such as the winner of the next presidential election. They spun out confident predictions but were never held accountable when their predictions proved wrong. The end result was a public sphere that rewarded overconfident blowhards. Cable news, Q.E.D.

While the thinking sins identified by Tetlock are universal - we’re all vulnerable to overconfidence and confirmation bias - it’s not clear that the flaws of political experts can be generalized to other forms of expertise. For one thing, predicting geopolitics is famously fraught: there are countless variables to consider, interacting in unknowable ways. It’s possible, then, that experts might perform better in a narrower setting, attempting to predict the outcomes of experiments in their own field.

A new study, by Stefano DellaVigna at UC Berkeley and Devin Pope at the University of Chicago, aims to put academic experts to this more stringent test. They assembled 208 experts from the fields of economics, behavioral economics and psychology and asked them to forecast the impact of different motivators on subjects performing an extremely tedious task. (They had to press the “a” and “b” buttons on their keyboard as quickly as possible for ten minutes.) The experimental conditions ranged from the obvious - paying for better performance - to the subtle, as DellaVigna and Pope also looked at the influence of peer comparisons, charity and loss aversion. What makes these questions interesting is that DellaVigna and Pope already knew the answers: they’d run these motivational studies on nearly 10,000 subjects. The mystery was whether or not the experts could predict the actual results.

To make the forecasting easier, the experts were given three benchmark conditions and told the average number of presses, or “points,” in each condition. For instance, when subjects were told that their performance would not affect their payment, they only averaged 1521 points. However, when they were paid 10 cents for every 100 points, they averaged 2175 total points. The experts were asked to predict the number of points in fifteen additional experimental conditions.

The good news for experts is that these academics did far better than Tetlock’s pundits. When asked to predict the average points in each condition, they demonstrated the wisdom of crowds: their predictions were off by only 5 percent. If you’re a policy maker, trying to anticipate the impact of a motivational nudge, you’d be well served by asking a bunch of academics for their opinions. 

The bad news is that, on an individual level, these academics still weren’t very good. They might have looked prescient when their answers were pooled together, but the results were far less impressive if you looked at the accuracy of experts in isolation. Perhaps most distressing, at least for the egos of experts, is that non-scientists were much better at ranking the treatments against each other, forecasting which conditions would be most and least effective. (As DellaVigna pointed out in an email, this is less a consequence of expert failure and more a tribute to the fact that non-experts did “amazingly well” at the task.) The takeaway is straightforward: there might be predictive value in a diverse group of academics, but you’d be foolish to trust the forecast of a single one.

Furthermore, there was shockingly little relationship between academic credentials and overall performance. Full professors tended to underperform assistant professors, while having more Google Scholar citations was correlated with lower levels of accuracy. (PhD students were “at least as good” as their bosses.) Academic experience clearly has virtues. But making better predictions about experiments does not seem to be one of them.

Since Tetlock published his damning critique of political pundits, he has gone on to study so-called “superforecasters,” those amateurs whose predictions of world events are consistently more accurate than those of intelligence analysts with access to classified information. (In general, these superforecasters share a particular temperament: they’re willing to learn from their mistakes, quick to update their beliefs and tend to think in shades of gray.) After mining the data, DellaVigna and Pope were able to identify their own superforecasters. As a group, these non-experts significantly outperformed the academics, improving on the average error rate of the professors by more than 20 percent. These people had no background in behavioral research. They were paid $1.50 for 10 minutes of their time. And yet, they were better than the experts at predicting research outcomes.

The limitations of expertise are best revealed by the failure of the experts to foresee their own shortcomings. When the academics were surveyed by DellaVigna and Pope, they predicted that high-citation experts would be significantly more accurate. (The opposite turned out to be true.) They also expected PhD students to underperform the professors – that didn’t happen, either – and that academics with training in psychology would perform the best. (The data points in the opposite direction.)

It’s a poignant lapse. These experts have been trained in human behavior. They have studied our biases and flaws. And yet, when it comes to their own performance, they are blind to their own blind spots. The hardest thing to know is what we don’t.

DellaVigna, Stefano, and Devin Pope. "Predicting Experimental Results: Who Knows What?" NBER Working Paper (2016).

The Power of Family Memory

In a famous series of studies conducted in the 1980s, the psychologists Betty Hart and Todd Risley gave parents a new variable to worry about: the number of words they speak to their children. According to Hart and Risley, the quantity of spoken language in a household is predictive of IQ scores, vocabulary size and overall academic success. The language gap even begins to explain socio-economic disparities in educational outcomes, as upper-class parents speak, on average, about 3.5 times more to their kids than their poorer peers. Hart and Risley referred to the lack of spoken words in poor households as "the early catastrophe."

In recent years, however, it’s become clear that it’s not just the amount of language that counts. Rather, researchers have found that some kinds of conversations are far more effective at promoting mental and emotional development than others. While all parents engage in roughly similar amounts of so-called “business talk” – these are interactions in which the parent is offering instructions, such as “Hold out your hands,” or “Stop whining!” – there is far more variation when it comes to what Hart and Risley called “language dancing,” or conversations in which the parent and child are engaged in a genuine dialogue. According to a 2009 study by researchers at the UCLA School of Public Health, parent-child dialogues were six times as effective in promoting the development of language skills as those in which the adult did all the talking.

So conversation is better than instruction; dialogues over monologues. But this only leads to the next practical question: What’s the best kind of conversation to have with children? If we only have a limited amount of “language dancing” time every day - my kids usually start negotiating for dessert roughly five minutes into dinner - then what should we choose to chat about? And this isn’t just a concern for precious helicopter parents. Rather, it’s a relevant topic for researchers trying to design interventions for at-risk children, as they attempt to give caregivers the tools to ensure successful development.

A new answer is emerging. According to a recent paper by the psychologists Karen Salmon and Elaine Reese, one of the best subjects of parent-child conversation is the past, or what they refer to as “elaborative reminiscing.” As evidence, Salmon and Reese cite a wide variety of studies, drawn from more than three decades of research on children between the ages of 18 months and 5 years, all of which converge on a similar theme: discussing our memories is an extremely effective way to promote cognitive and emotional growth. Maybe it’s a scene from our last family vacation, or an accounting of what happened at school that day, or that time I locked my keys in the car - the details of the memory don’t seem to matter that much. What does is that we remember together.

Here’s an example of the everyday reminiscing the scientists recommend:

Mother: “What was the first thing he [the barber] did?”

Child: “Bzzzz.” (running his hand over his head)

Mother: “He used the clippers, and I think you liked the clippers. And you know how I know? Because you were smiling.”

Child: “Because they were tickling.”

Mother: “They were tickling, is that how they felt? Did they feel scratchy?”

Child: “No.”

Mother: “And after the clippers, what did he use then?”

Child: “The spray.”

Mother: “Yes. Why did he use the spray?”

Child: (silent)

Mother: “He used the spray to tidy your hair. And I noticed that you closed your eyes, and I thought ‘Jesse’s feeling a little bit scared,’ but you didn’t move or cry and I thought you were being very brave.”

It’s such an ordinary conversation, but Salmon and Reese point out its many virtues. For one thing, the questions are leading the child through his recent haircut experience. He is learning how to remember, what it takes to unpack a scene, the mechanics of turning the past into a story. Over time, these skills play a huge role in language development, which is why children who engage in more elaborative reminiscing with their parents tend to have more advanced vocabularies, better early literacy scores and improved narrative skills. In fact, one study found that teaching low-income mothers to “reminisce in more elaborative ways” led to bigger improvements in narrative skills and story comprehension than an interactive book-reading program.

But talking about the past isn’t just about turning our kids into better storytellers. It’s also about boosting their emotional intelligence, teaching them how to handle those feelings they’d rather forget. In A Book About Love, I wrote about research showing that children raised in households that engage in the most shared recollection report higher levels of emotional well-being and a stronger sense of personal identity. The family unit also becomes stronger, as those children and parents who know more about the past also score higher on a widely used measure of “reported family functioning.” Salmon and Reese expand on these findings, citing research showing that emotional reminiscing is linked to long-term improvements in the ability of children to regulate their negative emotions, handle difficult situations and identify the feelings of themselves and others.

Consider the haircut conversation above. Notice how the mother identifies the feelings felt by the child: enjoyment, tickling, fear. She suggests triggers for these emotions - the clippers, the water spray - and helps her son understand their fleeting nature. (Because the feelings are no longer present, they can be discussed calmly. That’s why talking about remembered emotions is often more useful than talking about emotions in the heat of the moment.) The virtue of such dialogues is that they teach children how to cope with their feelings, even when what they feel is fury and fear. As Salmon and Reese note, these are particularly important skills for mothers who have been exposed to adverse or traumatic experiences, such as drug abuse or domestic violence. Studies show that these at-risk parents are much less likely to incorporate “emotion words” when talking with their children. And when they do discuss their memories, Salmon and Reese write, they often “remain stuck in anger.” Their past isn’t past yet.

Perhaps this is another benefit of elaborative reminiscing. When we talk about our memories with loved ones, we translate the event into language, giving that swirl of emotion a narrative arc. (As the psychologist James Pennebaker has written, "Once it [a painful memory] is language based, people can better understand the experience and ultimately put it behind them.") And so the conversation becomes a moment of therapy, allowing us to make sense of what happened and move on. 

It was just a haircut, but you were so brave.   

Salmon, Karen, and Elaine Reese. "The Benefits of Reminiscing With Young Children." Current Directions in Psychological Science 25.4 (2016): 233-238.       


The Overview Effect

After six weeks in orbit, circling the earth in a claustrophobic space station, the three-person crew of Skylab 4 decided to go on strike. For 24 hours, the astronauts refused to work, and even turned off the radio linking them to Earth. While NASA was confused by the space revolt—mission control was concerned the astronauts were depressed—the men up in space insisted they just wanted more time to admire their view of the earth. As the NASA flight director later put it, the astronauts were asserting “their needs to reflect, to observe, to find their place amid these baffling, fascinating, unprecedented experiences.”

The Skylab 4 crew was experiencing a phenomenon known as the overview effect, which refers to the intense emotional reaction that can be triggered by the sight of the earth from beyond its atmosphere. Sam Durrance, who flew on two shuttle missions, described the feeling like this: “You’ve seen pictures and you’ve heard people talk about it. But nothing can prepare you for what it actually looks like. The Earth is dramatically beautiful when you see it from orbit, more beautiful than any picture you’ve ever seen. It’s an emotional experience because you’re removed from the Earth but at the same time you feel this incredible connection to the Earth like nothing I’d ever felt before.”

[Image: The Caribbean Sea, as seen from ISS Expedition 40]

What’s most remarkable about the overview effect is that the effect lasts: the experience of awe often leaves a permanent mark on the lives of astronauts. A new paper by a team of scientists (the lead author is David Yaden at the University of Pennsylvania) investigates the overview effect in detail, with a particular focus on how this vision of earth can “settle into long-term changes in personal outlook and attitude involving the individual’s relationship to Earth and its inhabitants.” For many astronauts, this is the view they never get over.

How does this happen? How does a short-lived perception alter one’s identity? There is no easy answer. In this paper, the scientists focus on how the sight of the distant earth is so contrary to our usual perspective that it forces our “self-schema” to accommodate an entirely new point of view. We might conceptually understand that the earth is a lonely speck floating in space, a dot of blue amid so much black. But it’s an entirely different thing to bear witness to this reality, to see our fragile planet from hundreds of miles away. The end result is that the self itself is changed; this new perspective of earth alters one’s perspective on life, with the typical astronaut reporting “a greater affiliation with humanity as a whole.” Here’s Ed Gibson, the science pilot on Skylab 4: “You see how diminutive your life and concerns are compared to other things in the universe. Your life and concerns are important to you, of course. But you can see that a lot of the things you worry about do not make much difference in an overall sense.”

There are two interesting takeaways. The first one, emphasized in the paper, is that the overview effect might serve as a crucial coping mechanism for the challenges of space travel. Astronauts live a grueling existence: they are stressed, isolated and exhausted. They live in cramped quarters, eat terrible food and never stop working. If we are going to get people to Mars, then we need to give astronauts tools to endure their time on a spaceship. As the crew of Skylab 4 understood, one of the best ways to withstand space travel is to appreciate its strange beauty.

The second takeaway has to do with the power of awe and wonder. When you read old treatises on human nature, these lofty emotions are often celebrated. Aristotle argued that all inquiry began with the feeling of awe, that “it is owing to their wonder that men both now begin and at first began to philosophize.” Rene Descartes, meanwhile, referred to wonder as the first of the passions, “a sudden surprise of the soul that brings it to focus on things that strike it as unusual and extraordinary.” In short, these thinkers saw the experience of awe as a fundamental human state, a feeling so strong it could shape our lives.

But now? We have little time for awe in the 21st century; wonder is for the young and unsophisticated. To the extent we consider these feelings at all, it’s for a few brief moments on a hike in a National Park, or while marveling at a child’s face as she first enters Disneyland. (And then we get out our phones and take a picture.) Instead of cultivating awe, we treat it as just another fleeting feeling, one for those who don’t know any better.

The overview effect, however, is a reminder that these emotions can have a lasting impact. Like the Skylab 4 astronauts, we can push back against our hectic schedules, insisting that we find some time to stare out the window.  

Who knows? The view just might change your life.

Yaden, David B., et al. "The overview effect: Awe and self-transcendent experience in space flight." Psychology of Consciousness: Theory, Research, and Practice 3.1 (2016): 1.


How Magicians Make You Stupid

The egg bag magic trick is simple enough. A magician produces an egg and places it in a cloth bag. Then, the magician uses some poor sleight of hand, pretending to hide the egg in his armpit. When the bag is revealed as empty, the audience assumes it knows where the egg really is.

But the egg isn’t there. The armpit was a false solution, distracting the crowd from the real trick: the bag contains a secret compartment. When the magician finally lifts his arm, the audience is impressed by the vanishing. How did he remove the egg from his armpit? It never occurs to them that the egg never left the bag.

Magicians are intuitive psychologists, reverse-engineering the mind and preying on all its weak spots. They build illusions out of our frailties, hiding rabbits in our attentional blind spots and distracting the eyes with hand waves and wands. And while people in the audience might be aware of their perceptual shortcomings – those fingers move so fast! – they are often blind to a crucial cognitive limitation, which allows magicians to keep us from deciphering the trick. In short, magicians know that people tend to fixate on particular answers (the egg is in the armpit), and thus ignore alternative ones (it’s a trick bag), even when the alternatives are easier to execute.

When it comes to problem-solving, this phenomenon is known as the Einstellung effect. (Einstellung is German for “setting” or “attitude.”) First identified by the psychologist Abraham Luchins in the early 1940s, the effect has since been replicated in numerous domains. Consider a study that gave chess experts a series of difficult chess problems, each of which contained two solutions. The players were asked to find the shortest possible way to win. The first solution was obvious and took five moves to execute. The second solution was less familiar, but could be achieved in only three moves. As expected, these expert players found the first solution right away. Unfortunately, most of them then failed to identify the second one, even though it was more efficient. The good answer blinded them to the better one.

Back to magic tricks. A new paper in Cognition, by Cyril Thomas and Andre Didierjean, extends the reach of the Einstellung effect by showing that it limits our problem-solving abilities even when the false solution is unfamiliar and unlikely. Put another way, preposterous explanations can also become mental blocks, preventing us from finding answers that should be obvious. To demonstrate this, the scientists showed 90 students one of three versions of a card trick. The first version went like this: a performer showed the subject a brown-backed card surrounded by six red-backed cards. After randomly touching the back of the red cards, he asked the subject to choose one of the six, which was turned face up. It was a jack of hearts. The magician then flipped over the brown-backed card at the center, which was also a jack of hearts. The experiment concluded with the magician asking the subject to guess the secret of the trick. In this version, 83 percent of subjects quickly figured it out: all of the cards were the same.

The second version featured the same trick, except that the magician slyly introduced a false solution. Before a card was picked, he explained that he was able to influence other people’s choices through physical suggestions. He then touched the backs of the red cards, acting as if these touches could sway the subject’s mind. After the trick was complete, these subjects were also asked to identify the secret. Most couldn’t: only 17 percent realized that every card was the jack of hearts. Their confusion persisted even after the magician encouraged them to keep thinking of alternative explanations.

This is a remarkable mental failure. It’s a reminder that our beliefs are not a mirror to the world, but rather bound up with the limits of the human mind. In this particular case, our inability to see the obvious trick seems to be a side-effect of our feeble working memory, which can only focus on a few bits of information at any given moment. (In an email, Thomas notes that it is more “economical to focus on one solution, and to not lose time…searching for a hypothetical alternative one.”) And so we fixate on the most salient answer, even when it makes no sense. As Thomas points out, a similar lapse explains the success of most mind-reading performances: we are so seduced by the false explanation (parapsychology!) that we neglect the obvious trick, which is that the magician gathered personal information about us from Facebook. The performance works because we lack the bandwidth to think of a far more reasonable explanation.

Thomas and Didierjean end their paper with a disturbing thought. “If a complete stranger (the magician) can fix spectators’ minds by convincing them that he/she can control their individual choice with his own gesture,” they write, “to what extent can an authority figure (e.g., policeman) or someone that we trust (e.g., doctors, politicians) fix our mind with unsuitable ideas?” They don’t answer the question, but they don’t need to. Just turn on the news.

Thomas, Cyril, and André Didierjean. "Magicians fix your mind: How unlikely solutions block obvious ones." Cognition 154 (2016): 169-173.

What Can Toilet Paper Teach Us About Poverty?

“Costco is where you go broke saving money.”

-My Uncle

The fundamental paradox of big box stores is that the only way to save money is to spend lots of it. Want to get a discount on that shampoo? Here's a liter. That’s a great price for chapstick – now you have 32 of them. The same logic applies to most staples of modern life, from diapers to Pellegrino, Uni-ball pens to laundry detergent.

For consumers, this buy-in-bulk strategy can lead to real savings, especially if the alternative is a bodega or Whole Foods. (Brand-name diapers, for instance, cost nearly twice as much at my local grocery store as at Costco.) However, not every American is equally likely to seek out these discounts. In particular, some studies have found that lower-income households – the ones who could benefit the most from that huge bottle of Kirkland shampoo – pay higher unit prices because they don’t make bulk purchases.

A new paper, “Frugality is Hard to Afford,” by A. Yesim Orhun and Mike Palazzolo investigates why this phenomenon exists. Their data set featured the toilet paper purchases of more than 100,000 American families over seven years. Orhun and Palazzolo focused on toilet paper for several reasons. First, consumption of toilet paper is relatively constant. Second, toilet paper is easy to store – it doesn’t spoil – making it an ideal product to purchase in bulk, at least if you’re trying to get a discount. Third, the range of differences between brands of toilet paper is rather small, at least when compared to other consumer products such as detergent and toothpaste. 

So what did Orhun and Palazzolo find? As expected, lower-income households were far less likely to take advantage of the lower unit prices that come with bulk purchases. Over time, these shopping habits add up, as the poorest families end up paying, on average, 5.9 percent more per sheet of toilet paper.

The question, of course, is why this behavior exists. Shouldn’t poor households be the most determined to shop around for cheap rolls? The most obvious explanation is what Orhun and Palazzolo refer to as a liquidity constraint: the poor simply lack the cash to “invest” in a big package of toilet paper. As a result, they are forced to buy basic household supplies on an as-needed basis, which makes it much harder to find the best possible price.
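
To see how quickly small purchases compound, here is a minimal sketch in Python. The pack sizes, prices, and usage rate are hypothetical numbers of my own, not figures from the paper; the measured gap (5.9 percent per sheet) is far smaller than this toy example because real households mix small and bulk purchases over time.

    # Toy model of a liquidity constraint: a shopper who can spare only a few
    # dollars per trip is locked into small packs at a higher per-roll price.
    # All prices and quantities below are hypothetical, for illustration only.
    small_pack = {"rolls": 4, "price": 5.00}    # $1.25 per roll
    bulk_pack = {"rolls": 24, "price": 18.00}   # $0.75 per roll

    def annual_cost(pack, rolls_per_year=100):
        """Yearly spend if every purchase is this pack size."""
        return (pack["price"] / pack["rolls"]) * rolls_per_year

    constrained = annual_cost(small_pack)     # 125.00
    unconstrained = annual_cost(bulk_pack)    # 75.00
    premium = constrained / unconstrained - 1

    print(f"cash-constrained shopper: ${constrained:.2f} per year")
    print(f"bulk shopper:             ${unconstrained:.2f} per year")
    print(f"premium for buying small: {premium:.0%}")

Both shoppers use exactly the same number of rolls; the only difference is how much cash they can set aside at the moment of purchase.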

But this is not the only constraint imposed by poverty. In a 2013 Science paper, the behavioral scientists Anandi Mani, Sendhil Mullainathan, Eldar Shafir and Jiaying Zhao argued that not having money also imposes a mental burden, as our budgetary worries consume scarce attentional resources. This makes it harder for low-income households to plan for the future, whether it’s buying toilet paper in bulk or saving for retirement. “The poor, in this view, are less capable not because of inherent traits,” write the scientists, “but because the very context of poverty imposes load and impedes cognitive capacity.”

Consider a clever experiment conducted by Mani et al. at a New Jersey mall. They asked shoppers about various hypothetical scenarios involving a financial problem. For instance, they might be told that their “car is having some trouble and requires $[X] to be fixed.” Some subjects were told that the repair was extremely expensive ($1500), while others were told it was relatively cheap ($150). Then, all participants were given a series of challenging cognitive tasks, including some questions from an intelligence test and a measure of impulse control.

The results were startling. Among rich subjects, it didn’t really matter how much the car cost to fix – they performed equally well whether the repair estimate was $150 or $1500. Poor subjects, however, showed a troubling difference. When the repair estimate was low, they performed roughly as well as the rich subjects. But when the repair estimate was high, they showed a steep dropoff in performance on both tests, comparable in magnitude to the mental deficit associated with losing a full night of sleep or with chronic alcoholism.

This new toilet paper study provides some additional evidence that poverty takes a toll on our choices. In one analysis, Orhun and Palazzolo looked at how purchasing behavior changed at the start of the month, when low-income households are more likely to receive paychecks and food stamps. As the researchers note, this influx of money should temporarily ease the stress of being poor, thus making it easier to buy in bulk.

That’s exactly what they found. When the poorest households were freed from their most pressing liquidity constraints, they made much more cost-effective toilet paper purchases. (This also suggests that poorer households are not simply buying smaller bundles because of limited storage space or transportation, as those constraints are unlikely to fluctuate from week to week.) Of course, the money didn’t last long; the following week, these households reverted to their old habits, overpaying for household products. And so those with the least end up with even less.

Orhun, A. Yesim, and Mike Palazzolo. "Frugality is hard to afford." University of Michigan Working Paper (2016).

Mani, Anandi, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao. "Poverty impedes cognitive function." Science 341 (2013): 976-980.

The Nordic Paradox

By virtually every measure, the Nordic countries – Denmark, Finland, Iceland, Norway and Sweden – are a paragon of gender equality. It doesn’t matter if you’re looking at the wage gap or political participation or educational attainment: the Nordic region is the most gender-equal place in the world.

But this equality comes with a disturbing exception: Nordic women also suffer from intimate partner violence (IPV) at extremely high rates. (IPV is defined by the CDC as the experience of “physical violence, sexual violence, stalking and psychological aggression by a current or former intimate partner.”) While the average lifetime prevalence of intimate partner violence among women living in Europe is 22 percent – a horrifyingly high number by itself – the Nordic numbers are even worse. In fact, Denmark has the highest rate of IPV in the EU at 32 percent, closely followed by Finland (30 percent) and Sweden (28 percent). And it’s not just violence from partners: other surveys have looked at violence against women in general. Once again, the Nordic countries had some of the highest rates in the EU, as measured by reports of sexual assault, physical abuse and emotional abuse.

A new paper in Social Science & Medicine by Enrique Gracia and Juan Merlo refers to the coexistence of these two realities – gender equality and high rates of violence against women – as the Nordic paradox. It’s a paradox because a high risk of IPV for women is generally associated with lower levels of gender equality, particularly in poorer countries. (For example, 71 percent of Ethiopian women have suffered from IPV.) This makes intuitive sense: a country that disregards the rights of women, or fails to treat them as equals, also seems more likely to tolerate their abuse.

And yet, the same logic doesn’t seem to apply at the other extreme of gender equality. As Gracia and Merlo note, European countries with lower levels of gender equality, such as Italy and Greece, also report much lower levels of IPV (roughly 30 percent lower) than Nordic nations.

What explains this paradox? Why hasn’t the gender equality of Nordic countries reduced violence against women? That’s the tragic mystery investigated by Gracia and Merlo.

One possibility is that the paradox is caused by differences in reporting, as women in Nordic countries might feel more free to disclose abuse. This also makes intuitive sense: if you live in a country with higher levels of gender equality, then you might be less likely to fear retribution when accusing a partner, or when telling the police about a sex crime. (In Saudi Arabia, only 3.3 percent of women who suffered from IPV told the police or a judge.) However, Gracia and Merlo cast doubt on this explanation, noting that the available evidence suggests lower levels of disclosure of IPV among women in the Nordic countries. For instance, while 20 percent of women in Europe said that the most serious incident of IPV they’d experienced was brought to the attention of the police, only 10 percent of women in Denmark and Finland could say the same thing. The same trend is supported by other data, including rape statistics and “victim blaming” surveys. Finally, even if part of the Nordic paradox were a reporting issue, that would only underscore the real mystery, which is that gender-equal societies still suffer from epidemic levels of violence against women.

The main hypothesis advanced by Gracia and Merlo – and it’s only a hypothesis – is that high gender equality might create a backlash effect among men, triggering high levels of violence against women. Because gender equality disrupts traditional gender norms, it might also reinforce “victim-blaming attitudes,” in which the violence is excused or justified. Gracia and Merlo cite related studies showing that women with “higher economic status relative to their partners can be at greater IPV risk depending on whether their partners hold more traditional gender beliefs.” For these backwards men, the success of women is perceived as a threat, an undermining of their identity. The backlash is further exacerbated as women become more independent and competitive in gender-equal societies, increasing the potential for conflict with partners who insist on control and subservience. Progress leaves some people behind, and those people tend to get angry.

At best, the backlash effect is only a partial explanation for the Nordic Paradox. Gracia and Merlo argue that a real understanding of the prevalence of IPV – why is it still so common, even in developed countries? – will require looking beyond national differences and instead investigating the risk factors that affect the individual. How much does he drink? What is her employment status? Do they live together? What is the neighborhood like? Even brutish behaviors have complicated roots; we need a thick description of life to understand them.  

On the one hand, the Nordic paradox is a testament to liberal values, a reminder that thousands of years of gender inequality can be reversed in a few short decades. The progress is real. On the other hand, it shows that progress is difficult, full of strange backlashes and reversals. Two steps forward, one step back. Or is it the other way around? We can see the moral universe bending, but goddamn is it slow.

Gracia, Enrique, and Juan Merlo. "Intimate partner violence against women and the Nordic paradox." Social Science & Medicine 157 (2016): 27-30.

via MR