The Nordic Paradox

By virtually every measure, the Nordic countries – Denmark, Finland, Iceland, Norway and Sweden – are a paragon of gender equality. It doesn’t matter if you’re looking at the wage gap or political participation or educational attainment: the Nordic region is the most gender equal place in the world.

But this equality comes with a disturbing exception: Nordic women also suffer from intimate partner violence (IPV) at extremely high rates. (IPV is defined by the CDC as the experience of “physical violence, sexual violence, stalking and psychological aggression by a current or former intimate partner.”) While the average lifetime prevalence for intimate partner violence for women living in Europe is 22 percent – a horrifyingly high number by itself – Nordic countries perform even worse. In fact, Denmark has the highest rate of IPV in the EU at 32 percent, closely followed by Finland (30 percent) and Sweden (28 percent). And it’s not just violence from partners: other surveys have looked at violence against women in general. Once again, the Nordic countries had some of the highest rates of violence in the EU, as measured by reports of sexual assault, physical abuse or emotional abuse.

A new paper in Social Science & Medicine by Enrique Gracia and Juan Merlo refers to the existence of these two realities – gender equality and high rates of violence against women – as the Nordic paradox. It’s a paradox because a high risk of IPV for women is generally associated with lower levels of gender equality, particularly in poorer countries. (For example, 71 percent of Ethiopian women have suffered from IPV.) This makes intuitive sense: a country that disregards the rights of women, or fails to treat them as equals, also seems more likely to tolerate their abuse.

And yet, the same logic doesn’t seem to apply at the other extreme of gender equality. As Gracia and Merlo note, European countries with lower levels of gender equality, such as Italy and Greece, also report much lower levels of IPV (roughly 30 percent lower) than Nordic nations.

What explains this paradox? Why hasn’t the gender equality of Nordic countries reduced violence against women? That’s the tragic mystery investigated by Gracia and Merlo.

One possibility is that the paradox is caused by differences in reporting, as women in Nordic countries might feel more free to disclose the abuse. This also makes intuitive sense: if you live in a country with higher levels of gender equality, then you might be less likely to fear retribution when accusing a partner, or telling the police about a sex crime. (In Saudi Arabia, only 3.3 percent of women who suffered from IPV told the police or a judge.) However, Gracia and Merlo cast doubt on this explanation, noting that the available evidence suggests lower levels of disclosure of IPV among women in the Nordic countries. For instance, while 20 percent of women in Europe said that the most serious incident of IPV they’d experienced was brought to the attention of the police, only 10 percent of women in Denmark and Finland could say the same thing. The same trend is supported by other data, including rape statistics and “victim blaming” surveys. Finally, even if part of the Nordic paradox was a reporting issue, this would only reinforce the real mystery, which is that gender equal societies still suffer from epidemic levels of violence against women.

The main hypothesis advanced by Gracia and Merlo – and it’s only a hypothesis – is that high gender equality might create a backlash effect among men, triggering high levels of violence against women.  Because gender equality disrupts traditional gender norms, it might also reinforce “victim-blaming attitudes,” in which the violence is excused or justified. Gracia and Merlo cite related studies showing that women with “higher economic status relative to their partners can be at greater IPV risk depending on whether their partners hold more traditional gender beliefs.” For these backwards men, the success of women is perceived as a threat, an undermining of their identity. This backlash is further exacerbated by women becoming more independent and competitive in gender equal societies, thus increasing the potential for conflict with partners who insist on control and subservience. Progress leaves some people behind, and those people tend to get angry.

At best, the backlash effect is only a partial explanation for the Nordic Paradox. Gracia and Merlo argue that a real understanding of the prevalence of IPV – why is it still so common, even in developed countries? – will require looking beyond national differences and instead investigating the risk factors that affect the individual. How much does he drink? What is her employment status? Do they live together? What is the neighborhood like? Even brutish behaviors have complicated roots; we need a thick description of life to understand them.  

On the one hand, the Nordic paradox is a testament to liberal values, a reminder that thousands of years of gender inequality can be reversed in a few short decades. The progress is real. But it’s also a reminder that progress is difficult, full of strange backlashes and reversals. Two steps forward, one step back. Or is it the other way around? We can see the moral universe bending, but goddamn is it slow.

Gracia, Enrique, and Juan Merlo. "Intimate partner violence against women and the Nordic paradox." Social Science & Medicine 157 (2016): 27-30.

via MR

Did "Clean" Water Increase the Murder Rate?

The construction of public waterworks across the United States in the late 19th and early 20th centuries was one of the great infrastructure investments in American history. As David Cutler and Grant Miller have demonstrated, these waterworks accounted for “nearly half of the total mortality reduction in major cities, three-quarters of the infant mortality reduction, and nearly two-thirds of the child mortality reduction.” Within a generation, the scourge of waterborne infectious diseases – from cholera to typhoid fever – was largely eliminated. Moving to a city no longer took years off your life, a sociological trend that unleashed untold amounts of human innovation.

However, not all urban waterworks were created equal. Some systems were built with metal pipes containing large amounts of lead. (At the time, lead pipes were considered superior to iron pipes, as they were more durable and easier to bend.) Unfortunately, these pipes leached lead particulates into the water, exposing city dwellers to water that tasted clean but was actually a poison.

Over the last few decades, researchers have amassed an impressive body of evidence linking lead exposure in childhood to a tragic list of symptoms, including higher rates of violent crime and lower scores on the IQ test. (One study found that lead levels are four times higher among convicted juvenile offenders than among non-delinquent high school students.) In 2014, I wrote about a paper by Jessica Wolpaw Reyes that documented the association between the use of leaded gasoline and the decline of violent crime:

Reyes concluded that “the phase-out of lead from gasoline was responsible for approximately a 56 percent decline in violent crime” in the 1990s. What’s more, Reyes predicted that the Clean Air Act would continue to generate massive societal benefits in the future, “up to a 70 percent drop in violent crime by the year 2020.” And so a law designed to get rid of smog ended up getting rid of crime. It’s not the prison-industrial complex that keeps us safe. It’s the EPA.

But these studies have their limitations. For one thing, the pace at which states reduced their use of leaded gas might be related to other social or political variables that influence the crime rate. It’s also possible that those neighborhoods with the highest risk of lead poisoning might suffer from additional maladies linked to crime, such as poverty and poor schools. To convincingly demonstrate that lead causes crime, researchers need to find a credible source of variation in lead exposure that is completely independent (aka exogenous) of the factors that might shape criminal behavior.

That is the goal of a new paper by James Feigenbaum and Christopher Muller. Their study mines the historical record, drawing from homicide data between 1921 and 1936 (when the first generation of children exposed to lead pipes were adults) and the materials used to construct each urban water system. If lead was responsible for higher crime rates, then those cities with higher lead content in their pipes (and also more acidic water, which leaches out the lead) should also experience larger spikes in crime decades later.

What makes this research strategy especially useful is that the decision to use lead pipes in a city’s water system was based in part on its proximity to a lead refinery. (Cities that were closer to a refinery were more likely to invest in lead pipes, as the lower transportation costs made the “superior” option more affordable.) In addition, Feigenbaum and Muller were able to look at how the lead content of pipes interacted with the acidity of a city’s water supply, thus allowing them to further isolate the causal role of lead.
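To make the logic concrete, here is a minimal sketch, in the spirit of the paper rather than a reproduction of it: regress city homicide rates on a lead-pipe indicator, water acidity, and their product, since lead should matter most where acidic water leaches it into the supply. Every variable name and the simulated data below are invented for illustration.

```python
# Hedged sketch (not Feigenbaum and Muller's actual specification or data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # hypothetical cities

df = pd.DataFrame({
    "lead_pipes": rng.integers(0, 2, n),  # 1 if the city used lead service pipes
    "acidity": rng.uniform(0, 1, n),      # 0 = hard, alkaline water; 1 = very acidic
})
# Simulated outcome: homicides per 100k rise only where lead pipes meet acidic water.
df["homicide_rate"] = 6 + 2.5 * df["lead_pipes"] * df["acidity"] + rng.normal(0, 1, n)

# The coefficient of interest is the interaction term lead_pipes:acidity.
model = smf.ols("homicide_rate ~ lead_pipes * acidity", data=df).fit(cov_type="HC1")
print(model.params)
```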

The results were clear: cities that used lead pipes had homicide rates that were between 14 and 36 percent higher than cities that opted for cheaper iron pipes.

These violent crime increases are especially striking given that those cities using lead pipes tended to be wealthier, better educated and more “health conscious” than those that did not. All things being equal, one might expect these places to have lower rates of violent crime. But because of a little-noticed engineering decision, the water of these cities contained a neurotoxin, which interfered with brain development and made it harder for their residents to rein in their emotions.

The brain is a plastic machine, molded by its environment. When we introduce a new technology – and it doesn’t matter if it’s an urban water system or the smartphone – it’s often impossible to predict the long-term consequences. Who would have guessed that the more expensive lead pipes would lead to spikes in crime decades later? Or that the heavy use of road salt in the winter would lead to a 21st century water crisis in Flint, as the chloride ions pull lead out of the old pipes?

One day, the scientists of the future will study our own blind spots, as we invest in technologies that mess with the mind in all sorts of subtle ways. History reminds us that these tradeoffs are often unexpected. After all, it took decades before we realized that, for some unlucky cities, even clean water came with a terrible cost.

Feigenbaum, James, and Christopher Muller. "Lead Exposure and Violent Crime in the Early Twentieth Century." Explorations in Economic History (2016).

The Importance of Learning How to Fail

“An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field.” -Niels Bohr

Carol Dweck has devoted her career to studying how our beliefs about intelligence influence the way we learn. In general, she finds that people subscribe to one of two different theories of mental ability. The first theory is known as the fixed mindset – it holds that intelligence is a fixed quantity, and that each of us is allotted a certain amount of smarts we cannot change. The second theory is known as the growth mindset. It’s more optimistic, holding that our intelligence and talents can be developed through hard work and practice. "Do people with this mindset believe that anyone can be anything?" Dweck asks. "No, but they believe that a person's true potential is unknown (and unknowable); that it's impossible to foresee what can be accomplished with years of passion, toil, and training."

You can probably guess which mindset is more useful for learning. As Dweck and colleagues have repeatedly demonstrated, children with a fixed mindset tend to wilt in the face of challenges. For them, struggle and failure are a clear sign they aren’t smart enough for the task; they should quit before things get embarrassing. Those with a growth mindset, in contrast, respond to difficulty by working harder. Their faith in growth becomes a self-fulfilling prophecy; they get smarter because they believe they can.

The question, of course, is how to instill a growth mindset in our children. While Dweck is perhaps best known for her research on praise – it’s better to compliment a child for her effort than for her intelligence, as telling a kid she’s smart can lead to a fixed mindset – it remains unclear how children develop their own theories about intelligence. What makes this mystery even more puzzling is that, according to multiple studies, the mindsets of parents are surprisingly disconnected from the mindsets of their children. In other words, believing in the plasticity of intelligence is no guarantee that our kids will feel the same way.

What explains this disconnect? One possibility is that parents are accidental hypocrites. We might subscribe to the growth mindset for ourselves, but routinely praise our kids for being smart. Or perhaps we tell them to practice, practice, practice, but then get frustrated when they can’t master fractions, or free throws, or riding without training wheels. (I’m guilty of both these sins.) The end result is a muddled message about the mind’s potential.

However, in an important new paper, Kyla Haimovitz and Carol Dweck reveal the real influence behind the mindsets of our children. It turns out that the crucial variable is not what we think about intelligence – it’s how we react to failure.

Consider the following scenario: a child comes home with a bad grade on a math quiz. How do you respond? Do you try to comfort the child and tell him that it’s okay if he isn’t the most talented? Do you worry that he isn’t good at math? Or do you encourage him to describe what he learned from doing poorly on the test?

Parents with a failure-is-debilitating attitude tend to focus on the importance of performance: doing well on the quiz, succeeding at school, getting praise from other people. When confronted with the specter of failure, these parents get anxious and worried. Over time, their children internalize these negative reactions, concluding that failure is a dead-end, to be avoided at all costs. If at first you don’t succeed, then don’t try again.

In contrast, those parents who see failure as part of the learning process are more likely to see the bad grade as an impetus for extra effort, whether it’s asking the teacher for help or trying a new studying strategy. They realize that success is a marathon, requiring some pain along the way. You only learn how to get it right by getting it wrong.

According to the scientists, this is how our failure mindsets get inherited – our children either learn to focus on the appearance of success or on the long-term rewards of learning. Over time, these attitudes towards failure shape their other mindsets, influencing how they feel about their own potential. If they work harder, can they get good at math? Or is algebra simply beyond their reach?

Although the scientists found that children were bad at guessing the intelligence mindsets of their parents – they don’t know if we’re in the growth or fixed category - the kids were surprisingly good at predicting their parents’ relationship to failure. This suggests that our failure mindsets are much more “visible” than our beliefs about intelligence. Our children might forget what happened after the home-run, but they damn sure remember what we said after the strike-out. 

This study helps clarify the forces that shape our children. What matters most is not what we say after a triumph or achievement – it’s how we deal with their disappointments. Do we pity our kids when they struggle? (Sympathy is a natural reaction; it also sends the wrong message.) Do we steer them away from potential defeats? Or do we remind them that failure is an inescapable part of life, a state that cannot be avoided, only endured? Most worthy things are hard.

Haimovitz, K., and C. S. Dweck. "What Predicts Children's Fixed and Growth Intelligence Mind-Sets? Not Their Parents' Views of Intelligence but Their Parents' Views of Failure." Psychological Science (2016).

 

Is Tanking An Effective Strategy in the NBA?

In his farewell manifesto, former Philadelphia 76ers General Manager Sam Hinkie spends 13 pages explaining away the dismal performance of his team, which has gone 47-199 over the last three seasons. Hinkie’s main justification for all the losses involves the consolation of draft picks, which are the NBA’s way of rewarding the worst teams in the league. Here’s Hinkie:

"In the first 26 months on the job we added more than one draft pick (or pick swap) per month to our coffers. That’s more than 26 new picks or options to swap picks over and above the two per year the NBA allots each club. That’s not any official record, because no one keeps track of such records. But it is the most ever. And it’s not close. And we kick ourselves for not adding another handful."

This is the tanking strategy. While the 76ers have been widely criticized for their consistent disinterest in winning games, Hinkie argues that it was a necessary by-product of their competitive position in 2013, when he took over as GM. (According to a 2013 ESPN ranking of each NBA team’s three-year winning potential, the 76ers ranked 24th out of 30.) And so Hinkie, with his self-described “reverence for disruption” and “contrarian mindset,” set out to take a “long view” of basketball success. The best way for the 76ers to win in the future was to keep on losing in the present.  

Hinkie is a smart guy. At the very least, he was taking advantage of the NBA’s warped incentive structure, which can lead to a “treadmill of mediocrity” among teams too good for the lottery but too bad to succeed in the playoffs. However, Hinkie’s devotion to tanking – and his inability to improve the team’s performance - does raise an interesting set of empirical questions. Simply put: is tanking in the NBA an effective strategy? (I’m a Lakers fan, so it would be nice to know.) And if tanking doesn't work, why doesn't it?

A new study in the Journal of Sports Economics, published six days before Hinkie’s resignation, provides some tentative answers. In the paper, Akira Motomura, Kelsey Roberts, Daniel Leeds and Michael Leeds set out to determine whether or not it “pays to build through the draft in the National Basketball Association.” (The alternative, of course, is to build through free agency and trades.) Motomura et al. rely on two statistical tests to make this determination. The first test is whether teams with more high draft picks (presumably because they tanked) improve at a faster rate than teams with fewer such picks. The second test is whether teams that rely more on players they have drafted for themselves win more games than teams that acquire players in other ways. The researchers analyzed data from the 1995 to 2013 NBA seasons.
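For readers who want the flavor of that first test, here is a hedged sketch, not the authors’ actual specification or data, of what such a regression might look like: a team’s improvement in wins a few seasons later regressed on how many high picks it held, controlling for how bad it was to begin with. The column names and placeholder data are invented.

```python
# Hedged sketch (not Motomura et al.'s actual model or data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 120  # hypothetical team-seasons

df = pd.DataFrame({
    "picks_top3": rng.integers(0, 2, n),     # picks held in slots 1-3
    "picks_4_to_10": rng.integers(0, 3, n),  # picks held in slots 4-10
    "wins_now": rng.integers(15, 60, n),     # current-season wins
})
# Placeholder outcome; in the real study this is the change in wins three seasons later.
df["improvement"] = rng.normal(0, 10, n)

# Negative coefficients on the pick counts would correspond to the paper's
# finding that extra high picks are associated with less improvement.
model = smf.ols("improvement ~ picks_top3 + picks_4_to_10 + wins_now", data=df).fit()
print(model.params)
```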

What did they find? The punchline is clear: building through the draft is not a good idea. Based on the data, Motomura et al. conclude that “recent high draft picks do not help and often reduce improvement,” as teams with one additional draft pick between 4 and 10 can be expected to lose an additional 6 to 9 games three years later. Meanwhile, those teams lucky enough to have one of the first three picks should limit their expectations, as those picks tend to have “little or no impact” on team performance. The researchers are blunt: “Overall, having more picks in the Top 17 slots of the draft does not help and tends to be associated with less improvement.”

There are a few possible explanations for why the draft doesn’t rescue bad teams. The most likely source of failure is the sheer difficulty of selecting college players, even when you’re selecting first. (One study found that draft order predicts only about 5 percent of a player’s performance in the NBA.) For every Durant there are countless Greg Odens; Hinkie’s own draft record is a testament to the intrinsic uncertainty of picking professional athletes.

That said, some general managers appear to be far better at evaluating players. “While more and higher picks do not generally help teams, having better pickers does,” write the scientists. They find, for instance, that R.C. Buford, the GM of the Spurs, is worth an additional 23 to 29 wins per season. Compare that to the “Wins Over Replacement” generated by Stephen Curry, who has just finished one of the best regular season performances in NBA history. According to Basketball Reference, Curry was worth an additional 26.4 wins during the 2015-2016 regular season. If you believe these numbers, R.C. Buford is one of the most valuable (and underpaid) men in the NBA.

So it’s important to hire the best GM. But this new study also finds franchise effects that exist independently of the general manager, as certain organizations are simply more likely to squeeze wins from their draft picks. The researchers credit these franchise differences largely to player development, especially when it comes to “developing players who might not have been highly regarded entering the NBA.” This is proof that “winning cultures” are a real thing, and that a select few NBA teams are able to consistently instill the habits required to maximize the talent of their players. Draft picks are nice. Organizations win championships. And tanking is no way to build an organization.

In his manifesto, Hinkie writes at length about the importance of bringing the rigors of science to the uncertainties of sport: “If you’re not sure, test it,” Hinkie writes. “Measure it. Do it again. See if it repeats.” Although previous research by the sports economist Dave Berri has cast doubt on the effectiveness of tanking, this new paper should remind every basketball GM that the best way to win over the long term is to develop a culture that doesn’t try to lose.

Motomura, Akira, et al. “Does It Pay to Build Through the Draft in the National Basketball Association?” Journal of Sports Economics, March 2016.

Does Stress Cause Early Puberty?

The arrival of puberty is a bodily event influenced by psychological forces. The most potent of these forces is stress: decades of research have demonstrated that a stressful childhood accelerates reproductive development, at least as measured by menarche, or the first menstrual cycle. For instance, girls growing up with fathers who have a history of socially deviant behavior tend to undergo puberty a year earlier than those with more stable fathers, while girls who have been maltreated (primarily because of physical or sexual abuse) begin menarche before those who have not. One study even found that Finnish girls evacuated from their homeland during WWII – they had to endure the trauma of separation from their parents – reached puberty at a younger age and had more children than those who stayed behind. 

There’s a cold logic behind these correlations. When times are stressful, living things tend to devote more resources to reproductive development, as they want to increase the probability of passing on their genes before death. This leads to earlier puberty and reduced investment in developmental processes less directly related to sex and mating. If nothing else, the data is yet another reminder that early childhood stress has lasting effects, establishing developmental trajectories that are hard to undo.

But these unsettling findings leave many questions unanswered. For starters, what kind of stress is the most likely to speed up reproductive development? Scientists often divide early life stressors into two broad categories: harshness and unpredictability. Harshness is strongly related to a lack of money, and is typically measured by looking at how a family’s income relates to the federal poverty line.  Unpredictability, in contrast, is linked to factors such as the consistency of father figures inside the house and the number of changes in residence. Are both of these forms of stress equally important at triggering the onset of reproductive maturation? Or do they have different impacts on human development?

Another key question is how this stress can be buffered. If a child is going to endure a difficult beginning, then what is the best way to minimize the damage?

These questions get compelling answers in a new study by a team of researchers from four different universities. (The lead author is Sooyeon Sung at the University of Minnesota.) Their subjects were 492 females born in 1991 at ten different hospitals across the United States. Because these girls were part of a larger study led by the National Institute of Child Health and Human Development, Sung et al. were able to draw on a vast amount of relevant data, from a child’s attachment to her mother at 15 months to the fluctuating income of her family. These factors were then tested against the age of menarche, as the scientists attempted to figure out the psychological variables that determine the onset of puberty.

The first thing they found is that environmental harshness (but not unpredictability) predicts the timing of the first menstrual cycle. While this correlation is limited by the relatively small number of impoverished families in the sample, it does suggest that not all stress is created equal, at least when it comes to the acceleration of reproductive development. It’s also evidence that poverty itself is stressful, and that children raised in the poorest households are marked by their scarcities.

But the news isn’t all terrible. The most significant result to emerge from this new paper is that the effects of childhood stress on reproductive development can be minimized by a secure mother-daughter relationship. When the subjects were 15 months old, they were classified using the Strange Situation procedure, a task pioneered by Mary Ainsworth in the mid-1960s. The experiment is a carefully scripted melodrama, as a child is repeatedly separated from and reunited with his or her mother. The key variable is how the child responds to these reunions. Securely attached infants get upset when their mother leaves, but are excited by her return; they greet her with affectionate hugs and are quickly soothed. Insecure infants, on the other hand, are difficult to calm down, either because they feign indifference to their parent or because they react with anger when she comes back.

Countless studies have confirmed the power of these attachment categories: Securely attached infants get better grades in high school, have more satisfying marriages and are more likely to be sensitive parents to their own children, to cite just a few consistent findings. However, this new study shows that having a secure attachment can also dramatically minimize the developmental effects of stress and poverty, at least when measured by the onset of puberty. 
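A toy calculation can make the idea of “buffering” concrete. In statistical terms it is a moderation effect: harshness still predicts earlier menarche, but the slope is much flatter when attachment is secure. The numbers below are invented for illustration and are not Sung et al.’s estimates.

```python
# Hedged toy picture of statistical "buffering" (all numbers invented).
def predicted_menarche_age(harshness, securely_attached):
    """Toy linear model; harshness runs from 0 (none) to 1 (severe)."""
    baseline = 12.8                              # hypothetical mean age in years
    slope = -0.2 if securely_attached else -0.8  # buffering = a flatter slope
    return baseline + slope * harshness

for harshness in (0.0, 0.5, 1.0):
    print(
        f"harshness={harshness:.1f}  "
        f"secure={predicted_menarche_age(harshness, True):.2f}  "
        f"insecure={predicted_menarche_age(harshness, False):.2f}"
    )
```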

Love is easy to dismiss as a scientific variable. It’s an intangible feeling, a fiction invented by randy poets and medieval troubadours. How could love matter when life is sex and death and selfish genes?

And yet, even within the unsparing framework of evolution we can still measure the sweeping influence of love. For these children growing up in the harshest environments, the security of attachment is not just a source of pleasure. It's their shield.

Sung, Sooyeon, Jeffry A. Simpson, Vladas Griskevicius, Sally I-Chun Kuo, Gabriel L. Schlomer, and Jay Belsky. "Secure infant-mother attachment buffers the effect of early-life stress on age of menarche." Psychological Science (2016).

The Curious Robot

Curiosity is the strangest mental state. The mind usually craves certainty; being right feels nice; mystery is frustrating. But curiosity pushes back against these lesser wants, compelling us to seek out the unknown and unclear. To be curious is to feel the pleasure of learning, even when what we learn is that we’re wrong.

One of my favorite theories of curiosity is the so-called “information gap” model, first developed by George Loewenstein of Carnegie Mellon in the early 90s. According to Loewenstein, curiosity is what happens when we experience a gap “between what we know and what we want to know…It is the feeling of deprivation that results from an awareness of the gap.” As such, curiosity is a mostly aversive state, an intellectual itch begging to be scratched. It occurs when we know just enough to know how little we understand.

The abstract nature of curiosity – it’s a motivational state unlinked to any specific stimulus or reward – has made it difficult to study, especially in the lab. There is no test to measure curiosity, nor is there a way to assess its benefits in the real world. Curiosity seems important – “Curiosity is, in great and generous minds, the first passion and the last,” wrote Samuel Johnson – but at times this importance verges on the intangible.

Enter a new paper by the scientists Pierre-Yves Oudeyer and Linda Smith that explores curiosity in robots and its implications for human nature. The paper is based on a series of experiments led by Oudeyer, Frederic Kaplan and colleagues in which a pair of adorable quadruped machines – they look like dogs from the 22nd century – were set loose on an infant play mat. One of these robots is the “learner,” while the other is the “teacher.”

The learner robot begins with a set of “primitives,” or simple pre-programmed instincts. It can, for instance, turn its head, kick its legs and make sounds of various pitches. These primitives begin as just that: crude scripts of being, patterns of actions that are not very impressive. The robot looks about as useful as a newborn.

But these primitives have a magic trick: they are bootstrapped to a curious creature, as the robot has been programmed to seek out those experiences that are the most educational. Consider a simple leg movement. The robot begins by predicting what will happen after the movement. Will the toy move to the left? Will the teacher respond with a sound? Then, after the leg kick, the robot measures the gap between its predictions and reality. This feedback leads to a new set of predictions, which leads to another leg kick and another measurement of the gap. A shrinking gap is evidence of its learning.

Here’s where curiosity proves essential. As the scientists note, the robot is tuned to explore “activities where the estimated reward from learning progress is high,” where the gap between what it predicts and what actually happens decreases most quickly. Let’s say, for instance, that the robot has four possible activities to pursue, each with its own learning curve.

A robot driven by curiosity will avoid activity 4 - too easy, no improvement - and also activity 1, which is too hard. Instead, it will first focus on activity 3, as investing in that experience leads to a sharp drop in prediction errors. Once that curve starts to flatten - the robot has begun learning at a slower rate - it will shift to activity 2, as that activity now generates the biggest educational reward.
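Here is a toy sketch of that selection rule, a loose paraphrase of the idea rather than Oudeyer and Kaplan’s actual architecture: the agent tracks recent prediction errors for each activity and keeps choosing whichever activity’s error is falling fastest. The four invented learning curves mirror the four cases described above.

```python
# Hedged sketch of curiosity as "learning progress" (all curves invented).
import random

def activity_error(activity, practice):
    """Hypothetical prediction error after `practice` rounds of an activity."""
    curves = {
        1: lambda t: 0.9,                        # too hard: error never improves
        2: lambda t: max(0.2, 0.7 - 0.005 * t),  # improves slowly but steadily
        3: lambda t: max(0.1, 0.8 - 0.05 * t),   # improves quickly, then plateaus
        4: lambda t: 0.05,                       # too easy: nothing left to learn
    }
    return max(0.0, curves[activity](practice) + random.gauss(0, 0.01))

def learning_progress(history):
    """Recent drop in prediction error; +inf forces a few initial tries."""
    if len(history) < 4:
        return float("inf")
    older, recent = history[-4:-2], history[-2:]
    return sum(older) / 2 - sum(recent) / 2

practice = {a: 0 for a in (1, 2, 3, 4)}
errors = {a: [] for a in (1, 2, 3, 4)}

for _ in range(200):
    a = max(practice, key=lambda act: learning_progress(errors[act]))
    errors[a].append(activity_error(a, practice[a]))
    practice[a] += 1

# Most of the practice should land on activity 3 first, then shift to activity 2.
print(practice)
```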

This simple model of curiosity – it leads us to the biggest knowledge gaps that can be closed in the least amount of time - generates consistent patterns of development, at least among these robots. In Oudeyer's experiments, the curious machines typically followed the same sequence of pursuits. The first phase involved “unorganized body babbling,” which led to the exploration of each “motor primitive.” These primitives were then applied to the external environment, often with poor results: the robot might vocalize towards the elephant toy (which can’t talk back), or try to hit the teacher. The fourth phase featured more effective interactions, such as talking to the teacher robot (rather than hitting it), or grasping the elephant. “None of these specific objectives were pre-programmed,” write the scientists. “Instead, they self-organized through the dynamic interaction between curiosity-driven exploration, statistical inference, the properties of the body, and the properties of the environment.”

It’s an impressive achievement for a mindless machine. It’s also a clear demonstration of the power of curiosity, at least when unleashed in the right situation. As Oudeyer and Smith note, many species are locked in a brutal struggle to survive; they have to prioritize risk avoidance over unbridled interest. (Curiosity killed the cat and all that.) Humans, however, are “highly protected for a long period” in childhood, a condition of safety that allows us, at least in theory, to engage in reckless exploration of the world. Because we have such a secure beginning, our minds are free to enjoy learning with little consideration of its downside. Curiosity is the faith that education is all upside.*

The implication, of course, is that curiosity is a defining feature of human development, allowing us to develop “domain-specific” talents – speech, tool use, literacy, chess, etc. – that require huge investments of time and attention.* When it comes to complex skills, failure is often a prerequisite for success; we only learn how to get it right by getting it wrong again and again. Curiosity is what draws us to these useful errors. It’s the mental quirk that lets us enjoy the steepest learning curves, those moments when we become all too aware of the endless gaps in our knowledge. The point of curiosity is not to make those gaps disappear – it’s to help us realize they never will.

*A new paper by Christopher Hsee and Bowen Ruan in Psychological Science demonstrates that even curiosity can have negative consequences. Across a series of studies, they show that our "inherent desire" to resolve uncertainty can lead people to endure aversive stimuli, such as electric shocks, even when the curiosity comes with no apparent benefit. They refer to this as the Pandora Effect. I'd argue, however, that the occasional perversities of curiosity are far outweighed by the curse of being incurious, as that can lead to confirmation bias, overconfidence, filter bubbles and all sorts of errors with massive consequences, both at the individual and societal level.

Oudeyer, Pierre-Yves, and Linda B. Smith. "How evolution may work through curiosity-driven developmental process." Topics in Cognitive Science (2016).

Money, Pain, Death

Last December, the economists Anne Case and Angus Deaton published a paper in PNAS highlighting a disturbing trend: more middle-aged white Americans are dying. In particular, whites between the ages of 45 and 54 with a high school degree or less have seen their mortality rate increase by 134 people per 100,000 between 1999 and 2013. This increase exists in stark contrast to every other age and ethnic demographic group, both in America and other developed countries. In the 21st century, people are supposed to be living longer, not dying in the middle of life.

What’s going on? A subsequent statistical analysis by Andrew Gelman suggested that a significant part of the effect was due to the aging population, as there are now more people in the older half of the 45-54 cohort. (And older people are more likely to die.) However, even after this correction, much of the change in the mortality rate remains unexplained, as does the question of why the trend exists only in the United States.
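A toy calculation, with invented numbers rather than Gelman’s, shows why the age mix matters: if the death rate at every single age stays flat but the 45-54 group skews older, the group’s crude death rate still rises.

```python
# Hedged toy illustration of a compositional (age-mix) effect; numbers invented.
rate_by_age = {age: 300 + 25 * (age - 45) for age in range(45, 55)}  # deaths per 100k

def crude_rate(age_shares):
    """Crude death rate for the whole 45-54 group, given its age composition."""
    return sum(share * rate_by_age[age] for age, share in age_shares.items())

even_mix = {age: 0.10 for age in range(45, 55)}                         # ages spread evenly
older_mix = {age: 0.07 if age < 50 else 0.13 for age in range(45, 55)}  # group skews older

print(round(crude_rate(even_mix)))   # lower crude rate
print(round(crude_rate(older_mix)))  # higher, even though no age-specific rate changed
```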

To explain these rising death rates, Case and Deaton cite a number of potential causes, from a spike in suicides to the prevalence of obesity. However, their data reveal that the single biggest contributor was drug poisonings, which rose more than fourfold between 1999 and 2013. This tragic surge has an equally tragic explanation: in the late 1990s, powerful opioid painkillers became widely available, leading to a surge in prescriptions. In 1991, there were roughly 76 million prescriptions written for opioids in America. By 2013, there were nearly 207 million.

Here’s where the causal story gets murky, at least in the Case and Deaton paper. Nobody really knows why painkillers have become so much more popular. Are they simply a highly addictive scourge unleashed by Big Pharma? Or is the rise in opioid prescriptions triggered, at least in part, by a parallel rise in chronic physical pain? Case and Deaton suggest that it’s largely the latter, as their paper highlights the increase in reports of pain among middle-aged whites. “One in three white non-Hispanics aged 45–54 reported chronic joint pain,” write the economists, “one in five reported neck pain; and one in seven reported sciatica.” America is in the midst of a pain epidemic.

To review the proposed causal chain: more white people are dying because more white people are taking painkillers because more white people are experiencing severe pain. But this bleak narrative leads to the obvious question: what is causing all this pain?

That question, which has no easy answer, is the subject of a new paper in Psychological Science by Eileen Chou, Bidhan Parmar and Adam Galinsky. Their hypothesis is that our epidemic of pain is caused, at least in part, by rising levels of economic insecurity.

The paper begins with a revealing survey result. After getting data on 33,720 households spread across the United States, the scientists found that when both adults were unemployed, households spent 20 percent more on over-the-counter painkillers, such as Tylenol and Midol. A follow-up survey revealed that employment status was indeed correlated with reports of pain, and that inducing a feeling of economic hardship – the scientists asked people to recall a time when they felt financially insecure – nearly doubled the amount of pain people reported. In other words, the mere memory of money problems set their nerves on fire.

Why does economic insecurity increase our perception of physical pain? In a lab experiment, the scientists asked more than 100 undergraduates at the University of Virginia to plunge their hand into a bucket of 34 degree ice water for as long as it felt comfortable. Then, the students were randomly divided into two groups. The first group was the high-insecurity condition. They read a short text that highlighted their bleak economic prospects:

"Research conducted by Bureau of Labor Statistics reveals that more than 300,000 recent college grads are working minimum wage jobs, a figure that is twice as high as it was merely 10 years ago. Certain college grads bear more of the burden than others. In particular, students who do not graduate from top 10 national universities (e.g., Princeton and Harvard) fare significantly worse than those who do".

The students were then reminded that the University of Virginia was the 23rd best college in the United States, at least according to US News & World Report.

In contrast, those students assigned to the low insecurity condition were given good news:

"Certain college grads are shield [sic] from the economic turmoil more than others. In particular, students who graduate from top 10 public universities (e.g., UC Berkeley and UVA) fare significantly better on the job market than those who do not. These college grads have a much easier time finding jobs."

These students were reminded that the University of Virginia was the second highest ranked public university.

After this intervention, all of the students were taken back to the ice bucket station. Once again, they were asked to keep their hand in the cold water for as long as it felt comfortable. As predicted, those primed to feel economically insecure showed much lower levels of pain tolerance.

The scientists speculate that the mediating variable between economic insecurity and physical pain is a lack of control. When people feel stressed about money, they feel less in control of their lives, and that lack of control exacerbates their perception of pain. The ice water feels colder, their nerves more sensitive to the sting.

In Case and Deaton's paper on the rising death rates of white Americans, the economists note that less educated whites have been hit hard by recent economic trends. “With widening income inequality, many of the baby-boom generation are the first to find, in midlife, that they will not be better off than were their parents,” they write. Job prospects are bleak; debt levels are high; median income has fallen by 4 percent for the middle class over the last 15 years.

The power of this new paper by Chou et al. is that it shows the human impact of these facts. When we feel buffeted by forces beyond our control – by global shifts involving the rise of automation and the growth of Chinese manufacturing and the decline of the American middle class – we are more likely to experience aches we can’t escape. As the scientists point out, the end result is a downward spiral, as economic insecurity causes physical pain, which makes it harder for people to work, which leads to even more pain.

It shouldn’t be a surprise, then, that dangerous painkillers become such a tempting way out. Side effects include death.

Chou, E. Y., B. L. Parmar, and A. D. Galinsky. "Economic Insecurity Increases Physical Pain." Psychological Science (2016).

 

The Fastest Way To Learn

Practice makes perfect: One of those clichés that gets endlessly trotted out, told to children at the piano and point guards shooting from behind the arc. It applies to multiplication tables and stickshifts, sex and writing. And the line is true, even if it overpromises. Perfection might be impossible, but practice is the only way to get close.

Unfortunately, the cliche is limited by its imprecision. What kind of practice makes perfect? And what aspects of practice are most valuable? Is it the repetition? The time? The focus? Given the burden of practice – it’s rarely much fun – knowing what works is useful knowledge, since it comes with the promise of learning faster. To invoke another cliché: Less pain, more gain.

These practical questions are the subject of a new paper by Nicholas Wymbs, Amy Bastian and Pablo Celnik in Current Biology that investigates the best ways to practice a motor skill. In the experiment, the scientists had subjects play a simple computer game featuring an isometric pinch task. Basically, subjects had to squeeze a small device that translated the amount of force they applied into cursor movements. The goal of the game was to move the cursor to specific windows on the screen.

The scientists divided their subjects into three main groups. The first group practiced the isometric task and then, six hours later, repeated the exact same lesson. The second group practiced the task but then, when called back six hours later, completed a slightly different version of the training, as the scientists required varying amounts of force to move the cursor. (The variations were so minor that subjects didn’t even notice them.) The last group only performed a single practice session. There was no follow-up six hours later.

The next day, all three groups returned to the lab for another training session. Their performance on the task was also measured. How accurate were their squeezes? How effectively were they able to control the cursor?

At first glance, the extra variability might seem counterproductive. Motor learning, after all, is supposed to be about the rote memorization of muscles, as the brain learns how to execute the exact same plan again and again. (As the scientists write, “motor learning is commonly described as a reduction of variability.”) It doesn’t matter if we’re talking about free throws or a Bach fugue – it’s all about mindless consistency, reinforcing the skill until it’s a robotic script.

However, the scientists found that making practice less predictable came with big benefits. When subjects were given a second training session requiring variable amounts of force, they showed gains in performance nearly twice as large as those who practiced for the same amount of time but always did the same thing. (Not surprisingly, the group given less practice time performed significantly worse.) In other words, a little inconsistency in practice led people to perform much more effectively when they returned to the original task.

This same technique – forcing people to make small alterations during practice - can be easily extended to all sorts of other motor activities. Perhaps it means shooting a basketball of a slightly different size, or doctoring the weight of a baseball bat, or adjusting the tension of tennis racquet strings. According to the scientists, these seemingly insignificant changes should accelerate your education, wringing more learning from every minute of training.

Why does variability enhance practice? The scientists credit a phenomenon known as memory reconsolidation. Ever since the pioneering work of Karim Nader and colleagues, it’s become clear that the act of recall is not a passive process. Rather, remembering changes the memory itself, as the original source file is revised every time it’s recalled. Such a mechanism has its curses – for one thing, it makes our memories highly unreliable, as they never stay the same – but it also ensures that all those synaptic files get updated in light of the latest events. The brain isn’t interested in useless precision; it wants the most useful version of the world, even if that utility comes at the expense of verisimilitude. It’s pragmatism all the way down.

While reconsolidation theory is already being used to help treat patients with PTSD and traumatic memories – the terrible past can always be rewritten – this current study extends the promise of reconsolidation to complex motor skills. In short, the scientists show that training people on a physical task, and then giving them subtle variations on that task after it has been recalled, can strengthen the original memory trace.  Because subjects were forced to rapidly adjust their “motor control policy” to achieve the same goals, their brains seamlessly incorporated these new lessons into the old motor skill. The practice felt the same, but what they’d learned had changed: they were now that much closer to perfect.

Wymbs, Nicholas F., Amy J. Bastian, and Pablo A. Celnik. "Motor Skills Are Strengthened through Reconsolidation." Current Biology 26.3 (2016): 338-343.

 

The Psychology of 'Making A Murderer'

Roughly ten hours into Making a Murderer, a Netflix documentary about the murder trial of Steven Avery, his defense lawyer Dean Strang delivers the basic thesis of the show:

“The forces that caused that [the conviction of Brendan Dassey and Steven Avery]…I don’t think they are driven by malice, they’re just expressions of ordinary human failing. But the consequences are what are so sad and awful.”

Strang then goes on to elaborate on these “ordinary human failing[s]”:

“Most of what ails our criminal justice system lies in unwarranted certitude among police officers and prosecutors and defense lawyers and judges and jurors that they’re getting it right, that they simply are right. Just a tragic lack of humility of everyone who participates in our criminal justice system.”

Strang is making a psychological diagnosis. He is arguing that at the root of injustice is a cognitive error, an “unwarranted certitude” that our version is the truth, the whole truth, and nothing but the truth. In the Avery case, this certitude is most relevant when it comes to forensic evidence, as his lawyers (Dean Strang and Jerry Buting) argue that the police planted keys and blood to ensure a conviction. And then, after the evidence was discovered, Strang and Buting insist that forensic scientists working for the state distorted their analysis to fit the beliefs of the prosecution. Because they needed to find the victim’s DNA on a bullet in Avery’s garage – that was the best way to connect him to the crime – the scientists bent protocol and procedure to make a positive match.

Regardless of how you feel about the details of the Avery case, or even about the narrative techniques of Making A Murderer, the documentary raises important questions about the limitations of forensics. As such, it’s a useful antidote to all those omniscient detectives in the CSI pantheon, solving crimes with threads of hair and fragments of fingerprints. In real life, the evidence is usually imperfect and incomplete. In real life, our judgments are marred by emotions, mental short-cuts and the desire to be right. What we see is through a glass darkly.

One of the scientists who has done the most to illuminate the potential flaws of forensic science is Itiel Dror, a cognitive psychologist at University College London. Consider an experiment conducted by Dror that featured five fingerprint experts with more than ten years of experience working in the field. Dror asked these experts to examine a set of prints from Brandon Mayfield, an American attorney who’d been falsely accused of being involved with the Madrid terror attacks. The experts were instructed to assess the accuracy of the FBI’s final analysis, which concluded that Mayfield's prints were not a match. (The failures of forensic science in the Mayfield case led to a searing 2009 report from the National Academy of Sciences. I wrote about Mayfield and forensics here.)

Dror was playing a trick. In reality, each set of prints was from one of the experts’ past cases, and had been successfully matched to a suspect. Nevertheless, Dror found that the new context – telling the forensic analysts that the prints came from the exonerated Mayfield – strongly influenced their judgment, as four out of five now concluded that there was insufficient evidence to link the prints. While Dror was careful to note that his data did “not necessarily indicate basic flaws” in the science of fingerprint identification – those ridges of skin remain a valid way to link suspects to a crime scene – he did question the reliability of forensic analysis, especially when the evidence gathered from the field is ambiguous.

Similar results have emerged from other experiments. When Dror gave forensic analysts more typical stories about fingerprints they’d already reviewed, such as informing them that a suspect had already confessed, the new stories were able to get two-thirds of analysts to reverse their previous conclusions at least once. In an email, Dror noted that the FBI has replicated this basic finding, showing that in roughly 10 percent of cases examiners reverse their findings even when given the exact same prints. They are consistently inconsistent.

If the flaws of forensics were limited to fingerprints and other forms of evidence requiring visual interpretation, such as bite marks and hair samples, that would still be extremely worrying. (Fingerprints have been a crucial police tool since the French detective Alphonse Bertillon used a bloody print left behind on a pane of glass to secure a murder conviction in 1902.) But Dror and colleagues have shown that these same basic failings can even afflict the gold-standard of forensic evidence: DNA.

The experiment went like this: Dror and Greg Hampikian presented DNA evidence from a 2002 Georgia gang rape case to 17 professional DNA examiners working in an accredited government lab. Although the suspect in question (Kerry Robinson) had pleaded not guilty, the forensic analysts in the original case concluded that he could not be excluded based on the genetic data. This testimony, write the scientists, was “critical to the prosecution.”

But was it the best interpretation of the evidence? After all, the DNA gathered from the rape victim was part of a genetic mixture, containing samples from multiple individuals. In such instances, the genetics become increasingly complicated and unclear, making forensic analysts more likely to be swayed by their presumptions and prejudices. And because crime labs are typically part of a police department, these biases are almost always tilted in the direction of the prosecution. For instance, in the Avery case, the analyst who identified the victim’s DNA on a bullet fragment had been explicitly instructed by a detective to find evidence that the victim had been “in his house or his garage.”

To explore the impact of this potentially biasing information, Dror and Hampikian sent the DNA evidence from the Georgia gang rape case to additional examiners. The only difference was that these forensic scientists did the analysis blind - they weren’t told about the grisly crime, or the corroborating testimony, or the prior criminal history of the defendants. Of these 17 additional experts, only one concurred with the original conclusion. Twelve directly contradicted the finding presented during the trial – they said Robinson could be excluded - and four said the sample itself was insufficient.

These inconsistencies are not an indictment of DNA evidence. Genetic data remains, by far, the most reliable form of forensic proof. And yet, when the sample contains biological material from multiple individuals, or when it’s so degraded that it cannot be easily sequenced, or when low numbers of template molecules are amplified, the visual readout provided by the DNA processing software must be actively interpreted by the forensic scientists. They are no longer passive observers – they have become the instrument of analysis, forced to fill in the blanks and make sense of what they see. And that’s when things can go astray.

The errors of forensic analysts can have tragic consequences. In Convicting the Innocent, Brandon Garrett’s investigation of more than 150 wrongful convictions, he found that “in 61 percent of the trials where a forensic analyst testified for the prosecution, the analyst gave invalid testimony.” While these mistakes occurred most frequently with less reliable forms of forensic evidence, such as hair samples, 17 percent of cases involving DNA testing also featured misleading or incorrect evidence. “All of this invalid testimony had something in common,” Garrett writes. “All of it made the forensic evidence seem like stronger evidence of guilt than it really was.”

So what can be done? In a recent article in the Journal of Applied Research in Memory and Cognition, Saul Kassin, Itiel Dror and Jeff Kukucka propose several simple ways to improve the reliability of forensic evidence. While their suggestions might seem obvious, they would represent a radical overhaul of typical forensic procedure. Here are the psychologists’ top five recommendations:

1) Forensic examiners should work in a linear fashion, analyzing the evidence (and documenting their analysis) before they compare it to the evidence taken from the target/suspect. If their initial analysis is later revised, the revisions should be documented and justified.

2) Whenever possible, forensic analysts should be shielded from potentially biasing contextual information from the police and prosecution. Here are the psychologists: “We recommend, as much as possible, that forensic examiners be isolated from undue influences such as direct contact with the investigating officer, the victims and their families, and other irrelevant information—such as whether the suspect had confessed.”

3) When attempting to match evidence from the field to that taken from a target/suspect, forensic analysts should be given multiple samples to test, and not just a single sample taken from the suspect. This recommendation is analogous to the eyewitness lineup, in which eyewitnesses are asked to identify a suspect among a pool of six other individuals. Previous research looking at the use of an "evidence lineup" with hair samples found that introducing additional samples reduced the false positive error rate from 30.4 percent to 3.8 percent.  

4) When a second forensic examiner is asked to verify a judgment, the verification should be done blindly. The “verifier” should not be told about the initial conclusion or given the identity of the first examiner.

5) Forensic training should also include lessons in basic psychology relevant to forensic work. Examiners should be introduced to the principles of perception (the mind is not a camera), judgment and decision-making (we are vulnerable to a long list of biases and foibles) and social influence (it’s potent).

The good news is that change is occurring, albeit at a slow pace. Many major police forces – including the NYPD, SFPD, and FBI – have started introducing these psychological concepts to their forensic examiners. In addition, leading forensic organizations, such as the US National Commission on Forensic Science, have endorsed Dror’s work and recommendations.

But fixing the practice of forensics isn’t enough: Kassin, Dror and Kukucka also recommend changes to the way scientific evidence is treated in the courtroom. “We believe it is important that legal decision makers be educated with regard to the procedures by which forensic examiners reached their conclusions and the information that was available to them at that time,” they write. The psychologists also call for a reconsideration of the “harmless error doctrine,” which holds that trial errors can be tolerated provided they aren’t sufficient to reverse the guilty verdict. Kassin, Dror and Kukucka point out that this doctrine assumes that all evidence is analyzed independently. Unfortunately, such independence is often compromised, as a false confession or other erroneous “facts” can easily influence the forensic analysis. (This is a possible issue in the Avery case, as Brendan Dassey’s confession – which contains clearly false elements and was elicited using very troubling police techniques – might have tainted conclusions about the other evidence. I've written about the science of false confessions here.) And so error begets error; our beliefs become a kind of blindness.

It’s important to stress that, in most instances, these failures of forensics don't require intentionality. When Strang observes that injustice is not necessarily driven by malice, he's pointing out all the sly and subtle ways that the mind can trick itself, slouching towards deceit while convinced it's pursuing the truth. These failures are part of life, a basic feature of human nature, but when they occur in the courtroom the stakes are too great to ignore. One man’s slip can take away another man’s freedom.

Dror, Itiel E., David Charlton, and Ailsa E. Péron. "Contextual information renders experts vulnerable to making erroneous identifications." Forensic Science International 156.1 (2006): 74-78.

Ulery, Bradford T., et al. "Repeatability and reproducibility of decisions by latent fingerprint examiners." PLoS ONE 7.3 (2012): e32800.

Dror, Itiel E., and Greg Hampikian. "Subjectivity and bias in forensic DNA mixture interpretation." Science & Justice 51.4 (2011): 204-208.

Kassin, Saul M., Itiel E. Dror, and Jeff Kukucka. "The forensic confirmation bias: Problems, perspectives, and proposed solutions." Journal of Applied Research in Memory and Cognition 2.1 (2013): 42-52.

The Danger of Safety Equipment

My car is a safety braggart. When I glance at the dashboard, there’s a cluster of glowing orange lights, reminding me of all the smart technology designed to save me from my stupid mistakes. Airbags, check. Anti-lock brakes, check. Traction control, check. Collision Alert system, check.

It’s a comforting sight. It might also be a dangerous one. In fact, if you follow the science, all of these safety reminders could turn me into a more dangerous driver. This is known as the risk compensation effect, and it refers to the fact that people tend to take increased risks when using protective equipment. It’s been found among bicycle riders (people go faster when wearing helmets), taxi drivers and children running an obstacle course (safety gear leads kids to run more “recklessly.”) It’s why football players probably hit harder when playing with helmets and the fatality rate for skydivers has remained constant, despite significant improvements in safety equipment. (When given better parachute technology, people tend to open their parachutes closer to the ground, leading to a sharp increase in landing deaths.) It’s why improved treatments for HIV can lead to riskier sexual behaviors, why childproof aspirin caps don’t reduce poisoning rates (parents are more likely to leave the caps off bottles) and why countries with mandatory seat belt laws shift the risk from drivers to pedestrians and cyclists. As John Adams, professor of geography at University College London, notes, “Protecting car occupants from the consequences of bad driving encourages bad driving.”

However, despite this surfeit of field data, the precise psychological mechanisms of risk compensation remain unclear. One of the lingering mysteries involves the narrowness of the effect. For instance, when people drive a car loaded with safety equipment, it’s clear that they often drive faster. But are they also more likely to ignore parking regulations? Similarly, a football player wearing an advanced helmet is probably more likely to deliver a dangerous hit with their head. But are they also more willing to commit a penalty? Safety equipment makes us take risks, but what kind of risks?

To explore this mystery, the psychologists Tim Gamble and Ian Walker at the University of Bath came up with a very clever experimental design. They recruited 80 subjects to play a computer game in which they had to inflate an animated balloon until it burst. The bigger the balloon, the bigger the payout, but every additional pump came with a risk: the balloon could pop, and then the player would get nothing.
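The incentive structure of the game is worth spelling out, since it’s what makes the task a clean measure of risk appetite: each pump adds a fixed amount to the potential payout while multiplying down the odds of keeping any of it. Here’s a minimal sketch of that tradeoff; the pop probability and cents-per-pump values are made-up illustrations, not the parameters Gamble and Walker actually used:

```python
def expected_payout(pumps, pop_prob=0.05, cents_per_pump=5):
    """Expected earnings if the player banks the balloon after `pumps` inflations.

    pop_prob and cents_per_pump are illustrative guesses, not the settings
    from the Gamble and Walker task.
    """
    survives_all = (1 - pop_prob) ** pumps  # the balloon must survive every single pump
    return survives_all * pumps * cents_per_pump

# Payout grows linearly with pumps while the survival odds shrink geometrically,
# so expected value peaks at an intermediate number of pumps; risk-takers keep
# pumping well past that peak.
best = max(range(1, 101), key=expected_payout)
print(best, round(expected_payout(best), 2))
```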

Here’s the twist: Before the subjects played the game, they were given one of two pieces of headgear to wear. Some were given a baseball hat, while others were given a bicycle helmet. They were told that the gear was a necessary part of the study, since the scientists had to track their eye movements. You can see the equipment below:

In reality, the headgear was a test of risk compensation. Gamble and Walker wanted to know how wearing a bike helmet, as opposed to a baseball hat, influenced risk-taking behavior on a totally unrelated task. (Obviously, a bike helmet won’t protect you from an exploding balloon on a computer screen.) Sure enough, those subjects randomly assigned to wear the helmet inflated the balloon to a much greater extent, receiving risk-taking scores that were roughly 30 percent higher. They also were more likely to admit to various forms of “sensation-seeking,” such as saying they “wish they could be a mountain climber,” or that they “enjoy the company of real ‘swingers.’” In short, the mere act of wearing a helmet that provided no actual protection still led people to act as if they were protected from all sorts of risks.

This lab research has practical implications. If using safety gear induces a general increase in risky behavior - and not just behavior directly linked to the equipment - then it might also lead to unanticipated dangers for which we are ill prepared. “This is not to suggest that the safety equipment will necessarily have its specific utility nullified,” write Gamble and Walker, “but rather that there could be changes in behavior wider than previously envisaged.” If anti-lock brakes lead us to drive faster in the rain, that’s too bad, but at least it’s a danger the technology is designed to mitigate. However, if the presence of the safety equipment also makes us more likely to text on the phone, then it might be responsible for a net reduction in overall safety, at least in some cases. Anti-lock brakes are no match for a distracted driver.

This doesn’t mean we're better off without air bags or behind the wheel of a Ford Pinto. But perhaps we should think of ways to lessen the salience of our safety gear. (At the very least, we should get rid of all those indicators on the dashboard.) Given the risk compensation effect, the safest car just might be the one that never tells you how safe it really is.

Gamble, Tim, and Ian Walker. "Wearing a Bicycle Helmet Can Increase Risk Taking and Sensation Seeking in Adults." Psychological Science (2016).

Do Genes Predict Intelligence? In America, It Depends on Your Class

There’s a longstanding academic debate about the genetics of intelligence. On the one side is the “hereditarian” camp, which cites a vast amount of research showing a strong link between genes and intelligence. This group can point to persuasive twin studies showing that, by the time children are 17 years old, their genetics explain approximately 66 percent of the variation in intelligence. To the extent we can measure smarts, what we measure is a factor largely dictated by the double helices in our cells.
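Where does a figure like 66 percent come from? A rough back-of-the-envelope method is Falconer’s formula, which doubles the gap between the test-score correlations of identical and fraternal twin pairs (the studies cited here rely on more sophisticated models, and the example correlations below are purely illustrative rather than figures from those papers):

$$ h^2 \approx 2\,(r_{MZ} - r_{DZ}) $$

If identical twins’ scores correlated at 0.86 and fraternal twins’ at 0.53, the heritability estimate would be 2 × (0.86 − 0.53) = 0.66, or 66 percent of the variance.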

On the other side is the “sociological” camp. These scientists tend to view differences in intelligence as primarily rooted in environmental factors, whether it’s the number of books in the home or the quality of the classroom. They cite research showing that many children suffering from severe IQ deficits can recover when placed in more enriching environments. Their genes haven’t changed, but their cognitive scores have soared.

These seem like contradictory positions, irreconcilable descriptions of the mind. However, when science provides evidence of two opposing theories, it’s usually a sign that something more subtle is going on. And this leads us to the Scarr-Rowe hypothesis, an idea developed by Sandra Scarr in the early 1970s and replicated by David Rowe in 1999. It’s a simple conjecture, at least in outline: according to the Scarr-Rowe hypothesis, the influence of genetics on intelligence depends on the socioeconomic status of the child. In particular, the genetic influence is suppressed in conditions of privation – say, a stressed home without a lot of books – and enhanced in conditions of enrichment. These differences have a tragic cause: when children grow up in poor environments, they are unable to reach their full genetic potential. The lack of nurture holds back their nature.

You can see this relationship in the chart below. As socioeconomic status increases on the x-axis, the amount of variance in cognitive-test performance explained by genes nearly triples. Meanwhile, nurture generates diminishing returns. Although upper class parents tend to fret over the details of their parenting — Is it better to play the piano or the violin? Should I be a Tiger Mom or imitate those chill Parisian parents?— these details of enrichment become increasingly insignificant. Their children are ultimately held back by their genetics.

It’s a compelling theory, with significant empirical support. However, a number of studies have failed to replicate the Scarr-Rowe hypothesis, including a 2012 paper that looked at 8716 pairs of twins in the United Kingdom. This inconsistency has two possible explanations. The first is that the Scarr-Rowe hypothesis is false, a by-product of underpowered studies and publication bias. The second possibility, however, is that different societies might vary in how socioeconomic status interacts with genetics. In particular, places with a more generous social welfare system – and an educational system less stratified by income - might show less support for the Scarr-Rowe hypothesis, since their poor children are less likely to be cognitively limited by their environment.

These cross-country differences are the subject of a new meta-analysis in Psychological Science by Elliot Tucker-Drob and Timothy Bates. In total, the scientists looked at 14 studies drawn from nearly 25,000 pairs of twins and siblings, split rather evenly between the United States and other developed countries in Western Europe and Australia. The goal of their study was threefold: 1) measure the power of the Scarr-Rowe hypothesis in the United States 2) measure the power of the Scarr-Rowe hypothesis outside of the United States, in countries with stronger social-welfare systems and 3) compare these measurements.

The results should depress every American: we are the great bastion of socioeconomic inequality, the only rich country where many poor children grow up in conditions so stifling they fail to reach their full genetic potential. The economic numbers echo this inequality, showing how these differences in opportunity persist over time. Although America likes to celebrate its upward mobility, the income numbers suggest that such mobility is mostly a myth, with only 4 percent of people born into the bottom quintile moving into the top quintile as adults. As Michael Harrington wrote in 1962, “The real explanation of why the poor are where they are is that they made the mistake of being born to the wrong parents.” 

Life isn’t fair. Some children will be born into poor households. Some children will inherit genes that make it harder for them to succeed. Nevertheless, we have a duty to ensure that every child has a chance to learn what his or her brain is capable of. We should be ashamed that, in 21st century America, the effects of inequality are so pervasive that people on different ends of the socioeconomic spectrum have minds shaped by fundamentally different forces. Rich kids are shaped by the genes they have. Poor kids are shaped by the support they lack.

Tucker-Drob, Elliot and Bates, Timothy. “Large cross-national differences in gene x socioeconomic status interaction on intelligence,” Psychological Science. 2015.        

The Louis-Schmeling Paradox


Why do we go to sporting events?

The reasons to stay home are obvious. Here’s my list, in mostly random order: a beer costs $12, the view is better from my couch, die-hard fans can be scary, the price of parking, post-game traffic.

That’s a pretty persuasive list. And yet, as I stare into my high-resolution television, I still find myself hankering for the live event, jealous of all those people eating bad nachos in the bleachers, or struggling to see the basketball from the last row. It’s an irrational desire - I realize I should stay home, save money, avoid the hassle – but I still want to be there, at the game, complaining about the cost of beer.

In a classic 1964 paper, “The Peculiar Economics of Professional Sports,” Walter Neale came up with an elegant explanation for the allure of live sporting events. He began his discussion with what he called the Louis-Schmeling Paradox, after the epic pair of fights between heavyweights Joe Louis and Max Schmeling. (Louis lost the first fight, but won the second.) According to Neale, the boxers perfectly illustrate the “peculiar economics” of sports. Although normal business firms seek out monopolies – they want to minimize competition and maximize profits – such a situation would be disastrous for a heavyweight fighter. If Joe Louis had a boxing monopoly, then he’d have “no one to fight and therefore no income,” for “doubt about the competition is what arouses interest.” Louis needed a Schmeling, the Lakers needed the Celtics and the Patriots benefit from a healthy Peyton Manning. It’s the uncertainty that’s entertaining.

Professional sports leagues closely follow Neale’s advice. They construct elaborate structures to smooth out the differences between teams, instituting salary caps, revenue sharing and lottery-style drafts. The goal is to make every game a roughly equal match, just like a Louis-Schmeling fight. Because sports monopolies are bad for business, Neale writes that the secret prayer of every team owner should be: “Oh Lord, make us good, but not that good.”

It’s an alluring theory. It’s also just that: a theory, devoid of proof. Apart from a few scattered anecdotes – when the San Diego Chargers ran roughshod over the AFL in 1961, “fans stayed away” – Neale’s paper is all conjecture.

Enter a new study by the economists Brad Humphreys and Li Zhou, which puts the Louis-Schmeling paradox to the empirical test. Humphreys and Zhou decided to delve into the actual numbers, looking at the relationship between league competition, team performance and game attendance. Their data was drawn from the home games of every Major League Baseball team between 2006 and 2010, as they sought to identify the variables that actually made people want to buy expensive tickets and overpay for crappy food.

What did they find? In “Peculiar Economics,” Neale made a clear prediction: “The closer the standings, and within any range of standings, the more frequently the standings change, the larger will be the gate receipts.” (Neale called this the “League Standing Effect,” arguing that the flux of brute competition was a “kind of advertising.”) However, Humphreys and Zhou reject this hypothesis, as they find that changes in the standings, and the overall closeness of team win percentages, have absolutely no impact on game attendance. Uncertainty is overrated.

But the study isn’t all null results. After looking at more than 12,000 baseball games, Humphreys and Zhou found that two variables were particularly important in determining attendance. The first variable was win preference, which isn’t exactly shocking: fans are more likely to attend games in which the home team is more likely to win. If we’re going to invest time and money in a live performance, then we want the investment to pay off; we don’t want to be stuck in post-game traffic after a defeat, thinking ruefully of all the better ways we could have spent our cash.

The second variable driving ticket sales is loss aversion, an emotional quirk of the mind in which losses hurt more than gains feel good. According to Humphreys and Zhou, loss aversion compounds the pain of a team’s defeat, especially when we expected a win. This suggests that the impact of an upset is asymmetric, with surprising losses packing a far greater emotional punch than surprising wins. The end result is that the pursuit of competitive balance – a league in which upsets are common – is ultimately a losing proposition for teams trying to sell more tickets. Instead of seeking out parity, greedy owners should focus on avoiding home losses, since losing at home is what tends to keep fans away from future games.*
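To make the asymmetry concrete, loss aversion is usually written as a kinked value function; the version below is the generic prospect-theory form, offered as an illustration rather than the specification Humphreys and Zhou actually estimate:

$$ v(x) = \begin{cases} x, & x \ge 0 \\ \lambda x, & x < 0 \end{cases} \qquad \lambda > 1 $$

With a loss-aversion coefficient λ of around 2 – a typical estimate in the behavioral economics literature – an unexpected home defeat subtracts roughly twice as much fan utility as an equally unexpected win adds, which is why avoiding losses matters more to the box office than manufacturing coin-flip games.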

And so a familiar tension is revealed in the world of sports. On the one hand, there are the collective benefits of equality, which is why sports leagues aggressively redistribute wealth and draft picks. (The NFL is a bastion of socialism.) However, the individual team owners have a much narrower set of interests – they just want to win, especially at home, because that's what sells tickets.

The fans are stuck somewhere in between. While Neale might have been mistaken about the short-term motives of attendance – we want Louis to knock the shit out of Schmeling, not witness a close boxing match – he was almost certainly correct about the long-term impact of a league with a severe competitive imbalance. (It’s exciting when the Warriors are 23-0; it’s a travesty if they go undefeated for an entire season.) Sports fans might not be drawn to uncertainty, but they sure as hell need hope. Just ask those poor folks packed into Wrigley Field.

*Baseball owners should also invest in pitching: teams that give up more runs at home also exhibit lower attendance.

Neale, Walter C. "The peculiar economics of professional sports: A contribution to the theory of the firm in sporting competition and in market competition." The Quarterly Journal of Economics (1964): 1-14.

Humphreys, Brad, and Li Zhou. "The Louis-Schmelling Paradox and the League Standing Effect Reconsidered." Journal of Sports Economics 16 (2015): 835-852.

When Should Children Start Kindergarten?

One of the fundamental challenges of parenting is that the practice is also the performance; childcare is all about learning on the job. The baby is born, a lump of need, and we’re expected to keep her warm, nourished and free of diaper rash. (Happy, too.) A few random instincts kick in, but mostly we just muddle our way through, stumbling from nap to nap, meal to meal. Or at least that’s how it feels to me.

Given the steep learning curve of parenting, it’s not surprising that many of us yearn for the reassurance of science. I want my sleep training to have an empirical basis; I’m a sucker for fatty acids and probiotics and the latest overhyped ingredient; my bookshelf groans with tomes on the emotionally intelligent toddler.

Occasionally, I find a little clarity in the research. The science has taught me about the power of emotional control and the importance of secure attachments. But mostly I find that the studies complicate and unsettle, leaving me questioning choices that, only a generation or two ago, were barely even a consideration. I’m searching for answers. I end up with anxiety.

Consider the kindergartner. Once upon a time, a child started kindergarten whenever they were old enough to make the age cutoff, which was usually after they turned five. (Different states had slightly different requirements.) However, over the last decade roughly 20 percent of children have been held back from formal schooling until the age of six, a process known as “redshirting.” The numbers are even higher for children in “socioeconomically advantaged families.”

What’s behind the redshirting trend? There are many causes, but one of the main factors has been research suggesting that delaying a child’s entry into a competitive process offers lasting benefits. Most famously, researchers have demonstrated that Canadian hockey players and European soccer stars are far more likely to have birthdays at the beginning of the year. The explanation is straightforward: because these redshirted children are slightly older than their peers, they get more playing time and better coaching. Over time, this creates a feedback loop of success.

However, the data has been much more muddled when it comes to the classroom. Athletes might benefit from a later start date, but the case for kindergartners isn’t nearly as clear. One study of Norwegian students concluded that the academic benefits were a statistical illusion: older children score slightly higher on various tests because they’re older, not because they entered kindergarten at a later date. Other studies have found associations between delayed kindergarten and educational attainment – starting school later makes us stay in school longer – but no correlation with lifetime earnings. To make matters even more complicated, starting school late seems to have adverse consequences for boys from poorer households, who are more likely to drop out of high school once they reach the legal age of school exit.

Are you confused? Me too, and I’ve got a kid on the cusp of kindergarten. To help settle this debate, Thomas Dee of Stanford University and Hans Henrik Sievertsen at the Danish National Centre for Social Research decided to study Danish schoolchildren, for two reasons: 1) the country had high-quality longitudinal data on the mental health of its students and 2) children in Denmark are supposed to begin formal schooling in the calendar year in which they turn six. This rule allowed Dee and Sievertsen to compare children born at the start of January with children born just a few days earlier in December. Although these kids are essentially the same age, they ended up in different grades, making them an ideal population for studying the impact of a delayed start to school.

After comparing these two groups of students, Dee and Sievertsen found a surprisingly large difference in their mental health. According to the scientists, children who were older when they started kindergarten – they fell on the January side of the calendar – displayed significant improvements in mental health, at both ages 7 and 11. In particular, the late starters showed much lower levels of inattention and hyperactivity, with a one-year delay leading to a 73 percent decrease in reported problems.

As the scientists note, these results jibe with a large body of research in developmental psychology suggesting that children benefit from an extended period of play and unstructured learning. When a child is busy pretending – when they turn a banana into a phone or a rock into a spaceship – they are practicing crucial mental skills. They are learning how to lose themselves in an activity and sustain their own interest. They are discovering the power of emotion and the tricks of emotional control. “To become mature,” Nietzsche once said, “is to recover that sense of seriousness which one had as a child at play.” But it takes time to develop that seriousness; the imagination cannot be rushed.

Does this mean I should hold back my daughter? Is it always better to start kindergarten at a later date? Probably not. Dee and Sievertsen are careful to note that the benefits of a later start to school were distributed unevenly among the Danish children. As a result, the scientists emphasize the importance of taking the individual child into account when making decisions about when to start school. Where is he on the developmental spectrum? Has she had a chance to develop her play skills? What is the alternative to kindergarten? As Dee noted in The Guardian, “the benefits of delays are unlikely to exist for children in preschools that lack the resources to provide well-trained staff and a developmentally rich environment.”

And so we’re left with the usual uncertainty. The data is compelling in aggregate – more years of play leads to better attention skills – but every child is an n of 1, a potential exception to the rule. (Parents are also forced to juggle more mundane concerns, like money; not every family can afford the luxury of redshirting.) The public policy implications are equally complicated. Starting kindergarten at the age of six might reduce attention problems, but only if we can replace the academic year with high-quality alternatives. (And that’s really hard to do.)

The takeaway, then, is that there really isn’t one. We keep looking to science for easy answers to the dilemmas of parenting, but mostly what we learn is that such answers don’t exist. Childcare is a humbling art. Practice, performance, repeat.

Dee, Thomas, and Hans Henrik Sievertsen. "The Gift of Time? School Starting Age and Mental Health." NBER Working Paper No. 21610, October 2015.

The Root of Wisdom: Why Old People Learn Better

In Plato’s Apology, Socrates defines the essence of wisdom. He makes his case by comparison, arguing that wisdom is ultimately an awareness of ignorance. The wise man is not the one who always gets it right. He’s the one who notices when he gets it wrong: 

I am wiser than this man, for neither of us appears to know anything great and good; but he fancies he knows something, although he knows nothing; whereas I, as I do not know anything, so I do not fancy I do. In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know. 

I was thinking of Socrates while reading a new paper in Psychological Science by Janet Metcalfe, Lindsey Casal-Roscum, Arielle Radin and David Friedman. The paper addresses a deeply practical question, which is how the mind changes as it gets older. It’s easy to complain about the lapses of age: the lost keys, the vanished names, the forgotten numbers. But perhaps these shortcomings come with a consolation. 

The study focused on how well people learn from their factual errors. The scientists gave 44 young adults (mean age = 24.2 years) and 45 older adults (mean age = 73.7 years) more than 400 general-information questions. Subjects were asked, for instance, to name the ancient city with the hanging gardens, or to remember the name of the woman who founded the American Red Cross. After answering each question, they were asked to rate, on a 7-point scale, their “confidence in the correctness of their response.” They were then shown the correct answer. (Babylon, Clara Barton.) This phase of the experiment was done while subjects were fitted with an EEG cap, a device able to measure the waves of electrical activity generated by the brain.

The second part of the experiment consisted of a short retest. The subjects were asked, once again, to answer 20 of their high-confidence errors – questions they thought they got right but actually got wrong – and 20 low-confidence errors, or those questions they always suspected they didn’t know.

The first thing to note is that older adults did a lot better on the test overall. While the young group only got 26 percent of questions correct, the aged subjects got 41 percent. This is to be expected: the mind accumulates facts over time, slowly filling up with stray bits of knowledge.

What’s more surprising, however, is how the older adults performed on the retest, after they were given the answers to the questions they got wrong. Although current theory assumes that older adults have a harder time learning new material - their semantic memory has become rigid, or “crystallized” – the scientists found that the older subjects performed much better than younger ones during the second round of questioning. In short, they were far more likely to correct their errors, especially when it came to low-confidence questions:

Why did older adults score so much higher on the retest? The answer is straightforward: they paid more attention to what they got wrong. They were more interested in their ignorance, more likely to notice what they didn’t know. While younger subjects were most focused on their high-confidence errors – those mistakes that catch us by surprise – older subjects were more likely to consider every error, which allowed them to remember more of the corrections. Socrates would be proud.

You can see these age-related differences in the EEG data. When older subjects were shown the correct answer in red, they exhibited a much larger P3a amplitude, a signature of brain activity associated with the engagement of attention and the encoding of memory.

Towards the end of their paper, the scientists try to make sense of these results in light of research documenting the shortcomings of the older brain. For instance, previous studies have shown that older adults have a harder time learning new (and incorrect) answers to math problems, remembering arbitrary word pairs, and learning “deviant variations” to well-known fairy tales. Although these results are often used as evidence of our inevitable mental decline – the hippocampus falls apart, etc. – Metcalfe and colleagues speculate that something else is going on, and that older adults are simply “unwilling or unable to recruit their efforts to learn irrelevant mumbo jumbo.” In short, they have less patience for silly lab tasks. However, when senior citizens are given factually correct information to remember – when they are asked to learn the truth – they can rally their attention and memory. The old dog can still learn new tricks. The tricks just have to be worth learning.

Metcalfe, Janet, Lindsey Casal-Roscum, Arielle Radin, and David Friedman. "On Teaching Old Dogs New Tricks." Psychological Science (2015)

The Upside of a Stressful Childhood


Everyone knows that chronic stress is dangerous, especially for the developing brain. It prunes dendrites and inhibits the birth of new neurons. It shrinks the hippocampus, swells the amygdala and can lead to an elevated risk for heart disease, depression and diabetes. At times, these persuasive studies can make it seem as if the ideal childhood is an extended vacation, shielded from the struggles of life.

But the human brain resists such simple prescriptions. It is a machine of tradeoffs, a fleshy computer designed to adapt to its surroundings. This opens the possibility that childhood stress - even of the chronic sort - might actually prove useful, altering the mind in valuable ways. Stress is a curse. Except when it's a benefit.

A new paper by a team of scientists at the University of Minnesota (Chiraag Mittal, Vladas Griskevicius, Jeffry Simpson, Sooyeon Sung and Ethan Young) investigates this surprising hypothesis. The researchers focused on two distinct aspects of cognition: inhibition and shifting.

Inhibition involves the exertion of mental control, as the mind overrides its own urges and interruptions. When you stay on task, or resist the marshmallow, or talk yourself out of a tantrum, you are relying on your inhibitory talents. Shifting, meanwhile, involves switching between different trains of thought. “People who are good at shifting are better at allowing their responses to be guided by the current situation rather than by an internal goal,” write the scientists. These people notice what’s happening around them and are able to adjust their mind accordingly. Several studies have found a correlation between such cognitive flexibility and academic achievement.

The researchers focused on these two cognitive functions because they seemed particularly relevant in stressful childhoods. Let’s start with inhibition. If you grow up in an impoverished environment, you probably learn the advantages of not waiting, as delaying a reward often means the reward will disappear. In such contexts, write the scientists, “a preference for immediate over delayed rewards…is actually more adaptive.” Self-control is for suckers.

However, the opposite logic might apply to shifting. If an environment is always changing – if it’s full of unpredictable people and intermittent comforts – then a child might become more sensitive to new patterns. They could learn how to cope by increasing their mental flexibility. 

It’s a nice theory, but is it true? To test the idea, the Minnesota scientists conducted a series of experiments and replications. The first study featured 103 subjects, randomly divided into two groups. The first group was given a news article about the recent recession and the lingering economic malaise. The second group was given an article about a person looking for his lost keys. The purpose of this brief intervention was to induce a state of uncertainty, at least among those reading about the economy.

Then, all of the subjects were given standard tasks designed to measure their inhibition and shifting skills. The inhibition test asked subjects to ignore an attention-grabbing flash and look instead at the opposite side of the screen, where an arrow was displayed for 150 milliseconds. Their score was based on how accurately they were able to say which way the arrow was pointing. Shifting, meanwhile, was assessed based on how quickly subjects were able to identify colors or shapes after switching between the categories.

Once these basic cognitive tests were over, the scientists asked everyone a few questions about the unpredictability of their childhood. Did their house often feel chaotic? Did people move in and out on a seemingly random basis? Did they have a hard time knowing what their parents or other people were going to say or do from day-to-day?

The results confirmed the speculation. If people were primed to feel uncertain – they read that news article about the middling economy – those who reported more unpredictable childhoods were significantly worse at inhibition but much better at shifting. “To our knowledge, these experiments are the first to document that stressful childhood environments do not universally impair mental functioning, but may actually enhance certain cognitive functions in the face of uncertainty,” write the scientists. “These findings, therefore, suggest that the cognitive functioning of adults reared in more unpredictable environments may be better conceptualized as adapted rather than impaired.”

After replicating these results, the scientists turned to a sample of subjects from the Minnesota Longitudinal Study of Risk and Adaptation. (I’ve written about this incredible project before.) Begun in the mid-1970s, the Minnesota study has been following the children born to 267 women living in poverty in the Minneapolis area for nearly four decades. As a result, the scientists had detailed data on their childhoods, and were able to independently assess them for levels of stress and unpredictability.

The scientists gave these middle-aged subjects the same shifting task. Once again, those adults with the most chaotic childhoods performed better on the test, at least after being made to feel uncertain. This suggests that those young children forced to deal with erratic environments – they don’t know where they might be living next month, or who they might be living with – tend to develop compensatory skills in response; the stress is turned into a kind of nimbleness. This difference is significant, as confirmed by a meta-analysis:

The triangle-dotted line represents those subjects with an unpredictable childhood


Of course, this doesn’t mean poverty is a blessing, or that we should wish a chaotic childhood upon our children. “We are not in any way suggesting or implying that stressful childhoods are positive or good for people,” write Mittal, et al. However, by paying attention to the adaptations of the mind, we might learn to help people take advantage of their talents, even if they stem from disadvantage. If nothing else, this research is a reminder that Nietzsche had a point: what doesn’t kill us can make us stronger. We just have to look for the strength.

Mittal, Chiraag, Vladas Griskevicius, Jeffry A. Simpson, Sooyeon Sung, and Ethan S. Young. "Cognitive adaptations to stressful environments: When childhood adversity enhances adult executive function." Journal of Personality and Social Psychology 109, no. 4 (2015): 604.

Why You Should Hold It

When Prime Minister David Cameron is giving an important speech, or in the midst of difficult negotiations, he relies on a simple mental trick known as the full bladder technique. The name is not a metaphor. Cameron drinks lots of liquid and then deliberately refrains from urinating, so that he is “desperate for a pee.” The Prime Minister believes that such desperation comes with benefits, including enhanced clarity and improved focus.

Cameron heard about the technique from Enoch Powell, a Conservative politician in the 1960s. Powell once explained why he never peed before a big speech. "You should do nothing to decrease the tension before making a big speech," he said. "If anything, you should seek to increase it."

Are the politicians right? Does the full bladder technique work? Is the mind boosted by the intense urge to urinate? These questions are the subject of a new paper by Elise Fenn, Iris Blandón-Gitlin, Jennifer Coons, Catherine Pineda and Reinalyn Echon in the journal Consciousness and Cognition.

The study featured a predictable design. In one condition, people were asked to drink five glasses of water, for a total of 700 milliliters. In the second condition, they only took five sips of water, or roughly 50 milliliters. The subjects were then forced to wait 45 minutes, a “timeframe that ensured a full bladder.” (The scientists are careful to point out that their study was approved by two institutional review boards; nobody wet their pants.) While waiting, the subjects were asked their opinions on various social and moral issues, such as the death penalty, gun control and gay rights.

Once the waiting period was over, and those who drank lots of water began to feel a little discomfort, the scientists asked some of the subjects to lie to an interviewer about their strongest political opinions. If they were pro-death penalty, they had to argue for the opposite; if they believed in gay marriage, they had to pretend they weren’t. (To give the liars an incentive, they were told that those who “successfully fooled the interviewer” would receive a gift card.) Those in the truth telling condition, meanwhile, were told to simply speak their mind.

All of the conversations were videotaped and shown to a panel of seventy-five students, who were asked to rate the answers across ten different variables related to truth and deception. Does the subject appear anxious? How hard does he or she appear to be thinking? In essence, the scientists wanted to know if being in a “state of high urination urgency” turned people into better liars, more adept at suppressing the symptoms of dishonesty.

That’s exactly what happened. When people had to pee, they exhibited more “cognitive control” and “their behaviors appeared more confident and convincing when lying than when telling the truth.” They were less anxious and fidgety and gave more detailed answers.

In a second analysis, the scientists asked a panel of 118 students to review clips of the conversation and assess whether or not people were telling the truth. As expected, the viewers were far more likely to believe that people who had to pee were being honest, even when they were lying about their beliefs.

There is a larger mystery here, and it goes far beyond bladders and bathrooms. One of the lingering debates in the self-control literature is the discrepancy between studies documenting an ego-depletion effect  – in which exerting self-control makes it harder to control ourselves later on – and the inhibitory spillover effect, in which performance on one self-control task (such as trying not to piss our pants) makes us better at exerting control on another task (such as hiding our dishonesty). In a 2011 paper, Tuk, et al. found that people in a state of urination urgency were more disciplined in other domains, such as financial decision-making. They were less impulsive because they had to pee.

This new study suggests that the discrepancy depends on timing. When self-control tasks are performed sequentially, such as resisting the urge to eat a cookie and then working on a tedious puzzle, the ego gets depleted; the will is crippled by its own volition. However, when two self-control tasks are performed at the same time, our willpower is boosted: we perform better at both tasks. “We often do not realize the many situations in our daily lives where we’re constantly exerting self control and how that affects our capacity,” wrote co-author Iris Blandón-Gitlin in an email. “It is nice to know that we can also find situations where our mental resources can be facilitated by self-control acts.” 

There are some obvious takeaways. If you’re trying to skip dessert, then you should also skip the bathroom; keep your bladder full. The same goes for focusing on a difficult activity  – you’re better off with pressure in your pants, as the overlapping acts of discipline will give you even more discipline. That said, don’t expect the mental boost of having to pee to last after you relieve yourself. If anything, the ego depletion effect might then leave you drained: All that willpower you spent holding it in will now hold you back.

Fenn, Elise, Iris Blandón-Gitlin, Jennifer Coons, Catherine Pineda, and Reinalyn Echon. "The inhibitory spillover effect: Controlling the bladder makes better liars." Consciousness and Cognition 37 (2015): 112-122.

Quality, Quantity, Creativity

There are two competing narratives about the current television landscape. The first is that we’re living through a golden age of scripted shows. From The Sopranos to Transparent, Breaking Bad to The Americans: the art form is at its apogee.

The second narrative is that there’s way too much television - more than 400 scripted shows! - and that consumers are overwhelmed by the glut. John Landgraf, the CEO of FX Networks, recently summarized the problem: "I long ago lost the ability to keep track of every scripted TV series,” he said last month at the Television Critics Association. “But this year, I finally lost the ability to keep track of every programmer who is in the scripted programming business…This is simply too much television.” [Emphasis mine.] Landgraf doesn’t see a golden age – he sees a “content bubble.”

Both of these narratives are true. And they’re both true for a simple reason: when it comes to creativity, high levels of creative output are often a prerequisite for creative success. Put another way, throwing shit at the wall is how you figure out what sticks. More shit, more sticks.

This is a recurring theme of the entertainment business. The Golden Age of Hollywood – a period beginning with The Jazz Singer (1927) and ending with the downfall of the studio system in the 1950s – gave rise to countless classics, from Casablanca to The Searchers. It also led to a surplus of dreck. In the late 1930s, the major studios were releasing nearly 400 movies per year. By 1985, that number had fallen to around 100. It’s no accident that, as Quentin Tarantino points out, the movies of the 80s “sucked.”

The psychological data supports these cultural trends. The classic work in this area has been done by Dean Keith Simonton at UC-Davis. In 1997, after decades spent studying the creative careers of scientists, Simonton proposed the “equal odds rule,” which argues that “the relationship between the number of hits (i.e., creative successes) and the total number of works produced in a given time period is positive, linear, stochastic, and stable.” In other words, the people with the best ideas also have the most ideas. (They also have some of the worst ideas. As Simonton notes, “scientists who publish the most highly cited works also publish the most poorly cited works.”) Here’s Simonton, rattling off some biographical snapshots of geniuses:

"Albert Einstein had around 248 publications to his credit, Charles Darwin had 119, and Sigmund Freud had 330,while Thomas Edison held 1,093 patents—still the record granted to any one person by the U.S. Patent Office. Similarly, Pablo Picasso executed more than 20,000 paintings, drawings, and pieces of sculpture, while Johann Sebastian Bach composed over 1,000 works, enough to require a lifetime of 40-hr weeks for a copyist just to write out the parts by hand."

Simonton’s model was largely theoretical: he tried to find the equations that fit the historical data. But his theories now have some new experimental proof. In a paper published this summer in Frontiers in Psychology, a team of psychologists and neuroscientists at the University of New Mexico extended Simonton’s work in some important ways. The scientists began by giving 246 subjects a version of the Foresight test, in which people are shown a graphic design (say, a zig-zag) and told to “write down as many things as you can that the drawing makes you think of, looks like, reminds you of, or suggests to you.” These answers were then scored by a panel of independent judges on a five-point scale of creativity. In addition, the subjects were given a variety of intelligence and personality tests and put into a brain scanner, where the thickness of the cortex was measured.

The results were a convincing affirmation of Simonton’s theory: a high ideation rate - throwing shit at the wall - remains an extremely effective creativity strategy. According to the scientists, “the quantity of ideas was related to the judged quality or creativity of ideas to a very high degree,” with a statistical correlation of 0.73. While we might assume that rushing to fill up the page with random thoughts would lead to worse output, the opposite seemed to occur: those who produced more ideas also produced much better ideas. Rex Jung, the lead author on the paper, points out that this “is the first time that this relationship [the equal odds rule] has been demonstrated in a cohort of ‘low creative’ subjects as opposed to the likes of Picasso, Beethoven, or Curie.” You can see the linear relationship in the chart below:

The scientists also looked for correlations between the cortical thickness data and the performance of the subjects. While such results are prone to false positives (Type 1 errors) – the brain is a complicated knot – the researchers found that both the quantity and quality of creative ideas were associated with a thicker left frontal pole, a brain area associated with "thinking about one's own future" and "extracting future prospects."
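Stepping back from the neuroscience, it’s worth spelling out why the equal odds rule ties quantity to quality in the first place: if every work is an independent draw with some small, fixed chance of becoming a hit, then expected hits scale linearly with total output, which is exactly the relationship Simonton describes and Jung’s correlation echoes. Here’s a toy simulation of that logic; the 5 percent hit rate is an arbitrary illustration, not a figure from either study:

```python
import random

def career_hits(n_works, hit_rate=0.05, seed=0):
    """Count the 'hits' in a career of n_works independent attempts.

    hit_rate is an arbitrary illustrative value, not an estimate from
    Simonton's data or the Jung et al. paper.
    """
    rng = random.Random(seed)
    return sum(rng.random() < hit_rate for _ in range(n_works))

# Expected hits grow linearly with output: a creator who produces ten times
# as much work ends up with roughly ten times as many successes (and far
# more failures along the way).
for n_works in (10, 100, 1000):
    print(n_works, career_hits(n_works, seed=n_works))
```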

Of course, it’s a long way from riffing on a zig-zag in a lab to producing quality scripted television. Nevertheless, the basic lesson of this research is that expanding the quantity of creative output will also lead to higher quality, just as Simonton predicted. It would be nice (especially for networks) if we could get Breaking Bad without dozens of failed pilots, or if there was a way to greenlight Deadwood without buying into John from Cincinnati. (Call it the David Milch conundrum.) But there is no shortcut; failure is an essential inefficiency. In his writing, Simonton repeatedly compares the creative process to Darwinian evolution, in which every successful adaptation emerges from a litter of dead-end mutations and genetic mistakes. The same dismal logic applies to culture.* The only way to get a golden age is to pay for the glut. 

Jung, Rex E., et al. "Quantity yields quality when it comes to creativity: a brain and behavioral test of the equal-odds rule." Frontiers in Psychology (2015).

*Why? Because William Goldman was right: "NOBODY KNOWS ANYTHING. Not one person in the entire motion picture field knows for a certainty what's going to work. Every time out it's a guess..."

The Best Way To Increase Voter Turnout

“Nothing is more wonderful than the art of being free, but nothing is harder to learn how to use than freedom.”

-Alexis de Tocqueville

Why don’t more Americans vote? In the last midterm election, only 36.4 percent of eligible voters cast a ballot, the lowest turnout since 1942.

To understand the causes of low turnout, the Census Bureau regularly asks citizens why they chose not to exercise their constitutional right. The number one reason is always the same: “too busy.” (That was the reason given by 28 percent of non-voters in 2014.) The second most popular excuse is “not interested,” followed by a series of other obstacles, such as forgetting about the election or not liking any of the candidates.

What’s telling about this survey is that the reasons for not voting are almost entirely self-created. They are psychological, not institutional. (Only 2 percent of non-voters blame “registration problems.”) It’s not that people can’t vote – it’s that they don’t want to. They are unwilling to make time, even if it only takes a few minutes.

Alas, most interventions designed to mobilize these non-voters – to help them deal with their harried schedule and political apathy - are not very effective. One recent paper, for instance, reviewed evidence from more than 200 get-out-the-vote (GOTV) projects to show that, on average, door-to-door canvassing increased turnout by 1 percentage point, direct mail increased turnout by 0.7 percentage points, and phone calls increased turnout by 0.4 percentage points.* Television advertising, meanwhile, appears to have little to no effect. (Nevertheless, it’s estimated that 2016 political campaigns will spend more than $4.4 billion on broadcast commercials.)

However, a new working paper, by John Holbein of Duke University, proposes a very different approach to increase voter turnout. While most get-out-the-vote operations are fixated on the next election, trying to churn out partisans in the same Ohio/Florida/Nevada zip codes, Holbein’s proposal focuses on children, not adults. It also comes with potentially huge side benefits.

In his paper, “Marshmallows and Votes?” Holbein looks at the impact of the Fast Track Intervention, one of the first large scale programs designed to improve children’s non-cognitive skills, such as self-control, emotional awareness and grit. (“Non-cognitive” remains a controversial description, since it implies that these skills don’t require high-level thinking. They do.) Fast Track worked. Follow-up surveys with hundreds of students enrolled in the program showed that, relative to a control group, those given an education in non-cognitive skills were less aggressive at school, better at identifying their emotions and more willing to work through difficult problems. As teenagers, those in the treatment group “manifested reduced conduct problems in the home, school, and community, with decreases in involvement with deviant peers, hyperactivity, delinquent behavior, and conduct disorders.” As adults, they had lower conviction rates for violent and drug-related crimes.

Holbein wanted to expand on this analysis by looking at voter behavior when the Fast Track subjects were in their mid to late twenties. After matching people to their voter files, he found a clear difference in political participation rates. According to Holbein’s data, “individuals exposed to Fast Track turned out to vote in at least one of the federal elections held during 2004-2012 at a rate 11.1 percentage points higher than the control group.” That represents a 40 percent increase over baseline levels, which implies that turnout in the control group was roughly 28 percent. Take that, television ads.

There are two important takeaways. The first is that a childhood intervention designed to improve non-cognitive skills can have “large and long-lasting impacts on political behavior in adulthood.” In his paper, Holbein emphasizes the boost provided by self-regulation, noting that the ability to "persevere, delay gratification, see others' perspectives, and properly target emotion and behavior" can help people overcome the costs of participating in an election, whether it's waiting in a line at a polling place or not getting turned off by negative ads. And since the health of a democracy depends on the involvement of its citizens – we must learn how to use our freedom, as Tocqueville put it – it seems clear that attempts to improve political participation should begin as early as possible, and not just with lectures about civics.

The second lesson involves the downstream benefits of an education in non-cognitive skills. We’ve been so focused for so long on the power of intelligence that we’ve largely neglected to teach children about self-control and emotional awareness. (That’s supposed to be the job of parents, right?) However, an impressive body of research over the last decade or so has made it clear that non-cognitive skills are extremely important. In a recent review paper, the Nobel Laureate James Heckman and the economist Tim Kautz summarize the evidence: “The larger message of this paper is that soft skills [e.g., non-cognitive skills] predict success in life, that they causally produce that success, and that programs that enhance soft skills have an important place in an effective portfolio of public policies.”

Democracies are not self-sustaining – they have to invest in their continued existence, which means developing citizens willing to cast a ballot. In Making Democracy Work, Robert Putnam and colleagues analyzed the regional governments of Italy. On paper, all of these governments looked identical, having been created as part of the same national reform. However, Putnam found that their effectiveness varied widely, largely as a result of differing levels of civic engagement among their citizens. When people were more engaged with their community – when they voted in elections, read newspapers, etc. – they had governments that were more responsive and successful. According to Putnam, civic engagement is not a by-product of good governance. It’s a precondition for it.

This new paper suggests that the road to a better democracy begins in the classroom. By teaching young students basic non-cognitive skills – something we should be doing anyway – we might also improve the long-term effectiveness of our government.

Holbein, John. "Marshmallows and Votes? Childhood Non-Cognitive Skill Development and Adult Political Participation." Working Paper.

*There is evidence, however, that modern campaigns have become more effective at turning out the vote, largely because their ground games can now target supporters with far higher levels of precision. According to a recent paper by Ryan Enos and Anthony Fowler, the Romney and Obama campaigns were able to increase voter participation in the 2012 election by 7-8 percentage points in “highly targeted” areas. Unfortunately, these additional supporters were also quite expensive, as Enos and Fowler conclude that “the average cost of generating a single vote is about 87 dollars."

Can Tetris Help Us Cope With Traumatic Memories?

In a famous passage in Theaetetus, Plato compares human memory to a wax tablet:

Whenever we wish to remember anything we see or hear or think of in our own minds, we hold this wax under the perceptions and thoughts and imprint them upon it, just as we make impressions from seal rings; and whatever is imprinted we remember and know as long as its image lasts, but whatever is rubbed out or cannot be imprinted we forget and do not know.

The appeal of Plato’s metaphor is that it fits our intuitions about memory. Like the ancient philosopher, we’re convinced that our recollections are literal copies of experience, snapshots of sensation written into our wiring. And while our metaphors of memory now reflect modern technology – the brain is often compared to a vast computer hard drive – we still focus on comparisons that imply accurate recall. Once a memory is formed, it’s supposed to stay the same, an immutable file locked away inside the head.

But this model is false. In recent years, neuroscientists have increasingly settled on a model of memory that represents a dramatic break from the old Platonic metaphors. It turns out that our memories are not fixed like an impression in a wax tablet or a code made of zeroes and ones. Instead, the act of remembering changes the memory itself, a process known as memory reconsolidation. This means that every memory is less like a movie, a collection of unchanging scenes, and more like a theatrical play, subtly different each time it’s performed. 

On the one hand, the constant reconsolidation of memory is unsettling. It means that our version of history is fickle and untrustworthy; we are all unreliable narrators. And yet, the plasticity of memory also offers a form of hope, since it means that even the worst memories can be remade. This was Freud’s grand ambition. He insisted that the theory of repression was the “corner-stone on which the whole structure of psychoanalysis rests.” But repression does not erase a memory; because people could not choose what to forget, they had to find ways to live with what they remembered, which is what the talking cure was all about.

If only Freud knew about video games. That, at least, is the message of a new paper in Psychological Science from a team of researchers at Cambridge, Oxford and the Karolinska Institute. (The first author is Ella James; the corresponding author is Emily Holmes.) While previous research has used beta-blockers to weaken traumatic memories during the reconsolidation process – subjects are given calming drugs and then asked to remember their traumas – these scientists wanted to explore interventions that didn’t involve pharmaceuticals. Their logic was straightforward. Since the brain has strict computational limits, and memories become malleable when they’re recalled, distracting people during the recall process should leave them with fewer cognitive resources to form a solid memory trace of the bad stuff. “Intrusive memories of trauma consist of mental images such as visual scenes from the event, for example, the sight of a red car moments before a crash,” write the scientists. “Therefore, a visuospatial task performed when memory is labile (during consolidation or reconsolidation) should interfere with visual memory storage (as well as restorage) and reduce subsequent intrusions.” In short, we’ll be so distracted that we’ll forget the pain. 

The scientists began by inducing some experimental trauma. In the first experiment, the “trauma” consisted of a 12-minute film containing 11 different scenes involving “actual or threatened death, as well as serious injury.” There was a young girl hit by a car, a man drowning in the sea, and a teenager who, staring at his phone, was struck by a van while crossing the street. The subjects were shown these tragic clips in a dark room and asked to imagine themselves “as a bystander at the scene.”

The following day, the subjects returned to the lab and were randomly assigned to one of two groups. Those in the first group were shown still pictures drawn from the video, all of which were designed to make them remember the traumatic film. There was a photo of the young girl just before she was hit and a snapshot of the man in the ocean, moments before he slipped below the surface. Then, after a brief “music filler task” – a break designed to let the chemistry of reconsolidation unfold – the subjects were told to play Tetris on a computer for twelve minutes. Afterwards, they were sent home with a diary and asked to record any “intrusive memories” of the traumatic film over the following week.

Subjects in the control group underwent a simpler procedure. After returning to the lab, they were given the music filler task and told to sit quietly in a room, where they were allowed to “think about anything.” They were then sent home with the same diary and the same instructions.

As you can see in the charts below, that twelve-minute session of Tetris significantly reduced the number of times people remembered those awful scenes, both during the week and on their final return to the lab:

A second experiment repeated this basic paradigm, except with two additional control groups. The first new group played Tetris but was not given the reactivation task first, which meant their memories never became malleable. The second new control group was given the reactivation task but without the Tetris cure. The results were again compelling:

It’s important to note that the benefits of the Tetris treatment only existed when the distraction was combined with a carefully timed recall sequence. It’s not enough to play a video game after a trauma, or to reflect on the trauma in a calming space. Rather, the digital diversion is only therapeutic within a short temporal window, soon after we’ve been reminded of what we’re trying to forget. “Our findings suggest that, although people may wish to forget traumatic memories, they may benefit from bringing them back to mind, at least under certain conditions – those which render them less intrusive,” said Ella James, in an interview with Psychological Science.

The virtue of this treatment, of course, is that it doesn’t involve any mood-altering drugs, most of which come with drawbacks and side effects. (MDMA might be useful for PTSD, but it can also be a dangerous compound.) The crucial question is whether these results will hold up among people exposed to real traumas, and not just a cinematic compilation of death and injury. If they do, then Tetris just might become an extremely useful psychiatric tool.

Last point: Given the power of Tetris to interfere with the reconsolidation process, I couldn’t help but wonder about how video games might be altering our memory of more ordinary events. What happens to those recollections we think about shortly before disappearing into a marathon session of Call of Duty or Grand Theft Auto V? Are they diminished, too? It’s a dystopia ripped from the pages of Infinite Jest: an entertainment so consuming that it induces a form of amnesia. The past is still there. We just forget to remember it.

James, Ella L., et al. "Computer game play reduces intrusive memories of experimental trauma via reconsolidation-update mechanisms." Psychological Science (2015).

 

How Does Mindfulness Work?

In the summer of 1978, Ellen Langer published a radical sentence in the Journal of Personality and Social Psychology. It’s a line that’s easy to overlook, as it appears in the middle of the first page, sandwiched between a few dense paragraphs about the nature of information processing. But the sentence is actually a sly attack on one of the pillars of Western thought. “Social psychology is replete with theories that take for granted the ‘fact’ that people think,” Langer wrote. With her usual audacity, Langer then went on to suggest that most of those theories were false, and that much of our behavior is “accomplished…without paying attention.” The “fact” of our thinking is not really a fact at all.

Langer backed up these bold claims with a series of clever studies. In one experiment, she approached a student at a copy machine in the CUNY library. As the subject was about to insert his coins into the copier, Langer asked if she could use the machine first. The question came in three different forms. The first was a simple request: “Excuse me, I have 5 pages. May I use the Xerox machine?” The second was a request that included a meaningless reason, or what Langer called “placebic information”: “Excuse me, I have 5 pages. May I use the Xerox machine, because I have to make copies?” Finally, there was a condition that contained an actual excuse, if not an explanation: “Excuse me, I have 5 pages. May I use the Xerox machine, because I’m in a rush?”

If people were thoughtful creatures, then we’d be far more likely to let a person with a valid reason (“I’m in a rush”) cut in line. But that’s not what Langer found. Instead, she discovered that offering people any reason at all, even an utterly meaningless one (“I have to make copies”), led to near universal submission. It’s not that people aren’t listening, Langer says – it’s that they’re not thinking. While mindlessness had previously been studied in situations of “overlearned motoric behavior,” such as typing on a keyboard, Langer showed that the same logic applied to many other situations.

After mapping out the epidemic of mindlessness, Langer decided to devote the rest of her career to its antidote, which she refers to as mindfulness. In essence, mindfulness is about learning how to control the one thing in this world that we can control: our attention. But this isn’t the sterile control of the classroom, in which being attentive means “holding something still.” Instead, Langer came to see mindfulness as a way to realize that reality is never still, and that what we perceive is only a small sliver of what there is. “When you notice new things, that puts you in the present, but it also reminds you that you don’t know nearly as much as you think you know,” Langer told me. “We tend to confuse the stability of our attitudes and mindsets with the stability of the world. But the world outside isn’t stable – it’s always changing.” Mindfulness helps us see the change. 

This probably sounds obvious. But then the best advice usually is. In recent years, Langer and others have documented the far-reaching benefits of mindfulness, showing how teaching people basic mindfulness techniques can help them live longer, improve eyesight, alleviate stress, lose weight, increase happiness and empathy, decrease cognitive biases and even enhance memory in old age. Most recently, Langer and her Harvard colleagues have shown that mindfulness can attenuate the progress of ALS, a disease that is believed to be “almost solely biologically driven.”

However, it’s one thing to know that mindfulness can work. It’s something else to know how it works, to understand the fundamental mechanisms by which mindfulness training can alter the ways we engage with the world. That’s where a recent paper by Esther Papies, Tila Pronk, Mike Keesman and Lawrence Barsalou in JPSP can provide some important insights. The researchers developed a short form of mindfulness training for amateurs; it takes roughly twelve minutes to complete. Participants view a series of pictures and are told to “simply observe” their reactions, which are just “passing mental events.” These reactions might include liking a picture, disliking it, and so on. The goal is to go meta, to notice what you notice, and to do all this without judging yourself.

At first glance, such training can seem rather impractical. Unlike most programs that aim to improve our behavior, there is no mention of goals or health benefits or self-improvement. The scientists don’t tell people which thoughts to avoid, or how to avoid them; they offer no useful tips for becoming a better person.

Nevertheless, these short training sessions altered a basic engine of behavior, which is the relationship between motivational states – I want that and I want it now – and our ensuing choices. At any given moment, people are besieged with sundry desires: for donuts, gossip, naps, sex. It’s easy to mindlessly submit to these urges. But a little mindfulness training (12 minutes!) seems to help us say no. We have more control over the self because we realize the self is a fickle ghost, and that its craving for donuts will disappear soon enough. We can wait it out.

The first experiment by Papies, et al. involved pictures of opposite-sex strangers. The subjects were asked to rate the attractiveness of each person and whether or not he or she was a potential partner.  In addition, they were asked questions about their “sexual motivation,” such as “How often do you fantasize about sex?” and “How many sexual partners have you had in the last year?” In the control condition – these people were given no mindfulness training – those who were more sexually motivated were also more likely to see strangers as attractive and as suitable partners. That’s not very surprising: if you’re in a randy mood, motivated to seek out casual sex, then you see the world through a very particular lens. (To quote Woody Allen: “My brain? That’s my second favorite organ.”) Mindfulness training, however, all but erased the correlation between sexual motivation and the tendency to see other people as sexual objects. “As one learns to perceive spontaneous pleasurable reactions to opposite-sex others as mere mental events,” write the scientists, “their effect on choice behavior…no longer occurs.” One might be in the mood, but the mood doesn’t win.

And it’s not just sex: the same logic can be applied to every appetite. In another experiment, the scientists showed how mindfulness training can help people make better eating choices in the real world. Subjects were recruited as they walked into a university cafeteria. A third of the subjects were assigned to a short training session in mindful attention; everyone else was part of a control group. Then, they were allowed to choose their lunch as usual, selecting food from a large buffet. Some of the meal options were healthy (leafy salads and other green things) and some were not (cheese puff pastries, sweet muffins, etc.).

The brief mindfulness training generated impressive results, especially among students who were very hungry. Although the training only lasted a few minutes, it led them to choose a meal with roughly 30 percent fewer calories. They were also much more likely to choose a healthy salad, at least when compared to those in the control groups (76 percent versus 49 percent), and less likely to choose an unhealthy snack (45 percent versus 63 percent).

What makes this research important is that it begins to reveal the mechanisms of mindfulness, how an appreciation for the transitory nature of consciousness can lead to practical changes in behavior. When subjects were trained to see their desires as passing thoughts, squirts of chemistry inside the head, the stimuli became less alluring. A new fMRI study from Papies and colleagues reveals the neural roots of this change. After being given a little mindfulness training, subjects showed reduced activity in brain areas associated with bodily states and increased activity in areas associated with "perspective shifting and effortful attention." In short, they were better able to tune their flesh out. Because really: how happy will a donut make us? How long will the sugary pleasure last? Not long enough. We might as well get the salad.

The Buddhist literature makes an important distinction between “responding” and “reacting.” Too often, we are locked in loops of reaction, the puppets of our most primal parts. This is obviously a mistake. Instead, we should try to respond to the body and the mind, inserting a brief pause between emotion and action, the itch and the scratch. Do I want to obey this impulse? What are its causes? What are the consequences? Mindfulness doesn’t give us the answers. It just helps us ask the questions.

This doesn’t mean we should all take up meditation or get a mantra; there are many paths to mindfulness. Langer herself no longer meditates: “The people I know won’t sit still for five minutes, let alone forty,” she told Harvard Magazine in 2010. Instead, Langer credits her art – she’s a successful painter in her spare time – with helping her maintain a more mindful attitude. “It’s not until you try to make a painting that you’re forced to really figure out what you’re looking at,” she says. “I see a tree and I say that tree is green. Fine. It is green. But then when I go to paint it, I have to figure out exactly what shade of green. And then I realize that these greens are always changing, and that as the sun moves across the sky the colors change, too. So here I am, trying to make a picture of a tree, and all of a sudden I’m thinking about how nothing is certain and everything changes. I don’t even know what a tree looks like.” That’s a mindful epiphany, and for Langer it’s built into the artistic process.

The beauty of mindfulness is that it’s ultimately an attitude towards the world that anyone can adopt. Pay attention to your thoughts and experiences. Notice their transient nature. Don’t be so mindless. These are simple ideas, which is why they can be taught to nearly anyone in a few minutes. They are also powerful ideas, which is why they can change your life.

Langer, Ellen J., Arthur Blank, and Benzion Chanowitz. "The mindlessness of ostensibly thoughtful action: The role of "placebic" information in interpersonal interaction." Journal of Personality and Social Psychology 36.6 (1978): 635.

Papies, E. K., Pronk, T. M., Keesman, M., & Barsalou, L. W. (2015). The benefits of simply observing: Mindful attention modulates the link between motivation and behavior. Journal of Personality and Social Psychology, 108(1), 148.