The Purpose Driven Life

Viktor Frankl was trained as a psychiatrist in Vienna in the early 1930s, during the peak of Freud’s influence. He internalized the great man’s theories, writing at one point that “all spiritual creations turn out to be mere sublimations of the libido.” The human mind, powered by its id engine, wanted primal things. Mostly, it just wanted sex.

Unfortunately, Frankl didn’t find this therapeutic framework very useful. While working as a doctor in the so-called “suicide pavilion” at the Steinhof hospital – he treated more than 1,200 at-risk women over four years – Frankl began to question his training. The pleasure principle, he came to believe, was not the main motive of existence; the despair of these women was about more than a thwarted id.

So what were these women missing? Why were they suicidal? Frankl’s simple answer was that their depression was caused by a lack of meaning. The noun is deliberately vague, for there is no universal fix; every person’s meaning will be different. For some people, it was another person to care for, or a lasting relationship. For others, it was an artistic skill, or a religious belief, or an unwritten novel. But the point was that meaning was at the center of things, for “life can be pulled by goals as surely as it can be pushed by drives.” What we craved wasn’t happiness for its own sake, Frankl said, but something to be happy about.

And so, inspired by this insight, Frankl began developing his own school of psychotherapy, which he called logotherapy. (Logos is Greek for meaning; therapeuo means “to heal or make whole.” Logotherapy, then, literally translates as “healing through meaning.”)  As a clinician, Frankl’s goal was not the elimination of pain or worry. Rather, it was showing patients how to locate a sense of purpose in their lives. As Nietzsche put it, “He who has a why to live can bear with almost any how.” Frankl wanted to help people find their why.

Logotherapy now survives primarily as a work of literature, closely associated with Frankl’s best-selling Holocaust memoir, Man’s Search for Meaning. Amid the horrors of Auschwitz and Dachau, Frankl explored the practical utility of logotherapy. In the book he explains, again and again, how a sense of meaning helped his fellow prisoners survive in such a hellish place. He describes two men on the verge of suicide. Both of the inmates used the same argument: “They had nothing more to expect from life,” so they might as well stop living in pain. Frankl, however, used his therapeutic training to convince the men that “life was still expecting something from them.” For one man, that meant thinking about his adored child, waiting for him in a foreign country. For the other man, it was his scientific research, which he wanted to finish after the war. Because these prisoners remembered that their life still had meaning, they were able to resist the temptation of suicide. They had a why, and they could accept the how.

I was thinking of Frankl while reading a new paper in Psychological Science by Patrick Hill and Nicholas Turiano. The research explores one of Frankl’s essential themes: the link between finding a purpose in life and staying alive. The new study picks up where several recent longitudinal studies have left off. While prior research has found a consistent relationship between a sense of purpose and “diminished mortality risk” in older adults, this new paper looks at the association across the entire lifespan. Hill and Turiano assessed life purpose with three questions, asking their 6,163 subjects to say, on a scale from 1 to 7, how strongly they disagreed or agreed with the following statements:

  1. Some people wander aimlessly through life, but I am not one of them.
  2. I live life one day at a time and don’t really think about the future.
  3. I sometimes feel as if I’ve done all there is to do in life.

Then the scientists waited. For 14 years. After counting up the number of deaths in their sample (569 people), the scientists looked to see if there was any relationship between the people who died and their sense of purpose in life.

Frankl would not be surprised by the results, as the scientists found that purpose was significantly correlated with reduced mortality. (For every standard deviation increase in life purpose, the risk of dying during the study period decreased by 15 percent. That’s roughly equivalent to the reduction in mortality that comes from engaging in a modest amount of exercise.) This statistical relationship held even after Hill and Turiano corrected for other markers of psychological well-being, such as having a positive disposition. Meaning still mattered. A sense of purpose – regardless of what the purpose was – kept us from death. “These findings suggest the importance of establishing a direction for life as early as possible,” write the scientists.
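
For readers who like to see the arithmetic, here is a minimal sketch (Python, not the authors’ code) of how a three-item purpose score is typically assembled and how a per-standard-deviation hazard ratio maps onto the 15 percent figure above. The reverse-scoring of the second and third items is an assumption based on standard practice for scales like this one; the example responses are hypothetical.

```python
# A minimal sketch, assuming standard scoring of the three-item scale;
# not the authors' code, and the responses below are hypothetical.
import numpy as np

responses = np.array([6, 2, 1])        # one subject's 1-7 agreement with items 1-3
reverse_coded = [False, True, True]    # agreeing with items 2 and 3 signals *less* purpose

scored = np.where(reverse_coded, 8 - responses, responses)
purpose_score = scored.mean()          # higher = stronger sense of purpose
print(f"composite purpose score: {purpose_score:.2f}")

# Survival models report effects as hazard ratios. With purpose standardized,
# the 15 percent figure quoted above implies a hazard ratio of roughly 0.85
# per standard deviation.
hazard_ratio_per_sd = 0.85
risk_reduction = 1 - hazard_ratio_per_sd
print(f"mortality risk falls by about {risk_reduction:.0%} per SD increase in purpose")
```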

Of course, these correlations cannot reveal their cause. One hypothesis, which is currently being explored by Hill and Turiano, is that people with a sense of purpose are also more likely to engage in healthier behaviors, if only because they have a reason to eat their kale and go to the gym. (Nihilism leads to hedonism.) But that’s only a guess. Frankl himself remained metaphysical to the end. The closest he ever got to a testable explanation was to insist that man was wired for “self-transcendence,” which Frankl defined as being in a relationship with “someone or something other than oneself.” While Freud stressed the inherent selfishness of man, Frankl believed that we needed a purpose as surely as we needed sex and water and food. We are material machines driven by immaterial desires.

Frankl, Viktor E. Man's Search for Meaning. Simon and Schuster, 1985.

Klingberg, Haddon, Jr. When Life Calls Out to Us: The Love and Lifework of Viktor and Elly Frankl. Random House, 2012.

Hill, Patrick L., and Nicholas A. Turiano. "Purpose in Life as a Predictor of Mortality Across Adulthood." Psychological Science (2014): 0956797614531799.

The Too-Much-Talent Effect

A few years ago, the psychologists Adam Galinsky and Roderick Swaab began working on a study that looked at the relationship between national levels of egalitarianism – the belief that everyone deserves equal rights and opportunities – and the performance of national soccer teams in international competitions like the World Cup. It was an admittedly speculative hypothesis, an attempt to find a link between a vague cultural ethos and success on the field. But their logic went something like this: because talented athletes often come from impoverished communities, the most successful countries in the highly competitive World Cup would find a way to draw from the biggest pools of human talent. Think here of the great Pele, who was too poor to afford a soccer ball so he practiced his kicks with a grapefruit instead. Or the famous Diego Maradona, born in a shantytown on the outskirts of Buenos Aires. These men had talent but little else. It is a testament to egalitarianism that they were still able to get the opportunities to succeed.

It’s a nice theory, but is it true? After controlling for a number of variables, including GDP, population size, length of national soccer history and climate, Galinsky and Swaab found that egalitarianism was, indeed, “strongly linked” to better performance in international competition. It also predicted the quantity of talent on each team, with more egalitarian countries producing more players under contract with elite European clubs. In short, the most successful soccer countries don’t necessarily have the most innately talented populations. Instead, they do a better job of not squandering the talent they already have. 

It’s a fascinating study with broad implications. It suggests, for one thing, that much of the national variation in performance – and it doesn’t matter if we’re talking about the soccer pitch or 8th grade math scores – has to do with how well countries utilize their available human capital. What T.S. Eliot said about the excess of literary geniuses during the Elizabethan age (Shakespeare, Marlowe, Spenser, Donne, etc.) turns out to be a far more general truth. “The great ages did not perhaps produce much more talent than ours,” Eliot wrote, “but less talent was wasted.”

So far, so interesting. But as often happens in science, answers have a slippery way of inspiring new questions; the scientific process is a perpetual mystery-generating machine. And it’s this next mystery – one utterly unrelated to egalitarianism – that most interests me.

While analyzing the soccer data, Galinsky and Swaab noticed something very peculiar – at a certain point, having more highly talented players on a national team led to worse performance. It was an unsettling finding, since people generally assume that talent exists in a linear relationship with success. (More talent is always better.) Such logic underpins the frenzy of NBA free-agency – every team is begging for superstars – and the predictions of bookies and commentators, who believe that the most gifted teams are the most likely to win. It’s why an already loaded Barcelona team just spent more than $100 million to acquire Luis Suarez, a player who has become as famous for biting as he has for striking.

And so, armed with this anomaly, Galinsky, Swaab and colleagues at INSEAD, Columbia University and VU University Amsterdam decided to continue the investigation. After confirming the result among soccer teams competing at the 2010 and 2014 World Cups – too much talent appeared to be a burden, making national teams less likely to win – the scientists decided to see if their findings could be extended to other sports.

They turned first to basketball, looking at the impact of top talent on NBA team performance between 2002 and 2012. They coded talent by looking at the Estimated Wins Added (EWA) statistic, a measure that reflects the approximate number of wins a given player adds to a team’s season total. (In the 2013-2014 season, Kevin Durant led the league with an EWA of 30.1. LeBron was second with 27.3.) Once again, talent exhibited a tipping point: NBA teams benefited from having the best players unless they had too many of them. While most general managers assume the link between talent and performance is linear – a straight line with an upward slope – the scientists found that it was actually curved, and teams with more than 60 percent top talent did worse than their less skilled competition. Swaab and Galinsky call this the “too-much-talent” effect.

The relationship between team talent levels and team performance in the NBA
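
To see what a curvilinear test of this sort looks like in practice, here is a rough sketch that regresses performance on talent share plus its square and checks whether the quadratic term is negative (an inverted U). The data are synthetic stand-ins, not the NBA numbers from the paper, so the tipping point it recovers is purely illustrative.

```python
# A sketch of the curvilinear test described above, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
talent = rng.uniform(0, 0.9, 300)     # share of each roster made up of "top talent"
performance = 1.6 * talent - 1.3 * talent**2 + rng.normal(0, 0.1, 300)

# Fit performance = b0 + b1*talent + b2*talent^2
X = np.column_stack([np.ones_like(talent), talent, talent**2])
b0, b1, b2 = np.linalg.lstsq(X, performance, rcond=None)[0]

peak = -b1 / (2 * b2)                 # the point where more talent stops helping
print(f"quadratic coefficient: {b2:.2f} (negative -> inverted U)")
print(f"estimated tipping point: {peak:.0%} top talent")
```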

What accounts for the negative returns of excessive talent? The problem isn’t talent itself; there’s nothing inherently wrong with gifted players. Rather, Galinsky and Swaab argue that too much talent can disrupt the dynamics required for effective teamwork. “Too much talent is really a metaphor for having ineffective coordination among players,” Galinsky says. “Sometimes, you need a hierarchy on a team. You need to have different roles. But if everyone thinks they should be the one with the ball, then you’re going to run into problems.” Galinsky et al. documented this drop-off in coordination by tracking various measures of “intra-team coordination,” such as the number of assists and defensive rebounds per game. (Both stats require teammates to work together.) Sure enough, the too-much-talent effect was mediated by a drop-off in effective coordination, as teams with too many top-flight athletes also struggled with their chemistry. The egos didn’t gel; the players competed for the spotlight; all the talent became a curse.
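
The coordination story is, at bottom, a mediation analysis. Below is a loose Baron-Kenny-style illustration with made-up data – not the paper’s actual analysis – showing how the effect of excess talent on performance shrinks once a coordination measure enters the model.

```python
# An illustration of the mediation logic, with fabricated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
excess_talent = rng.uniform(0, 1, n)                           # "too much talent" term
coordination = -0.8 * excess_talent + rng.normal(0, 0.2, n)    # assists, defensive rebounds, etc.
performance = 0.9 * coordination + rng.normal(0, 0.2, n)

def fit(y, *predictors):
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit()

total = fit(performance, excess_talent)                  # total effect of excess talent
direct = fit(performance, excess_talent, coordination)   # effect after controlling for coordination

print(f"total effect of excess talent:  {total.params[1]:.2f}")
print(f"direct effect (with mediator):  {direct.params[1]:.2f}")
# If the direct effect shrinks toward zero, coordination mediates the damage.
```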

When I asked Galinsky for an example of a team undone by their surfeit of talent, he cited a 2013 quote from Mike D’Antoni, the head coach of a gifted Lakers team that woefully underperformed. (The starting five featured four probable Hall of Famers: Kobe Bryant, Steve Nash, Dwight Howard and Pau Gasol.) “Have you ever watched an All-Star game? It's god-awful,” D’Antoni said to reporters. “Everybody gets the ball and goes one on one and then they play no defense. That’s our team. That’s us. We’re an All-Star team.” The 2012-13 Lakers were swept by the Spurs in the first round of the playoffs.

Likewise, the LeBron era Miami Heat only succeeded once their talented stars learned how to work together. “When Dwyane Wade got hurt [in 2012], the Heat became a less talented team,” Galinsky says. “But I think his injury also made it clear that he was subordinate to James, and that James was the true leader of the team. That helped them play together. Having less pure talent actually increased their performance.” This suggests that the too-much-talent effect might explain a bit of the Ewing Theory, which occurs when a team performs better after the loss of one of its stars.

Of course, if athletic talent exists in tension with teamwork, then the effect should not exist in sports, such as baseball, that require less coordination. “If you have five starting pitchers, those pitchers don’t need to like each other, because they all start on different days,” Galinsky says. “Too much talent shouldn’t be a big problem.” (The scientists quote Bill Simmons in their paper, noting that baseball is “an individual sport masquerading as a team sport.”) To test this hypothesis, Galinsky et al. used the Wins Above Replacement stat, or WAR, to assess the talent level of every MLB player. Then, they looked to see how different levels of team talent were related to team performance. As predicted, the relationship never turned negative: for baseball clubs, having more highly skilled players was always better. “These results suggest that people’s lay beliefs about the relationship between talent and performance are accurate, but only for tasks low in interdependence,” write the scientists.

The relationship between team talent levels and team performance in MLB

These findings aren’t just relevant for sports teams. Rather, the scientists insist that the too-much-talent effect should apply to many different kinds of collective activity. While organizations place a big emphasis on acquiring top talent – it’s often their top HR priority – the importance of talent depends on the nature of the task. If success depends on the accumulation of individual performances – think of a sales team, or hedge fund traders – then more talent will lead to better outcomes. However, if success requires a high level of coordination among colleagues, then more talent can backfire, especially if the group lacks a clear hierarchy or well-defined roles. And that’s why the best basketball teams, Galinsky argues, feature talented athletes who focus on different aspects of the game. “No one would argue that the Jordan era Bulls teams weren’t incredibly gifted,” he says. “But Jordan, Pippen and Rodman all understood their roles. They knew what they needed to do.”

There is, I think, one final implication of this paper. In a world of moneyball GMs and SportVU tracking, it’s easy to dismiss the importance of team chemistry as yet another myth of the small data age, an intangible factor in a time of measurable facts. But this paper provides fans and coaches with a useful way of thinking about the importance of player chemistry, even if we still can’t reliably quantify it.* We’ve always known that team coordination matters, that a group of talented athletes can become more (or less) than the sum of their parts. But now we have empirical proof – a lack of chemistry is the one problem that more talent cannot solve.

*We might not be able to quantify player chemistry, but there does seem to be some consensus among players as to who has it. Talented athletes take big pay cuts to play with LeBron – he makes his teammates better – but Houston couldn't convince any superstars to play with Dwight Howard and James Harden.

Swaab, Roderick I., and Adam D. Galinsky. "Egalitarianism Makes Organizations Stronger: Cross-National Variation in Institutional and Psychological Equality Predicts Talent Levels and the Performance of National Teams." Organizational Behavior & Human Decision Processes (forthcoming).

Swaab, Roderick I., et al. "The Too-Much-Talent Effect: Team Interdependence Determines When More Talent Is Too Much or Not Enough." Psychological Science (2014).

"A Wandering Mind Is An Unhappy Mind"

Last year, in an appearance on the Conan O’Brien show, the comedian Louis C.K. riffed on smartphones and the burden of human consciousness:

"That's what the phones are taking away, is the ability to just sit there. That's being a person...Because underneath everything in your life there is that thing, that empty—forever empty. That knowledge that it's all for nothing and you're alone. It's down there.

And sometimes when things clear away, you're not watching anything, you're in your car, and you start going, 'Oh no, here it comes. That I'm alone.' It starts to visit on you. Just this sadness. Life is tremendously sad, just by being in it...

That's why we text and drive. I look around, pretty much 100 percent of the people driving are texting. And they're killing, everybody's murdering each other with their cars. But people are willing to risk taking a life and ruining their own because they don't want to be alone for a second because it's so hard."

The punchline stings because it’s mostly true. People really hate just sitting there. We need distractions to distract us from ourselves. That, at least, is the conclusion of a new paper published in Science by the psychologist Timothy Wilson and colleagues. The study consists of 11 distinct experiments, all of which revolved around the same theme: forcing subjects to be alone with themselves for up to 15 minutes. Not alone with a phone. Alone with themselves.

The point of these experiments was to study the experience of mind-wandering, which is what we do when we have nothing to do at all. When the subjects were surveyed after their session of enforced boredom – they were shorn of all gadgets, reading materials and writing implements – they reported feelings of intense unpleasantness. One of Wilson’s experimental conditions consisted of giving subjects access to a nine-volt battery capable of administering an unpleasant shock. To Wilson’s surprise, 12 out of 18 male subjects (and 6 out of 24 female subjects) chose to shock themselves repeatedly. “What is striking,” Wilson et al. write, “is that simply being alone with their own thoughts for 15 minutes was apparently so aversive that it drove many participants to self-administer an electrical shock that they had earlier said they would pay to avoid…Most people seem to prefer doing something rather than nothing, even if that something is negative.”

These lab results build on a 2010 experience-sampling study by Matthew Killingsworth and Daniel Gilbert that contacted 2,250 adults at random intervals via their iPhones. The subjects were asked about their current level of happiness, their current activity and whether or not they were thinking about their current activity. On average, subjects reported that their minds were wandering – thinking about something besides what they were doing – in 46.9 percent of the samples. (Sex was the only activity during which people did not report high levels of mind-wandering.) Here’s where things get disturbing: all this mind-wandering made people unhappy, even when they were daydreaming about happy things. “In conclusion,” write Killingsworth and Gilbert, “a human mind is a wandering mind, and a wandering mind is an unhappy mind.” Although we typically use mind-wandering to reflect on the past and plan for the future, these useful thoughts deny us our best shot at happiness, which is losing ourselves in the present moment. As Killingsworth and Gilbert put it: “The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost.”

Given these dismal results, it’s easy to understand the appeal of the digital world, with its constant froth of new information. To carry a smartphone is to never be alone; a swipe of the fingers turns on a screen that keeps us mindlessly entertained, the brain lost in the glow. It’s important to note, however, that Wilson et al. didn’t find any correlation between time spent on smartphones and the ability to enjoy mind-wandering. Contrary to what Louis C.K. argued, there’s little reason to think that our gadgets are the cause of our inability to be alone. They distract us from ourselves, but we’ve always sought distractions, whether it’s television, novels or a comic on a stage. We seek these distractions because, as Wilson et al. write, "it is hard to steer our thoughts in pleasant directions and keep them there." And so our daydreams often end up in dark places, as we ruminate on our errors and regrets. (It shouldn't be too surprising, then, that there's a consistent relationship between mind-wandering and dysphoria.) Here's Louis C.K. once again:

"The thing is, because we don't want that first bit of sad, we push it away with a little phone or a jack-off or the food...You never feel completely sad or completely happy, you just feel kinda satisfied with your product, and then you die. So that's why I don't want to get a phone for my kids.”

One last point. It's interesting to think of this new research in light of religious traditions that emphasize both the struggle of existence and the importance of living in the moment. According to the Buddha, the first noble truth of the world is dukkha, which roughly translates as “suffering.” This pain can't be escaped - everyone dies - but it can be assuaged, at least if we learn to think properly. (The Buddhist term for such thinking, sati, is often translated as mindfulness, or "attentiveness to the present.") Instead of letting the mind disengage, Buddhism emphasizes the importance of using meditative practice to stay tethered to the here and now. Because once you admit the big-picture sadness, once you accept the inevitability of sorrow and despair, then a wandering mind keeps wandering back to that brutal truth. The only escape is to embrace what's actually happening, even if it means sitting in a bare room, noticing the waves of boredom and sadness that wash over the mind. "Let the sadness hit you like a truck," Louis C.K. says, sounding a little bit like a foul-mouthed Buddha. "You're lucky to live sad moments."

The Skin Is A Social Organ

Your body is covered in hairy skin.* Below the surface of this skin are wispy sensory nerves known as C-fiber tactile afferents, or CTs. These nerves are designed to respond to gentle contact - even the slightest of indentations can turn them on, starting a cascade of electrical signals that ends with a feeling of touch. For a long time, the most notable fact about these nerves was their lack of speed: because CTs had no myelin insulation, they were about 50 times slower at transmitting sensory signals to the brain than myelinated A-fiber nerves.

And so a simple model of the touch system emerged: we had a fast pathway, modulated by A-fibers, which gave us quick and precise information about the surface of the body. Such a system had an obvious function, allowing us to touch the world, manipulate objects and monitor the body in space.

But if we have this fast sensory system, then why are the vast majority of nerves in hairy skin slow CT fibers? It’s like a customer with broadband keeping a dial-up modem around, just in case.
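
Some back-of-the-envelope numbers put that speed gap in perspective. The conduction velocities below are ballpark textbook figures – roughly 50 m/s for myelinated A fibers and 1 m/s for unmyelinated CT fibers – not values taken from the Neuron paper.

```python
# How long does a touch signal take to travel about a meter of nerve?
distance_m = 1.0            # rough path length from the forearm to the brain (assumed)

fiber_speeds = {
    "A-fiber (myelinated)": 50.0,    # meters per second, ballpark
    "CT fiber (unmyelinated)": 1.0,  # meters per second, ballpark
}

for name, speed in fiber_speeds.items():
    latency_ms = distance_m / speed * 1000
    print(f"{name}: ~{latency_ms:.0f} ms to reach the brain")
# A-fiber: ~20 ms; CT fiber: ~1000 ms -- a full second for the "slow" pathway.
```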

In recent years, however, it’s become clear that CT fibers are not merely an archaic back-up or useless redundancy. Rather, they are endowed with their own unique purpose, which is just as essential as the speedy transmission of A-fibers. In a new Perspective published in Neuron, the neuroscientists Francis McGlone, Johan Wessberg and Håkan Olausson lay out the argument. They suggest that a particular kind of C-fiber nerve is largely responsible for the emotional quality of touch, passing along crucial information about the “affective and rewarding properties” of the most tender contact. When we talk about the power of touch – say, the healing properties of a hug, or a gentle caress – we are talking about the powers of these slow nerves.

There are multiple strands of evidence. The first comes from neurological patients with selective damage to A-fibers, leaving them with a touch pathway composed exclusively of C-fibers. These people are mostly numb. However, this numbness comes with a strange loophole – if their skin is brushed gently at a low velocity (between 1 and 10 centimeters per second), they are still filled with pleasurable sensations. The feeling is vague – some patients couldn’t even identify the body quadrant that was being stroked – but everyone felt it.

The second piece of evidence is the inverse situation: patients with a rare genetic mutation that wipes out their C-fiber pathway, so that only A-fibers remain. While these patients have primarily been studied for their inability to feel pain – they are often oblivious to severe wounds, such as a broken bone – it turns out that they’re also less likely to experience pleasure from a soft touch.

These differences in the function of A and C fibers are echoed in the brain. While skin stroking in normal subjects triggers activation in the somatosensory cortex – the part of the brain that tells us where the sensation is coming from – patients with only C-fibers show a selective activation in the posterior insular cortex and other limbic areas. According to McGlone et al., this suggests that a class of touch-sensitive C-fibers have “excitatory projections mainly to emotion-related” systems in the brain. They are designed to fill us with feeling, not to tell us where in the flesh these feelings are coming from.

This all makes sense, if you think about it. We are creatures of touch, naked apes that still enjoy getting groomed. We soothe children with soft strokes and kiss the limbs of lovers; the skin is a social organ. While neuroscience tends to focus on vision and hearing as conduits for social information, McGlone et al. point out that the epidermis is also “the site of events and processes crucial to the way we think about, feel about, and interact with one another.”

These touches are most important during development. As Harry Harlow first observed, the absence of comforting contact is deeply stressful for young monkeys, leaving them with a wound from which they never recover. More recent studies have found that separating infant monkeys from their mother with a transparent screen – they could still hear, smell and see her – led to chronic activation of stress pathways in the brain. The stress was only diminished if the young monkeys were allowed to form “peer touch relationships,” suggesting that physical contact is required for normal brain development. Michael Meaney, meanwhile, has shown that rat pups born to mothers that engaged in lots of licking and grooming were much better at coping with stressful situations, such as the open-field test. They solved mazes more quickly, were less aggressive with their peers and lived longer lives. Meaney argues that these differences are driven by differences in the brain, as rat pups exposed to a surfeit of tender contact have fewer receptors for stress hormone and more receptors for the chemicals that attenuate the stress response.

And then there’s the tragic evidence from early 20th century orphanages and foundling hospitals. In these childcare institutions, there was an intense focus on cleanliness and efficiency. As the psychologist Robert Karen notes, this meant that babies were “typically prop-fed, the bottle propped up for them so that they wouldn’t have to be held during feeding. This was considered ideally antiseptic, and it was labor-saving as well.”

Unfortunately, such routines proved deadly. Although these hospitals supplied infants with adequate nutrition and warmth, they struggled to keep them alive. A 1915 review of ten infant foundling hospitals in the Eastern United States, for instance, concluded that up to 75 percent of the children died before their second birthday. (The best hospital in the study had a 31.7 percent mortality rate.) In fact, it wasn’t until the early 1930s, when pediatricians like Harry Bakwin began insisting that nurses touch the babies, that mortality rates declined. The soft touches, carried along by those CT nerves, were a kind of sustenance.

Of course, the newfound recognition of C-fibers doesn’t mean the mystery of emotional touch has been solved. The pleasure of contact isn’t just a bottom-up phenomenon, triggered by some peripheral nerves in the flesh. Rather, it’s entangled with all sorts of higher order variables, from the context of touch to the “relationship of the touchee with the toucher.” If anything, the fact that we’re only now beginning to outline the mechanics of the caress is a reminder that the nervous system is full of unknowns, threaded with wires we don’t understand. Somehow, in the milliseconds after the skin is stroked, we turn that mechanical twitch into a powerful feeling, which eases our anxiety and reminds us why it’s good to be alive.

*The only non-hairy parts of the skin - so-called glabrous skin - are found on the soles of the feet and the palms of the hands. 

McGlone, Francis, Johan Wessberg, and Håkan Olausson. "Discriminative and Affective Touch: Sensing and Feeling." Neuron 82.4 (2014): 737-755.

 

Pity the Fish

Consider the lobster; pity the fish. In his justly celebrated Gourmet essay, David Foster Wallace argued that the lobster was not a mindless invertebrate, but rather a creature capable of feeling, especially pain. Wallace made his case with the brute facts of comparative neurology - lobsters have plenty of pain receptors - but also with anecdotes of the kitchen, as the crustacean resists its boiling death.  "After all the abstract intellection," Wallace writes, "there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience."

I was thinking of Wallace's essay while reading a new paper in Animal Cognition by Culum Brown, a biologist at Macquarie University in Australia. Brown does for the fish what Wallace did for the lobster, calmly reviewing the neurological data and insisting that our undersea cousins deserve far more dignity and compassion than we currently give them. Brown does not mince words:

"All evidence suggests that fish are, in fact, far more intelligent than we give them credit. Recent reviews of fish cognition suggest fish show a rich array of sophisticated behaviours. For example, they have excellent long-term memories, develop complex traditions, show signs of Machiavellian intelligence, cooperate with and recognise one another and are even capable of tool use. Emerging evidence also suggests that, despite appearances, the fish brain is also more similar to our own than we previously thought. There is every reason to believe that they might also be conscious and thus capable of suffering."

What makes this review article so necessary is that, as Brown notes, fish are afforded virtually no protections against human cruelty. They are the most consumed animal; the most popular pet; the only creature for which it’s an acceptable leisure activity to hook them with a metal barb and then reel them, against their frantic wishes, into an environment in which they will slowly suffocate to death, drowning in air.

Such suffering is ignored because we assume it doesn't exist; fish are supposed to be primitive beasts, cold-blooded and unconscious. But Brown gathers a persuasive range of evidence highlighting our error: 

  • Fish are exquisitely sensitive creatures, with perceptual abilities that track (or exceed) those of mammals. 
  • Fish can learn a simple Pavlovian conditioning task - light paired with food - significantly faster than rats and dogs. They also exhibit one-trial learning: pike that have been hooked often become "hook shy" for over a year.
  • Fish have incredible spatial memories. Gobies, for instance, sometimes leap from rock pool to rock pool. "Even after being removed from their home pools for 40 days, the fish could still remember the location of surrounding pools," writes Brown. "This astonishing ability makes use of a cognitive map built up during the high tide when the fish are free to roam over the rock platform."
  • Fish exhibit social learning. Salmon born in a hatchery can be taught to recognize unknown live prey by pairing them with fish that already feed off the prey. Guppies can pass along foraging routes; some scientists speculate that the recent shift in cod spawning grounds reflects the "systematic removal of older, knowledgeable individuals by commercial fishing."
  • Fish know each other. Guppies can easily recognize up to 15 individuals. If allowed to choose, fish prefer to shoal with fish they have met before.
  • Fish exhibit a high degree of social intelligence.  "If a pair of fish inspects a predator," Brown writes, “they glide back and forth as they advance towards the predator each taking it in turn to lead. If a partner should defect or cheat in any way, perhaps by hanging back, the other fish will refuse to cooperate with that individual on future encounters." Or look at the cleaner wrasse, which removes parasites and dead skin from the surface of "client fish." Each wrasse has a large set of regular customers, who they seek to please in order to ensure return business. "If the cleaner should accidentally bite the client, then the client will rapidly swim away. But the cleaner has a mode of reconciliation; they chase after the distraught client and give them a back rub, thus enticing them to come again." Interestingly, the wrasse are far less likely to nip predatory fish, suggesting that they are able to categorize clients according to their aggressive potential. 
  • Fish build nests and use tools. At least 9,000 species of fish construct nests, either for eggs or shelter. Wrasse species often use rocks to crush sea urchin shells; they use anvils to break open shellfish. Meanwhile, cod in the laboratory figured out how to use tiny metal tags embedded in their backs to operate a feeder.
  • Fish rely on the same basic circuitry of nerves to process pain as mammals. This shouldn’t be too surprising: the pain receptors in all vertebrates are descended from an early fishlike ancestor. Furthermore, there’s evidence that fish also respond to pain in a “cognitive sense” – they have an experience of suffering. Brown cites a study showing that fish injected with acetic acid display “attention deficits,” and lose their fear of novel objects. Presumably, he writes, this is because “the cognitive experience of pain is dominant over or overshadows other processes.”

Brown concludes his review by arguing that fish deserve to be included in our “moral circle.” These vertebrates are worthy of the same protections against wanton suffering that we offer to most land-based mammals. And yet, Brown readily admits that, given current fishing practices, the “ramifications for such animal welfare legislation…is perhaps too daunting to consider.” Billions of humans depend on fish for sustenance, but there is no way to catch a fish without being cruel.

My own dietary decisions are harder to defend. I don't fish, but I love to eat them. Wild salmon is my favorite. Brown’s paper reminded me of a wonderful Stanley Kunitz poem, “King of the River.” The poem describes the heroic journey of a Pacific salmon, as it returns to the fresh water of its birth to spawn and die. If Brown makes the empirical case for fish – they know more than we think, they feel more than we want them to – then Kunitz takes us inside the strange mind of the orange-fleshed vertebrate, swimming madly upriver, its suicidal trip driven by a familiar mixture of “nostalgia and desire.”

"A dry fire eats at you.

Fat drips from your bones.

The flutes of your gills discolor.

You have become a ship for parasites.

The great clock of your life

is slowing down,

and the small clocks run wild.

For this you were born."

Brown, Culum. "Fish intelligence, sentience and ethics." Animal Cognition (2014).

The Violence of the Pass

Football is going to change. That much is clear. The correlation between the impacts sustained on the football field and the brain damage of players is no longer just a correlation: it’s starting to look like a tragic cause.

But how is the sport going to change? There will be better helmets, of course, and stricter rules about helmet-to-helmet contact, and more accurate monitoring of head trauma.  We’ll start tracking the linear acceleration (g) of skulls as carefully as we track the stats of quarterbacks.

However, it’s also worth considering the ways in which the concussion crisis will interact with pre-existing football trends. Over the last decade, the single most notable shift within the sport has been the rise of the passing offense, with the number of passing yards increasing by roughly 20 percent. In 2003, as Ty Schalter notes, only Indianapolis used the shotgun offense more than 30 percent of the time. (Three teams never used the shotgun at all.) By 2012, most teams were approaching a shotgun usage rate of 40 percent or higher.

At first glance, this shift towards passing might seem like an effective response to the concussion crisis. Studies relying on head telemetry data – they use special helmets outfitted with a network of sensors – show that linemen and linebackers sustain, by far, the most sub-concussive hits over the course of a game. (Running backs take the hardest hits.) Fewer running plays, then, should translate to less wear and tear on the brains of those players brawling at the line of scrimmage. (Pass routes are the only part of the game in which, after five yards, no meaningful contact is allowed; it’s football pretending to be basketball.) When a pass-dominant offense takes the field, the game is still violent, but the violence seems contained. More spread equals less smash mouth.

Alas, a new paper by Douglas Martini, James Eckner, Jeffrey Kutcher and Steven Broglio at the University of Michigan, Ann Arbor, suggests that the rise of the passing offense will do little to quell the concussion crisis. In fact, it might even be making the problem worse. In their study, Martini et al. tracked 83 high school football athletes using the HITS head impact telemetry system. While most public attention has focused on the brains of NFL players, these highly paid athletes actually represent a very small sliver of those at risk. There are, give or take, a few thousand players on NFL payrolls. There are approximately 68,000 football players at the college level. And there are 1.2 million football players at the high school level.

The question investigated by these researchers was whether or not offensive style influenced the amount and distribution of head impacts. One team utilized a run-first offense (RFO); the other used a pass-first offense (PFO). The RFO team passed, on average, 8.8 times a game and ran the ball 32.9 times, while the PFO passed 25.6 times and ran 26.3 times. 

So what did they find? The first thing to state is the obvious: football is a contact sport. These 83 teenagers endured 35,681 head impacts over the course of the season; at least six of these impacts resulted in serious concussions. 

What’s more, the different offensive styles resulted in significantly different patterns of impact. The running offense generated about 1.5 times as many total head blows as the passing offense – many of these occurred during practice – while the passing offense generated bigger average blows, especially during the games. This was true across every measure of head impact, from linear acceleration (g) to the overall hit severity profile (HITsp). In short, when teams throw the ball in the air, there are fewer total hits, but each hit is harder, especially for skill position players, such as running backs and wide receivers. The scientists speculate that the root cause of these differences is simple physics, as players in the pass offense are “able to reach higher running velocities before contacting an opponent than the equivalent RFO athletes… As such, the PFO athletes would have larger initial velocities that resulted in greater deceleration values following impact.” And it’s the deceleration that’s dangerous, as the soft brain lurches into the hard bones of the skull. This helps explain why, in 2012 and 2013, receivers and cornerbacks sustained more concussions than any other positions in the NFL. Their speed across the field more than makes up for their lack of mass.
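
Here is a crude worked example of that physics: average deceleration is just the change in velocity divided by the duration of contact, expressed in multiples of g. The closing speeds and contact time below are illustrative assumptions, not values from the Martini et al. dataset.

```python
# Why higher closing speed means harder hits: a = delta_v / delta_t.
G = 9.81                    # m/s^2 per g

def impact_g(delta_v_ms, contact_time_s=0.015):
    """Average head deceleration, in multiples of g, for a given velocity change."""
    return delta_v_ms / contact_time_s / G

run_play_speed = 4.0        # m/s, a collision near the line of scrimmage (assumed)
pass_play_speed = 8.0       # m/s, receiver and defender at full stride (assumed)

print(f"run-play hit:  ~{impact_g(run_play_speed):.0f} g")
print(f"pass-play hit: ~{impact_g(pass_play_speed):.0f} g")
# Doubling the closing speed doubles the average deceleration the brain absorbs.
```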

The larger lesson is that there appears to be a fundamental tradeoff between the frequency of hits in a football game and their magnitude. (More research on this subject is desperately needed - the NFL should install head telemetry units in every helmet.) The passing attack might look less aggressive, but appearances can be deceiving; the elegant throws still end with a cloud of dust. If nothing else, this study is yet another reminder that head violence is an intrinsic part of football, and not a by-product of a particular style of play.  

Martini, Douglas, et al. "Subconcussive head impact biomechanics: comparing differing offensive schemes." Medicine and Science in Sports and Exercise 45.4 (2013): 755-761.


Cohesion, PTSD and War

I’ve been reading Head Strong, an excellent new book by Michael D. Matthews, a professor of engineering psychology at West Point. The book describes the history and future of military psychology, from the birth of intelligence testing during WWI to the next generation of immersive battlefield simulations.

Not surprisingly, the problem of Post-Traumatic Stress Disorder (PTSD) is a recurring theme, as Matthews discusses recent attempts by the Armed Forces to promote resilience. (“The military does a good job of teaching its soldiers to kill. But it does not do a good job of teaching them to cope with it,” he writes.)  Matthews details the Comprehensive Soldier Fitness (CSF) program, based on the work of Martin Seligman, and the unintended consequences of creating weapons systems so effective that they “give the individual soldier the firepower of a traditional squad or platoon.” One potential downside of these new systems, Matthews argues, is that American soldiers will gain the ability to control a large territory by themselves, and thus end up isolated from their comrades. “Soldiers fight for their buddies, who traditionally they could literally reach out and touch,” he writes. While technology makes the dispersal of troops possible, Matthews suggests there will be no substitute for the “physical presence of others,” especially when soldiers are “placed in situations of mortal danger.”

Interesting stuff. But there was one data point in the book that I couldn’t stop thinking about, even though Matthews mentions it almost as an aside. While pointing out that PTSD rates vary widely between military units – the overall rate for deployed soldiers hovers between 10 and 25 percent – Matthews notes that “highly trained and specialized units including SEAL teams, Rangers, and other elite organizations” have proven far more resistant to the disorder. (Their PTSD rates are typically less than five percent.) What makes this statistic even more surprising is that these elite units tend to see frequent and intense combat – in objective terms, they have experienced the most trauma. And yet, they seem the least troubled by its aftermath.

Why are elite units so resilient? There are many variables at work here; PTSD is triggered by a multitude of risk factors. For starters, elite units tend to be better educated and in better physical condition, both of which are correlated with a reduced incidence of PTSD. Self-selection also plays a role: anyone tough enough to become a Ranger or SEAL has learned how to handle stress and hardship.

Matthews, however, mentions a protective factor that is often overlooked, at least in popular discussions of PTSD: unit cohesion. According to Matthews, elite units are “highly cohesive"; the soldiers form close relationships, built out of their shared experiences. In Pentagon surveys, they are more likely to agree with statements such as “my unit is like family to me,” or “members of my unit understand me.” 

A series of recent studies backs up Matthews’ argument, highlighting the protective effects of unit cohesion. One analysis of 705 Air Force medical personnel deployed as part of Operation Iraqi Freedom found a “significant linear interaction…such that greater cohesion was associated with lower levels of PTSD symptom severity.” When stress exposure was high, for instance, medics in the most cohesive units reported PTSD symptoms that were approximately 25 percent less severe, at least as measured by the military’s PTSD checklist. Another study of 4,901 male personnel from the UK armed services (Royal Navy, Royal Marines, British Army and Royal Air Force) concluded that unit cohesion was associated with significantly lower levels of PTSD and other mental disorders, such as depression. The British scientists end their paper by stressing the importance of fostering unit cohesion among soldiers, given "that so many other factors which have a positive association with higher levels of mental health problems are un-modifiable (for example, family background and exposures on deployment)." When it comes to PTSD, cohesion isn't just an incredibly important variable – it's a variable the Armed Forces can influence.

The explanation for these results is straightforward: in the aftermath of a terrible life event, other people are the best medicine. It doesn’t matter if we’re being helped by another soldier or a loving spouse - it’s really hard to get over the trauma alone. According to a highly cited meta-analysis of the risk factors associated with PTSD, a lack of social support is incredibly dangerous for those dealing with an acute stressor. (Among military subjects, a lack of social support was the single most important risk factor; among civilians, it placed second.) Close relationships, in this sense, are the ultimate coping mechanism, allowing us to survive the worst parts of life.

In some instances, the presence of close relationships seems to matter more than the stressor itself. Consider a natural experiment that took place during World War II, when approximately 70,000 young Finnish children were evacuated to temporary foster homes in Sweden and Denmark. For the kids who stayed behind in Finland, life was certainly filled with moments of trauma and stress — there were regular air bombardments, severe food shortages and invasions by the Soviets and the Germans. Those kids sent away, however, experienced a different kind of stress. Their wartime experiences might have featured less actual war, but the lack of social support would prove, over time, to be even more dangerous. A 2009 study found that Finnish adults who had been sent away from their parents between 1939 and 1944 were nearly twice as likely to die from cardiovascular illness as those who had stayed at home. A follow-up study found that these temporary war orphans also showed higher levels of stress hormone, stress reactivity and depression, sixty years after they’d been separated from their families. Chronic stress sucks. But chronic stress in the absence of supportive relationships can be crippling.

Perhaps this is why soldiers in elite units are so resilient. When the Armed Forces take unit cohesion seriously, they turn out to be remarkably good at it, able to create deep, emotional bonds among their members. Over time, these relationships become an essential part of how soldiers cope with the violence. While unit cohesion has traditionally been seen through the prism of combat performance – more cohesive units perform better in battle – it seems likely that the biggest benefits of cohesion come after the war.

Matthews, Michael D. Head Strong: How Psychology is Revolutionizing War. Oxford University Press, 2013.

Dickstein, Benjamin D., et al. "Unit cohesion and PTSD symptom severity in Air Force medical personnel." Military Medicine 175.7 (2010): 482-486.

The Ritual Effect

Stella Artois is an old beer with a long history. The original brewery (Den Hoorn) was founded in 1336 in Leuven, Belgium. In 1717, Sebastian Artois bought the brewery and promptly renamed it after himself.  The company has been brewing a pale lager ever since.

To celebrate this history, Stella Artois has developed a Nine Step Pouring Ritual. The first step is The Purification, in which the Stella branded chalice is given a cold water bath. Then comes The Sacrifice, as the bartender squanders the first few drops of beer to “ensure the freshest taste.” After that comes The Liquid Alchemy – “the chalice is held at 45 degrees for the perfect combination of foam and liquid” – and The Crown, whereby the chalice is straightened out. The final steps are a blur of movement: there is The Removal, The Beheading – the bartender trims the foam with a knife – The Judgment, The Cleansing and The Bestowal, in which the beer is presented on a clean coaster, with the logo facing outward. 

It’s a silly ceremony, made all the sillier by its Seriousness. And while Stella might like you to think that their pouring ritual is some medieval sacrament invented by Trappist monks, it’s actually a fairly recent marketing ploy. (The ritual appears to have been first codified as part of the World Draught Master Competition in the late 1990s.) It’s also a remarkably successful gimmick, helping distinguish Stella from all the other fizzy, refreshing and tasteless beers on the supermarket shelf. According to the experts over at Beer Advocate, Stella is a worse beer than its corporate sibling, Budweiser. (Stella scores a 73, while Bud gets an 80.) And yet, Stella is typically 25 percent more expensive, both at bars and in stores.

So why am I writing about this mediocre and "reassuringly expensive" beer? A recent paper in Psychological Science, led by Kathleen Vohs at the Carlson School of Management at the University of Minnesota, begins to explain why rituals like the Nine Step Pour are so effective. When acted out, these rituals don’t merely enhance our perception of the brand. They enhance our perception of the beer.

Vohs and her colleagues (Yajin Wang, Francesca Gino and Michael Norton) conducted four separate experiments. In the first experiment, 52 students were randomly assigned to one of two conditions: ritual or no ritual. In the ritual condition, the students were given the following instructions: “Without unwrapping the chocolate bar, break it in half. Unwrap half of the bar and eat it. Then, unwrap the other half and eat it.” In the no ritual condition, students were merely given the candy, without any instructions.

As expected, those in the ritual condition enjoyed the chocolate more than those who simply consumed it. They spent more time “savoring” the candy bar, thought it was more flavorful, and were willing to pay about 75 percent more money for it.

In another experiment, Vohs and colleagues showed that the same logic could be applied to carrots. (This time the ritual consisted of rapping on the desk and taking deep breaths.) Once again, the differences were stark: those assigned to the ritual group reported higher levels of anticipated and experienced enjoyment.  

The last two studies attempted to explain why rituals enhance consumption. Vohs and colleagues showed that “personal involvement” is crucial – watching a stranger perform a ritual with lemonade didn’t make the drink taste better – and that rituals increase our “intrinsic interest” in whatever we’re eating.

This is a nice example of social science clarifying a cultural quirk. After all, rituals are everywhere, especially around food and drink. (There's grace before dinner, the Oreo cookie "twist, lick and dunk," a sommelier presenting the cork, a barista making a pour-over, etc.) Even when the steps themselves are meaningless, they give more meaning to whatever happens next.

Walter Benjamin famously argued that art began "in the service of a ritual," that its "aura" was "embedded" in a larger set of acts and ceremonies. The invention of mechanical reproduction changed all that, Benjamin wrote, "emancipating the work of art from its parasitical dependence on ritual." This came with happy consequences – we could buy a Rothko poster for the bedroom – but it also stripped many products of their artisanal roots. Consider the beer shelf: the same multinational company makes Stella, Budweiser and Corona, and they pretty much taste the same.

I think Benjamin would be amused by the ways in which our age of mass production has returned us to ritual, as we seek to differentiate all these products that aren't very different at all. (Your favorite craft IPA probably doesn't need a nine-step pour.) These rituals pretend to have a function – Stella says it's about getting the fizz right – but they're really there to elevate the ordinary. For a few moments after The Bestowal, as we stare at that logo-covered chalice handed to us by the bartender/brand ambassador, it's possible to believe that this generic beer actually has a twinge of aura.

Vohs, Kathleen D., et al. "Rituals enhance consumption." Psychological Science 24.9 (2013): 1714-1721.

A Science of Self-Reports?

In 1975, the psychologists Stephen West and T. Jan Brown conducted an investigation into the factors that made people more likely to help a stranger. What made their study unique is that they conducted the experiment twice, using two different methods.

In the first study, they staged a crisis. Sixty men walking on a college campus were stopped by a woman who made the following request:

“Excuse me, I was working with a rat for a laboratory class and it bit me. Rats carry so many germs – I need to get a tetanus shot right away but I don’t have any money with me. So I’m trying to collect $1.75 to pay for the shot.”

In some conditions, the woman held her hand as if it had been bitten; in other conditions, her fist was wrapped in gauze that had been soaked in artificial blood. Sometimes she wore an “attractive pant outfit and was tastefully made up” and sometimes she wore a blonde wig, white face powder and dark lipstick, “all of which were inappropriate for her natural complexion.”

Not surprisingly, men offered the most help when the woman was attractive and in urgent need of help, giving her an average of 43 cents. (Every man stopped to help in this condition.) In contrast, an “unattractive” woman with a bloody bandage received 26.5 cents on average, and only 80 percent of men offered help. The less severe conditions led to even less assistance: the men donated approximately 13.5 cents, with two-thirds providing some amount of money.

So far, so obvious: when deciding whether or not to help a stranger, the most important variable is the severity of the situation. We might stop for a head-on collision, but not for a fender bender. If you're asking for money, it’s better to be good-looking.

But the most intriguing part of the paper came when the scientists tried to replicate their field study in a lab. Instead of faking an emergency on the street, the sixty male subjects were read a description of the injury (severe/not severe) and shown a photograph of the woman (attractive/unattractive). Then, the men were asked how much money they would be willing to give her.

In this “interpersonal simulation,” the men were very generous. Interestingly, they gave the woman the most money in the unattractive/severe condition, offering her an average of $1.20, or four and a half times what their peers offered in real life. The same basic pattern persisted across every situation, with the men giving her far larger sums when she was a hypothetical. The lab subjects also insisted they wouldn’t be swayed by her appearance - they said they'd give more when she was less attractive - even though the field test strongly suggested otherwise. West and Brown conclude their 1975 paper with a warning: “The comparison of the results of the field experiment and the interpersonal simulation raise serious questions concerning the validity of the latter approach as a strategy for investigating human social behavior.”

I first learned about this study from a fascinating critique of modern psychology, published in 2007 by the psychologists Roy Baumeister (Florida State University), Kathleen Vohs (Carlson School of Management, University of Minnesota) and David Funder (University of California, Riverside). In “Psychology as the Science of Self-Reports and Finger Movements,” Baumeister, et al. hold up the results of the West/Brown study as an example of the unsettling discrepancy between what we think we’ll do and what we actually do. Because it turns out that such discrepancies are a recurring theme in the literature. For instance, Baumeister, et al. note that “affective forecasting studies” – research in which people are asked how they will feel if x happens – “systematically show the inaccuracies of people’s predictions” about their own future emotions. Meanwhile, financial decision-making research reveals that people are “moderately risk averse” when dealing with pretend money, but become far more risk averse when large amounts of real cash are involved. Other experiments show that merely asking people about their preferences can alter their preferences; the act of introspection has a distorting effect. As the psychologist Timothy Wilson famously argued, we are all “strangers to ourselves.”

And yet, despite this surplus of evidence, Baumeister and colleagues document a steady decline in the percentage of studies that actually look at behavior, and not just our predictions of it. Their data, drawn from research published in the elite Journal of Personality and Social Psychology over the last forty years, show behavioral studies becoming an ever-smaller share of the field.

As the psychologists note, this is a troubling situation for a science that is typically described as the study of human behavior. Instead of observing humans in vivo, the vast majority of these papers rely on questionnaires, tests and stimuli flashed on computer screens. Subjects predict their actions rather than act them out. But Baumeister et al. point out that such methodologies leave out a lot of the complexity that makes people so interesting. In fact, many of the canonical studies of modern psychology, such as the Milgram obedience study, the Stanford Prison Experiment and Mischel's marshmallow task, derive their power from the contradiction between predicted behavior - I wouldn't do that! - and our actual behavior. What's more, the "eclipse" of behavioral studies is inevitably shrinking the range of possible psychological subjects, as much of human nature cannot be easily reduced to a self-report. Here are the scientists, getting frisky:

“Whatever happened to helping, hurting, playing, working, taking, eating, risking, waiting, flirting, goofing off, showing off, giving up, screwing up, compromising, selling, persevering, pleading, tricking, outhustling, sandbagging, refusing, and the rest? Can’t psychology find ways to observe and explain these acts, at least once in a while?”

There are, of course, a number of factors behind this shift away from behavior. Field studies are riskier and more expensive; institutional review boards are more likely to object to behavioral experiments, as they might upset subjects; in the 1970s, peer-reviewed journals began explicitly favoring psychology articles with multiple studies and, as Baumeister et al. note, “it is far easier to do many studies by seating groups in front of computers…than to measure behavior over and over.”

To be clear: there is nothing inherently wrong with self-reports. In their paper, Baumeister, Vohs and Funder repeatedly emphasize the value of non-behavioral research, especially for certain subject areas. However, the shortcomings of this approach have also been clearly established – when we talk about ourselves, we often don’t know what we’re talking about.

Baumeister, et al. don’t sound very optimistic that this experimental trend can be reversed. (They call for an “affirmative action for action,” with journals and funding agencies giving “a little extra preference” to papers and proposals that measure behavior.) In the meantime, perhaps we should all just remember the intrinsic limitations of studies that rely exclusively on self-reports – a caveat worth keeping in mind when reading the papers themselves and when reading blog posts about such papers.

West, Stephen G., and T. Jan Brown. "Physical attractiveness, the severity of the emergency and helping: A field experiment and interpersonal simulation." Journal of Experimental Social Psychology 11.6 (1975): 531-538.

Baumeister, Roy F., Kathleen D. Vohs, and David C. Funder. "Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior?" Perspectives on Psychological Science 2.4 (2007): 396-403.

Materialism and Its Discontents

“To do or to have?” That Hamlet-like question is the title of a scientific paper by Leaf Van Boven and Thomas Gilovich, published several years ago in The Journal of Personality and Social Psychology. It’s a simple paper, just a few pages long, but I doubt there's another piece of social science that I think about more during a typical day. In essence, the scientists tried to solve the problem of scarce resources. If our goal is to maximize happiness, then how should we spend our money? Should we buy things? Or should we buy experiences?

At first glance, the answer seems obvious – buy things! Things last! We can return to things. Experiences, on the other hand, are inherently ephemeral; they can only be consumed once. Buying an experience is like setting money on fire. 

But this intuition is exactly backwards - the person who wants more toys has misunderstood the nature of happiness. Van Boven and Gilovich demonstrated this by conducting a number of straightforward experiments. In one survey, they asked people to describe a recent purchase that was made with “the intention of advancing your happiness and enjoyment in life.” It turned out that those who described the purchase of an experience, such as a music concert or trip to the beach, reported much higher feelings of happiness than those who purchased objects. They were more likely to consider the money well spent and less likely to wish they’d bought something else instead. Similar results emerged from follow-up surveys, as reminding subjects of a recent “experiential purchase” made them happier than reminding them of a recent material purchase. Most impressive, perhaps, is that this effect seems to increase over time. While objects depreciate - we habituate to their delights - experiences become even more valuable, as we return again and again to the pleasurable memory. (The scientists refer to this as the process of positive reinterpretation.) One lasts, the other doesn’t. But what lasts isn’t what we can hold in the hand. 

I bring this paper up because, in the last year or so, there have been a number of very interesting studies on materialism and its discontents. While Van Boven and Gilovich showed that purchasing experiences made us happier, these new studies help reveal why purchasing things does not. They expose the heart of darkness inside every mall.

  • Marsha Richins, in the Journal of Consumer Research, showed that “high materialism consumers” typically experience a post-purchase hangover. While they were extremely excited about the object before they bought it – they imagined all the ways it would make their lives better – that excitement quickly dissipated once they actually possessed the object. According to Richins, this disappointment is rooted in a false belief among the most materialistic shoppers that “purchase of the desired product will transform their lives in significant and meaningful ways…For these consumers, the state of anticipating and desiring a product may be inherently more pleasurable than product ownership itself.” 
  • A team of psychologists conducted three longitudinal studies looking at the relationship between materialism and well-being. The results were clear-cut: “Across all three studies, results supported the hypothesis that people’s well-being improves as they place relatively less importance on materialistic goals and values, whereas orienting toward materialistic goals relatively more is associated with decreases in well-being over time.” In their most interesting experiment, the psychologists exposed a sample of “highly materialistic US adolescents” to a financial education program called “Share Save Spend,” which encourages people to balance spending with sharing and saving. Those teens randomly assigned to the intervention showed a decrease in materialism and an increase in self-esteem. They bought less, and thought better of themselves.
  • In the journal Communication Research, a group of Dutch psychologists help reveal the roots of materialism. They place the blame, at least in part, on advertisements targeting children, noting that kids who saw the most ads were also the most materialistic. This new study builds on previous work by the lead researcher, Susan Opree, which suggested that the “material values portrayed in advertising teach children that material possessions are a way to cope with decreased life satisfaction.”
  • A new study led by psychologists at Baylor University found that people who scored high on measures of materialism were also less grateful for what they had. According to their statistical analysis, this lack of gratitude was largely responsible for the observed relationship between materialism and decreased life satisfaction.

Taken together, the psychological literature on materialism is a fairly persuasive critique of modern capitalism, which conditions us to seek happiness in all the wrong places. That said, I’m most intrigued by a 2013 study on materialism and loneliness by Rik Pieters at Tilburg University, if only because his study complicates, ever so slightly, the strong version of the anti-materialism argument. It shows that materialism is usually a terrible way to seek life satisfaction, but that it’s not always terrible. Some materialists live delighted lives.

First, a brief taxonomy. It’s generally recognized that there are three subtypes of materialism. The first is material measure, which is the tendency to see possessions as a status signal or sign of success. (You buy the Porsche because it shows you can afford it.) The second is material medicine, in which purchases are seen as a quick way to elevate levels of future happiness. (You buy the Porsche because you believe the car will make your future self content.) Lastly, there’s material mirth, a world-view in which material possessions are believed to be part of the good life. (You buy the Porsche because it’s a beautiful car.)

Pieters was interested in the causal relationship between materialism and loneliness, as numerous studies have quantified the severe negative consequences of the lonely life. (According to one recent study of older people led by John Cacioppo, feelings of extreme loneliness increase the risk of premature death by 14 percent, which is roughly twice the impact of obesity.) Although it’s often speculated that materialism causes loneliness – our obsession with things leads us to neglect our relationships – Pieters wondered if the “influence might also run in the opposite direction.” Perhaps we aren’t lonely because we’re always shopping. Perhaps we shop because we’re always lonely.

To untangle this causal knot, Pieters collected data from 2,500 consumers between 2005 and 2010. He gave them standard surveys to measure materialism and its subtypes, asking people to rate, on a scale from 1 to 5, the extent to which they agreed with a series of statements about shopping and happiness. (“I like to own things that impress people,” “I like a lot of luxury in my life,” “Buying things gives me lots of pleasure,” etc.). They were also assessed in terms of loneliness, and asked whether or not they agreed with sentences about their social life. (“I feel in tune with the people around me,” “There is no one I can turn to,” “I feel left out,” etc.) By studying the ebb and flow of materialism and loneliness over time, Pieters was able to detect some interesting statistical relationships.

His most important finding was that materialism and loneliness often exist in a so-called vicious cycle, so that materialistic tendencies make us feel lonely, which leads us to seek comfort in purchases and possessions, which only makes us feel even lonelier. It’s a downward spiral that ends with lots of misery and credit card debt. Interestingly, loneliness seemed to have a bigger causal effect on materialism than materialism did on loneliness. This suggests that the best way to escape the “materialistic treadmill” is to make some new friends. 

But there’s an interesting exception to the rule. While two subtypes of materialism were locked in a vicious loop with loneliness – the worst was material medicine, followed by material measure – there was one subtype of materialism that was actually associated with reduced feelings of loneliness. Those who score high in material mirth, Pieters writes, are those who “derive pleasure from the process of buying things,” enjoy spending money on “things that are not practical,” and like “a lot of luxury in life.”

Why is this mindset so much more effective? Nobody really knows. Pieters speculates that part of the answer has to do with intrinsic motivation, as those high in mirth tend to buy things for the simple reason that buying things is fun. Their materialism is not about impressing others, or improving the mood of a future self – it’s about the sheer delight of spending money. Such an attitude, Pieters writes, might spill over and “indirectly improve social relationships,” as mirthful people also tend to lavish cash on family vacations, nice meals and other shared experiences.

Perhaps. Or maybe those merry materialists just like what they bought. Here's the great Frederick Seidel, the poet laureate of material mirth, writing about his new Ducati motorcycle in a poem called "Fog":

I spend most of my time not dying.
That’s what living is for.
I climb on a motorcycle.
I climb on a cloud and rain.
I climb on a woman I love.
I repeat my themes.

Here I am in Bologna again.
Here I go again.
Here I go again, getting happier and happier.

The motorcycle, says Seidel, is not merely a thing. It's an experience. If we live our life right, what's the difference?

Van Boven, Leaf, and Thomas Gilovich. "To do or to have? That is the question." Journal of Personality and Social Psychology 85.6 (2003): 1193.

Pieters, Rik. "Bidirectional Dynamics of Materialism and Loneliness: Not Just a Vicious Cycle." Journal of Consumer Research 40.4 (2013): 615-631.

 

Why Do We Watch Sports?

Why do we watch sports? It's a simple question with a complicated answer. Sports are a huge entertainment business – the NFL alone generates at least $7 billion a year in television revenue  – so it’s easy to lose sight of their essential absurdity. In essence, we are watching freakishly large humans in tight polyester outfits play with balls. They try to get these balls into cups, goals, baskets and end zones. It's a bizarre thing to get emotional about. 

There's no shortage of social science that tries to pin down the appeal of sports. There's the tribal theory, and the mirror-neuron account, and the patterning hypothesis, which argues that sports take advantage of our tendency to hallucinate patterns in the noise. (Slot machines are fun for the same reason.) All of these speculations are probably a little bit true.

But I'm most intrigued by the so-called talent-luck theory, which was first proposed by the UCSD psychologist Nicholas Christenfeld in 1996. (His short paper has only been cited a single time, but I think it’s a brilliant little conjecture.) Here's the model in short form: humans like watching feats of physical talent, but we still want to be surprised. As a result, the most successful sports (i.e., those on SportsCenter) have found a way to engineer an ideal balance of skill and randomness. Thanks to chance, the underdog (which is a polite way of saying the less talented team) still has a shot.

So what’s Christenfeld’s evidence? He relied on a popular statistical measure known as the split-half reliability coefficient. The measure is often used when assessing the reliability – that is, the internal consistency – of a psychological test. Let’s say, for instance, that you’ve developed a new cognitive assessment designed for NFL quarterbacks. To measure the internal consistency of the test, you randomly divide the questions into two groups. The split-half reliability is the correlation between scores on the two halves, with higher correlations signaling higher test reliability. (The best tests are said to “hang together.”) In other words, if each quarterback’s score on one half of the test tracks his score on the other half, then the test is probably measuring something, even if we still don’t know what that something is.
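
To make that concrete, here is a minimal sketch of a split-half calculation – my own illustration in Python, with made-up data and function names, not anything from Christenfeld's paper:

    import numpy as np

    def split_half_reliability(scores, seed=0):
        """Correlate each subject's total on one random half of the items
        with his total on the other half (scores: subjects x items)."""
        rng = np.random.default_rng(seed)
        n_items = scores.shape[1]
        order = rng.permutation(n_items)
        half_a, half_b = order[: n_items // 2], order[n_items // 2:]
        totals_a = scores[:, half_a].sum(axis=1)
        totals_b = scores[:, half_b].sum(axis=1)
        return np.corrcoef(totals_a, totals_b)[0, 1]

    # Hypothetical data: 30 quarterbacks answering 20 questions (1 = correct)
    rng = np.random.default_rng(1)
    ability = rng.normal(size=(30, 1))                        # latent skill
    scores = (rng.normal(size=(30, 20)) + ability > 0).astype(int)
    print(split_half_reliability(scores))                     # closer to 1 = more internally consistent

(In practice, psychometricians usually adjust the half-test correlation upward to estimate the reliability of the full-length test, but the basic logic is just this: split, sum, correlate.)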

Christenfeld realized that this common statistical tool could be used to assess the reliability of various professional sports, including baseball, hockey, soccer, basketball, football and rugby. He randomly divided each of their seasons in half and then computed their split-half reliability. To what extent did a team’s success in half of its games predict its success in the other half? 
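
Here is a rough sketch of how that computation might look – again my own illustration, with an invented ten-team league rather than real standings – splitting each team's schedule at random and correlating its winning percentages across the two halves:

    import numpy as np

    def season_split_half(results, seed=0):
        """results: dict mapping team -> array of outcomes (1 = win, 0 = loss).
        Returns the correlation between each team's winning percentage in two
        randomly chosen halves of its schedule."""
        rng = np.random.default_rng(seed)
        first_half, second_half = [], []
        for games in results.values():
            order = rng.permutation(len(games))
            a, b = order[: len(games) // 2], order[len(games) // 2:]
            first_half.append(games[a].mean())
            second_half.append(games[b].mean())
        return np.corrcoef(first_half, second_half)[0, 1]

    # Hypothetical league: 10 teams, 82 games each, with varying underlying strength
    rng = np.random.default_rng(2)
    strengths = np.linspace(0.3, 0.7, 10)                     # true win probabilities
    results = {f"team_{i}": rng.binomial(1, p, size=82) for i, p in enumerate(strengths)}
    print(season_split_half(results))

A high correlation means one half-season predicts the other: talent dominates. A correlation near zero means the standings are mostly noise.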

The first thing Christenfeld discovered is that different sports generate very different reliabilities on a per-game basis. Baseball, for instance, has a single-game reliability of 0.008. If that seems low, it’s because it is – the NBA is roughly eleven times more reliable on a per-game basis than MLB. (Hockey is smack in the middle, while the NFL has the highest single-game reliability rating of any major American sports league. Only rugby is more predictable.) When I tell Christenfeld that I’m impressed by the unpredictability of baseball, he notes that the randomness is rooted in the basic mechanics of the sport, as the difference between a triple down the line and a double play is often just a few millimeters on the bat. “There is also no partial credit in baseball,” he says. “A hitter doesn’t get partial credit for hitting the warning track.” The end result is that success in America’s game is an all-or-nothing proposition, which increases the noisiness of victory. (As Christenfeld notes, sports that are more reliable, such as football, do give partial credit for performance: “Football has field position,” he says. “Even if you don’t score, assembling a long drive still has benefits.”)

But this doesn’t mean baseball is all luck and noise. Instead, Christenfeld points out that the randomness of a single baseball game is balanced out by the fact that the baseball regular season is 162 games long, roughly ten times longer than the football season. What’s more, Christenfeld found the same pattern in every sport he looked at: season length was always inversely related to single-game reliability. “The sports whose single games reliably assess talent have short seasons, while those whose games are largely chance have long ones,” Christenfeld wrote in his Nature paper. “Thus these sports, differing enormously in their particulars, converge towards the same reliability in a season.” Christenfeld then goes on to argue that season length is not an “arbitrary product of historical, meteorological or other such constraints.” Rather, it is rooted in the desire of fans to witness a “proper mix of skill and chance.”
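
The arithmetic behind that convergence is worth pausing over. One standard way to extrapolate from single-game reliability to season reliability is the Spearman-Brown formula for test lengthening – this is my own back-of-the-envelope check, not a calculation reproduced from the paper – which says that reliability grows with season length, but with diminishing returns:

    def season_reliability(r_single, n_games):
        """Spearman-Brown extrapolation: reliability of an n-game season,
        given the reliability of a single game."""
        return n_games * r_single / (1 + (n_games - 1) * r_single)

    print(season_reliability(0.008, 162))   # baseball: ~0.57 over a 162-game season
    print(season_reliability(0.088, 82))    # basketball, assuming ~11x baseball's per-game figure: ~0.89

Plug in baseball's tiny per-game number and its very long season, or basketball's much larger per-game number and its shorter season, and the gap between full-season reliabilities ends up far smaller than the eleven-fold gap between per-game reliabilities – which is the convergence Christenfeld is describing.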

I find this paper fascinating for a few reasons. For starters, it clarifies the appeal of sports. Although sabermetricians and their counterparts in other sports have gotten far better at measuring athletic talent – DVOA in football, PER in basketball – the entertainment value of sports is inseparable from the fact that the talent of players is intentionally constrained by the rules of the game. “If sports were pure contests of skill, then they’d quickly become genetic tournaments,” Christenfeld says. “But that’s not much fun to watch.” As a result, the most successful sports have evolved rules to encourage what Christenfeld calls an “optimal level of discrepancy.”

This model also comes with practical consequences, helping us evaluate potential rule changes to a given game. More instant replay? That will increase reliability, which might be good for baseball, but bad for rugby. What about changing the requirements of women’s tennis, so that players have to win the same number of sets as men? “The data suggest that women’s tennis is more reliable” – the best players are more likely to win – “so I’d guess that adding another set would make it too reliable,” Christenfeld says. Should we shorten the baseball season, as many fans and commentators have proposed? Since baseball already has the lowest full-season reliability of any major sports league, that’s probably not a good idea. “You never want the outcome to feel arbitrary,” Christenfeld says.

The NBA is probably the sport most in need of Christenfeld’s advice. According to his data, the season reliability of basketball is 0.890, which is far higher than the NFL’s season reliability of 0.681. Such reliability manifests itself as a competitive imbalance, as the best teams routinely dominate their lesser opponents. While the imbalance of the NBA is caused, at least in part, by "the short supply of tall people" - that, at least, is the conclusion of a 2005 paper led by the economist David Berri - these human factors are exacerbated by the league rules.  “I think it’s pretty clear that the second half of the [NBA] season should be shorter,” Christenfeld says. “The history of basketball is the history of basketball dynasties. There are way too many games where the outcome is predictable.”

And then there is the larger lesson of Christenfeld’s research, which concerns the difficulty of managing the competing claims of talent and equality. If talent is fairly rewarded – i.e., LeBron James gets paid what he deserves – then inequality increases and NBA underdogs are even less likely to win. To deal with this problem, most sports leagues impose salary caps on their teams, as they attempt to shrink the gap between the best and the worst, the richest and the poorest. Such parity makes the sport less predictable and more exciting; LeBron is underpaid for the good of the game.

In real life, of course, we’re not concerned about upsets and underdogs – we care about social mobility. We don’t seriously consider salary caps – we talk about marginal tax rates. Nevertheless, the basic tensions remain the same. While we want our society to be relatively reliable – every “game” should be a measurement of skill – we also don’t want a perfect meritocracy, for that creates a level of inequality that feels unfair. It’s also demotivating, and can create a feedback loop in which the “underdogs” are even less likely to compete in the first place. If talent always wins, there’s no reason to play.

Christenfeld, Nicholas. "What makes a good sport." Nature 383.6602 (1996): 662.

 

Thank You For Reading

Welcome to my blog. Thank you for reading. I hope this will be a place where I can write about scientific research that interests me.

I also hope this blog can be a small step towards regaining the trust of my readers.

A quick note: when possible, all material will be sent to the relevant researchers for their approval. If that’s not possible, an independent fact-checker will review it.

Please contact me with any corrections or suggestions: jonah.lehrer@gmail.com