The fundamental premise of Moneyball is that the labor market of sports is inefficient, and that many teams systematically undervalue particular athletic skills that help them win. While these skills are often subtle – and the players who possess them tend to toil in obscurity – they can be identified using sophisticated statistical techniques, aka sabermetrics. Home runs are fun. On-base percentage is crucial.
The wisdom of the moneyball strategy is no longer controversial. It’s why the A’s almost always outperform their payroll, the Dodgers just hired Andrew Friedman, and baseball fans now speak in clumps of acronyms. (“His DICE and DIPS are solid, but I’m worried he’ll regress to the mean given his extremely high BABIP.”)
However, the triumph of moneyball creates a paradox: the strategy depends on market inefficiencies, yet exposing those inefficiencies makes them disappear. The end result is a relentless search for new undervalued skills, those hidden talents that nobody else seems to appreciate. At least not yet.
And this brings me to a new paper in the Journal of Sports Economics by Daniel Weimar and Pamela Wicker, economists at the University of Duisburg-Essen and the German Sport University Cologne. They focused on a variable of athletic performance that has long been neglected, if only because it’s so difficult to measure: effort. Intuitively, it’s obvious that player effort is important. Fans complain about basketball players who are slow to get back on defense; analysts gossip about pitchers who return to spring training carrying a few extra pounds; it’s what coaches are always yelling about on the sidelines. Furthermore, there's some preliminary evidence that these beliefs are rooted in reality: One study found that baseball players significantly improved their performance in the final year of their contracts, just before entering free agency. (Another study found a similar trend among NBA players.) What explained this improvement? Effort. Hustle. Blood, sweat and tears. The players wanted a big contract, so they worked harder.
And yet, despite the obvious impact of effort, it’s surprisingly hard to isolate as a variable of athletic performance. Weimar and Wicker set out to fix this oversight. Using data gathered from three seasons and 1,514 games of the Bundesliga – the premier soccer league in Germany – the economists attempted to measure individual effort as a variable of player performance, just like shots on goal or pass accuracy. They did this in two ways: 1) measuring the total distance run by each player during a game and 2) measuring the number of “intensive runs” – short sprints at high speed – by the players on the field.
The first thing to note is that the typical soccer player runs a lot. On average, players in the Bundesliga run 11.1 km per game and perform 58 intensive sprints. That said, there were still significant differences in running totals among players. Christoph Kramer averaged 13.1 km per game during the 2013-2014 season, while Carlos Zambrano ran less than 9 km; some players engaged in more than 70 sprints, while others executed fewer than 45. According to the economists, these differences reflect levels of effort, and not athletic ability, since “every professional soccer player should have the ability to run a certain distance per match.” If a player runs too little during a game, it’s not because his body gives out – it’s because his head doesn’t want to.
So did these differences in levels of effort matter? The answer is an emphatic yes: teams whose players run longer distances are more likely to win the game, even after accounting for a bevy of confounding variables. According to the calculations, if a team increases the average running distance of its players by 1 km (relative to the opponent), it also increases its winning probability by 26-28 percent. Furthermore, the advantages of effort are magnified when the team difference is driven by extreme amounts of effort put forth by a few select players. As the economists note, “teams where some players run a lot while others are relatively lazy have a higher winning probability.”
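The size of that estimate is easier to appreciate with a toy calculation. Here is a minimal sketch in Python – my own illustration, not the paper’s actual regression model – treating the 26-28 percent figure as a simple linear bump in win probability per extra kilometer of relative running distance:

```python
def win_probability(base_prob, distance_gap_km, effect_per_km=0.27):
    """Toy linear-probability sketch: each extra km of average running
    distance (relative to the opponent) adds roughly 26-28 points of
    win probability, per Weimar and Wicker's estimate. The function
    and its numbers are illustrative, not the paper's model."""
    p = base_prob + distance_gap_km * effect_per_km
    return max(0.0, min(1.0, p))  # clamp to a valid probability

# An evenly matched team (50%) that outruns its opponent by 1 km on average:
print(round(win_probability(0.50, 1.0), 2))  # 0.77
```

A real model would flatten out near the boundaries rather than clamp – a strong favorite can’t run its way past 100 percent – but the sketch conveys how large a 1 km gap in average effort is.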
Taken together, these results suggest that finding new ways to measure player effort can lead to moneyball opportunities for astute soccer teams. Since previous research demonstrates that a player’s effort has an “insignificant or negative impact” on his market value, it seems likely that teams would benefit from snapping up those players who run the most. Their extra effort isn’t appreciated or rewarded, but it will still help you win.
The same principle almost certainly applies to other sports, even if the metrics of effort aren’t quite as obvious as total running distance in soccer. How should one measure hustle in basketball? Number of loose balls chased? Time it takes to get back on defense? Or what about football? Can the same metrics of effort be used to assess linemen and wide receivers? These questions don’t have easy answers, but given the role of effort in shaping player performance it seems worthwhile to start asking them.
There is a larger lesson here, which is that our obsession with measuring talent has led us to neglect the measurement of effort. This is a blind spot that extends far beyond the realm of professional sports. The psychologist Paul Sackett frames the issue nicely in his work on maximum tests versus typical performance. Maximum tests are high-stakes assessments that try to measure a person’s peak level of performance. Think here of the SAT, or the NFL Combine, or all those standardized tests we give to our kids. Because these tests are relatively short, we assume people are motivated enough to put in the effort while they’re being measured. As a result, maximum tests are good at quantifying individual talent, whether it’s scholastic aptitude or speed in the 40-yard dash.
Unfortunately, the brevity of maximum tests means they are not very good at predicting future levels of effort. Sackett has demonstrated this by comparing the results from maximum tests to field studies of typical performance, which is a measure of how people perform when they are not being tested. (That, presumably, is what we really care about.) As Sackett came to discover, the correlation between these two assessments is often surprisingly low: the same people identified as the best by a maximum test often underperformed according to the measure of typical performance, and vice versa.
What accounts for the mismatch between maximum tests and typical performance? One explanation is that, while maximum tests are good at measuring talent, typical performance is about talent plus effort. In the real world, you can’t assume people are always motivated to try their hardest. You can’t assume they are always striving to do their best. Clocking someone in a sprint won’t tell you if he or she has the nerve to run a marathon, or even 12 kilometers in a soccer match.
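The talent-plus-effort explanation can be made concrete with a small simulation – entirely hypothetical numbers, not Sackett’s data. If a maximum test captures talent alone, while typical performance reflects talent plus an independent effort component, the correlation between the two is diluted even when the test itself measures talent perfectly:

```python
import random
random.seed(42)

# Toy model (my assumption, not Sackett's data): maximum tests capture
# talent alone, while typical performance is talent plus day-to-day
# effort, which varies independently across people.
n = 10_000
talent = [random.gauss(0, 1) for _ in range(n)]
effort = [random.gauss(0, 1) for _ in range(n)]
maximum = talent                          # peak performance: everyone tries their hardest
typical = [t + e for t, e in zip(talent, effort)]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# With talent and effort equally variable, the correlation between the
# maximum test and typical performance falls to roughly 1/sqrt(2) ~ 0.71,
# even though the test measures talent without any error at all.
print(round(pearson(maximum, typical), 2))
```

The more effort varies relative to talent, the weaker the maximum test becomes as a predictor of typical performance – which is exactly the mismatch Sackett observed.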
And that’s why I find this soccer data so interesting. Sports teams, after all, have massive financial incentives to improve their assessments of human capital; tens of millions of dollars depend on the wisdom of their personnel decisions. Given the importance of effort in player performance, I’m hopeful they’ll get more serious about finding ways to track it. With any luck, these sabermetric innovations will trickle down to education, which is still mired in high-stakes maximum tests that fail to directly measure or improve the levels of effort put forth by students. As the German football league reminds us, finding ways to increase effort is extremely valuable knowledge. After all, those teams with the hardest workers (and not just the most talented ones) significantly increase their odds of winning.
Old-fashioned effort just might be the next on-base percentage.
Weimar, D., & Wicker, P. (2014). Moneyball revisited: Effort and team performance in professional soccer. Journal of Sports Economics, doi:10.1177/1527002514561789.