Game theory | Baseball statistics

Spring forward

By D.R. | NEW YORK

IT WAS still snowing in the north-eastern United States, but two American rituals of seasonal renewal got underway on March 3rd. The first was the start of games during the month-long spring training period in Florida and Arizona that precedes the baseball season. Invariably, the welcome return of the crack of the bat was accompanied by a torrent of tweets and blogs by statistical analysts of the sport, intended to disabuse gullible fans preemptively of the perilous notion that these contests might contain a single drop of useful information.

There are few areas where the consensus among quantitative baseball researchers is stronger than the truism that “spring training stats don’t matter”—meaning that they don’t help predict what will happen during the season—because the players are simply shaking the rust off and getting back into shape rather than trying to win. “Spring training stats are meaningless,” wrote Joe Sheehan of Baseball Prospectus back in 2008. “It’s the single most important thing to keep in mind every March.” Dave Cameron of Fangraphs echoed this view in 2010, saying that “Spring training numbers just don’t mean a thing. At all. Anything…Ignore the numbers coming from the Cactus and Grapefruit Leagues.” The conventional wisdom has not budged since then. Indeed, many fans who lack the ability to play baseball compete instead to find the most egregious example of a major-leaguer who showed up to camp “in the best shape of his life”, crushed the ball during spring training, and inevitably returned to mediocrity once the games started to count.

There’s no doubt that spring-training matchups are a far cry from meaningful baseball games. Half of them take place in high-and-dry Arizona, where balls regularly fly out of stadiums; the rest are held near sea level in humid Florida, where the heavy air turns would-be home runs into harmless pop flies. Players show up in a wide range of conditions: some spend their offseasons fishing or sunbathing, while others compete in Latin American winter leagues and arrive in midseason form. Pitchers often use the games to test out a new type of pitch; some players are learning new defensive positions. The quality of competition varies wildly, from green teenage prospects to established superstars. And spring training does not last long enough to generate robust samples of performance: a typical player will get just 50-100 plate appearances or batters faced, a small fraction of the 600 for hitters and 800 for pitchers that they will see over the course of the year.

Yet in spite of all these caveats, the claim that spring-training numbers are useless is wrong. Not a little bit wrong, not debatably wrong—demonstrably and conclusively wrong. To be sure, the figures are noisy. But they still contain a signal. At the MIT Sloan Sports Analytics Conference held in Boston on February 27th-28th, I presented a study (see slides) that explained how to extract the statistical golden nuggets buried in this troublesome dataset, and offered some broader lessons for the practice of quantitative sports research.

It’s easy to see how the consensus about the emptiness of spring-training statistics arose. In the most widely cited categories, such as batting average for hitters and earned-run average (ERA) for pitchers, the correlations between spring training and subsequent regular-season outcomes are vanishingly weak: players who top the charts in March seldom wind up on the real leaderboards six months later. However, the relationships in those categories from one regular season to the next are only slightly stronger: no one blinks an eye when one year’s batting champion (such as Chipper Jones, who hit .364 in 2008) tumbles all the way down to the league-average rate (his 2009 mark was a pedestrian .264) the following season, or when a pitcher who leads the league in ballpark-adjusted ERA (like Roy Halladay in 2011) struggles to get anyone out the next year. That’s just baseball. (For quantitatively minded readers, on a scale of zero to one, the year-to-year correlation among qualifying players from 2013-14 was a paltry .25 for ERA and .40 for batting average. An old joke among statisticians is that “the world is correlated at .3”: trends that faint have no meaning.)

There are good reasons that these numbers have so little predictive power. Batting average can vary wildly depending on whether a player happens to hit balls right at the fielders or between them—Kevin Costner’s character in “Bull Durham” offers a great speech elucidating this phenomenon. And ERA is largely determined by whether a pitcher happens to give up eight hits in a row, or scatters them at safe intervals over the course of a game. But baseball also provides many other statistics that primarily reflect a player’s actual skills, such as their frequencies of strikeouts and walks, or whether the balls they hit (or allow to be hit against them) tend to travel in the air or on the ground. These “peripheral” numbers tend to stabilise much faster: the year-to-year correlation for qualifying batters’ strikeouts from 2013-14 was .90. And sure enough, they also show a strong connection between spring training and the subsequent regular season (see scatter plot).
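For readers who want to run this sort of check themselves, the calculation is straightforward: pair each qualifying player’s line in one season with his line in the next, then correlate. The sketch below is a simplified illustration in Python; the file name, column names and the 502-plate-appearance qualifying threshold are stand-ins for whatever season-level data you have to hand, not the dataset behind the figures quoted above.

```python
# A minimal sketch of the year-to-year correlation check described above.
# The file and columns (player_id, season, pa, avg, k_pct) are hypothetical.
import pandas as pd

stats = pd.read_csv("season_batting.csv")

# Keep only "qualifying" hitters: 502 plate appearances (3.1 per team game)
# is the usual cut-off for the batting title.
qualified = stats[stats["pa"] >= 502]

y2013 = qualified[qualified["season"] == 2013]
y2014 = qualified[qualified["season"] == 2014]

# Pair each player's 2013 line with his 2014 line, then correlate.
paired = y2013.merge(y2014, on="player_id", suffixes=("_2013", "_2014"))

print(paired["avg_2013"].corr(paired["avg_2014"]))      # batting average: weak
print(paired["k_pct_2013"].corr(paired["k_pct_2014"]))  # strikeout rate: strong
```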

On their own, however, these relationships could still be of little use. Even if players often post similar peripheral statistics in spring training and the regular season, the spring-training data might not provide any extra information that wasn’t already measurable from their prior records. To encapsulate what was known about a player before the start of spring training, I turned to ZiPS, a quantitative model for projecting baseball performance. Because ZiPS forecasts are well-known for their accuracy and freely available on the internet, they make for a formidable baseline. Could adding a dollop of spring-training statistics, despite all their flaws, into a cauldron of ZiPS projections improve the results?
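The study’s actual calibration is not reproduced here, but the general mechanism is the familiar one of regression to the mean: shrink a noisy spring-training rate toward the projection in proportion to the size of the spring sample. The sketch below illustrates the idea; the 500-plate-appearance stabilisation constant is an assumption chosen purely for the example, not the weighting used in the study.

```python
# A simplified illustration of blending a projection with spring-training
# data: the bigger the spring sample, the more weight it gets. The
# stabilisation constant below is a made-up value for demonstration only.
def blend(zips_rate: float, spring_rate: float,
          spring_pa: int, stabilisation_pa: int = 500) -> float:
    """Weighted average of a ZiPS projection and a spring-training rate."""
    weight = spring_pa / (spring_pa + stabilisation_pa)
    return weight * spring_rate + (1 - weight) * zips_rate

# A hitter projected to strike out 20% of the time who fans 30% of the time
# in 60 spring plate appearances gets nudged up to roughly 21%.
print(blend(zips_rate=0.20, spring_rate=0.30, spring_pa=60))  # ~0.211
```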

The answer was an unequivocal yes. In every peripheral category, forecasts that included a finely calibrated dose of spring-training numbers outperformed ZiPS by itself. The impact was particularly strong for first-year players (“rookies”), for whom spring training is their first taste of proper big-league competition. After adding the peripherals back together to get an all-in-one value measure, incorporating spring training improved the correlation between preseason projections and final results from .578 to .593 for hitters (using OPS) and from .354 to .387 for pitchers (using ERA).
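Measuring that improvement is itself simple: line up each player’s ZiPS-only forecast, his spring-adjusted forecast and his actual end-of-season figure, then compare the two correlations. The snippet below shows the shape of that comparison; the file and column names are hypothetical.

```python
# Sketch of the evaluation described above, with hypothetical columns:
# zips_ops (projection alone), blended_ops (projection plus spring data)
# and actual_ops (the player's end-of-season result).
import pandas as pd

hitters = pd.read_csv("hitter_forecasts.csv")

baseline = hitters["zips_ops"].corr(hitters["actual_ops"])
with_spring = hitters["blended_ops"].corr(hitters["actual_ops"])

print(f"ZiPS alone:       {baseline:.3f}")
print(f"ZiPS plus spring: {with_spring:.3f}")
```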

That may sound like a piddling gain. But in the angels-on-the-head-of-a-pin world of baseball forecasts, it’s a big deal. Given two players with identical expectations coming into the year, spring training statistics can cause their projections to diverge by up to 60 points of OPS or ERA—gaps that equate to salary differentials of over $10m a year on the free-agent market. Put another way, players whose forecasts were most aided by their performance during spring training have tended to beat their ZiPS projections by substantial margins, whereas those whose expected value declined as a result of spring training have generally fallen short of their ZiPS forecasts by an equally large amount (see table). So the next time some know-it-all rolls his eyes when you note that a player is having a great spring, please challenge him (as such blowhards are almost invariably male) to a wager.

I don’t want to exaggerate the importance of this finding. Learning that spring-training statistics do in fact have some marginal predictive power will hardly revolutionise the sport. At best, it might cause teams to give a handful of promising players each year a chance they wouldn’t otherwise have had. But the speed at which the erroneous consensus about this issue congealed speaks volumes about the current sports-analytics ecosystem.

The first generation of quantitative baseball researchers—from Bill James, who more or less invented the discipline, to the whiz kids in the Oakland Athletics’ front office featured in the Brad Pitt film “Moneyball”—delighted in refuting conventional wisdom about the game that players, scouts and managers had recited for decades. This group made a number of highly valuable contributions: for example, they correctly accused major-league clubs in the 1990s of undervaluing on-base percentage, and they rightly continue to criticise managers who refuse to use their best relief pitchers in tie games.

However, they also committed a number of blunders. They said that teams overstated the importance of fielding; that drafting players out of high school rather than college was too risky; that pitchers had no ability to induce weak contact; that catchers had no measurable impact on the performance of the pitchers who threw to them; and of course that spring-training statistics didn’t matter—and that anyone who insisted otherwise was a troglodyte whom history would soon consign to baseball’s Stone Age. In subsequent years, as researchers have gained access to richer and more granular datasets, each of these claims has either been watered down or outright refuted. The much-mocked “traditionalists” were at least partially right all along; the “nerds in their mothers’ basements” conducting the quantitative analyses simply lacked sufficient data to detect what people who had lived the sport were always able to see with their own lyin’ eyes.

This pattern of reversal should serve as a cautionary tale for researchers claiming to know more about a sport than those who actually take part in it. One common error is overinterpreting a failure to reject the null hypothesis: as Donald Rumsfeld was fond of saying, absence of evidence is not evidence of absence. The 1990s generation lacked the statistical toolkit to detect pitchers’ impact on balls in play, or catchers’ effect on umpires’ strike-calling, and wrongly asserted (or at least strongly implied) that because they couldn’t see something, it wasn’t there. And an extra dose of humility is in order when the data seem to yield a conclusion that contradicts well-formed prior beliefs (“priors”, in Bayesian parlance)—such as the entirely logical assumption that baseball games played yesterday, no matter how unreliable spring-training numbers may be, should offer some useful information, however modest, when all the other data at one’s disposal are at least six months old. Otherwise you run the risk of believing that the sun has exploded, as this memorable cartoon put it, even if you can see plainly that it hasn’t.
