What really happens in the clutch?
May 21, 2009
Maybe there’s something to this whole clutch hitting thing after all. For the longest time, there’s been a maxim in the Sabermetric community that clutch hitting, as a repeatable skill, does not exist. Recently, I’ve come to question that received wisdom that players aren’t really affected by pressure, even though in the past I’ve been one of the people spreading it.
Strictly speaking (and statistically speaking), over a one-year period, there is very little in the way of a repeatable clutch skill. No matter how you measure it, whether it’s a leverage-index-based mathematical model of clutch or an intuitive “performance in ‘close and late’ situations” model, a player doesn’t seem to perform better or worse in the clutch than he does overall. At least over a one-year period. But then, perhaps we’ve overstated our case. It’s one thing to say that a one-year time period (or 700 PA or whatever) is not a sufficient sampling frame to attain proper reliability on a statistic. It’s another to say that the skill does not exist. It might just be covered by a lot of noise.
One of the problems with measuring clutch is the way in which it’s been operationalized. The current best measure to be had is the method used at Fangraphs, which uses win probability added (WPA) and the Leverage Index (LI) to calculate a mathematically based clutch statistic. The problem with WPA-based versions of clutch is that WPA itself doesn’t stabilize over a short period of time (and here, even a full year is a “short” period of time). Many of the usual one-number performance indicators (the slash stats) are unreliable in small sample sizes as well. Perhaps we need a better indicator of what’s really going on in the clutch.
Let’s go back to what the idea of clutch is. People react to stress differently. Some people seem to thrive under pressure. Others seem to crumble. Anyone who’s ever had stage fright can attest to the latter. The common thread is that performance suffers/improves under pressure. But does behavior change under pressure? There’s a difference. Behavior is what players decide to do, over which they have a great deal of control. Performance is the result, and as we’ve seen time and time again, is often the result of luck mixed with behavior.
Let’s take the decision on whether or not to swing at that pitch that’s currently hurtling toward the plate, perhaps the one thing over which the batter has total control. (Swing percentage also stabilizes at ridiculously low levels of PA.) Do players swing more (or less) with the game on the line? Doubtless, some will not be affected by the pressure, some will start swinging at everything, and some will politely keep their bat on their shoulder more than they otherwise would. This is assuming, of course, that players in general are affected by pressure. But are players consistent in the way that they are affected by pressure?
I took all player-seasons from 2005-2008. I split all plate appearances into non-clutch and clutch. For my definition of “clutch”, I went with a “close and late” definition (7th inning or later, score within 2 runs either way). Why this and not a leverage-based method? Because baseball players probably don’t know that leverage exists. It’s a great mathematical tool, but it’s probably not how players decide whether this is a big situation or not. The reason behind the 7th inning cutoff is that even the batter leading off the seventh inning faces the very real chance that this is his last chance to do something. If the team is retired in order over the next three innings, the guy leading off the 7th will be standing in the on-deck circle as the game comes to an end. Plus, those “close and late” situations have pretty high leverage values to begin with. I fully realize that it’s not a perfect definition of clutch, but I’m trying to model what goes on inside the head of a major league player. I’m a psychologist, after all.
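For the curious, the close-and-late rule is simple to operationalize. Here’s a quick sketch (the function name and arguments are illustrative, not from any particular pitch-data library):

```python
def is_close_and_late(inning, score_diff):
    """Flag a plate appearance as 'clutch' under the close-and-late
    definition: 7th inning or later, score within 2 runs either way.
    score_diff is the batting team's runs minus the opponent's."""
    return inning >= 7 and abs(score_diff) <= 2
```

So a batter up in the 8th inning of a one-run game qualifies, while the same score in the 6th does not.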
I looked at each player’s swing percentage when in the ho-hum basket and in the “clutch” basket. I set a 50 PA minimum for the clutch basket. I found the difference between his swing percentage for the clutch and non-clutch situations. Indeed, some players swung more, some less. Some stayed pretty much the same. Since I had four years’ worth of data, I ran an intra-class correlation (StatSpeak fans, take a shot!), and came up with .24. (short explanation: think of ICC as a year-to-year correlation… only better. For more details, go here.)
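If you want to replicate this, here’s a minimal sketch of the two calculations: the per-player swing-percentage difference (with the 50 PA clutch minimum) and a one-way random-effects ICC(1) across the seasons. The names and the data layout are illustrative assumptions, not the actual code behind the numbers above:

```python
def swing_pct_diff(clutch, nonclutch, min_clutch_pa=50):
    """Clutch-minus-nonclutch swing percentage for one player-season.
    clutch/nonclutch are dicts of counts: {'swings', 'pitches', 'pa'}
    (an assumed layout). Returns None below the clutch-PA minimum."""
    if clutch["pa"] < min_clutch_pa:
        return None
    return (clutch["swings"] / clutch["pitches"]
            - nonclutch["swings"] / nonclutch["pitches"])

def icc1(table):
    """One-way random-effects intraclass correlation, ICC(1).
    table: one row per player, one column per season, each cell that
    player's swing-percentage difference for the year. Assumes a
    balanced design (every player has every season)."""
    n = len(table)             # players (subjects)
    k = len(table[0])          # seasons per player
    grand = sum(sum(row) for row in table) / (n * k)
    # between-player and within-player sums of squares
    ssb = k * sum((sum(row) / k - grand) ** 2 for row in table)
    ssw = sum((x - sum(row) / k) ** 2 for row in table for x in row)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

A player who posts the same difference every season pushes the ICC toward 1; pure season-to-season noise pushes it toward 0 (or below).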
That may not seem very big. Papers have been written about less. The difference may not be stable at a minimum of 50 clutch PA, but at a slightly bigger sample, it may approach respectability. Given that swing percentage is a very stable statistic, and is something that is almost entirely in the control of the batter, it may even be possible to determine from minor league data what a player will look like on this measure before he gets to the majors.
The other thing to consider is that swing percentage matters. Not directly, but in concert with other diagnostics, it makes a difference. Low-contact hitters (generally, power hitters) who swing more actually have lower rates of HR and higher rates of K’s. So, players who get nervous and start swinging more are much less likely to hit that home run in the clutch.