# A Closer Look at Closers – Part Two

A couple of weeks ago I began my odyssey into the inconsistent world of Closers.  I discussed what they are, what they do, and how we currently evaluate them, as well as introducing my statistic – Save Rate.
To recap, Save Rate measures and properly proportions the number of individual saves, individual opportunities, and team opportunities in order to give us a much more tangible idea of how effective a Closer is in any given year.  To find the Save Rate we need to know three things:

1. How many Saves for the Closer?
2. How many Opportunities for the Closer?
3. How many total games were saveable for the team of the Closer?

The goal with Save Rate is to compare Closers in different situations.  Since certain teams play more close games than others it would not be fair to say that Closer A (40-45 in saves) had a better season than Closer B (25-32) if all we are basing that on is the total number of saves or blown saves.
The team of Closer A had 60 total saveable games whereas Closer B’s team only had 35.  Clearly, Closer A should have more saves, since his team had so many (25) more chances in which to record them.  Save Rate takes that into account, as well as numerous other factors, and levels the field of play.
In the examples above, Closer A would have a Save Rate of 67%, because he successfully saved 40 of the 60 games that could have been saved for his team.  The Save Rate of Closer B would be 71% because he successfully saved 25 of his team’s 35 saveable games.  Closer B was more effective for his team.  After all, it’s not his fault that his team had that many fewer saveable games.  He did what his team asked him to do better than Closer A did for his team.
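The arithmetic above can be sketched in a few lines of Python.  The figures for "Closer A" and "Closer B" are the hypothetical examples from the text, not real player data:

```python
# Sketch of the Save Rate calculation: individual saves divided by
# the total number of saveable games the closer's TEAM had.
# "Closer A" and "Closer B" are the hypothetical examples from the text.

def save_rate(saves: int, team_saveable_games: int) -> float:
    """Saves converted, relative to every game the team could have saved."""
    return saves / team_saveable_games

closer_a = save_rate(40, 60)  # 40 saves, team had 60 saveable games
closer_b = save_rate(25, 35)  # 25 saves, team had 35 saveable games

print(f"Closer A: {closer_a:.0%}")  # 67%
print(f"Closer B: {closer_b:.0%}")  # 71%
```

Note that the denominator is the team's saveable games, not the closer's personal opportunities; that is what lets the metric compare closers across teams with very different schedules of close games.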
Essentially, to finish the recap, Save Rate measures how effective a Closer was in a given season by measuring his individual success relative to the team need.  The raw total of Saves means nothing if we cannot compare it to the needs of the team.
SAVE DIFFERENCES
As mentioned in part one, counting both the situation at hand and what the pitcher actually does, there are 144 different ways to record a ninth-inning save.  If we wanted to get really crazy we could take into account the possibility of a closer being replaced mid-inning, due to injury or ineffectiveness, and add more ways.  Fortunately for everyone here I do not want to get any crazier.
144 is far too high a number to keep track of when measuring and weighting saves, so a more practical approach is to break them down into three groups.  The categories I am going to look at from now on are –

• Perfect Saves – saves in which nobody reaches base
• Medium Saves – saves in which baserunners reach but no runs score
• Suspense Saves – saves in which runs score

1-RUN, 2-RUN, 3-RUN
Tom Tango’s THE BOOK has a fascinating chapter on the 3-run save and I highly suggest fans of statistics get ahold of it.
You might wonder why I broke down the saves into categories that do not include these run differentials.  The major reason, consistent with most of this study, is that the Closer has absolutely no effect on that.  The team does.
The Closer cannot control how often his team plays a 1-run game.  That is up to virtually everyone except him.
All he can do is control what he does in his appearances – NOT what types of appearances he gets brought into.  And, if you are a fan or advocate of the DIPS theory, then the Closer does not even have much control over what happens during his appearances.  Closers have no control over when they are brought in and, unless they are pure strikeout pitchers, little control over what happens while they are in.
It would not be accurate to say that Closer C, 17 for 21 in 1-run saves, is better under pressure than Closer D, 13 for 19 in 1-run saves, because there are too many types of 1-run saves.  What if C’s blown 1-run saves were with bases empty whereas D’s blown 1-run saves were all with the bases loaded when he entered?
These are situations reliant on what the team does prior to his entrance and cannot be accurately used to compare Closers on different teams.
We can, however, compare them by examining individual compiled statistics regardless of when they were brought in – i.e., if a Closer enters with the bases already loaded, it is much different than a bases-loaded situation caused by him walking batters or giving up hits.
WPA (Win Probability Added) is an extremely fascinating statistic that measures the percentage of a game a player contributes toward both wins and losses.  The most it can amount to in a single game is 0.5 and the least is -0.5.  The way it works relies on probabilities and percentages of games won based on certain situational circumstances.  Below is an example of WPA put to use in terms of Closers.
From 1998-2006, home teams won 94.5% of games in which they led by 2 runs at the start of the 9th inning, with no outs and bases empty.  Let’s say that Francisco Cordero enters the game with this exact situation.  He strikes out the first batter he faces.  His team now has a 97.9% likelihood of winning, meaning that he just accounted for +.034.  He gets the next batter to ground out.  His team now has a 99.4% likelihood of winning, meaning he accounted for an additional +.015.  So far, Cordero’s WPA is +.049.  The next batter hits a double off of him.  His team’s likelihood of winning decreases to 97.3%, meaning he loses .021 of his +.049, leaving him at +.028.  He then strikes the next batter out to end the game, giving his team a 100% likelihood of winning (since they won) and giving him a final WPA of +.055 for the game.
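The bookkeeping in that walk-through is just a running sum of win-probability changes, which a short sketch makes explicit.  The win-probability figures (94.5%, 97.9%, and so on) are the ones quoted in the text for a 2-run lead entering the 9th with the bases empty and no outs:

```python
# Minimal sketch of the WPA bookkeeping from the Cordero example.
# Win probabilities are the figures quoted in the text; WPA is the
# sum of the changes between consecutive game states.

def wpa_total(win_probs: list) -> float:
    """Sum of win-probability changes across consecutive game states."""
    return sum(after - before for before, after in zip(win_probs, win_probs[1:]))

# Enters at 94.5%, strikeout, ground out, double allowed, game-ending strikeout
states = [0.945, 0.979, 0.994, 0.973, 1.000]
print(round(wpa_total(states), 3))  # 0.055
```

Because the changes telescope, the total WPA for the outing is simply the final win probability minus the probability when the pitcher entered – which is exactly why entering a tougher situation (a lower starting probability) inflates WPA through no doing of the pitcher's own.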
Now, when we look at the same number of recorded outs but have the runner on second base before Cordero entered the game (meaning he entered the 9th inning with a 2-run lead, a runner already on second, and no outs), the results are slightly different.  Instead of a +.055, the WPA would be +.119 – over two times that of the previous situation.
Though that makes sense – he entered a tougher situation – he had no control over when he was brought in.  Therefore, WPA can be very misleading when evaluating Closers.
While WPA is a tremendous statistic to use when evaluating the contributions of players during individual games, or a series, I have strong reservations about using it as the end-all tool to evaluate an entire season of a Closer – the major reservation being the aforementioned point that Closers have no control over when, or in what situation, they are brought in.
Additionally, there are the situational discrepancies mentioned here and in part one, and the fact that some teams simply have more opportunities for saves than others.  WPA is a great tool to use when differentiating between 1-run, 2-run, and 3-run saves, but since I am not terribly interested in that, I am going to stick with Save Rate.
SAVE BREAKDOWN
The table below shows the percentages of Perfect, Medium, and Suspense Saves for the nine Closers used in my study, from 2007.  Because the Save Rate is included we can look at percentages rather than raw numbers.  The Save Rate already accounts for some Closers having fewer chances to make appearances.

| NAME | P SV % | M SV % | S SV % | SV RATE |
|------|-------:|-------:|-------:|--------:|
| Francisco Cordero | 54.5 | 40.9 | 4.5 | 75.9 |
| Jose Valverde | 48.9 | 46.8 | 4.3 | 73.4 |
| Billy Wagner | 52.9 | 41.2 | 5.9 | 70.8 |
| Trevor Hoffman | 50.0 | 40.5 | 9.5 | 70.0 |
| Jason Isringhausen | 40.6 | 50.0 | 9.4 | 69.6 |
| Chad Cordero | 48.6 | 37.8 | 13.5 | 62.7 |
| Ryan Dempster | 57.1 | 25.0 | 17.9 | 58.3 |
| Brad Lidge | 42.1 | 57.9 | 0.0 | 34.5 |
| Brian Fuentes | 40.0 | 50.0 | 10.0 | 33.9 |
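For readers who want to work with these figures directly, the table can be loaded as a small dataset and ranked by Save Rate.  The percentages are copied verbatim from the table above; ranking by the last column reproduces the ordering shown there:

```python
# The 2007 study figures as (name, perfect %, medium %, suspense %, save rate).
# Values are copied from the table in the article.
closers = [
    ("Francisco Cordero", 54.5, 40.9, 4.5, 75.9),
    ("Jose Valverde", 48.9, 46.8, 4.3, 73.4),
    ("Billy Wagner", 52.9, 41.2, 5.9, 70.8),
    ("Trevor Hoffman", 50.0, 40.5, 9.5, 70.0),
    ("Jason Isringhausen", 40.6, 50.0, 9.4, 69.6),
    ("Chad Cordero", 48.6, 37.8, 13.5, 62.7),
    ("Ryan Dempster", 57.1, 25.0, 17.9, 58.3),
    ("Brad Lidge", 42.1, 57.9, 0.0, 34.5),
    ("Brian Fuentes", 40.0, 50.0, 10.0, 33.9),
]

# Rank by Save Rate, highest first
ranked = sorted(closers, key=lambda c: c[4], reverse=True)
print(ranked[0][0])  # Francisco Cordero
```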

By using the numbers in this table we can compare Closers in different situations and determine which were better and/or more effective.  Since it generally came down to Cordero or Valverde in the NL in 2007 we will use the table to compare them.
In 2007, Cordero had a higher percentage of Perfect Saves than Valverde, as well as a higher Save Rate.  This shows that Cordero was not only more successful in saving games relative to the needs of his team than Valverde, but also that more of that success stemmed from Saves in which he did not allow a baserunner.
Yes, some of these Suspense Saves resulted due to entering the game with baserunners and some sort of momentum factor going for the other team, but that was when the team decided to bring the Closer in.  I am trying to measure effectiveness relative to the team need here.  If they decide to bring you into that circumstance and you cannot get the job done, you are ineffective relative to the team need.
Essentially, Cordero had better numbers relative to his team than Valverde had relative to his, and gave opposing teams less of a chance, so we can say that Cordero was the better Closer in 2007.  We could even make a case for Billy Wagner being almost as effective as Valverde.  Their Save Rates were very close, and even though Wagner gave up runs in a higher percentage of his saves, that is offset by his higher percentage of Perfect Saves.
SAVE RATE
I introduced the Save Rate in part one and, no matter what other ways I try to quantify effectiveness, I keep coming back to it.  If we really want to determine which Closer was the best, given the circumstances that –

• Some teams win more than other teams
• Some teams have more save opportunities than others
• The Closer has no control over when, or in what situation, he is brought in
• We can only really measure the statistics a Closer puts up in each appearance, regardless of situation, due to this lack of control

– the best way to reach our goal is to measure individual success relative to the needs of the teams.  In other words, exactly what the Save Rate tells us.
The bottom line is that, regardless of the situations or save types, if your team has 65 games that need to be saved, you want the Closer not only to appear in as many of those 65 as possible but also to successfully convert as many of those appearances as possible.  Save Rate gives us those measurements and distills them into a neat percentage of effectiveness.
PART THREE
In the next, and final, post of this article I will present all of my data and findings.  I will also discuss some of the important statistics that tend to translate into better Closers and why they tend to make that translation.
When all three parts are said and done I will combine them into one solid PDF document and provide a download link.  Since some parts are repeated in other parts, this PDF will bring everything together in order to give you my reasoning and methods for evaluating Closers.
I am not presenting this as the “end-all” method, by any means, but if I were a GM aware of the circumstances I mentioned above (lack of control, situational discrepancies, good/bad team discrepancies) and in need of a proven Major League commodity, I would want to know how durable a Closer was and how successful he was in that durability – exactly what the Save Rate tells me.