"Fixing" the All-Star selection system

Today over at Baseball Prospectus, Nate Silver takes a look at voting patterns for the All-Star game.

“Fixing” the All-Star selection system

Today over at Baseball Prospectus, Nate Silver takes a look at voting patterns for the All-Star game.  (Subscription needed.)  Nate finds that some players get a little bump just from wearing the uniform of certain teams (*cough*Yankees*cough*) and suffer for being a member of one of the non-cool teams.  MLB will reveal the winners of the balloting soon, and then the reserves will be named, and then for about a week, you can count on the usual yearly spate of columns about how to “fix” a system that so unjustly excluded (insert player name here) from being an All-Star.  Mind you, we don’t see columns about fixing the Presidential election system, but why would we?  After all, the All-Star Game is serious business.
The usual arguments trotted out go something like this:

  1. “The fans are morons and have no idea what makes for a good player so we should abolish the fan ballot” vs. “It’s the fans’ game, so let the morons have their say.”
  2. “The one-player-per-team rule ensures that everyone’s watching because they have someone to root for” vs. “Ron Coomer can put on his resume that he was once an All-Star due to this rule”
  3. “Closers, home run hitters, players from winning teams and big cities, and guys who were good five years ago get in while set up guys, high average guys, players from ‘mid-market’ and losing teams, and up-and-coming stars having breakthrough years get shafted” vs. “Yeah, so?”
  4. “The respective league’s managers pick their own players at the expense of more deserving players from other teams” vs. “They won the LCS last year.  Did you?  Didn’t think so.”
  5. “The singing of the Canadian National Anthem at the All-Star Game is the single best 2 minutes of awkward television of the year” vs. “Yeah, you’re right.  Get someone who’s vaguely Canadian (Avril Lavigne!) to sing the song and watch while the camera crew struggles to find anything to focus on that’s vaguely Canadian.  (The two Blue Jays who are there, even though they’re from the Dominican Republic?  Justin Morneau?  Jason Bay?  The Canadian flag in center field that was brought out of storage just for this event because the Expos moved out of the NL?  Some random fan?)”

Argument #1 is a tough one to resolve.  If it’s just the fans’ game, then perhaps the fans should just vote for all 64 players in the game?  They already get to pick sixteen of them through balloting for the starters; why not let the fans pick the backups too?  Let’s see if the fans really are morons.  Here are the top two vote-getters at each position (six for the outfield), and how they rank in their league at their position in win probability added (WPA) and VORP (subscription needed for VORP):
AL Catchers: Pudge (32nd WPA, 10th VORP), Posada (2nd WPA, 1st VORP)
AL 1B: Big Papi (1st WPA*), Morneau (2nd WPA*, 5th VORP)
AL 2B: Polanco (1st WPA, 4th VORP), Robinson Cano (25th, which is actually last in WPA among AL 2B, 15th VORP)
AL 3B: A-Rod (1st in both), Lowell (6th WPA, 2nd VORP)
AL SS: Jeter (1st in both), Carlos Guillen (2nd in both)
AL OF: Vlad (2nd in WPA and VORP for RF), Ordonez (1st in WPA and VORP for RF), Ichiro (4th WPA, 1st in VORP for CF), Manny (16th WPA, 2nd in VORP for LF), Hunter (9th WPA, 4th in VORP for CF), Sheffield (12th in WPA*)
*-I pretended that David Ortiz actually was a 1B for WPA.  His 1.54 WPA would put him 1st among all 1B, ahead of the current leader among actual first basemen, Morneau.  Same for Sheffield in the OF.  Ortiz and Sheffield are 1-2 in VORP among DH’s.
Not bad, with the exception of Pudge.  The voters got the starters right with the exception of Grady Sizemore and Victor Martinez (as an Indians fan, I am not bitter… repeat, I am not bitter…).  The backups aren’t a bad lot either, although I’m guessing that Cano and to some extent Manny got votes for being associated with the only two teams in MLB of which ESPN appears to be aware.  Cano should be B.J. Upton (2nd WPA, 3rd VORP) or Brian Roberts (3rd WPA, 2nd VORP), but other than that, can you say that any of these are horrible choices?
NL, please?
NL Catchers: Russell Martin (1st in both), Lo Duca (32nd in WPA, 7th VORP)
NL 1B: Fielder (2nd WPA, 1st VORP ), Pujols (1st in WPA, 3rd VORP)
NL 2B: Utley (1st in both), Kent (29th WPA, 8th VORP)
NL 3B: Wright (5th WPA, 3rd VORP), Cabrera (1st in both)
NL SS: Reyes (5th WPA, 2nd VORP), Hardy (7th WPA, 5th VORP)
NL OF: Beltran (74th WPA among OF, 3rd in VORP for CF) , Griffey (11th WPA, 1st in VORP for RF), Soriano (22nd WPA, 4th in VORP for LF), Bonds (1st WPA, 1st in VORP for LF), Andruw Jones (100th WPA, 18th in VORP for CF), Matt Holliday (2nd WPA, 2nd in VORP for LF)
Not great, but not all that bad.  Lo Duca, Kent, and Andruw Jones don’t belong in the same ball park as the All-Stars this year, and Beltran and Soriano really aren’t the best choices either, but get in based on buzz factor.  Lo Duca should be replaced by one of the Atlanta catchers (McCann or Salta… Sala… ah, you know whom I mean) and Kent should be someone like Kelly Johnson (3rd WPA, 4th VORP).  I don’t know what Aaron Rowand (4th WPA, 1st in VORP for CF) or Brad Hawpe  (3rd WPA, 3rd in VORP for RF) have to do to get noticed in the outfield.  Neither broke the Top 15 in the fan voting.
So, yeah, the fans have made (and would make) some rather questionable decisions based on past reputations.  But, not too bad…

How much is Mark Buehrle worth?

By the time you read this, the White Sox may have already signed Buehrle to a contract extension.  Or maybe they will have traded him; if neither happens, the rumors will keep spinning.
I had to laugh when I read this post: BP Unfiltered.  I think it’s safe to say that if Kenny Williams goes to his agent with a 4-year offer for $33 million, the negotiations will be over, and Mark’s White Sox career will soon be over.
How much is he worth?  Here is the best salary calculator I have seen.  It accounts for continued inflation, which is somewhat balanced by the player’s talent declining.  Almost all free agent players should be in their decline phase, and Buehrle is no exception – he’ll be 29 next season.  So all we need to know is how many wins he is worth.
First, how good is he?  This year so far, his ERA+ is a very good 135.  Last year, it was a career worst 93, following a career best 143 in 2005.  His career mark is 122, and that seems to be a reasonable expectation of his ability.
Second, how much will he pitch? So far, he’s never missed a start, pitching over 200 innings every year and up to 245.  We can’t expect that he will always pitch that many innings.  There’s pretty much no chance he’ll throw much more than 240, but there is a chance that he could hurt himself in spring training and pitch zero.  I looked at how much the average top free agent starters have pitched the year after they signed.  To get a quick list, I looked at only players who switched teams and whose old team received a compensation pick.  Their average IP after signing was 170.  If we remove the ones with obvious prior injury concerns, like Jaret Wright and A.J. Burnett, we get 180.  Buehrle may be a bit better than that, having not just a good health record, but a perfect one, so we might be able to project him at 190-200.  But I can’t forget my definition of a durable pitcher:
Durable pitcher:  Noun.  A pitcher who has not been injured yet.
A good chunk of Mark’s value is just showing up and being average.  Assuming replacement level is 1.25 times the league average, an average pitcher is +24 runs in 190 innings.  Being above average at around a 120 ERA+, that’s another 16 runs, so Buehrle is about 4 wins above a replacement-level pitcher.  From 2006 going back, Buehrle’s wins over replacement numbers have been 1.8, 5.9, 5.7, 3.1, 6.1, and 6.3 according to my database, so 4 seems like a reasonable figure.
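For concreteness, here is a rough sketch of that arithmetic in Python.  The league run environment and the 10-runs-per-win conversion are assumptions for illustration, not figures from the salary chart.

```python
# A rough sketch of the wins-above-replacement arithmetic above.
# Assumed inputs: a league average of ~4.8 runs per 9 innings, 10 runs per
# win, and replacement level at 1.25x the league average.
LEAGUE_RA9 = 4.8
RUNS_PER_WIN = 10.0
REPLACEMENT_MULT = 1.25

def wins_above_replacement(ip, era_plus):
    """Rough value estimate for a starter with the given innings and ERA+."""
    avg_runs = LEAGUE_RA9 * ip / 9                # what an average pitcher allows
    repl_runs = avg_runs * REPLACEMENT_MULT       # what a replacement pitcher allows
    player_runs = avg_runs * (100.0 / era_plus)   # a 120 ERA+ allows 100/120 of average
    return (repl_runs - player_runs) / RUNS_PER_WIN

# 190 innings at a 120 ERA+ comes out around 4 wins, in line with the figure above.
print(round(wins_above_replacement(190, 120), 1))
```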
The chart gives us a $59 million deal over 4 years or $71 million over 5.  The chart was designed for last year, so adding another 10% for inflation (damn you, Federal Reserve!) would give us 65/4 or 78/5.
That would seem a fair deal, but if he holds out until the offseason, it’s hard to see somebody not overpaying and giving him Barry Zito money, as I can’t think of anything Zito had going for him that Buehrle doesn’t have.

Fun with DIPS: Not all balls in play are created equal

DIPS.  The idea that a pitcher doesn’t have any say in what happens to the ball once it is hit, short of fielding a ground ball back to him.  The now-famous original study found that pitchers showed very little year-to-year correlation in the percentage of balls in play that became hits.  However, there is a large amount of year-to-year correlation with events that the pitcher does have control over, specifically walks, strikeouts, home runs allowed, and hit batsmen.  The natural corollary of the theory was that once the ball was hit, just about every pitcher becomes a league average pitcher.
Critics of DIPS theory often point to such counter-examples as Greg Maddux, who “pitches to contact”, yet in the 1990s, was anything but league average.  Perhaps, they contend, ground ball pitchers or fly ball pitchers have better luck than others.  There has been some discussion of GB/FB rates and DIPS, but to my knowledge (and I could be mistaken), no one has ever broken down DIPS theory by the type of ball in play.
I take as my data set Retrosheet play-by-play files from 2003-2006.  I eliminated all home runs, and calculated each pitcher’s yearly BABIP on each type of ball in play (grounders, liners, pop ups, and fly balls that are not home runs).  I restricted the sample to those who had at least 25 of that type of ball in play in the year in question.  (So, a pitcher with 22 grounders and 28 liners would have an entry for BABIP for liners, but not for grounders.)  I transformed the variables using a log-odds ratio method, as is proper for rate/probability variables.  Then, as per my favorite statistical trick, I took the intraclass correlation for each type of ball in play.
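For anyone who wants to replicate that last step, here is a rough sketch of the log-odds transform and a one-way intraclass correlation, assuming a pitcher-season table of BABIP by batted-ball type that has already been filtered to the 25-ball minimum.  The column names and the average-group-size handling of unbalanced data are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def log_odds(p, eps=1e-6):
    """Log-odds transform for a rate/probability variable like BABIP."""
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def icc_oneway(df, group_col, value_col):
    """One-way random-effects ICC from a table of repeated pitcher seasons."""
    groups = df.groupby(group_col)[value_col]
    grand_mean = df[value_col].mean()
    n_groups = groups.ngroups
    k = groups.size().mean()                      # average seasons per pitcher
    ss_between = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
    ss_within = ((df[value_col] - groups.transform("mean")) ** 2).sum()
    ms_between = ss_between / (n_groups - 1)
    ms_within = ss_within / (len(df) - n_groups)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical usage: one frame per batted-ball type, one row per pitcher-season.
# gb = pd.DataFrame({"pitcher_id": [...], "season": [...], "babip": [...]})
# gb["z"] = log_odds(gb["babip"])
# print(icc_oneway(gb, "pitcher_id", "z"))
```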
The results:
Ground balls, .114
Line drives, .174
Pop ups, .075
Fly balls (non-HR), .194
You can read those ICCs much like year-to-year correlations.  The pitcher has the least control over whether pop-ups go for outs and the most control over fly balls.  Even the fly ball number works out to an R-squared value of 3.8%, which isn’t all that thrilling (it means that 96.2% of the variance is due to other factors), so the DIPS theory still seems pretty sound.  On the other hand, the R-squared value for ground balls is 1.2%, so pitchers have a little bit more control over their fly balls than they do their ground balls.  Still, those values are pretty tiny, so I wouldn’t make anything of it.  I’m not saying anything new here, but the assumption that the pitcher has no control at all over what happens to a ball in play is wrong, although not all that far off from the truth.  However, some pitchers, especially those who live on fly balls, are a little bit more in control than others.
There’s one other issue that irks me.  While doing some work for something else I’m in the process of writing, I found that the ICC for stolen base success rate (SB / (SB+CS)) was about .30.  That’s an R-squared value of 9%, which is, in perspective, a lot higher than the general BABIP ICC of .182 that I found here, but with correlation you end up on a slippery slope.  When does the ICC (or if you want to do year-to-year) become high enough that it’s a “skill” and not luck?  Is success at stealing bases a skill?  This isn’t an issue with an easy resolution in Sabermetrics or science in general, I realize, but it’s something to consider.

Cutting the ball off in the gap

A quick study.  A ground ball through the right side of the infield.  The right fielder runs over to try to cut the ball off before it gets past him and get it back into the infield before the batter gets any ideas about trying to go to second.  Will he get there and prevent an extra base hit?  And who is the best in the business at this particular skill… at least who was in 2006?
There’s plenty wrong with my methodology, but I’ll be happy with “decent approximation” on this one, given the limitations of my data set.  I’m using the Retrosheet event file for 2006.  I’m selecting for all ground balls that were fielded by one of the outfielders.  Not surprisingly, all but six of the 10,000+ balls in this category were hits.  The key variable is what sort of hit were they?  If the end result was a single, then the fielder has done his job.  If the ball goes for an extra base hit, then the fielder has failed.  The rest is just calculating a simple success rate.  I restricted the sample to players who dealt with at least 20 ground balls in their direction in 2006.  This left me with 54 left fielders, 50 center fielders, and 46 right fielders.  Players who logged significant time at two (or three) positions were eligible to repeat in multiple categories.
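Here is a rough sketch of the success-rate calculation, assuming the Retrosheet events have already been parsed into a flat table with the batted-ball type, the fielder, and the value of the hit.  The column names are illustrative, not actual Retrosheet fields, and the handful of grounders to the outfield that became outs are simply dropped.

```python
import pandas as pd

# Hypothetical columns: 'batted_ball' ('G' = ground ball), 'fielder_pos'
# (7 = LF, 8 = CF, 9 = RF), 'fielder_id', and 'hit_value' (1 = single,
# 2 = double, 3 = triple).
events = pd.read_csv("events_2006.csv")

gb_to_of = events[(events["batted_ball"] == "G")
                  & (events["fielder_pos"].isin([7, 8, 9]))
                  & (events["hit_value"] >= 1)]

# Success = the grounder was held to a single instead of an extra-base hit.
gb_to_of = gb_to_of.assign(success=gb_to_of["hit_value"] == 1)

rates = (gb_to_of.groupby(["fielder_pos", "fielder_id"])["success"]
         .agg(chances="size", success_rate="mean"))

# Keep fielders with at least 20 grounders hit their way, as in the text.
print(rates[rates["chances"] >= 20].sort_values("success_rate", ascending=False))
```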
The problems are that Retrosheet’s data do not tell us where the fielder was at the start of the play, and hit location data are pretty scarce, so we don’t know how far the fielder had to go to get to the ball (was it simply hit at him or did he really have to hustle to get that one?)  There are also park effects to consider (which I will not for this study).  Outfields are, of course, all shaped differently, and some have much more ground to “defend” than others.  There’s also the issue of a fielder actually getting to the ball to cut it off, but a fast runner taking second anyway.  We also don’t know where the ball is when the fielder reaches it, which is important information when looking at events that involve baserunning.  I also didn’t take into account liners that went through the infield, but then dropped in the outfield, effectively becoming ground balls, because they can’t be teased apart from the straight out liners in the Retrosheet data.  It’s not a perfect study, but let’s see what happens.
When sorting the data, something odd appeared.  The center fielders were all almost perfect.  I can appreciate that CF are usually better fielders and faster than their LF and RF brethren.  They also have the advantage of playing deeper in terms of physical yardage from the plate, so they have more of a chance to react, plus they don’t have to guard both the foul line and the alley.  Perhaps there’s also a bias in the way that Retrosheet notes who it was that fielded the ball.  So, let’s stick to LF and RF.
The five best LF of 2006 in terms of ground balls to them that became singles instead of XBH:
Hideki Matsui (94% success)
Jay Payton (91%)
Matt Diaz (91%)
Angel Pagan (89%, and the best name in baseball!)
Matt Holliday (89%)
The five worst LF:
Kevin Mench (59%)
Scott Spiezio (66%)
John Rodriguez (68%)
Preston Wilson (68%)
Scott Podsednik (69%)
It must have been a bad year to be a Cardinal LF.  So Taguchi is actually 8th from the bottom.  This leads to the question of whether this is a park effect or if the Cardinals front office doesn’t care about defense in left field.
Moving over to RF, the five best:
Jason Lane (98%)
Ryan Freel (97%)
Nelson Cruz (96%)
Franklin Gutierrez (96%)
Jacque Jones (95%)
And the five worst:
Chris Snelling (75%)
Ichiro (76%???)
Emil Brown (76%)
Aubrey Huff (77%)
Geoff Jenkins (78%)
A few observations: Right fielders are generally better than left fielders at turning ground balls into singles, rather than XBH.  This probably has something to do with the fact that RF usually have better arms (so batters aren’t as tempted to try for second), and for a man wearing a glove on his left hand, cutting a ball off down the right field line is a more natural pick up motion than down the left field line.  For a ball in the alley, the RF has a more difficult pick up motion, but also has the CF to help him out.  More batters are right-handed, are probably more likely to pull the ball, and can probably hit it harder down the left field line than batters hitting down the right field line.  Maybe RF are just better fielders in general.
Ichiro makes a surprise visit to the worst RF list.  It could be that because he was patrolling RF in spacious Safeco that he was simply a victim of his own home park.  Maybe it’s the real reason he’s in CF now.
How much of a difference does it make?  The most extreme left fielders are Kevin Mench (59%) and Hideki Matsui (94%).  Matsui had 32 balls hit to him, Mench had 39.  Split the difference and call it 35.  That’s an extra 12.25 balls that Matsui got to that Mench did not.  Assume that those 12 and a quarter balls went for doubles with Mench instead of singles.  Using a linear weights/runs created approach, a single is worth .47 runs, while a double is worth .78 runs (roughly; the exact numbers aren’t all that important right now).  So, 12.25 balls that turn into doubles rather than singles represent 3.8 runs.  So, the spread from best to worst is 3.8 runs, just from handling ground balls to left field.  Might not seem like much, but the actual effect is probably understated by my inability to distinguish between liners that dropped and liners that went over someone’s head.
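That run math works out in a couple of lines; here it is as a quick sketch using the numbers above.

```python
# Spread between the best and worst LF at cutting off grounders.
matsui_rate, mench_rate = 0.94, 0.59
chances = 35                            # rough midpoint of their 32 and 39 chances
single_runs, double_runs = 0.47, 0.78   # approximate linear-weights values

extra_singles = chances * (matsui_rate - mench_rate)      # ~12.25 balls cut off
run_spread = extra_singles * (double_runs - single_runs)  # singles instead of doubles
print(round(extra_singles, 2), round(run_spread, 1))      # 12.25, 3.8
```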

The Great Comeback

Nothing is more exciting than seeing a team, over the course of a number of days or weeks, slowly close the gap between them and the team leading their respective division. As we near the end of June, the number “14” once again edges into our consciousness.
Fourteen teams in MLB history have overcome a 10+ game deficit at the end of the month of June to go on and win their division or league crown. Exactly how difficult is it to overcome such a deficit? To put such comebacks in perspective, I turn to a statistic devised by Mike Murphy, on-air host and baseball guru at 670 AM in Chicago. Murphy posits that the true perspective of a divisional deficit is not how many games a team trails the division leader, but the sum total of the games behind each of the teams in front of a squad. For example, the White Sox currently trail division leader Cleveland by 11 1/2 games in the AL Central. However, with three teams in front of the White Sox, the Aggregate Deficit (as we’ll call the statistic) is 29 games. This is a staggering number!
This number becomes useful when divisional rivals play each other. For example, let’s take the aforementioned AL Central. If Cleveland, Detroit, and Minnesota were all to lose, and the White Sox were to win, the Sox would gain three games in the AD tally. However, if Cleveland and Detroit were to play each other, as well as Chicago and Minnesota, a Chicago win could net them only two games in the AD tally, and a Chicago loss would dock them two games in the AD tally.
What does this mean for a team that trails multiple teams in the same division? It becomes far harder to overcome divisional deficits over multiple teams. Assuming that only one of the three teams in front of Chicago loses each day, it would take the White Sox a minimum of twenty-nine games to overcome such a deficit, and likely many more than that.
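Here is a minimal sketch of the Aggregate Deficit calculation from a set of win-loss records.  The records below are made up for illustration; plugging in the actual AL Central standings would reproduce the 29 games cited above.

```python
def games_behind(leader, trailer):
    """Standard games-behind formula from (wins, losses) tuples."""
    return ((leader[0] - trailer[0]) + (trailer[1] - leader[1])) / 2

def aggregate_deficit(team, division):
    """Sum of games behind every club that sits ahead of `team`."""
    mine = division[team]
    deficits = [games_behind(rec, mine) for name, rec in division.items() if name != team]
    return sum(gb for gb in deficits if gb > 0)

al_central = {           # hypothetical records, for illustration only
    "Cleveland": (50, 28),
    "Detroit":   (47, 31),
    "Minnesota": (42, 36),
    "White Sox": (34, 44),
}
print(aggregate_deficit("White Sox", al_central))   # 16 + 13 + 8 = 37 games
```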
Let us apply this statistic to some of the aforementioned comebacks previously seen in Major League history.

  • 1914 – The Boston Braves stand in last place at the start of play on July 6th. They are 26-40, 15 games behind the league-leading New York Giants. Their league AD is 52, and they finally catch the Giants on August 23rd, 52 games later.
  • 1930 – The St. Louis Cardinals are in fourth place in the National League. They are 53-52, 12 games behind the league-leading Brooklyn Robins. Their league AD is 27, and they catch the NL leading Robins on September 13th, 33 games later.
  • 1978 – The New York Yankees are in fourth place in the AL East. At 48-42, they trail the division-leading Red Sox by 14 games and sport a divisional AD of 18.5. The Yankees reach first place in the division for the first time on September 10th, 52 games later.
  • 1989 – The Toronto Blue Jays stand in sixth place in the AL East. They are 38-45, 10 games behind the division-leading Baltimore Orioles, and with a divisional AD of 19. They spring into the lead for the first time on August 31st, 51 games later.

These examples sport a very obvious trend – as the leagues have expanded, it has become more and more difficult to overcome large deficits, and, more to the point, multiple-team deficits. As teams dig their holes deeper and deeper, it is exponentially more difficult to crawl out of said holes. The era of the pennant-chase comeback has not ended, but it is certainly disappearing more with every passing season.

The Replacement Pitchers

How good (or bad) is a replacement level starting pitcher?
I tried to answer this by looking at all starting pitchers for 2007 and removing the ones who are not replacement level pitchers.  In other words, take out all pitchers who started the season in the rotation, those who would have started had they not been injured, top prospects (those who made Baseball America’s top 100 list), and Roger Clemens.
What I would expect is that replacement level pitchers, as a group, would allow about 25% more runs than the league average pitcher.  That would be equivalent to a winning percentage around .400.  Maybe that should be a little lower, like the .380 suggested here, but it should be fairly close.  I used .400 when I calculated the top starting pitcher seasons of all time.
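As a sanity check on that conversion, here is a minimal sketch of the runs-to-winning-percentage math using the classic Pythagorean expectation.  The exponent of 2 is an assumption; exponents closer to 1.8 nudge the numbers slightly.

```python
def pythag_win_pct(run_ratio, exponent=2.0):
    """Expected W% for a pitcher allowing `run_ratio` times the league-average runs."""
    return 1.0 / (1.0 + run_ratio ** exponent)

# Replacement level at 25% more runs than average: about .390, right in the
# .380-.400 neighborhood discussed above.
print(round(pythag_win_pct(1.25), 3))

# The 2007 replacement group below (5.03 ERA against a 4.32 league average)
# comes out in the low .420s with this exponent, close to the .430 quoted.
print(round(pythag_win_pct(5.03 / 4.32), 3))
```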
How are they doing for 2007?  After taking out the non-replacements, as well as any pitcher who made more than half of his appearances in relief, I have 952 innings with a 5.03 ERA, against a league average of 4.32.  That’s a winning percentage of .430, so replacement level pitchers are having a pretty good season.  Perhaps I used too broad a definition of replacement pitcher, though the best pitcher in this group and the one with the most innings, Jeremy Guthrie, certainly fits the mold of a replacement level pitcher.  The Orioles got him for essentially nothing after he was a failed prospect in Cleveland and was removed from their 40-man roster.  Now, if only the Orioles could replace their bullpen…
The Orioles have done extremely well with replacement starters, both Guthrie and Brian Burres.  It’s a bright spot for an otherwise brutal team.
The Yankees had more replacement level starts (21) than any other team.  That’s almost 1/3 of their games so far.  Since Philip Hughes (top 100) and Roger Clemens don’t count, they have had to replace pitchers they were counting on with Tyler Clippard, Darrell Rasner, Matt DeSalvo, Chase Wright, and Jeff Karstens.  That group put up a collective 6.15 ERA.

Trevor Hoffman, Hall of Famer?

Congratulations to Trevor Hoffman.  A few weeks ago, Trevor notched his 500th career save, which makes him a virtual lock for the Hall of Fame.  And as my father would say at this juncture, I think that’s nice.  Really.
I don’t want to come off as saying that Trevor Hoffman is a bad pitcher.  In fact, I think he’s a rather good pitcher, and if I were a Major League manager, I’d love to hear “Hells Bells” playing and know that he was taking the mound for my team.  It’s just that I’m not sold on closers going into the Hall of Fame based only on how many saves they’ve racked up.  (I’m also not a believer in magic numbers for HOF admission, such as 3000 hits, 500 HR, etc.  For more information, please see McGriff, Frederick Stanley.)  If you’d like to make the case that Hoffman strikes a lot of batters out, keeps runners off base, doesn’t give up home runs in key situations, has good control, or signed a lot of autographs for kids before batting practice, and you want to put him in the Hall for that, I’m listening.  Just understand that I’m not all that impressed with large numbers of saves.
Those who follow Sabermetrics (or are even casual fans of the game) have probably heard the arguments on why saves aren’t a reliable indicator of a relief pitcher’s skill set.  Sure, a save means that the pitcher did something right that day.  He pitched, recorded the last out of the game, didn’t blow the lead, and his team won.  And in fact, he might have performed stunning feats of pitching heroism and actually deserve a gold star next to his name.  He may have pitched a very tense ninth and guarded a one-run lead against the middle of the best-hitting lineup in the league.  But, then again, he might have gotten a cheap save, coming in to guard a 3-run lead against some league patsies.  Just about any serviceable Major League pitcher can guard a three-run ninth inning lead almost as well as the established “closers” out there.  The save rule doesn’t discriminate.
Like it or not (and I am decidedly on the side of “not”), there is a small cadre of pitchers who make $6-8 million a year because they have attained the magical tag of “closer.”  Compare this to their brothers who make about a quarter of that, based on the fact that they are “only” middle relievers.  How did they attain this coveted status?  Why, they racked up a bunch of saves in a season.  How does one rack up saves?  Well, nowadays, by pitching the ninth inning (or in the event of an extra innings game, the last inning), whether that was actually the most critical juncture in the game or not.
It’s easy to see why people might believe that the ninth inning is the most important point in a game.  It seems a little strange to suggest otherwise.  After all, the ninth inning is the last inning.  It’s all over afterwards.  However, it’s not always the case that games are most decided by what happens in the ninth inning.  In a blowout, where the score is 16-1 and the utility infielder is living out his lifelong dream of pitching garbage time mop up relief (likely pitching to his fellow utility infielder!) what happens in the ninth inning does nothing to decide the game.  But what about the following situation?  Runners on first and third with one out in the bottom of the seventh, with your team up by a run.  You would scratch your head if the manager brought in your team’s “closer” here, because it’s not the ninth inning, but perhaps this would be a good time to do so.  Consider, your team’s chances of winning are hanging in the balance here.  A strike out or a double play would be fantastic right about now.  A home run puts you two down.  Even a base hit ties the game.  A lot depends on this at bat.  You want your best in the game right here, not your fourth best.  But, the fourth best is usually what you get, because it’s not yet a “save situation.”
How important is this situation compared to any other situation?  Well, we have a statistic called the leverage index that tells us exactly that.  The intricacies of leverage have been discussed elsewhere, but the only important thing to understand is that it’s a mathematical way of determining exactly how important any point in a given game is.  So, when does the most important at-bat (the highest leverage point) in a game occur most often?  In what inning does it usually happen?  Well, at least using data from 2000-2006, the answer may surprise you.  Take a guess.
If you said the 8th inning, you’re right, at least technically (more on this in a minute).
Inning    % of games in which highest leverage point occurred
1st          6.7%
2nd        9.0%
3rd        10.8%
4th        8.6%
5th        9.2%
6th        8.6%
7th        12.8%
8th        15.8%
9th       14.9%
extra    3.6%
I think it’s a good idea to combine the 9th inning with extra innings for this discussion because they are the “last” innings, so perhaps there is some merit to the idea that baseball games are won and lost most often in these innings (added together, the 9th and later innings account for 18.5% of all highest leverage points), but I’m betting that few fans think much about the 8th inning and how important it is.
Now, the preceding table represents all games, including blowouts where saves are generally not awarded.  (There is the odd game here and there where a reliever will go the last 3 innings to mop up in a 9-2 winning effort and be awarded a save for his effort.)  Since our discussion concerns saves, let’s restrict ourselves to games in which a save was awarded.  Highest leverage points by inning in those games where a save is awarded:
Inning    %
1-6        37.7%
7            16.6%
8            23.4%
9+         22.3%
Oh really?  In less than a quarter of the games where a save was awarded did the highest leverage situation occur in the 9th inning, and the most critical inning, a plurality of the time, was the eighth inning, not the ninth.  Given that closers generally only pitch the ninth inning, this means that it’s likely that they weren’t the ones on the mound when the big moment came.  I re-ran the numbers to account for all games in which the final score was within 3 runs, figuring that there might have been a save situation in the bottom of the ninth, but whoever was in there blew it, thus there would be no save awarded.  The numbers did change a bit from above, turning into 32.0%, 15.7%, 23.1%, and 29.2%.  So, the ninth inning contains the most critical situation in close games about 3 out of 10 times, but the eighth inning is still checking in at 23%.
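Here is a minimal sketch of how those tallies can be generated, assuming a play-by-play table that already carries a leverage index for each plate appearance and a flag for games that ended in a save.  The column names are illustrative, not actual Retrosheet fields.

```python
import pandas as pd

# Hypothetical columns: 'game_id', 'inning', 'leverage_index', 'save_awarded'.
pbp = pd.read_csv("pbp_2000_2006.csv")

save_games = pbp[pbp["save_awarded"]]

# For each game, keep the row with the single highest-leverage plate appearance.
peak = save_games.loc[save_games.groupby("game_id")["leverage_index"].idxmax()]

# Lump the 9th and extra innings together, then tally the distribution.
bucket = peak["inning"].clip(upper=9).map(lambda i: "9+" if i == 9 else str(int(i)))
print(bucket.value_counts(normalize=True).sort_index())
```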
Finally, I looked at what percentage of the time the gentleman who was awarded the save was actually the one on the mound at the point of highest leverage in the game (where his team was in the field).  The answer: 26.7% of the time.  In only 26.7% of games where a save is awarded does the “savior” actually handle the biggest at-bat that his team faces in the game.  Padres fans, in case you were wondering: in games Trevor Hoffman saved from 2000-2006 (covering 254 of his saves), he was on the mound at the point of highest leverage 25.6% of the time.
Does anyone else want to make the case that closers are overpaid?  They generally make their millions based on how many saves they rack up, yet it seems that most of the time, they’re not the ones who actually save the game.  They were simply cunning enough to get their managers to let them into the game last and took advantage of what was probably a pretty good bullpen in front of them.  A bullpen that already did the dirty work.
A few objections come to my mind.  One is that lately, teams have been designating one pitcher as an eighth inning guy in addition to designating someone as a closer.  He’s generally not quite as good as the closer, and he might “make” some high leverage situations for himself by putting a few runners on in a close game (runners usually increase leverage).  So, it might be a mark of distinction that the closer is so good that he didn’t make a mess for himself.  Another is that I could, I suppose, take a look at things from the standpoint of win probability and check to see which relief pitcher added the most win probability to the team’s chances during his tenure on the mound.  I might do that as a follow up.
Update: I ran the win probability numbers.  Percentage of games in which a save was recorded that the gentleman recording the save was the same gentleman who added the most in terms of win probability to his team (using the assumption that everything that happens while the pitcher is on the mound is his credit/fault): 20.5%.

World Series-worthy?

A favorite argument of many media types is that teams coming off a World Series victory often experience a great deal of psychological change through the victory. Being a Chicagoan, I know all too well that a championship transforms the psyche of players, coaches, and fans. But given everything else, how would World Series teams fare in different situations?
Breaking out the trusty simulator, I have placed the 2005 Chicago White Sox into the 2007 season. As many people know, the 2005 White Sox were built on pitching and timely hitting. The team ranked 11th in batting average (.262), 10th in on-base percentage (.322), last in doubles (253), and 11th in K/BB ratio (2.3). As a team, though, the White Sox were 2nd in the American League in ERA (3.61), T-1st in CG (9), 2nd in DP grounders (143), and T-3rd in WHIP (1.25). Since the 2007 White Sox team is extremely weak in hitting and built a bit better pitching-wise, would the 2005 team (similarly built) be able to hold its own in 2007?
The parameters of the simulation: I have incorporated the 2005 White Sox into the 2007 projected database. Because of limitations inherent within the simulator, I have chosen to re-generate a schedule (without doing this, I would have to manually insert the 2005 White Sox into each schedule slot previously occupied by their 2007 counterparts). I have attempted to maintain the integrity of the schedule as closely as possible. As a hypothesis, I predict that the 2005 White Sox would experience similar success even when put into the 2007 paradigm.
The first simulation run produced a result set that went a long way to refuting the above hypothesis. The 2005 White Sox toiled in obscurity from the start, locked closely in a race with the Kansas City Royals for last place in the AL Central. In the AL, the Sox finished last in batting average (.259), on-base percentage (.315), doubles (245), and RBI (657), as well as next-to-last in home runs (164). On the pitching side, Chicago finished 7th in ERA (4.64), 2nd in hits allowed (1483), 4th in DP induced (146), 6th in K/9 innings (6.6) and 10th in K/BB ratio (1.9). You can find the entire season log here.
In order to refine these results, the experiment will be run three more times. However, I need to retool the schedule as there were some glitches (a season that started on the wrong date and ran too long, as well as an incorrect number of games for teams in the NL Central). While I don’t expect said glitches to impact the ultimate results at all, it would be better to ensure the quality and integrity of the remaining season runs by fixing the schedule issues.
What surprises me most about the initial simulation is the performance of the simulated 2005 White Sox vs. the real-life 2007 White Sox. The core of the present-day team is largely intact from the 2005 campaign, yet the hitting woes that afflict the team this year seem to have visited themselves upon the simulated 2005 squad. I cannot say for certain if this is an effect of the team composition or the composition of the Major Leagues as a whole. Further simulations may serve to clear this question up further.

Are baseball players getting bigger and slower?

Willie Wilson, where have you gone?  You were tall and skinny (6’3″ 195 lbs.) and you stole bases like they were going out of style.  And every team seemed to have a copy of you.  In fact, the St. Louis Cardinals of the 1980s basically had a whole team full of guys like you.  Stolen bases used to be plentiful in baseball and league leaders used to rack up 100 of them in a season.  And back then, 30 home runs might just win you the home run crown.
But now chicks dig the long ball, and so big bulky guys who can drive the ball populate the rosters.  Big bulky guys who can’t run.  But my, can they hit!  Some commentators have blamed this for the stolen base becoming a lost art nowadays, compared to the heyday of Willie Wilson/Vince Coleman/Tim Raines.  Teams favor big mashers.  But then, there were some big mashers in the 80s, right?  Has baseball really become a big man’s game in the last few decades?  Dan Fox did some work on a similar question in Baseball Prospectus (subscription needed, and speaking as someone who has no vested interest at all in BP, other than being a subscriber myself, it’s worth it) this past January.  He looked at triples and body-mass index (BMI) over the course of the 100+ years of major league baseball being played and found that as players have gotten bigger over time, triple rates went down.  But has there been a specific shift in player size since the 1980s?
Well, let’s pop open the Lahman master database and see what we can find.  One of the 500,000,000 things to be found in the Lahman data set is a listing of all players who have ever played the game, most with height and weight information.  There’s only one height/weight listing per player.  So if you used to be a skinny-as-a-toothpick guy, but put on a lot of muscle mass quickly, you’ll forever be listed at one weight only.  (By the way, I’m not trying to imply anything with that link.  Really.  Seriously.  What?)  That’s a drawback here, but so it goes.
Let me introduce the concept of body-mass index more fully.  Some of you may be familiar with it, but for those who aren’t, it’s a measure of the ratio of weight to height.  Just knowing someone’s weight doesn’t tell you much.  (Think of a guy who is 5’3″ and 190 lbs. versus a guy who is 6’3″ and 190 lbs.)  In fact, in medical research, we find that it’s this ratio of weight to height that best predicts a lot of weight-influenced outcomes, such as risk for heart trouble.
The standard formula for BMI is weight (given in kilograms… I know, you think the metric system is evil, even though it makes a lot more sense.  And you have the same argument about the DH) divided by height squared (with height given in meters).  In general, it’s best to be between 19 and 25 on this scale for health reasons, although it’s been found that men can go up to 26 with relatively few bad effects.  BMIs from 26-30 are considered overweight and 30 and above is considered obese.  (PSA: Need to check for your own health?  Here’s a good and easy to use BMI calculator that uses inches and pounds.)
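Here is a minimal sketch of that calculation, taking height in inches and weight in pounds (the units the Lahman listings use) and converting to the metric formula.

```python
def bmi(height_in, weight_lb):
    """Body-mass index: weight in kilograms divided by height in meters, squared."""
    height_m = height_in * 0.0254
    weight_kg = weight_lb * 0.453592
    return weight_kg / height_m ** 2

# Willie Wilson at 6'3" and 195 lbs. comes out around 24.4.
print(round(bmi(75, 195), 1))
```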
I took players who were active during each of the years from 1970-2005 and calculated the league average BMI per year.  Want to see a pretty picture?
[Figure: year_by_bmi.JPG – league-average BMI by year, 1970-2005]
Sure enough, the average BMI of players was lowest in the early to mid-80s and began rising throughout the course of the decade.  I’m not sure what to make of that sudden drop in 1994 (The strike apparently was so tough that it made some of the players have to cut back on food and subsequently, they must have lost some weight!).  The drop clearly didn’t last long.  In 2005, league wide BMI was back to all-time high levels. 
So, the intuition is right.  Players really were skinnier in the 80s.  Does it mean that the increasing BMI caused the drop in stolen bases?  It doesn’t necessarily mean that, but it’s a pretty good theory.  A few more pieces of circumstantial evidence point to it.  I chopped players up by their BMI into class intervals of 1 (that is, everyone from a BMI of 22 to 22.999 was grouped together) and selected for players who had at least 100 AB (to get rid of the pitchers and cup-of-coffee guys).  I ran a one-way ANOVA on season stolen base totals (inelegant and could be more precise, but you get the idea).  Looking at the averages, we see that skinnier players steal more bases, and most of those differences are significant.
BMI               Avg. SB
0-21.999      14.27
22-22.999    13.16
23-23.999    8.85
24-24.999    7.32
25-25.999    6.75
26-inf            5.51
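Here is a rough sketch of that grouping and the one-way ANOVA, assuming a batter-season table already filtered to 100+ AB.  The column names and the use of scipy’s f_oneway are illustrative choices.

```python
import pandas as pd
from scipy import stats

# Hypothetical batter-season table with 'bmi' and 'sb' columns.
batters = pd.read_csv("batter_seasons.csv")

# Class intervals of 1 BMI unit, with everything 26 and over lumped together.
bins = [0, 22, 23, 24, 25, 26, float("inf")]
labels = ["<22", "22-23", "23-24", "24-25", "25-26", "26+"]
batters["bmi_class"] = pd.cut(batters["bmi"], bins=bins, labels=labels, right=False)

# Average stolen bases per class, plus a one-way ANOVA across the classes.
print(batters.groupby("bmi_class")["sb"].mean())
groups = [g["sb"].values for _, g in batters.groupby("bmi_class")]
print(stats.f_oneway(*groups))
```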
I could reproduce a similar chart going in the opposite direction with home runs.  So, on a very broad level, skinny guys steal bases and big guys hit home runs.  This doesn’t surprise anyone.  Sure, there are guys who can do both (and guys who can’t do either), and there are some little guys with big sticks, but most sluggers are rather big guys. 
Well, we’ve seen that the average player is getting bigger (and I don’t just mean taller and heavier), but are these bigger players getting more plate appearances?  Let me break out the fingerpaint and show you.  The following graph shows the percentage of league-wide plate appearances given to players broken down by BMI class in a given year.  The classes 1-6 correspond as they do in the table above.
[Figure: bmi_plate_appearances.JPG – percentage of league-wide plate appearances by BMI class, per year]
Teams appear to always have been fond of big guys, but in the mid-to-late-80s there was a sharp increase in the amount of playing time given to especially big guys.  The peak for skinny guys, such as it was, came in the early 80s, which corresponds, not surprisingly, to the heyday of the Wilson/Raines/Henderson/Coleman era.
Again, in this data set, a player remains in whatever BMI bracket he’s listed in for his entire career, even if in reality he put on or took off some weight from year to year.  But teams do draft, develop, and perhaps promote based on body type.  It looks like they’ve made a conscious (or perhaps not so conscious) decision to give more at-bats to big powerful guys.  It would be wrong to say that the general population of people interested in baseball is getting slower (a plausible theory, but more evidence is needed).  In fact, there’s probably plenty of fast skinny guys around who want to play baseball.
Here’s what I think is going on.  Teams control whom they draft, whom they keep on their rosters, and how they pass out at-bats.  Whether it’s because home runs sell more tickets or GMs believe that home runs win games, it’s a real pattern that’s emerged over the last 15 years.  Perhaps a few other GMs will come along and build a team like the Angels are built now (with a bunch of speedsters and Vlad) and focus their development on skinny, fast guys.  But, for now, the way to get playing time as a hitter is to be a big guy, presumably one who hits home runs.
Which brings me to an interesting point.  Not to step on J.C.’s toes, but does anyone else see an interesting economic explanation for why hitters might use Vitamin S nowadays?  Perhaps MLB teams have created the very monster that they seek to slay.  It also speaks to the home run “explosion.”  Sure, the parks are smaller, the pitching has been expansioned to death, blah blah blah, but there’s a very real physical shift to consider as well.  More home runs are being hit because teams are more and more prioritizing hitters who are physically more likely to hit home runs.
Are players getting bigger and slower?  Well, the ones who are getting playing time sure are.
