Wanted: Guys who know how to win?

In 1990, the Cleveland Indians signed former MVP (in 1979) Keith Hernandez, then 36 years old, to play first base. The year before, Hernandez had put up a nifty line of .233/.324/.326 with the Mets while playing in 75 games and losing his starting spot at first base to Dave Magadan(!). Hernandez, looking for work, must have done what the rest of us would do in that situation: he put together a resume, no doubt pointing out that he had been a member of the 1986 World Champion New York Mets. The Indians, in signing him, proclaimed that they wanted Hernandez because he knew “how to win.” (Does anyone not know that?) Hernandez lasted 43 games with the Indians, put up a lovely batting line of .200/.283/.238, and was promptly amputated from baseball.
Do GMs really look at whether a player is a “winner” when deciding whether to sign him? Does wearing a World Series ring give a player a little more leverage in securing a contract, especially if he’s an established veteran? The answer might surprise you. But let’s define some terms first.
I took the Lahman database and coded all hitters from 1960-2005 as having either played on a World Series team or not. To qualify, a player had to have logged 100 AB in the season in which his team won the World Series. This did two things: it got rid of all the pitchers batting, and it got rid of the cup-of-coffee call-up guys who didn’t really do much for the team on its run to glory. Players who met these criteria were forever labeled “ring wearers.”
Now, how can we tell whether a player gets a little boost from wearing that ring? The best way is to look at the margins: the point at which a player ceases being useful, or at which no one else will sign him to a contract. So, we can look at whether former champions are allowed to fall further than non-champions by looking at how players performed in their last season. That gives us an idea of when a player is no longer wanted in the game of baseball. If guys who have rings are allowed to continue playing despite hitting at a much lower level than those who don’t have rings, then we have evidence that GMs might be keeping them around for their knowledge of “how to win.”
But there’s a problem here, namely a sampling issue. A player’s last year can happen for a couple of reasons. He might decide to retire on his own terms, or he might simply find that no one wants him any more. Which players tend to retire on their own terms, still able to hold down a regular job? The ones who were pretty good to begin with. Who are the ones to retire because no one wants them? The guys who spent the year mostly on the bench, whether they have a ring or not. The other problem with my sample is that it includes both the free-agency period and the reserve-clause era. So, I restricted the sample to those who retired after the onset of free agency, when they could have actually caught on with another team if they had wanted to.
We need to compare apples to apples, or at least get as close as we can to making the samples equal. In this case, I’m only interested in those who retired because of their (lack of) ability. So, I want to focus on guys who were mostly sitting on the bench during their last year, so as not to introduce the bias of those who retired while still productive members of a starting lineup, and who could, in theory, have continued on as bench players had they so chosen (cough*Julio Franco*cough). So, I isolated gentlemen who were not in the top nine of their team’s AB rankings in their final year of play. I realize that none of this is an exact proxy for bench player vs. starter, but it’s close enough. So, do bench players with a ring perform any worse or better than their counterparts without a ring in their last year before they are kindly asked not to return?
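For the curious, the coding is simple enough to sketch out. Here’s roughly what it looks like in Python with pandas, assuming the standard Lahman CSV files (the WSWin flag is a real Lahman column; the file paths and variable names are just mine):

```python
import pandas as pd

batting = pd.read_csv("Batting.csv")  # playerID, yearID, teamID, AB, ...
teams = pd.read_csv("Teams.csv")      # yearID, teamID, WSWin, ...

# Ring wearers: 100+ AB in a season for a World Series winner, 1960-2005
winners = teams[(teams["WSWin"] == "Y") &
                teams["yearID"].between(1960, 2005)][["yearID", "teamID"]]
ring_wearers = set(batting[batting["AB"] >= 100]
                   .merge(winners, on=["yearID", "teamID"])["playerID"])

# Bench players: not in the top nine of their team's AB rankings
batting["ab_rank"] = (batting.groupby(["yearID", "teamID"])["AB"]
                      .rank(ascending=False, method="first"))
batting["bench"] = batting["ab_rank"] > 9

# A player's final year is simply his last yearID in the database
final_year = batting.groupby("playerID")["yearID"].max()
```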
The answer turns out to be a little surprising.  There is a difference between those with a World Series ring and those without, but not in the direction you might expect.
Bench players without a ring hit an average of .228/.295/.327
Bench players with a ring hit an average of .235/.310/.342
(In case anyone was wondering, the differences in OBP and SLG are statistically significant; the difference in AVG just misses.)
It looks like players without a ring can actually get away with a worse performance than those with a ring before they leave the game. There are two possible explanations. One is that the players with rings were better players overall, maintained their level long enough to hang around as bench players, and then decided to retire on their own terms. The other is that teams see the ring and expect a certain level of performance, more so than they would expect out of a guy without a ring. When he doesn’t deliver, they sour on him more quickly.
So, do GMs really look for “guys who know how to win”? Maybe, but it seems that they’re actually looking more for guys who put up decent numbers. As they should. Having won a World Series is nice, but like everything else in life, the question is: what have you done for me lately?

Do the Astros miss Adam Everett?

On June 14, Adam Everett, the best defensive shortstop in the majors, broke his leg and hasn’t played since. The Astros have replaced him primarily with Mark Loretta, who is a better hitter (pretty much anybody is) but an absolute joke as a defensive shortstop.
According to the latest updated Ultimate Zone Rating numbers, courtesy of Mitchel Lichtman, Everett was 11 runs better than an average shortstop, and did this in about a third of a season’s worth of playing time. Everett as a +30 defender seems ridiculously high, but it’s right in line with what he’s done over the last few years, and less than what he did in 2006. Loretta was at -5 runs, but in only 16 defensive games. By the Zone Rating published by The Hardball Times, Loretta has made plays on only 38 of 64 balls hit into his zone, a terrible .594 zone rating, and he gets to few balls outside his zone as well. Is he really a -50 fielder over a full season? It’s highly unlikely that he’s as bad as the small sample shows, about as unlikely as a random player who starts the year 38-for-100 being a true .380 hitter. However, I think it’s reasonable to assume that Loretta, who is getting older and is primarily a second baseman, would be as bad as the worst regular defensive shortstops if he played regularly. He might be the worst. I’ll guess that over a full season he’d be -20 runs at shortstop.
If the difference between Everett and Loretta is 50 runs over a full season, that’s about 67 hits (figuring roughly 0.75 runs per hit, which is about what a hit-versus-out swing is worth by linear weights), or 0.4 per game. What has been the actual difference for the Astros since the injury?
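The back-of-the-envelope math, for anyone who wants to fiddle with the assumptions (the 0.75 runs-per-hit conversion and the -20 guess for Loretta are mine, as above):

```python
everett_full = 30     # Everett's UZR pace, extrapolated to a full season
loretta_full = -20    # my guess for Loretta as a full-time shortstop

run_gap = everett_full - loretta_full   # ~50 runs over a full season
RUNS_PER_HIT = 0.75                     # rough value of a hit vs. an out
hit_gap = run_gap / RUNS_PER_HIT        # ~67 hits
print(hit_gap, hit_gap / 162)           # ~66.7 hits, ~0.41 per game
```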

The All Under-rated team of 2007 (so far)

Of course, you know about all of these guys. You knew about them three years ago, and you “called it” that most of them would have monster years. Right. It’s just all those morons out there who still think that Mike Piazza is a good player who keep these guys from getting their due. In fact, I bet that if I asked you nicely, you’d tell me who will be on the All-Under-rated team of 2010. You so know how to spell Tulowitzki. You’re that guy who knew about My Chemical Romance before they got big. And now, you don’t listen to them any more because they “sold out.”
It’s not you that I’m writing this for; it’s for all of “them” who haven’t stopped to think about any of these players yet, even though they should have.  Without further ado, I present the All Under-rated team of 2007 (so far).

Miscellany on AstroTurf and my recent obsession with DIPS

A few minor musings that I’ve been tinkering with lately.  For years I’ve heard that AstroTurf is a “faster” surface, so a ball hit on the ground rolls faster, and presumably has more of a chance to go through for a base hit.  I guess rolling a ball across fuzzy cement is easier than rolling it across actual grass, but does that really translate into more grounders that get through the infield?  To check, I isolated all ground balls hit from 2003-2006 (Retrosheet, I love you.)  If the ball was fielded by an infielder (even if it went for an infield hit), I coded it as not going through the infield.  The fielder got there.  It’s not the grass or turf’s fault that he couldn’t make the throw.  If it was fielded by an outfielder, the ball clearly went through the infield.
I calculated a park effect of sorts for this particular event. I compared how each team’s defense did at cutting balls off at home vs. on the road, and how its offense did at punching the ball through at home vs. on the road. To get my park effect, I took home % / road % for offense and defense separately and averaged the two results. Like regular park effects, the results were actually pretty variable. The ICC was around .18, meaning that there was very little year-to-year consistency. Not only that, but the three stadia that still employ artificial turf (Tropicana Field, SkyDome… er, the Rogers Centre, and the Metrodome) were generally in the middle of the pack. Veterans Stadium also used turf in 2003, as did Stade Olympique in Montreal in 2003 and 2004, although the Expos were busy playing in Puerto Rico a good chunk of the time in those years.
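If you want to replicate the park effect, the formula is just this (a sketch; the through-the-infield percentages would come from the Retrosheet coding described above, and the illustrative numbers are made up):

```python
def grounder_park_factor(off_home, off_road, def_home, def_road):
    """Average of home/road ratios for offense and defense.

    Each argument is the fraction of ground balls that made it through
    to an outfielder: the team's offense at home and on the road, and
    its defense (opponents' grounders) at home and on the road.
    """
    return ((off_home / off_road) + (def_home / def_road)) / 2

# Made-up example: a park that lets a few extra grounders through
print(grounder_park_factor(0.31, 0.29, 0.32, 0.30))  # ~1.07
```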
Artificial turf may actually make for a faster rolling ball, but teams probably solve that by playing back a bit and the strategy doesn’t seem to have any ill effects for balls rocketing through the infield on the ground.  In 1988, Bill James wrote, “Our idea of what makes a team good on artificial turf is not supported by any research.”  Seems like finding a bunch of ground ball hitting speed demons to take advantage of some sort of property of the turf isn’t such a good idea.  That property isn’t there.
Also, I’ve been a bit obsessed with DIPS lately, and a thought occurred to me. I’m now fairly convinced that there is a small amount of skill (small, but present) in a pitcher influencing what happens to the ball when it comes off the bat. Perhaps the pitcher can place the ball in a particular spot to induce a ground ball that will go toward the third baseman. Again, it might not be a great amount of influence, but maybe we can figure out some of what drives that skill. What if we controlled for… well, control? A pitcher who has good control doesn’t walk many batters. I regressed BABIP on walk rate and saved the residuals. (My data set was pitchers from 2003-2006 with a minimum of 50 IP.) The ICC for the residuals is .209, which is a touch higher than I’d found previously. But then again, the ICC for BABIP proper in this sample was .216. So, controlling for walk rate actually made things worse. I tried with strikeout rate, and that didn’t work either.
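For anyone who wants to try this at home, here’s the gist of the procedure (a sketch: a plain least-squares fit of BABIP on walk rate with the residuals saved, plus a simple one-way ANOVA version of the ICC grouping pitcher-seasons by pitcher, which may not match my exact formula):

```python
import numpy as np
import pandas as pd

def babip_residuals(df):
    """Regress BABIP on walk rate; return what the fit can't explain."""
    slope, intercept = np.polyfit(df["bb_rate"], df["babip"], 1)
    return df["babip"] - (intercept + slope * df["bb_rate"])

def icc_oneway(df, col, group="playerID"):
    """One-way ICC: how much of the variance lies between pitchers."""
    g = df.groupby(group)[col]
    n, k = len(df), g.size().mean()   # k = average seasons per pitcher
    msb = (g.size() * (g.mean() - df[col].mean()) ** 2).sum() / (g.ngroups - 1)
    msw = g.apply(lambda s: ((s - s.mean()) ** 2).sum()).sum() / (n - g.ngroups)
    return (msb - msw) / (msb + (k - 1) * msw)
```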
Back to the drawing board, I suppose.

League differences in the 1950s

If you look at the batting statistics of Willie Mays and Mickey Mantle at their peaks, it is obvious to anyone with the slightest familiarity with sabermetric techniques that Mantle was the superior hitter at his peak. Of course, Willie Mays was the better fielder, and both his peak period and his total career were far longer than Mantle’s.
Just as a hitter, though, over their best 3, 4, or 5 years (or however you want to define a peak), Mantle accounted for more runs (about 15 per year by my Base Runs calculations) while using fewer outs – about 80, or enough to be worth another 15 runs.
But what if Mays, playing in the National League, was facing tougher competition than Mantle was in the American League? I don’t remember where I heard this first, but it seems to be accepted as conventional wisdom now. The question is: is this really true, and if so, how big is the effect? Are we talking about knocking 5 runs a year off Mantle, or enough to make their peak offensive seasons equal?
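For reference, the Base Runs estimator I’m working from looks like this (David Smyth’s basic version, or close to it; my actual coefficients may differ a little, and this simple form ignores steals, hit-by-pitches, and the rest):

```python
def base_runs(ab, h, tb, hr, bb):
    """Basic Base Runs: A * B / (B + C) + D."""
    a = h + bb - hr                                         # baserunners
    b = (1.4 * tb - 0.6 * h - 3.0 * hr + 0.1 * bb) * 1.02  # advancement
    c = ab - h                                              # outs
    d = hr                                                  # homers always score
    return a * b / (b + c) + d

# Mantle's 1956 line (AB, H, TB, HR, BB) comes out to roughly 162 runs
print(base_runs(533, 188, 376, 52, 112))
```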

Testing the Ewing Theory

I have to be honest: I know as much about basketball as I know about nuclear physics.  A little bit, but enough that if you put me in charge of either a nuclear reactor or a basketball team, there would be a catastrophe.  Worse, I would swear to you that I knew what I was doing right up to the part where things started exploding.
But, then there’s the easy-to-understand Ewing Theory, coined by ESPN’s ever-delightful Bill Simmons and named after former Knicks’ center Patrick Ewing.  Simmons credits this particular theory to a friend who noticed that whenever Ewing wasn’t playing, the team that he was on at the time seemed to actually play better.  Odd for a man who was usually the undisputed “best player on the team” by far.  After all, how could subtracting such a player make the team better?  Still, Simmons put together a pretty impressive list of other times that the Ewing Theory seems to have worked in the past across a number of sports, most of them single games.  After all, the ’88 Dodgers won the World Series, despite having lost Kirk Gibson to an injury (although I’m told that he did have one pinch at-bat).
There are plenty of things that people believe about sports (and life) that simply aren’t true.  It makes no sense that taking away a team’s best player would do anything other than make them worse.  If you cherry pick your examples, you can make a (false) argument that the theory is valid.  And there will be plenty of examples from which to choose.  After all, taking away the star player doesn’t reduce the team to complete impotence.  Suppose that your team is involved in a big game, and right before the game, the star player decides to retire and take up competitive poker instead.  Let’s say that your chances of winning the game were 60% beforehand, and now they are 40%.  Well, that means that 4 times out of 10, the Ewing Theory “works”.  All that’s left is to forget about the six times that you lost, and you have a nice happy illusion.
Or is it an illusion?  The idea behind the Ewing Theory is that a basketball team (or any team) can become overly dependent on one star, and that his removal can have the effect of making other team members “step up.”  It has a certain logic to it.  In basketball, where five players must interact with each other to achieve their goal (put the ball in our basket, keep it out of theirs), I could actually see it working.  If too much of the focus is on the star player (the “Ewing,” if you will), the other players may not get a chance to show off some of the talents that they have.  Taking away the Ewing means that they get a chance to use them, and the talents which they are able to uncover are actually better, as a whole, than those of the team before the Ewing decided to take up poker.  I have no idea how exactly that would work in basketball, but the one guy who never gets to shoot, even though he’s really good at it, might get a chance to show that he can.  In baseball, it seems a little less obvious how this might work.  Perhaps players get moved around the lineup and asked to do things (hit for power instead of contact) that they were really quite good at, but never had the chance to do.
The question of whether or not the Ewing Theory actually works in baseball is one that can be answered with a few pokes around the data.  The key in investigating something like the Ewing Theory is not to cherry-pick examples.  We need a very large data set, and we need to look at all games in which our “Ewings” were absent.  In general, good players don’t get a lot of rest (although they may get hurt), so we might be looking at just a handful of games per Ewing.  So, I’m going to be looking at the years 1980-2006 (thanks to the magic of Retrosheet game logs) to ensure that I have a big enough sample size.  I eliminated 1981 because the season was dramatically shortened by the strike.
Let’s identify some Ewings.  First off, in baseball, the only way to really look at the Ewing Theory is with hitters.  An injured ace starter would really only affect a team in one out of five games (the ones that he was slated to pitch), but hitters play every day.  The Ewing Theory also stipulates that a player must be a superstar and that the team is basically “his.”  So, let’s isolate the hitters who are really good, say the top 30 hitters in baseball as ranked by OPS within the year in question (with a 400 AB minimum).  This eliminates the Dmitri Youngs of the world who win the McDonalds’ Chef of the Year Award as the best hitters on bad teams (actually, Young was a Ewing on the 2003 Tigers!).  Now, a good hitter who has two or three other superstars around him probably won’t be missed as much.  But, if he’s the only superstar-level hitter, he really qualifies as a Ewing.  So, if a team has two players in the top 30 for a year, neither one counts as a Ewing for that year.  To give you an idea of who’s left, in 2006, there were nine players who fit this definition of a Ewing: Aramis Ramirez, Carlos Guillen, Miguel Cabrera, Lance Berkman, Vladimir Guerrero, Frank Thomas, Ryan Howard, Jason Bay, and Albert Pujols.  Really good hitters without someone else on their team to back them up.  Overall, I came up with 275 Ewings over the 26 years under study.
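In code, the selection rule for a single season might look like this (a sketch; `season` is assumed to be a table with one row per batter and AB, OPS, teamID, and playerID columns already computed, run once per year):

```python
def find_ewings(season, top_n=30, min_ab=400):
    """Top-N OPS hitters (min AB) who are their team's only such hitter."""
    top = season[season["AB"] >= min_ab].nlargest(top_n, "OPS")
    # A team with two or more top-30 hitters produces no Ewings at all
    teammates = top.groupby("teamID")["playerID"].transform("count")
    return top[teammates == 1]
```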
For those curious, the most often “Ewingized” players (is that even a word?) over the course of the last few decades were Frank Thomas and Sammy Sosa (7 times each), followed by Mike Schmidt (6), and Carlos Delgado, Vlad Guerrero, Brian Giles, and Fred McGriff (each with 5).
Thanks to the magic of Retrosheet game logs, we have the starting lineup for every game played in baseball during the time in question, and the score.  I assumed that if a player was not in the starting lineup, he did not play.  That’s, of course, flawed, as he might have pinch hit, but it’s close enough for government work.
Once I’ve identified my Ewings, it’s easy enough to identify games in which they started and those in which they didn’t.  I took the team’s overall W% when the Ewing started.  Suppose that a team had a .600 win percentage with the Ewing in the lineup in whatever year I’m looking at.  Then, let’s suppose that in the three games where he didn’t play, the team won two and lost one.  Each of those wins counts as .4 wins above what we would expect; the loss counts as a .6-win debit (or .6 losses, however you want to look at it).  If teams really do compensate in some way when losing a star player, we would expect that adding up all of these numbers over all teams and all years would produce zero (if they break even) or a positive number if they somehow manage to do even better.  If teams actually suffer from missing their Ewing, the sum would be negative.
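The bookkeeping from that paragraph, in code (a sketch; `games` is assumed to hold one row per team-game for a given team-season, with flags for whether the Ewing started and whether the team won):

```python
def wins_above_expected_without_ewing(games):
    """Sum of (actual result - expected W%) over games the Ewing missed."""
    baseline = games.loc[games["ewing_started"], "won"].mean()  # W% with him
    missed = games.loc[~games["ewing_started"], "won"]
    # A win counts +(1 - baseline); a loss counts -baseline
    return (missed - baseline).sum()
```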
The rest is just crunching the numbers.  So, what do the numbers say?  Over the course of the last 26 years (excluding 1981), teams were 137 wins below what they would be expected to do when their Ewing was missing.  Teams really do get worse when you take away their best player.  Over the course of the seasons in question, these teams combined for a 19209-18883 record with their Ewing in the lineup (a winning percentage of .504), and a 2680-3001 record without him (.472).
Now, that’s not to say that there weren’t teams that got better after the loss of their star.  The most extreme example was the 2004 Los Angeles California Angels of Anaheim California which is near Los Angeles.  That year, the Angels were a .530 team in the 115 games where Vlad was in the starting lineup and a .660 team in the 47 games when he wasn’t.  That’s a total of 6 extra wins without Vlad over what they would have been expected to win if they had just held steady to what they did with Vlad.  The 1987 Brewers, on the other hand, really missed Paul Molitor (in the year of his 39-game hitting streak… I was at game number 35, in the fourth row!  Best seats I’ve ever had for a baseball game.)  They were a .647 team when he was in the lineup and a .348 team when he wasn’t, for a total of 13 extra losses without Molitor.  A few Ewings never gave their teams a chance to test the theory.  Cal Ripken, in 1991, was in the middle of an MVP season, but… well, perhaps you’re familiar with his work attendance rates during that time period.  In all, 99 teams got better without their Ewing, 3 performed the same, and 161 were worse off.  At least in baseball, the Ewing Theory doesn’t work.
The magic of the Ewing Theory is in the small sample size.  Overall, teams lost about a three-percentage-point chance of winning by having their Ewing amputated, but that still leaves a 47% chance of winning.  Over the course of a season, a reduction of 3% in win chances is about 5 wins.  But, if all you care about is one game, then you’ve got a sporting chance of having your theory “proved” right.  It’s just that that’s not the way to bet.
