Wanted: Guys who know how to win?

In 1990, the Cleveland Indians signed former MVP (in 1979) Keith Hernandez, then 36 years old, to play first base.  The year before, Hernandez had put up a nifty line of .233/.324/.326 with the Mets while playing in 75 games and losing his starting spot at first base to Dave Magadan(!).  Hernandez, looking for work, must have done what the rest of us would do in that situation: he put together a resume, no doubt pointing out that he had been a member of the 1986 World Champion New York Mets.  The Indians, in signing him, proclaimed that they wanted Hernandez because he knew "how to win."  (Does anyone not know that?)  Hernandez lasted 43 games with the Indians, put up a lovely batting line of .200/.283/.238, and was promptly amputated from baseball.
Do GMs really look at whether a player is a "winner" when choosing whether to sign him?  Does wearing a World Series ring give a player a little more leverage in securing a contract, especially if he's an established veteran?  The answer might surprise you.  But let's define some terms first.
I took the Lahman database and coded all hitters from 1960-2005 as having either played on a World Series team or not.  In order to qualify, they had to have logged 100 AB in the season in which their team won the World Series.  This did two things: It got rid of all the pitchers batting and got rid of the cup-of-coffee call-up guys that didn’t really do much for the team on their run to glory.  If they met these criteria, they were forever labeled as “ring wearers.”
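For anyone who wants to replicate the coding step, it might look something like the sketch below.  The two tables (a Lahman-style Batting table and a year/team list of World Series winners) and their column names are assumptions for illustration, not the exact setup I used.

```python
import pandas as pd

def flag_ring_wearers(batting: pd.DataFrame, ws_winners: pd.DataFrame) -> pd.DataFrame:
    """Label every hitter who logged 100+ AB for a team in the year it won the Series."""
    # batting: one row per player-season with playerID, yearID, teamID, AB (assumed)
    # ws_winners: one row per champion with yearID, teamID (assumed)
    champs = batting.merge(ws_winners, on=["yearID", "teamID"])
    ring_ids = set(champs.loc[champs["AB"] >= 100, "playerID"])
    out = batting.copy()
    # The label is permanent: every season by a qualifying player gets marked.
    out["ring_wearer"] = out["playerID"].isin(ring_ids)
    return out
```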
Now, how to tell whether a player gets a little boost from wearing that ring?  The best place to look is the margins: the point at which a player ceases being useful, or at which no one else will sign him to a contract.  So, we could look to see whether former champions are allowed to fall further than non-champions by looking at how they performed in their last season.  That gives us an idea of when a player is no longer wanted in the game of baseball.  If guys who have rings are allowed to continue playing despite hitting at a much lower level than those who don't have rings, then we have evidence that GMs might be keeping them around for their knowledge of "how to win."
But there are other problems here, namely a sampling issue.  A player's last year can happen for a couple of reasons.  He might decide to retire on his own terms, or he might simply find that no one wants him any more.  Which players tend to retire on their own terms, still able to hold down a regular job?  The ones who were pretty good to begin with.  Who are the ones to retire because no one wants them?  The guys who spent the year mostly on the bench, whether they have a ring or not.  The other problem with my sample is that it includes both the free agency period and the reserve clause era.  So, I restricted the sample to those who retired after the onset of free agency, when they might actually have caught on with another team if they had wanted to.
We need to compare apples to apples, or at least get as close as we can in making the samples equal.  In this case, I'm only interested in those who retired because of their (lack of) ability.  So, I want to focus on guys who were mostly sitting on the bench during their last year, so as not to introduce the bias of those who retired still as productive members of a starting lineup, and who could, in theory, have continued on as bench players had they so chosen (cough*Julio Franco*cough).  So, I isolated gentlemen who were not in the top nine in their team's AB rankings in their final year of play.  I realize that none of this is an exact bench player vs. starter split, but it's close enough.  So, do bench players with a ring perform any worse or better than their counterparts without a ring in their last year before they are kindly asked not to return?
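Continuing the sketch above, the final-season bench filter could be written roughly like this.  The 1977 cutoff for the free-agency era is my assumption, and a player traded mid-season would need extra handling that a sketch can ignore.

```python
import pandas as pd

def final_season_bench(batting: pd.DataFrame, first_fa_year: int = 1977) -> pd.DataFrame:
    """Each player's last season, free-agency era only, excluding his team's top nine in AB."""
    df = batting.copy()
    # Rank hitters within each team-season by at-bats (1 = most AB).
    df["ab_rank"] = df.groupby(["yearID", "teamID"])["AB"].rank(ascending=False, method="min")
    # Keep only each player's final season.
    last_year = df.groupby("playerID")["yearID"].transform("max")
    df = df[df["yearID"] == last_year]
    df = df[df["yearID"] >= first_fa_year]   # assumed free-agency-era cutoff
    return df[df["ab_rank"] > 9]             # bench guys: not in the team's top nine in AB
```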
The answer turns out to be a little surprising.  There is a difference between those with a World Series ring and those without, but not in the direction you might expect.
Bench players without a ring hit an average of .228/.295/.327
Bench players with a ring hit an average of  .235/.310/.342
(In case anyone was wondering, the differences in OBP and SLG are statistically significant; AVG just misses.)
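The post doesn't specify the test, but a check along these lines could be run with something as simple as Welch's t-test on per-player final-season OBP, continuing the sketch above (scipy assumed; the OBP components are Lahman-style columns).

```python
import pandas as pd
from scipy import stats

def compare_final_obp(bench: pd.DataFrame):
    """Compare final-season OBP between ring wearers and everyone else (Welch's t-test)."""
    obp = (bench["H"] + bench["BB"] + bench["HBP"]) / (
        bench["AB"] + bench["BB"] + bench["HBP"] + bench["SF"])
    return stats.ttest_ind(obp[bench["ring_wearer"]],
                           obp[~bench["ring_wearer"]],
                           equal_var=False)
```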
It looks like players without a ring can actually get away with a worse performance than those with a ring before they leave the game.  There are two possible explanations.  One is that the players with rings are better players overall, maintained their level long enough to hang around as bench players, and then decided to retire on their own terms.  The other is that teams see the ring and expect a certain level of performance, more than they would expect out of a guy without a ring.  When he doesn't deliver, they sour on him more quickly.
So, do GMs really look for "guys who know how to win?"  Maybe, but it seems that they're actually looking more for guys who put up decent numbers.  As they should.  Having won a World Series is nice, but like everything else in life, the question is: what have you done for me lately?

Do the Astros miss Adam Everett?

On June 14, Adam Everett, the best defensive shortstop in the majors, broke his leg and hasn't played since.  The Astros have replaced him primarily with Mark Loretta, who is a better hitter (pretty much anybody is) but an absolute joke as a defensive shortstop.
According to the latest updated Ultimate Zone Rating numbers, courtesy of Mitchel Lichtman, Everett was 11 runs better than an average shortstop, and did this in about 1/3 of a season's worth of playing time.  Everett as a +30 defender seems ridiculously high, but it's right in line with what he's done over the last few years, and less than what he did in 2006.  Loretta was at -5 runs, but in only 16 defensive games.  By the Zone Rating published by the Hardball Times, Loretta has made plays on only 38 of 64 balls hit into his zone, a terrible .594 zone rating, and he gets to few balls outside his zone as well.  Is he really a -50 fielder over a full season?  It's highly unlikely that he's as bad as the small sample shows him to be, about as unlikely as a random player who starts the year 38-for-100 being a true .380 hitter.  However, I think it is reasonable to assume that Loretta, who is getting older and is primarily a second baseman, would be as bad as the worst regular defensive shortstops if he played regularly.  He might be the worst.  I'll guess that over a full season he'd be -20 runs at shortstop.
If the difference between Everett and Loretta is 50 runs over a full season, that’s about 67 hits, or 0.4 per game.  What has been the actual difference for the Astros since the injury?
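A quick reconstruction of that arithmetic; the 0.75 runs-per-hit figure is my assumption (roughly the linear-weights value of turning an out into a single), not a number from the post.

```python
runs_gap = 50                      # Everett minus Loretta, runs over a full season
runs_per_hit = 0.75                # assumed: ~value of turning an out into a single
extra_hits = runs_gap / runs_per_hit
print(round(extra_hits), round(extra_hits / 162, 2))   # 67 hits, about 0.41 per game
```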

The All Under-rated team of 2007 (so far)

Of course, you know about all of these guys.  You knew about them three years ago and you "called it" that most of them would have monster years.  Right.  It's just all those morons out there who still think that Mike Piazza is a good player who keep these guys from getting their due.  In fact, I bet that if I asked you nicely, you'd tell me who will be on the All-Under-rated team of 2010.  You so know how to spell Tulowitzki.  You're that guy who knew about My Chemical Romance before they got big.  And now, you don't listen to them any more because they "sold out."
It’s not you that I’m writing this for; it’s for all of “them” who haven’t stopped to think about any of these players yet, even though they should have.  Without further ado, I present the All Under-rated team of 2007 (so far).

Miscellany on AstroTurf and my recent obsession with DIPS

A few minor musings that I’ve been tinkering with lately.  For years I’ve heard that AstroTurf is a “faster” surface, so a ball hit on the ground rolls faster, and presumably has more of a chance to go through for a base hit.  I guess rolling a ball across fuzzy cement is easier than rolling it across actual grass, but does that really translate into more grounders that get through the infield?  To check, I isolated all ground balls hit from 2003-2006 (Retrosheet, I love you.)  If the ball was fielded by an infielder (even if it went for an infield hit), I coded it as not going through the infield.  The fielder got there.  It’s not the grass or turf’s fault that he couldn’t make the throw.  If it was fielded by an outfielder, the ball clearly went through the infield.
I calculated a park effect of sorts for this particular event.  I compared how teams' defenses did at cutting balls off on the road vs. at home, and then how their offenses did at punching the ball through at home and on the road.  To get my park effect, I took home % / road % for offense and defense separately and averaged the two results.  Like regular park effects, the results were actually pretty variable.  ICC was around .18, meaning that there was very little year-to-year consistency.  Not only that, but the three stadia that still employ artificial turf (Tropicana Field, SkyDome… er, the Rogers Centre, and the Metrodome) were generally in the middle of the pack.  Veterans Stadium also used turf in 2003, as did Stade Olympique in Montreal in 2003 and 2004, although the Expos were too busy playing in Puerto Rico half the time in those years.
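For reference, the park-factor arithmetic boils down to something like this; the per-team home/road rates of grounders reaching an outfielder are assumed to have been tallied already from the Retrosheet coding described above.

```python
import pandas as pd

def groundball_park_factor(rates: pd.DataFrame) -> pd.Series:
    """Average of offense and defense home/road ratios, one value per team-season.

    rates is assumed to hold, per team-season, the share of ground balls that
    reached an outfielder: off_home, off_road (the team's own offense) and
    def_home, def_road (what the team's defense allowed).
    """
    off = rates["off_home"] / rates["off_road"]
    dfn = rates["def_home"] / rates["def_road"]
    return (off + dfn) / 2
```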
Artificial turf may actually make for a faster rolling ball, but teams probably solve that by playing back a bit and the strategy doesn’t seem to have any ill effects for balls rocketing through the infield on the ground.  In 1988, Bill James wrote, “Our idea of what makes a team good on artificial turf is not supported by any research.”  Seems like finding a bunch of ground ball hitting speed demons to take advantage of some sort of property of the turf isn’t such a good idea.  That property isn’t there.
Also, I've been a bit obsessed with DIPS lately, and a thought occurred to me.  I'm now fairly convinced that there is a small amount of skill (small, but present) in a pitcher influencing what happens to the ball when it comes off the bat.  Perhaps the pitcher can place the ball in a particular spot to induce a ground ball that will go toward the third baseman.  Again, it might not be a great amount of influence, but maybe we can figure out some of what drives that skill.  What if we controlled for… well, control?  A pitcher who has good control doesn't walk many batters.  I regressed BABIP on walk rate and saved the residuals.  (My data set was pitchers from 2003-2006 with a minimum of 50 IP.)  The ICC for the residuals is .209, which is a touch higher than I'd found previously.  But then again, I looked at the ICC for BABIP proper in this sample and it was .216.  So, controlling for walk rate actually made things worse.  I tried strikeout rate, and that didn't work either.
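A sketch of the walk-rate adjustment, with statsmodels assumed and illustrative column names.  The year-to-year helper is only a crude stand-in for the AR(1) intraclass correlation actually used, which would come from a repeated-measures model.

```python
import pandas as pd
import statsmodels.api as sm

def babip_residuals(p: pd.DataFrame) -> pd.DataFrame:
    """Regress BABIP on walk rate across pitcher-seasons (50+ IP assumed) and keep residuals."""
    model = sm.OLS(p["babip"], sm.add_constant(p["bb_rate"])).fit()
    out = p.copy()
    out["babip_resid"] = model.resid
    return out

def year_to_year_r(p: pd.DataFrame, col: str) -> float:
    """Correlate consecutive seasons of the same pitcher -- a rough proxy for the ICC."""
    wide = p.pivot_table(index="playerID", columns="yearID", values=col)
    pairs = [wide[[y, y + 1]].dropna().set_axis(["y1", "y2"], axis=1)
             for y in wide.columns if y + 1 in wide.columns]
    stacked = pd.concat(pairs)
    return stacked["y1"].corr(stacked["y2"])
```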
Back to the drawing board, I suppose.

League differences in the 1950's

If you look at the batting statistics of Willie Mays and Mickey Mantle at their peaks, it is obvious to anyone with the slightest familiarity with sabermetric techniques that Mantle was the superior hitter. Of course, Willie Mays was the better fielder, and both his peak period and his total career were far longer than Mantle's.
Just as a hitter, though, over their best 3, 4, or 5 years (or however you want to define a peak), Mantle accounted for more runs (about 15 per year by my BaseRuns calculations) while using fewer outs – about 80, or enough to be worth another 15 runs.
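The post doesn't show which BaseRuns version was used, so for readers who want to replicate the comparison, here's one common basic formulation as a sketch; the 1.02 scaler is an assumption, and this is not necessarily the exact calculation behind the numbers above.

```python
def baseruns(h: int, bb: int, hr: int, tb: int, ab: int) -> float:
    """One common basic BaseRuns formulation; not necessarily the version used above."""
    a = h + bb - hr                                       # baserunners
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02   # advancement (assumed scaler)
    c = ab - h                                            # outs, roughly
    d = hr                                                # runs that score themselves
    return a * b / (b + c) + d
```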
But what if Mays, playing in the National League, was facing tougher competition than Mantle was in the American League? I don't remember where I heard this first, but it seems to be accepted as conventional wisdom now. The question is: is this really true, and if so, how big is the effect? Are we talking about knocking 5 runs a year off Mantle, or enough to make their peak offensive seasons equal?

Testing the Ewing Theory

I have to be honest: I know as much about basketball as I know about nuclear physics.  A little bit, but enough that if you put me in charge of either a nuclear reactor or a basketball team, there would be a catastrophe.  Worse, I would swear to you that I knew what I was doing right up to the part where things started exploding.
But, then there’s the easy-to-understand Ewing Theory, coined by ESPN’s ever-delightful Bill Simmons and named after former Knicks’ center Patrick Ewing.  Simmons credits this particular theory to a friend who noticed that whenever Ewing wasn’t playing, the team that he was on at the time seemed to actually play better.  Odd for a man who was usually the undisputed “best player on the team” by far.  After all, how could subtracting such a player make the team better?  Still, Simmons put together a pretty impressive list of other times that the Ewing Theory seems to have worked in the past across a number of sports, most of them single games.  After all, the ’88 Dodgers won the World Series, despite having lost Kirk Gibson to an injury (although I’m told that he did have one pinch at-bat).
There are plenty of things that people believe about sports (and life) that simply aren’t true.  It makes no sense that taking away a team’s best player would do anything other than make them worse.  If you cherry pick your examples, you can make a (false) argument that the theory is valid.  And there will be plenty of examples from which to choose.  After all, taking away the star player doesn’t reduce the team to complete impotence.  Suppose that your team is involved in a big game, and right before the game, the star player decides to retire and take up competitive poker instead.  Let’s say that your chances of winning the game were 60% beforehand, and now they are 40%.  Well, that means that 4 times out of 10, the Ewing Theory “works”.  All that’s left is to forget about the six times that you lost, and you have a nice happy illusion.
Or is it an illusion?  The idea behind the Ewing Theory is that a basketball team (or any team) can become so dependent on one star that his removal has the effect of making the other team members "step up."  It has a certain logic to it.  In basketball, where five players must interact with each other to achieve their goal (put the ball in our basket, keep it out of theirs), I could actually see it working.  If too much of the focus is on the star player (the "Ewing," if you will), the other players may not get a chance to show off some of the talents that they have.  Taking away the Ewing means that they get a chance to use them, and the talents they are able to uncover are actually better, as a whole, than those the team had before the Ewing decided to take up poker.  I have no idea exactly how that would work in basketball, but the one guy who never gets to shoot, even though he's really good at it, might get a chance to show that he can.  In baseball, it seems a little less obvious how this might work.  Perhaps players get moved around the lineup and asked to do things (hit for power instead of contact) that they were really quite good at, but never had the chance to do.
The question of whether or not the Ewing Theory actually works in baseball is one that can be answered with a few pokes around the data.  The key in investigating something like the Ewing theory is not to cherry pick examples.  We need a very large data set, and we need to look at all games in which our “Ewings” were absent.  In general, good players don’t get a lot of rest (although they may get hurt), so we might be looking at just a handful of games per Ewing to investigate.  So, I’m going to be looking at the years 1980-2006 (thanks to the magic of Retrosheet game logs) to ensure that I have a big enough sample size.  I eliminated 1981 because the season was dramatically shortened due to the strike.
Let's identify some Ewings.  First off, in baseball, the only way to really look at the Ewing Theory is with hitters.  An injured ace starter would really only affect a team in one out of five games (the ones that he was slated to pitch).  But hitters play every day.  The Ewing Theory also stipulates that a player must be a superstar and that the team is basically "his".  So, let's isolate the hitters who are really good, say the top 30 hitters in baseball as ranked by OPS within the year in question (with a 400 AB minimum).  This eliminates the Dmitri Youngs of the world who win the McDonald's Chef of the Year Award as the best hitters on bad teams (actually, Young was a Ewing on the 2003 Tigers!).  Now, a good hitter who has two or three other superstars around him probably won't be missed as much.  But, if he's the only superstar-level hitter, he really qualifies as a Ewing.  So, if a team has two players in the top 30 for a year, neither one counts as a Ewing for that year.  To give you an idea of who's left, in 2006, there were nine players who fit this definition of a Ewing: Aramis Ramirez, Carlos Guillen, Miguel Cabrera, Lance Berkman, Vladimir Guerrero, Frank Thomas, Ryan Howard, Jason Bay, and Albert Pujols.  Really good hitters without someone else on their team to back them up.  Overall, I came up with 275 Ewings over the 26 years under study.
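Here is roughly how that selection might look, assuming a DataFrame of hitter-seasons with OPS already computed; the column names are illustrative rather than the exact ones I used.

```python
import pandas as pd

def find_ewings(hitters: pd.DataFrame) -> pd.DataFrame:
    """Top-30 OPS hitters (400+ AB) who are the only such hitter on their team that year."""
    q = hitters[hitters["AB"] >= 400].copy()
    q["ops_rank"] = q.groupby("yearID")["OPS"].rank(ascending=False, method="min")
    top30 = q[q["ops_rank"] <= 30]
    # A team with two or more top-30 hitters has no Ewing that year.
    teammates = top30.groupby(["yearID", "teamID"])["playerID"].transform("count")
    return top30[teammates == 1]
```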
For those curious, the most often “Ewingized” players (is that even a word?) over the course of the last few decades were Frank Thomas and Sammy Sosa (7 times each), followed by Mike Schmidt (6), and Carlos Delgado, Vlad Guerrero, Brian Giles, and Fred McGriff (each with 5).
Thanks to the magic of Retrosheet game logs, we have the starting lineup for every game played in baseball during the time in question, and the score.  I assumed that if a player was not in the starting lineup, he did not play.  That’s, of course, flawed, as he might have pinch hit, but it’s close enough for government work.
Once I've identified my Ewings, it's easy enough to identify games in which they started and those in which they didn't.  I took the team's overall W% when the Ewing started.  Suppose that a team had a .600 winning percentage with the Ewing in the lineup in whatever year I'm looking at.  Then, let's suppose that in the three games where he didn't play, the team won two and lost one.  For each of those two wins, that's .4 wins above what we would expect from them.  For the loss, they get a .6 win debit (or .6 losses, however you want to look at it).  If teams really do compensate in some way when losing a star player, we would expect that adding up all of these numbers over all teams and all years would produce zero (if they break even) or a positive number if they somehow manage to do even better.  If teams do actually suffer from missing their Ewing, the sum would be negative.
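In code, the bookkeeping for one team-season might look like this; the per-game frame with a win flag and an ewing_started flag is an assumed structure built from the Retrosheet game logs.

```python
import pandas as pd

def wins_above_expectation(games: pd.DataFrame) -> float:
    """Sum of (actual - expected) results for the games the Ewing did not start.

    games: one team-season, one row per game, with columns
    win (1/0) and ewing_started (True/False).
    """
    expected = games.loc[games["ewing_started"], "win"].mean()   # W% with the Ewing
    without = games.loc[~games["ewing_started"], "win"]
    # A win without him credits (1 - expected); a loss debits expected.
    return float((without - expected).sum())
```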
The rest is just crunching the numbers.  So, what do the numbers say?  Over the course of the last 26 years (excluding 1981), teams were 137 wins below what they would be expected to do when their Ewing was missing.  Teams really do get worse when you take away their best player.  Over the course of the seasons in question, these teams combined for a record of 19209-18883 with their Ewing in the lineup (a winning percentage of .504) and a 2680-3001 record without (.472).
Now, that's not to say that there weren't teams that got better after the loss of their star.  The most extreme example was the 2004 Los Angeles California Angels of Anaheim, California, which is near Los Angeles.  That year, the Angels were a .530 team in the 115 games where Vlad was in the starting lineup and a .660 team in the 47 games when he wasn't.  That's a total of 6 extra wins without Vlad over what they would have been expected to win if they had just held steady to what they did with Vlad.  The 1987 Brewers, on the other hand, really missed Paul Molitor (in the year of his 39-game hit streak… I was at game number 35, in the fourth row!  Best seats I've ever had for a baseball game).  They were a .647 team when he was in the lineup and .348 when he wasn't, for a total of 13 extra losses without Molitor.  A few Ewings never gave their teams a chance to test the theory.  Cal Ripken, in 1991, was in the middle of an MVP season, but… well, perhaps you're familiar with his work attendance rates during that time period.  In all, 99 teams got better without their Ewing, 3 performed the same, and 161 were worse off.  At least in baseball, the Ewing Theory doesn't work.
The magic of the Ewing Theory is in the small sample size.  Overall, teams lost about a three percent chance of winning by having their Ewing amputated, but that still leaves a 47% chance of winning.  Over the course of a season, a reduction of 3% in win chances is about 5 wins.  But, if all you care about is one game, then you've got a sporting chance of having your theory "proved" right.  It's just that that's not the way to bet.

Break To Break

Phil Rogers wrote a very interesting column a couple of days ago (you can find it here but registration is required) regarding records of teams from All-Star Break ’06 to All-Star Break ’07. I was very surprised by who the division leaders are in B2B record:
AL East – NY Yankees (89-72)
AL Central – Minnesota (94-70)
AL West – LA Angels (99-63)
AL Wildcard – Oakland (92-70)
NL East – NY Mets (92-68)
NL Central – Milwaukee (80-80)
NL West – LA Dodgers (91-72)
NL Wildcard – San Diego (89-72)
I was surprised to see two teams from each Western Division in the mix. While each league's western third seems to always be forgotten, both the AL West and the NL West feature two teams with very good records. One must also remember that Seattle and Arizona have both had very good first halves this season and are right up in the mix as well. A number that surprised me (but not too much) was the Milwaukee Brewers' mark. Rogers mentioned in his article that the NL Central as a whole is 49 games under .500 since the All-Star break last year. The NL Central is certainly the worst division in baseball this year, as it was last year – St. Louis and Houston, in contention last year, are quickly falling out of the race; Cincinnati and Pittsburgh were never really in it to begin with; and the Chicago Cubs have marginally improved but still hover near .500 (whereas last year they were substantially below break-even).
This is a statistic that really falls under the column of “Huh! I did not know that.” However, it does seem to be a decent indicator of trends over the past twelve months for many teams – for example, my hometown Chicago White Sox are 72-88 since the break of ’06 – a 20-game deficit behind the Twins. (Sorry, Sox fans, but it doesn’t look good for them.) If nothing else, it will interest me to see how the trends continue during the second half this season.

DIPS and handedness

“And [Jesus] answered and said, He that dippeth his hand with me in the dish, the same shall betray me.” – Matthew 26:23 (KJV).
Nothing like starting off a Sabermetric blog post with a quote from the Bible, eh?  You know you’ve been doing Sabermetrics too long when even your preacher’s sermons give you ideas for a study.  (My pastor actually reads StatSpeak too.)
By this point, I presume that everyone in the room is familiar with DIPS theory.  It says that pitchers have very little to no control over whether a ball becomes an out or not once it is hit.  A little while ago, I noted that certain types of batted balls (i.e., grounders, line drives, etc.) were more in the pitcher's control than others.  I decided to take a look to see whether handedness makes any difference.  The pitcher can throw with either his left or his right arm, and the batter can stand in one of two rectangles drawn for his convenience.  This creates four possible combinations of pitcher and hitter handedness.
I took my trusty Retrosheet 2000-2006 database and selected out all the balls in play, then coded whether the pitcher (whose hand preference was obviously fixed) was facing a right-handed batter or a left-handed batter.  (Switch hitters were coded as whatever they were batting in that particular plate appearance, usually opposite to the pitcher.)  To qualify for these analyses, the pitcher had to have at least 50 balls in play against the batter handedness in question (so if Larry had 52 balls in play against righties and 48 against lefties, only his data against righties was retained.)
I calculated BABIP (yes, that's the stat for me!  If you get that joke, you win a cookie!) for each pitcher-year, separately for plate appearances against lefties and righties.  Those of you who know my statistical style well know what's coming next: I took the log of the odds ratio for the BABIP, then computed the AR(1) intraclass correlation coefficient.
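A sketch of the bookkeeping up to the log-odds step is below; the per-ball-in-play frame and its columns are assumptions, and the AR(1) ICC itself would come from a repeated-measures model that this doesn't reproduce.

```python
import numpy as np
import pandas as pd

def babip_log_odds(bip: pd.DataFrame) -> pd.DataFrame:
    """Per pitcher-season-handedness BABIP (50+ balls in play), expressed as log odds.

    bip: one row per ball in play, with columns pitcherID, yearID,
    bat_hand ('L'/'R'), and hit (1 if it fell in, 0 if converted to an out).
    """
    g = bip.groupby(["pitcherID", "yearID", "bat_hand"])["hit"].agg(babip="mean", n="count")
    g = g.reset_index()
    g = g[g["n"] >= 50]                                   # the 50-BIP minimum per split
    g["log_odds"] = np.log(g["babip"] / (1 - g["babip"]))
    return g
```

The resulting ICCs, by matchup: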
Right-handed pitcher vs.
Right-handed batter: .181
Left-handed batter: .105
Left-handed pitcher vs.
Right-handed batter: .190
Left-handed batter: -.025, which, in this case, basically means nil.
Pitchers have more control over what happens when a right-handed batter is in the box (“more” being a relative matter; those r-squared values are still south of 4%), and lefty pitchers are at the mercy of the league average when a left-handed hitter makes contact against them.
My theory on this?  Well, it’s the same reason with two different interpretations.  Pitchers see fewer left-handed batters than right-handed batters.  There are simply fewer lefties in the world and in baseball than there are righties.  This could have two effects.  The more statistical explanation is that pitchers see fewer lefties, which means a smaller sample size per season relative to righties.  Smaller sample sizes are more volatile, which is hard on a year-to-year correlational method or an ICC.
The other possibility is that there really is some small thing that the pitcher can do to spin the ball in a certain way or throw it in a certain way that actually makes a batted ball more likely to go for an out.  Pitchers, obviously, would want to develop this skill as much as they could.  DIPS tells us that the effect of this skill is minimal, but any advantage is welcome.  Well, the spin of the ball affects right handers and left handers differently.  A ball that tails away from a righty breaks in on a lefty, so pitchers must plan accordingly.  And they have more chances to practice this skill on right handed hitters.
Or it’s just statistical noise.

More thoughts on the Buehrle contract

When I left this continent for the one where cheese, wine, and soccer are more important than baseball, Buehrle's contract was in the works but not yet signed.  In my last column I speculated that on a 4-year deal he would be worth 59-65 million dollars.
The White Sox signed him for 56 million, so it looks like they got a little bargain.  The reaction among baseball's "thinking fans" seems a bit mixed.  Some strongly disapprove and call it a horrible signing.  Others think it worked out pretty well for the White Sox.  A big difference between the two opinions is how you project Buehrle's workload going forward.  There's no chance he increases his workload, which is about as high as any pitcher has handled in recent years.  He might keep pitching 230 innings per year, he might decline a little, or he might get hurt and pitch very little despite having shown no sign of injury so far.
I don't know what the best way to get comparables is.  In the last column I looked at free agent pitchers whom teams were willing to part with draft picks for – in other words, pretty good pitchers.  I came up with 180 innings to expect for next year, but couldn't look at a 4-year projection because I was looking at recent signings.
I tried another approach.  To control for both quality and durability, I looked at pitchers who pitched after 1970 (let's keep the pre-Tommy John era, and certainly the deadball pitchers, out of the comparables) and who had 75 or more wins through age 27 (Buehrle had 97).  There were 60 such pitchers.  Then I excluded current players like Barry Zito (what Zito does in 2009 might be useful to know for Buehrle, but I don't know what it is yet).  Finally, I made sure that the pitcher hadn't already started to show arm problems: to stay in the group, he had to pitch at least 200 innings at age 28.  All indications are that Buehrle, having a fine season, will do so himself.  In any case, he has shown no performance or health problems during the 2007 season so far.
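A sketch of that filter against a Lahman-style pitching table; the age column is assumed to be precomputed, and the exclusion of active pitchers would still need to be handled separately.

```python
import pandas as pd

def buehrle_comparables(pitching: pd.DataFrame) -> pd.Series:
    """Post-1970 pitchers with 75+ wins through age 27 and 200+ IP at age 28.

    pitching: one row per pitcher-season with playerID, yearID, age, W,
    and IPouts (outs recorded, Lahman-style).
    """
    modern = pitching[pitching["yearID"] > 1970]
    wins_thru_27 = modern[modern["age"] <= 27].groupby("playerID")["W"].sum()
    qualified = wins_thru_27[wins_thru_27 >= 75].index
    age_28 = modern[(modern["age"] == 28) & modern["playerID"].isin(qualified)]
    ip_28 = age_28.groupby("playerID")["IPouts"].sum() / 3
    return ip_28[ip_28 >= 200]    # the comparable group; current pitchers still need excluding
```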
The innings I get for the years covered by the contract are:
2008  180
2009  172
2010  152
2011  137
The results are closer to Chris Dial’s group than David Gassko’s.
Just because we can't guarantee, or even expect, that Buehrle will throw 230 innings like he usually does doesn't mean that this is a bad deal.  If he pitches 180 innings at his usual rate, he is worth almost 4 wins above a replacement pitcher.  It turns out Kenny Williams is paying him like he expects only a 3.5-win pitcher, if you adjust the salary calculator for inflation (10%).  As for the further decline in innings: the calculator already assumes that the player will decline by 0.5 wins per year, and every year there is more and more inflation.
The White Sox got a pretty good deal in the context of what teams pay for free agents.  They likely would not do better by saving their money and buying something else this offseason.  One more thing about the salary calculator: if we knew that Buehrle would pitch 230 innings every year and do it at the top of his ability (say, a 130 ERA+), he would be worth 5.5 wins per year (he's had 4 years like that in his career) and over 20 million per season.
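To make the decline-plus-inflation logic concrete, here is a toy version; it is not the actual salary calculator referenced above, and the dollars-per-win figure is my assumption.

```python
def contract_value(first_year_wins: float, dollars_per_win: float,
                   years: int = 4, decline: float = 0.5, inflation: float = 0.10) -> float:
    """Toy contract valuation: wins decline 0.5/year, the price of a win inflates 10%/year."""
    total = 0.0
    for t in range(years):
        wins = max(first_year_wins - decline * t, 0.0)
        total += wins * dollars_per_win * (1 + inflation) ** t
    return total

# With an assumed ~$4.5M per win, a 3.5-win pitcher prices out near the $56M paid.
print(round(contract_value(3.5, 4.5e6) / 1e6, 1))   # 56.2
```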
You might make the argument that regardless of what teams pay, he’s not worth that much revenue to the White Sox.  And you might be right.  You’d also have to say that average-ish pitchers like Ted Lilly and Gil Meche are not worth 10-11 million, and that free agent pitchers should be avoided.  It may be best, financially, to be like the Pirates, spend little, win 70 games, and milk the revenue sharing.   I kind of prefer teams that actually try to win though.  Good move, Mr. Williams.
