

Scoring the Crunchy Power Rankings: 2013 Edition

We check back to see if the CPR did a good job of predicting second-half team performances.

Every couple of weeks during the MLS season I publish our Crunchy Power Rankings, an objective ranking of teams based on the detailed stats that Opta tracks for the league. The argument for them is that they distill out a lot of the luck that affects individual games, luck that can have large effects on the standings over short or medium runs of games (even up to a significant chunk of the season). I think they're interesting, and after a couple of years of doing it I feel pretty confident that they provide value, and particularly predictive value. That is, a team's ranking in the Crunchy Power Rankings is a pretty good indicator of how that team will perform in the future: a better indicator than "eye-test" power rankings, and even than the current standings.

To test that theory, I'm going to correlate the CPR I published at midseason with the second-half standings and see whether it's a better predictor than the first-half standings or the power rankings published by the league and by ESPN. I did something similar early in the season using the 2012 season. The CPR did quite well, particularly after I updated it to include the Recoveries stat, but that test was a little compromised because the time period I was examining was also one I had used to generate the weightings for the stats, so there was a risk of over-fitting. The fit data was only one half-season out of three seasons of data, but it would still be better to examine a time period that isn't included in the calculation of the weights. So that's what I'm doing now.

The first thing we need is the second-half standings. The rankings I'll be evaluating are the ones published on July 22 (actually a little past halfway). Incidentally, if you have time to go back and read the commentary on those rankings, I hope you'll find the predictions pretty accurate:

Seattle is... we're holding out hope for Seattle. The good news is, as I've mentioned, both Vancouver and FC Dallas aren't looking strong. The bad news is LA certainly is. And even if RSL aren't impressive statistically, they've already accumulated so many points it doesn't matter. The other bad news is Colorado is looking good. If one of the Whitecaps or FCD fall out, it's just as likely (maybe more likely) that the Rapids take that spot. It may take both Vancouver and Dallas losing their playoff positions for the Sounders to make it.

That's exactly what happened. Both Dallas and Vancouver fell out of playoff position and were replaced by Colorado and Seattle. That's the kind of insight that I think the CPR can provide.

But, back to the second half standings. Here are the PPG standings for teams in games played after those rankings were published:

2013 Standings (After 7/24)
Team W D L Pts PPG GD
New York 8 3 2 27 2.08 12
San Jose 8 3 2 27 2.08 4
New England 7 3 4 24 1.71 4
Sporting KC 7 1 5 22 1.69 6
Seattle 8 3 5 27 1.69 -1
Colorado 6 2 4 20 1.67 5
Chicago 7 4 4 25 1.67 0
Portland 6 5 3 23 1.64 9
LA Galaxy 5 5 3 20 1.54 8
Houston 6 4 5 22 1.47 -3
Real Salt Lake 5 4 4 19 1.46 3
Columbus 6 0 8 18 1.29 -2
Philadelphia 4 3 6 15 1.15 -4
Vancouver 4 4 6 16 1.14 3
Montreal 5 2 8 17 1.13 -1
Toronto 4 3 7 15 1.07 -6
FC Dallas 3 3 7 12 0.92 -4
Chivas USA 2 3 9 9 0.64 -20
DC United 1 3 10 6 0.43 -13
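The points and PPG columns above follow directly from the W/D/L columns (three points for a win, one for a draw, points divided by games played). A minimal sketch, using a few records copied from the table:

```python
# Points and points-per-game from a W/D/L record:
# 3 points per win, 1 per draw, 0 per loss.
def ppg(wins, draws, losses):
    points = 3 * wins + draws
    games = wins + draws + losses
    return points, round(points / games, 2)

# A few rows from the second-half table above.
print(ppg(8, 3, 2))   # New York / San Jose -> (27, 2.08)
print(ppg(7, 1, 5))   # Sporting KC -> (22, 1.69)
print(ppg(1, 3, 10))  # DC United -> (6, 0.43)
```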

I always enjoy looking at these because I think they're pretty counterintuitive. People tend to decide whether a team is 'bad' or 'good' about halfway through the season and just stick to that evaluation, especially since the first-half results weigh so heavily on where a team currently sits in the standings. But there can be pretty dramatic differences in results between the halves, and there's no reason to think the first half is the more 'real' reflection of a team than the second half. For example, it's no surprise that New York had the best second-half record... they went on to win the Supporters' Shield. But did you realize the Earthquakes had the exact same PPG (and the exact same W/D/L results)? Their terrible first half buried them, but they were arguably the best team in the league in the second half.

In the other direction, FC Dallas and Montreal, who both led the Supporters' Shield standings for multiple weeks, were two of the five worst teams in the second half. Their names should be familiar if you read the CPR, because we consistently called them out as teams that were getting results above their performance. The final narrative with Montreal was that their old roster couldn't keep up their level of performance, but I think the truth is that their level of performance was never that good in the first place... they were just getting lucky.

So, on to the evaluation. We have four models to look at: the CPR I ran on July 24, the actual MLS standings on that date, and the power rankings published by and by ESPN FC (both on July 23). For each, I'll correlate the model's rankings with the ranking by actual second-half results, and I'll also calculate the per-team difference and the root mean squared error.

Team 2nd Half CPR Diff 1st Half Diff MLS PR Diff ESPN PR Diff
New York 1 4 3 8 7 6 5 6 5
San Jose 2 7 5 16 14 15 13 16 14
New England 3 17 14 13 10 14 11 14 11
Sporting KC 4 1 3 2 2 2 2 1 3
Seattle 5 11 6 12 7 13 8 12 7
Colorado 6 6 0 11 5 7 1 9 3
Chicago 7 3 4 14 7 12 5 11 4
Portland 8 5 3 3 5 3 5 3 5
LA Galaxy 9 2 7 6 3 4 5 4 5
Houston 10 8 2 7 3 10 0 10 0
Real Salt Lake 11 13 2 1 10 1 10 2 9
Columbus 12 10 2 15 3 16 4 15 3
Philadelphia 13 9 4 10 3 8 5 8 5
Vancouver 14 15 1 5 9 5 9 5 9
Montreal 15 12 3 4 11 9 6 7 8
Toronto 16 18 2 18 2 18 2 18 2
FC Dallas 17 14 3 9 8 11 6 13 4
Chivas USA 18 19 1 17 1 17 1 17 1
DC United 19 16 3 19 0 19 0 19 0

A couple of things jump out quickly. First, there's almost no point in tracking the two power rankings separately. Despite being written by completely different sets of people, they're essentially the same list, with neither differing from the other by more than 2 ranks for any team. That's largely driven by the fact that neither will deviate significantly from the standings: the MLS rankings deviate from the standings by more than 3 ranks for only two teams (Montreal and Colorado), and ESPN's for only one (FC Dallas). The same is true of every other power ranking I looked at... Yahoo, Bleacher Report, etc. You can understand the incentives... nobody wants to deal with the negative attention they'd get for ranking a team 10 spots away from where they sit in the standings. But if the goal is to accurately reflect the quality of the teams, they're failing.

Second, consider the significant deviations, which is what I call deviations of 8 or more ranking spots, the same standard I used in the previous study. For the CPR, there's one: New England, which is off by 14. There seems to be one team every year that just defies the crunch. Last season it was Colorado, which the CPR liked all season but which never showed up in the standings. This year, New England way overplayed their stats in the second half. It'll be interesting to see if there's a modification I can make that will improve the Revolution result; it was after a similar large deviation last year that I added the Recoveries stat, which significantly improved the model. Another possibility is that the Revs were simply getting lucky in the second half, as much as Montreal was getting lucky in the first. As for the other models: the first-half standings had six significant deviations (about a third of the league), and the power rankings had five each. Based on those results I'd expect the CPR to do well in the actual correlations, so let's check those.
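Those deviation counts can be tallied mechanically. A quick sketch, with the Diff columns copied straight out of the comparison table above (in table order, New York first and DC United last):

```python
# Diff columns from the comparison table, in table order.
diffs = {
    "CPR":      [3, 5, 14, 3, 6, 0, 4, 3, 7, 2, 2, 2, 4, 1, 3, 2, 3, 1, 3],
    "1st Half": [7, 14, 10, 2, 7, 5, 7, 5, 3, 3, 10, 3, 3, 9, 11, 2, 8, 1, 0],
    "MLS PR":   [5, 13, 11, 2, 8, 1, 5, 5, 5, 0, 10, 4, 5, 9, 6, 2, 6, 1, 0],
    "ESPN PR":  [5, 14, 11, 3, 7, 3, 4, 5, 5, 0, 9, 3, 5, 9, 8, 2, 4, 1, 0],
}

# A "significant deviation" is a miss of 8 or more ranking spots.
for model, d in diffs.items():
    print(model, sum(1 for x in d if x >= 8))
# CPR 1, 1st Half 6, MLS PR 5, ESPN PR 5
```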

Model RMSE Correlation Corr. w/out DC & Chivas
CPR 4.65 0.64 0.53
Standings 6.90 0.21 -0.08
MLS PR 6.32 0.34 0.09
ESPN PR 6.31 0.34 0.09

The Root Mean Squared Error is a standard measure of deviation and you want it to be low. You can see that CPR is beating out the other models handily here. The Power Rankings do a little better than the standings, since they tended to shade Dallas and Montreal lower in the rankings (though not nearly enough). But again, they don't differ much since the source rankings don't differ that much from the standings.

The correlation between the ranking data is an even better measure, and the CPR significantly outperforms there too, with three times the correlation of the first-half standings and twice that of the power rankings. So I think that's a pretty solid vindication: if you want some idea of how a team will play in the second half of the season, you're much, much better off looking at the CPR than at the current standings or any power rankings.
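Both columns of the table can be reproduced from the rank data above. RMSE is the square root of the mean squared rank difference, and the published correlations match Spearman's rank correlation; since both columns here are tie-free permutations of 1 through 19, the classic shortcut formula using the sum of squared rank differences applies exactly. A sketch using the CPR column:

```python
from math import sqrt

# Second-half result ranks and CPR ranks, in table order (New York ... DC United).
actual = list(range(1, 20))
cpr = [4, 7, 17, 1, 11, 6, 3, 5, 2, 8, 13, 10, 9, 15, 12, 18, 14, 19, 16]

n = len(actual)
d2 = [(a - c) ** 2 for a, c in zip(actual, cpr)]

# Root mean squared error of the rank differences.
rmse = sqrt(sum(d2) / n)

# Spearman's rho via 1 - 6*sum(d^2) / (n*(n^2 - 1)); exact here because
# both rankings are tie-free permutations.
rho = 1 - 6 * sum(d2) / (n * (n * n - 1))

print(round(rmse, 2), round(rho, 2))  # 4.65 0.64
```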

One thing that occurred to me is that this season was a slight anomaly in that a few of the ranks were just braindead easy. After about the first month of the season, it was clear that Chivas USA and DC United (and to a lesser extent Toronto) were just awful, awful teams and were going nowhere. They were consistently at the bottom of the standings, the CPR, and any power ranking that didn't want to get laughed out of the building. To create a stiffer test, I also calculated the correlations with DC and Chivas excluded, which I've added to the table. The results of that were pretty amazing. All the correlations go down, but the CPR stays above 50%. The others effectively go to zero (and in fact, the standings correlation goes negative). What that means is that, if you take out those two easy calls, just rolling dice would have done as good a job of predicting second-half performance as looking at the standings or the power rankings.
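One way to reproduce the excluded-teams column (I'm assuming this is the method; re-ranking the surviving 17 teams first gives a slightly different number, around 0.55 for the CPR) is to drop the DC United and Chivas USA rows and correlate the remaining published rank values directly:

```python
from math import sqrt

# Pearson correlation of two equal-length sequences.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

actual = list(range(1, 20))
cpr = [4, 7, 17, 1, 11, 6, 3, 5, 2, 8, 13, 10, 9, 15, 12, 18, 14, 19, 16]

# Drop Chivas USA (second-half rank 18) and DC United (rank 19),
# keeping the remaining rank values as published (no re-ranking).
keep = [i for i, a in enumerate(actual) if a not in (18, 19)]
a17 = [actual[i] for i in keep]
c17 = [cpr[i] for i in keep]

print(round(pearson(a17, c17), 2))  # 0.53
```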

So overall I'm impressed with the CPR's performance. At least for this season, it was tremendously more accurate at predicting future performance than the standings or the power rankings, which are largely just cribbed from the standings. In fact, it's so far ahead that I suspect this might simply have been a particularly good season for the CPR. We'll see next year.
