
Correlating Wins in Year N and Year N+1

There are many advanced projection systems that do a great job of projecting team wins. I'm not interested in recreating those or coming up with my own system, but rather in setting a baseline for what a projection system should hope to accomplish. You'll see what I mean in a few moments.

Test #1: Every Team Is The Same

This is the simplest baseline: let’s project each team to go 8-8. If you did that in every season from 1989 to 2014, your model would have been off by, on average, 2.48 wins per team. This is calculated by taking the absolute value of the difference between 0.500 and each team’s actual winning percentage, and multiplying that result by 16. So that should be the absolute floor for any projection model: you have to come closer than that.
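For the curious, here is a minimal sketch of that calculation. The list below is stand-in data; the real version would hold every team-season winning percentage from 1989 to 2014.

```python
# Test #1 baseline: project every team at .500 (8-8) and measure the
# average absolute miss, converted to wins over a 16-game season.
actual_win_pcts = [0.750, 0.500, 0.250, 0.625]  # stand-in data

misses = [abs(wp - 0.500) * 16 for wp in actual_win_pcts]
print(f"Average miss: {sum(misses) / len(misses):.2f} wins per team")  # article: 2.48
```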

Test #2: Every Team Does What They Did Last Year

Looking at all teams from 1990 to 2014, I calculated their winning percentages in that season (Year N) and in the prior season (Year N-1). If you used the previous year’s record to project this year’s record, you would have been off by, on average, 2.84 wins per team. That’s right: you are better off predicting every team to go 8-8 than to predict every team to repeat what they did last season.
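The same harness scores this baseline; again, `pairs` is stand-in data, where the real version would hold each team's Year N-1 and Year N winning percentages.

```python
# Test #2 baseline: project each team to simply repeat last year's record.
pairs = [(0.750, 0.563), (0.250, 0.438)]  # (Year N-1 win%, Year N win%), stand-in data

misses = [abs(curr - prev) * 16 for prev, curr in pairs]
print(f"Average miss: {sum(misses) / len(misses):.2f} wins per team")  # article: 2.84
```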

But that's simply an artifact of the power of regression to the mean: it doesn't mean last year doesn't matter, just that we need to be a little smarter about how we use it. If you run a regression using last year's winning percentage to predict this year's winning percentage, the best-fit formula (R^2 = 0.11) is 0.338 + 0.327 * N-1_Win%. This shows the power of regression to the mean: we only take about one-third of last year's winning percentage when projecting this year's winning percentage.

So instead of using last year's winning percentage directly, let's use a regressed version of it. This improves our model, reducing the average miss to 2.35 wins per team, a drop of about 17% from the repeat-last-year projection.
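If it helps to see the mechanics, here is a minimal sketch of that regression; the arrays below are stand-in data, where the real version would hold one entry per team-season from 1990 to 2014.

```python
import numpy as np

# Regress Year N winning percentage on Year N-1 winning percentage,
# then score the regressed projection in wins per team.
prev = np.array([0.750, 0.250, 0.563, 0.438])  # stand-in data
curr = np.array([0.563, 0.438, 0.500, 0.500])

slope, intercept = np.polyfit(prev, curr, 1)  # article: ~0.327 and ~0.338
projected = intercept + slope * prev
avg_miss = np.mean(np.abs(curr - projected)) * 16
print(f"Average miss: {avg_miss:.2f} wins per team")  # article: 2.35
```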

Test #3: Pythagenpat Winning Percentage

Another option is to use a regressed form of Pythagenpat Winning Percentage. The best-fit formula (R^2 = 0.13) is 0.302 + 0.400 * N-1_Pythagenpat_Win%. Just looking at the coefficients, you can see that this is slightly more precise than just using last year's winning percentage: the constant has dropped by about four percentage points, from 0.338 to 0.302, while the weight on last year's data has jumped from about 1/3 to 2/5. By using this regressed version of Pythagenpat Winning Percentage, we reduce the delta to 2.31 wins per team.
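For readers who want to reproduce this, here is a sketch of the Pythagenpat calculation feeding the regressed projection. The 0.287 exponent constant is the commonly cited value and an assumption on my part; the post doesn't specify which constant it used, and the point totals below are hypothetical.

```python
def pythagenpat_win_pct(pf: float, pa: float, games: int = 16) -> float:
    """Expected winning percentage from points for (pf) and against (pa).

    Pythagenpat lets the exponent float with the scoring environment;
    0.287 is the commonly cited constant, assumed here.
    """
    exp = ((pf + pa) / games) ** 0.287
    return pf ** exp / (pf ** exp + pa ** exp)

# Regressed projection per the best-fit formula above.
last_year = pythagenpat_win_pct(482, 354)  # hypothetical points for/against
projected_wins = (0.302 + 0.400 * last_year) * 16
print(round(projected_wins, 1))  # ~9.5 with these stand-in totals
```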

Test #4: Offensive and Defensive SRS Ratings

Unlike in our other tests, using offensive and defensive Simple Rating System ratings allows us to take advantage of the fact that offenses are more consistent than defenses. The best-fit formula using these grades (R^2 = 0.15) is:

Year_N_Win% = 0.501 + 0.0146 * Off_SRS_Year_N-1 + 0.0075 * Def_SRS_Year_N-1

This tells us that offensive SRS grades are nearly twice as important as defensive SRS grades when projecting future performance. If we use this regressed formula of offensive and defensive SRS ratings, we reduce the delta to 2.30 wins per team. That's obviously not much of an improvement, although it's worth about 0.02 wins per team over the prior test. (It only looks like 0.01 due to rounding; the actual improvement over the regressed version of Pythagenpat is 0.019 wins per team.)
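As a sketch, here is the Test #4 formula as a function, which makes the offense/defense asymmetry easy to see:

```python
def project_win_pct(off_srs: float, def_srs: float) -> float:
    """Test #4 best-fit formula: project Year N winning percentage
    from last year's offensive and defensive SRS ratings."""
    return 0.501 + 0.0146 * off_srs + 0.0075 * def_srs

# A +5.0 offense is worth about 1.2 projected wins above average,
# while a +5.0 defense is worth only about 0.6.
print((project_win_pct(5.0, 0.0) - 0.5) * 16)  # ~1.18
print((project_win_pct(0.0, 5.0) - 0.5) * 16)  # ~0.62
```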

Test #5: Offensive and Defensive SRS Ratings, Excluding Return Scores

Using the numbers from Tom M. available here, we can ignore non-offensive scores. The best-fit formula is:

Year_N_Win% = 0.501 + 0.01527 * Off_SRS_Year_N-1 + 0.0092 * Def_SRS_Year_N-1

Unsurprisingly, there’s not much of a difference here, but it does slightly improve on the previous test.

So here is my conclusion from today's research: at a minimum, any projection system should have a goal of reducing the average difference between projected and actual wins, over a long enough period, to below 2.30 wins per team.

Before we conclude, let's look at the 2014 data, and what that would mean for 2015 projections. Here's how to read the table below. Denver had an Offensive SRS (which excludes non-offensive scores) of +9.0 last year, and a Defensive SRS (which again excludes non-offensive scores) of +0.1.[1] Using the formula above, we would project Denver to win 10.2 games this year. Now, Vegas had the Broncos as a 10-win team as of August 5th. However, Vegas projected the 32 teams to win 264 games in total, and there are only 256 games in a season. So I've scaled each team's projected win total by 256/264, which drops Denver down to 9.7 projected wins. This formula therefore has the Broncos with 0.5 more wins than what Vegas is projecting.
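As a quick sanity check of the Denver arithmetic, here is that calculation in code, using the Test #5 coefficients and the Vegas rescaling just described:

```python
# Denver: Off SRS +9.0, Def SRS +0.1, Vegas total of 10 wins.
off_srs, def_srs, vegas_wins = 9.0, 0.1, 10.0

proj_wins = (0.501 + 0.01527 * off_srs + 0.0092 * def_srs) * 16
vegas_adj = vegas_wins * 256 / 264  # Vegas totals sum to 264; only 256 games exist

print(round(proj_wins, 1))              # 10.2
print(round(vegas_adj, 1))              # 9.7
print(round(proj_wins - vegas_adj, 1))  # 0.5
```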

| Rk | Tm  | Off SRS | Def SRS | Proj | Vegas | Vegas (Adj) | Diff |
|----|-----|---------|---------|------|-------|-------------|------|
| 1  | DEN | 9.0     | 0.1     | 10.2 | 10    | 9.7         | 0.5  |
| 2  | NWE | 6.5     | 2.9     | 10.0 | 10.5  | 10.2        | -0.2 |
| 3  | SEA | 3.0     | 6.6     | 9.7  | 11    | 10.7        | -0.9 |
| 4  | GNB | 6.6     | 0.3     | 9.7  | 11    | 10.7        | -1.0 |
| 5  | DAL | 6.1     | 0.4     | 9.6  | 9.5   | 9.2         | 0.4  |
| 6  | IND | 6.1     | 0.0     | 9.5  | 11    | 10.7        | -1.2 |
| 7  | BAL | 1.9     | 1.4     | 8.7  | 9     | 8.7         | 0.0  |
| 8  | PHI | 3.9     | -2.1    | 8.7  | 9.5   | 9.2         | -0.6 |
| 9  | KAN | -0.2    | 4.2     | 8.6  | 8.5   | 8.2         | 0.3  |
| 10 | MIA | 2.3     | -1.1    | 8.4  | 9     | 8.7         | -0.3 |
| 11 | BUF | -1.3    | 4.7     | 8.4  | 8.5   | 8.2         | 0.1  |
| 12 | NYG | 2.4     | -1.9    | 8.3  | 8.5   | 8.2         | 0.1  |
| 13 | SDG | 0.6     | 0.8     | 8.3  | 8     | 7.8         | 0.5  |
| 14 | PIT | 2.7     | -3.1    | 8.2  | 8.5   | 8.2         | 0.0  |
| 15 | CIN | -0.2    | 1.4     | 8.2  | 8.5   | 8.2         | -0.1 |
| 16 | NOR | 3.2     | -4.4    | 8.2  | 8.5   | 8.2         | -0.1 |
| 17 | STL | -2.0    | 3.4     | 8.0  | 8     | 7.8         | 0.3  |
| 18 | DET | -2.7    | 4.3     | 8.0  | 8     | 7.8         | 0.2  |
| 19 | ARI | -2.6    | 3.5     | 7.9  | 8.5   | 8.2         | -0.3 |
| 20 | HOU | -1.8    | 2.0     | 7.9  | 8.5   | 8.2         | -0.4 |
| 21 | SFO | -2.5    | 2.5     | 7.8  | 7     | 6.8         | 1.0  |
| 22 | ATL | 0.5     | -3.5    | 7.6  | 8.5   | 8.2         | -0.6 |
| 23 | CAR | -2.4    | -0.1    | 7.4  | 8.5   | 8.2         | -0.8 |
| 24 | NYJ | -2.9    | -0.8    | 7.2  | 7.5   | 7.3         | -0.1 |
| 25 | MIN | -3.9    | 0.7     | 7.2  | 7.5   | 7.3         | -0.1 |
| 26 | WAS | -2.0    | -3.1    | 7.1  | 6.5   | 6.3         | 0.8  |
| 27 | CHI | -1.7    | -5.0    | 6.9  | 7     | 6.8         | 0.1  |
| 28 | CLE | -4.7    | -0.3    | 6.8  | 6.5   | 6.3         | 0.5  |
| 29 | OAK | -4.1    | -4.2    | 6.4  | 5.5   | 5.3         | 1.1  |
| 30 | TAM | -6.1    | -3.4    | 6.0  | 6     | 5.8         | 0.2  |
| 31 | JAX | -7.7    | -1.2    | 6.0  | 5.5   | 5.3         | 0.6  |
| 32 | TEN | -6.3    | -4.8    | 5.8  | 5.5   | 5.3         | 0.4  |

All 32 teams are projected within 1.2 wins of their Vegas wins total, and 22 of the teams within 0.5 wins. Vegas isn’t projecting as much regression to the mean for the Colts, Seahawks, and Packers, and that’s understandable. And, I suppose, the same could be said of Oakland, Washington, and Jacksonville, three of the four teams that this model projects to win at least half a game more than Vegas. The only other team “overrated” by these projections is the 49ers, who had one of the worst offseasons you can have.

All of that is to say that I think this model can be pretty useful as a baseline for team projections entering a season, which could have a lot of implications for us in the future.

  1. Denver's defense was better than league average last year, but there were a lot of drives in Broncos games. That led to more scoring for both Denver and its opponents.
  • jtr

    What if you do a combination of the first two? I would be curious to see how well the average of last year’s wins and 8 wins would do in predicting this year’s wins, acting as kind of a poor man’s regression to the mean.

    • Roger Kirk

      I have only a vague idea of what it means to “run a regression” but wouldn’t that be a formula of 0.25 + 0.5 * N-1_Win% which presumably was looked at and rejected in favor of the optimal 0.338 + 0.327 * N-1_Win%?

      • That’s exactly right.

  • Jack

    It’s curious that these rankings always bunch up the middle of the bell curve, for example, the highest projected teams are 10 wins, lowest are barely under 6 wins? There are always teams better and worse than that. The best predictive model, then, is just the best hedge, if you predict a team as an outlier (with 12 or 3 wins) and they’re not you’d get penalized far more than if you predicted, say, 10 or 6 wins for them.

    Predictive models like this and Vegas odds may be the best we can do but in reality, win totals will always have more variance.

    • Are there always teams better and worse than that, or teams that are lucky or unlucky and win more/fewer games than that?

  • Tom

    Chase – I think your footnote 1 should read “a lot of points” in Broncos games, not “a lot of games”?

    That nit-picky thing aside, I’d be real curious to see how an SOS-adjusted points-per-drive model would do. Kind of a hassle to put together, and not very intuitive – do we really get a feel for an offense that is 1.15 PPD above average? – but I’m thinking it’s a better evaluation of a team’s offense & defense than PPG.

    Also, it’s interesting that Vegas is pegging Oakland to win a full game less than this projection. It’s safe to say “Vegas knows best”, but considering the Raiders tough schedule last year (they faced NWE and SEA on the road for starters), I’m not surprised they’re projected higher here. Looking at their schedule this year, 5 does seem right though.