NFL.com posted an article yesterday looking at the strength of schedule for each team in 2013. We have known each team’s opponents since the end of the regular season, and while the full schedule won’t come out until April, it’s simple to calculate a team’s strength of schedule for 2013. Usually, the media reports this by looking at the win-loss record of each opponent from the prior season. Here are the projected SOSs for each team next season:
So does this mean that the Panthers job isn’t so attractive anymore? Not really. Strength of schedule absolutely matters: teams like the 2010 Kansas City Chiefs and 2012 Indianapolis Colts made the playoffs in large part based on easy schedules. But can you predict in the off-season which teams will have the hardest and easiest schedules?
I looked at the preseason strength of schedule for every team from 2003 to 2012. The correlation coefficient between a team’s predicted SOS and actual SOS measured at the end of the season was just 0.06 in 2012 and 0.24 over the ten-year period from 2003 to 2012. If you were to run a regression, the best fit formula to predict end-of-year strength of schedule from predicted strength of schedule is:
Actual SOS = 0.369 + 0.262 * Proj_SOS
In English, that means we would project the Broncos (pre-season SOS of 0.430) to have an actual SOS of 0.482, and the Panthers (pre-season SOS of 0.543) to have an SOS of 0.511 by the end of 2013. Another way of thinking about it is that all strengths of schedule regress toward the mean of 0.500, with only 26% of a team's actual SOS being dictated by its projected strength of schedule. However, the R^2 is just 0.06, indicating no meaningful relationship.
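The arithmetic behind those two projections can be sketched in a few lines of Python; the coefficients come straight from the regression above, and the team inputs are the pre-season SOS figures quoted in the text.

```python
def project_sos(preseason_sos: float) -> float:
    """Best-fit line from the 2003-2012 sample:
    end-of-year SOS projected from pre-season SOS
    (pre-season SOS = prior-year opponent win-loss records)."""
    return 0.369 + 0.262 * preseason_sos

# Pre-season SOS figures from the article.
broncos = project_sos(0.430)   # roughly 0.482
panthers = project_sos(0.543)  # roughly 0.511
print(round(broncos, 3), round(panthers, 3))
```

Note how little the inputs matter: a 0.113 gap in pre-season SOS shrinks to about 0.030 in the projection.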
If instead we use the Simple Rating System to give us a projected strength of schedule, the best fit formula becomes 0.500 + 0.009 * SRS.[1] That means if your average opponent was 3 points better than average in Year N-1, you could expect that team’s strength of schedule at the end of the year to be equal to 0.527. If you tried to use pre-season SOS based on both Year N-1 win-loss records and Year N-1 SRS results, the Year N-1 win-loss records variable is not statistically (or practically) significant. You can view the projected SOS based on win-loss records and SRS (using Year N-1 data) and the actual SOS based on win-loss records and SRS (using Year N data) for all 320 team seasons here.
Let’s wind the clock back one year. At the time, the story was that the Giants and Broncos had the league’s hardest schedules, while the Packers and Patriots were gifted the easiest slate of opponents. How did that turn out?
Denver ended up having the 4th easiest schedule in the league, while the Packers schedule ended up being slightly harder than average. If you look at the SRS ratings, the results held a bit better. So what are the projected end-of-season strength of schedule ratings for each team in 2013 using the SRS (and the regression formula above) instead of win-loss records as the input?
I’ll note that I’m hardly breaking new ground showing that a team’s projected strength of schedule based on the prior year’s win-loss record is meaningless. Doug wrote a study on this back in 2006, and Football Outsiders has shown that their DVOA stat is more highly correlated with future SOS than win-loss record. But it’s good to break out the data and confirm this every now and again.
Here’s another way to think of it, building off Doug’s study. I looked at the regular season records of the 320 teams from 2002 to 2011. If you knew only a team’s record in Year N-1, the best projection of their record in Year N would be:
Year N Win% = 0.353 + 0.295 * Year N-1 Win%
The R^2 in this formula is just 0.08, indicating not much of a relationship. What if we added as an input each team’s projected strength of schedule? That variable turns out to be statistically (and practically) insignificant, and the R^2 does not change.
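To make the regression-to-the-mean point concrete, here is the same formula applied to a strong and a weak prior-year team; the coefficients are from the formula above, and the example records are purely illustrative.

```python
def project_win_pct(prior_win_pct: float) -> float:
    """Best-fit projection of Year N winning percentage
    from Year N-1 winning percentage alone (2002-2011 sample)."""
    return 0.353 + 0.295 * prior_win_pct

# Illustrative inputs: a 12-4 team (0.750) and a 4-12 team (0.250).
print(round(project_win_pct(0.750), 3))  # about 0.574
print(round(project_win_pct(0.250), 3))  # about 0.427
```

An eight-win gap in Year N-1 projects to only about a 2.4-win gap in Year N, which is the same heavy pull toward 0.500 we saw with strength of schedule.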
[1] The R^2 in this case is only 0.12, so the relationship in general is still very weak.