How much should the week 1 results impact your projections for team wins in 2012? That’s what this post will attempt to answer.

Let’s start with the basics. Before the season, if you knew nothing about a team other than how many games it won the prior year, how many wins should you project for that team this year? This is a relatively simple question to answer given enough historical data and a program to perform a regression analysis. After doing just that, I can tell you that you should project each team to win 5.28 games plus 0.34 times the number of games it won in the previous season. So a 4-win team projects to 6.6 wins, a 6-win team projects to 7.3 wins, and an 11-win team should drop down to 9.0 wins. There is a significant regression-to-the-mean force at play here, unsurprisingly. Even a 15-win team projects to “only” 10.4 wins.
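For readers who want to play along, here’s a minimal sketch of that wins-only projection (the function name is mine), using the best-fit coefficients from the regression above:

```python
def project_from_wins(prior_wins):
    """Project this season's wins from last season's win total,
    using the best-fit coefficients 5.28 and 0.34 from the text."""
    return round(5.28 + 0.34 * prior_wins, 1)
```

For example, `project_from_wins(4)` gives the 6.6 wins quoted above, and `project_from_wins(15)` gives 10.4.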

Of course, this is far from perfect. The R^2 of this model is just 0.11, an indication that there are significantly more factors at play in determining a team’s record than its win total from the prior year. Well, duh. However, we can improve on that 0.11 number. If we use SRS ratings as inputs instead of wins, the R^2 rises to 0.15. This is not surprising, and it is exactly what I mean when I say that the SRS is more predictive of future performance than wins. What’s the best-fit formula?

Each team should win 8.05 games, plus 0.196 wins for every point of SRS the team had in the prior season. This means that a team that was 5 points better than average should be projected to win 9.0 games the next season, while a team that was 11 points below average in 2010 projects as a 5.9-win team in 2011.

At this point, you might think: okay, great, now let’s combine them! Let’s use both SRS ratings and team wins as inputs and Year N+1 wins as the output. Well, doing that adds nothing to the predictive power of the model. This is another reason not to use actual records for predictive purposes. For illustrative purposes, I performed such a regression, and the model tells us that the “record” variable has a p-value of 0.61, making it nowhere near statistically significant (and the weight on the variable was -0.04, making it practically insignificant as well). In layman’s terms, what this means is that if we already know a team’s SRS rating, **also** knowing their won-loss record is not helpful in predicting their future performance.

We now have a simple way to project each team’s number of wins in a given season: 8.05 + 0.196 * (each team’s SRS rating from the prior year). You might wonder why that number is 8.05 and not 8.00; that’s because I didn’t simply use the standard, regular-season SRS ratings, but rather calculated each team’s SRS score based on all of their games, postseason included. Therefore, the average is slightly higher than 8.00, since the best teams played the most games. There’s no good reason to ignore the postseason when projecting future performance (other than laziness, in which case I approve). I didn’t put special weight on games from the 2011 playoffs, but simply counted them as additional games. Anyway, the table below shows the SRS rating for each team in 2011 and their projected 2012 wins based on the above formula:

Team | 2011 SRS | 2012 Proj W |
---|---|---|
NOR | 11.1 | 10.2 |
NWE | 9.7 | 10.0 |
GNB | 9.4 | 9.9 |
SFO | 8.1 | 9.6 |
BAL | 6.1 | 9.2 |
NYG | 5.3 | 9.1 |
PHI | 5.3 | 9.1 |
DET | 4.8 | 9.0 |
HOU | 4.6 | 9.0 |
PIT | 4.2 | 8.9 |
DAL | 2.2 | 8.5 |
ATL | 2.0 | 8.4 |
MIA | 1.3 | 8.3 |
NYJ | 1.2 | 8.3 |
CHI | 0.9 | 8.2 |
SEA | 0.8 | 8.2 |
SDG | 0.4 | 8.1 |
CIN | -0.6 | 7.9 |
TEN | -1.5 | 7.8 |
CAR | -1.9 | 7.7 |
ARI | -2.2 | 7.6 |
BUF | -3.1 | 7.4 |
WAS | -3.5 | 7.4 |
OAK | -5.4 | 7.0 |
CLE | -5.8 | 6.9 |
DEN | -6.0 | 6.9 |
JAX | -6.1 | 6.8 |
MIN | -6.5 | 6.8 |
KAN | -8.6 | 6.4 |
STL | -10.4 | 6.0 |
TAM | -11.3 | 5.8 |
IND | -11.8 | 5.7 |
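The SRS-based projection is equally easy to express as a small helper (the function name is mine), checked here against a few rows of the table:

```python
def project_from_srs(prior_srs):
    """Project this season's wins from last season's SRS
    (postseason games included), per the 8.05 + 0.196*SRS formula."""
    return round(8.05 + 0.196 * prior_srs, 1)
```

For example, `project_from_srs(11.1)` reproduces the Saints’ 10.2, and `project_from_srs(-11.8)` reproduces the Colts’ 5.7.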

You might be surprised to see the Packers only third on this table, since Green Bay won 15 games last year. But the 2011 Packers did not have great underlying statistics — they had a Pythagorean record of 11.9 – 4.1 against a pretty easy regular season schedule — and then they dropped a bit once you include the playoff loss to the Giants.

The key for purposes of this post is that we now have baseline projections to compare against. Now that week 1 is in the books, what can we make of it?

Let’s take the Jets as an example. Before the season, we projected them at 8.3 wins, based on their being 1.2 points above average in 2011. Now they have won their first game. Even if we thought that week 1 had zero predictive meaning, we still wouldn’t project the Jets at 8.3 wins in 2012 today. Instead, we would project them to win 8.8 games. Why is that? At 8.3 wins, the Jets projected to win 51.9% of their games. Well, we already know they won the first game, so we give them a full win for that. Projecting them to win 51.9% of their remaining 15 games means we’d give them 7.8 more wins; therefore, our end-of-season projection should be revised to 8.8 wins even if we think there is nothing predictive we can derive from week 1 results.

And that’s precisely what I wanted to test. The Jets surprised everyone by beating Buffalo by 20 points; we know it was a surprise because they were only a 3-point favorite, meaning they covered by 17 points. We can test whether the Jets exceeding expectations by 17 points in week 1 has any predictive meaning.
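That “week 1 tells us nothing” update is just arithmetic: bank the win, then apply the old per-game win rate to the remaining games. A sketch (the function name is mine):

```python
def bank_week1_win(preseason_proj, season_games=16):
    """Revise a season win projection after a week 1 win, assuming the
    result carries no predictive information: count the banked win, then
    apply the original implied per-game win rate to the remaining games."""
    per_game = preseason_proj / season_games   # implied win probability
    return round(1 + per_game * (season_games - 1), 1)
```

`bank_week1_win(8.3)` reproduces the Jets’ revised 8.8-win projection.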

**Here’s what I did.** I performed a regression using the following two input variables: (1) the team’s SRS score in Year N-1 and (2) the number of points by which the team covered in week 1 of Year N. The output variable was how many wins the team earned over its final 15 games of Year N. The results? Both variables were statistically significant even at p=0.001. In other words, **yes, in fact, how much a team exceeds expectations in week 1 does help us predict how it performs over the rest of the year.**

The best fit formula was:

7.55 + 0.183*Year_N-1_SRS + 0.031*Week_1_Cover
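I don’t reproduce the historical dataset here, so as a sketch of the method only, here’s how ordinary least squares recovers coefficients like these. All data below is synthetic, generated from the best-fit formula itself plus noise, purely to illustrate the regression step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
srs = rng.normal(0, 6, n)      # synthetic Year N-1 SRS scores
cover = rng.normal(0, 10, n)   # synthetic week 1 cover margins
# Simulate wins over the final 15 games from the formula plus noise.
wins = 7.55 + 0.183 * srs + 0.031 * cover + rng.normal(0, 1.5, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), srs, cover])
coef, *_ = np.linalg.lstsq(X, wins, rcond=None)
# coef should land near [7.55, 0.183, 0.031]
```

With real data you’d also inspect p-values (e.g., via a stats package) rather than just the point estimates.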

What does that mean? The Jets now project to 8.3 wins over the rest of the season, or 9.3 wins for the year. Had the Jets simply won by 3 points (and therefore covered by 0 points), their end-of-year projection would remain at 8.8 wins. So the blowout, as measured by the number of points by which they covered, means we would want to project the Jets to win 1 more game this year than we would have a few days ago. Half of that is due to the fact that they already banked one win, and half is due to the blowout.
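Putting the pieces together, a sketch of the full-season update (the function name is mine): the rest-of-season regression formula plus the banked week 1 result.

```python
def project_after_week1(prior_srs, week1_cover, week1_win):
    """Full-season win projection after week 1: the rest-of-season
    regression (7.55 + 0.183*SRS + 0.031*cover) plus the banked result."""
    rest = 7.55 + 0.183 * prior_srs + 0.031 * week1_cover
    return round(rest + (1 if week1_win else 0), 1)
```

For example, `project_after_week1(1.2, 17, True)` reproduces the Jets’ 9.3 wins, and `project_after_week1(11.1, -16, False)` reproduces the Saints’ 9.1.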

What about the Saints? New Orleans was projected to win 10.2 games, although our formula knows nothing about the Saints’ stressful offseason. The Saints covered by -16 points, the result of an 8-point loss despite being 8-point favorites. Those inputs give New Orleans only 9.1 projected wins the rest of the way, which (since they have no win banked) is also their projection for the season.

Here are the full results:

Team | 2011 SRS | 2012 PreProj | Wk 1 W/L | Wk 1 Cover | 2012 CurrProj | Diff |
---|---|---|---|---|---|---|
NWE | 9.7 | 10.0 | W | 16.5 | 10.8 | 0.8 |
SFO | 8.1 | 9.6 | W | 14.0 | 10.5 | 0.9 |
BAL | 6.1 | 9.2 | W | 24.0 | 10.4 | 1.2 |
HOU | 4.6 | 9.0 | W | 7.0 | 9.6 | 0.6 |
ATL | 2.0 | 8.4 | W | 15.0 | 9.4 | 1.0 |
NYJ | 1.2 | 8.3 | W | 17.0 | 9.3 | 1.0 |
DET | 4.8 | 9.0 | W | -5.0 | 9.3 | 0.3 |
DAL | 2.2 | 8.5 | W | 10.5 | 9.3 | 0.8 |
PHI | 5.3 | 9.1 | W | -8.0 | 9.3 | 0.2 |
NOR | 11.1 | 10.2 | L | -16.0 | 9.1 | -1.1 |
CHI | 0.9 | 8.2 | W | 10.0 | 9.0 | 0.8 |
SDG | 0.4 | 8.1 | W | 9.0 | 8.9 | 0.8 |
GNB | 9.4 | 9.9 | L | -14.0 | 8.8 | -1.1 |
WAS | -3.5 | 7.4 | W | 16.0 | 8.4 | 1.0 |
ARI | -2.2 | 7.6 | W | 5.0 | 8.3 | 0.7 |
NYG | 5.3 | 9.1 | L | -10.5 | 8.2 | -0.9 |
PIT | 4.2 | 8.9 | L | -9.5 | 8.0 | -0.9 |
DEN | -6.0 | 6.9 | W | 9.5 | 7.7 | 0.8 |
MIA | 1.3 | 8.3 | L | -7.0 | 7.6 | -0.7 |
SEA | 0.8 | 8.2 | L | -5.0 | 7.5 | -0.7 |
MIN | -6.5 | 6.8 | W | -0.5 | 7.3 | 0.5 |
CAR | -1.9 | 7.7 | L | -9.0 | 6.9 | -0.8 |
TEN | -1.5 | 7.8 | L | -16.5 | 6.8 | -1.0 |
TAM | -11.3 | 5.8 | W | 9.0 | 6.8 | 1.0 |
CLE | -5.8 | 6.9 | L | 8.0 | 6.7 | -0.2 |
CIN | -0.6 | 7.9 | L | -24.0 | 6.7 | -1.2 |
BUF | -3.1 | 7.4 | L | -17.0 | 6.5 | -0.9 |
JAX | -6.1 | 6.8 | L | 0.5 | 6.4 | -0.4 |
OAK | -5.4 | 7.0 | L | -9.0 | 6.3 | -0.7 |
STL | -10.4 | 6.0 | L | 5.0 | 5.8 | -0.2 |
KAN | -8.6 | 6.4 | L | -15.0 | 5.5 | -0.9 |
IND | -11.8 | 5.7 | L | -10.0 | 5.1 | -0.6 |

The last column shows how much our 2012 win projections changed after this weekend’s games. Based on week 1, the Bengals, Saints, Packers, and Titans should all be projected for at least one fewer win in 2012, while the Ravens, Redskins, Jets, Falcons, and Buccaneers should move by the same amount in the other direction. By virtue of their lopsided win over the Titans and their great SRS score last year, this system projects the Patriots as the team with the most wins in 2012; San Francisco isn’t far behind for the same reasons. The Giants drop from first to last in the NFC East. They were projected to win only 1.7 more games than the Redskins; following a bad week 1 for New York and an excellent performance by Washington, the Redskins have actually jumped the Giants in projected wins.

A lot of the movement comes simply from banking a win. The Eagles move from 9.1 to 9.3 wins even though they looked terrible, because they got a full win. A 57% chance of winning each of 16 games (9.1 expected wins) is worth less than one guaranteed win plus a 55% chance of winning each of the remaining 15 (9.3 expected wins). So the Eagles project as a weaker team than they did a week ago, dropping from 57% to 55% likely to win any given game, but jump in the projected wins column.
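The expected-wins arithmetic behind the Eagles example, showing that a lower per-game win probability can still produce a higher season total once one win is already in the bank:

```python
# Per-game win probabilities from the text: ~57% before week 1,
# ~55% over the remaining 15 games after banking the win.
before = 0.569 * 16        # expected wins with no games played
after = 1 + 0.553 * 15     # one banked win plus 15 remaining games
assert after > before      # 9.3 expected wins beats 9.1
```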

Anyway, many caveats apply, and a system with only two inputs can’t tell us too much. But the key takeaway for me: you certainly should revise total year-end projections based on actual win-loss results in week 1, and the data indicates that you should also revise them based on how much teams exceeded expectations.

**Comments**

Curious: What’s the R^2 on the last regression?

If I recall correctly, it was about 0.17.

Great stuff. About when does current-season SRS replace last season’s SRS as the most predictive measure of a team’s strength? I’m thinking weeks 4-6 based on gut, but I’m interested to see what the data says.

I think that’s about right. One of the problems is that it doesn’t feel right to even use in-season SRS until about 4 games have been played.

Here’s a question:

Is it more likely for a “surprise” blowout to tell us that a good team might be bad, or just a fluke?

For instance, New Orleans was -16 in week 1. Does that tend to imply that the Saints just might not be good this year, or that they had a bad game?

I’m thinking back to 1989 when the Steelers lost to the Browns 51-0 on opening day, but ended up making the playoffs. Or the 2003 Patriots who lost 31-0 to Buffalo, but went on to win the Super Bowl.

Since 1990, there have been only 11 teams that had SRS grades of 8 or more and then, in week 1 of the following season, lost and failed to cover by double digits. The most recent was the ’11 Steelers, who had an SRS of 9.7 in 2010 and “covered” by -26.5 points in week 1 last year, but still ended up with 12 wins. That’s an outlier, though. On average, the 11 teams won 12.8 games with an SRS of 11.2 in Year N-1; they covered by -18.7 points in week 1 of Year N and lost. On average, they won 8.7 games in Year N.

The Pats you referenced were only a really good team in hindsight; they had missed the playoffs the prior year and had an SRS of 3.9 in ’02. The ’89 Steelers were outside the study.

If we ignore SRS and just look at teams since 1990 that won 11+ games and then failed to cover by at least 14 points in week 1 the following season, we get 22 teams. On average, they won 12.1 games (SRS of 7.2) in Year N and covered by -21.9 points in week 1 of Year N+1. They won 8.3 games in Year N+1, but several teams obviously still won double-digit games (2011 PIT, ATL; 2008 IND; 2003 PHI; 2002 PIT; 1996 DAL, PIT; 1993 DAL; 1992 HOU).

So, generally speaking, if a good team gets crushed in week 1, it may not be a good sign for their season.

Another way to look at this: 13 teams have won 11+ games and then won fewer than 6 the next year. Three of them won in week 1; the median team covered by -1.5 points. Four teams were totally blown out.

It seems the one missing element is predicted team improvement. SRS measures team quality at the end of last season, and Wk 1 Cover measures performance over expectation. However, teams might improve in the off-season in ways that lead to an improved point spread. Would Denver have been favored by 2.5 points over the Steelers without Manning at QB?

Could this omitted variable bias your estimates in any way?

Yes, absolutely. With only two variables, you’re missing out on a lot, and your observation is one of those factors.