**The Rookies**: Jameis Winston and Marcus Mariota will be the week 1 starters for the two teams that went 2-14 last year.

**The Suspension**: With Tom Brady suspended in week 1, Jimmy Garoppolo will be New England's first week 1 starter other than Brady since Drew Bledsoe in 2001.

**The Returning Starters**: Blake Bortles and Teddy Bridgewater, rookies last year, were not week 1 starters but were the main quarterbacks for Jacksonville and Minnesota. Cam Newton missed week 1 due to injury, but is obviously returning as Carolina's franchise quarterback. And in the continuing tire fire that is Washington football (both in general and as it specifically relates to RG3), Kirk Cousins will be the new starter in Washington over RG3.

**The New Starters**: Buffalo will have a new starter, with Tyrod Taylor beating out both Matt Cassel and EJ Manuel. Philadelphia and St. Louis traded quarterbacks, so Sam Bradford and Nick Foles will be new faces. In addition, the Browns (Josh McCown), Texans (Brian Hoyer), and Jets (Ryan Fitzpatrick) added veteran quarterbacks in the offseason.

Let’s take a look at each team’s opening day quarterback in each season since 2002:

As regular readers may recall, the Browns are about to set a record: in 2015, a 12th different quarterback will lead the team in passing yards within a 14-year span, the highest number in any such period in NFL history. Well, this year, Cleveland will also have its 11th different week 1 starting quarterback since 2002, the most in the NFL.

Only two other teams have had more than eight different week 1 starters: the Cardinals and Raiders. At least for now, it seems as though Oakland may have found its answer at quarterback, at least in the medium term. For Arizona, Carson Palmer will be the week 1 starter for the 4th year in a row, but it's anyone's guess how much longer the Cardinals will be able to count on him.

On the other side, there are four teams that have started just two quarterbacks since 2002: the Patriots, of course, are one. The Chargers and Packers have been fortunate enough to limit their week 1 starters to Drew Brees and Philip Rivers, and Brett Favre and Aaron Rodgers, respectively. Of course, for Green Bay, it's been Favre or Rodgers for (as of 2015) *twenty-two consecutive week 1 games*. Meanwhile, the Saints have started only Brees or Aaron Brooks in every opener since 2001.

But the honor for longest active streak of consecutive week 1 starts? That belongs to Eli Manning. This year will mark his 11th straight season as the Giants' opening day quarterback.

**Test #1: Every Team Is The Same**

This is the simplest baseline: let’s project each team to go 8-8. If you did that in every season from 1989 to 2014, your model would have been off by, on average, 2.48 wins per team. This is calculated by taking the absolute value of the difference between 0.500 and each team’s actual winning percentage, and multiplying that result by 16. So that should be the absolute floor for any projection model: you have to come closer than that.
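The baseline calculation can be sketched in a few lines of Python. The sample data here is a hypothetical four-team mini-league, not the actual 1989-2014 results:

```python
# Test #1 sketch: project every team at .500 and measure the average
# absolute miss, in wins. Sample win percentages are made up.
def baseline_error(win_pcts):
    """Average absolute error, in wins, of an 8-8 projection for every team."""
    return sum(abs(wp - 0.500) * 16 for wp in win_pcts) / len(win_pcts)

# Hypothetical four-team league: 12-4, 10-6, 6-10, and 4-12 seasons.
print(baseline_error([0.750, 0.625, 0.375, 0.250]))  # 3.0 wins per team
```

Run over every team-season from 1989 to 2014, this same calculation yields the 2.48 figure above.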

**Test #2: Every Team Does What They Did Last Year**

Looking at all teams from 1990 to 2014, I calculated their winning percentages in that season (Year N) and in the prior season (Year N-1). If you used the previous year’s record to project this year’s record, you would have been off by, on average, 2.84 wins per team. That’s right: you are better off predicting every team to go 8-8 than to predict every team to repeat what they did last season.
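The same error measure applies to Test #2, now comparing each season to the prior one (again with made-up numbers):

```python
# Test #2 sketch: project each team to repeat last year's record.
# Each pair is (Year N-1 win%, Year N win%); the values are hypothetical.
def repeat_last_year_error(pairs):
    return sum(abs(curr - prior) * 16 for prior, curr in pairs) / len(pairs)

# A 12-4 team slipping to 9-7, and a 4-12 team climbing to 7-9.
print(repeat_last_year_error([(0.750, 0.5625), (0.250, 0.4375)]))  # 3.0
```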

But that’s simply an artifact of regression to the mean: it doesn’t mean last year doesn’t matter, just that we need to be a little smarter about how we use it. If you run a regression using last year’s winning percentage to predict this year’s winning percentage, the best-fit formula (R^2 = 0.11) is 0.338 + 0.327 * N-1_Win%. This quantifies the pull toward the mean: we only carry over about one-third of last year’s winning percentage when projecting this year’s.

So instead of using last year’s winning percentage directly, let’s use the regressed version. This improves our model, reducing the average error to 2.35 wins per team, an improvement of about 17% over Test #2.
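The regressed projection is just the best-fit line above applied to last year's record; for example:

```python
# Test #2's fix: regress last year's winning percentage toward .500
# using the article's best-fit coefficients (R^2 = 0.11).
def project_win_pct(prior_win_pct):
    return 0.338 + 0.327 * prior_win_pct

# A 12-4 team projects to about 9.3 wins; a .500 team stays near 8 wins;
# a 4-12 team projects to about 6.7 wins.
for prior in (0.750, 0.500, 0.250):
    print(round(project_win_pct(prior) * 16, 1))
```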

**Test #3: Pythagenpat Winning Percentage**

Another option is to use a regressed form of Pythagenpat Winning Percentage. The best-fit formula (R^2 = 0.13) is 0.302 + 0.400 * N-1_Pythagenpat_Win%. Just looking at the coefficients, you can see that this is slightly more precise than just using last year’s winning percentage: the constant has dropped by about 0.04, while the weight on last year’s data has risen from about one-third to two-fifths. By using this regressed version of Pythagenpat Winning Percentage, we reduce the delta to 2.31 wins per team.
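For reference, here is how the Pythagenpat step might look. The 0.287 exponent parameter is an assumption on my part (it is the commonly cited value); the article doesn't specify which version it uses:

```python
# Pythagenpat: expected win% from points scored/allowed, with an
# exponent that scales with the scoring environment. The 0.287
# parameter is an assumption (the standard published value).
def pythagenpat(points_for, points_against, games=16):
    exp = ((points_for + points_against) / games) ** 0.287
    return points_for ** exp / (points_for ** exp + points_against ** exp)

# Regress it with the article's Test #3 coefficients (R^2 = 0.13).
def project_from_pyth(prior_pyth):
    return 0.302 + 0.400 * prior_pyth

# A team that scored 400 and allowed 350 points:
pyth = pythagenpat(400, 350)
print(round(pyth, 3), round(project_from_pyth(pyth) * 16, 1))
```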

**Test #4: Offensive and Defensive SRS Ratings**

Unlike in our other tests, using offensive and defensive Simple Rating System ratings allows us to take advantage of the fact that offenses are more consistent than defenses. The best-fit formula using these grades (R^2 = 0.15) is:

Year_N_Win% = 0.501 + 0.0146 * Off_SRS_Year_N-1 + 0.0075 * Def_SRS_Year_N-1

This tells us that offensive SRS grades are nearly twice as important as defensive SRS grades when projecting future performance. If we use this regressed formula of offensive and defensive SRS ratings, we reduce the delta to 2.30 wins per team. That’s obviously not much of an improvement, although it’s about 0.02 wins per team better than the prior example. (It looks like only 0.01 due to rounding; it’s actually an improvement of 0.019 wins per team over the regressed version of Pythagenpat.)
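As a function, the Test #4 fit reads:

```python
# Test #4: project this year's win% from last year's offensive and
# defensive SRS, using the article's best-fit coefficients (R^2 = 0.15).
def project_from_srs(off_srs, def_srs):
    return 0.501 + 0.0146 * off_srs + 0.0075 * def_srs

# Because the offensive coefficient is about twice the defensive one,
# a +5.0 offense projects better than a +5.0 defense:
print(round(project_from_srs(5.0, 0.0) * 16, 1))  # about 9.2 wins
print(round(project_from_srs(0.0, 5.0) * 16, 1))  # about 8.6 wins
```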

**Test #5: Offensive and Defensive SRS Ratings Excluding Return Scores**

Using the numbers from Tom M. available here, we can ignore non-offensive scores. The best-fit formula is:

Year_N_Win% = 0.501 + 0.01527 * Off_SRS_Year_N-1 + 0.0092 * Def_SRS_Year_N-1

Unsurprisingly, there’s not much of a difference here, but it does slightly improve on the previous test.

So here’s my conclusion from today’s research: at a minimum, any projection system should aim to reduce the average difference between projected and actual wins, over a long enough period, to below 2.30 wins per team.

Before we conclude, let’s look at the 2014 data, and what that would mean for 2015 projections. Here’s how to read the table below. Denver had an Offensive SRS (which excludes non-offensive scores) of +9.0 last year, and a Defensive SRS (which again excludes non-offensive scores) of +0.1. ^{1} Using the formula above, we would project Denver to win 10.2 games this year. Now, Vegas had the Broncos as a 10-win team as of August 5th. However, Vegas projected the 32 teams to win 264 games, and there are only 256 games in a season. So I’ve reduced each team’s projected wins total by 256/264, which drops Denver down to 9.7 projected wins. So this formula has the Broncos with 0.5 more wins than what Vegas is projecting.
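The Denver numbers in that paragraph can be reproduced directly from the Test #5 coefficients:

```python
# Project wins from the Test #5 (return-scores-excluded) coefficients.
def project_wins(off_srs, def_srs, games=16):
    return (0.501 + 0.01527 * off_srs + 0.0092 * def_srs) * games

# Vegas totals summed to 264 wins but a season has only 256 games,
# so each team's Vegas total is scaled by 256/264.
def vegas_adjusted(vegas_wins):
    return vegas_wins * 256 / 264

print(round(project_wins(9.0, 0.1), 1))  # Denver: 10.2 projected wins
print(round(vegas_adjusted(10.0), 1))    # Vegas total, adjusted: 9.7
```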

| Rk | Tm | Off SRS | Def SRS | Proj | Vegas | Vegas (Adj) | Diff |
|----|----|---------|---------|------|-------|-------------|------|
| 1 | DEN | 9.0 | 0.1 | 10.2 | 10.0 | 9.7 | 0.5 |
| 2 | NWE | 6.5 | 2.9 | 10.0 | 10.5 | 10.2 | -0.2 |
| 3 | SEA | 3.0 | 6.6 | 9.7 | 11.0 | 10.7 | -0.9 |
| 4 | GNB | 6.6 | 0.3 | 9.7 | 11.0 | 10.7 | -1.0 |
| 5 | DAL | 6.1 | 0.4 | 9.6 | 9.5 | 9.2 | 0.4 |
| 6 | IND | 6.1 | 0.0 | 9.5 | 11.0 | 10.7 | -1.2 |
| 7 | BAL | 1.9 | 1.4 | 8.7 | 9.0 | 8.7 | 0.0 |
| 8 | PHI | 3.9 | -2.1 | 8.7 | 9.5 | 9.2 | -0.6 |
| 9 | KAN | -0.2 | 4.2 | 8.6 | 8.5 | 8.2 | 0.3 |
| 10 | MIA | 2.3 | -1.1 | 8.4 | 9.0 | 8.7 | -0.3 |
| 11 | BUF | -1.3 | 4.7 | 8.4 | 8.5 | 8.2 | 0.1 |
| 12 | NYG | 2.4 | -1.9 | 8.3 | 8.5 | 8.2 | 0.1 |
| 13 | SDG | 0.6 | 0.8 | 8.3 | 8.0 | 7.8 | 0.5 |
| 14 | PIT | 2.7 | -3.1 | 8.2 | 8.5 | 8.2 | 0.0 |
| 15 | CIN | -0.2 | 1.4 | 8.2 | 8.5 | 8.2 | -0.1 |
| 16 | NOR | 3.2 | -4.4 | 8.2 | 8.5 | 8.2 | -0.1 |
| 17 | STL | -2.0 | 3.4 | 8.0 | 8.0 | 7.8 | 0.3 |
| 18 | DET | -2.7 | 4.3 | 8.0 | 8.0 | 7.8 | 0.2 |
| 19 | ARI | -2.6 | 3.5 | 7.9 | 8.5 | 8.2 | -0.3 |
| 20 | HOU | -1.8 | 2.0 | 7.9 | 8.5 | 8.2 | -0.4 |
| 21 | SFO | -2.5 | 2.5 | 7.8 | 7.0 | 6.8 | 1.0 |
| 22 | ATL | 0.5 | -3.5 | 7.6 | 8.5 | 8.2 | -0.6 |
| 23 | CAR | -2.4 | -0.1 | 7.4 | 8.5 | 8.2 | -0.8 |
| 24 | NYJ | -2.9 | -0.8 | 7.2 | 7.5 | 7.3 | -0.1 |
| 25 | MIN | -3.9 | 0.7 | 7.2 | 7.5 | 7.3 | -0.1 |
| 26 | WAS | -2.0 | -3.1 | 7.1 | 6.5 | 6.3 | 0.8 |
| 27 | CHI | -1.7 | -5.0 | 6.9 | 7.0 | 6.8 | 0.1 |
| 28 | CLE | -4.7 | -0.3 | 6.8 | 6.5 | 6.3 | 0.5 |
| 29 | OAK | -4.1 | -4.2 | 6.4 | 5.5 | 5.3 | 1.1 |
| 30 | TAM | -6.1 | -3.4 | 6.0 | 6.0 | 5.8 | 0.2 |
| 31 | JAX | -7.7 | -1.2 | 6.0 | 5.5 | 5.3 | 0.6 |
| 32 | TEN | -6.3 | -4.8 | 5.8 | 5.5 | 5.3 | 0.4 |

All 32 teams are projected within 1.2 wins of their Vegas win totals, and 22 of them are within 0.5 wins. Vegas isn’t projecting as much regression to the mean for the Colts, Seahawks, and Packers, and that’s understandable. And, I suppose, the same could be said of Oakland, Washington, and Jacksonville, three of the four teams that this model projects to win at least half a game more than Vegas. The only other team “overrated” by these projections is the 49ers, who had one of the worst offseasons you can have.

All of that is to say that I think this model can serve as a pretty useful baseline for team projections entering a season, which will have a lot of implications for us in the future.

- Denver’s defense was better than league average last year, but there were a lot of drives in Broncos games. That led to more scoring for both Denver and its opponents.