In 1991, Dave Krieg led the NFL in completion percentage. He completed a career-high 65.6% of his passes, and while that mark was very good for that era, it doesn’t mean Krieg was great that season. In fact, he arguably wasn’t even good: Krieg actually finished just 24th in ANY/A that year.
One reason, I think, that Krieg was able to lead the NFL in completion percentage is that he “ate” a lot of his incomplete passes. What do I mean by that? Krieg took a ton of sacks — he was sacked once every ten times he dropped back to pass. When under duress, some quarterbacks eat the ball to avoid an interception; that’s bad (well, it’s better than an interception), but it doesn’t get graded that way when calculating completion percentage. Other quarterbacks will throw the ball away; that’s good (assuming it isn’t intercepted) because no yards are lost, but it does hurt the quarterback’s completion percentage.
Even ignoring the yards lost due to sacks, fundamentally, a sack is no better than an incomplete pass. So why are quarterbacks who take sacks rather than throw the ball out of bounds given an artificial boost when it comes to completion percentage? Well, that’s largely just an artifact of how the NFL has always graded things. The NFL was not always good at recording metrics, and somewhere along the way, sacks were either counted as running plays, ignored, or counted as pass plays. I don’t think a lot of thought went into it, but in my view, it makes the most sense to include sacks in the denominator when calculating completion percentage. Otherwise, we give undue credit to quarterbacks who take a lot of sacks, and penalize quarterbacks who throw the ball away when under pressure.
Take a look at the top 7 leaders from 1991 in completion percentage. Aikman, Young, and Kelly all finished within 2.5 percentage points of Krieg, but with noticeably better sack rates. And Moon managed a sack rate that was a third of Krieg’s, too.
Completion percentage is one of the simplest stats in all of football. It’s a binary stat that doesn’t tell us anything in the way of magnitude — a 90-yard completion is treated the same as one that loses 11 yards — but that doesn’t mean completion percentage is without its advantages. On the plus side, it is not very sensitive to outliers, is relatively consistent from year to year, and is easily understood.
Given how basic a metric it is, there are compelling reasons not to mess with it too much. For one, nobody gives completion percentage that much weight, and if we are trying to make the stat “better”, why not just use a better stat? But I do think including sacks[1] in the denominator makes sense and is worth the tradeoff. If we do that, it’s now Young who leads the 1991 NFL season in adjusted completion percentage, at 61.6%. Kelly is second at 60.2%, Aikman is at 60.0%, and Moon is at 59.6%. Krieg falls to fifth, as adding his 32 sacks to his 98 incomplete passes drops him to 59.0%.
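To make the adjustment concrete, here is a minimal sketch in Python. Krieg’s raw attempt total isn’t stated above, but his 65.6% completion percentage and 98 incomplete passes imply a line of 187 completions on 285 attempts (my reconstruction, not a quoted figure), which together with his 32 sacks reproduces the 59.0% adjusted mark:

```python
def adjusted_completion_pct(completions, attempts, sacks):
    """Completion percentage with sacks counted as failed dropbacks."""
    return 100 * completions / (attempts + sacks)

# Dave Krieg, 1991: a 65.6% completion percentage with 98 incompletions
# implies 187-of-285 passing; he was also sacked 32 times.
standard = 100 * 187 / 285                        # -> 65.6%
adjusted = adjusted_completion_pct(187, 285, 32)  # -> 59.0%
print(f"standard: {standard:.1f}%, adjusted: {adjusted:.1f}%")
```

The same function applied to the 1988 numbers is what moves Marino past Wilson below, since so few of Marino’s dropbacks ended in sacks.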
The most egregious example of what happens when you don’t include sacks in the denominator occurred in 1988. That year, Wade Wilson led the NFL with a 61.4% completion percentage, and the top three leaders in that metric all had below-average sack rates. But if you scroll down to #8 on the completion percentage rankings, you find the leader in adjusted completion percentage:
Marino was sacked just 6 times that year, for a sack rate under one percent. His adjusted completion percentage was 57.8%, over a full percentage point better than any other passer in the league.
Overall, if we included sacks in the denominator, the leader in completion percentage would have changed in 13 seasons since the merger, including in 3 of the last 4 years. So what do you think — is it worth including sacks in the denominator? And is this something that we should devote future posts to?
- [1] And, arguably, scrambles. We don’t have that data going back historically, but we probably should include scrambles now. And, I suppose, we should remove spikes from the denominator.
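The footnote’s fuller version can be sketched the same way. Treating scrambles as failed dropbacks and spikes as non-attempts is my reading of it, and the function and parameter names are mine:

```python
def fully_adjusted_completion_pct(completions, attempts, sacks,
                                  scrambles=0, spikes=0):
    """Sacks and scrambles count as failed dropbacks; spikes are
    intentional incompletions, so they come out of the attempts."""
    return 100 * completions / (attempts + sacks + scrambles - spikes)
```

With scrambles and spikes at zero, this reduces to simply adding sacks to the denominator, so historical seasons without scramble or spike data can still be computed consistently.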