## Analyzing the leaders in targets in 2012

Reggie Wayne led the NFL in targets last year, but that’s a little misleading, since the Colts ranked 6th in pass attempts. As a percentage of team targets, Wayne ranked second in the league, but he was a distant number two to Brandon Marshall, who saw two out of every five Bears passes in 2012.

But that doesn’t make him the best receiver. It was easier for Marshall to rack up a high number of targets because the rest of the Chicago supporting cast was weak, so Jay Cutler consistently looked Marshall’s way. Chicago ranked 25th last year in Adjusted Net Yards per Attempt, so essentially we have a player on a bad passing offense receiving a ton of targets. It’s not at all obvious how you compare a player like that to Roddy White, who deserves credit for being in a great passing offense but loses targets to Julio Jones and Tony Gonzalez. (Of course, without them, would Matt Ryan start looking like Jay Cutler?)

I identified the leader in targets for each team, and then calculated the percentage of team targets each leading receiver had in 2012. The graph below plots that percentage on the Y-axis; the X-axis shows the ANY/A average of that player’s team. Someone like Marshall (represented by the first four letters of his last name and the first two letters of his first name, MarsBr) will therefore be high and to the left, while Randall Cobb is low and to the right:

Grading receivers on different teams is complicated because receivers are so dependent on their teammates to produce. But at least in theory, I like plotting percentage of team targets (or receiving yards) against team passing efficiency. If the counter-argument to Marshall being great because of his high target numbers is that he’s being force-fed, then his team’s ANY/A would reflect that fact.

Similarly, if a player like Calvin Johnson is constantly double-teamed, presumably his lower target percentage would be offset by a higher team ANY/A average.

I’m reproducing the graph below, but I’ve added a line that splits the data into two groups of 16 players/teams: above and to the right of the line are players who exceeded expectations, while those below and to the left underachieved relative to the other top receivers:

We know that players like Brandon Marshall, Michael Crabtree, Wes Welker, and Demaryius Thomas did well, but who did the best? One way to measure across teams is to calculate how far above the diagonal line each player was. For example, Brandon Marshall’s target rate was 39.9% and his team averaged 5.10 ANY/A. However, the diagonal line at 5.10 ANY/A corresponds to 30.1%; therefore, we could say that Marshall’s target rate was 9.8 percentage points above expectations. If we perform that exercise for every receiver, we get the following table:

| Tm | Name | TmTarg% | ANY/A | Proj | Diff |
|----|------|---------|-------|------|------|
| SFO | Michael Crabtree | 29.4 | 7.12 | 19.6 | 9.8 |
| CHI | Brandon Marshall | 39.9 | 5.10 | 30.1 | 9.8 |
| NWE | Wes Welker | 27.6 | 7.39 | 18.2 | 9.4 |
| DEN | Demaryius Thomas | 24.2 | 7.85 | 15.8 | 8.4 |
| CAR | Steve Smith | 28.5 | 6.70 | 21.8 | 6.7 |
| HOU | Andre Johnson | 29.6 | 6.33 | 23.7 | 5.9 |
| CIN | A.J. Green | 31.1 | 5.83 | 26.3 | 4.8 |
| IND | Reggie Wayne | 31.7 | 5.65 | 27.3 | 4.4 |
| NYG | Victor Cruz | 26.8 | 6.57 | 22.5 | 4.4 |
| ATL | Roddy White | 23.7 | 7.03 | 20.1 | 3.6 |
| TAM | Vincent Jackson | 26.3 | 6.35 | 23.6 | 2.7 |
| DET | Calvin Johnson | 27.9 | 5.98 | 25.5 | 2.4 |
| NOR | Jimmy Graham | 20.6 | 7.18 | 19.3 | 1.3 |
| BUF | Steve Johnson | 29.1 | 5.52 | 28.0 | 1.1 |
| GNB | Randall Cobb | 19.2 | 7.37 | 18.3 | 0.9 |
| SEA | Sidney Rice | 20.3 | 7.13 | 19.6 | 0.7 |
| DAL | Jason Witten | 22.7 | 6.42 | 23.2 | -0.6 |
| WAS | Josh Morgan | 16.9 | 7.46 | 17.8 | -1.0 |
| MIA | Brian Hartline | 26.4 | 5.28 | 29.2 | -2.8 |
| PIT | Mike Wallace | 20.9 | 6.05 | 25.2 | -4.3 |
| BAL | Anquan Boldin | 20.0 | 6.16 | 24.6 | -4.6 |
| OAK | Denarius Moore | 18.3 | 5.86 | 26.2 | -7.8 |
| JAX | Justin Blackmon | 23.1 | 4.80 | 31.7 | -8.6 |
| STL | Danny Amendola | 18.5 | 5.68 | 27.1 | -8.7 |
| PHI | Jeremy Maclin | 20.4 | 5.22 | 29.5 | -9.1 |
| TEN | Kendall Wright | 19.3 | 5.08 | 30.2 | -10.9 |
| MIN | Kyle Rudolph | 19.4 | 4.99 | 30.7 | -11.3 |
| SDG | Malcom Floyd | 16.7 | 5.44 | 28.3 | -11.7 |
| KAN | Dwayne Bowe | 24.4 | 3.83 | 36.7 | -12.3 |
| ARI | Larry Fitzgerald | 26.0 | 3.42 | 38.9 | -12.9 |
| CLE | Josh Gordon | 16.8 | 4.89 | 31.2 | -14.4 |
| NYJ | Jeremy Kerley | 19.7 | 4.29 | 34.3 | -14.6 |
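For the curious, the Proj and Diff columns can be reproduced with a simple linear function of team ANY/A. The slope and intercept below are my own back-solved approximations from the numbers in the table (roughly 56.6 − 5.2 × ANY/A), not an official formula:

```python
# Sketch: reproduce the Proj (expected target share) and Diff columns.
# The coefficients are back-solved approximations from the table above,
# not the exact fit used in the article.

SLOPE = -5.2       # assumed: change in expected target % per unit of ANY/A
INTERCEPT = 56.6   # assumed: expected target % at 0.0 ANY/A

def expected_target_pct(anya):
    """Expected share of team targets (%) for a #1 receiver, given team ANY/A."""
    return INTERCEPT + SLOPE * anya

def diff_vs_expectation(target_pct, anya):
    """Percentage points above (+) or below (-) the diagonal line."""
    return target_pct - expected_target_pct(anya)

# Brandon Marshall: 39.9% of team targets on a 5.10 ANY/A offense
print(round(expected_target_pct(5.10), 1))        # ~30.1
print(round(diff_vs_expectation(39.9, 5.10), 1))  # ~9.8
```

The same function recovers Crabtree's 9.8 (29.4% of targets at 7.12 ANY/A) and Fitzgerald's −12.9 within rounding, so it's a close stand-in for the diagonal line in the graph.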

• Josh Morgan played on the 2nd best passing offense in the league, according to ANY/A. But he comes in as below-average in this system, even though he was the team’s #1 wide receiver in terms of targets and receptions (he was actually 4th in receiving yards). Wes Welker played in an offense that was nearly as efficient, but he was a much bigger part of the New England passing attack, and gets credited as such. On the other hand, this is a bit of a strawman, as no one talks about Morgan as being a special player. But I’m glad this doesn’t overrate someone like him, which was a potential concern I had.
• Attempts are removed from the analysis. Players on high-passing offenses like Detroit don’t have a built-in advantage over players on teams like Chicago and Seattle.
• Jeremy Kerley gets dinged pretty severely in this system, which basically says: you were the biggest part of the third worst passing offense in the league; but unlike Larry Fitzgerald and Dwayne Bowe, you couldn’t even get a higher percentage of your team’s targets. If you can’t get 25% of the team’s targets on a terrible passing offense, you’re probably not a number one wide receiver. For the most part, I think this system generates pretty believable results.

There are some pretty obvious negatives, though.

Calvin Johnson against the Packers.

I think Calvin Johnson’s 2012 season has been overrated by many folks, but that just means I might put him at 2nd or 3rd in 2012 instead of calling it one of the greatest seasons in history. Having him rank 12th feels ludicrous. Let’s try to understand why. Detroit ranked 16th in ANY/A, which is merely average. And Megatron was only 8th in target rate among number one receivers. This system thinks that if Johnson were the best receiver in football, Detroit’s ANY/A would have been a lot higher. And if Detroit’s ANY/A wasn’t higher because the other receivers on Detroit were so terrible, then Johnson should have been first, and not eighth, in target rate. After all, Brandon Marshall was on a terrible passing offense and saw 40% of the targets.

That is a legitimate argument, at least in theory. Now, what is the counter?

Perhaps the Lions erred by not throwing his way more frequently. It seems odd to suggest that, since usually such an argument would be reserved for a breakout player: that doesn’t apply here, since Johnson led the league in receiving yards in 2011. But consider that on 204 targets to Johnson, Detroit averaged 9.6 yards per target. On the Lions’ other 526 targets, the team averaged just 6.0 yards per target.

The two big offenders are Brandon Pettigrew and Tony Scheffler, although much of the blame also falls on Titus Young. Despite recording the second most targets among wide receivers on the Lions, Young only finished fifth in targets on the team behind the two tight ends and Joique Bell. He was so bad he couldn’t get looks, while the tight ends simply did nothing with the targets they received. Among the top 30 tight ends in targets, Pettigrew and Scheffler ranked 28th and 30th in catch rate. That alone is not damning, just like a quarterback’s low completion percentage is not the final word on his production. The problem for Pettigrew is that he wasn’t balancing out high-risk passes with high rewards: among those 30 tight ends, he ranked just 23rd in yards per completion. As a result, Pettigrew ranked just 28th in yards per target, and while Scheffler had a solid yards/catch ratio, due to an abysmal catch rate of 49.4%, he ranked only 25th in yards per target among those 30 tight ends. As for the other wide receivers, Young and Nate Burleson combined for 60 catches on 99 targets for only 623 yards.

When you look at why the Lions regressed in 2012, the tight end production sticks out like a sore thumb. In 2011, Pettigrew and Scheffler had 25.0% of the team’s targets, but combined had a 66% catch rate and averaged 6.8 yards per target and 10.3 yards per completion. Those aren’t great numbers, but in 2012, they accounted for 25.7% of the team’s targets and averaged 10.6 yards per completion, roughly in line with their 2011 production. The problem was their catch rate dropped to just 54 percent and they collectively averaged only 5.7 yards per target. With Titus Young imploding and Nate Burleson simply not being a good enough player to command more targets, the Lions passing game sank. If there’s a criticism of Johnson, it’s that perhaps he is not as complete a receiver as Brandon Marshall or Andre Johnson, who are able to be both possession receivers and big play threats. Arguably the Lions should have utilized Johnson more, which is pretty scary considering how many yards he picked up. But instead, Detroit wasted 286 targets on Pettigrew, Scheffler, Young, and Burleson.
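The tight end collapse falls out of a simple identity: yards per target is just catch rate multiplied by yards per completion. A quick sketch using the Pettigrew/Scheffler figures above (my arithmetic, applied to the article’s numbers):

```python
# Yards per target decomposes as catch rate x yards per completion.
# Combined Pettigrew + Scheffler figures, taken from the text above.

def yards_per_target(catch_rate, yards_per_completion):
    """Y/T identity: targets cancel out of (catches/targets) * (yards/catches)."""
    return catch_rate * yards_per_completion

ypt_2011 = yards_per_target(0.66, 10.3)  # 2011: 66% catch rate, 10.3 yds/completion
ypt_2012 = yards_per_target(0.54, 10.6)  # 2012: 54% catch rate, 10.6 yds/completion

print(round(ypt_2011, 1))  # ~6.8
print(round(ypt_2012, 1))  # ~5.7
```

In other words, the per-completion production barely moved; the entire drop from 6.8 to 5.7 yards per target is attributable to the cratering catch rate.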

There are other negatives to this system, although I’ll only focus on two more today. One is that we’re told that Larry Fitzgerald stinks, although I don’t know how you come up with any system that says otherwise based on what happened in 2012. I’m less concerned with that (just because I see no solution) than I am with the fact that the model thinks Michael Crabtree is the man. On one hand, perhaps he is.

Crabtree ranked only 14th in receiving yards, but the 49ers were 31st in pass attempts, which put Crabtree at a disadvantage. In terms of receiving yards per team attempt, Crabtree ranked 4th. But seeing him rank first here should sound the skepticism alarm in your head. My guess is that he’s benefiting from the 49ers’ great rushing attack, which was amplified once Colin Kaepernick came into the lineup. San Francisco ranked 7th in the league in ANY/A, but I doubt that would hold up if they threw 600 passes. Crabtree is fortunate to face defenses that aren’t expecting the pass.

But I don’t want to sell Crabtree short, either. Washington and Seattle ranked even higher in ANY/A than the 49ers, but their receivers do not stand out in the same way. Perhaps it’s a sign that Robert Griffin III and Russell Wilson are better at spreading the ball around, but I think the more likely scenario is that Crabtree is a very good receiver capable of carrying a passing offense on a run-heavy team, much like a Jimmy Smith or Michael Irvin. But it’s early: it will be interesting to see how Crabtree does next year.

• Danish

This project is amazingly interesting. I specifically like how we’re along for the ride in your thought process – we seemingly even have a degree of influence on it – and gradually inch our way closer to a solution.

• Chase Stuart

Thanks Danish. Really appreciate the kind words. You guys are absolutely influential to how this turns out. I’ve struggled with exactly how to grade receivers for years, and could use the help. Step one for me is putting everything I’m thinking on paper, and then hope I can figure out what step 2 looks like.

• Neil

Seems to me that the wide receiver quandary is the same as the usage-vs-efficiency debate in basketball (I think Doug even compared the two at some point). You’ve got situations where teammates are competing not just with opponents, but also with each other for a share of the team’s touches. So how do you compare the relative offensive contributions of, say, Hakeem Olajuwon — who had a massive usage but average efficiency — and Clyde Drexler, who had average usage but massive efficiency?

And that’s just among guys on the same team! Comparing across teams adds another layer of complexity.

http://www.d3coder.com/thecity/2012/03/30/visualization-the-outer-limits-of-the-usage-efficiency-relationship/

The bad news is that there are still holdouts who don’t even believe in a tradeoff at all (http://wagesofwins.com/), as well as the fact that the tradeoff isn’t a completely reliable predictor of future 5-man unit behavior (see 2010-11 Miami Heat, etc).

I don’t know if it’s a solvable problem in either sport, but at least this project is going to help move NFL WR analysis more towards the level NBA analysis is at right now.

• Hummmr

Thanks again. Very stimulating stuff.

Seems like this system has a lot of promise to it, but one comment I would make is that by using only end-of-season data, you actually can lose a lot of valuable information due to injuries, changes at QB, strength of opponent, and so on. For example, Josh Morgan only appears on this list because Pierre Garcon was hurt for a large portion of the year (in weeks 11-17, when Garcon was back and healthy, Garcon was getting almost 40% of Washington’s targets and almost double that of Morgan). Garcon’s missing time definitely inflated Morgan’s target percentage, deflated Garcon’s target percentage, and probably deflated Washington’s ANY/A for the season. In Crabtree’s case, the switch from Alex Smith to Kaepernick seems to have really benefitted him personally, as well as the San Francisco offense overall, so I would guess that the weeks prior to the Alex Smith injury/demotion probably skewed Crabtree’s numbers here, and seem to introduce unnecessary “noise” that affects this metric’s estimate of Crabtree’s “true” value. There are countless other examples of this, and it just seems that there is plenty of room to improve this metric to account for these types of things in order to increase the signal-to-noise ratio, and improve the estimate of each receiver’s value.
I’m not smart enough to know the best way to incorporate this type of stuff — I’ll leave that to Chase — and of course there is a tradeoff here because filtering out “noise” can also lead to sample size issues, increased effects of opponent strength, etc, but it does seem there is room for improvement here. And I realize Chase’s metric here is not presented as a finished product. Anyway, keep up the good work. I see lots of potential here. Can’t wait to see where things go from here.

• Chase Stuart

Thanks. Keep up the good commenting.

• Richie

Good thought. It might make more sense (though probably much more difficult) to use target rate and ANY/A only from the games that the given WR was playing in. (So for Garcon exclude games 2,3,6,7,8 & 9.)
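Richie’s suggestion could be sketched like this; the game-log records are hypothetical placeholders (not real 2012 data), but the idea is to recompute target share using only the games a receiver actually played:

```python
# Sketch of the per-game filtering idea: restrict target share (and,
# analogously, ANY/A) to games where the receiver was active. The game
# logs below are hypothetical placeholders, not real 2012 data.

def target_share_in_games_played(game_log, player):
    """Share of team targets in games where `player` was active."""
    games = [g for g in game_log if player in g["active"]]
    player_targets = sum(g["targets"].get(player, 0) for g in games)
    team_targets = sum(sum(g["targets"].values()) for g in games)
    return player_targets / team_targets if team_targets else 0.0

# Hypothetical two-game log: Garcon active only in the second game
log = [
    {"active": {"Morgan"}, "targets": {"Morgan": 8, "Moss": 4}},
    {"active": {"Morgan", "Garcon"}, "targets": {"Morgan": 3, "Garcon": 9}},
]

print(round(target_share_in_games_played(log, "Garcon"), 2))  # 9/12 = 0.75
print(round(target_share_in_games_played(log, "Morgan"), 2))  # 11/24 = 0.46
```

Morgan’s share falls sharply once games with a healthy Garcon are weighted properly, which is exactly the distortion the commenters above are describing.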

• GMC

It’s nice to see this coming on the heels of what was done before, and I had thought even without this data that part of the Detroit passing game’s problem was the inability of anyone other than Calvin Johnson to get open. It’s interesting that Randall Cobb ended up being considered the #1 in Green Bay, since for much of the season he was the third receiver (or fourth) if everyone was healthy. Which they weren’t! It’s hard to compare his role on the team to Larry Fitzgerald’s or Calvin Johnson’s.

Also, does anyone else think that ANY/A appears to be rating the respective quarterbacks, rather than the receivers? Josh Morgan aside, the top four receivers in ANY/A play with Manning, Brady, Rodgers, and Brees. Meanwhile, Larry Fitzgerald and Dwayne Bowe were riding carousels of quarterback incompetence, and Kerley had only a couple of games of Greg McElroy keeping him from a complete season of Mark Sanchez. We may need to think a little bit more about our efficiency metric.

(note that all these guys are between about 18 and 28%, so that isn’t a big indicator for them).

• GMC

More thoughts:

1. It’s worth noting that my anecdotal views on the WRs involved are borne out. Everyone who obtained 25% of their team’s targets is a receiver I think of as a legitimate star receiver who ordinarily will be the best on their team (even if, as in the case of Welker, they do so not in the traditional position outside, but from the slot). Everyone below 20% is a receiver I think of as being misplaced as a #1 option in any offense.

2. One can almost make the same statement about the “best fit line”. But that would tell us that Larry Fitzgerald is the worst #1 receiver in the NFL, and we know that isn’t true, because he’s a Calvin Johnson ACL away from being the best receiver in the NFL.

3. Maybe the answer is that we are going to fail at finding a measure of receiving talent because autocorrelation problems in our dataset are insurmountable?

• Tim Truemper

One way to think about the numbers is: what do the numbers actually measure? The basic measurement point I’m making is Construct Validity. The data analysis possibly does not measure who was the “best #1 receiver” but provides a metric of who had the best performance values within the circumstances of their team. Chase mentions that perhaps Crabtree is “the man” but qualifies this because of the SF running game making it easier for his achievements, etc. To take another case, Larry Fitzgerald’s numbers within the above metric suggest that he struggled to perform within the system and circumstances he was placed into (below-par QBs, problems with offensive strategy, etc.), not that he is a “bad #1 receiver” (something we can all agree on).

I think pursuing the idea of who is the best receiver in any given year is too difficult a concept to objectively measure. To Chase’s credit, he created a logical formulation, then tested it with the numbers. The “face” validity of the outcome looks pretty good for players such as Brandon Marshall and Calvin Johnson, because it goes with our preconceived ideas of who is better based on subjective impressions of their ability. Thus, my initial point is that this is not about “the best receiver,” but instead about something more specific: best performance numbers within one’s team situation. I have to add that Chase’s analysis is great because of the scope and careful analysis that was put in, and it provides a good way to think about WR performance within its own team context.

• Chase Stuart

Due to being an idiot, I accidentally overrode the table above.

• Chase Stuart

But due to Danny being the man, this is now fixed.

• Zulwarn

You might also want to separate the tight ends since they’re not standard receivers. Good work though.

• Tim

For the commenters above: read about validity and statistical methods. A classic text is Winer, Statistical Principles in Experimental Design. Technical, but it gets to what Chase’s modeling of WR quality (more precisely, performance within context) is trying to do.


