The recently completed college football season provides an excellent opportunity to illustrate some of the subtleties inherent in team ratings in a practical, rather than theoretical, manner. Specifically, the distinction between the "best team in the land" (paraphrasing the OSU coach) and the "team with the best season" illustrates the difference between predictive and retrodictive ratings quite well.

Clearly a definition is needed: the "best team in the land" is the team most likely to beat any other team. A well-known axiom of sports is that the better team doesn't always win the game. The phrase "that's why they play the game" means that the outcome of a game contains a large random component, not that you can't know who is better until the teams play. The obvious corollary is that the team that won a game isn't necessarily the better team. A less obvious corollary is that winning every game doesn't mean you are the best team in the country.
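To put rough numbers on that last corollary, here is a quick sketch (the per-game win probabilities are made up for illustration and don't come from any rating system) of how hard it is for even a dominant team to run a 14-game table:

```python
# Illustration with made-up numbers (not taken from any rating system):
# even a team that is an 85% favorite in every single game finishes a
# 14-game schedule undefeated only about 10% of the time, and a 65%
# favorite almost never does.  An undefeated record therefore takes a
# good deal of luck on top of genuine quality.
for p in (0.85, 0.65):
    print(f"P(win each game) = {p:.2f}  ->  P(going 14-0) = {p ** 14:.3f}")
```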

The question is then: given the randomness, how do you determine who is the best team? One can find an analyst who will examine the players and coaches of each team, combine this with his years of experience, and give you his expert opinion. While such analysts are usually well-informed experts in the field, their opinions are just that -- opinions. Consequently, a lot of people share the opinion that the "best team" cannot be measured quantitatively.

The good news is that it can. I'll refer you to the explanation of my predictive ratings for details, but the bottom line is that a team's performance in a game tells you, quite accurately, how good it is relative to its opponent. With the proper calculations, it is thus possible to determine how good each team is. My tests of the predictions show that the statistical model works as it should -- i.e., the predictions are as accurate as the model says they should be. Using this system, Ohio State isn't anywhere near the best team in the country; that title most likely (~90% confidence) belongs to Kansas State or USC. This isn't my opinion, it is fact!
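For readers who want a feel for the kind of calculation involved, here is a minimal sketch of one generic approach: fit ratings to scoring margins by least squares, then convert a rating difference into a win probability using an assumed normal distribution of game performance. This is not the exact model used for my predictive ratings (see that page for details), and the teams, margins, and 14-point standard deviation below are placeholder assumptions.

```python
import math
import numpy as np

# Sketch of a generic margin-based predictive rating, NOT the exact model
# used for the ratings discussed above.  Teams, margins, and the 14-point
# standard deviation are placeholder assumptions.
games = [("A", "B", 21), ("C", "A", 3), ("D", "B", 10), ("C", "D", 7)]  # (winner, loser, margin)

teams = sorted({t for g in games for t in g[:2]})
idx = {t: i for i, t in enumerate(teams)}

# Solve for ratings r with r[winner] - r[loser] ~ margin (least squares),
# plus a sum-to-zero constraint so the solution is unique.
A = np.zeros((len(games) + 1, len(teams)))
b = np.zeros(len(games) + 1)
for row, (w, l, margin) in enumerate(games):
    A[row, idx[w]], A[row, idx[l]], b[row] = 1.0, -1.0, float(margin)
A[-1, :] = 1.0
ratings, *_ = np.linalg.lstsq(A, b, rcond=None)

def win_probability(team_a, team_b, sigma=14.0):
    """P(team_a beats team_b), assuming the game margin is normally
    distributed about the rating difference with standard deviation sigma."""
    diff = ratings[idx[team_a]] - ratings[idx[team_b]]
    return 0.5 * (1.0 + math.erf(diff / (sigma * math.sqrt(2.0))))

print(f"P(C beats D) = {win_probability('C', 'D'):.2f}")
```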

If you had to place a bet on the winner of a USC vs. Ohio State game at even-money odds, you'd be foolish to put your money on OSU. That doesn't mean they won't win; they have close to a 1/4 chance of winning such a matchup. But the odds are in USC's favor because it is the better team. Before the Fiesta Bowl, I noted that Miami had a 2/3 chance of winning. They lost, of course, but that doesn't disprove my predictions. It means that if the teams played three times, Miami would probably win two of the three. It so happens that they lost the one game that was played, but again that doesn't necessarily mean they aren't the better team.
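The even-money logic is just arithmetic. A quick sketch, using the rough 1/4 figure quoted above:

```python
# Expected profit per $1 staked on an even-money bet is p*(+1) + (1-p)*(-1)
# = 2p - 1, where p is the win probability.  The 1/4 figure for Ohio State
# is the rough number quoted in the paragraph above.
def even_money_ev(p):
    """Expected profit per $1 staked at even money with win probability p."""
    return 2 * p - 1

print(even_money_ev(0.25))   # bet on Ohio State: about -0.50 per dollar
print(even_money_ev(0.75))   # bet on USC:        about +0.50 per dollar
```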

Buckeye fans are probably upset that I'm knocking their beloved team -- "14-and-oh!" The response is that I do give them credit, but in the other ratings. "Retrodictive" ratings measure how good a season each team has had. This is a fundamentally different problem. A team's season should be judged based on one thing -- winning and losing -- and Ohio State succeeded by that criterion. Consequently, my "standard" and "win-loss" ratings are designed to put the emphasis on who beat whom rather than the winning margins. Using ratings calculated by such a retrodictive system, Ohio State is indeed the team that had the best season, which is what the championship trophy is given for.
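As a concrete, if oversimplified, illustration of what a "who beat whom" rating looks like, here is a small sketch of a Bradley-Terry style fit that ignores margins entirely. It is not my standard or win-loss system, and the game results below are hypothetical placeholders.

```python
from collections import defaultdict

# Sketch of a purely win-loss based rating (a Bradley-Terry style fit),
# NOT the "standard" or "win-loss" systems described above.
results = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "B"), ("C", "D")]  # (winner, loser)

teams = sorted({t for g in results for t in g})
# Count wins and head-to-head games.  Each team also gets a fictitious
# half win and half loss against a fixed strength-1.0 opponent, so an
# undefeated team keeps a finite rating.
wins = {t: 0.5 for t in teams}
pair_games = defaultdict(float)       # pair_games[(i, j)] = games between i and j
for w, l in results:
    wins[w] += 1.0
    pair_games[(w, l)] += 1.0
    pair_games[(l, w)] += 1.0

# Fit strengths by the standard minorization-maximization iteration, where
# P(i beats j) = strength[i] / (strength[i] + strength[j]).
strength = {t: 1.0 for t in teams}
for _ in range(200):
    new = {}
    for i in teams:
        denom = 1.0 / (strength[i] + 1.0)              # the fictitious game
        for j in teams:
            if j != i and pair_games[(i, j)]:
                denom += pair_games[(i, j)] / (strength[i] + strength[j])
        new[i] = wins[i] / denom
    mean = sum(new.values()) / len(new)
    strength = {t: s / mean for t, s in new.items()}   # keep mean strength at 1

for t in sorted(teams, key=strength.get, reverse=True):
    print(f"{t}: {strength[t]:.3f}")
```

Because only wins and losses enter the fit, the undefeated team comes out on top regardless of how close its games were, which is exactly the behavior a retrodictive rating should have.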



Note: if you use any of the facts, equations, or mathematical principles introduced here, you must give me credit.

copyright ©2002 Andrew Dolphin