
Computers should have more say in determining the BCS standings because they can produce a more accurate ranking of teams than human voters can.

Currently, the computer rankings combined account for one third of the BCS standings, while the Harris Poll and the USA Today Poll each account for another third. This gives humans, who cannot process games at the scale computers can and are subject to their own biases, twice as much input as the combined result of seven algorithms, each independently constructed to determine the best teams. However, computers are not perfect, and they have produced some questionable rankings in the past. I believe there should be a fifty/fifty split between humans and computers in order to utilize the best attributes of each group while minimizing their limitations.

The main problem with human rankings is that they are subject to biases that are tough to remove. This is magnified by the existence of preseason rankings, which are formulated entirely via speculation, with preference given to teams in perceived "tough" conferences or teams that have performed well historically. Another major problem with preseason rankings is that they serve as the basis for all succeeding rankings, which makes the rankings less volatile. A highly ranked team may lose badly in week one and still be ranked, while better unranked teams fly under the radar for longer than they should.

College football would benefit if no rankings were published until at least a month into the season. This would lessen the bias effect of humans since there would be no preseason rankings to use as a base, though it would not eliminate their biases entirely. It would also give both humans and computers enough of a sample size to produce fairly accurate rankings, which is why the BCS standings are released so late into the season in the first place.

Also, no human has the time or capacity to process an entire weekend's worth of football every week, so many rankings are based on the small percentage of games that the voter actually watched. Finally, games that occur later in the year hold more significance in voters' heads than games played in September, since they are fresher in voters' memories. A loss to an unranked team in November would kill a top ten team, whereas the team would have plenty of time to recover if the same loss had occurred in the first few weeks instead.

For example, after Thanksgiving weekend in 2012 the AP poll had No. 3 Georgia three spots ahead of No. 6 Oregon. The teams had identical 11-1 records: Georgia lost 35-7 against No. 6 South Carolina in week six, while Oregon lost just one week prior against No. 13 Stanford in overtime, 17-14. Meanwhile, the Colley Matrix, which in my opinion is the most reliable computer ranking, had Oregon three spots ahead of UGA. Oregon had a much higher strength of schedule than UGA that year and played a much closer game in its only loss, but because that loss occurred more recently, Oregon ended up lower in the human poll, and therefore in the BCS standings as well.

The Colley Matrix is a prime example of a computer algorithm that uses strictly objective measures to determine the rankings.

First and foremost, according to its website, colleyrankings.com, “The rankings are based only on results from the field, with absolutely no influence from opinion, past performance, tradition or any other bias factor.”

It strongly incorporates a team’s strength of schedule in determining its rankings, and completely ignores margin of victory or defeat. It is a conceptually simple formula that produces remarkably agreeable results.
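To make the "conceptually simple" point concrete, here is a minimal sketch of the Colley method's core idea in Python. The method rates teams by solving a linear system C r = b, where each team's diagonal entry is 2 plus its games played, off-diagonal entries count head-to-head matchups (negated), and b reflects wins minus losses; only who beat whom matters, never the score. The function name and the `(winner, loser)` input format are my own illustration, not anything from the Colley site.

```python
def colley_ratings(teams, games):
    """Sketch of Colley-style ratings.

    teams: list of team names.
    games: list of (winner, loser) tuples; margin of victory is ignored,
           matching the method's score-blind design.
    Builds the system C r = b with C[i][i] = 2 + games played by i,
    C[i][j] = -(number of games between i and j), and
    b[i] = 1 + (wins_i - losses_i) / 2, then solves for the ratings r.
    """
    n = len(teams)
    idx = {t: i for i, t in enumerate(teams)}
    C = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for i in range(n):
        C[i][i] = 2.0  # Laplace-style prior: every team starts at 0.5
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        C[w][w] += 1.0
        C[l][l] += 1.0
        C[w][l] -= 1.0
        C[l][w] -= 1.0
        b[w] += 0.5
        b[l] -= 0.5
    # Solve C r = b by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(C[row][col]))
        C[col], C[piv] = C[piv], C[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = C[row][col] / C[col][col]
            for k in range(col, n):
                C[row][k] -= f * C[col][k]
            b[row] -= f * b[col]
    r = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(C[i][k] * r[k] for k in range(i + 1, n))
        r[i] = (b[i] - s) / C[i][i]
    return dict(zip(teams, r))


# Tiny example: A beats B, B beats C.
# Solving the 3x3 system gives A ≈ 0.667, B = 0.5, C ≈ 0.333,
# so strength of schedule is baked in: A is rewarded for beating
# a team that itself won a game.
ratings = colley_ratings(["A", "B", "C"], [("A", "B"), ("B", "C")])
```

One nice property of the construction: the ratings always average 0.5 across all teams, so a rating above 0.5 means a team is better than the hypothetical average team.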

Of course, this is all an article in futility, as selection for the new playoff system that begins next season will rely solely on a human committee. Hopefully the committee members will look to the computers before making their final decisions.