Torvik metrics now added to team sheets used for NCAA selection
- By macarthur31
- Wildcat Basketball Board
- 3 Replies
I subscribe to Eamonn Brennan's substack (former college basketball beat writer for The Athletic), and he had an interview with Bart Torvik regarding the inclusion of his metrics on the team sheets. I wanted to share an excerpt from the interview, as I appreciated the nuance that his metrics are trying to capture (bold is Eamonn Brennan, italics is Bart Torvik).
First of all, for the uninitiated, could you explain the general Torvik rankings and, maybe especially, Wins Above Bubble? What are they designed to do? How are they similar or different from what the committee already uses? How should fans understand their place on the team sheets?
Re: the general ratings, they are fairly similar to Kenpom, the NET, and BPI. Like Kenpom and the NET, the core of the ratings is based on adjusted offensive and defensive efficiency: points scored and allowed per possession, adjusted for strength of opponent and the location of the game. There are additional adjustments that give recent games more weight and give very little or no weight to blowouts in mismatches.
My ratings have two more unique aspects. First, for each game I use play-by-play data to determine a team's average lead or deficit, and derive an alternate calculation of efficiency using that data, which I then average with the "pure" efficiency. This rewards what some call "game control" in football. In short, a team that gets up 15 points and maintains that lead will have a better rating than a team that played a close game and then stretched the lead to fifteen in the final minutes (even though the pure efficiency numbers will be identical).
Second, and relatedly, I disregard garbage time for purposes of calculating this "GameScript +/-" stat. This further deemphasizes relatively meaningless fluctuations in the final score, and provides some disincentive to run up the score.
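The "game control" idea in the two paragraphs above can be sketched with a toy example. This is only an illustration under simplifying assumptions (one margin sample per minute, a naive garbage-time cutoff, names invented here), not Torvik's actual play-by-play computation:

```python
def average_margin(margins, garbage_cutoff=None):
    """Time-average of (team score - opponent score), one sample per
    minute; optionally drop garbage-time minutes (assumed simple model)."""
    usable = margins[:garbage_cutoff] if garbage_cutoff else margins
    return sum(usable) / len(usable)

# Team A builds a 15-point lead early and holds it wire to wire; Team B is
# tied until a late run. The final margins are identical (so "pure"
# efficiency would see them as equal), but the average lead differs.
wire_to_wire = [15] * 40            # up 15 for all 40 minutes
late_run     = [0] * 35 + [15] * 5  # tied for 35 minutes, then up 15

print(average_margin(wire_to_wire))  # 15.0
print(average_margin(late_run))      # 1.875
```

Averaging a lead-derived number like this with pure efficiency is what rewards the wire-to-wire team, and cutting the list off at a garbage-time point is what mutes meaningless late scoring.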
Wins Above Bubble is ultimately based on wins and losses only, but it uses the underlying power rating to give more credit for wins over better teams (and more punishment for losses to worse teams). Using any power rating, you can calculate how many games a bubble-quality team would be expected to win against any given team's schedule. For example, under my system a bubble-quality team would on average be expected to have won 19.1 games against Wisconsin's pre-tourney schedule last year. Since Wisconsin actually won 22 games against that schedule, they had a WAB of +2.9. If they'd won 17 games, they'd have had a WAB of -2.1.
WAB is similar in theory to ESPN's Strength of Record metric on the team sheet, but I believe it is a little better tailored for tournament selection. Also, since the NCAA will be using NET to calculate its WAB metric, I think there is a good chance that it will become the de facto resume standard for the committee and may get more attention, which would be good in my view.
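The WAB arithmetic above is simple enough to sketch in a few lines. The logistic win-probability model below is an assumption for illustration (Torvik's actual power-rating math isn't given in the excerpt); only the final subtraction, actual wins minus expected bubble-team wins, is what the interview describes:

```python
import math

def bubble_win_prob(bubble_rating, opp_rating, scale=10.0):
    """Probability a bubble-quality team beats an opponent, given power
    ratings on some common scale (logistic form is an assumption)."""
    return 1.0 / (1.0 + math.exp(-(bubble_rating - opp_rating) / scale))

def wins_above_bubble(actual_wins, opponent_ratings, bubble_rating=0.0):
    """WAB = actual wins minus the wins a bubble-quality team would be
    expected to collect against the same schedule."""
    expected = sum(bubble_win_prob(bubble_rating, r) for r in opponent_ratings)
    return actual_wins - expected

# Against a 30-game schedule of exactly bubble-level opponents, a bubble
# team expects 15 wins, so an 18-win season is a WAB of +3.0.
print(wins_above_bubble(18, [0.0] * 30))  # 3.0
```

With Torvik's Wisconsin numbers the last step is just 22 - 19.1 = +2.9 (or 17 - 19.1 = -2.1).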
I understand that just because these types of metrics are on the team sheets doesn't guarantee that they'll be utilized. However, I appreciate the continuous learning and innovation that folks out there are trying in service of a more equitable evaluation.
The interview closes with this exchange:
Lastly, I have a theory, one I feel was hammered home by the news Thursday: NCAA Tournament selection is better than ever. That doesn't mean it's perfect, or that there aren't flaws with the NET and certain committee emphases. But the process itself -- and especially the data being used -- is light years ahead of where it was a decade ago, especially relative to the number of annual complaints about it. What do you think?
I agree the data is better, and I get the sense that there is a real commitment from the people in charge to use the best data available. There may be disagreements about what that means, and I think that's part of why they like to have a variety of different ratings available. I will say that I'm sticking to my line that "committees aren't sports" and there are some aspects of committee decision-making in general that are not ideal and will never be ideal. We don't really need a committee at all. But I understand some of the reasons we have one, and likely will continue to have one, and I agree they are getting better over time.
Analytics like Wins Above Bubble will improve, and ultimately Torvik (and a growing chorus of others) believe there's no point in having a committee at all. However, I agree with Brennan (not excerpted here, but summarized from his end-of-newsletter reflection on the interview) - it's hard to imagine college basketball fans accepting a pure formula to determine the brackets.