Which Conferences Get Too Many Top 25 Teams?

The adjacent table attempts to show which conferences live up to their preseason predictions and which ones don’t.  It can also be read as: which conferences are overrated or underrated during the preseason?  The table counts conferences that had at least 10 top 25 teams in the final AP Poll during the ten-year period ending in 2011.

Selection Sunday: What Were The Best Conferences in Men’s College Hoops in 2011?

In the adjacent table, I attempt to find out how the conferences stack up against each other by measuring each conference’s parity and strength.  A blue label means the conference was in the top 5 in that category; a red label means it was in the bottom 5.
There are some interesting results in this table.  The Pac-10 coming out as the best overall conference was a slight surprise to me, though since it was in the top 5 in both conference parity and conference strength, perhaps it shouldn’t have been.  If I had to pick the top 5 conferences based purely on what I saw and perceived, I would have picked the same five, but probably not in that order.  The Southland Conference had the most parity, with a 136.2 rating; admittedly that was a surprise, because I haven’t been following the Southland all that closely.  The strongest conference, which mattered more to me, was the Big Ten, and I would tend to agree with that.  An earlier post of mine showed that the 2011 Big Ten and Big East were two of the three strongest conferences of the last decade.  Click through to see my methodology.
Perfect parity in a conference means that every team finishes at .500 in conference play, so the standard deviation of conference wins would be zero.  Therefore, as a conference’s standard deviation of wins approaches zero, its parity rating rises; conversely, as the standard deviation moves away from zero, the parity rating falls.  A parity rating of 100 means average parity compared to the other conferences.  For example, the Southland Conference’s parity rating of 136.2 means that conference has higher-than-average parity.
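The post doesn’t spell out the exact scaling, but one way to index parity so that a smaller standard deviation yields a higher rating, with 100 as the cross-conference average, is a simple inverse ratio.  The standard-deviation values below are made up for illustration:

```python
import statistics

def parity_rating(conf_sds, conf):
    """Parity index: lower standard deviation of conference wins
    means higher parity, scaled so 100 = the cross-conference average.
    (The inverse-ratio scaling is an assumption; the post only
    describes the direction and the 100-as-average convention.)"""
    avg_sd = statistics.mean(conf_sds.values())
    return 100 * avg_sd / conf_sds[conf]

# hypothetical standard deviations of in-conference wins
sds = {"Southland": 2.2, "Big Ten": 3.1, "Pac-10": 2.8}
print(round(parity_rating(sds, "Southland"), 1))  # above 100: high parity
```

Under this scaling, a conference whose win distribution is tighter than average lands above 100, and a top-heavy conference lands below it.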
Conference strength was a much simpler measurement: I took each conference’s non-conference win percentage and compared it to the rest of the conferences.  Just as with the parity measurement, 100 is average.
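A sketch of that comparison, again assuming a ratio-to-average scaling (the post only says 100 is average); the win percentages below are hypothetical:

```python
def strength_rating(nonconf_win_pct, conf):
    """Strength index: a conference's non-conference win percentage
    relative to the average across all conferences, scaled so
    100 = average.  (The ratio scaling is an assumption.)"""
    avg = sum(nonconf_win_pct.values()) / len(nonconf_win_pct)
    return 100 * nonconf_win_pct[conf] / avg

# hypothetical non-conference win percentages
pcts = {"Big Ten": 0.78, "Pac-10": 0.74, "Southland": 0.42}
print(round(strength_rating(pcts, "Big Ten"), 1))  # above 100: strong
```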
To measure the overall quality of a conference I combined the strength and parity indices.  I believe that conference strength is more important than conference parity when comparing conferences, so I counted strength twice and parity once in computing overall quality.  However, I also believe that a conference with great strength but little parity should not be rewarded with a high overall quality rating.  As a result, I used the geometric mean of the strength and parity measures rather than the simple average of the two.  Here’s an illustration of why I chose the geometric mean.  Look at the Southland Conference; it is in the top 5 in parity and the bottom 5 in strength.  If I just averaged parity and strength, the parity measure would have too much influence on the overall quality measure: the overall quality would be 103.9, an increase of almost two and a half points, a big difference on this scale.
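The weighting described above, with strength counted twice and parity once under a geometric mean, can be sketched as follows.  The Southland parity rating of 136.2 is from the post, but its strength rating isn’t listed, so the 75.0 below is a hypothetical stand-in:

```python
def overall_quality(strength, parity):
    """Overall conference quality: geometric mean with strength
    double-weighted, i.e. (strength * strength * parity) ** (1/3)."""
    return (strength * strength * parity) ** (1 / 3)

# parity 136.2 is the Southland figure from the post;
# strength 75.0 is a made-up value for illustration only
parity, strength = 136.2, 75.0
geo = overall_quality(strength, parity)
avg = (strength + parity) / 2  # simple average, for comparison
print(round(geo, 1), round(avg, 1))
```

The geometric mean pulls the overall rating toward the weaker of the two components, so a high-parity, low-strength conference is not inflated the way a simple average would inflate it.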