Archive for the ‘Rankings’ Category

Assessing US News Peer Assessment

Wednesday, August 19th, 2009

Over at Inside Higher Ed, Stephanie Lee, in Reputation Without Rigor, looks at the methodology behind the US News peer assessment survey.  Inside Higher Ed obtained the peer assessment survey forms submitted by 48 of the top 100 public universities in the 2009 US News university rankings.  While she found some gaming and some “major oddities,” most respondents gave “honest, if imperfect” responses.  Her overall conclusion:

the reputational survey is subject to problems, such as haphazard responses and apathetic respondents, that add to the lingering questions about its legitimacy.

Some of the persons who responded on behalf of universities complained of the difficulty of giving an overall evaluation of a university, as opposed to particular programs.  Presumably, that’s less of a problem for the law-school survey.  The real problem was time:

Ten hours. With 260-some colleges, giving each two or three minutes of attention, that’s how long it would take to adequately respond to the U.S. News survey, estimates Daniel M. Fogel, president of the University of Vermont. And he says that’s time no one like him can afford to spend.

With the number of law schools at 200 or so (and growing!), the time problem also affects the law-school peer assessment surveys.

Gary Rosin

Peer Assessments and the Great Divide

Tuesday, August 18th, 2009

Earlier this year, Paul Caron listed the peer assessment scores from the 2010 US News law school rankings. The rating scale runs from 1 (highest) to 5 (lowest). Here’s how the numbers fell out:

Average: 2.55

Percentiles
     25th: between 1.9 (22nd) and 2.0 (30th)
     50th: between 2.3 (47th) and 2.4 (55th)
     75th: about 2.9

The distribution (with a normal reference curve) looks like this:

Distribution of 2010 US News Peer Assessments

Looking at the actual scores, the distribution is decidedly non-normal.  Of particular interest are the “fat tails”: the distributions of the top and bottom 25 percent, which are much larger than would be expected under a normal distribution.  The top 75% of law schools have peer assessments spanning a range of 1.5 points (1.4 to 2.9), while the bottom 25% span a slightly larger range, 1.7 points (3.1 to 4.8).
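The fat-tail comparison above can be sketched in a few lines of code.  This is a minimal illustration, not the analysis behind the figure: the scores below are randomly generated stand-ins spanning the reported 1.4–4.8 range, not the actual 2010 US News data.  The idea is simply to compare the share of schools lying more than one standard deviation from the mean against the roughly 31.7% a normal distribution would predict.

```python
import random
import statistics

# Hypothetical peer-assessment scores (1 = highest, 5 = lowest), standing in
# for the actual 2010 US News law-school data, which is not reproduced here.
random.seed(0)
scores = [round(random.uniform(1.4, 4.8), 1) for _ in range(190)]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# Share of schools more than one standard deviation from the mean.
# A normal distribution puts about 31.7% of observations out there;
# a noticeably larger share is one sign of "fat tails".
tail_share = sum(1 for s in scores if abs(s - mean) > sd) / len(scores)
normal_share = 2 * (1 - statistics.NormalDist().cdf(1))  # about 0.317

print(f"mean={mean:.2f} sd={sd:.2f}")
print(f"observed tail share: {tail_share:.3f} vs normal {normal_share:.3f}")
```

With the real scores substituted in, the observed tail share would show how far the distribution departs from the normal reference curve in the figure.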

Assessing Student Learning: A Report from the Eastern Front

Tuesday, August 18th, 2009

Educating Lawyers:  Preparation for the Profession of Law (2007) started it (again).  Then the ABA adopted minimum Bar passage standards for law schools (Interpretation 301-6 of the Standards for Law School Approval).  Then the ABA’s Standards Review Committee appointed a Special Committee on Outcome Measures (Report and Comments).  The upshot is that law schools are going to be paying much more attention to assessing student learning.  Law schools are latecomers to this.  You might even say we’re a backwater.  The rest of the American academy started a while ago, not to mention grades K-12 and No Child Left Behind.

On the Brainstorm blog at the Chronicle of Higher Education website, Sara Goldrick-Rab asks “Is Our Student’s Learning?”  She discusses a recent presentation, at the meeting of the American Sociological Association, on the CLA Longitudinal Study.  According to Goldrick-Rab, one of the findings is that

[S]tudents who start behind tend to stay behind; put another way those inequalities at the starting gate are consistent.

Certainly, that’s what my analysis of law-school first-time Bar-passage rates shows:  Law schools whose students have lower LSAT scores have lower Bar passage rates.  See Unpacking the Bar:  Of Cut Scores and Competence, 32 J. Legal Prof. 67 (2008) (submission draft).

For law schools with a greater number of at-risk students, the question is whether better instructional practices and assessment, as well as academic support, can improve student learning.

Gary Rosin

Rankings: Peer Assessments and Test Scores

Monday, August 17th, 2009

Inside Higher Ed has an interesting piece, More Questions on Rankings, on an article by Kyle Sweitzer (Michigan State) & J. Fredericks Volkwein (Penn State), Prestige Among Graduate and Professional Schools: Comparing the U.S. News’ Graduate School Reputation Ratings Between Disciplines (forthcoming in Research in Higher Education).  The Inside Higher Ed piece notes that the article concludes that

Peer reviewers also seem to place a high emphasis on standardized test scores, with the average score significant for all of the graduate categories except education. Test scores also appear to have the greatest influence on the reputation (as measured by the survey) in law and medical schools.

Update:  Over on TaxProf, Paul Caron discusses the article in Peer Reputation:  Size Matters.

Gary Rosin