Wednesday, October 3, 2012

Moneylaw: Look at the law school data

by Robert Steinbuch, law.com
October 3, 2012

Recently, Justice Clarence Thomas said that he doesn't concern himself with law school rankings in hiring law clerks. Like the rest of the Supreme Court justices, however, Thomas gets virtually all of his clerks from top schools. Indeed, 54 percent of his clerks came from three schools alone: Harvard, Yale and Chicago. Thus, regardless of his source materials, Thomas seems pretty much just as concerned about what undergirds law school rankings as almost everyone else. That's not to say that rankings aren't complex or that various issues don't need to be addressed in employing them.

For example, my school recently received the admirable grade of "A-" under the National Jurist's assessment rubric for the "Best Value Law Schools"—placing the school in the 21-to-35 range. Because U.S. News & World Report's overall law school ranking does not consider affordability and evaluates several key factors not considered by the National Jurist, under U.S. News' metric, my school garners the 119th spot. Both measures, among others, are useful to informed consumers.

In contrast, some eschew law school (and other) ranking systems. I find unsupportable the claims against quantifying and then comparing complex matters. Rankings naysayers tend to prefer finger-in-the-air analyses or concealed, and often less rigorous, methods; and to the extent that these cynics are in the system under evaluation, their crypto-pseudo-evaluative tools tend to inure to their own favor. Moreover, absent rigorous assessment tools, the pallid puffery of pure "P.R." has the unsavory habit of co-opting the available information vacuum.

Of course, as a law professor, I'm required to rank complex matters all the time, as I grade law students. I'm never sure whether academics who attack the notion of rankings, all the while grading students, realize their hypocrisy. Of all the academically related rankings, grades are the most ubiquitous and likely have the most dramatic impact on students' lives. Yet some professors take this process for granted, while questioning the legitimacy of even pursuing such an endeavor in other contexts—be it ranking law schools, academic scholarship or something else. And, unfortunately, grading is often far less scientific and disciplined than the systems under greater critique by some in the academy.

Billy Beane of the Oakland Athletics, depicted in the book and movie Moneyball, demonstrated the importance of quantifying and comparing complex evaluations—in his case, the value of baseball players. Of course, no ranking system of complicated matters is perfect. But, for example, as a ballplayer, the Billy Beane portrayed by Brad Pitt would have been a ninth-round, no-signing-bonus draft pick, rather than a first-round, high-cash rookie—based on the data available at the time—had the Moneyball ranking system been adopted. The Billy Beane story also displays the conventional resistance to the introduction of statistical and other scientific analyses—a phenomenon not unrelated to that experienced by Copernicus half a millennium ago when his contemporaries refused to believe that they were not the center of the universe.

The same challenge exists to this day, albeit figuratively. So, when my colleague Josh Silverstein successfully sought to improve my school's grading system, he received a few initial objections similar to—but likely less colorful than—those that Beane faced. But as Beane demonstrated, applying statistical methods to previously unscientific and unremarkable analyses has changed the game of baseball and should serve as a lesson to the academy. No longer do we have the "curse of the Bambino." Now we have a Red Sox championship. I, for one, prefer to believe in proven success rather than hexes.

Perhaps the difficulty for those unsympathetic to rankings is their inability to operationalize a sound realization: any quantitative comparison requires evaluating the factors considered within the metric to see whether its underlying normative assessments coincide with those of the user. Thus, for the above two disparate rankings, each person must determine for herself whether neither, either or both are right. Of course, the answer depends on personal preferences. Indeed, I discussed this very dilemma in my recent article on Brian Tamanaha's excellent book Failing Law Schools. As both Tamanaha and I depicted, the choice between price and prestige is one faced by many students—and individuals may value these factors dissimilarly. Having all of the above metrics allows would-be students to be better informed, regardless of their individual preferences.

Of course, anyone evaluating a ranking system should examine the specific factors within the metric to see how it measures and weighs what she values. For example, the National Jurist ranking is based on average student debt, tuition, cost of living, two-year bar passage average (comparing that to the state's average) and weighted employment rate. These all seem like reasonable factors, but individuals should drill down on some of them more than others. For example, a high average debt that is a function of limited student aid offered by a school is very different from a high average debt caused by a student base that is disproportionately lower income. The latter might be the case at, say, a historically black college, and should not be considered a negative, while the former should be of concern, particularly if the student is interested in (and likely to get) a scholarship—be it merit-, need- or diversity-based. My school did well on this factor, and particularly well on tuition.

My school did not do as well on bar-passage rates. Overall class bar-passage rates, however, are driven far more by the quality of incoming students—as measured by LSAT scores and undergraduate grade-point averages—than by what any particular law school provides. In this regard, would-be students should compare their individual metrics to the average of their classmates used in any ranking system. This will provide some guidance on how applicable the overall bar-passage rate is to them. This is particularly important for those given some bump-up during admissions—for example, those receiving transformative consideration for veteran status, diversity reasons or legacy standing—as the evidence convincingly shows that students entering with lower LSAT scores and GPAs have lower bar-passage rates.

For example, a recent study at my school demonstrated significant differences in bar-passage rates across certain demographic groups—driven at least in part by admissions programs that seek to have classes mirroring the general population, as opposed to admitting only those students with the highest LSAT scores and GPAs. The cause at one level is simple math. Think of it this way: If Columbia decided now to double the size of its incoming law school class, the average LSAT scores and undergraduate GPAs of the additional students would fall below those of the first half of admittees, because schools don't generally skip better-qualified candidates willing to attend in favor of less-well-credentialed applicants. So, the remaining contenders would have a lower average objective academic profile. The same holds true when increasing constituent class components.

While a particular ranking might show a given school as a great value or not, would-be students must assess their individual chances of graduating, passing the bar and getting a job. Law school almost invariably is not a good value for those who flunk out, prove unable to pass a bar exam or cannot get any job requiring a law degree—notwithstanding their desires to the contrary (as law school graduates unsurprisingly tend to want to practice law).

Law students should be particularly informed consumers; they are learned and investing considerably in further education. Rankings are one sound resource available to assist them in this search for knowledge. These metrics should be evaluated—dare I say, ranked—and used appropriately.

Robert Steinbuch is a law professor at the University of Arkansas at Little Rock William H. Bowen School of Law.

Original Page: http://www.law.com/jsp/nlj/PubArticleNLJ.jsp?id=1202573675913

