
"Smart people (like smart lawyers) can come up with very good explanations for mistaken points of view."

- Richard P. Feynman, Physicist

"There is a danger in clarity, the danger of overlooking the subtleties of truth."

- Alfred North Whitehead, Philosopher

June 1, 2010

Evidence-based vs. research-based

Promoting a practice as “evidence-based” is currently in vogue among judges and social workers. I believe that is a good thing. However, authentic evidence-based practices in the law are few and far between.

For a practice to be “evidence-based,” it must meet the following conditions: 1.) The practice has been evaluated using an experimental or quasi-experimental design. 2.) Through statistical analysis, the practice has been shown to have a significant effect on the outcome (the dependent variable) it was designed to address, for example recidivism. 3.) The results have been peer-reviewed by others with expertise in such research. (Cooney et al. 2007; see previous post and link. They also include endorsement by an organization as a condition, which I would not include.) Satisfying these three conditions is not easy.

First, the type of work we do as judges is not easily configured to meet the requirements of experimental design. A principal feature of such a design is holding everything constant except the factor being studied, called the independent variable. As anyone involved in the law knows, no two cases are alike, regardless of their apparent similarities. Further, we cannot easily randomize our treatments. For example, would it be ethical (or legal) to randomly sentence one criminal defendant to probation and a treatment program, while denying another, similarly situated defendant the same treatment and sentencing that defendant to prison, simply because you, the judge, were following an experimental design to determine the efficacy of the treatment? I don't think so.

Those scientific design problems could be partly overcome through statistical methods that aggregate data or through the use of historical data (a quasi-experimental design). Herein lies the next problem: the sample sizes we have available for such studies are often too small for statistical methods to detect anything but huge effects. Many double-blind, randomized drug studies include sample sizes of tens of thousands of individuals. (A double-blind study is one in which neither the person receiving the treatment nor the person administering it knows which treatment is being given, a condition that would be nearly impossible to meet in legal practice.)

Legal studies would most likely not be randomized, much less double blind, and our sample sizes are typically far smaller. A smaller sample size brings a corresponding reduction in the power of statistical methods to detect a statistically significant effect.
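The sample-size point can be illustrated with a short simulation. The sketch below is my own illustration, not from the original post: the recidivism rates (50% untreated vs. 40% treated), the sample sizes, and the function name are all hypothetical assumptions chosen to show how detection power shrinks with smaller groups.

```python
import math
import random

def detection_power(n_per_group, p_control=0.50, p_treated=0.40,
                    trials=2000, seed=42):
    """Estimate how often a two-proportion z-test detects a true
    10-point drop in recidivism, given n_per_group people per arm.
    All rates here are hypothetical, for illustration only."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided test at the 0.05 significance level
    detections = 0
    for _ in range(trials):
        # Simulate recidivism counts in a control arm and a treated arm.
        x1 = sum(rng.random() < p_control for _ in range(n_per_group))
        x2 = sum(rng.random() < p_treated for _ in range(n_per_group))
        p1, p2 = x1 / n_per_group, x2 / n_per_group
        # Pooled standard error under the null of no difference.
        p_pool = (x1 + x2) / (2 * n_per_group)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_group)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            detections += 1
    return detections / trials
```

Under these assumed rates, a study with only a few dozen defendants per group detects the real effect only a small fraction of the time, while a study with hundreds per group detects it reliably; the effect did not change, only the sample size did.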

These two conditions alone make it very difficult for many legal practices to meet the criteria for being “evidence-based.” I will address the peer-review condition, and why I believe we legal practitioners should be “research-based” rather than “evidence-based,” in later entries.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

1 comment:

  1. This blog proves the old adage that it is difficult, if not impossible, to properly fit a square peg into a round hole. People who think that what judges do can be easily addressed by a one-size-fits-all approach need to walk in our shoes. Good job, Steve.