"Smart people (like smart lawyers) can come up with very good explanations for mistaken points of view."

- Richard P. Feynman, Physicist

"There is a danger in clarity, the danger of over looking the subtleties of truth."

-Alfred North Whitehead

July 4, 2011

The End of Something

I will be discontinuing regular posts to this blog. I will post intermittently when something moves me. I have written what I needed to write and have learned much. However, empirical science moves slowly. It takes time to complete an empirical study and much more time to verify the study with other studies to discern any truth. (Not to mention that some studies are just plain nonsense, both in their methodology and in the conclusions drawn from the statistics.)

I do not want to waste your time or mine by writing entries because I feel compelled to post rather than because I have something worth communicating. Therefore, if you are interested, you may want to have the blog sent to you via e-mail. I will now spend the rest of my summer free time reading, riding my bicycle, and playing music. Have a great summer.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

To Win an Argument or to Find the Truth

A recent theory of argumentation that has been generating considerable discussion states that human reason has so many cognitive biases and other flaws because reason evolved for the purpose of winning arguments through persuasion, not as a means of discovering knowledge and making better decisions.[i] These researchers argue that the existence of such cognitive biases, most powerfully represented by confirmation bias, the tendency to view evidence in a way that confirms one’s prior views, makes sense only if human reason evolved to win arguments rather than to discover knowledge. If discovering knowledge were the primary purpose, our reasoning ability would not be as flawed as it is.

The paper is worth the read for anyone interested in the process of reasoning. As for confirmation bias, unfortunately it appears that this error in thought has been elevated to a creed within our political system. One only has to ask a devout Republican and a devout Democrat (or those associated with each party) to explain their perspectives on an issue to see confirmation bias in full action. To concede even the potential correctness of the other side’s position is considered weakness and grounds for rejection as an apostate, a logical result of a system driven by a hypercompetitive desire to prevail at the ballot box rather than to solve problems.

The authors explain that all hope for an actual conversation leading to knowledge is not lost. These researchers state that “people are quite capable of reasoning in an unbiased manner, or at least when they are evaluating arguments rather than producing them, and when they are after the truth rather than trying to win a debate.”[ii] Those of us in decision-making positions that affect others would better serve our society by really listening to others, not only to find the errors in others’ arguments, but also to find any potential truths that expose the errors in our own perceptions and thoughts.



[i] Mercier, Hugo, and Dan Sperber (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111. http://www.dan.sperber.fr/wp-content/uploads/2009/10/MercierSperberWhydohumansreason.pdf

[ii] Ibid. p. 72.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

June 20, 2011

Psychology and Confessions

Neuroscientist David Eagleman, in his book Incognito, describes the human brain as a team of rivals in which various parties in the brain can compete with each other to create a sense of being “conflicted”. He also discusses research showing that keeping secrets is unhealthy for the brain. Simply stated, he says, a secret is a competition within the brain between telling someone something and not telling them. This tension is what makes something a secret. He states that secret agents and spies are probably equipped with a strong module for withholding secrets. In contrast, based on the high percentage of confessions in juvenile delinquency cases, it doesn’t appear to me that juveniles have much ability to keep a secret.

Which brings me to the law of confessions. I never believed that the law regarding whether or not a confession was voluntary made much sense. What does it mean to say that “pressures brought to bear on the defendant by representatives of the State exceeded the defendant’s ability to resist”? Except for those defendants who turn themselves in to police and actually volunteer a report of their transgressions, almost any confession is the result of pressures brought to bear on the defendant by the police that exceeded the defendant’s ability to resist. That is why they confessed.

For some defendants, simply being asked “Did you commit the crime?” is sufficient pressure to exceed their ability to resist, so they confess. That statement would not and should not be considered “involuntary”.

Further, the law states that the confession must be “a product of a free and unconstrained will.” This formulation requires that there be some entity (apparently called the “will”) that operates as a cause without a cause, or as the “ghost in the machine” or the “soul.”

I could never figure out how one would evaluate whether or not another person’s confession was the product of a free and unconstrained will. How does one get access to another’s will, except by projecting one’s own conception of “will” onto another?

The concept of having a “will” is a cultural and religious construct. For example, some Christian denominations believe in the possibility of “free-will” while others don’t. Buddhists view the concept of having a self as an error in thought.

Instead of looking into some metaphysical conception such as someone’s will, the law would be clearer if the inquiry were simply reduced to two formulations that encompass the law regarding confessions. The first question is: Were the tactics the State used to obtain the confession incompatible with the values of our society as they relate to a citizen’s relationship with the State? For example, the intentional infliction of physical pain or the threatening of harm to force a confession is clearly incompatible with what we believe is proper behavior for a State actor toward a citizen. If the tactics were incompatible with societal values, the confession is suppressed.

The second question is: Is the confession unduly unreliable? If it is, the confession is suppressed. In many instances, an unreliable confession will be the result of coercive police conduct directed at a defendant who is susceptible to providing a false confession. However, Courts have found confessions to be “involuntary” under circumstances where the confession was obtained from an inordinately susceptible individual with very little police encouragement. See, for example, State v. Hoppe, 2003 WI 43. The focus should be on an evaluation of the confession’s reliability. The exclusion of unreliable confessions increases the accuracy of the determination of guilt within our criminal justice system.

A focus on proper police conduct and on a statement’s reliability would get the Courts out of the business of pretending to be able to divine what someone’s “will” was, and when that “will” was overcome by police pressure resulting in a confession that is not the product of a free and unconstrained will. (What about the defendant whose will was to confess, but whose other mental “demons” would not allow him to do so? Could any amount of coercion make his statement “involuntary”?)

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

June 13, 2011

David Eagleman Speaks on Neurolaw

Here are a couple of videos of David Eagleman speaking about law and neuroscience.

http://eaglemanlab.net/neurolaw

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

The Brain as a Team of Rivals

David Eagleman, in his recent book, Incognito: The Secret Lives of the Brain, argues that the brain is composed of various modules, some with competing objectives, comprising a “team of rivals”.

Eagleman, a neuroscientist at Baylor College of Medicine and director of the Initiative on Neuroscience and the Law, further argues that “blameworthiness” in the criminal justice system is the wrong question. He persuasively argues that as the understanding of the brain advances, including our understanding of the neurological bases for much deviant behavior, conduct for which defendants had been considered “blameworthy” will increasingly lead to legal findings of “not blameworthy.”

Eagleman argues that the only question is whether the behavior of the defendant can be modified. If it can, then rehabilitation, sometimes in the form of punishment, is appropriate. If the behavior is not modifiable, then a defendant should be warehoused in a place where he or she cannot harm the public.

Eagleman’s perspective on the criminal justice system is purely scientific, and his conclusions logically flow from this perspective. However, as I have argued in other entries in this blog, the criminal justice system is not merely a treatment system, but a social and political institution built on a society’s history and beliefs including religious beliefs—many of which are antithetical to scientific findings and the scientific method. The use of science in the law has sociological limits.

Eagleman’s book addresses many of the questions involved in deciphering human behavior and is a must-read for anyone involved in the criminal justice system or engaged in any undertaking that involves the modification of human behavior.

June 5, 2011

Looks Matter

Several studies have examined the relationship between the physical attractiveness of a defendant and jury verdicts. One study found that jurors treated defendants whom they considered physically attractive more leniently and treated those they considered physically unattractive more harshly.[i]

Another study found that defendants who were considered physically attractive were almost twice as likely to be acquitted as those considered unattractive.[ii]

Finally, a researcher looked at the effect of the physical attractiveness of a victim on a jury’s verdict in a car theft case. His research showed that a defendant was judged more harshly when the victim was physically attractive than when the victim was physically unattractive, provided the victim had also been careful in attempting to prevent the theft.[iii]

Now, all of the above studies involved the use of mock juries. Whether or not these same patterns would hold in an actual trial is another question. However, the studies do indicate yet another potential prejudice against which we need to be vigilant.



[i] Izzett, R.E. & Leginski, W. (1974). Group discussion and the influence of defendant characteristics in a simulated jury setting. The Journal of Social Psychology, 93, 271-279.

[ii] MacCoun, R.J. (1990). The emergence of extralegal bias during jury deliberations. Criminal Justice and Behavior, 17, 303-314.

[iii] Kerr, N.L. (1978). Beautiful and blameless: Effects of victim attractiveness and responsibility on mock jurors’ verdicts. Personality and Social Psychology Bulletin, 4(3).

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

May 30, 2011

Racial Bias and Judicial Decisions

Racial bias and prejudice should have no place in a judicial decision. How good are judges at excluding this pernicious tendency from their deliberations? Researchers attempted to evaluate this question.[i]

These researchers explain that two types of bias can exist in the courtroom. The first is explicit bias, which is bias that people are aware of and sometimes openly acknowledge. With cultural change, explicit bias has decreased, and it is certainly unacceptable within the judiciary.

The second type is implicit bias, which is bias that one may not be aware of and that operates at the unconscious level. Various techniques have been designed to measure implicit bias. One such technique is the Implicit Association Test (IAT), which measures the association between a race and words such as good/bad. (For several such tests, see https://implicit.harvard.edu/implicit/demo/)

The Rachlinski study showed that the white judges who were tested (from various geographical locations around the U.S.) had a strong white preference on the IAT. The black judges showed no strong racial preference.

These researchers then attempted to discover whether a strong white preference on the IAT affects judicial decision making. The judges were asked to make judicial decisions in three different criminal cases. These questions involved a determination of the guilt of the individual as well as the appropriate disposition. The judges were primed with the race of the defendant. The results showed that the judges’ decisions, on average, were not affected by the race of the defendant.

However, the researchers discovered that judges who had a white preference on the IAT gave higher sentences to black defendants, and judges who had a black preference on the IAT gave higher sentences to white defendants.

These researchers came to the following conclusions. “First, judges, like the rest of us, carry implicit biases concerning race. Second, these implicit biases can affect judges’ judgment, at least in contexts where judges are unaware of a need to monitor their decisions for racial bias. Third, and conversely, when judges are aware of a need to monitor their own responses for the influence of implicit racial biases, and are motivated to suppress that bias, they appear able to do so.”[ii]

A judicial decision (or any decision within the criminal justice system) should never be affected by racial bias. Those of us within the justice system must never tolerate explicit bias, and must be alert to, and guard against, implicit bias. The research indicates that if we are aware of our biases and are committed to an unbiased decision, we have a reasonable likelihood of succeeding in making unbiased decisions.



[i] Rachlinski, Jeffrey J., Sheri Lynn Johnson, Andrew J. Wistrich, and Chris Guthrie (2009). Does Unconscious Racial Bias Affect Trial Judges? Notre Dame Law Review, Vol. 84, No. 3, pp. 1195-1246.

[ii] Ibid, p. 1221

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

May 15, 2011

Thoughts of Death May Increase Desire to Punish

Terror management theory (TMT) posits that fear of death can strongly influence human behavior, including behavior within the courtroom. The theory starts with the hypothesis that when humans confront their own mortality, they are comforted by the belief that they share with others a stable world-view and set of values. This world-view may be a transcendent religious view, or a this-worldly, shared sense of community values.

Multiple studies have shown that when humans are reminded of their death, they have a tendency to want to protect, defend, and enforce their world-views. This tendency holds for decisions made in a courtroom, generally resulting in more punitive decisions.

Research has shown that judges who are reminded of their mortality will impose higher bail than judges who have not been so reminded. Juries will have a greater tendency to convict on lesser evidence and to recommend more severe sentences after being reminded of their mortality. People also can become more physically aggressive against those who threaten their world-views when reminded of their own mortality.

Research has also shown that TMT can lead people to be more lenient rather than punitive when it is the victim of the crime who threatens one’s world-view; any experienced criminal defense attorney understands this.

Terror management theory describes another mechanism of which those in the criminal justice system must be aware. While advocates may attempt to exploit this phenomenon, judges must be on guard against any undue influence on their own or a jury’s decision from thoughts of mortality.[i]

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

[i] Arndt, Jamie, Joel Lieberman, Alison Cook (2005). Terror Management in the Courtroom: Exploring the Effects of Mortality Salience on Legal Decision Making. Psychology, Public Policy, and Law, Vol. 11, pp. 407-437.

May 9, 2011

Judges May Not Be Able to Close the Valves of Their Attention

Emily Dickinson believed that the soul is able to “Close the valves of her attention like stone.” Is this ability limited to the ethereal world, or do judges also have it? As discussed in the last entry, the research shows that jurors find it difficult to put inadmissible information out of their minds when deliberating on a verdict. What does the research say about judges’ ability to do this?

Researchers examined this question by testing a group of judges with a series of questions.[i] The first scenario looked at the effect of judges’ exposure to settlement discussions on their ultimate award of damages. One group of judges learned that the plaintiff had requested $175,000 in damages. Another group of judges learned that the plaintiff had requested $10 million in damages. Each group of judges was paired with a control group that was not privy to the settlement discussions. The judges were warned not to use the information gleaned from the settlement discussions in their ultimate determination of damages.

The group of judges exposed to the $175,000 anchor (see Anchoring and Adjustment) awarded damages averaging $612,000, compared to $1.4 million for the group of judges who were not exposed to the anchor. The group of judges exposed to the $10-million anchor awarded damages averaging $2.2 million, compared to $808,000 for the group of judges who were not exposed to the anchor. The differences were statistically significant. The judges were not able to disregard the information they learned from the settlement discussions.

The second scenario involved judges deciding a contract dispute. Half of the judges had to rule on a discovery dispute involving the attorney-client privilege, which included an in camera examination of a letter between the plaintiff and his or her attorney. The letter greatly weakened the case for the plaintiff. Of the control group of judges, who were not exposed to the letter, 55.6% ruled for the plaintiff. Only 29.2% of the judges who were exposed to the letter and found it privileged ruled for the plaintiff. Of the judges who were exposed to the letter and did not find it privileged (and therefore could consider it as evidence), only 25% ruled for the plaintiff. The 29.2% and the 25% were not statistically significantly different from each other. Again, it appears that the judges were not able to put the privileged material out of their minds when deciding the case.

The third scenario involved a court trial on a criminal sexual assault charge. The issue was consent. One set of judges had to rule on a pretrial motion in limine regarding the complaining witness’s prior sexual history of promiscuity. The controls were not exposed to this information about the complaining witness. About 49% of the judges who had not been exposed to the information found the defendant guilty, whereas 20% of the judges who had been exposed to the information about the victim, and who had found it inadmissible, found the defendant guilty. (Only 7.7% of the judges who were exposed to the information and found it admissible convicted the defendant; however, because of the small sample size, this was not significantly different from 20%.) Again, it appeared that judges, on average, could not keep the inadmissible evidence out of their decision-making process.

The fourth scenario again involved judges making a determination of damages in a personal injury case. One set of judges was told that the plaintiff had a prior conviction for swindling elderly people out of their life savings in an investment scheme. This was inadmissible evidence. The judges who learned of the plaintiff’s conviction awarded 12% less than the judges who did not learn of this information. The difference was only marginally statistically significant. Judges did not appear to be as affected by this information in this decision.

The fifth scenario involved sentencing a defendant. Some incriminating information had been received from the defendant through a cooperation agreement and was ruled inadmissible. The judges who were not exposed to this information sentenced the defendant to an average of 78 months in prison. The judges who were exposed to the information, and had found it inadmissible, sentenced the defendant to an average of 85.9 months in prison. Again, it appears that the prohibited information was used in the judicial decision.

The sixth scenario involved one group of judges deciding whether or not to find probable cause for a search warrant, and another group making the same decision during a suppression motion, evaluating the same facts after the police had conducted the search and found a large amount of drugs. (The police had clear authority to search without a warrant under the circumstances of the scenario, provided they had probable cause.) About 24% of the judges would have issued the warrant, and about 28% would have found probable cause at the suppression hearing. The difference was not statistically significant. Judges appeared to be able to keep the fact that drugs were found, which is not relevant to the inquiry, out of their minds.

The seventh scenario again involved judges deciding the guilt of a defendant. One set of judges heard a suppression motion on a statement in which the defendant confessed to the crime. The statement had been improperly obtained and was suppressed by the court. Of the judges who did not hear the suppressed evidence, 17.7% convicted the defendant. Of the judges who had heard the suppression motion, 20.7% convicted the defendant. These two conviction rates were not statistically significantly different from each other, so judges appear to have been able to keep the inadmissible information from affecting their decision.

This research casts some doubt on whether or not judges have the ability to “close the valves of their attention like stone”. It appears that this closure, under many scenarios, is more like cotton cloth than stone.



[i] Wistrich, Andrew J., Chris Guthrie, and Jeffrey J. Rachlinski (2005). Can judges ignore inadmissible information? The difficulty of deliberately disregarding. University of Pennsylvania Law Review, Vol. 153, pp. 1251-1345.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

May 2, 2011

Do Not Consider this Entry for Any Purpose

Jurors are often asked whether or not they can put aside what they have heard or seen about a case and decide the case fairly and impartially based only upon the evidence presented at trial. In highly publicized cases, jurors who have been exposed to pretrial publicity often remain on the jury after assuring everyone that they will remain “fair and impartial.” Jurors are also often instructed to “disregard that answer,” or to use evidence in one way but not for any other purpose, as with “other-acts” evidence.

Are jurors able to follow these instructions? Empirical studies do not support the conclusion that they can. Researchers exposed mock jurors to different levels of two types of incriminating pretrial publicity: factually oriented and emotionally oriented. They further measured the effect of the passage of time after exposure on any resulting bias. They found that the passage of time between the exposure to the pretrial publicity and the trial reduced the biasing effects of factually oriented material. However, they did not find that this delay reduced the biasing effect of emotionally laden pretrial publicity.[i]

This research was replicated using videotaped and written pretrial publicity. No differences were found in the biasing effect of either videotaped or written pretrial publicity. Further, these researchers did not find a difference in the biasing effects of factually oriented pretrial publicity and emotionally laden pretrial publicity.[ii]

Research regarding trial instructions to disregard certain evidence, or to use certain evidence only in limited ways, is mixed. Research that looked at the disclosure of the defendant’s criminal history showed that evidence of prior convictions did not necessarily increase the conviction rate, but if the prior convictions were for similar conduct, jurors were not able to keep them from influencing their verdicts. Some research shows that the deliberation process may mitigate the effects of inadmissible evidence on juror bias.

One study looked at the type of instruction the judge used to inform the jury not to consider a statement of the defendant. A jury told to disregard the statement because it was illegally obtained was less likely to follow the admonition than a jury told to disregard the statement because the quality of the tape was too poor.[iii]

One finding in the research related to admonitions to disregard evidence is what is called the “back-fire effect”: a jury pays more attention to evidence that it is instructed not to consider than it would if it were not so instructed. The explanation given for the exclusion of the evidence appears to affect whether or not a jury will heed a judicial instruction to disregard it. There is some evidence to support the idea that if the exclusion is based on the unreliability of the evidence, such as hearsay, jurors will be more likely to heed the instruction.

There are many psychological theories as to why jurors will not disregard, or are not capable of disregarding, inadmissible evidence. One of the more common and interesting theories is the production of reactance in jurors. Reactance theory maintains that when people are told that they cannot do something that they believe they should be able to do, they react to this prohibition by increasing their determination to engage in the behavior.[iv]

The research regarding the ability of a jury to disregard various kinds of inadmissible evidence shows that there is reason to believe that juries often cannot. I don’t think this conclusion surprises anyone experienced in the courtroom. It does provide us with a warning to redouble our efforts to minimize a jury’s exposure to inadmissible evidence, and not to be misled by the fiction that a limiting instruction can cure everything.



[i] Kramer, G.P., Kerr, N.L. & Carroll, J.S. (1990). Pretrial publicity, judicial remedies, and jury bias. Law and Human Behavior, 14, 409-438.

[ii] Wilson, J.R. & Bornstein, B.H. (1998). Methodological considerations in pretrial publicity research: Is the medium the message? Law and Human Behavior, 22, 585-598.

[iii] Kassin, S.M., & Sommers, S.R. (1997). Inadmissible testimony, instructions to disregard, and the jury: Substantive versus procedural considerations. Personality and Social Psychology Bulletin, 23, 1046-1054.

[iv] Lieberman, Joel D. and Arndt, Jamie (2000). Understanding the Limits of Limiting Instructions. Psychology, Public Policy, and Law, Vol. 6, No. 3, 677-711.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

April 24, 2011

The Danger of Other-Acts Evidence

The rules of evidence generally prohibit the use of “evidence of other crimes, wrongs, or acts” to prove someone’s character to show that the person acted in conformity with their character. Other crimes, wrongs or acts can be admitted to prove such things as motive, opportunity, intent, plan, knowledge, identity, or absence of mistake or accident.[i] The jury instruction on “other-acts” evidence concludes: “You may not consider this evidence to conclude that the defendant has a certain character or a certain character trait and that the defendant acted in conformity with that trait or character with respect to the offense charged in this case.”

This evidentiary rule has generated more confusion among jurors, lawyers, and judges than most rules. I believe the reason for the confusion is that the rule works against how humans actually make many decisions, and that the stated exceptions to the rule can be read to essentially negate it. However, I also believe the rule is needed to provide a fair trial of factual issues.

The rule is attempting to proscribe evidence that one has a certain character and then acted in conformity with this character. Character is defined as the aggregate of features and traits that form the individual nature of some person or thing; or any trait, function, structure, or substance of an organism resulting from the effect of one or more genes as modified by the environment.[ii]

One cannot stare into someone’s eyes and divine a person’s character. The only way to judge a person’s character is to observe the individual’s behaviors and then inductively construct it. For example, one considers that the person was convicted of burglary in the past, and then one reasons that this person has the character for dishonesty and thievery when he or she is confronted with a certain set of circumstances.

To speak about someone’s character and acting in conformity with that character, one must think of a human as some sort of probabilistic input/output device. First, based on the prior behavior of the subject individual, the evaluator must infer the person’s character. Then one must assume that someone with this type of character, when confronted with a set of environmental stimuli, will be more likely to act in a certain way, i.e., produce a certain output, than an individual without that character.

For example, the probability of committing a burglary is a function of the probability that a person will be confronted with the environmental stimulus to burgle multiplied by the probability that the person has the character to burgle if confronted with such a stimulus. Research shows that the probability that a person with a history of committing crimes or engaging in other anti-social behavior will commit another crime is higher than for a person without such a history. Therefore, character evidence of a history of prior convictions or other anti-social activity tends to make the defendant’s guilt more probable than it would be for a person without such a history.[iii]
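
Stated as a rough formula (the notation is mine and is offered only to illustrate the reasoning, not to suggest the actual numbers are knowable): P(commits the offense) = P(confronted with the opportunity) × P(acts on the opportunity, given the person’s character). Evidence of prior convictions is offered to persuade the fact-finder that the second factor is higher for this defendant than it would be for a person without such a history.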

However, the Wisconsin Supreme Court has stated that: “Evidence of other crimes and misdeeds is not excluded because of an inherent lack of probative value, but is withheld as a precaution against inciting prejudice.” The Court further held that two reasons that character evidence is excluded are: “The over strong tendency to believe the defendant guilty of the charge merely because he is a person likely to do such acts” and the “tendency to condemn not because he is believed guilty of the present charge but because he has escaped punishment from other offenses.”[iv]

Humans make decisions based on heuristics, rules that make difficult decisions easier. One such heuristic is that people have character and act in accordance with that character. People say: “Don’t trust Joe, because he is a crook.” They avoid Joe, they don’t get hurt, and the warning appears to have worked, although they probably wouldn’t have been hurt even if they had trusted him. This heuristic is a quick and easy way to avoid problem individuals. It is related to the same heuristic that classifies people by such things as race, sex, national origin, and all other such classification schemes that are many times erroneous and often pernicious.

Coupled with this heuristic is the human inability to accurately calculate probabilities. The additional probative value of character evidence is probably far less than what most people imagine. For most people, knowing someone has been convicted of a crime in the past feels like strong evidence against that person in another accusation. A defendant with a history of criminal offenses is more likely to offend than someone without that history, but most people with a prior conviction will not re-offend. Predictions of whether or not a defendant has committed a crime, often based on merely one past crime, will often be wrong. The additional probative value of “other-acts” evidence on the question of whether or not a defendant re-offended is, in most cases, relatively low.
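
To see why the incremental value is modest, consider some purely hypothetical numbers. Suppose that 30 out of every 100 people with a prior burglary conviction commit another burglary within some period, compared with 15 out of 100 people without such a record. The prior conviction doubles the relative likelihood of offending, yet 70 of the 100 people with the record still do not re-offend. Standing alone, the record is a weak predictor, even though to most people it feels like strong evidence.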

Also, because emotions are central to human decision making and not separate from it, any emotional response elicited by the “other-acts” evidence (or, for that matter, any evidence) in a criminal case will affect the likelihood of conviction. Anyone with any experience trying cases understands the need for an emotional edge to win a case. Emotions often lead the rational thought process.

Humans also have an innate need to punish actions that result in harm and wrongful acts. If a defendant had not been punished (or in the eyes of the jury insufficiently punished) for the other bad acts, then the potential risk that a jury would convict a defendant because of unpunished “bad-acts” is greater than if these acts already had been sufficiently punished.

The exceptions to the exclusion of “other-acts” evidence attempt to confine its use to situations where the “other-acts” evidence has a great deal of probative value on an issue at trial other than mere propensity. But when one starts getting into issues such as general motive and intent, it is easy, and often inevitable, to slide (or plop) into propensity evidence. According to the jury instructions, motive refers to a person’s reason for doing something, and intent refers to whether the defendant acted with the state of mind required for the offense.

For example, in a child sexual assault case, the State will argue that the defendant’s motive for touching a child was “to obtain sexual gratification from a child” and that his intent was to become sexually gratified, and a prior, similar sexual assault of a different child is admitted into evidence to prove both. The jury instruction on “other-acts” tells the jury that the prior act can be used only on the issues of motive and intent. Why is the prior act probative of motive and intent? Because of the argument that, if he did it before, it is less likely that the touching was inadvertent or accidental, due to the “law of chances”. If such a touching happened once, it may have been an unlikely, inadvertent, or accidental touching. However, if it happened twice, the probability of an inadvertent or accidental touching on both occasions becomes much smaller.
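
A simple, purely illustrative calculation shows how the “law of chances” argument works. If the probability that a single touching was innocent or accidental were, say, one in twenty (0.05), and if the two incidents were truly independent, then the probability that both touchings were accidental would be 0.05 × 0.05, or one in four hundred. The force of the argument depends entirely on the assumed probability of an innocent touching and on the assumption of independence.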

If the defendant’s touching wasn’t inadvertent or accidental, then why did the defendant touch the child? Because the defendant touched the child to intentionally obtain sexual gratification. As intent is an element of the crime, the other-acts evidence is certainly being used to prove that the defendant acted in conformity with his character (being sexually attracted to children) with respect to the offense charged. The issues (not to mention the jury instruction on “other-acts”) get muddied and contradictory quite quickly.

If the “other-acts” evidence is offered for, and relevant to, an acceptable purpose, then the Court weighs whether or not the probative value of the evidence is substantially outweighed by considerations such as unfair prejudice or confusion of the issues. The inclusion or exclusion of “other-acts” evidence is ultimately a balancing between allowing a jury to hear evidence that will make it more likely to find guilt when, without the evidence, it would not make that finding, and excluding the “other-acts” evidence, making the jury more likely to acquit when, with the evidence, it would have found the defendant guilty. The rules regarding “other-acts” evidence, like many of the rules in the criminal justice system, involve a choice between type one error, which is convicting a defendant of a crime he or she didn’t commit, and type two error, which is acquitting a defendant when in fact the defendant was guilty.

Based on research regarding human decision making, the human need to punish, the inability of humans to correctly assess probabilities, and the uncertain assessment of the probability that a defendant actually committed a crime based on the defendant’s past, it is essential that the trial court act as the gatekeeper to the evidence a jury hears, to ensure that a fair assessment of the facts will be made by the jury. The danger of unfair prejudice urging a conviction despite the evidence is heightened under the circumstances where the “other-acts” evidence ignites the desire to punish, specifically in those instances of “other-acts” conduct that are inordinately heinous, especially relative to the charged conduct, and where the “other-acts” conduct was unpunished or insufficiently punished. The trial court has the power and tools to limit and circumscribe the use of “other-acts” evidence—to allow what facts are necessary to prove the purpose for the evidence, but to exclude those facts that may arouse the human desire to punish and that may prevent factual issues from being fairly determined.



[i] Wis. Stat. §904.04(2)(a).

[ii] Dictionary.com

[iii] See Wis. Stat. §904.01.

[iv] State v. Evers, 139 Wis. 2d 424, 407 N.W.2d 256 (1987). This case is a great exposition on “other-acts.”

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

April 18, 2011

Emotions and Decisions

Inherent in many legal rules is the ideal of a human as a rational and unemotional decision-maker. For example, we instruct the jury not to be “swayed by sympathy, prejudice, or passion”. We attempt to ascertain whether or not a confession was the result of “a rational intellect and a free will.” We are required to decide whether evidence should be excluded because the evidence “appeals to the jury’s sympathies, arouses its sense of horror, provokes its instinct to punish” or has a “tendency to suggest a decision on an improper basis, commonly, although not necessarily, an emotional one.”

The concerns that these rules attempt to address are legitimate, but the conception of human actors as rational decision-makers is, most likely, erroneous. This misconception is a foundation of neo-classical economics, and through the law and economics movement, this erroneous view has been further reinforced within the law.

David Brooks, in his recent book, The Social Animal, discusses this misconception. According to Brooks, the view of humans as rational, logical thinkers was the product of the French Enlightenment “led by thinkers like Descartes, Rousseau, Voltaire, and Condorcet.”[i] Brooks contrasts them with the English Enlightenment thinkers of David Hume, Adam Smith, and Edmund Burke who more clearly understood the role of emotion in being human. Brooks quotes Hume: “Reason is and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”

The research regarding human decision-making puts emotions at its center. A decision is not a calculation of sums, but an intuitive, partly subconscious resolution. According to Brooks, “It is nonsensical to talk about rational thought without unconscious thought,” as rational thought is built upon unconscious thought.[ii] Unconscious thought, although at times extremely good at making correct decisions, can also lead us astray in our decision making. I attempted to address some of those problems of thinking in earlier blog entries.

Jonah Lehrer, in his book How We Decide, discusses how emotions play a central role in decision making. Plato analogized human reason to a charioteer driving one well-mannered horse and one ill-mannered horse of the emotions. Plato was wrong again. Lehrer describes research on a brain-damaged subject who was unable to make decisions. Lehrer writes: “For too long, people have disparaged the emotional brain, blaming our feelings for all of our mistakes. The truth is far more interesting. What we discover when we look at the brain is that the horses and charioteer depend upon each other. If it weren’t for our emotions, reason wouldn’t exist at all.”[iii]

Lehrer’s book has many interesting ideas, including about the type of intelligence required to be an outstanding professional quarterback (it isn’t superior rational skill, and for some of the reasons discussed in his book, I was glad the Packers had Aaron Rodgers at quarterback during the last Super Bowl rather than Brett Favre). Emotions are central to many decisions, including complex decisions such as those jurors often face during a trial.

Brooks’ book discusses research on human behavior in many different contexts. Although the book is structured around the somewhat corny life stories of its fictional characters, Harold and Erica, it’s an easy, but worthwhile, read for anyone interested in human behavior.



[i] Brooks, David (2011). The Social Animal: The Hidden Sources of Love, Character, and Achievement. Random House, N.Y., pp. 233-34.

[ii] Ibid, p. 239

[iii] Lehrer, Jonah (2009). How We Decide. Houghton Mifflin Harcourt Publishing Company, N.Y., N.Y., p. 13.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

April 3, 2011

The Problem(s) with Memory-Part Three

In my last two entries, I discussed five of the seven “sins of memory” identified by Daniel L. Schacter in his book, The Seven Sins of Memory, How the Mind Forgets and Remembers. I will now discuss the last two sins.

The sixth “sin of memory” is bias. According to Schacter, bias “refers to distorting influences of our present knowledge, beliefs, and feelings on new experiences or our later memories of them.” Schacter identifies five different types of biases. “Consistency and change biases show how our theories about ourselves can lead us to reconstruct the past as overly similar to, or different from, the present. Hindsight biases reveal that recollections of past events are filtered by current knowledge. Egocentric biases illustrate the powerful role of the self in orchestrating perceptions and memories of reality. And stereotypical biases demonstrate how generic memories shape interpretation of the world, even when we are unaware of their existence or influence.”[i]

Schacter argues that consistency bias changes one’s memory of past events to reflect how one currently views a situation. If we have an opinion on a subject now, consistency bias makes us want to remember that we always had that opinion, even though in reality we didn’t. Change bias changes one’s memory of past events to make one think that one’s current state is better than it was in the past, when in fact it hasn’t changed.

Hindsight bias is the phenomenon in which people’s memories of what they were predicting change after an event happens. A common example is the prediction of how well a sports team will do in a big game or season. After the game or season, people’s memories of their predictions tend to match whatever occurred. If you predicted a win and the team lost, you tend to remember that you predicted the team would lose. If you predicted a loss and the team won, you tend to remember that you predicted the team would win.

Egocentric biases involve remembering past experience in a way that casts oneself in a positive light. Schacter discusses research on “positive illusions” in which most people believe that they are above average in various personality traits. Of course we can’t all be above average, and therefore some of us suffer from “positive illusions.”

Friedrich Nietzsche aptly conveys the meaning of egocentric bias in his aphorism: "I have done that, says my memory. I cannot have done that, says my pride, and remains adamant. At last, memory yields.”

Stereotype bias is when we remember past events in a way that is consistent with a stereotype of whatever it is we are considering. If we think about a librarian, we may think of a quiet, introverted woman with glasses rather than an athletic, extroverted male. When we remember a situation, we remember it in a way consistent with our stereotype, which then strengthens the stereotype. Schacter argues that this phenomenon is also present in racism and sexism.

The seventh sin of memory according to Schacter is the sin of persistence. Persistence involves remembering things you want to forget. An example is music that keeps running through one’s head. This sin is often the underpinning of depression as people can develop persistent memories of failure. It also underpins post-traumatic stress disorder.

Schacter concludes his book by illustrating that his “seven sins” of memory could also be considered seven virtues of memory. Each of the sins has developed to allow humans to thrive.

The research on memory shows that human memory is not equivalent to recording a scene with a camera. The processes of memory make a remembrance a subjective impression of a past event, an approximation, with some memories being more accurate approximations than others. I can recommend Schacter’s book to anyone interested in memory.



[i] Schacter, Daniel L. (2001). The Seven Sins of Memory: How the Mind Forgets and Remembers. Houghton Mifflin Co., N.Y., N.Y., p. 139.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

March 19, 2011

The Problem(s) with Memory - Part Two

In my last entry, I discussed three of the seven “sins of memory” identified by Daniel L. Schacter, in his book, The Seven Sins of Memory, How the Mind Forgets and Remembers. I will now discuss two more sins—ones that threaten the certainty of recollections we hear coming from a witness stand.

The fourth “sin of memory” is misattribution. Misattribution includes believing we remember things that, in fact, never occurred, and believing we are imagining things that we are actually remembering.

Schacter writes: “… misattributions are surprisingly common. Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time or place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it—without awareness—from something we read or heard.”[i]

Examples of misattribution are déjà vu and erroneous eyewitness identifications. Schacter discusses numerous examples of individuals falsely accused of crimes based on faulty eyewitness identifications. He discusses the research on “unconscious transference,” where one unconsciously transfers the memory of an individual from one context to another, and on binding failures, where we incorrectly glue together various pieces of memory to create a fabricated memory.

The fifth “sin of memory” is the sin of suggestibility.

Schacter writes: “Suggestibility in memory refers to an individual’s tendency to incorporate misleading information from external sources—other people, written materials or pictures, even the media—into personal recollections. Suggestibility is closely related to misattribution in the sense that the conversion of suggestions into inaccurate memories must involve misattribution. However, misattribution often occurs in the absence of overt suggestion, making suggestibility a distinct sin of memory.”[ii]

This sin of memory is also implicated in false eyewitness identifications through things such as show-ups or poorly conducted lineups. Further, Schacter reports research that casts considerable doubt on eyewitnesses’ evaluations of the certainty of their identifications. Anyone who has defended someone on charges based on eyewitness testimony knows its power over a jury, especially the eyewitness’s testimony that he or she is “certain” that the defendant was the person who committed the crime. The research Schacter references suggests that a very uncertain identification can be transformed into a certain identification merely with the word “Okay” spoken by the officer administering the line-up.

As a defense attorney, I was involved in an attempted homicide case that turned on whether or not a knot would slip. I claimed that the knot would slip, which would help exonerate my client. The State argued that it would not slip. An investigator with the DA’s office and I took the knot to a Coast Guard officer in Milwaukee. I watched the officer play with the knot. Unfortunately, the investigator had left the room for a minute, and while the officer was playing with the knot, the knot slipped. When the investigator came back into the room, I told the investigator what had happened. The investigator said to the officer: “It didn’t slip, did it?” The officer immediately said, “No.” I couldn’t believe it. We went to trial. The knot slipped at trial, and my client was acquitted. I do not believe that the officer was lying. I believe that the officer’s memory was changed by the investigator’s suggestion that the knot did not slip.

Suggestibility also is implicated in false confessions. Certainly many false confessions are the result of outright coercion. But, as Schacter notes, a subset of these false confessions involve the confessors believing that they committed crimes that they actually didn’t commit. In his book, Schacter discusses individual cases involving false confessions and suggestibility.

Schacter further reports on the research regarding the suggestibility of interviews with young children. I understand this research is controversial, as child-victim advocates sometimes view it as antithetical to protecting children and getting child molesters off the street. As a prosecutor, I was extremely interested in this issue. It’s the reason I bought Schacter’s book and started requiring recorded interviews by trained forensic interviewers. Anyone who has been involved with interviewing children knows the power of suggestibility, in both directions, from allegations to the recanting of allegations. Anyone making decisions about others’ lives based on memories should know the research about suggestibility and be on guard for those factors that may influence memories.

I will continue my discussion of memory in a future entry. As I will be involved in another project for the next several weeks, I probably won’t be back for a week or so.



[i] Schacter, Daniel L. (2001). The Seven Sins of Memory: How the Mind Forgets and Remembers. Houghton Mifflin Co., N.Y., N.Y., p. 90.

[ii] Ibid. p. 113.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

March 14, 2011

The Problem(s) with Memory

Human memory is often central to decisions made in the court system. Witnesses testify based on their memories of their observations of various matters. How confident should one be in a human’s memory? How confident should one be in someone’s confidence of their memories? Let’s say that human memory is far from infallible.

When I was a prosecutor, I handled most of the sexual assault cases. In many of those cases, the only direct evidence against a defendant was the victim’s memory of the assault. Many of the victims were children. I attempted to have the law enforcement agents who investigated these crimes treat the victim’s memory like a crime scene: preserve it and prevent contamination of it until all of the evidence had been collected. I had trained interviewers complete videotaped forensic interviews of the victims. I insisted on taped statements from suspects and witnesses.

I also read a book by Daniel L. Schacter, a professor of psychology at Harvard, entitled The Seven Sins of Memory.[i] Although published ten years ago, it still offers much insight into the research on memory. The book is arranged in chapters around what Dr. Schacter calls the seven sins of memory. I will touch on each of the sins.

Schacter identifies the first sin of memory as transience, which is forgetting things due to the passage of time. He writes: “Perhaps the most pervasive of memory’s sins, transience operates silently but continually: the past inexorably recedes with the occurrence of new experiences.” We forget things with time.

The second sin of memory that Schacter identifies is absentmindedness. Absentmindedness is “lapses of attention that results in failing to remember information that was either never encoded properly (if at all) or is available in memory but overlooked at the time we need to retrieve it.” Schacter believes absentmindedness is often the result of divided attention: you are thinking about something else, so you do not remember another thing. Anyone who has set his or her keys or glasses down, only to forget where, has experienced this sin.

Schacter states that the third sin of memory is blocking. Blocking occurs when the information that you are attempting to recall has been encoded in memory, but you are unable to retrieve it when desired. We all have had the experience of having something on “the tip of our tongue” but being unable to come up with it.

These first three sins are mostly passive culprits in reducing the accuracy of the justice system. People forget, and therefore cannot tell us, what we want to know to make an accurate determination of past events. An honest statement from a witness of “I don’t remember” will often be considered no evidence at all, providing no evidentiary weight in any direction. The sins that I address next week will be more pernicious in our quest for the truth.



[i] Schacter, Daniel L. (2001). The Seven Sins of Memory: How the Mind Forgets and Remembers. Houghton Mifflin Co., N.Y., N.Y.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.

March 7, 2011

Competing Ideas or Different Questions?


Two recent Sunday editions of the Milwaukee Journal Sentinel had opinion pieces that addressed the use of evidence-based strategies in addressing crime. The first article was written by Milwaukee County Sheriff David A. Clarke, Jr. See Let's treat criminals like...criminals.

It is safe to say that Sheriff Clarke is highly skeptical of using research-based strategies when making decisions within the criminal justice system. He believes that those who conduct and use such studies are either “academic elites,” “criminal sympathizers,” or “criminal advocates” (or a combination of the three) engaged in an attempt to “indoctrinate the public.” He ultimately believes that the best way to address crime is to reduce the cost of incarceration so that longer sentences are more affordable. The sheriff advocates locking criminals up for as long as possible in cheap, private prisons. He argues that the value of incarceration in terms of reduced victimization is not being fully considered by those advocating being “smart on crime.”

Milwaukee County District Attorney John Chisholm and two Milwaukee County judges, Jeffrey Kremers and Richard Sankovitz, responded to the sheriff in the following Sunday’s paper. See Rely on the facts to fight crime. They argue that “Decisions about how to correct offenders should be based on research and actual experience, not myth or anecdote or mere intuition.” They further argue that “…we in the criminal justice system are stewards of limited resources; we are responsible for holding offenders accountable and protecting the community—but cost effectively, within what the community can afford.”

The authors of both articles struggle with the central issues of crime and punishment in our society. First, we can all agree that crime is an economic drag on society. In an ideal society, everyone would voluntarily follow the law. In such a society, we would no longer bear the costs of victimization, the costs of law enforcement and the criminal justice system, or the costs of incarceration. Those societal resources could be used for things such as education and health care, or returned to citizens to spend in any way they desire. But we don’t live in that ideal society.

Recently, we have heard much about Wisconsin having to be more competitive with other states. Can we be competitive with states, like Minnesota, that have similar crime rates but spend considerably less taxpayer money on incarceration? Are the people in Minnesota doing something that allows them to control crime at a lower cost than we do? Or are Wisconsin’s people or communities more criminally inclined than Minnesota’s, so that Wisconsin must spend more on fighting crime to obtain similar crime rates? If there are ways to provide the same level of community safety at lower cost, or more community safety at the same cost, shouldn’t decision makers explore those avenues?

The use of “evidence-based” strategies when making decisions within the criminal justice system is one such avenue. The sheriff favors long sentences because he correctly understands that incarceration, and therefore incapacitation, for the longest period possible is highly effective in reducing the risk that someone will re-offend during the time the person is incarcerated. Further, one can say with a high level of certainty that incapacitation will reduce recidivism for nearly everyone who is incarcerated; for all practical purposes, we don’t have to worry about a population of offenders for whom incapacitation doesn’t work. The implication is that the best strategy to maximize community safety is to incarcerate all offenders for the maximum period possible. The reduction in the probability that someone will re-offend lowers the expected costs of victimization, and that reduction is the benefit of incarceration. I believe that is what the sheriff is arguing.

However, the other side of the ledger is the societal cost of those long periods of incarceration. We don’t live in a society where costs can be ignored; that is another ideal society that doesn’t exist, yet one we often like to pretend does. When a judge sentences someone to prison for ten years, he or she is in effect deciding to spend approximately $250,000 of taxpayers’ money in the hope that society will gain at least $250,000 of community safety and other value from the expenditure. If ten people are sentenced to prison for ten years, society will need to pay $2.5 million in incarceration costs. Sheriff Clarke understands that point, but argues that we should be trying to reduce the cost of incarceration, which is again a worthy goal.

But what if five of the ten people we send to prison for ten years would never re-offend again regardless of the sentence? If that is the case, we are spending $1.25 million and getting no additional community safety in return. We are also unnecessarily losing those defendants’ output as workers and as members of families and communities. Wouldn’t it be wise to try to identify the five individuals who wouldn’t re-offend regardless of their punishment, skip the $1.25 million it would cost to incarcerate them, and spend that money somewhere it could actually benefit us? This identification process of matching defendants with an appropriate sentence, and using only treatment that has been shown to be effective, is what “evidence-based” practice is about.
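To make the arithmetic concrete, here is a minimal sketch in Python using the post’s illustrative round numbers (about $25,000 per inmate per year, which is what $250,000 per ten-year sentence implies). These figures are for illustration only, not official cost estimates.

# Back-of-the-envelope version of the cost arithmetic above.
# All figures are the post's illustrative round numbers, not official estimates.
ANNUAL_COST_PER_INMATE = 25_000   # implied by ~$250,000 per ten-year sentence
SENTENCE_YEARS = 10
DEFENDANTS = 10
WOULD_NOT_REOFFEND = 5            # hypothetical: half would desist regardless of sentence

cost_per_sentence = ANNUAL_COST_PER_INMATE * SENTENCE_YEARS
total_cost = cost_per_sentence * DEFENDANTS
avoidable_cost = cost_per_sentence * WOULD_NOT_REOFFEND

print(f"One ten-year sentence:           ${cost_per_sentence:,}")   # $250,000
print(f"Ten ten-year sentences:          ${total_cost:,}")          # $2,500,000
print(f"Spent on those who would desist: ${avoidable_cost:,}")      # $1,250,000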

Everyone in the criminal justice system, including law enforcement officers, prosecutors, judges, and correctional professionals, is required to make decisions about how best to handle individuals believed to have committed a crime. Currently, many of these decisions are based on folk theory, custom, and erroneous intuition. The use of empirically supported theories when making decisions has been shown, in fields such as medicine, to be a more effective strategy for reaching desired goals than relying on custom and intuition alone.

Sheriff Clarke was clearly, and rightfully, upset by a convicted bank robber robbing another bank after being released from prison under some type of early release program. He called this program a “failed experiment.” We don’t know whether this convict’s release was based on an evidence-based assessment or not. But does the failure of one convict who participated in a program mean that the program has failed? It doesn’t, but it does exemplify a weakness in any risk-assessment approach, evidence-based or otherwise: risk assessments can be wrong.

Part of the “evidence-based” approach to crime uses scientifically validated risk assessments of defendants to identify treatment needs and then match those needs with the sentence. For example, if a defendant has been identified, through the risk assessment, as a good candidate for probation and treatment, then he or she is placed on probation rather than sent to prison. If someone is identified as a continued threat to society, that person is sent to prison. The weakness is that identifying high- and low-risk offenders with a risk-assessment instrument is far from accurate.

The research underlying any risk assessment uses group averages with confidence intervals. That research allows one to say that, for a group of individuals, one can be 95% confident that the proportion who will recidivate falls within the upper and lower limits of the confidence interval. (See, for example, Hart, Stephen D., Christine Michie, and David J. Cooke (2007), “Precision of actuarial risk assessment instruments: Evaluating the ‘margins of error’ of group v. individual predictions of violence,” British Journal of Psychiatry, 190 (suppl. 49), pp. s60–s65.)

However, when evaluating a single person, a different question is being asked. For example, if a defendant takes Risk Assessment A and scores 10, one can say that, for people who score 10, we are 95% confident that the probability of recidivism lies somewhere between an upper and a lower limit. The problem is that these limits for an individual are quite wide, much wider than the group confidence intervals. The prediction equation gives nothing more than the average value one would observe over many replications, with a high risk of being wrong for any one individual.
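A rough numerical sketch may help show how much wider the individual interval is. The snippet below uses a Wilson score interval, one common way of constructing such intervals, and follows the spirit of Hart, Michie and Cooke’s group-versus-individual comparison; the recidivism rate and sample sizes are invented for illustration and do not come from any actual instrument.

from math import sqrt

def wilson_interval(p_hat, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion p_hat observed in n cases."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Group question: suppose 52 of 100 offenders with this score recidivated.
print(wilson_interval(0.52, n=100))  # about (0.42, 0.62) -- reasonably tight

# Individual question: treat one defendant as a "sample" of one,
# the device Hart and colleagues use to expose the margin of error.
print(wilson_interval(0.52, n=1))    # about (0.06, 0.95) -- far too wide to be useful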

Therefore, unlike a policy of incarcerating everyone, where we know incapacitation will prevent recidivism during the period of confinement, a risk assessment will be correct on average but will, at times, be wrong. Using a risk assessment will help identify some of the people who would not recidivate regardless of sentence, but it will also make errors in both directions: some defendants will be classified as unlikely to recidivate and yet will re-offend, and some will be classified as likely to recidivate and yet never would.
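To make those two kinds of error concrete, here is a hypothetical tally. Every number in it, the count of defendants, the base rate of re-offending, and the instrument’s accuracy, is invented purely for illustration and does not describe any real assessment tool.

# Hypothetical illustration of the two error directions described above.
# All numbers are invented; no real instrument is assumed to perform this way.
n_defendants = 1000
base_rate = 0.40        # assumed share who would actually re-offend
sensitivity = 0.70      # assumed share of true recidivists flagged as high risk
specificity = 0.70      # assumed share of true non-recidivists flagged as low risk

recidivists = int(n_defendants * base_rate)          # 400
non_recidivists = n_defendants - recidivists         # 600

flagged_but_safe = non_recidivists - int(non_recidivists * specificity)  # 180 false positives
missed_recidivists = recidivists - int(recidivists * sensitivity)        # 120 false negatives

print(f"Flagged high risk but would never re-offend: {flagged_but_safe}")
print(f"Flagged low risk but will re-offend:         {missed_recidivists}")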

Does that make these evidence-based risk assessments useless? No. Police officers, prosecutors, judges, probation agents, and correctional professionals already make assessments of whether a particular defendant will re-offend. These assessments drive decisions about whether to arrest and prosecute, what sentence to impose, and whether a defendant should be released from prison early. The use of “evidence-based” information will help these professionals make better decisions. The research shows that using a risk-assessment instrument increases the accuracy of predictions over predictions based solely on unguided personal judgment. Some of the decisions made will still be wrong, as they are now, but these tools, developed and used appropriately (I share the sheriff’s concerns about the system’s ability to separate solid research from hype), will help us become more accurate. That increased accuracy, in turn, will increase community safety, not decrease it, and will also preserve taxpayer dollars.

The views expressed in this blog are solely the views of the author(s) and do not represent the views of any other public official or organization.