Researchers at Cornell have demonstrated that data science and artificial intelligence tools can identify instances in which prosecutors question prospective jurors differently in an effort to keep women and African Americans from serving on juries.
In a first-of-its-kind study, researchers analyzed jury selection transcripts using natural language processing (NLP) tools.
They discovered numerous quantifiable differences in how prosecutors questioned Black and white jurors. Once validated, this technology could provide evidence for appeals and be used in real time during jury selection to help seat more diverse juries.
The study was published in the Journal of Empirical Legal Studies.
John Blume of Cornell Law School collaborated with Martin Wells, the Charles A. Alexander Professor of Statistical Sciences in the Cornell Ann S. Bowers College of Computing and Information Science, and Effenberger to analyze transcripts from 17 South Carolina capital cases and determine whether NLP software could detect signs of disparate questioning. Their dataset comprised more than 26,000 questions put to prospective jurors by judges, defense attorneys, and prosecutors.
The researchers examined not only the number of questions posed to potential Black, white, male, and female jurors, but also the topics covered, the complexity of each question, and the parts of speech used.
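The article doesn't reproduce the researchers' code, but the kind of per-question measurement it describes can be sketched briefly. The snippet below is a hypothetical illustration using spaCy, which is an assumption; the authors' actual toolchain isn't named here. It computes question length, two crude complexity proxies, and part-of-speech counts for a single question, features that could then be aggregated by juror group.

```python
# Hypothetical sketch of the per-question features described above: length,
# complexity, and parts of speech. spaCy is assumed; the article does not
# name the authors' actual tools.
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline

def question_features(text: str) -> dict:
    """Compute simple length, complexity, and POS features for one question."""
    doc = nlp(text)
    words = [t for t in doc if not t.is_punct]
    n_words = len(words)
    # Average word length as a crude lexical-complexity proxy.
    avg_word_len = sum(len(t.text) for t in words) / max(n_words, 1)
    # Clausal dependents as a rough syntactic-complexity proxy.
    n_clauses = sum(1 for t in doc if t.dep_ in ("advcl", "ccomp", "xcomp"))
    pos_counts = Counter(t.pos_ for t in words)
    return {
        "n_words": n_words,
        "avg_word_len": round(avg_word_len, 2),
        "n_clausal_dependents": n_clauses,
        **{f"pos_{p}": c for p, c in pos_counts.items()},
    }

print(question_features("Could you describe how you feel about the death penalty?"))
```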
The analysis revealed significant differences in the length, complexity, and tone of the questions prosecutors posed to prospective Black jurors versus those posed to prospective white jurors, suggesting that prosecutors were likely attempting to shape those jurors' responses. Questions posed by defense attorneys and judges showed no such racial disparities.
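The article says the differences were statistically significant but doesn't name the test. Purely as a sketch, one standard nonparametric way to compare, for example, question-length distributions between two groups is a Mann-Whitney U test; the numbers below are toy values, not the study's data.

```python
# Hypothetical sketch: testing whether question lengths differ between two
# juror groups. The test choice and the data are assumptions for illustration.
from scipy.stats import mannwhitneyu

# Toy per-question word counts, split by which group was being questioned.
lengths_group_a = [12, 18, 25, 31, 22, 40, 17]
lengths_group_b = [8, 10, 14, 9, 12, 11, 13]

stat, p_value = mannwhitneyu(lengths_group_a, lengths_group_b,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
# A small p-value indicates the two length distributions likely differ.
```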
The study also uncovered evidence that prosecutors had attempted to exclude Black individuals based on their stances on the death penalty. Black prospective jurors, particularly those who were ultimately excused from service, were asked more explicit and graphic questions about execution methods than white prospective jurors.
In six of the 17 cases analyzed in the study, a judge ruled that the prosecutor had unlawfully excluded potential jurors based on race. By examining the combined NLP analyses for each case, the researchers were able to distinguish the cases that violated Batson v. Kentucky, the 1986 Supreme Court decision barring race-based juror strikes, from those that did not.
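The article doesn't explain how the combined analyses separated the two groups of cases. One plausible framing, sketched below with entirely invented numbers, treats each case as a row of aggregated disparity features and fits a simple classifier; with only 17 cases, this would be illustrative rather than predictive.

```python
# Hypothetical sketch: separating cases with and without Batson violations
# using case-level NLP disparity features. All values below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one case; columns might be Black-vs-white disparities in
# question count, mean question length, and mean complexity.
X = np.array([
    [0.8, 5.2, 1.1],  # judge found a Batson violation
    [0.9, 6.1, 1.4],  # judge found a Batson violation
    [0.1, 0.4, 0.2],  # no violation found
    [0.2, 0.9, 0.1],  # no violation found
])
y = np.array([1, 1, 0, 0])  # 1 = Batson violation found

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.7, 4.8, 1.0]]))  # classify a new case's features
```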
The findings, according to the researchers, demonstrate that NLP tools can successfully identify biased jury selection. Now, they hope to see similar research conducted on larger datasets with a greater variety of case types.