
This Company Is Using Racially-Biased Algorithms to Select Jurors

Momus Analytics' predictive scoring system is using race to grade potential jurors on vague qualities like "leadership" and "personal responsibility."
Illustration by Lia Kantrowitz

In a short, polished video introducing his new company, attorney Alex Alvarez tells the story of a case he didn’t expect to lose. It was a straightforward slip-and-fall lawsuit pitting him against a less experienced attorney, and Alvarez was riding high off a series of “multi-million dollar” verdicts. He was shocked when this one didn’t go his way.

“If the reason I win or lose, or any lawyer wins or loses, is based on his skill level, then why did this happen?” Alvarez says in the clip. “And that started me on a quest to find out why jurors decide cases and how juries are deciding cases in America.”


Thus was born Momus Analytics, named after the Greek god of blame, criticism, and mockery—and part of a new and controversial breed of legal tech companies. Drawing on big data and machine learning techniques similar to those marketers use to determine whether to send you an ad for an SUV or a bicycle, these startups are offering attorneys unprecedented insight into jurors’ lives.

Some, like Momus, are also making unprecedented promises about the ability of that information to predict how a juror will lean in the deliberation room.

Lawyers, jury consultants, and legal technology researchers who reviewed Momus’ pending patent application at Motherboard’s request warned that the company may be founded on myth in more than just name. Rather than delivering superhuman insight, the experts said, Momus appears to be drawing assumptions from demographic information that has no reliable correlation to juror disposition—and some of the data it relies on may violate the constitutional prohibition against excluding jurors based on race or sex.

“Lawyers are not allowed to pick jurors based on race or gender, that’s a constitutional mandate,” American University Professor Andrew Ferguson, who has researched the use of big data for jury selection, told Motherboard. “The idea that the algorithm is going to weight race or gender or other protected classes in a way that could be outcome determinative—that’s a problem, because then you’re kind of tech-washing your racialized assumption of individuals.”


Alvarez and other Momus Analytics employees did not respond to multiple calls, emails, or written questions.

Momus begins by scraping public records and jurors’ social media posts. It then feeds the collected data into algorithms that determine scores for “leadership,” “social responsibility,” and “personal responsibility.” The company’s patent application lists several characteristics Momus relies on to reach those conclusions. Among them: people of Asian, Central American, or South American descent are more likely to be leaders, while people who describe their race as “other” are less likely to be leaders.
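To make that concrete, here is a minimal, hypothetical sketch of what a demographic-weighted scorer of this kind could look like. Momus’ actual model is proprietary; the feature names and weights below are invented for illustration, loosely following the characteristics listed in the patent application:

```python
# Hypothetical illustration only, not Momus' code: a toy "leadership"
# scorer that weights demographic features the way the patent describes.
LEADERSHIP_WEIGHTS = {
    "descent_asian": 1.0,              # patent: more likely to be a leader
    "descent_central_american": 1.0,
    "descent_south_american": 1.0,
    "race_other": -1.0,                # patent: less likely to be a leader
    "masters_degree_or_higher": 0.5,   # invented weight for illustration
}

def leadership_score(juror_features: set) -> float:
    """Sum the weights of whatever features were scraped for this juror."""
    return sum(LEADERSHIP_WEIGHTS.get(f, 0.0) for f in juror_features)

# Two jurors identical except for race receive different scores: exactly
# the outcome-determinative weighting the experts quoted above warn about.
print(leadership_score({"masters_degree_or_higher", "descent_asian"}))  # 1.5
print(leadership_score({"masters_degree_or_higher", "race_other"}))     # -0.5
```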

From there, the software spits out recommendations. “It will tell you who your best juror is, who your worst juror is, and all parts in between,” Alvarez says in another promotional video.

For many lawyers, the prospect is incredibly appealing. Voir dire can often seem like a mystic art, passed down through generations and often riddled with superstitions and ingrained biases. In some cases, an attorney may receive a list of the day’s jury pool, several hundred names long, only 24 hours in advance. With a limited number of questions, and an even smaller number of peremptory strikes to eliminate jurors without cause, the attorney must do their best to ensure that anyone predisposed against their client doesn’t end up on the jury.

For all but the wealthiest clients, who can afford jury consultants, doing actual research on jurors is all but impossible before most trials.


Other emerging companies are making more modest promises than Momus and attempting to filter race out of their algorithms. The growing industry promises to help lawyers better serve their clients, but it also raises questions about the balance between human and machine judgment in our justice system.

The long history of jury discrimination

Racial discrimination in jury selection—such as outright bans on non-white jurors—has been illegal in the U.S. since the Civil Rights Act of 1875, although that law did little to change the practices in many jurisdictions. In 1986’s Batson v. Kentucky, the Supreme Court furthered that protection, ruling that it is unconstitutional to eliminate a juror with a peremptory challenge based on their race. But studies of decades of post-Batson cases show the decision failed to fully eliminate the practice.

Some counties with large African American populations have seen more than 80 percent of black jurors struck during selection in death penalty cases, resulting in all-white juries half of the time and juries with only one black member the remainder of the time, according to a report from the Equal Justice Initiative. A separate study of the trials of 173 inmates who were on death row in North Carolina post-Batson found that black jurors were 2.5 times more likely to be struck than non-black jurors.

Some researchers, including Ferguson, have suggested that in certain applications, big data could lead to more representative juries, but there are tradeoffs.


“There’s reasonable arguments out there that using big data—publicly available info on jurors—will prevent over-relying on only characteristics of jurors that we can see, including protected characteristics like race,” Drew Simshaw, a Gonzaga University professor who studies artificial intelligence and legal technology, told Motherboard. “But we don’t know if the data that’s being used is relying on data that reflects inequality, prejudice, and discrimination in society. The proprietary nature of the services, the lack of transparency, and this black box issue present challenges.”

Technology’s promise

When Larry Eger was a new attorney—“and I thought that I was maybe more attractive than I actually was”—he had a tendency to favor young women during jury selection, thinking they might be more amenable to him as an advocate for his client. With experience, he found he fared better with grandmothers.

Many trial lawyers have similar stories that inform their particular strategies for jury selection. “We have our own stereotypes in our head when we’re selecting a juror and when we look at the jury panel,” Eger, the elected public defender for Florida’s 12th circuit, told Motherboard. “And as a defense attorney, I’m going to guess that most of us have a certain prejudice that if you have a black juror he’s going to be more sympathetic, but so many other factors come into play.”

For most of the history of jury trials, lawyers had little to base their decisions on apart from what they could observe visually and discern from a limited set of questions. Eger described the algorithmic ranking of jury pools as “frightening,” but didn’t deny the allure of tools that quickly and automatically gather large amounts of data on jurors—especially if an opposing prosecutor is using them. “When your ox is being gored, you want to take advantage and do whatever you can,” he said.


Among the companies selling big data insights to lawyers are Vijilent and Voltaire. Both scrape the public records and public social media profiles of jurors and run them through versions of IBM Watson’s Personality Insights tool, which uses natural language processing algorithms to categorize jurors within the “big five” personality traits model (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism).
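Tools in this vein score a person’s writing against the five traits. As a rough, invented sketch of the general approach (not Watson’s actual API, which relies on far richer language models), a simple lexicon-based version might look like this:

```python
# Rough sketch of lexicon-based big-five scoring. The word lists are
# invented for illustration; real tools model language far more deeply.
import re
from collections import Counter

TRAIT_LEXICON = {
    "openness":          {"curious", "art", "novel", "imagine"},
    "conscientiousness": {"plan", "organized", "careful", "duty"},
    "extraversion":      {"party", "friends", "excited", "talk"},
    "agreeableness":     {"kind", "help", "trust", "together"},
    "neuroticism":       {"worry", "stress", "afraid", "upset"},
}

def big_five_scores(posts):
    """Return each trait's share of lexicon words matched across all posts."""
    words = Counter(re.findall(r"[a-z']+", " ".join(posts).lower()))
    hits = {t: sum(words[w] for w in lex) for t, lex in TRAIT_LEXICON.items()}
    total = sum(hits.values()) or 1
    return {t: n / total for t, n in hits.items()}

print(big_five_scores(["So excited to see my friends at the party!",
                       "Trying not to worry about all this stress."]))
```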

Given how difficult it is to collect data on juries, the research into the correlation between personality traits and jury decisions is extremely limited. But at least one study has found a slight relationship between jurors with higher levels of extraversion and conscientiousness and not-guilty verdicts in criminal trials.

Vijilent and Voltaire differ from Momus in a significant way: They don’t claim to predict how jurors will judge a particular case.

“Our goal is to use AI and [machine learning] tools in data processing,” Basit Mustafa, the founder of Voltaire, told Motherboard. “In terms of actually selecting the bias [a juror might display], we want to actually leave that more up to the human.”

A client might ask Voltaire to run a search on a juror and flag any posts that contain keywords like “gun,” “food poisoning,” or “cancer,” depending on the type of case. The company would send back a report including the jurors’ big five personality assessment, a list of posts with flagged keywords, and other insights—for example, are they friends with doctors, and might therefore enter a medical malpractice case with a particular bias?
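The keyword-flagging step itself is straightforward. Here is a hypothetical sketch; the function and report fields are invented, not Voltaire’s actual interface:

```python
# Hypothetical sketch of case-specific keyword flagging; names invented.
def flag_posts(posts, keywords):
    """Return each post mentioning a case-relevant keyword, with the matches."""
    flagged = []
    for post in posts:
        lowered = post.lower()
        matches = [kw for kw in keywords if kw in lowered]
        if matches:
            flagged.append({"post": post, "keywords": matches})
    return flagged

report = flag_posts(
    ["Finally got my concealed carry permit!", "Great dinner last night."],
    keywords=["gun", "carry", "food poisoning"],
)
print(report)  # flags only the first post, on "carry"
```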


The company doesn’t honor requests to search for keywords related to protected classes like race or sex, Mustafa said, and it specifically turns down cases that are likely to stray into those areas, like housing discrimination.

The “Momus Methodology”

Momus Analytics publicly provides no details about how it takes the next step and determines the best and worst jurors for a case. The actual software appears to have been programmed by Frogslayer, a Texas company that creates custom software.

In its patent application, Momus describes a machine learning model trained to rank jurors, but does not explain what data it used to train such a model. It does, however, provide lists of the kinds of data the model considers determinative of a juror’s leadership qualities and bias toward personal or social responsibility.

In addition to race, the factors that Momus says indicate a propensity for leadership include: “alliance with negative social issues,” “US postal service as a profession,” “having a master’s level or higher of education,” “affiliated with the Democratic Party,” and “supporter of the NRA or hunting.” Traits indicative of non-leaders include: “profession as a boat captain or airplane pilot,” “profession in social work,” and having any level of education less than a master’s degree.

“I disagree that those are accurate and reliable predictors of whether somebody would be a leader or have a high level of personal or social responsibility,” Leslie Ellis, the current president of the American Society of Trial Consultants, told Motherboard. “In my experience of talking to thousands of mock jurors, post trial interviews, and having done a good amount of academic research on jurors, I would not rely on these characteristics to be predictive of either of those traits.”


“What we have seen in all sorts of different types of jury research … is that those types of easily quantifiable characteristics are very often not what’s actually predicting or correlating with the verdict. More often, they act as proxies for other things,” she added.

Nonetheless, parts of the legal community have embraced Momus Analytics.

The company launched last year and its software is currently only available to a select group of plaintiff’s attorneys, but it claims the methodology behind the program has led to verdicts worth more than $940 million since 2008. Motherboard reached out to six attorneys who appear to be responsible for some of the verdicts Momus claims as success stories. None of them responded to calls or emails.

The National Law Journal recently named Momus one of its 2020 emerging legal technologies and the National Judicial College invited Alvarez, the company’s founder, to speak at a judges’ symposium on artificial intelligence and the law.

Both Vijilent and Voltaire have been tempted at times to move further toward the kind of predictive analytics Momus advertises. Rosanna Garcia, the founder of Vijilent, told Motherboard that she initially considered offering clients predictions about jurors but found the level of interest didn’t justify the difficulty of training the models. “You would have to take data insights from 100-plus jury outcomes in order to be able to train your model,” and that data is hard to come by, she said.

About a year ago, Mustafa turned down an offer from an IBM salesperson to incorporate Watson’s Tone Analyzer feature into Voltaire for a small additional licensing cost. Tone Analyzer employs “sentiment analysis,” a field of natural language processing that analyzes the tone and emotion with which people write about certain subjects. The technique is ubiquitous in digital marketing—although research has found that certain applications tend to discriminate against groups like African Americans in their analysis.
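Researchers surface that kind of disparity with paired-input audits: feed a tool two statements that differ only in a name or dialect marker, then compare the scores. Here is a toy sketch; the scorer and word lists are invented for illustration:

```python
# Toy paired-input bias audit. The lexicon scorer is invented; its gap in
# dialect coverage mimics how real sentiment tools can penalize some groups.
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "terrible", "hate"}

def toy_sentiment(text):
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Identical sentiment, different dialect spelling: the second sentence
# scores lower only because the lexicon never learned "luv". A systematic
# gap like this across many pairs is the signature of a biased model.
for text in ("I love this movie", "I luv dis movie"):
    print(text, "->", toy_sentiment(text))
```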

“I looked at the data and thought, ‘that’s just flat out wrong,’” Mustafa said. “That’s really scary to me, as someone who’s trying to sell insight to lawyers. I don’t think that people come down to scores, flat out. I don’t think you can do that.”