Ask Trish: Algorithms in the Criminal Justice System


Jul 23, 2024


By Trisha Prabhu

“Lots of people talk about algorithms in the criminal justice system and how it’s bad. Can u explain?”

Hi there, and welcome back to another week of Ask Trish! I hope you’re all well and having a wonderful July (I myself am spending plenty of time outside, trying to soak up as much summer as I possibly can!).

Thank you so much to this week’s questioner for the very interesting, important question. Indeed, algorithms have been deployed in the criminal justice system in the US and globally for quite a while now, and their use has attracted a lot of attention, both positive and negative. There are those who believe that these technologies can make justice swifter and more accurate, and those who argue that they enable discrimination and disproportionately punish historically marginalized groups. And at a higher level (one that’s often missed, despite being so important!), there are also those asking: results aside, ought we to employ algorithms in our criminal justice system at all? What applications are we, as a society, willing to accept? And what applications can’t we accept?

These posts (which I aim to keep quite short, so as not to lose your attention!) can’t dig into all of those questions, but I can introduce you to the topic – and in this week’s post, that’s exactly what I’ll do. First, we’ll talk about i) how algorithms are being used in the criminal justice system today (note that this won’t be comprehensive, and unfortunately, because of the information that’s available, there will be a Western bias. That’s something we have to change!), ii) why some people argue that these algorithms are a “good” thing, and iii) why others argue that they’re a “bad” thing. If, after you read this, you have more questions, I’d suggest that you use this post as a launching pad for further research. Do some digging online, and learn as much as you can.

Sound like a plan? Let’s get into it:

First and foremost, how are algorithms being deployed in the criminal justice system? Largely, algorithms have been used by police, judges, and prisons as a predictive tool in different scenarios: determining where crime is likely to occur, whether a defendant ought to be detained before trial, what sentence a convicted criminal ought to receive, where in prison they ought to live, whether they should receive parole…and so on. For instance, back in 2016, ProPublica detailed an algorithmic tool called “COMPAS” (made by the company Northpointe, which has since renamed itself equivant) that states across the US were using to predict a defendant’s risk of re-offending (the fancy word for that is “recidivism”). The algorithm took in a defendant’s answers to a 137-question questionnaire (questions included things like “Was one of your parents ever sent to jail or prison?” and “How often did you get into fights while at school?”) and spit out a risk score, a number from 1 to 10. Numbers closer to 1 signaled a lower predicted risk of re-offending, while numbers closer to 10 signaled a higher predicted risk. Judges would then use the risk score to inform the crucial decisions I’ve cited above. It’s important to note – this is not just happening in the US. In Spain, local authorities have used a predictive algorithm, called VioGén, to predict how likely a domestic abuse victim is to be abused again – and accordingly, what kind of protection to provide. Like COMPAS, VioGén takes in answers to a number of questions and spits out a score, which authorities can then lean on in their assessment.
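If it helps to see what “questionnaire in, risk score out” actually means, here’s a tiny Python sketch of that kind of pipeline. To be clear: COMPAS’s real 137 questions and scoring model are proprietary, so everything below – the questions, the weights, the scoring rule – is invented purely for illustration.

```python
# A deliberately simplified, HYPOTHETICAL risk-score pipeline.
# COMPAS's real questionnaire and model are proprietary; the
# questions and weights below are made up for illustration only.

HYPOTHETICAL_QUESTIONS = {
    "parent_incarcerated": 2.0,  # "Was one of your parents ever sent to jail or prison?"
    "school_fights": 1.5,        # "How often did you get into fights while at school?"
    "prior_arrests": 3.0,        # an invented third question, for illustration
}

def risk_score(answers: dict[str, float]) -> int:
    """Map questionnaire answers (each 0.0-1.0) to a 1-10 risk score."""
    raw = sum(HYPOTHETICAL_QUESTIONS[q] * answers.get(q, 0.0)
              for q in HYPOTHETICAL_QUESTIONS)
    max_raw = sum(HYPOTHETICAL_QUESTIONS.values())
    # Scale the weighted sum onto the 1-10 band a judge would see.
    return 1 + round(9 * raw / max_raw)

# Example: a defendant whose answers put them mid-range.
print(risk_score({"parent_incarcerated": 1.0, "school_fights": 0.5}))  # -> 5
```

The key point the sketch illustrates: a handful of life-circumstance answers gets compressed into a single number, and it’s that number – not the underlying answers – that ends up in front of the judge.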

So that’s what algorithms in the criminal justice system look like. Why do some people favor them? Well, their makers argue that by drawing on data about which kinds of people were, say, re-arrested in the past, algorithms can offer more accurate insights about whether a given defendant is likely to be arrested again, too. They also note that human decision-making in the criminal justice system has historically been very faulty; judges, for instance, have been found to issue harsher rulings when they’re hungry. (I’m not kidding! It’s called the “hungry judge effect.”) And on that note, these folks also point out that with the US locking up tons and tons of people – arguably, way too many – and often getting the call wrong, computers may have some value to add.

But though some favor the use of algorithms in the criminal justice system, many, many more have argued against them – and for good reason. Just take the COMPAS tool: in their reporting, ProPublica found that, in fact, the tool was “remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” And when taking into account all crime, the tool was only a bit more accurate than a coin flip. Perhaps most concerning were the racial disparities: ProPublica found that “the formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.” That, of course, is simply unacceptable.

Why does this happen? Well, even algorithms that don’t explicitly ask for information like “race” are bound to make de facto decisions based on race when they ask for information that is highly correlated with race, like income or education level. Moreover, much of the data these tools use is itself biased. Just think about it – that data comes from biased human decision-making: possibly, say, racist or sexist judges. (That’s not to say that all judges are racist or sexist, just that those biases sadly do show up in the criminal justice system.) So if you train an algorithm on that data…the algorithm will also be biased! No doubt, these algorithms are far from perfect – and that’s why so many people think they’re “bad” and ought not to be used. Indeed, as I mentioned earlier in this post, many scholars, technologists, and members of the public are now asking: should we be using algorithms in the criminal justice system at all? And if so, when? What kind of testing should we put these algorithms through? These are important questions, with evolving answers.
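If you’re curious how “bias without a race column” works under the hood, here’s a tiny Python sketch. Everything in it is fabricated for illustration – the data, the single feature, and the threshold rule – and it is not how COMPAS or VioGén actually work. The sketch’s only input is a made-up “neighborhood arrest rate” that, by assumption, is higher for group A because of historical over-policing. That one correlated proxy is enough to produce very different false positive rates, even though both groups re-offend at exactly the same rate.

```python
# A minimal, HYPOTHETICAL illustration of "proxy bias": a model that
# never sees race can still flag groups at very different rates when
# its input feature encodes historically biased policing data.
# All data below is fabricated for illustration.

import random
random.seed(0)

def make_person(group: str) -> dict:
    # Assumption: over-policing means group "A" lives in neighborhoods
    # with higher *recorded* arrest rates, regardless of behavior.
    base = 0.6 if group == "A" else 0.3
    return {
        "group": group,
        "neighborhood_arrest_rate": min(1.0, random.gauss(base, 0.1)),
        "reoffended": random.random() < 0.3,  # same true rate in both groups
    }

people = ([make_person("A") for _ in range(1000)]
          + [make_person("B") for _ in range(1000)])

def predicted_high_risk(p: dict) -> bool:
    # The "model": flag anyone above a fixed proxy threshold.
    # Note: it never looks at p["group"].
    return p["neighborhood_arrest_rate"] > 0.45

# False positive rate: flagged high-risk, but did NOT re-offend.
for g in ("A", "B"):
    did_not = [p for p in people if p["group"] == g and not p["reoffended"]]
    fpr = sum(predicted_high_risk(p) for p in did_not) / len(did_not)
    print(f"Group {g} false positive rate: {fpr:.0%}")
```

Run it, and group A’s false positive rate comes out many times higher than group B’s – a deliberately exaggerated version of the disparity ProPublica documented in real COMPAS scores.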

Hopefully, that gave you a helpful, clear introduction to the way algorithms are being deployed in the criminal justice system. Once again, this is just an introduction to this topic – and if you’re interested, I’d strongly encourage you to do some additional research (there’s a lot to learn here). And no doubt, this issue will evolve and continue to garner a lot of attention, so keep your eyes peeled for news…and do let me know if you have any other algorithm-related questions on your mind! Or, really, any questions at all. That’s right: whatever internet/tech-related issues you’re wondering about, I’d love to hear from you. Please go ahead and share your thoughts here. Thank you in advance for contributing!

Have a great week,

Trish

@asktrish

This week, Trish dives into an incredibly important topic: algorithms in the criminal justice system. Algorithms are used primarily as a predictive tool in the criminal justice system, across a wide range of scenarios — everything from recidivism risk to sentencing. Some argue that these algorithms are fairer than humans, but others note that algorithms have been found to be incredibly faulty and biased. Trish provides readers a full overview in this week’s post — link in bio ⬆️⬆️ #criminaljustice #algorithm #ai #tech

