Ken Ecott

UK police forces adopt crime prediction software. But is it biased?


Fourteen UK police forces have made use of crime-prediction software or plan to do so.

Police in the UK are starting to use futuristic technology that allows them to predict where and when crime will happen, and deploy officers to prevent it, research has revealed. “Predictive crime mapping” may sound like the plot of a far-fetched film, but it is already widely in use across the US, and a growing number of UK forces are now adopting it.

The technology uses machine-learning algorithms that predict when and where a crime might take place by processing records of previous criminal activity.
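To make the idea concrete, here is a minimal, hypothetical sketch of location-based prediction in its simplest form: grid the city, count recent recorded incidents in each cell and flag the busiest cells for extra patrols. Commercial systems use far more sophisticated statistical models; the coordinates and cell size below are invented purely for illustration.

```python
from collections import Counter

# Illustrative sketch only: commercial tools use more sophisticated models.
# Each incident is a (latitude, longitude) pair taken from historical records.
CELL_SIZE = 0.005  # grid cell size in degrees; an arbitrary choice here

def to_cell(lat, lon):
    """Snap a coordinate to a grid cell so incidents can be counted per cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hotspots(incidents, top_n=3):
    """Return the top_n grid cells with the most recorded incidents."""
    counts = Counter(to_cell(lat, lon) for lat, lon in incidents)
    return counts.most_common(top_n)

# Hypothetical recorded incidents
history = [(51.4545, -2.5879), (51.4550, -2.5885), (51.4549, -2.5870),
           (51.4600, -2.6000), (51.4545, -2.5881)]

for cell, count in hotspots(history):
    print(f"cell {cell}: {count} recorded incidents -> candidate for extra patrols")
```

Everything the sketch “predicts” comes straight from what was recorded in the past, which is also where the questions about bias begin.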

California-based PredPol struck a contract with the Los Angeles Police Department in 2011, and the department saw significant decreases in property crime within six months of using the company’s technology.

PredPol claims its research has found the software to be twice as accurate as human analysts when it comes to predicting where crimes will happen. No independent study, however, has confirmed those results.

Predictive policing is built around algorithms that identify potential crime hotspots. (PredPol)

 

The human rights group Liberty said it had sent 90 Freedom of Information requests last year to discover which forces used the technology.

It believes the programs involved can lead to biased policing strategies that unfairly focus on ethnic minorities and lower-income communities.

And it said there had been a "severe lack of transparency" about the matter.

Defenders of the technology say it can provide new insights into gun and knife crime, sex trafficking and other potentially life-threatening offences at a time when police budgets are under pressure.

One of the named forces - Avon and Somerset Police - said it had invited members of the press in to see the Qlik system it used in action, to raise public awareness.

"We make every effort to prevent bias in data models," said a spokeswoman.

"For this reason the data... does not include ethnicity, gender, address location or demographics."

But Liberty said the technologies lacked proper oversight, and that there was no clear evidence they had led to safer communities.

"These opaque computer programs use algorithms to analyse hordes of biased police data, identifying patterns and embedding an approach to policing which relies on discriminatory profiling," its report said.

"[They] entrench pre-existing inequalities while being disguised as cost-effective innovations."

The American Civil Liberties Union [ACLU], the Brennan Center for Justice and various civil rights organisations have all raised questions about the risk of bias being baked into the software.

Historical data from police practices, critics contend, can create a feedback loop through which algorithms make decisions that both reflect and reinforce attitudes about which neighbourhoods are “bad” and which are “good”. That is why AI based primarily on arrest data carries a higher risk of bias: it reflects police decisions rather than the crimes that were actually reported.
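A toy simulation, not drawn from any of the systems named here and with every figure invented, shows how such a feedback loop can take hold. Two neighbourhoods have identical underlying offending rates, but one starts with more recorded arrests because it was patrolled more heavily in the past; patrols are then allocated in proportion to the arrest record.

```python
import random

random.seed(0)

# All figures are hypothetical. Areas A and B have the SAME underlying rate,
# but A starts with more recorded arrests because it was patrolled more.
TRUE_RATE = {"A": 0.10, "B": 0.10}
recorded_arrests = {"A": 60, "B": 40}

for year in range(1, 6):
    total = sum(recorded_arrests.values())
    # "Predictive" step: 100 patrols split in proportion to past arrests.
    patrols = {area: round(100 * recorded_arrests[area] / total)
               for area in recorded_arrests}
    for area, n in patrols.items():
        # More patrols in an area means more of its (identical) crime is recorded.
        recorded_arrests[area] += sum(random.random() < TRUE_RATE[area]
                                      for _ in range(n))
    shares = {area: round(100 * recorded_arrests[area] / sum(recorded_arrests.values()))
              for area in recorded_arrests}
    print(f"year {year}: share of recorded arrests {shares}")
```

Even though the two areas are identical, the model never learns that: it only ever sees what the patrols record, so the initial disparity is simply carried forward year after year.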

Racial inequalities in the criminal justice system in England and Wales were highlighted in a recent report written by the Labour MP David Lammy at the request of the prime minister.

Predictive software

Liberty's report focuses on two types of software, which are sometimes used side-by-side.

The first is "predictive mapping", in which crime "hotspots" are mapped out, leading to more patrols in the area.

The second is called "individual risk assessment", which attempts to predict how likely an individual is to commit an offence or be a victim of a crime.

A screenshot of the adapted predictive crime hotspot mapping software that was used by Kent Police

 

Companies that develop such applications include IBM, Microsoft, PredPol and Palantir, and there are also efforts to create bespoke solutions.

"When you boil down what the software is actually doing, it comes down to two things: your age and number of prior convictions,” said Seena Fazel, a professor of forensic psychiatry at the University of Oxford. “If you are young and have a lot of prior convictions you are high risk.

The aim is to use machine-learning techniques to calculate a risk score for each individual, indicating how likely they are to commit a crime in the future.
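As a rough illustration of the kind of scoring Prof Fazel describes, and only an illustration, since the real tools, their features and their training data are not public in this form, a simple logistic regression fitted to the two factors he names, age and prior convictions, might look like the sketch below. Every number in it is made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [age, number of prior convictions] and whether
# the person went on to reoffend (1) or not (0). Entirely invented figures.
X = np.array([
    [19, 6], [22, 4], [35, 1], [48, 0], [27, 3],
    [55, 0], [31, 2], [20, 5], [42, 1], [24, 7],
])
y = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a hypothetical 23-year-old with three prior convictions.
risk = model.predict_proba([[23, 3]])[0, 1]
print(f"estimated risk score: {risk:.0%}")
```

As Prof Fazel’s comment suggests, being younger and having more prior convictions pushes the score up, and whatever biases shaped the underlying conviction records are inherited by the model.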

In addition, the police hope to use the system to identify which members of their own workforce may need support, with the aim of reducing sickness.

West Midlands Police is leading the effort. The others involved include the Metropolitan Police, Greater Manchester Police, Merseyside Police, West Yorkshire Police, and Warwickshire and West Mercia Police.

"We want to see analytics being used to justify investment in social mobility in this time of harmful austerity, addressing deep-rooted inequalities and helping to prevent crime," said Tom McNeil, strategic adviser to the effort.

"To support this we have appointed a diverse ethics panel placing human rights at the centre."

Met Police Control Room, New Scotland Yard, London

However, a report by the Alan Turing Institute - which was commissioned by the police - raised concerns that those involved had been too vague about how they planned to address the risks involved.

“A tool can help police officers make good decisions,” said Daniel Neill, the creator of CrimeScan. “I don’t believe machines should be making decisions. They should be used for decision support.”

Who’s accountable?

While there has been a push to make developers more cognisant of the possible repercussions of their algorithms, others point out that public agencies and companies reliant on AI also need to be accountable.

A government agency using an AI algorithm bears the greatest responsibility, and it needs to understand the technology too. If you cannot understand the technology, you should not be able to use it.

 

