Durham Police to use AI to predict future crimes of suspects, despite racial bias concerns

Recent research found that artificially intelligent systems are being taught to be prejudiced by learning from humans

Aatif Sulleyman
Friday 12 May 2017 16:20 BST

Police officers in Durham will soon use artificial intelligence to determine whether a suspect should be kept in custody or released on bail.

The system, which is called the Harm Assessment Risk Tool (Hart), has been trained using Durham Constabulary data collected from 2008 to 2013, and will also consider a suspect’s gender and postcode.

It is designed to help officers assess how risky it would be to release suspects.

Hart will be used in an “advisory” capacity, according to the BBC.

It was developed with academics from the University of Cambridge and has been built to err on the side of caution, lowering the risk that it recommends the early release of potentially dangerous suspects.

Hart is therefore more likely to classify a suspect as medium- or high-risk than as low-risk, a tendency reflected in the results of tests conducted in 2013.

It was accurate 98 per cent of the time when it classified a suspect as “low-risk”, and 88 per cent of the time when it classified a suspect as “high-risk”.
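The article does not describe Hart's internals, but a caution bias of this kind is typically implemented by lowering the probability threshold at which a case is flagged as high-risk. The sketch below is a hypothetical illustration in Python: the model, the features (including encoded gender and postcode, which the article says Hart considers) and the thresholds are all assumptions, not Durham Constabulary's actual system.

```python
# Hypothetical sketch of a "cautious" risk classifier; HART's real
# model, features and thresholds are not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Made-up training data: each row stands for an encoded suspect record
# (e.g. age, prior arrests, encoded gender, encoded postcode district).
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)  # 1 = reoffended within the follow-up period

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

def classify(record, high_threshold=0.3, medium_threshold=0.1):
    """Err on the side of caution: flag 'high-risk' at a probability
    well below 0.5, so borderline cases are not recommended for release."""
    p_reoffend = model.predict_proba([record])[0, 1]
    if p_reoffend >= high_threshold:
        return "high-risk"
    if p_reoffend >= medium_threshold:
        return "medium-risk"
    return "low-risk"

print(classify(rng.random(4)))
```

Setting the high-risk threshold well below 0.5 trades more false alarms for fewer missed dangerous suspects, which would explain both the tendency towards medium- and high-risk classifications and the gap between the two accuracy figures above.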

“I imagine in the next two to three months we’ll probably make it a live tool to support officers’ decision making,” Sheena Urwin, the head of criminal justice at Durham Constabulary, told the BBC.

While the system could prove useful, there are fears that it could also be seriously flawed.

A recent report found that artificially intelligent systems are being taught to be prejudiced by learning from humans.

An investigation by ProPublica last year into a separate algorithm, used by US authorities to predict how likely a suspect is to commit future crimes, also found issues.

According to the report, the algorithm was twice as likely to incorrectly flag black suspects as future criminals as it was white suspects, while white suspects were incorrectly classified as low-risk more often than black suspects.

“Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No,” reads the report.
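The disparity ProPublica described amounts to a difference in error rates between groups: the false positive rate (people wrongly flagged as future criminals) and the false negative rate (reoffenders wrongly classified as low-risk). The sketch below uses entirely made-up data purely to show how those per-group rates are computed.

```python
# Illustration with fabricated data: per-group false positive and
# false negative rates of a binary risk flag.
import numpy as np

actual    = np.array([0, 0, 1, 1, 0, 1, 0, 0])  # 1 = actually reoffended
predicted = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # 1 = flagged high-risk
group     = np.array(["black", "black", "black", "black",
                      "white", "white", "white", "white"])

for g in ("black", "white"):
    mask = group == g
    # False positive rate: share of non-reoffenders who were flagged.
    fpr = predicted[mask][actual[mask] == 0].mean()
    # False negative rate: share of reoffenders who were not flagged.
    fnr = (1 - predicted[mask][actual[mask] == 1]).mean()
    print(f"{g}: FPR {fpr:.2f}, FNR {fnr:.2f}")
```

Equal overall accuracy does not guarantee these rates are equal across groups, which is why the report's finding could not be explained away by prior or current charges.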

Hart will not be able to accurately risk-assess suspects whose criminal history comes from beyond Durham Constabulary’s jurisdiction, since the system was trained only on local data.

Its creators believe they have mitigated the associated risks, and an auditing system that explains how the tool reached each decision will also be available if required.
