Researchers say police should be banned from using facial recognition

  • Scrutiny of facial recognition software and its use by law enforcement has increased in recent years.
  • Researchers at the University of Cambridge recently published a study examining how the technology is applied by police in the UK.
  • The study concludes that live facial recognition (LFR) fails to meet ethical standards when used by police.

In recent years facial recognition software has entered the zeitgeist, particularly where its use by law enforcement is concerned. So much so that the likes of Amazon have restricted police from using their software for such use cases.

With policing in general coming under scrutiny for targeting marginalised segments of society, a new study published by researchers at the University of Cambridge does little to ease concerns.

This comes as the study found that the use of live facial recognition (LFR) by police in the UK fails to meet minimum legal and ethical requirements.

“Researchers constructed the audit tool based on current legal guidelines – including the UK’s Data Protection and Equality acts – as well as outcomes from UK court cases and feedback from civil society organisations and the Information Commissioner’s Office. They applied their ethical and legal standards to three uses of facial recognition technology (FRT) by UK police,” a press release regarding the study explains.

Perhaps unsurprisingly, the study found that the mechanisms employed by police when it comes to LFR are difficult to pin down.

“In all three cases, they found that important information about police use of FRT is ‘kept from view’, including scant demographic data published on arrests or other outcomes, making it difficult to evaluate whether the tools ‘perpetuate racial profiling’ say researchers,” according to the release.

“In addition to lack of transparency, the researchers found little in the way of accountability – with no clear recourse for people or communities negatively affected by police use, or misuse, of the tech,” it adds.

Here the researchers explain that no one is policing the police when it comes to the application of this technology, which can be quite invasive. “Police forces are not necessarily answerable or held responsible for harms caused by facial recognition technology,” noted the study’s lead author, Evani Radiya-Dixit.

This is an issue that will only grow more pressing, as the researchers point out that as many as 10 known police forces in England and Wales are making use of these technologies.

It is why studies like this, as well as the audit tools created by the Cambridge researchers, will prove important in the coming years, especially as more police in the UK and other parts of the globe turn to facial recognition technology.

“Over the last few years, police forces around the world, including in England and Wales, have deployed facial recognition technologies. Our goal was to assess whether these deployments used known practices for the safe and ethical use of these technologies,” explained Professor Gina Neff, executive director at the Minderoo Centre for Technology and Democracy.

“Building a unique audit system enabled us to examine the issues of privacy, equality, accountability, and oversight that should accompany any use of such technologies by the police,” Neff emphasised.

As Engadget points out, as citizens’ fear of constant surveillance by the police continues to grow, it is time for legislation to be developed to ensure facial recognition software is used ethically and not abused.

[Image – Photo by Danny Lines on Unsplash]
