Microsoft doesn’t want police using its AI for facial recognition

  • Microsoft has reiterated its position on generative AI being used by police departments for facial recognition.
  • It has banned the use of its enterprise-focused Azure OpenAI by police departments in the United States.
  • The company has also banned the use of its AI for real-time facial recognition globally.

Much has been made of the use of facial recognition software by law enforcement, with most big tech firms looking to ban its use and potential exploitation. The same holds true for Microsoft, which has reiterated its position when it comes to the use of its AI solutions for facial recognition.

The company this week updated its policies on the matter, per TechCrunch, adding new stipulations for its enterprise-focused Azure AI platform.

Microsoft has confirmed that its generative AI technology is banned for use by any police departments in the United States.

Here it outlined that integrations with Azure AI may not be used for "facial recognition purposes by or for a police department in the United States."

It added that the technology cannot be used for "any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individual[s] in uncontrolled, 'in the wild' environments, which includes (without limitation) police officers on patrol using body-worn or dash-mounted cameras using facial recognition technology to attempt to identify individuals present in a database of suspects or prior inmates."

Along with police departments, Microsoft is looking at Azure AI’s potential use in other scenarios, such as surveillance at an airport or another environment where security is a critical concern.

On this front the company has noted that its technology cannot, "without the individual's valid consent, be used for ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal information, including biometric data."

It is good to see Microsoft reaffirm its stance on the matter, but it remains to be seen how other generative AI players will enforce restrictions on the use of their technology for facial recognition.

AWS, for example, has noted that its facial recognition software should be used in a responsible manner, but has not stated its policy on its use by police departments, nor what kinds of bans are in place.

“In all public safety and law enforcement scenarios, technology like Amazon Rekognition should only be used to narrow the field of potential matches,” AWS has shared. What that means now that it has invested far more in Anthropic is unclear.

Either way, as generative AI becomes more pervasive, its implications for privacy need to be weighed carefully and regulated clearly.

[Image – Photo by Colin Lloyd on Unsplash]
