Apple says governments won’t be allowed to use CSAM detection for surveillance

Ahead of the long weekend we just enjoyed here in South Africa, Apple detailed a new system it would be implementing to stop the spread of child sexual abuse material (CSAM) on its platforms.

The solution is rather smart and uses hashes of known CSAM images to detect child sexual abuse material as it is uploaded to Apple's servers.
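For a rough sense of what hash-based matching means in practice, here is a minimal, hypothetical sketch in Swift. Apple's actual system relies on a perceptual hash (NeuralHash) and on-device cryptographic matching rather than the plain SHA-256 set lookup shown here, and the hash value and function names below are invented purely for illustration.

```swift
import Foundation
import CryptoKit

// Hypothetical catalogue of hashes of known CSAM images, as supplied by
// child-safety organisations (the value below is a made-up placeholder).
let knownImageHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

// Returns true if the image's hash appears in the catalogue. Only the hash
// is compared; the image content itself is never inspected. A real
// perceptual hash would also tolerate resizing and re-encoding, which a
// plain SHA-256 digest does not.
func matchesKnownHash(imageData: Data) -> Bool {
    let digest = SHA256.hash(data: imageData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownImageHashes.contains(hex)
}
```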

Unfortunately, the limited understanding many folks have of encryption, coupled with Apple's very poorly explained press release, led many to assume that Apple would be constantly monitoring smartphones. This in turn led to fears that, say, parents sharing photos of their kids playing in the bath would end up being accused of some very serious crimes.

Worse still, there were fears that this sort of solution could be used by governments to police their people in extreme ways. That is a justifiable concern: Apple's system simply looks for hashes of known content, so couldn't it be reworked to look for hashes of other kinds of images?

In a bid to allay these fears, Apple published an FAQ about the system at the weekend in which it not only said that its CSAM detection won't be expanded to meet government requests, but also that it has already refused government demands of this sort.

“Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it,” explained Apple.

Apple also clarified that content won't be scanned unless it is being uploaded to iCloud Photos. Once again, the system only matches against a catalogue of known CSAM material, and Apple puts the likelihood of an account being falsely flagged at less than one in one trillion per year.
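As a rough illustration of that gating, the hypothetical sketch below only runs the hash comparison when a photo is actually on its way to iCloud Photos; the function and parameter names are assumptions made for the example, not Apple's API.

```swift
import Foundation

// Placeholder for the comparison against the catalogue of known hashes
// (see the earlier sketch).
func matchesKnownHash(imageData: Data) -> Bool {
    return false
}

// A simplified, hypothetical model of the behaviour Apple describes:
// photos that are not being uploaded to iCloud Photos are never checked,
// and even a match only leads to human review before any report to NCMEC.
func checkBeforeUpload(imageData: Data, uploadingToICloudPhotos: Bool) -> Bool {
    guard uploadingToICloudPhotos else {
        // Nothing is scanned if the photo never leaves the device.
        return false
    }
    return matchesKnownHash(imageData: imageData)
}
```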

“This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities,” Apple added.

We urge Apple users to read through the FAQ, available here as a PDF. We would also recommend reading through the technical details of the new feature, located here, to get a better understanding of the tech before it rolls out as part of iOS 15, iPadOS 15, watchOS 8 and macOS Monterey, planned for release later this year.
