Apple’s new child safety measures explained

A poorly worded article on Apple’s website is causing a great deal of confusion about encryption and privacy. Understandably, users are worried that, however clever the feature Apple is introducing may be, it could be put to more sinister uses.

The feature in question gives Apple the ability to ensure that child sexual abuse material (CSAM) isn’t shared or stored on its devices and services.

An update containing this feature will be delivered in iOS 15, iPadOS 15, watchOS 8 and macOS Monterey, all planned for release later this year.

There are a few aspects to this update, but the one causing a ruckus is Apple’s ability to scan your images for CSAM. That sounds a lot scarier than it is, and the devil is in the details.

Apple says it has designed CSAM Detection with user privacy in mind, so how does it scan your device without breaching that privacy?

Well, Apple isn’t going to be looking at photos. Instead it will be looking at image hashes, using a trio of technologies: NeuralHash, Private Set Intersection and Threshold Secret Sharing.

NeuralHash is a “perceptual hashing function that maps images to numbers”, Apple explains in a technical document.

Rather than mapping individual pixels, perceptual hashing uses numbers to convey the features of an image regardless of its colour, resolution or quality. In the image below you can see that the same image has the same hash despite one version being greyscaled.

An example of how NeuralHash hashes images: the same image produces the same hash regardless of quality, colour or resolution, because the hash is based on image features.
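
To get a feel for what a perceptual hash does, here is a toy sketch in Python. It is not NeuralHash (which is built around a neural network Apple hasn’t published); it uses a classic “average hash” as a stand-in, fingerprinting the coarse brightness structure of a picture so that a greyscaled or downscaled copy produces exactly the same value.

```python
# Toy perceptual hash in the spirit of NeuralHash. This is an "average hash",
# purely for illustration: it captures coarse brightness features, not pixels,
# so colour and resolution changes don't affect the result.

def average_hash(pixels, size=8):
    """Hash a greyscale image (2D list of 0-255 values) to a 64-bit integer."""
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            ys = range(by * h // size, (by + 1) * h // size)
            xs = range(bx * w // size, (bx + 1) * w // size)
            vals = [pixels[y][x] for y in ys for x in xs]
            blocks.append(sum(vals) / len(vals))     # average brightness per block
    mean = sum(blocks) / len(blocks)
    bits = 0
    for v in blocks:                                 # one bit per block:
        bits = (bits << 1) | (1 if v > mean else 0)  # brighter than average?
    return bits

def to_grey(rgb):
    """Collapse an RGB image (2D list of (r, g, b) tuples) to greyscale."""
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in rgb]

# A synthetic 32x32 "photo": bright left half, dark right half.
colour = [[(200, 200, 200) if x < 16 else (30, 30, 30) for x in range(32)]
          for _ in range(32)]
grey = to_grey(colour)
small = [[grey[2 * y][2 * x] for x in range(16)] for y in range(16)]  # half size

print(hex(average_hash(grey)))                     # the fingerprint
print(average_hash(grey) == average_hash(small))   # True: downscaling doesn't change it
```

A real perceptual hash is far more robust to compression, re-encoding and small edits, but the principle is the same: images that look alike map to the same number, and images that don’t, don’t.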

Using this technology, Apple, together with the National Center for Missing and Exploited Children (NCMEC) and other child safety organisations, has created a database of hashes of known CSAM content.

This is the point where Private Set Intersection enters the mix.

“First, Apple receives the NeuralHashes corresponding to known CSAM from the above child-safety organizations. Next, these NeuralHashes go through a series of transformations that includes a final blinding step, powered by elliptic curve cryptography. The blinding is done using a server-side blinding secret, known only to Apple. The blinded CSAM hashes are placed in a hash table, where the position in the hash table is purely a function of the NeuralHash of the CSAM image. This blinded database is securely stored on users’ devices. The properties of elliptic curve cryptography ensure that no device can infer anything about the underlying CSAM image hashes from the blinded database,” Apple explains.
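
That passage is easier to digest in code. The sketch below mimics the blinding step with ordinary modular exponentiation standing in for elliptic-curve point multiplication (both amount to a one-way scaling by a secret only Apple’s servers hold). The hash values, table size and group parameters are all made up for illustration; this is the general technique, not Apple’s exact construction.

```python
# Toy blinded hash table. Modular exponentiation stands in for elliptic-curve
# scalar multiplication; everything here is illustrative, not Apple's protocol.
import hashlib
import secrets

P = 2 ** 127 - 1      # a Mersenne prime; toy-sized group modulus, demo only
TABLE_SIZE = 1024     # made-up table size

def hash_to_group(neural_hash: int) -> int:
    """Map a NeuralHash-style integer to an element of the group."""
    digest = hashlib.sha256(neural_hash.to_bytes(16, "big")).digest()
    return int.from_bytes(digest, "big") % P

def build_blinded_table(known_hashes, blinding_secret):
    """Server side: blind each known hash and file it under a position that
    depends only on the NeuralHash itself (collisions ignored for brevity)."""
    table = {}
    for nh in known_hashes:
        position = nh % TABLE_SIZE
        table[position] = pow(hash_to_group(nh), blinding_secret, P)  # one-way blinding
    return table

blinding_secret = secrets.randbelow(P - 2) + 2                # never leaves the server
known_csam_hashes = [0xF0F0F0F0F0F0F0F0, 0x123456789ABCDEF0]  # made-up values
blinded_table = build_blinded_table(known_csam_hashes, blinding_secret)

# This is what ships to the device: without the blinding secret, a device can
# neither recover the original hashes nor test a guess against the entries.
print(blinded_table)
```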

When a user then uploads an image to iCloud Photos, Private Set Intersection kicks in and determines whether the image’s hash matches the known CSAM hashes without revealing the result. Instead it creates a “cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image”.
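
The Diffie-Hellman-style trick behind that step can be sketched in the same toy setup (again, the general idea, not Apple’s exact protocol). The device derives the key that encrypts a voucher from the blinded table entry; the server, using its blinding secret, can re-derive that key only if the image’s hash really does match the known hash at that position. Neither side learns anything else along the way.

```python
# Toy private-set-intersection step, continuing the blinding sketch above but
# self-contained. Modular arithmetic again stands in for elliptic curves.
import hashlib
import secrets

P = 2 ** 127 - 1

def hash_to_group(nh: int) -> int:
    return int.from_bytes(hashlib.sha256(nh.to_bytes(16, "big")).digest(), "big") % P

def kdf(element: int) -> bytes:
    """Derive a symmetric key from a group element."""
    return hashlib.sha256(element.to_bytes(32, "big")).digest()

# Server setup: one known hash, blinded with a secret only the server holds.
blinding_secret = secrets.randbelow(P - 2) + 2
known_hash = 0xF0F0F0F0F0F0F0F0                        # made-up "known CSAM" hash
blinded_entry = pow(hash_to_group(known_hash), blinding_secret, P)

def device_make_voucher(image_hash: int):
    """Device side: derive a voucher key without learning the match result."""
    r = secrets.randbelow(P - 2) + 2
    header = pow(hash_to_group(image_hash), r, P)      # travels with the voucher
    key = kdf(pow(blinded_entry, r, P))                # encrypts the voucher payload
    return header, key

def server_recover_key(header: int) -> bytes:
    """Server side: recovers the same key only if the hashes matched."""
    return kdf(pow(header, blinding_secret, P))

header, device_key = device_make_voucher(known_hash)
print(server_recover_key(header) == device_key)        # True: a match, payload readable

header, device_key = device_make_voucher(0x1111111111111111)
print(server_recover_key(header) == device_key)        # False: no match, payload stays sealed
```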

Finally, Threshold Secret Sharing ensures that Apple can’t see the contents of the safety vouchers unless an account crosses a threshold of known CSAM matches.

“The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account,” says Apple.

“Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated,” adds the Cupertino firm.
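
The threshold part is textbook secret sharing: think of the key that unlocks a user’s voucher contents as being cut into pieces, one piece per matching voucher, such that any “threshold” number of pieces reconstructs it and any fewer reveals nothing at all. A minimal Shamir secret-sharing sketch shows the idea; the field size, threshold and account key below are illustrative, since Apple hasn’t published its exact parameters.

```python
# Minimal Shamir threshold secret sharing, purely to illustrate the idea.
import secrets

PRIME = 2 ** 127 - 1   # field for the polynomial arithmetic (toy-sized)

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

account_key = secrets.randbelow(PRIME)   # the key that unlocks voucher payloads
threshold = 10                           # illustrative; Apple hasn't published its value

# One share would be embedded in each safety voucher that matched known CSAM.
shares = make_shares(account_key, threshold, count=30)

print(recover(shares[:10]) == account_key)   # True: threshold reached, key recovered
print(recover(shares[:9]) == account_key)    # almost certainly False: below threshold,
                                             # nine shares reveal nothing about the key
```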

Now, the fear is that this sort of technology could be used for nefarious purposes, and yes, it could be, but until that happens we can’t say for sure that it will be. To be absolutely clear, Apple isn’t looking for photos parents share with each other of their kids in the bath; it is looking for known CSAM content based on a database provided by child protection organisations.

That’s an important distinction we feel Apple didn’t make clear in its public-facing article, though it did in the technical document that few folks will read.

Apple will also be adding new tools that warn parents and children when they send or receive sexually explicit photos.

“When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it,” writes Apple.

Messages are analysed using on-device machine learning to determine whether an image is sexually explicit, and Apple doesn’t get access to the messages.
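
The decision logic Apple describes is simple enough to sketch, though everything below is hypothetical: Apple hasn’t said how the classifier itself works, only that the analysis happens on the device and that message content never reaches its servers.

```python
# Hypothetical sketch of the Messages flow described above. The classifier is
# a placeholder; nothing in this function sends anything off the device.
from typing import Callable

def handle_incoming_photo(photo: bytes,
                          looks_explicit: Callable[[bytes], bool],
                          child_account: bool,
                          parental_alerts_enabled: bool) -> dict:
    """Decide what the Messages UI should do with a received photo."""
    if not child_account or not looks_explicit(photo):
        return {"display": "normal"}
    return {
        "display": "blurred",                        # blur the photo by default
        "warn_child": True,                          # show resources and reassurance
        "notify_parents_if_viewed": parental_alerts_enabled,
    }

# Exercising the flow with a dummy classifier that flags every image:
print(handle_incoming_photo(b"\x89PNG...", lambda _: True,
                            child_account=True, parental_alerts_enabled=True))
```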

We’re rather impressed with how Apple has brought this solution about, but how successful it is will depend on how it is used in the wild.

One thing we know for sure is that Silicon Valley needs more people who can explain things like this in the simplest terms.
