Today Apple SVP Craig Federighi was interviewed by the Wall Street Journal about its new child protection features. The initial explanations offered by Apple were confusing and concerning, and Craig's explanation was at times a little tricky to follow too.

Here's a (hopefully) better explanation of this whole process, with some context:

Your iPhone has a neural engine inside it. The neural engine already scans your photos on device to index your images, which is what makes them searchable. That's why, when you search for a term like 'dogs' or 'trees', the phone is able to bring up the shots you requested. It's also how the Memories feature works. Heck, it's how the Faces feature in iPhoto worked back in the day (just without the benefit of a dedicated neural engine). All of this 'scanning' that people are worried about happens locally, on device. No data is shared with Apple for any of the features described above.

So how does the neural engine do this?

The neural engine runs machine learning models that Apple has trained. Taking the example of a dog: the neural engine knows what a dog looks like because it was trained on many images of dogs. When you take a photo, if the neural engine finds that it matches its 'dog' model, the photo gets indexed and sorted as such. The next time you search your library for 'dogs', the photo you took is surfaced.
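
If you want to see this kind of on-device classification for yourself, Apple's public Vision framework exposes it directly. Here's a minimal sketch; this uses the public API, not the Photos app's private indexing pipeline, but the idea is the same:

```swift
import Vision

// Classify an image entirely on device using Apple's built-in
// Vision classifier. Nothing is sent to a server; Vision decides
// whether the work runs on the neural engine, GPU, or CPU.
func labels(for imageURL: URL) throws -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])          // runs locally
    return (request.results ?? [])
        .filter { $0.confidence > 0.8 }     // keep only confident labels
        .map { $0.identifier }              // e.g. "dog", "tree"
}
```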

And how does this relate to CSAM? 

In the simplest sense, Apple has created an on-device algorithm that looks for known CSAM, but one that is much narrower in scope than the 'dog' model described above. Instead of being trained to recognise a general category, it computes a 'neural hash' of each photo and compares it against hashes representing known CSAM images from a database. This means CSAM that isn't in the database won't be detected, because the algorithm doesn't recognise a match. At this stage in the process, nothing changes for you as the user, and your privacy has not been affected one bit. If you've been happily using Memories, or searching your library for shots from that nice family trip to Disney World, understand this: those features work in the same way that Apple now detects CSAM on device.

User saves photo > photo is run through the neural engine on device > photo gets categorised and indexed as 'dog' > user is able to search their library at will for photos of dogs and find the photo they just saved
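
The CSAM path mirrors that flow, except the 'match' step is a hash comparison rather than a classifier. Here's a toy sketch of the idea. This is illustrative only: Apple's real NeuralHash is a perceptual hash designed to survive resizing and recompression, and the database hashes are blinded on device.

```swift
// Toy model of matching a photo's hash against a database of known
// hashes. This is NOT Apple's actual NeuralHash, just the shape of it.
struct KnownImageMatcher {
    let knownHashes: Set<[UInt8]>   // hashes derived from the known-image database
    let maxBitsDifferent = 0        // exact match in this toy version

    func matches(photoHash: [UInt8]) -> Bool {
        knownHashes.contains { known in
            guard known.count == photoHash.count else { return false }
            // Hamming distance: count the bits that differ.
            let distance = zip(known, photoHash)
                .map { ($0 ^ $1).nonzeroBitCount }
                .reduce(0, +)
            return distance <= maxBitsDifferent
        }
    }
}
```

Because matching is against specific known hashes, a photo that isn't in the database simply never matches: exactly the 'narrower scope' described above.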

But what about my iCloud Photo Library?

Here is where the CSAM feature differs. If, and only if, the algorithm on your device has detected known CSAM photos, those specific flagged photos get scanned a second time by another algorithm in iCloud as they are uploaded. Those photos, and those photos alone, become viewable by Apple. A human reviewer then checks to make sure the algorithm didn't make a mistake. If it did make a mistake, nothing happens. But if confirmed CSAM content is matched, your account is reported.
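
As a sketch, the upload-side gate behaves something like the code below. This is a simplification: in Apple's actual design the on-device match result is sealed inside an encrypted 'safety voucher' that uploads with every photo, and Apple can only open the vouchers once the match threshold (around 30 images, per Apple) is crossed.

```swift
// Simplified model of the review gate. In reality this is enforced
// cryptographically (threshold secret sharing), not by a plain counter.
struct ReviewGate {
    let threshold = 30              // roughly the figure Apple has cited
    private(set) var matchCount = 0

    // Called per photo at upload time with the on-device match result.
    mutating func shouldEscalate(photoMatchedKnownCSAM: Bool) -> Bool {
        if photoMatchedKnownCSAM { matchCount += 1 }
        // Nothing becomes viewable to a human reviewer until the
        // account crosses the threshold of known-CSAM matches.
        return matchCount >= threshold && photoMatchedKnownCSAM
    }
}
```

The key property is the 'if and only if': photos that never matched on device never become viewable, no matter how many you upload.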

Just to summarise:

1. Do you use Memories or Faces, or ever search your library by location or keyword? All of that happens on device, and Apple never sees your photos. The scanning for CSAM works in exactly the same way, but with a much narrower scope. If you don't like the idea of the system scanning for CSAM then I hate to break it to you, but your phone has been doing on-device machine learning and scanning of your photos for many years.

2. Apple isn't scanning your entire iCloud Photo Library. They're running a second scan only on images that were first detected and flagged by on-device machine learning, and this happens as the image gets uploaded.

Machine learning has been happening on our devices for years, powering many of the features we use in the Photos app. The benefit of keeping it on device is that it's private: no data is processed in the cloud.

The only real thing that has changed with this new system is that if known CSAM photos are detected using on-device matching (a threshold of around 30 images), they are then scanned on upload to iCloud and reviewed by a human.

Nothing about the way your iPhone works has changed. Nothing about how your photos are stored or processed has changed. The only time any processing or scanning of images happens in iCloud is if your iPhone detected CSAM images on device, using the same neural engine that's been running the same way since the iPhone X.

My personal view is this: the technology being used is the most thoughtful and privacy-preserving implementation any company has shipped so far. My only concern is governments trying to force Apple to widen the scope of the on-device algorithm. But I also appreciate that, as Craig pointed out, the security community would be able to determine quickly if Apple lied to us. And Apple has multiple levels of auditing to prevent interference by governments or anyone else.

I think the fact that Apple has been transparent about this and told us is very reassuring. They could have chosen not to, which would have been morally wrong. But if their intent was to start abusing our trust by covertly scanning for other things in the cloud, then telling us upfront about this feature would be a pretty dumb way to start.

Based on everything Apple has done for privacy so far, I'm willing to continue to trust them. They didn't do a good job of explaining this technology, but I do believe they're trying to do what is right, not what is easy. If in the future they go back on their word and widen the scope of the system to detect other things, I'll be the first to call them out on it and call for them to be sued into the ground.

Until then I remain confident in Apple’s commitment to privacy.

Featured image: TheRegisti via Unsplash