On-Device CSAM Scanning For iCloud Photos

Apple:

Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.

To help address this, new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC).

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations.

The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
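Before getting into opinions, here is roughly what that quoted mechanism amounts to, as a minimal sketch only. The real system uses a perceptual hash (NeuralHash) and threshold secret sharing rather than a plain counter, and every name below (`perceptualHash`, `loadKnownHashDatabase`, `matchThreshold`, `SafetyVoucher`) is a hypothetical stand-in I've invented for illustration, not Apple's actual API or implementation.

```swift
import Foundation
import CryptoKit

// Hypothetical stand-in for the on-device perceptual hash. SHA-256 is used here
// only so the sketch compiles; a real perceptual hash tolerates resizing and
// re-encoding, which a cryptographic hash does not.
func perceptualHash(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Placeholder for the database of known-CSAM hashes that ships inside the OS.
func loadKnownHashDatabase() -> Set<String> {
    [] // empty here; the real database is provided by NCMEC and is unreadable to users
}

let knownHashes = loadKnownHashDatabase()

// Assumed threshold of matches before anything becomes interpretable by Apple.
let matchThreshold = 30

// A voucher recording a match; in the real system its contents stay
// cryptographically opaque until the threshold is crossed.
struct SafetyVoucher {
    let imageIdentifier: String
}

var pendingVouchers: [SafetyVoucher] = []

func processUpload(imageData: Data, identifier: String) {
    // On-device check: does this photo's hash appear in the known database?
    guard knownHashes.contains(perceptualHash(of: imageData)) else { return }

    pendingVouchers.append(SafetyVoucher(imageIdentifier: identifier))

    // Only past the threshold does human review (and, in the real system,
    // decryption of the vouchers) become possible at all.
    if pendingVouchers.count >= matchThreshold {
        escalateForHumanReview(pendingVouchers)
    }
}

func escalateForHumanReview(_ vouchers: [SafetyVoucher]) {
    // Stand-in for Apple's manual review and onward reporting to NCMEC.
    print("Threshold reached: \(vouchers.count) matches queued for review")
}
```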

The naysayers of the last week are not necessarily wrong. This issue is nuanced and Apple's decisions involve concessions. Personally, I think Apple has done well here. They probably could have handled the communication surrounding the announcement better, but the actual functionality and policy decisions are reasonable.

In a world where Apple is isolated from all external pressures, I don't think they would have done this at all. Ideally, Apple would like your device and your data to be sacrosanct. Impenetrable. That fits with their business model, it fits with their morals, and it fits with their marketing.

However big Apple is, they still have to conform to government expectations and the law. That's why, in China, they let a third-party Chinese company run the iCloud data centre. They can make it as secure as possible under that arrangement, but it's still a compromise from the ideal situation, where the data centres are managed by Apple themselves (which is how it works in every other region of the world).

In the US, big tech companies are expected to help governments track and trace child abuse content on their platforms. Apple runs a big platform full of user-generated content (iMessage and iCloud), yet their contribution to this specific cause has been disproportionately small, infinitesimal even. Facebook reports something like 20 million instances of CSAM a year to NCMEC. In the same timeframe, Apple barely crossed the 200 mark.

So, this is the underlying motivation for these new policies. Big government bodies want Apple to help track down the spread of CSAM on Apple devices, in just the same way that other big cloud companies like Google, Amazon, and Facebook comply by scanning all content uploaded to their platforms.

You have to assume that privacy concerns are a key reason why Apple has historically been so lax in this department. It's not that Apple has sympathy for the people spreading child pornography. Why right now? That is still unclear. Perhaps, behind closed doors, someone was threatening lawsuits or similar action if Apple didn't step up soon. Either way, it's crunch time.

I’m sure governments would welcome a free-for-all backdoor. Of course, Apple isn’t going to let that happen. So, what Apple has delivered is a very small, teensy-tiny window into the content of users’ devices.

The actual system is almost identical to what Google, Amazon, and Facebook do on their servers: attempting to match against a database of hashes provided by NCMEC. Except Apple runs the matching process on device.

I’ve seen a bit of consternation around this. I don’t think it’s a big deal where the matching process happens. Arguably, if it happens on device, security researchers have more visibility into what Apple is doing (or what a hypothetical nefarious actor is doing with the same technology) than if it were taking place on a remote server.

You do basically have to take Apple’s word that it only scans photos destined for iCloud. Sure. But you have to take Apple’s word on a lot of things. A malicious state could secretly compel Apple to do much worse with much less granularity. I don’t trust Apple any more or any less on this than on the myriad other possible ‘backdoors’ that iOS could harbour. The slippery slope argument is a concern, and worth watching out for in the future, but I don’t see anything here that is an obvious pathway to it.

You also have to stay grounded in the baseline case. Right now, if you use iCloud Backup (clarifying again that this new Child Safety stuff only applies if you use iCloud Photos; photos that are only stored inside a backup are exempt), all of your phone’s data is stored on an Apple server somewhere in a form that Apple can read if they so desire. This also means that a government can subpoena Apple to hand over that information. This is not a secret. Apple has done it countless times, purportedly in the presence of a valid warrant, including very publicly in the midst of the PR fiasco that was the 2016 San Bernardino shooter case.

With that in mind, almost all of your phone’s data is already accessible to law enforcement or state actors if they so desire. This new entry point, in the name of Child Safety, pales in comparison to that level of potential access.

One assumption I’ve seen floated around is that Apple wants to roll out an end-to-end encrypted iCloud Backup option in the future. The criticism is that this on-device scanning policy undermines the point of E2E, because the scanner would still be able to “spy” on the data before it was cryptographically sealed. I mean, I guess that’s true to a degree, but I’d still rather have the option of end-to-end backups with a CSAM scanner in place than not have it at all, which is the world we live in today.
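On that point, here is a hedged sketch of how on-device matching and end-to-end encrypted uploads could coexist in principle: the match check runs against the plaintext on the device, and only then is the data sealed with a key the server never holds. `matchesKnownDatabase`, `SealedUpload`, and `prepareUpload` are illustrative names I've made up, not anything Apple has published.

```swift
import Foundation
import CryptoKit

// Hypothetical on-device hash lookup, as sketched earlier in this post.
func matchesKnownDatabase(_ imageData: Data) -> Bool {
    false // placeholder
}

struct SealedUpload {
    let ciphertext: Data        // what the server actually stores
    let flaggedForVoucher: Bool // derived on device, before sealing
}

func prepareUpload(imageData: Data, deviceKey: SymmetricKey) throws -> SealedUpload {
    // 1. Scan while the data is still readable on the device.
    let flagged = matchesKnownDatabase(imageData)

    // 2. Seal end-to-end; the server only ever sees ciphertext.
    let sealedBox = try ChaChaPoly.seal(imageData, using: deviceKey)

    return SealedUpload(ciphertext: sealedBox.combined, flaggedForVoucher: flagged)
}
```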

The weakest link in the chain on the technical side of this infrastructure is the opaqueness of the hashed content database. By design, Apple doesn’t know what the hashes represent, as Apple is not allowed to knowingly traffic in illicit child abuse material. Effectively, the system works on third-party trust. Apple has to trust that the database provided by NCMEC, or whatever partner Apple works with in the future when this feature rolls out internationally, only includes hashes of known CSAM content.

I think there’s a reasonable worry that a government could use this as a way to shuttle other kinds of content detection through the system, by simply providing hashes of images of political activism or democracy or whatever else some dictatorial leader would like to suppress. Apple’s defence against this is that all flagged images are first sent to Apple for human review before being sent on. That feels flimsy.

My suggestion would be that all flagged images are reported to the user. That way, the system cannot be misused in secret. This could be built into the software stack itself, such that nothing is sent onward unless the user is notified. In press briefings, Apple has said they don’t want to do this because their privacy policy doesn’t allow them to retain user data, which means a criminal who really is sharing CSAM could simply delete their photo library when alerted. I think tweaks to policy could solve that. For instance, it would be very reasonable for a flagged image to be automatically frozen in iCloud, unable to be deleted by the user, until it has gone through the review process. The additional layer of transparency is beneficial.
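For what it’s worth, here is a minimal sketch of the flow I’m suggesting, with hypothetical names throughout; it is not how Apple’s system is built. The point is simply that notification plus a deletion freeze can be expressed as a small state machine.

```swift
// Sketch of the proposed policy tweak: the user is notified of every flagged
// image, and a flagged image is frozen (undeletable) in iCloud until review
// completes. All names are illustrative.

enum ReviewState {
    case notFlagged
    case frozenPendingReview   // user has been notified, deletion is blocked
    case clearedAfterReview    // false positive: unfrozen, nothing reported
    case confirmedAndReported  // match confirmed, report sent to NCMEC
}

struct CloudPhoto {
    let identifier: String
    var state: ReviewState = .notFlagged
}

func flag(_ photo: inout CloudPhoto, notifyUser: (String) -> Void) {
    photo.state = .frozenPendingReview
    // Transparency: nothing moves onward without the user knowing.
    notifyUser("Photo \(photo.identifier) was flagged and is frozen pending review.")
}

func requestDeletion(of photo: CloudPhoto) -> Bool {
    // Freezing answers Apple's objection that a criminal could simply delete
    // their library when alerted: deletion is refused until review finishes.
    switch photo.state {
    case .frozenPendingReview:
        return false
    case .notFlagged, .clearedAfterReview, .confirmedAndReported:
        return true
    }
}

func completeReview(of photo: inout CloudPhoto, isConfirmedCSAM: Bool) {
    photo.state = isConfirmedCSAM ? .confirmedAndReported : .clearedAfterReview
}
```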