Apple is taking a big step to scan for CSAM

Apple may launch a controversial new feature as soon as this week: scanning your iCloud photos for child sexual abuse material (CSAM). Intent aside, Apple’s technology and methodology are extremely flexible.

The claims come from Matthew D. Green, an associate professor at the Johns Hopkins Information Security Institute. In a series of tweets, Green discussed Apple’s plans and what they could mean if implemented.

According to Green, numerous sources have confirmed to him that Apple plans to unveil a new scanning tool tomorrow. The “client-side tool” will scan users’ devices for CSAM and send any image it flags to Apple’s servers.

“Your phone will download a database of “fingerprints” for all bad photos (child porn, terrorist recruitment videos, etc.) It will search your phone for matches.”

He says the tool will hunt for CSAM using a perceptual hash. Photos whose hashes match will be sent to Apple’s servers.
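To make the mechanism concrete, here is a minimal Python sketch of client-side fingerprint matching, using a simple difference hash (dHash) as a stand-in for Apple’s proprietary neural hashing method; the function names and fingerprint values are hypothetical, not Apple’s actual API or data.

from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    # Difference hash: shrink to grayscale, then record whether each pixel
    # is brighter than its right-hand neighbour, yielding a 64-bit fingerprint.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

# Hypothetical fingerprint database the device would download; placeholder value only.
KNOWN_FINGERPRINTS = {0x8F3A9C0012BD45E7}

def flag_for_upload(photo_path: str) -> bool:
    # Flag the photo for upload to the server if its hash matches a known fingerprint.
    return dhash(photo_path) in KNOWN_FINGERPRINTS

In this toy version only an exact hash match triggers a flag; the system Green describes is built to tolerate near matches as well, which becomes important below.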

Apple reportedly produced these hashes using “a new and proprietary neural hashing method.” Green also says Apple has clearance from the National Center for Missing & Exploited Children (NCMEC) to use it.

This technology should help catch and eventually stop child pornography distribution and production, at least within the iOS ecosystem.

Green, however, is worried about the enormous implications of such a technology.

Apple will reportedly only scan pictures saved to iCloud, meaning those already shared with Apple. Still, a tool that can scan the photos on your device does not sound very privacy-friendly.

Green agrees. He claims in his tweets that such a client-side scanning technique could be used to search for any image on your device.

The issue is that the user has no idea which hashes or “fingerprints” are being used during scanning. Anyone with access to the system could therefore harvest a specific set of files from your library.

So, how much do you trust Apple? Tomorrow it could just as easily use the same technique to look for something other than CSAM in your photos.

Even if Apple uses it responsibly, these hashes are prone to mismatches. Green notes that they are deliberately “imprecise” so they can capture near matches, i.e. a picture that has been cropped or otherwise altered.
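In practice, near matches are usually handled by comparing hashes with a Hamming distance and accepting anything below a threshold. The sketch below builds on the toy example above; the threshold value is invented, since Apple has not published the tolerance its system uses.

def hamming_distance(a: int, b: int) -> int:
    # Count how many of the 64 bits differ between two hashes.
    return bin(a ^ b).count("1")

def is_near_match(photo_hash: int, fingerprint: int, threshold: int = 10) -> bool:
    # A cropped or re-encoded copy changes a few bits but usually stays
    # under the threshold, so it is still treated as the same image.
    return hamming_distance(photo_hash, fingerprint) <= threshold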

The downside of that tolerance is that the algorithm regularly misidentifies innocent images as harmful. We’ve all seen it on Twitter and other social media sites that hide content because “you’ve chosen not to view it,” only for the image to turn out to be a dog in a field of flowers.

Beyond Apple itself abusing the technology, Green is worried about threat actors. If someone can manufacture collisions, or otherwise trick the algorithm into flagging unrelated pictures, it loses its effectiveness.
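Here is a toy illustration of the collision concern, reusing the Hamming-distance idea from above; every value is invented. The matcher only ever sees hashes, so an unrelated image engineered to land within the threshold of a listed fingerprint is indistinguishable from a genuine hit.

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

listed_fingerprint = 0b1011001011000111  # toy value standing in for a real fingerprint
crafted_hash = 0b1011001011000110        # unrelated image engineered to differ by one bit

print(hamming_distance(crafted_hash, listed_fingerprint) <= 10)  # True: flagged as a "match"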

The most concerning possibility is that Apple may apply the technology to its encrypted services in the future. Globally, law enforcement agencies are already pushing for closer monitoring of encrypted platforms.

If Apple is ever required to use the technology for these or similar purposes within its encrypted services, its privacy arguments will be weakened.
