Experts Say Apple’s Child Porn Detection Tool Is Less Accurate Than Advertised

Privacy advocates say Apple’s child pornography detection tool has a higher false-positive rate than the company claims.

Apple announced earlier this month that it will begin scanning user photos stored on iCloud for images matching a database of known child pornography. The company says the false-positive rate in initial tests of the system was one in a trillion. But researchers have reverse-engineered Apple’s system and shown that it classifies distinct photos as identical. These mistakes cast doubt on Apple’s numbers and suggest iCloud users could be wrongly targeted by law enforcement or malicious actors.

The detection tool marks a shift for Apple. After a 2015 terrorist attack in San Bernardino, Calif., the company refused to build the FBI a backdoor into the shooter’s phone, saying, “The only way to guarantee that such a powerful tool isn’t abused and doesn’t fall into the wrong hands is to never create it.” Experts have warned that authoritarian governments could use the detection tool to hunt down political dissidents.

Apple’s detection system works by comparing “hashes,” compact fingerprints derived from each image, of iCloud photos against hashes of known child pornography. If an account accumulates a certain number of hash matches, Apple employees will review the flagged material and alert authorities.
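The threshold mechanism described above can be illustrated with a minimal sketch. This is not Apple’s actual NeuralHash pipeline; the hash values, the database, and the threshold below are all invented for illustration:

```python
# Hypothetical sketch of threshold-based hash matching. The hash values
# and MATCH_THRESHOLD are made up; Apple's real system uses NeuralHash
# and cryptographic techniques not shown here.

KNOWN_HASHES = {0xA1B2C3D4, 0x5E6F7081}  # stand-in for the known-image database
MATCH_THRESHOLD = 3  # account is flagged only after this many matches

def count_matches(photo_hashes):
    """Count how many uploaded photo hashes appear in the known database."""
    return sum(1 for h in photo_hashes if h in KNOWN_HASHES)

def should_flag(photo_hashes):
    """Flag the account for human review once enough matches accumulate."""
    return count_matches(photo_hashes) >= MATCH_THRESHOLD
```

The threshold is what lets Apple claim a very low account-level false-positive rate: a single spurious hash match is not, on its own, supposed to trigger review.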

But researchers using code posted publicly by Apple say they’ve already found distinct photos that share the same hash. In one example, researchers artificially modified a picture of a dog so that its hash was identical to that of a photo of a young girl. Experts worry that bad actors could modify innocuous photos to trigger matches with known child pornography, then send those photos to unsuspecting users.
