(Replying to PARENT post)

Before they make it to human review, photos in decrypted vouchers have to pass a second CSAM match against a separate classifier that Apple keeps to itself. Presumably, if an image doesn't match the same asset, it won't be passed along. This is explained towards the end of the threat model document that Apple posted to its website. https://www.apple.com/child-safety/pdf/Security_Threat_Model...
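
Roughly, the check looks like this as I read the doc (every name below is made up for illustration, not Apple's actual code or API):

    # Sketch of the two-step match described in the threat model document.
    # All identifiers here are hypothetical stand-ins, not Apple's implementation.

    def second_perceptual_hash(image_bytes):
        # Stand-in for the independent, server-side hash Apple says it keeps
        # private. Returns the matched known-CSAM asset id, or None if no match.
        raise NotImplementedError

    def forward_to_human_review(image_bytes, on_device_match_id):
        # Step 1 already happened on device: NeuralHash matched an asset in the
        # blinded database, which is what produced the (now decrypted) voucher.
        # Step 2, server side: the private hash must hit the *same* asset,
        # otherwise the voucher is dropped before a human ever sees it.
        server_match_id = second_perceptual_hash(image_bytes)
        return server_match_id is not None and server_match_id == on_device_match_id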
👤fay59🕑4y🔼0🗨️0

(Replying to PARENT post)

What happens if someone leaks or guesses the weights on that "secret" classifier? The whole system is so ridiculous even before considering the amount of shenanigans the FBI could pull by putting in non-CSAM hashes.
👤sigmar🕑4y🔼0🗨️0

(Replying to PARENT post)

You can still hack someone's phone and upload actual CSAM images. That exposes the attacker to additional charges, but they're already facing extortion charges and all that anyway. I don't understand the "golly gee whizz, they'd have to commit a severe felony first in order to launch that kind of attack" argument.

Don't know why this hasn't already been used on other cloud services, but maybe it will be now that it's been more widely publicized.

👤lamontcg🕑4y🔼0🗨️0

(Replying to PARENT post)

How...exactly did they train that CSAM classifier, seeing as the training data would be illegal to possess? I'd be most interested in an answer to that one. They are willing to make that training data set a matter of public record at the first trial, yes?

Or are we going to say secret evidence is just fine nowadays? Bloody mathwashing.

👤salawat🕑4y🔼0🗨️0