a785236

πŸ“… Joined in 2017

πŸ”Ό 57 Karma

✍️ 10 posts

10 total posts
Stories: 3
Comments: 7
Ask HN: 0
Show HN: 0
Jobs: 0
Polls: 0

(Replying to PARENT post)

It's 17% heavier than the iPhone 13 mini.

Source: https://www.apple.com/iphone/compare/?modelList=iphone-13-mi...

πŸ‘€a785236πŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

A minor but important correction. Krebs wrote that the Governor claimed that β€œfixing the flaw could cost the state $50 million.” That’s not quite right. In the press conference linked in Krebs’s post, the Governor actually claims that the β€œincident alone may cost Missouri taxpayers up to $50 million.” I’d guess this number includes an estimate for the legal cost of dealing with the data breach plus any statutory penalties the state might incur (plus a grossly inflated price for fixing the bug).
πŸ‘€a785236πŸ•‘4yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

No, it isn't.

> ... this algorithm replaces the data with a random value that has no relation to the original.

Based on that sentence, I assume that by "the data" you mean the part of a picture corresponding to a person's face. But removing the face doesn't necessarily make the subject hard to identify, especially if the subject is very familiar to you. That holds even if you've never seen that specific picture and have no additional context like place and time.

Just look at the examples on the GitHub page for proof! The picture of Obama and Trump is clearly recognizable, and at least one of the other Obama photos is easy to recognize. The soccer players are identifiable from their jerseys (Messi is #10 on Barcelona). Jennifer Lawrence was also easy for me to spot.

πŸ‘€a785236πŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I wish the authors wouldn't oversell the privacy claim:

> GitHub: "The DeepPrivacy GAN never sees any privacy sensitive information, ensuring a fully anonymized image."

> Abstract: "We ensure total anonymization of all faces in an image by generating images exclusively on privacy-safe information."

> Paper: "We propose a novel generator architecture to anonymize faces, which ensures 100% removal of privacy-sensitive information in the original face."

Changing a face anonymizes an image the same way that removing a name anonymizes a dataset -- poorly. This is cool, but it's not anonymization.
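
To see why the analogy holds, here's a minimal sketch of the classic linkage attack on a dataset with names removed. All records, names, and the auxiliary voter roll below are made up for illustration:

    # Made-up "de-identified" medical records: name removed, but the
    # quasi-identifiers (zip, date of birth, sex) are kept.
    deidentified = [
        {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "flu"},
        {"zip": "02139", "dob": "1962-01-15", "sex": "M", "diagnosis": "asthma"},
    ]

    # Public auxiliary data (think voter roll) that still carries names.
    voter_roll = [
        {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    ]

    # Join on the shared quasi-identifiers to put the name back.
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    names = {key(v): v["name"] for v in voter_roll}
    print([(names[key(r)], r["diagnosis"])
           for r in deidentified if key(r) in names])
    # [('J. Doe', 'flu')] -- so much for "anonymized"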

πŸ‘€a785236πŸ•‘6yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

You're certainly right that formal definitions are important. However, on this forum, I think informality can be appropriate. Though there are variations and inconsistencies, in the theoretical cryptography community, second preimage resistance is most often formalized as "universal one-wayness" and preimage resistance is formalized as "one-wayness."

I was, however, careless when I claimed that shrinking by 1 bit suffices for preimage resistance. The hash function needs to shrink by at least log(n) bits to rule out computationally-bounded adversaries finding preimages.

Also, apologies for the formatting of my OP - I don't post here often.

πŸ‘€a785236πŸ•‘7yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

It's true as long as the function f is sufficiently "shrinking." The domain of the function (where the x's live) must be sufficiently larger than its range (where the f(x)'s live). For example, if the domain is size N, a range of size 0.99N is enough to guarantee that collision resistance implies preimage or second-preimage resistance.

Said another way, if there are many collisions and you still have a hard time finding them (collision resistance), then you can prove that it's also hard to find preimages or second preimages.
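
For intuition, a minimal sketch of that reduction (my own illustration, not anything from the thread): any adversary that finds second preimages is already a collision finder. The toy one-byte hash and the brute-force "adversary" are stand-ins; a real proof tracks success probabilities rather than assuming the adversary always wins.

    import hashlib

    def f(x: bytes) -> bytes:
        # Toy shrinking hash: first byte of SHA-256, so collisions abound.
        return hashlib.sha256(x).digest()[:1]

    def second_preimage_adversary(x: bytes) -> bytes:
        # Stand-in for "an algorithm breaking second-preimage resistance":
        # brute-forces some x' != x with f(x') == f(x).
        target = f(x)
        for i in range(2**16):
            cand = i.to_bytes(2, "big")
            if cand != x and f(cand) == target:
                return cand
        raise RuntimeError("no second preimage in search space")

    # The reduction: pick any x, ask the adversary for a second preimage,
    # and the pair is a collision.
    x = b"\x00\x07"
    x2 = second_preimage_adversary(x)
    assert x != x2 and f(x) == f(x2)   # a genuine collision

For plain preimage resistance the same trick is where the shrinking assumption earns its keep: you hand the adversary y = f(x) for a random x, and only because each output has many preimages is the adversary's answer likely to differ from your x.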

Your example, f(x) = x, is not shrinking at all: there are no collisions.

A fundamental property of hash functions is that they're shrinking -- so much so that it often goes without mention in informal settings. Hash functions are typically defined in two ways: shrinking arbitrary-length inputs to a constant length (e.g., n bits to 256 bits) or shrinking inputs by some fixed amount or factor (e.g., n bits to n-1 bits, or to n/2 bits). Even shrinking by one bit makes the range half the size of the domain, guaranteeing many collisions and ruling out counter-examples like the one you gave.
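
A quick pigeonhole check of that point (toy parameters, chosen only for illustration): dropping a single bit already forces 128 colliding pairs on an 8-bit domain, while the identity map has none, which is why f(x) = x says nothing about collision resistance.

    from itertools import product

    domain = ["".join(b) for b in product("01", repeat=8)]  # all 8-bit strings

    def colliding_pairs(f):
        seen, pairs = {}, 0
        for x in domain:
            y = f(x)
            pairs += seen.get(y, 0)     # each earlier preimage of y pairs with x
            seen[y] = seen.get(y, 0) + 1
        return pairs

    print(colliding_pairs(lambda x: x[:-1]))  # 128: drop 1 bit -> 2 preimages per output
    print(colliding_pairs(lambda x: x))       # 0: identity, no collisions exist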

πŸ‘€a785236πŸ•‘7yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

K-anonymity provides very little protection, if any. A few brief points:

1. I've never seen a formal definition of security that k-anon supposedly satisfies. While I personally really like formal guarantees, one might argue this wouldn't be so bad absent concrete problems with the definition. Which leads us to...

2. K-anon doesn't compose. The JOIN of 2 databases, each k-anonymized, can be 1-anonymous (i.e., no anonymity at all), no matter what k is (see the sketch after this list).

3. The distinction between quasi-identifiers and sensitive attributes (central to the whole framework) is worse than meaningless: it's misleading. Every sensitive attribute is a quasi-identifier given the right auxiliary datasets. Using k-anon essentially requires one to determine a priori which auxiliary datasets will be used when attacking the k-anonymized dataset.

4. My understanding of the modified versions (l-diversity, t-closeness, etc.) is less developed, but I believe they suffer from similar weaknesses, obscured by the additional definitional complexity.
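
To make point 2 concrete, here's a minimal sketch of the composition failure; the population, attributes, and values are all invented. Two releases, each 2-anonymous with respect to its own published quasi-identifier, jointly pin down one person's diagnosis.

    # Hidden population: Alice (63105, 34, HIV), Bob (63105, 51, flu),
    #                    Carol (63130, 34, asthma), Dave (63130, 51, flu).

    # Release 1 suppresses age; every zip bucket has 2 rows => 2-anonymous.
    release_1 = [("63105", "*", "HIV"),    ("63105", "*", "flu"),
                 ("63130", "*", "asthma"), ("63130", "*", "flu")]

    # Release 2 generalizes zip; every age bucket has 2 rows => 2-anonymous.
    release_2 = [("631**", "34", "HIV"), ("631**", "34", "asthma"),
                 ("631**", "51", "flu"), ("631**", "51", "flu")]

    # The attacker knows only Alice's quasi-identifiers: zip 63105, age 34.
    candidates_1 = {d for z, a, d in release_1 if z == "63105"}  # {'HIV', 'flu'}
    candidates_2 = {d for z, a, d in release_2 if a == "34"}     # {'HIV', 'asthma'}
    print(candidates_1 & candidates_2)  # {'HIV'} -- Alice, uniquely re-identified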

(Edit: typos and autocorrect)

πŸ‘€a785236πŸ•‘8yπŸ”Ό0πŸ—¨οΈ0