(Replying to PARENT post)
Thing is, that's not even true, and it doesn't fully acknowledge the problem. You often CAN infer information from people's outward appearance, albeit probabilistically. Therein lies the problem: you can very easily train an algorithm to be maximally right according to your cost function and still end up biased, because the underlying distributions aren't the same between groups.
The issue is that as a society we've (mostly) decided that unfairly classifying someone based on correlated but non-causal characteristics is wrong, EVEN in the extreme case where that assumption makes you right more often than wrong.
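A minimal sketch of that failure mode, using purely synthetic data and hypothetical base rates (nothing here comes from a real dataset): a rule that only maximizes overall accuracy ends up with wildly different error rates for the two groups.

  # Synthetic illustration: a classifier that only maximizes overall accuracy
  # can still produce very different error rates per group when the groups'
  # underlying label distributions differ.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 10_000
  group = rng.integers(0, 2, size=n)            # group membership: 0 or 1
  base_rate = np.where(group == 0, 0.2, 0.6)    # assumed different base rates
  label = rng.random(n) < base_rate             # outcome correlated with group

  # Accuracy-maximizing rule when group is the only available feature:
  # predict the majority label within each group.
  pred = base_rate > 0.5

  for g in (0, 1):
      m = group == g
      acc = (pred[m] == label[m]).mean()
      fpr = (pred[m] & ~label[m]).sum() / (~label[m]).sum()
      print(f"group {g}: accuracy={acc:.2f}, false-positive rate={fpr:.2f}")

The rule is "optimal" for the cost function, yet every member of the higher-base-rate group who did nothing gets flagged, which is exactly the kind of correlated-but-non-causal judgement described above.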
(Replying to PARENT post)
This. Some childhood development studies have borne out that children treat their own 'ingroup' preferentially over those of an 'outgroup' [1]. This is before they really are "impressionable", as I understand it, so it suggests something of an inbuilt mechanism. Though they made this distinction:
“Racism connotes hostility and that’s not what we studied. What the study does show is that babies use basic distinctions, including race, to start to cleave the world apart by groups of what they are and aren’t a part of.” [2]
1 - https://www.frontiersin.org/articles/10.3389/fpsyg.2018.0175...
2 - https://www.telegraph.co.uk/news/science/science-news/107705...
(Replying to PARENT post)
https://en.wikipedia.org/wiki/Thoughtcrime#Crimestop
We basically need this in software so that AI researchers can stop getting lambasted with claims of racism
(Replying to PARENT post)
They give everyone tests which reveal that primitive part of your brain. Essentially they want to shock you by forcing you to fail. It is almost like they are trying to shame people, which is a terrible way to teach.
(Replying to PARENT post)
You won’t hear much about it though because: 1) it’s a sensitive topic that is easily misunderstood by those who aren’t researchers, and 2) when everyone else is trying to scrub racism out of their models for politically correct reasons, having a “racist” AI can actually produce a competitive advantage in some industries, and it can be difficult for competitors to reach similar performance if they don’t allow that natural racism to emerge.
In short, it’s not a big deal, humans have some amount of racism whether they admit it or not. What matters is that you treat people equally and without prejudice, regardless what you may think of their race. Judgements about a group and judgements about an individual are two different things.
Racist results don’t even need to be about negative things; it could be as simple as “a person of this race prefers this kind of food over that one”.
(Replying to PARENT post)
Well there’s all kinds of mimicry in nature to exploit exactly this assumption.
(Replying to PARENT post)
Our minds are literally categorical machines: in order to fight entropy, we find stable states by classifying the world into categories. So any form of thought is a form of discrimination between categories. Every word can be thought of as a category.
(Replying to PARENT post)
Absolutely. The mark of intelligence is to recognize them and correct for them.
And the people who just permanently slip up, and never correct themselves, and somehow always make the same snap judgments? 100% racist.
(Replying to PARENT post)
I mean, it doesn’t seem like people tended to apply negative labels to some races, just that they labeled those people’s race at all, and didn’t do that for white people. At least in the example.
I don’t think it is useful to lump these two concepts together.
(Replying to PARENT post)
You can't understand very much at all just based on how things look. That holds for humans and non-humans alike.
(Replying to PARENT post)
If this were the case, babies would be racist by default, but they are not. Racism is taught.
(Replying to PARENT post)
"And when one user uploaded an image of the Democratic presidential candidates Andrew Yang and Joe Biden, Yang who is Asian American, was tagged as “Buddhist” (he is not) while Biden was simply labeled as “grinner.”"
They are not removing images that categorize blacks as blacks. They are removing images that are incorrect.
(Replying to PARENT post)
Racism starts out of preferential treatment of some people. Most racist people have a “root event”, where the other party didn’t get condemned. It may even have happened several times, with various consequences (rape, molestation, repeated racketeering, etc.).
Then they proceed to report the rape/molestation/racketeering to the police. The police don’t act, because they suspect the complainant of being racist. Which has the opposite effect to the one desired: it doesn’t condemn the criminal, and it puts the burden on the victim.
Then they seek security in their lives. So if the probability of experiencing rape, molestation, racketeering or other crime is high enough that the racist has been confronted with it in his life, then the first-best approximation for judging whether someone might be a criminal is whether he’s free or in jail. That’s in well-functioning societies. In non-functioning societies, where criminals are not in jail, the second approximation for protecting yourself is secondary indicators, inferred from grossly racist statistics. Here racism is born.
It also explains why racist people often still have various friends from the group they are supposed to mistrust. It’s because they were able to assess their probity and trustworthiness at some opportunity. That’s why mixing people by compulsory rules works. But mixing people is a poor palliative: it doesn’t solve the underlying problem of a non-functioning society, so it doesn’t make the people less racist, despite giving the appearance that people work together.
Racism is often practiced with regret by those who exert it. But it is the second-best approximation for seeking security. Racism is the result of a non-functioning society, which is caused by being more lenient on criminals of a chosen category. I’m pretty sure it is possible to engineer racism by letting a made-up group get away with crimes.
(Replying to PARENT post)
It's really only with people that you can't tell what something does or is from the outside. Cars, trees, animals, mountains... everything else, if it looks a certain way, it acts that way. Early AI will probably have just as much trouble with this as people have had historically.
I really wish people would start viewing racism as a willingness to let that primitive part of the mind be in command, rather than as a binary attribute you either have or don't. Like, nobody is 100% not racist. There will always be slip-ups, over-simplifications, snap judgements, subconscious or not.