(Replying to PARENT post)
I'm now using a custom email domain (although it's just redirecting to my Gmail for now), and doing regular backups of my emails (through Thunderbird) and of my Google Drive.
With news like this becoming more and more common, you never know when it'll happen to you.
(Replying to PARENT post)
I used to think working with governments was very hard; getting support at places like the DMV or for passport renewals is a slog. But I never thought the supposedly futuristic tech world would turn out even more arcane, and worse than the governments.
(Replying to PARENT post)
I fully expect people to start developing vocabularies to harden their group communication against these automatons.
Also, please don't use the term "AI". Calling it intelligence is too charitable for something that can't distinguish a game of chess from hate speech.
(Replying to PARENT post)
You should view ML as an enhancement for humans.
So the problem here is that no ML system, short of being an AGI, is capable of fully comprehending context; in this case, the use of "black" and "white" in chess, which has absolutely no racial context.
So the mistake here is (likely) automated action against the channel. That should never happen. At least, not for this particular signal.
What should happen is the channel should be flagged for review by a human. Any human would appreciate the context and take no action.
This is the "enhancement" part. With no automated assistance, a human might be capable of moderating (made-up numbers) 100 channels. That doesn't work at YouTube's scale. ML systems might enhance that human's moderating ability to 10,000 channels.
Even better, by flagging for review, you're actually producing training data for your ML. The precision-recall of your automated disciplinary system needs to be incredibly good before you take humans out of the equation. A false-positive ban or block is bad for the individual channel; it can be devastating, in fact. The corporate view is that "one channel doesn't matter," and they're right... to a point. At some point it undermines confidence in the platform, and then it does matter.
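The flag-for-review flow described above can be sketched in a few lines. Everything here (names, the threshold, the verdict strings) is invented for illustration, not anyone's real pipeline:

```python
# Sketch of a human-in-the-loop moderation pipeline: the model only
# flags; a human makes the final call, and that decision becomes a
# fresh labeled training example. All names/thresholds are made up.

REVIEW_THRESHOLD = 0.7  # model score above this -> send to a human

def moderate(channel, score, human_review, training_data):
    """score: the model's hate-speech probability for the channel."""
    if score < REVIEW_THRESHOLD:
        return "no_action"
    # Never act automatically on this signal alone: flag for review.
    verdict = human_review(channel)  # "violation" or "ok"
    # Every reviewed case is human-verified training data.
    training_data.append((channel, verdict))
    return "ban" if verdict == "violation" else "no_action"

# A chess channel scores high on "black/white/attack" tokens,
# but any human reviewer sees the context and clears it.
log = []
result = moderate("agadmator", 0.92, lambda c: "ok", log)
```

The point of routing through `human_review` is exactly the one made above: false positives cost a review instead of a ban, and the reviewer's verdict feeds back into the model.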
(Replying to PARENT post)
> Even though the channel was restored within 24 hours, YouTube did not explain why it had briefly blocked Croatian chess player Antonio Radic, also known as 'Agadmator,' from its platform, the Dailymail reported.
> Experts suspect that it was the usage of words like "black" and "white" that confused YouTube's AI filters. They found that 80% of chess videos that were flagged for hate speech actually had terms like 'black,' 'white,' 'attack' and 'threat.'
YouTube didn't say why, and they simply "suspect."
It's just as likely that it was flagged for other reasons, for example mass "reporting" of the channel as offensive by people with an axe to grind against this particular chess YouTuber. (It's been my personal experience that there's a lot of politics in the chess community.)
(Replying to PARENT post)
https://www.youtube.com/user/AGADMATOR
I enjoy watching on my lunch breaks. Great breakdowns of games between top players past and present.
(Replying to PARENT post)
People I know actually use "dark-brown", "light-brown", "light-skinned", and other more accurately descriptive terms when actually describing someone's appearance, and I like that much better.
(Replying to PARENT post)
Surely effectively all chess videos, banned or not, have those words in them? This all feels pretty speculative.
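The base-rate objection can be made concrete with invented numbers: if those words appear in nearly all chess videos, flagged or not, their presence in 80% of flagged ones carries almost no evidence. A toy likelihood-ratio check (both rates are made up for illustration):

```python
# Toy base-rate check (all numbers invented). If 'black'/'white'
# appear in 80% of flagged chess videos but also in ~95% of all
# chess videos, the likelihood ratio is below 1 -- the words are,
# if anything, weak evidence *against* a video being flag-worthy.

p_words_given_flagged = 0.80  # figure quoted in the article
p_words_given_any = 0.95      # assumed rate across all chess videos

likelihood_ratio = p_words_given_flagged / p_words_given_any
print(round(likelihood_ratio, 2))  # -> 0.84: essentially uninformative
```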
(Replying to PARENT post)
Or maybe don't automate banning YouTube channels via nebulous black boxes? Or can advertisers stop being afraid of anything and everything?
(Replying to PARENT post)
He doesn't know why his podcast was flagged. He mentions a few possibilities, one of which is the black-and-white thing, which would be a mistake. It's notable how guarded his speech is, as though he might say something else verboten.
I've watched hundreds of chess videos on YouTube and never heard of one getting flagged until now.
(Replying to PARENT post)
Anyone knowledgeable on this subject?
(Replying to PARENT post)
YouTube is essentially a public forum with an audience of entire countries, where the people who run it choose what gets amplified, what gets shunned, and what gets outright blocked.
Who would be happy with a private entity deciding the future mindsets at this scale?
(Replying to PARENT post)
These cases get lots of exposure and people who notice, so the real QA happens in these edge cases where customers find the bugs. What worries me is the not-so-visible algorithms.
I once had a company market sentiment analysis to me. I asked about their accuracy, and they answered that they can't actually test it, but their training model scores x%, so make decisions accordingly. But then I had coworkers reporting what percentage of responses had one sentiment versus another, without questioning those assumptions at all.
I think the problem is that people are bad with probabilities, so to them something either is or isn't. And I seem to witness a gulf between people who kind of understand and people who don't but trust the output anyway.
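The sanity check the vendor skipped is cheap: hand-label a small random sample and compare against the model's outputs. A minimal sketch, with a rough normal-approximation interval (all data here is toy data):

```python
import math

def measured_accuracy(predictions, hand_labels):
    """Accuracy on a hand-labeled sample, plus a rough 95% interval
    (normal approximation; reasonable for samples of a few hundred)."""
    n = len(hand_labels)
    correct = sum(p == y for p, y in zip(predictions, hand_labels))
    acc = correct / n
    half_width = 1.96 * math.sqrt(acc * (1 - acc) / n)
    return acc, (acc - half_width, acc + half_width)

# Vendor claims 90% on their training set; a spot check on our own
# hand-labeled data (toy example below) can tell a different story.
preds  = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos"]
labels = ["pos", "pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos"]
acc, ci = measured_accuracy(preds, labels)
```

With only ten samples the interval is very wide, which is itself the lesson: a single claimed accuracy number without an error bar invites exactly the false confidence described above.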
(Replying to PARENT post)
Maybe Google should put more effort into training AIs to recognize when they don't know, instead of shoehorning in an answer.
By analogy, I would expect a non-stupid human who has never seen a screw before, but who is trained to hammer nails, to quickly recognize that something is wrong the first time they're presented with a screw, and not attempt to hammer it flush.
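"Recognize when you don't know" has a simple form in classification: abstain when the model's top probability is below a threshold, rather than forcing a label. A toy sketch (the threshold, labels, and probabilities are all made up):

```python
def classify_or_abstain(probs, threshold=0.9):
    """probs: dict mapping label -> model probability.
    Returns the top label only if the model is confident enough;
    otherwise punts to a human ("unknown")."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else "unknown"

# Confident case: the model commits to a label.
sure = classify_or_abstain({"hate_speech": 0.97, "ok": 0.03})
# Chess commentary splits the probability mass: abstain instead.
unsure = classify_or_abstain({"hate_speech": 0.55, "ok": 0.45})
```

The caveat is that neural classifiers are often poorly calibrated, so the raw probability may overstate confidence; but even a crude abstain rule is the "don't hammer the screw" behavior the comment asks for.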
(Replying to PARENT post)
What if people start publishing hate speech but replace the hateful words with, say, 'Alphabet'?
Will the AI learn that 'Alphabet' is a hateful word?
Surely someone with enough resources would be able to do that.
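That attack is a form of training-data poisoning: if a filter learns word-level associations from labeled content, flooding it with "hateful" examples containing an innocuous token shifts that token's learned score. A toy frequency-based illustration (not any real system's scoring; all data invented):

```python
from collections import Counter

def word_scores(examples):
    """Toy word-score 'filter': a word's hate score is the fraction
    of labeled examples containing it that are labeled hateful.
    Real systems are more complex, but word-level features fail
    in the same way."""
    hateful, total = Counter(), Counter()
    for text, label in examples:
        for w in set(text.lower().split()):
            total[w] += 1
            if label == "hate":
                hateful[w] += 1
    return {w: hateful[w] / total[w] for w in total}

clean = [("i love the alphabet song", "ok")] * 9
# An attacker mass-publishes "hateful" posts containing the word.
poisoned = clean + [("alphabet alphabet alphabet", "hate")] * 41
scores = word_scores(poisoned)
```

After poisoning, "alphabet" scores 0.82 hateful while the surrounding clean words stay at zero, which is exactly the learned association the comment speculates about.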
(Replying to PARENT post)
People love making clickbait titles.
(Replying to PARENT post)
So 18% did include obvious racist language or hate speech?
(Replying to PARENT post)
Various missingspaces, Carnegie Melon[sic], etc.
(Replying to PARENT post)
I can't say its name on YouTube on my own channel, 'cos YouTube thinks I'm saying something else. (Not sure what would happen if I kept on doing it. Maybe they would start sending me apprentice-Nazi traffic :D )
(Replying to PARENT post)
Does it take a lot of effort to replace them with neutral, non-offensive colors?
Speaking of chess, don't you think that chess being mostly male and mostly white is a direct consequence of the language it uses?
(Replying to PARENT post)
> Sorry about that
(Replying to PARENT post)
Anyone who has ever used these instruments of injustice is tainted.
Oh, wait.
(Replying to PARENT post)
From a quick read: last June, the channel was banned for "harmful and dangerous" content. The ban was removed in under 24 hours. YouTube has not confirmed the reason. And yet:
> Experts suspect that it was the usage of words like "black" and "white" that confused YouTube's AI filters.
Well, if experts said it, it must be true! The experiment they ran is speculative at best, and the article just ran with it as absolute fact. If the filter were triggered by the use of "black" and "white," wouldn't every chess channel fall victim to it? The filter isn't new; why was it triggered just once, last June?
A lot of questions raised, and an article with no interest in answering them. It's credited to "Buzz Staff"... I think it's always suspect when no one wants to attach their name to something they wrote. A smell of clickbait running all the way through this one.