(Replying to PARENT post)

Not a word from Ilya. I can’t wrap my mind around his motivation. Did he really fire Sam over “AI safety” concerns? How is that remotely rational.
👤valine🕑2y🔼0🗨️0

(Replying to PARENT post)

It might be because of AI safety, but I think it's more likely because Sam was executing plans without informing the board: making deals with outside companies, allocating funds to profit-oriented products and announcing them, and so on. Perhaps he also wanted to reduce investment in the alignment research that Ilya considered important. Hopefully we'll learn the truth soon, though I suspect it involves confidential deals with other companies, and that's why we haven't heard anything.
👤ah765🕑2y🔼0🗨️0

(Replying to PARENT post)

> Did he really fire Sam over "AI safety" concerns? How is that remotely rational.

Not rational only if (unlike Sutskever, Hinton, and Bengio) you are not a "doomer" / "decel". Ilya is very vocal and on record that he suspects there may be "something else" going on with these models. He and DeepMind claim AlphaGo is already AGI (correction: ASI) in a very narrow domain (https://www.arxiv-vanity.com/papers/2311.02462/). Ilya in particular predicts it is a given that neural networks will achieve broad AGI (superintelligence) before alignment is figured out, unless researchers start putting more resources into it.

(Like LeCun, I am not a doomer; but then again, I am no Hinton, so I may not know any better.)

👤ignoramous🕑2y🔼0🗨️0

(Replying to PARENT post)

Because that's not the actual reason. It looks like a hostile takeover. The "king" of, arguably, the most important company in the world got kicked out with very little effort. It's pretty extraordinary, and the power shift is extraordinary too.
👤bufferoverflow🕑2y🔼0🗨️0

(Replying to PARENT post)

If it really was about "safety", why wouldn't Ilya have made some statement about opening the details of their model, at least to some independent researchers under tight controls? That's what makes it look like a simple power grab: the board has said absolutely nothing about what actions it would take to move toward a safer model of development.
👤tdubhro1🕑2y🔼0🗨️0

(Replying to PARENT post)

To shine some light on the true nature of the "AI safety tribe" aspect, I highly recommend reading the other top HN post / article: https://archive.is/Vqjpr
👤singularity2001🕑2y🔼0🗨️0

(Replying to PARENT post)

No, he didn't fire Sam over AI safety concerns. That's completely made up by people in the twittersphere. The only thing we know is that the board said the reason was that he lied to it. The Guardian[1] reported that he was working on a new startup and that staff had been told the firing was due to a breakdown in communication, and had nothing to do with safety, security, malfeasance, or a number of other things.

[1] https://www.theguardian.com/technology/2023/nov/18/earthquak...

👤seanhunter🕑2y🔼0🗨️0