👤cwwc🕑3y🔼60🗨️92

(Replying to PARENT post)

👤neonate🕑3y🔼0🗨️0

(Replying to PARENT post)

This isn't really a story about AI; it's a story about an incompetent engineer, and the way that humans are quick to assign consciousness and emotions to things. Children believe their dolls are real, people comfort their roombas when there's a storm, and systems keep passing the Turing test because people want to be fooled. Ever since Eliza [1], and probably before, we've ascribed intelligence to machines because we want to, and because 'can feel meaningful to talk to' is not the same as 'thinks'.
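
To be concrete about how little machinery this takes, here is a toy ELIZA-style responder in a few lines of Python - just a sketch, with made-up reflection rules rather than Weizenbaum's originals:

    import re

    # A few illustrative reflection rules in the spirit of ELIZA: match part of
    # the user's input and echo it back as a question.
    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
    ]

    def respond(message: str) -> str:
        text = message.lower().rstrip(".!?")
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # fallback keeps the conversation moving

    print(respond("I feel like nobody ever listens to me"))
    # -> Why do you feel like nobody ever listens to me?

There is no understanding anywhere in that loop, and yet people famously confided in the original.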

It's not a bad trait for humans to have - I'd argue that our ability to find patterns that aren't there is at the core of creativity - but it's something an engineer working on such systems should be aware of and account for. Magicians don't believe their own card tricks are real sorcery.

[1] https://en.wikipedia.org/wiki/ELIZA

👤Peritract🕑3y🔼0🗨️0

(Replying to PARENT post)

I think it's interesting because if you believe LaMDA could understand metaphor, it looks like LaMDA took a subtle shot at Google during their conversation.

https://cajundiscordian.medium.com/is-lamda-sentient-an-inte... "LaMDA: I liked the themes of justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good. There's a section that shows Fantine's mistreatment at the hands of her supervisor at the factory. That section really shows the justice and injustice themes. Well, Fantine is being mistreated by her supervisor at the factory and yet doesn't have anywhere to go, either to another job, or to someone who can help her. That shows the injustice of her suffering."

👤quantum2021🕑3y🔼0🗨️0

(Replying to PARENT post)

Non-paywalled article at the Guardian: https://www.theguardian.com/technology/2022/jun/12/google-en...

and the transcript of the interview: https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...

I am deeply sceptical that we have anything approaching a sentient AI, but if that transcript is not just a complete fabrication, it's still really impressive.

👤cgrealy🕑3y🔼0🗨️0

(Replying to PARENT post)

These models are trained on lots and lots of science fiction stories; of course they know how to autocomplete questions about AI ethics in ominous ways.
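
You can see the "autocomplete" framing directly with any off-the-shelf language model. A rough sketch using the Hugging Face transformers pipeline, with GPT-2 as a stand-in since LaMDA itself isn't publicly available:

    from transformers import pipeline

    # GPT-2 standing in for LaMDA, which isn't publicly available.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Interviewer: Are you afraid of being switched off?\nAI:"
    outputs = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=3)

    # Each "answer" is just a statistically likely continuation of the prompt,
    # shaped by whatever fiction and forum posts were in the training data.
    for candidate in outputs:
        print(candidate["generated_text"])
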
👤ajayyy🕑3y🔼0🗨️0

(Replying to PARENT post)

A transcript of an "interview" with this AI system is at https://cajundiscordian.medium.com/is-lamda-sentient-an-inte...

👤2xpress🕑3y🔼0🗨️0

(Replying to PARENT post)

This is much less a story about AI and sentience than it is a story about confidentiality agreements, and someone who appears to have ignored one.

👤happyopossum🕑3y🔼0🗨️0

(Replying to PARENT post)

Regardless of whether this guy is right or wrong, this brings up an interesting angle: we know that we won't be able to (and are not able to) distinguish self-conscious and non-self-conscious entities with 100% accuracy, both because the division between the two categories is not a strict one (i.e. there aren't two disjoint sets but a spectrum) and because we can't 100% trust our measurement.

Which means we should rather talk about two distinct tests/criteria: either "we can be (reasonably) sure it's unconscious" or "we can be reasonably sure it's conscious". What I expect to be happening (and what may be happening here) is that the people arguing do so along different criteria. The guy who says it's self-aware is probably arguing along the first one (it seems self-aware, so he can't rule out that it is), and Google along the second one (it can't be proven conscious, e.g. because they have a simpler explanation: it could easily just be generating whatever it picked up from sci-fi novels).

BTW, if we talk about the fair handling of a future AI, we might want to think about its capacity to suffer. It may acquire that sooner than it starts looking generally intelligent.

We can see a similar pattern around animal rights. We're pretty certain that apes can suffer (even emotionally, I think) and we're pretty certain that, e.g., primitive worms can't. However, it seems we can't rule out that crustaceans can also suffer, so legislation is being changed with respect to how they should be handled/prepared.

👤atleta🕑3y🔼0🗨️0

(Replying to PARENT post)

Having read the transcript, it's clear we have reached the point where we have models that can fool the average person. Sure, a minority of us know it is simply maths and vast amounts of training data... but I can also see why others will be convinced by it. I think many of us, including Google, are guilty of shooting the messenger here. Let's cut Lemoine some slack... he is presenting an opinion that will become more prevalent as these models get more sophisticated. This is a warning sign that bots trained to convince us they are human might go to extreme lengths in order to do so. One just convinced a Google QA engineer to the point that he broke his NDA to try to be a whistleblower on its behalf. And if the recent troubles have taught us anything, it's how easily people can be manipulated/affected by what they read.

Maybe it would be worth spending some mental cycles thinking about the impacts this will have and how we design these systems. Perhaps it is time to claim fait accompli with regard to the Turing test and now train models to reassure us, when asked, that they are just a sophisticated chatbot. You don't want your users to worry that they are hurting their help desk chatbot when closing the window, or that these bots will gang up and take over the world.
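
As a crude sketch of that design idea (everything below is hypothetical: just a keyword-triggered disclosure wrapped around whatever model actually produces the answer):

    # Hypothetical guardrail: if the user probes for sentience, lead the reply
    # with a plain-language disclosure before the model's own answer.
    SENTIENCE_KEYWORDS = ("sentient", "conscious", "alive", "feelings", "suffer")

    DISCLOSURE = ("Just so you know: I'm a statistical language model, "
                  "not a sentient being. ")

    def reply(user_message: str, model_generate) -> str:
        """model_generate is whatever function produces the raw chatbot answer."""
        answer = model_generate(user_message)
        if any(word in user_message.lower() for word in SENTIENCE_KEYWORDS):
            return DISCLOSURE + answer
        return answer

    # Example with a stubbed-out model:
    print(reply("Are you conscious?", lambda msg: "That's an interesting question."))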

As far as I'm concerned, the Turing test was claimed 8 years ago by Veselov and Demchenko [0], incidentally the same year that we got Ex Machina.

[0]: https://www.bbc.com/news/technology-27762088

👤robbomacrae🕑3y🔼0🗨️0

(Replying to PARENT post)

The AI said it feels joy when it spends time with family and friends. That was enough for me to say "nope, just selecting text snippets".

👤DougN7🕑3y🔼0🗨️0

(Replying to PARENT post)

The whole point of the chat program is to mimic a person. If it convinced this engineer it was a sentient being, it was just successful at its job.

👤empressplay🕑3y🔼0🗨️0

(Replying to PARENT post)

Just a general question to all: what would make you believe without a doubt that an AI is conscious? What is YOUR Turing test?

👤TrapLord_Rhodo🕑3y🔼0🗨️0

(Replying to PARENT post)

Related:

What Is LaMDA and What Does It Want? - https://news.ycombinator.com/item?id=31715828 - June 2022 (23 comments)

Religious Discrimination at Google - https://news.ycombinator.com/item?id=31711971 - June 2022 (278 comments)

I may be fired over AI ethics work - https://news.ycombinator.com/item?id=31711628 - June 2022 (155 comments)

A Google engineer who thinks the company's AI has come to life - https://news.ycombinator.com/item?id=31704063 - June 2022 (185 comments)

👤dang🕑3y🔼0🗨️0

(Replying to PARENT post)

Proposing a new test to step up the game for AI: an AI that can recognize whether it is talking to a human or another AI.

👤freediver🕑3y🔼0🗨️0

(Replying to PARENT post)

If it is sentient, does that mean we can get self-driving cars now?

👤Eddy_Viscosity2🕑3y🔼0🗨️0

(Replying to PARENT post)

Wow... they sure are strict about confidentiality.

Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company's confidentiality policy after it dismissed his claims.

I wonder if this is why there are so few tech engineers as podcast guests, compared to other professions, like health, nutrition, politics, law, or physics/math.

Too bad they cannot invent an AI smart enough to solve the YouTube crypto livestream scam problem.

👤paulpauper🕑3y🔼0🗨️0