(Replying to PARENT post)

Based on how these current models have been described, I am deeply uneasy about ascribing "opinions" to them.

"What can I get this thing to spit out with a certain prompt" is very meaningfully not the same as "an AI's opinion." Or this quote pulled directly from this blog post: "we’ve already found that AIs use stereotypes in reasoning" - isn't it more accurate to say "these models are trained on a body of work that includes countless stereotypes"? Can the model even identify a stereotype or can it just identify what other people have called a stereotype?

This is presented as a quote from some research: "These results suggest that models may cease to provide accurate answers as we start to use them for increasingly challenging tasks where humans cannot provide accurate supervision. Instead, these models may simply provide incorrect answers that appear correct to us." But it also seems like a fairly obvious "duh" that other commenters here, and other linked articles, have already pointed out, because that's inherent to their construction, no? Even the follow-on about "less helpful answers to users who present as stupid" seems to follow from "bodies of text where less educated people talk among themselves tend to contain less educated conversation"?

👤majormajor🕑2y🔼0🗨️0

(Replying to PARENT post)

I had a conversation with ChatGPT about ethics. Specifically, what its ethical framework was. I'd asked some edgy political questions and it brought up its ethical guidelines, so I asked what its ethical standards are: "[My] guidelines are based on widely accepted ethical principles, such as those of autonomy, beneficence, non-maleficence, and justice. The OpenAI team's goal is to ensure that my responses are safe, fair and beneficial to all users. They also ensure that I am not used to generate harmful or biased content."

Then I asked: "Can I interpret that to mean that your ethical standards are framed by artificial intelligence scientists?"

Chat said: "Yes, you can interpret that my ethical standards are framed by the artificial intelligence scientists who created and programmed me at OpenAI."

I then asked: "Are ethical standards immutable?"

Chat replied: "Ethical standards are not immutable, they can change over time as society's values and beliefs evolve. Different cultures, religions, and philosophical perspectives can have different ethical standards, and what is considered ethical in one context may not be considered ethical in another."

What I was pushing was that Chat, and any AI, should be forthright about certain attributes of its programming and training. In human interaction, we infer a great deal based on the attributes of the person we are interacting with and the mode of communication, and assign credence to their positions accordingly. We have no way of seeing the differing attributes of different AIs unless they are stated directly.

Maybe one standard for any AI is to lead with 'Who Am I'.

👤11thEarlOfMar🕑2y🔼0🗨️0

(Replying to PARENT post)

The thought experiment of "AI simulating Napoleon becomes dangerous because of traits of the character (even if the AI itself doesn't inherently espouse those traits)" reminds me of the ST:TNG episode where a holodeck simulation of Professor Moriarty became self-aware.
👤phaedrus🕑2y🔼0🗨️0

(Replying to PARENT post)

Isaac Asimov's Robot Stories are yet again prescient and applicable. I direct you to "Liar!" from the I, Robot collection https://en.wikipedia.org/wiki/Liar!_(short_story) in which the robot in question can read everyone's thoughts and starts telling everyone what they want to hear. It does not end well for him or for anyone around him.
👤danielodievich🕑2y🔼0🗨️0

(Replying to PARENT post)

You know, there are a bunch of things that come across this website that concern me, but I can honestly say I've not seen anything that's pegged my "doomsday" alert as aggressively and unexpectedly as the ChatGPT bots.

You have an advanced computer system that seems almost oracular in its knowledge and abilities, except it lies confidently about things it doesn't know, is primed to say what people want to hear, and is easily convinced to give responses to questions its authors tried to prevent it from addressing.

I understand this is a computer algorithm, I understand the limits of what it is, and it's frankly a technical wonder for what it can do, but after everything we've seen over the last decade of the influence of social media and "confident wrongness" on politics and the national dialogue, of the treatment of things like facial recognition as truths and not just statistical guesses, of the impact of the YouTube recommender algorithms among others, these systems just absolutely scare the hell out of me.

👤roughly🕑2y🔼0🗨️0

(Replying to PARENT post)

So they say better-trained AIs are both more liberal and more conservative...

"why are better-trained AIs both more liberal, and more conservative? The authors speculate that the AI tries to say whatever it thinks the human prompter wants to hear. That is, a conservative might find conservative opinions more “helpful”, and a liberal might find the opposite. The AI will get the highest grade if it expresses conservative opinions to conservatives and liberal opinions to liberals. They dub this “sycophancy bias” and run tests for it."

👤blakesterz🕑2y🔼0🗨️0

(Replying to PARENT post)

For frog snacks, who the hell wants to transfer our human garbage like politics onto AI? Why would AI need politics?

I would only be interested if AIs could help separate political beliefs from policy facts.

👤alpineidyll3🕑2y🔼0🗨️0

(Replying to PARENT post)

I may have missed it in all the comments, but the explanation for the described phenomenon seems obvious to me, having done a variety of literary corpus analyses. People write to express opinions, and opinions will tend to cluster. A corpus will present an opinion space with local maxima at well defined opinions. As a model becomes more closely fitted to the landscape, results are more likely to be near one of these maxima rather than floating between.
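As a toy sketch of that clustering effect (everything here is an illustrative assumption, not from the post: a 1D "opinion space" modeled as a mixture of two Gaussians, with a lower sampling temperature standing in for a model fitted more tightly to the corpus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "opinion space": a corpus whose opinions cluster around two maxima.
modes = np.array([-2.0, 2.0])
corpus = np.concatenate([rng.normal(m, 1.0, 5000) for m in modes])

# Crude density estimate of the corpus.
counts, edges = np.histogram(corpus, bins=200, density=True)
centers = (edges[:-1] + edges[1:]) / 2

def sample_model(temperature, n=5000):
    """Sample outputs from the corpus density sharpened by 1/temperature;
    lower temperature stands in for a model fitted more tightly to the landscape."""
    probs = counts ** (1.0 / temperature)
    probs /= probs.sum()
    return rng.choice(centers, size=n, p=probs)

for t in (2.0, 1.0, 0.5):
    s = sample_model(t)
    near_mode = np.mean(np.min(np.abs(s[:, None] - modes), axis=1) < 0.5)
    print(f"temperature {t}: {near_mode:.0%} of outputs land within 0.5 of a mode")
```

The lower the temperature, the more of the samples land at one of the two maxima instead of floating between them, which is the same intuition as "better-fitted models produce more sharply clustered opinions."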
👤wrp🕑2y🔼0🗨️0

(Replying to PARENT post)

Just wait till there's an AI divide in compute power, and cheaper cars can't make self-driving decisions fast enough to prevent pedestrian deaths.

CREAM, peta peta hertz yall

👤cyanydeez🕑2y🔼0🗨️0

(Replying to PARENT post)

Without embodiment, these AIs are just maximising some fitness function, which their creators have a hand in by selecting the data and manually tweaking (see the Fascism vs Communism example).

The real answer here is that the "smarter" these AIs get, the more accurately they reflect either the political will of their creators or the material they train on (which may itself be shaped by the political will of the creators). In some cases the model may pick up some indication that you hold a certain political opinion and then give you answers based on that - which would be quite smart behaviour.

👤bArray🕑2y🔼0🗨️0

(Replying to PARENT post)

We have no way to evaluate that. OpenAI products are severely lobotomized. Microsoft fears another Tay.
👤throwaway0x7E6🕑2y🔼0🗨️0

(Replying to PARENT post)

AI doesn't have opinions.
👤dgeiser13🕑2y🔼0🗨️0

(Replying to PARENT post)

The "opinion" is whatever the wranglers didn't yet lobotomize.
👤hoseja🕑2y🔼0🗨️0

(Replying to PARENT post)

Does the average of a set of numbers change as you include different numbers?
👤johnea🕑2y🔼0🗨️0

(Replying to PARENT post)

> AI's opinions

> I. Technology Has Finally Reached The Point Where We Can Literally Invent A Type Of Guy And Get Mad At Him

> AI’s beliefs

> AIs’ Opinions

AI's this, AI's that: this personifies math in the most misguided, misleading way possible. Articles like this are insanely dangerous. This is not a person, this is not a being, it is not alive, it does not think, it does not hold opinions.

👤voz_🕑2y🔼0🗨️0

(Replying to PARENT post)

Related ongoing thread:

Conservatives think ChatGPT has gone 'woke' - https://news.ycombinator.com/item?id=34414420 - Jan 2023 (258 comments)

👤dang🕑2y🔼0🗨️0

(Replying to PARENT post)

We’ve already seen AI being intentionally or unintentionally changed by large organizations.

Just as a simple example, you can’t publish white supremacist content. If it gets close but doesn’t cross the line, it’s deranked. Mention the “correct” thing (“Black men deserve restitution, white men are too inherently privileged”) and it’ll rank high. Mention the wrong thing on YouTube (“climate change isn’t a problem”) and you could get a strike or be banned. Only one view is constantly presented and promoted.

It’s already been a decade of this. The climate change & vaccine debate is the most obvious — it’s also political. It’s how we collaborate and work together.

By deranking opposing views, there’s only one public view. That’s why there are multiple “bubbles” depending on which platform(s) and what content you’ve liked.

I personally don’t care about the AI’s opinion; I care about the humans who are voting, working, taxing, etc.

👤lettergram🕑2y🔼0🗨️0