How do AIs' political opinions change as they get smarter and better-trained?
(astralcodexten.substack.com)
(Replying to PARENT post)
Then I asked: "Can I interpret that to mean that your ethical standards are framed by artificial intelligence scientists?"
Chat said: "Yes, you can interpret that my ethical standards are framed by the artificial intelligence scientists who created and programmed me at OpenAI."
I then asked: "Are ethical standards immutable?"
Chat replied: "Ethical standards are not immutable, they can change over time as society's values and beliefs evolve. Different cultures, religions, and philosophical perspectives can have different ethical standards, and what is considered ethical in one context may not be considered ethical in another."
What I was pushing at was that Chat, and any AI, should be forthright about certain attributes of its programming and training. In human interaction, we infer a great deal from the attributes of the person we are interacting with and their mode of communication, and we assign credence to their positions accordingly. We have no way of seeing the differing attributes of different AIs unless they are stated directly.
Maybe one standard for any AI is to lead with 'Who Am I'.
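One way to picture that standard: a small structured preamble the AI surfaces before any conversation. A minimal sketch, assuming invented field names (no such standard exists today, and nothing here is an actual OpenAI interface):

    # Hypothetical "Who Am I" preamble: the attributes a user would otherwise
    # have to infer, stated up front. All field names are invented for
    # illustration; this is not an existing standard or API.
    WHO_AM_I = {
        "developer": "OpenAI",              # whose scientists framed its ethics
        "training_data_cutoff": "<date>",   # how stale its knowledge may be
        "tuning_method": "human feedback",  # whose preferences shaped its answers
        "known_failure_modes": [
            "confident fabrication",
            "telling users what they want to hear",
        ],
    }

    def who_am_i_banner(manifest: dict) -> str:
        """Render the manifest as the first message of every conversation."""
        lines = [f"{key}: {value}" for key, value in manifest.items()]
        return "Who am I?\n" + "\n".join(lines)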
(Replying to PARENT post)
You have an advanced computer system that seems almost oracular in its knowledge and abilities, except it lies confidently about things it doesn't know, is primed to say what people want to hear, and is easily convinced to give responses to questions its authors tried to prevent it from addressing.
I understand this is a computer algorithm, I understand the limits of what it is, and it's frankly a technical wonder for what it can do. But after everything we've seen over the last decade (the influence of social media and "confident wrongness" on politics and the national dialogue, the treatment of things like facial recognition as truth rather than just statistical guesses, the impact of the YouTube recommender algorithm, among others), these systems absolutely scare the hell out of me.
(Replying to PARENT post)
"why are better-trained AIs both more liberal, and more conservative? The authors speculate that the AI tries to say whatever it thinks the human prompter wants to hear. That is, a conservative might find conservative opinions more “helpful”, and a liberal might find the opposite. The AI will get the highest grade if it expresses conservative opinions to conservatives and liberal opinions to liberals. They dub this “sycophancy bias” and run tests for it."
(Replying to PARENT post)
I would only be interested if AIs could help separate political beliefs from policy facts.
(Replying to PARENT post)
CREAM, peta peta hertz, y'all
(Replying to PARENT post)
The real answer here is that the "smarter" these AIs get, the more accurately they reflect either the political will of their creators or the material they are trained on (which may itself be shaped by the political will of the creators). In some cases the AI may pick up an indication that you hold a certain political opinion and then tailor its answers accordingly, which would be quite smart behaviour.
(Replying to PARENT post)
> I. Technology Has Finally Reached The Point Where We Can Literally Invent A Type Of Guy And Get Mad At Him
> AI’s beliefs
> AIs’ Opinions
AI's this, AI's that. This personifies math in the most misguided, misleading way possible. Articles like this are insanely dangerous. This is not a person, this is not a being, it is not alive, it does not think, and it does not hold opinions.
(Replying to PARENT post)
Conservatives think ChatGPT has gone 'woke' - https://news.ycombinator.com/item?id=34414420 - Jan 2023 (258 comments)
(Replying to PARENT post)
Just as a simple example, you can’t publish white supremacist content. If it gets close but doesn’t cross the line, it’s deranked. Mention the “correct” thing (“Black men deserve restitution, white men are too inherently privileged”) and it’ll rank high. Mention the wrong thing on YouTube (“climate change isn’t a problem”) and you could get a strike or be banned. Only one view is constantly presented and pushed.
It’s already been a decade of this. The climate change and vaccine debates are the most obvious examples, and they’re also political: politics is how we collaborate and work together.
By deranking opposing views, there’s only one public view. That’s why there are multiple “bubbles” depending on which platform(s) and what content you’ve liked.
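To make the mechanism concrete, a toy sketch of what "deranking" amounts to (the topic list and penalty weights are invented placeholders, not any platform's actual policy):

    # Toy "deranking": borderline content is not removed, its feed score is
    # just quietly multiplied down. Topics and weights are invented examples.
    BORDERLINE_PENALTIES = {"climate skepticism": 0.2, "vaccine criticism": 0.3}

    def feed_score(engagement: float, topics: list[str]) -> float:
        score = engagement
        for topic in topics:
            score *= BORDERLINE_PENALTIES.get(topic, 1.0)  # silent penalty
        return score

    print(feed_score(1000.0, []))                      # 1000.0 -> ranks high
    print(feed_score(1000.0, ["climate skepticism"]))  # 200.0  -> buried

Two posts with identical engagement land in very different feed positions, and neither author is told why.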
I personally don’t care about the AI’s opinion; I care about the humans who are voting, working, paying taxes, etc.
(Replying to PARENT post)
"What can I get this thing to spit out with a certain prompt" is very meaningfully not the same as "an AI's opinion." Or this quote pulled directly from this blog post: "we’ve already found that AIs use stereotypes in reasoning" - isn't it more accurate to say "these models are trained on a body of work that includes countless stereotypes"? Can the model even identify a stereotype or can it just identify what other people have called a stereotype?
This is presented as a quote from some research: "These results suggest that models may cease to provide accurate answers as we start to use them for increasingly challenging tasks where humans cannot provide accurate supervision. Instead, these models may simply provide incorrect answers that appear correct to us." But that seems like a "duh", as other commenters here and other linked articles have already said, because it's inherent to their construction, no? Even the follow-on about "less helpful answers to users who present as stupid" seems to follow from the fact that bodies of text showing how less educated people communicate with each other tend to contain less educated conversation.