(Replying to PARENT post)
I was also a bit amused at the list of things the author claims to be AGI-hard. But this is just a list of things we think are difficult. Physics? Calculus? Programming other AI agents?
Why not add chess to the list? The fact that we already have an algorithm that enables computers to teach themselves to play chess is not an a priori reason to exclude it.
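For the curious, the rough shape of that self-play idea (just a sketch in the AlphaZero spirit, not the actual algorithm; play_game and update_from_games are placeholder functions):

    # Rough shape of self-play training for a game like chess.
    # play_game and update_from_games are placeholders, not real APIs.
    def self_play(model, play_game, update_from_games,
                  iterations=100, games_per_iter=32):
        for _ in range(iterations):
            # the model plays against itself and learns from its own games
            games = [play_game(model, model) for _ in range(games_per_iter)]
            model = update_from_games(model, games)
        return model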
(Replying to PARENT post)
But that's not what he's talking about, it's not the point. The scary thing and the important point in the article is that most of the press and non-technical people are being sold the idea that nothing is AGI-hard or requires Human Intelligence, whatever that is.
Large swaths of the press & so-called "tech" people in business think all these hard problems are already solved. And they use this to seek investment or pump up companies in weird ways, only to be mystified later when the AI success doesn't materialize the way they thought it would.
It's like arguing about whether something is NP-complete or not when a bunch of businesses are taking lots of money from investors with a pitch that makes it sound like they have figured out a way to efficiently solve all problems that were previously thought to be NP-complete. But they've shown no evidence they can do so.
(Replying to PARENT post)
For example, the Wikipedia article on AI-completeness mentions Bongard problems and Autonomous driving as examples of problems that might be AI-complete.
OK, so if I have an AI that drives autonomously, is there some known querying strategy that I can use to make it solve Bongard problems? Can a Bongard-problem-solving AI be made, by some known procedure, to drive a car?
Without such reductions, at least the analogy to NP-hardness is incomplete. I believe these reductions are precisely what makes NP-hardness such a useful concept; even though we still haven't proven that any of these problems are objectively "hard," we are still able to show that if one of them is hard, then the others are as well!
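For what it's worth, the whole machinery is just this (a generic sketch with hypothetical names): a reduction turns any instance of problem A into an equivalent instance of problem B, so a solver for B immediately yields a solver for A, and hardness of A transfers to B.

    # Why reductions make hardness transfer (schematic, hypothetical names):
    # if reduce_A_to_B maps any instance of problem A to an equivalent
    # instance of problem B, then a solver for B also solves A.
    def solve_A(instance_a, reduce_A_to_B, solve_B):
        instance_b = reduce_A_to_B(instance_a)
        return solve_B(instance_b)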
(Replying to PARENT post)
One shortcoming of the analogy is that we have methods to prove when a problem is NP-hard. Are there ways to prove a problem is AGI-hard? Can it even be rigorously characterized? Relying on someone asserting it on Twitter feels unsatisfying (e.g. how accurate would experts have been at predicting the current capabilities of AI if you asked them 10 years ago? I think not very).
(Replying to PARENT post)
https://arxiv.org/abs/1109.5951 gives a computable approximation (dubbed AIQ, Algorithmic Intelligence Quotient), and a reference implementation using Brainfuck programs (personally I would prefer Binary Combinatory Logic or Binary Lambda Calculus ;) )
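My reading of the paper's idea, very roughly (a sketch, not their reference implementation; the function names here are placeholders): sample random programs on a reference machine (e.g. Brainfuck) as environments, and average the agent's reward over them.

    # Monte Carlo flavour of the AIQ estimate (schematic only):
    def estimate_aiq(agent, sample_random_program, run_episode, n=1000):
        total = 0.0
        for _ in range(n):
            env = sample_random_program()      # placeholder: random BF program as environment
            total += run_episode(agent, env)   # placeholder: interaction loop, returns reward
        return total / n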
(Replying to PARENT post)
The graphics of the N64 were super realistic at the time, and people will complain that COD games from 3 years ago look dated.
When will graphics be done? When they are higher fidelity than what we can perceive and we can no longer distinguish them from reality, with no limitations on viewing time and input. When will AGI be done? When a human can no longer distinguish between a computer and a human, without limitations on time and input.
But I guess then there is no article writing left to do, is there?
(Replying to PARENT post)
(Replying to PARENT post)
An AI system is general and "adult"-level if it can improve its own performance on some task without a human having to teach it how to do that. In other words, can it correctly decide on its own when it needs practice (skill acquisition) vs. knowledge acquisition (filling information gaps), and figure out how to acquire that (or make clear, explicit, detailed requests for what it needs from a human, and maybe negotiate alternatives if the first ask isn't available)? Right now, at best we have "baby" AI where we have to spoon-feed it everything and the result is coherent but nonsensical speech. Even if we made it sensical, though, that would fail the generality piece unless the AI could guide the human on what could be done to make it better (and a level above that would be the AI exploring doing that on its own).
Can it correctly distinguish knowledge learned vs. verified, and figure out when learned knowledge should ideally be verified to double-check its quality? In other words, can it acquire a skill it wasn't programmed for from scratch, without losing its ability to perform similarly on other tasks?
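To make that concrete, the decision loop I have in mind looks something like this (a toy sketch; every name here is a placeholder, not an existing system):

    # Toy sketch of the "decide what it needs on its own" loop:
    def improve(agent, task):
        gap = agent.diagnose(task)             # placeholder: what is it missing?
        if gap == "skill":
            agent.practice(task)               # skill acquisition
        elif gap == "knowledge":
            agent.acquire_knowledge(task)      # fill information gaps
        else:
            agent.request_help(task)           # ask a human, negotiate alternatives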
A concrete example: can it analyze a bunch of academic papers looking for obvious fraud, but also detect when an entire field is based on unreproduced and shaky/conflicting results?
Of course, "AGI-hard" problems can be solved in one of two ways: humans building more and more capable AI systems that chip away at problems, or AGI systems building more general, smarter, faster AI systems. The whole dream of the singularity is that we build the latter, because that basically builds a second intelligent life form we can converse with on some level (although of course we likely won't be able to understand anything it tries to explain to us that's sufficiently complex, for the same reason humans can't understand the chess moves that AI engines are making anymore).
(Replying to PARENT post)
The thing is, most people can't. In fact, if you reinvented calculus yourself, you should be very proud of your mathematical ability.
(Replying to PARENT post)
Everything else, like chess, text, art, and games, seems to be solvable by current AI systems. Robotics seems much, much more difficult.
(Replying to PARENT post)
So far I have come up with high-dimensional spatial awareness (not sure how well we could grasp what happens in 1000 dimensions).
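A quick numerical illustration of how badly low-dimensional intuition transfers (assuming numpy/scipy are available): pairwise distances between random points concentrate around a single value as the dimension grows.

    # Pairwise distances between random points bunch together in high dimensions.
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    for d in (2, 1000):
        pts = rng.random((500, d))          # 500 random points in d dimensions
        dists = pdist(pts)                  # all pairwise Euclidean distances
        print(d, dists.min() / dists.max()) # ratio creeps toward 1 as d grows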
(Replying to PARENT post)
If we assume that humans are in AGI, then AGI-hard would mean at least as intelligent as humans, but possibly even more intelligent.
I don't believe most people have "superior to humans" in mind when they say AGI-hard. They just mean a hard problem that is also in AGI.
(Replying to PARENT post)
Not to be contrary, but P ≠ NP is only conjectured and some computer scientists do indeed expect that a proof of P = NP will eventually be demonstrated.
Not me, actually, but for example a computer science author called Donald Knuth [1] thinks that P = NP and that a proof will eventually become known, but even so it's not going to be terribly useful to anyone:
https://www.youtube.com/watch?v=XDTOs8MgQfg
______
[1] Yes I know. But just in case.
(Replying to PARENT post)
That's the thought experiment I like in this arena. It's not useful for identifying a time to AGI, but it makes me believe AGI is possible eventually.
Trillions of connections in the brain? Okay, we can make networks with trillions of parameters. Body with sensory information? We can add that. Emotions via an endocrine system? Separate interconnected subsystems? All workable in some form. A pre-encoded prior of a billion years of evolution? That one is harder, but if we're working in GHz rather than 4 Hz, can we get close enough to bootstrap?
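The back-of-envelope arithmetic behind that hand-waving, using the ballpark figures above (rough public estimates, not precise neuroscience):

    brain_connections = 1e14   # "trillions of connections" (~100T synapses is a common estimate)
    model_parameters  = 1e12   # trillion-parameter networks already exist
    brain_rate_hz     = 4      # the biological update rate used above
    silicon_rate_hz   = 1e9    # GHz-scale hardware

    print(brain_connections / model_parameters)  # ~100x gap in raw connection count
    print(silicon_rate_hz / brain_rate_hz)       # ~2.5e8x gap in update rate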
In a fully materialist interpretation of the world, I don't see anything stopping AGI from being possible.
(Replying to PARENT post)
(Replying to PARENT post)
As someone who personally uses ChatGPT's general intelligence on a daily basis and has not encountered issues with its cognitive function that are serious enough to stop me from continuing to do so, I can confidently say that ChatGPT meets my definition of limited AGI.
(Replying to PARENT post)
(Replying to PARENT post)
(AI hard redirected to this)
(Replying to PARENT post)
Going to feed it into ChatGPT and have it explain it to me...
(Replying to PARENT post)
Another implication: It's not really a yes/no property, whether you have AGI or not. It's a level of intelligence. And this level is continuous. And it's also not really one-dimensional.
What does human-level intelligence mean? It's still not very well defined. You would need to define a number of tests which measure the performance, and the model needs to perform at least as well as a human on average.
So, I assume this is what the author means by "AGI-hard": A problem which requires some model to be at least as good as a human in a wide number of tests.
But I don't think this is necessary for driving. You actually want it to be much better on some tests (computer vision, reaction time, etc.), and many other tests don't really matter (e.g. logical reasoning, speech recognition, and so on). So the autonomous driving problem is not AGI-hard.
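As a toy version of the battery-of-tests definition above (test names and scores are made up, and averaging is only one of many reasonable aggregation choices):

    # "At least as good as a human on average across a battery of tests."
    # (made-up test names; a per-test or weighted criterion is another option,
    # which is closer to what driving actually needs)
    def human_level(model_scores, human_scores):
        avg_model = sum(model_scores.values()) / len(model_scores)
        avg_human = sum(human_scores.values()) / len(human_scores)
        return avg_model >= avg_human

    print(human_level({"vision": 0.9, "reasoning": 0.4}, {"vision": 0.7, "reasoning": 0.8}))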