👤freediver🕑2y🔼131🗨️124

(Replying to PARENT post)

As LeCun always says: the term "AGI" is a bit ill-formed. What does "general" really mean? Human intelligence is not really general; it's actually very, very specialized for the world we live in. So he prefers the term "human-level intelligence", or maybe super-human-level at some point.

Another implication: whether you have AGI or not isn't really a yes/no property. It's a level of intelligence, and that level is continuous. It's also not really one-dimensional.

What does human-level intelligence mean? It's still not very well defined. You would need to define a number of tests that measure performance, and the model needs to perform at least as well as a human on average.

So, I assume this is what the author means by "AGI-hard": a problem which requires a model to be at least as good as a human across a wide range of tests.

But I don't think this is necessary for driving. You actually want it to be much better on some tests (computer vision, reaction time, etc.), while many other tests don't really matter (e.g. logical reasoning, speech recognition). So the autonomous driving problem is not AGI-hard.
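
To make that concrete, here's a minimal sketch of the two different pass criteria; all scores, weights, and the safety margin are toy numbers of my own, not anything from the article or real benchmarks:

```python
# Toy sketch: "human-level" as an average over a whole test battery,
# vs. "good enough for driving" as a weighted check that ignores some
# tests entirely. Every number here is made up for illustration.

HUMAN_BASELINE = {"vision": 0.90, "reaction": 0.80, "reasoning": 0.95}

def human_level(scores):
    # AGI-flavoured criterion: at least the human average across
    # the whole battery.
    avg = lambda d: sum(d.values()) / len(d)
    return avg(scores) >= avg(HUMAN_BASELINE)

def driving_ready(scores, margin=1.1):
    # Driving-flavoured criterion: must clearly beat humans on the
    # tests that matter; reasoning is weighted zero, per the comment.
    weights = {"vision": 1.0, "reaction": 1.0, "reasoning": 0.0}
    return all(scores[t] >= margin * HUMAN_BASELINE[t]
               for t, w in weights.items() if w > 0)

car = {"vision": 0.99, "reaction": 0.99, "reasoning": 0.30}
print(human_level(car))    # False: not human-level across the board
print(driving_ready(car))  # True: superhuman where it matters
```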

👤albertzeyer🕑2y🔼0🗨️0

(Replying to PARENT post)

This article is a bit too hand-wavy for my taste. As a buzzword, "AGI-hard" sounds cool, but until intelligence is more strictly defined (and the modifiers "artificial" and "general" aren't helping), AGI will always be something we talk about rather than something known.

I was also a bit amused at the list of things the author claims to be AGI-hard. It's really just a list of things we think are difficult. Physics? Calculus? Programming other AI agents?

Why not add chess to the list? The fact that we already have an algorithm that enables computers to teach themselves to play chess is not an a priori reason to exclude it.

👤janalsncm🕑2y🔼0🗨️0

(Replying to PARENT post)

I find it interesting that everyone is arguing about AGI-hard or AI-complete or whatever and trying to find where those boundaries are. Those are interesting problems for sure.

But that's not what he's talking about; it's not the point. The scary thing, and the important point of the article, is that most of the press and non-technical people are being sold the idea that nothing is AGI-hard or requires human intelligence, whatever that is.

Large swaths of the press and so-called "tech" people in business think all these hard problems are already solved. And they use this to seek investment or pump up companies in weird ways, only to be mystified later when the AI doesn't work like they thought it would.

It's like arguing about whether something is NP-complete or not when a bunch of businesses are taking lots of money from investors with a pitch that makes it sound like they have figured out a way to efficiently solve all problems that were previously thought to be NP-complete. But they've shown no evidence they can do so.

👤ben7799🕑2y🔼0🗨️0

(Replying to PARENT post)

I'm not sold on the concept of AI-hard or AI-complete problems.

For example, the Wikipedia article on AI-completeness mentions Bongard problems and autonomous driving as examples of problems that might be AI-complete.

OK, so if I have an AI that drives autonomously, is there some known querying strategy I can use to make it solve Bongard problems? Can a Bongard-problem-solving AI be made, by some known procedure, to drive a car?

Without such reductions, the analogy to NP-hardness is at best incomplete. I believe these reductions are precisely what makes NP-hardness such a useful concept: even though we still haven't proven that any of these problems are objectively "hard," we can still show that if one of them is hard, then the others are as well!
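
The missing ingredient has a simple shape, though. Here's a hedged sketch in generic types (purely hypothetical; no such reduction between Bongard problems and driving is actually known) of why a reduction is so useful: a solver for B plus a reduction from A to B immediately gives you a solver for A:

```python
from typing import Callable, TypeVar

A = TypeVar("A")    # instances of problem A (e.g. Bongard problems)
B = TypeVar("B")    # instances of problem B (e.g. driving scenarios)
RA = TypeVar("RA")  # answers to A
RB = TypeVar("RB")  # answers to B

def solve_via_reduction(
    encode: Callable[[A], B],     # reduction: A-instance -> B-instance
    decode: Callable[[RB], RA],   # map B's answer back to an A-answer
    solver_b: Callable[[B], RB],  # assumed oracle for problem B
) -> Callable[[A], RA]:
    # If encode/decode exist for two problems, then solving B is
    # at least as hard as solving A.
    return lambda instance: decode(solver_b(encode(instance)))
```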

👤hakuseki🕑2y🔼0🗨️0

(Replying to PARENT post)

I like the concept of AGI-hard, and the characterization of the common traps of AI productization feels accurate.

One shortcoming of the analogy is that we have methods to prove when a problem is NP-hard. Are there ways to prove a problem is AGI-hard? Can it even be rigorously characterized? Relying on someone asserting it on Twitter feels unsatisfying (e.g. how accurate would experts have been at predicting the current capabilities of AI if you had asked them 10 years ago? Not very, I think).

👤throwaway_5753🕑2y🔼0🗨️0

(Replying to PARENT post)

https://arxiv.org/abs/cs/0605024 gives a formal model of intelligence (achieving high reward in an agent/environment setup, across all computable environments weighted by their Kolmogorov complexity)

https://arxiv.org/abs/1109.5951 gives a computable approximation (dubbed AIQ, Algorithmic Intelligence Quotient), and a reference implementation using Brainfuck programs (personally I would prefer Binary Combinatory Logic or Binary Lambda Calculus ;) )
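
To give a feel for the idea, here's a heavily simplified sketch of an AIQ-style estimate; the toy repeating-bit-pattern environments are my own simplification, not the paper's Brainfuck reference implementation:

```python
import random

# Sketch of the AIQ idea from arXiv:1109.5951: score an agent by its
# average reward over randomly sampled computable environments,
# weighted by 2^-(program length) as a computable stand-in for the
# Kolmogorov-complexity prior.

def sample_program(max_len=4):
    # "Program" = a short bit pattern; the environment repeats it.
    n = random.randint(1, max_len)
    return [random.randint(0, 1) for _ in range(n)]

def episode_reward(agent, pattern, steps=32):
    # Reward the agent for predicting each next bit of the stream.
    history, reward = [], 0
    for t in range(steps):
        bit = pattern[t % len(pattern)]
        reward += agent(history) == bit
        history.append(bit)
    return reward / steps  # normalized to [0, 1]

def aiq_estimate(agent, samples=20_000):
    total = norm = 0.0
    for _ in range(samples):
        p = sample_program()
        w = 2.0 ** -len(p)  # shorter programs dominate, as in AIQ
        total += w * episode_reward(agent, p)
        norm += w
    return total / norm

# A "repeat the last bit" agent beats a random guesser on this prior.
copy_last = lambda h: h[-1] if h else 0
guess = lambda h: random.randint(0, 1)
print(aiq_estimate(copy_last), aiq_estimate(guess))  # ~0.76 vs ~0.50
```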

👤chriswarbo🕑2y🔼0🗨️0

(Replying to PARENT post)

The goal posts aren't moving, or at least no more than in any other technology.

The graphics of the N64 were super realistic at the time, and people will complain that COD games from 3 years ago look dated.

When will graphics be done? When they are higher fidelity than what we can perceive, so that we can no longer tell the difference given unlimited viewing time and input. When will AGI be done? When a human can no longer distinguish between a computer and a human, given unlimited time and input.

But then I guess there'd be no article writing left to do, is there?

👤drdrek🕑2y🔼0🗨️0

(Replying to PARENT post)

[author here] oh wow, surprised to see this here! it flopped the first time I posted it. questions/corrections/additions welcome!
👤swyx🕑2y🔼0🗨️0

(Replying to PARENT post)

Here are simpler definitions, I think:

An AI system is general and "adult"-level if it can improve its own performance on some task without a human having to teach it how. In other words, can it correctly decide on its own when it needs practice (skill acquisition) vs. knowledge acquisition (filling information gaps), and figure out how to acquire that (or make clear, explicit, detailed requests for what it needs from a human, and maybe negotiate alternatives if the first ask isn't available)? Right now we have, at best, "baby" AI, where we have to spoon-feed it everything and the result is coherent but nonsensical speech. Even if we made it sensical, that would still fail the generality piece unless the AI could guide the human on what could be done to make it better (and a level above that would be the AI exploring how to do that on its own).

Can it correctly distinguish knowledge that has merely been learned from knowledge that has been verified, and figure out when learned knowledge should ideally be verified to double-check its quality? In other words, can it acquire a skill it wasn't programmed for, from scratch, without losing its ability to perform similarly on other tasks?
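
Read literally, those two criteria describe a control loop. Below is a toy, hedged sketch of that loop; every class, field, and threshold is made up by me for illustration, and nothing like this exists today:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    skills: dict = field(default_factory=dict)     # skill -> proficiency
    knowledge: dict = field(default_factory=dict)  # fact -> "learned"/"verified"

    def diagnose(self, task):
        # Toy heuristic: unknown facts are knowledge gaps,
        # low-proficiency skills are skill gaps.
        for fact in task["facts"]:
            if fact not in self.knowledge:
                return ("knowledge", fact)
        for skill in task["skills"]:
            if self.skills.get(skill, 0.0) < 0.5:
                return ("skill", skill)
        return ("ready", None)

    def improve(self, task):
        # The key "adult" move: choose study vs. practice by itself.
        kind, gap = self.diagnose(task)
        if kind == "knowledge":
            self.knowledge[gap] = "learned"  # still needs verification
        elif kind == "skill":
            self.skills[gap] = self.skills.get(gap, 0.0) + 0.25  # practice
        return kind

agent = Agent()
task = {"facts": ["traffic law"], "skills": ["lane keeping"]}
while agent.improve(task) != "ready":
    pass
print(agent.skills, agent.knowledge)
```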

An example of a concrete problem: can it go and analyze a bunch of academic papers looking for obvious fraud, but also notice when an entire field is based on unreproduced and shaky/conflicting results?

Of course, "AGI-hard" problems can be solved in one of two ways: humans building more and more capable AI systems that chip away at the problems, or AGI systems building more general, smarter, faster AI systems. The whole dream of the singularity is that we build the latter, because that basically creates a second intelligent life form we can converse with on some level (although we likely won't be able to understand anything sufficiently complex that it tries to explain to us, for the same reason humans can't understand the chess moves AI engines are making anymore).

👤vlovich123🕑2y🔼0🗨️0

(Replying to PARENT post)

> Can AI invent calculus from first principles?

The thing is, most people can't. In fact, if you reinvented calculus yourself, you should be very proud of your mathematical ability.

👤sanxiyn🕑2y🔼0🗨️0

(Replying to PARENT post)

Isn't AGI-hard everything that seems rather simple for animals? I'm thinking of robotics and navigation in unknown environments.

Everything else, like chess, text, art, and games, seems to be solvable by current AI systems. Robotics seems much, much more difficult.

👤MichaelRazum🕑2y🔼0🗨️0

(Replying to PARENT post)

I occasionally wonder what would be examples of problems/questions that would be easy for a post-singularity superhuman AGI but very hard or impossible for humans. Not in the sense of how fast the problem is solved, but in the sense of whether the question/answer can even be understood.

So far I have come up with high-dimensional spatial awareness (I'm not sure how well we could grasp what happens in 1000 dimensions).
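
As a small illustration of how untrustworthy our intuition already is (my example, not the commenter's): the ball inscribed in a unit cube takes up essentially none of the cube's volume as the dimension grows, which looks nothing like the 2D/3D picture in our heads:

```python
import math

# Fraction of the unit cube's volume occupied by the inscribed
# radius-1/2 ball: V_d(r) = pi^(d/2) / Gamma(d/2 + 1) * r^d.
def inscribed_ball_fraction(d):
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * 0.5 ** d

for d in (2, 3, 10, 100):
    print(d, inscribed_ball_fraction(d))
# 2 -> 0.785..., 3 -> 0.524..., 10 -> 0.0025, 100 -> ~1.9e-70
```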

👤beefield🕑2y🔼0🗨️0

(Replying to PARENT post)

If AGI is a class of problems that require general intelligence, then AGI-hard refers only to the hardest problems, the ones that any problem in AGI can be translated (reduced) to.

If we assume that human-level problems are in AGI, then solving an AGI-hard problem would require being at least as intelligent as a human, and possibly more intelligent.

I don't believe most people have "superior to humans" in mind when they say AGI-hard. They just mean a hard problem that is also in AGI.
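
Spelled out in NP-style notation (my formalization of this reading, not anything from the article):

```latex
% H is AGI-hard iff every problem in AGI reduces to it;
% AGI-complete additionally requires H to be in AGI itself.
\[
H \text{ is AGI-hard} \iff \forall P \in \mathrm{AGI} : P \le H
\]
\[
H \text{ is AGI-complete} \iff H \in \mathrm{AGI} \;\wedge\; H \text{ is AGI-hard}
\]
```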

👤imtringued🕑2y🔼0🗨️0

(Replying to PARENT post)

>> Software engineers and computer scientists already have a tool for understanding the limits of infinite spaces - the P vs NP problem - as well as an accepted wisdom that there is a known area of algorithm research that is too unrealistic to be productive, because working on it would be functionally equivalent to proving P = NP.

Not to be contrary, but P ≠ NP is only conjectured, and some computer scientists do indeed expect that a proof of P = NP will eventually be demonstrated.

Not me, actually, but for example a computer science author called Donald Knuth [1] thinks that P = NP, that a proof will eventually become known, and that even so it's not going to be terribly useful to anyone:

https://www.youtube.com/watch?v=XDTOs8MgQfg

______

[1] Yes I know. But just in case.

👤YeGoblynQueenne🕑2y🔼0🗨️0

(Replying to PARENT post)

What mechanism behind intelligence could only be found in the natural world but not made into a machine?

That's the thought experiment I like in this arena. It's not useful for identifying a time to AGI, but it makes me believe AGI is possible eventually.

Trillions of connections in the brain? Okay, we can make networks with trillions of parameters. A body with sensory information? We can add that. Emotions via an endocrine system? Separate interconnected subsystems? All workable in some form. A pre-encoded prior of a billion years of evolution? That one is harder, but if we're working at GHz rather than 4 Hz, can we get close enough to bootstrap?

In a fully materialist interpretation of the world, I don't see anything stopping AGI from being possible.

👤ohwellhere🕑2y🔼0🗨️0

(Replying to PARENT post)

I dislike using hard-CS-like language for vague concepts in order to make them sound more scientific.
👤substation13🕑2y🔼0🗨️0

(Replying to PARENT post)

I hereby endorse ChatGPT as a limited form of AGI. It can complete a variety of novel cognitive tasks (example: judging texts and proposals) often enough that it meets my personal requirements for this.

My opinion is that the definitions which others believe disqualify ChatGPT from being considered AGI are not relevant, since they focus on things it fails at rather than what it can do. For example, when it tests as well as average human students, people focus on its mistakes rather than its achievement of applying general forms of intelligence to succeed at that level. I've also personally interacted with it as an intelligent conversational partner with severe limitations.

It's not that it can do everything, but that it does things requiring what meets my personal definition of general intelligence often enough to qualify as limited AGI. You can have your own definition. Maybe that means never dropping the ball. Maybe that means getting 100% on tests. That's not my definition. In my opinion, ChatGPT is, remarkably, a general form of intelligence, able to generate novel opinions and judgments about all sorts of situations, as well as sometimes briefly appearing to learn things. Sure it has limitations, and sure it isn't human in its abilities, but it passes my tests often enough that I personally consider AGI a milestone humanity successfully passed with the release of ChatGPT on November 30, 2022.

As someone who personally uses ChatGPT's general intelligence on a daily basis and has not encountered issues with its cognitive function that are serious enough to stop me from continuing to do so, I can confidently say that ChatGPT meets my definition of limited AGI.

👤logicallee🕑2y🔼0🗨️0

(Replying to PARENT post)

King's Quest 2 is AGI-hard. I have no idea how anyone was supposed to find the required items hidden away in any of hundreds of trees in the game. It was definitely a different time for video games back then.
👤suprjami🕑2y🔼0🗨️0

(Replying to PARENT post)

Wikipedia: https://en.wikipedia.org/wiki/AI-complete

("AI-hard" redirects to this)

👤palad1n🕑2y🔼0🗨️0

(Replying to PARENT post)

Weird article. After reading most of it, I still don't know what AGI or AGI-hard is.

Going to feed it into ChatGPT and have it explain it to me...

👤xeyownt🕑2y🔼0🗨️0

(Replying to PARENT post)

As always, the set of things that can't be done keeps shrinking.
👤jasfi🕑2y🔼0🗨️0