(Replying to PARENT post)
Let’s say it can simulate the theory of computation better than 99% of the population and can very capably synthesize and infer from any text-based sources. I think that would shake the world, and it wouldn’t even need to be near AGI.
(Replying to PARENT post)
I don't think so; the scaling laws haven't failed so far. I fully expect that making the model bigger and training it on more data will make it better at logic.
For a nice example with image models, Scott Alexander made a bet that newer image models would be able to do the things that Dall-E 2 got wrong. [1] (That post also discusses how GPT-3 could do many things that GPT-2 got wrong.) He won the bet three months later after getting access to Imagen. [2]
[1]: https://astralcodexten.substack.com/p/my-bet-ai-size-solves-... [2]: https://astralcodexten.substack.com/p/i-won-my-three-year-ai...
(Replying to PARENT post)
In fact it has just gotten closer.
Logical reasoning has been a pretty solid branch of AI since its inception. Robust solutions exist for most problems; there is even a programming language based on its principles (Prolog).
With ChatGPT there is now a system that can express the results from automatic logic reasoning in language.
The next step would be to combine the two, i.e. tell ChatGPT to explain the result of a logic reasoning program in natural language. It could of course also be asked to translate a natural language query into Prolog code.
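For concreteness, the kind of Prolog such a translation might target could look like this (a hand-written toy sketch, not actual ChatGPT output; the predicate names are made up):

    % Knowledge base produced from the statements
    % "All humans are mortal" and "Socrates is a human".
    mortal(X) :- human(X).
    human(socrates).

    % The natural language question "Is Socrates mortal?" becomes the goal:
    % ?- mortal(socrates).
    % true.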
Combining the two will probably require retraining the model, but the demo OpenAI has given us leaves little doubt that this is perfectly doable.
ChatGPT has the potential to plug the gap between GOFAI and natural language, which is quite a feat.
(Replying to PARENT post)
Yes, you are correct. The statement I made is not accurate. The specified rhyme scheme was AABA BBAB AABA BBAB, which means that each stanza should have the rhyme scheme AABA BBAB, with the first and second lines rhyming with each other, and the third and fourth lines rhyming with each other.
That said, I wouldn't be surprised if the next version was able to grok this.
(Replying to PARENT post)
> We are a very long way from AGI.
Let's not forget that computers are insanely good at bitwise computations. It's just a matter of time before someone adds a Coq/Lean style reinforcement to AI's learning capabilities.
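To make the Coq/Lean idea concrete: a proof assistant's kernel gives a binary, machine-checked verdict on whether a proof is valid, which is exactly the kind of unambiguous feedback a reinforcement signal needs. A minimal Lean 4 sketch of such a checkable statement (the theorem name is made up for illustration):

    -- The kernel either accepts or rejects this proof outright,
    -- so checking it yields an unambiguous pass/fail signal.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b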
(Replying to PARENT post)
Logic, however, transcends language. And it is clear that GPT-3 has absolutely no understanding of basic logic. It gives the impression of understanding logic by constructing sentences which are sometimes logically coherent.
We are a very long way from AGI.