(Replying to PARENT post)

People are gushing over its ability to write code, forgetting that code is just another type of language that can be used to express ideas.

Logic, however, transcends language. And it is clear that GPT-3 has absolutely no understanding of basic logic. It gives the impression of understanding logic by constructing sentences that are sometimes logically coherent.

We are a very long way from AGI.

👤Thorentis🕑3y🔼0🗨️0

(Replying to PARENT post)

This test shows it probably displays a better grasp of the theory of computation than 80% of the population. The wonder is that it’s just text, so this is an emergent property.

Let’s say it can simulate the theory of computation better than 99% of the population and can very capably synthesize and infer from any text-based sources. I think that would shake the world, and it wouldn’t even need to be near AGI.

👤anon7725🕑3y🔼0🗨️0

(Replying to PARENT post)

> We are a very long way from AGI.

I don't think so; the scaling laws haven't failed so far. I fully expect that making the model bigger and training it on more data will make it better at logic.

For a nice example with image models, Scott Alexander made a bet that newer image models would be able to do the things that DALL-E 2 got wrong. [1] (This post also discusses how GPT-3 could do many things that GPT-2 got wrong.) He won the bet three months later after getting access to Imagen. [2]

[1]: https://astralcodexten.substack.com/p/my-bet-ai-size-solves-... [2]: https://astralcodexten.substack.com/p/i-won-my-three-year-ai...

👤ruuda🕑3y🔼0🗨️0

(Replying to PARENT post)

> We are a very long way from AGI.

In fact it has just gotten closer.

Logical reasoning has been a pretty solid branch of AI since its inception. Robust solutions exist for most problems; there is even a programming language based on its principles (Prolog).

With ChatGPT there is now a system that can express the results of automated logical reasoning in natural language.

The next step would be to combine the two, i.e. ask ChatGPT to explain the result of a logic-reasoning program in natural language. It could of course also be asked to translate a natural-language query into Prolog code.
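
A rough sketch of what that second direction could look like, assuming the pyswip bindings to SWI-Prolog as the reasoning engine; the translation step below is a hard-coded stand-in for where the ChatGPT call would go, and the facts and names are invented for illustration:

    # Proposed pipeline: a language model turns a natural-language question
    # into a Prolog goal, and a real Prolog engine does the actual reasoning.
    # Requires SWI-Prolog and `pip install pyswip`.
    from pyswip import Prolog

    kb = Prolog()
    kb.assertz("parent(alice, bob)")
    kb.assertz("parent(bob, carol)")

    def nl_to_prolog(question: str) -> str:
        # Stand-in for the ChatGPT translation step described above.
        assert question == "Who are Alice's grandchildren?"
        return "parent(alice, Y), parent(Y, Who)"

    goal = nl_to_prolog("Who are Alice's grandchildren?")
    for solution in kb.query(goal):
        print(solution["Who"])  # -> carol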

Combining them will probably require retraining the model, but the demo OpenAI has given us leaves little doubt that this is perfectly doable.

ChatGPT has the potential to plug the gap between GOFAI and natural language, which is quite a feat.

👤periheli0n🕑3y🔼0🗨️0

(Replying to PARENT post)

If logic is its biggest weakness, then I just laugh - because that is the one area of AI where every model before these language models excelled well beyond human levels. All it takes is for GPT to formulate the English sentence into predicate logic statements and throw them through a "3rd party" script that does the heavy logic validation/proving, and you're good. Those are well-trodden areas of programming, and ironically they were where people expected AIs to come from and be strongest - nobody expected exceptional painting and conversing skill just from averaging out a shit-ton of data.
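
A toy version of that hand-off, assuming the Z3 solver as the "3rd party" prover (the comment doesn't name one); encoding the English claim into propositional logic is exactly the part GPT would be asked to do:

    # "If it rains the ground gets wet; it rained; therefore the ground is wet",
    # encoded as propositional logic and checked by Z3 (pip install z3-solver).
    from z3 import Bools, Implies, And, Not, Solver, unsat

    rains, wet = Bools("rains wet")
    claim = Implies(And(Implies(rains, wet), rains), wet)

    s = Solver()
    s.add(Not(claim))  # the claim is valid iff its negation is unsatisfiable
    print("valid" if s.check() == unsat else "not valid")  # -> valid
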
👤dogcomplex🕑3y🔼0🗨️0

(Replying to PARENT post)

If you want some more things that ChatGPT isn't good at, try to get it to form novel anagrams, palindromes, or other such wordplay. It's good at regurgitating textbook examples of those, but I found you can trip it up by asking it to do things like "Create a palindrome containing the word 'coffee'"
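
For what it's worth, checking such an attempt is trivial compared to producing one, which is partly why these failures are so easy to spot. A minimal checker (plain Python, nothing model-specific):

    # Strip punctuation and case, then compare the string with its reverse.
    def is_palindrome(text: str) -> bool:
        letters = [c.lower() for c in text if c.isalnum()]
        return letters == letters[::-1]

    print(is_palindrome("A man, a plan, a canal: Panama"))     # True
    print(is_palindrome("I like coffee and coffee likes me"))  # False
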
👤6177c40f🕑3y🔼0🗨️0

(Replying to PARENT post)

It certainly has logic. I had some fun using the "virtual machine" example someone else did, with the "nvidia-smi" command: if I told it that it was hot in the room, the next run of the command showed a higher temperature on the GPU. This is the logical conclusion from a hotter room.
👤dwild🕑3y🔼0🗨️0

(Replying to PARENT post)

Experienced this myself trying to get it to write a poem with an unusual rhyme scheme. ChatGPT's response when I tried to explicitly spell it out (a few different ways):

Yes, you are correct. The statement I made is not accurate. The specified rhyme scheme was AABA BBAB AABA BBAB, which means that each stanza should have the rhyme scheme AABA BBAB, with the first and second lines rhyming with each other, and the third and fourth lines rhyming with each other.

That said, I wouldn't be surprised if the next version was able to grok this.

👤klenwell🕑3y🔼0🗨️0

(Replying to PARENT post)

> absolutely no understand of basic logic

> We are a very long way from AGI.

Let's not forget that computers are insanely good at bitwise computations. It's just a matter of time before someone adds a Coq/Lean style reinforcement to AI's learning capabilities.
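
For a sense of what a Coq/Lean-style signal would look like, here is a tiny Lean 4 proof; the point is that the proof assistant gives a hard accept/reject verdict that a model could be trained against:

    -- Modus ponens, stated and proved; Lean either type-checks this or rejects
    -- it, with no middle ground for a plausible-sounding but wrong answer.
    theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
      hpq hp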

👤est🕑3y🔼0🗨️0

(Replying to PARENT post)

This is a totally uninformed/naïve/layman’s take, but what if AGI is just a really good language model used in a clever way, such that it can perform an efficient search of its “thought” space, validating that its thoughts are correct along the way? Programming, logic, math, etc. are perhaps the easiest forms of “thoughts” for a computer to validate, but given enough quality data maybe it could be good at all kinds of other tasks as well.
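
In code form, that amounts to a generate-and-verify loop; propose() and is_valid() below are hypothetical stand-ins for the language model and for a domain-specific checker (compiler, proof assistant, test suite), not anything from an existing system:

    # Generic search-with-validation loop over candidate "thoughts".
    def solve(problem, propose, is_valid, max_tries=100):
        for _ in range(max_tries):
            candidate = propose(problem)      # language model generates a candidate
            if is_valid(problem, candidate):  # external checker validates it
                return candidate
        return None  # no validated candidate found within the budget
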
👤rlt🕑3y🔼0🗨️0