How much of the "novel ways which have never been considered before" is just the novelty effect of having your very own artist? A human being could certainly produce any of the works of DALL-E 2, given the same prompt. The change here is in the cost, not the capability. Of course, this is still significant, but it doesn't suggest to me that DALL-E 2 "thinks" differently or would be able to seriously alter the nature of our cognition, except to the extent that it allows us to realize the same ideas faster.
— polio (2y)
That's not an LLM, though. We haven't seen evidence yet that an LLM can combine existing language in novel ways; in fact, we've seen over and over that LLM output is quite generic.
Compare that to text-to-image models, which are seemingly impossible to use without getting something weird.
— Capricorn2481 (2y)
I remember reading about what Lenat did with early versions of Eurisko: he ran it on some axioms from number theory and watched it discover number-theory theorems, some of them pretty fundamental. As "time" progressed, the "theorems" Eurisko kept discovering became duller and duller, like the expansion of the square of a sum into a sum of products (I can't remember the exact example from the book, but the level of dullness is about right).
I believe this can serve as an example of combining existing concepts in novel ways, applied to math, which is very formal and, at the same time, is the basis of many other areas of human knowledge. A toy sketch of that style of concept combination is at the end of this comment.
If this does serve as such an example, then you are wrong.
Also, the examples you have shown are not very interesting outside of entertainment.
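For concreteness, here is a toy sketch in the spirit of AM/Eurisko (a hypothetical reconstruction, not Lenat's actual code): combine primitive arithmetic concepts into candidate identities and keep the ones that survive testing on small integers. The primitives and the commutation test are my own illustrative choices.

    from itertools import combinations

    # Hypothetical primitive concepts; Eurisko's real ones were far richer.
    primitives = {
        "square": lambda n: n * n,
        "cube":   lambda n: n ** 3,
        "double": lambda n: 2 * n,
        "succ":   lambda n: n + 1,
    }

    def commutes(f, g, trials=range(1, 100)):
        # Conjecture: f(g(n)) == g(f(n)) for every tested n.
        return all(f(g(n)) == g(f(n)) for n in trials)

    # Combine every pair of concepts and keep the surviving conjectures.
    for (fn, f), (gn, g) in combinations(primitives.items(), 2):
        if commutes(f, g):
            print(f"surviving conjecture: {fn} o {gn} == {gn} o {fn}")

    # Prints only "square o cube == cube o square", i.e. the dull
    # identity (n^2)^3 == (n^3)^2 -- duller and duller, as the book says.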
— thesz (2y)
It mirrors back the distribution, not the individual samples. It can interpolate, but not extrapolate.
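A minimal numerical sketch of that distinction, using an assumed polynomial fit as a stand-in for a learned model: fit on samples from a bounded range, then query inside and outside it.

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0.0, 2.0 * np.pi, 200)  # training range [0, 2*pi]
    y_train = np.sin(x_train)

    # Cubic least-squares fit: a crude proxy for "learning the distribution".
    coeffs = np.polyfit(x_train, y_train, deg=3)

    x_in  = np.array([1.0, 3.0, 5.0])    # inside the training range
    x_out = np.array([8.0, 10.0, 12.0])  # outside it

    print(np.abs(np.polyval(coeffs, x_in)  - np.sin(x_in)).max())   # modest error
    print(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).max())  # blows up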
— WanderPanda (2y)
This is a good point. No human could possibly have envisioned a teddy bear swimming in an Olympic pool, as we did not previously have the words or conceptual framework to describe that. That image could never have existed before the invention of transformer technology.
— jrflowers (2y)
That conclusion is not correct. Generative models can combine existing concepts in novel ways which have never been considered before. This is most easily seen with text-to-image models. For example, the teddy bear swimming in an Olympic pool [1] and many other surreal examples [2] were not part of the training set. So as long as an answer can be expressed as a combination of primitive concepts, LLMs can generate it. (A rough sketch of the combinatorics follows the links below.)
[1] https://imagen.research.google/main_gallery_images/teddy-bea...
[2] https://imagen.research.google/
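To put a rough number on that combinatorial point (the word lists are my own invented primitives, not anything from the Imagen gallery): even a handful of primitive concepts composes into far more prompts than any training set could plausibly contain verbatim.

    from itertools import product

    subjects   = ["a teddy bear", "a cactus", "a chrome robot"]
    activities = ["swimming the 400m butterfly", "playing the cello", "filing taxes"]
    settings   = ["at the Olympics", "on the moon", "in a Renaissance fresco"]

    # Every combination of primitives yields a distinct, likely-unseen prompt.
    prompts = [" ".join(p) for p in product(subjects, activities, settings)]
    print(len(prompts))  # 27 prompts from only 9 primitive concepts
    print(prompts[0])    # "a teddy bear swimming the 400m butterfly at the Olympics"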