(Replying to PARENT post)

I ended up reading the book Blindsight (Peter Watts) that's been floating around in comments recently. A major theme in the book is intelligence and its relation to consciousness (including whether consciousness is even beneficial). If you agree with that idea, you'd consider DALL-E genuinely intelligent even though it appears to be a "Chinese Room". Humans would be "gluing things together" in just the same way, but with this odd introspective ability that makes it seem different.
๐Ÿ‘ค_nhynes๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I'm becoming convinced that these algorithms are huge steps towards AGI, simply because AGI might end up being a collection of many of these domain-specific networks with a network sitting above them whose only role is to interrogate the sub-networks for solutions to the problem at hand, discriminate which solution(s) are most worth trying, simulate those, and then pick one out and execute it in the real world. That seems to me pretty close to what we as humans do.
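
A rough sketch of that control flow in Python (purely illustrative; `SubNetwork`, `Coordinator`, and the rest are hypothetical names, not any real framework):

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Candidate:
        action: str   # proposed solution
        source: str   # which domain network proposed it

    class SubNetwork:
        """A domain-specific solver (vision, language, motor planning, ...)."""
        def __init__(self, domain: str, propose: Callable[[str], str]):
            self.domain = domain
            self.propose = propose

        def solve(self, problem: str) -> Candidate:
            return Candidate(action=self.propose(problem), source=self.domain)

    class Coordinator:
        """The network 'sitting above': interrogate, discriminate, simulate, act."""
        def __init__(self, subnets: List[SubNetwork],
                     score: Callable[[Candidate], float],
                     simulate: Callable[[Candidate], float]):
            self.subnets = subnets
            self.score = score        # cheap discriminator over proposals
            self.simulate = simulate  # expensive rollout of shortlisted proposals

        def act(self, problem: str, top_k: int = 2) -> Candidate:
            # 1. interrogate every sub-network for a candidate solution
            candidates = [net.solve(problem) for net in self.subnets]
            # 2. discriminate: keep only the most promising candidates
            shortlist = sorted(candidates, key=self.score, reverse=True)[:top_k]
            # 3. simulate the shortlist and pick the winner to execute
            return max(shortlist, key=self.simulate)

Nothing AGI about it, obviously; it's just the arrangement described above made concrete.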
๐Ÿ‘คTrevorJ๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Exactly (and I cannot recommend Blindsight highly enough). Of course DALL-E is a Chinese Room. The most exciting and subversive idea in Blindsight is that consciousness is maladaptive, and the scramblers are _more_ fit for the lack of it. Long after neural nets have surpassed our functional intelligence, we'll still be criticizing them for not navel-gazing as well as humans do.
๐Ÿ‘คdoph๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

The "Chinese room" argument, so far as I understand it, applies to any discreet computation process simulating consciousness.

The argument of the article is that DALL-E doesn't respond appropriately to a particular kind of input: two entities in some kind of spatial relationship (one it hasn't often seen). DALL-E isn't extrapolating the 3-D world but stitching a bunch of 2-D images together with some heuristics. That works to create a lot of plausible images, sure, but it implies this ability might not, say, be useful for manipulating 3-D space.

So, given a "Chinese room" is just a computation, it's plausible that some Chinese room could handle 3-d image manipulation more effectively than this particular program.

Which is to say: no, the criticism isn't "this is a Chinese room"; that part is irrelevant.

๐Ÿ‘คjoe_the_user๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Fantastic book. It made me consider whether consciousness exists at all, or if it is just some hack by evolution to allow introspection.

I haven't found a definition of consciousness which is quantifiable or stands up to serious rigour. If it can't be measured and isn't necessary for intelligence, perhaps there is no magic cut-off between the likes of DALL-E and human intelligence. Perhaps the Chinese room is as conscious as a human (and a brick)?

๐Ÿ‘คtwak๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

>Humans would be "gluing things together" in just the same way

I'm often struck by how stark this is in ancient fantasy art. The 'monsters' are usually just different animal parts remixed -- the head of one on the body of another, things like that. Fundamentally, we're all doing DALL-E-ish hybridization when we're being creative; it's very difficult to imagine things that are truly alien such that they're outside the bounds of our 'training data'.

๐Ÿ‘คkoboll๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

That book gave me the highest dose of existential crisis I've ever felt. I should probably re-read it.
๐Ÿ‘คmiguelxpn๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Are humans just "Chinese rooms"? We don't really understand anything deeply; our neurons just fire in a way that gives good responses and makes us feel like we understand stuff.
๐Ÿ‘คmetacritic12๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

It's not clear what generates consciousness. Until we know for sure (e.g. via A/B testing with humans who can report when they do and do not experience consciousness in different neural configurations), I think it's impossible to know what level of conscious experience large ML models have.

Blindsight is an excellent exploration of consciousness, but the speculative part is the idea that a working sense of self isn't necessary for embodied intelligence (like the scramblers), which I tend to doubt. An agent without a model of itself will have difficulty planning actions; knowing how its outputs/manipulators are integrated into the rest of reality is a minimum requirement for controlling them effectively. It is certainly possible that "self" or "I" will be absent; humans can already turn the ego off with drugs and still (mostly) function, but they remain conscious.

๐Ÿ‘คbenlivengood๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Re the Chinese room, you might want to consider the computational and memory complexity of a lookup table: https://www.scottaaronson.com/papers/philos.pdf, page 14.
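
For a rough sense of scale (my own back-of-the-envelope in Python, not a figure from the paper):

    # Entries in a lookup table mapping every possible input string of
    # length n (over a k-symbol alphabet) to a canned response.
    def lookup_table_entries(n: int, k: int = 128) -> int:
        return k ** n

    # A mere 50-character ASCII prompt already needs ~2.3e105 entries,
    # far more than the ~1e80 atoms in the observable universe.
    print(f"{float(lookup_table_entries(50)):.3e}")

The table is computationally trivial but astronomically large, which (as I read it) is why complexity considerations matter for the Chinese-room debate.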
๐Ÿ‘คthe8472๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I think the book does make the point that maybe one benefit of consciousness is the ability to filter through all the information and spam that conscious beings produce. E.g. the scramblers may view all the radio waves we blast everywhere as attempts at warfare, reducing the fitness of other species. Why else would a species emit so much information if not to DDOS their enemies?! tl;dr: consciousness is a defense against ****posting and trolling caused by conscious beings.
๐Ÿ‘คPulcinella๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Human brains are a Chinese room. Our DNA and experiences wrote the book.
๐Ÿ‘คplanetsprite๐Ÿ•‘3y๐Ÿ”ผ0๐Ÿ—จ๏ธ0