(Replying to PARENT post)
How is it even an argument? It doesn't illuminate anything, and it's not even clever. It seems like the most facile, wrong-headed stab at refutation, one that begs the question. As far as I can tell, the argument is, "well, you can make this room that manipulates symbols like a computer, and of course it's not conscious, so a computer can't be either"? There are so many problems with this argument I don't even know where to begin.
The fact that he appears to think that changing a "computer" to a "room" has persuasive power just makes it all the more antiquated. As if people can't understand the idea that computers "just" manipulate symbols? Changing it to a "room" adds nothing.
(Replying to PARENT post)
The correct question to ask: How is a machine manipulating symbols (that someone says is conscious) different from any other complex physical system? Is New York City's complex sewer system conscious? What about the entire world's sewer and plumbing system?
Does a machine have to compute some special function to be conscious? Does the speed of computation matter? If so, who measures the speed? (Let us not bring in general relativity, where the speed of computation can differ for different observers.)
Kurzweil et al's definition of consciousness is exactly as silly as Searle saying "My dog has consciousness because I can look at it and conclude that it has consciousness."
(Replying to PARENT post)
Searle would look at that and conclude it had consciousness.
(Replying to PARENT post)
Kurzweil, in summary, asks: "You say that a machine manipulating symbols can't have consciousness. Why is this different from consciousness arising from neurons manipulating neurotransmitter concentrations?" Searle gives a non-answer: "My dog has consciousness because I can look at it and conclude that it has consciousness."