Submitted by timscarfe t3_yq06d5 in MachineLearning
trutheality t1_ivnxvm9 wrote
The fallacy in the Chinese room argument in essence is that it incorrectly assumes that the rule-following machinery must be capable of understanding in order for the whole system to be capable of understanding.
We know that humans understand things. We also know that at a much lower level, a human is a system of chemical reactions. Chemical reactions are the rule-following machinery: they are strictly governed by mathematics. The chemical reactions don't understand things, but humans do.
There is actually no good argument that the Chinese room system as a whole doesn't understand Chinese, even though the man inside the room doesn't understand Chinese.
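To make that concrete, here is a toy sketch (purely illustrative; the rule book and replies are made up): the executor that applies the rules never consults any meaning, so whatever conversational competence the setup has can only be ascribed to the rules-plus-executor system, not to the executor.

```python
# Toy "Chinese room": the executor mechanically applies rewrite rules to
# strings it never interprets. (Hypothetical rule book, for illustration only.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # stimulus -> canned response
    "你会说中文吗？": "会一点。",
}

def executor(symbols: str) -> str:
    """Blindly match the input against the rule book; no meanings involved."""
    return RULE_BOOK.get(symbols, "请再说一遍。")

# To an outside interlocutor, the system (rule book + executor) "converses"
# in Chinese; the executor itself only compares and copies opaque strings.
print(executor("你好吗？"))
```

The executor plays the role of the man in the room; the question is whether it even makes sense to look for understanding at that level rather than at the level of the whole system.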
Nameless1995 t1_ivp554d wrote
> The fallacy in the Chinese room argument in essence is that it incorrectly assumes that the rule-following machinery must be capable of understanding in order for the whole system to be capable of understanding.
This is "addressed" (not necessarily successfully) in the original paper:
https://web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf
> I. The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."
> My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
> Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still "the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.
And also in Dneprov's game:
http://q-bits.org/images/Dneprov.pdf
> “I object!” yelled our “cybernetics nut,” Anton Golovin. “During the game we acted like individual switches or neurons. And nobody ever said that every single neuron has its own thoughts. A thought is the joint product of numerous neurons!”
> “Okay,” the Professor agreed. “Then we have to assume that during the game the air was stuffed with some ‘machine superthoughts’ unknown to and inconceivable by the machine’s thinking elements! Something like Hegel’s noûs, right?”
I think the biggest problem with the CRA and even Dneprov's game is that it's not clear what the "positive conception" of understanding should be (Searle probably elaborates in some other books or papers). They are quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality, and so on," but they don't elaborate on what they think possessing understanding and intentionality exactly amounts to, so that we can evaluate whether it's missing.
Even the notion of intentionality does not have a clear metaphysical grounding, and there are ways to construe intentionality within a functionalist framework as well (ways that have already been taken), in a manner such that machines can achieve intentionality. So it's not clear what exactly it is that we are supposed to find in "understanding" but missing in Chinese rooms. The bias is perhaps clearer in the Professor's rash dismissal of the objection that the whole may understand, by suggesting that some "machine superthoughts" would be needed. If understanding is nothing over and beyond a manner of functional co-ordination, then there is no need to think that the systems-reply suggestion requires "machine superthoughts". My guess is that Searle and others have an intuition that understanding requires some "special" qualitative experience or cognitive phenomenology. Except I don't think these are really that special, but merely features of the stuff that realizes the forms of cognition in biological beings. The forms of cognition may as well be realized differently, without "phenomenology". As such, the argument partially boils down to "semantics": whether someone is willing to broaden the notion of understanding to remove appeals to phenomenology or any other nebulous "special motion of living matter".
> We know that humans understand things. We also know that at a much lower level, a human is a system of chemical reactions. Chemical reactions are the rule-following machinery: they are strictly governed by mathematics. The chemical reactions don't understand things, but humans do.
Note that Searle isn't arguing that rule-following machinery or machines cannot understand, just that there is no "understanding program" per se that realizes understanding no matter how it is instantiated or simulated. This can still remain roughly true depending on how we define "understanding".
This is clarified in Q&A form in the paper:
> "Could a machine think?" The answer is, obviously, yes. We are precisely such machines.
> "Yes, but could an artifact, a man-made machine think?" Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question. "OK, but could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
> "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
> "Why not?" Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
The point Searle is trying to make is that understanding is not exhaustively constituted by some formal relations; it also depends on how the formal relations are physically realized (what sort of relevant concrete causal mechanisms underlie them, and so on).
Although for anyone who says this, there should be a burden to explain exactly which classes of instantiation are necessary for understanding, and what's so special about those classes of instantiation of the relevant formal relations that's missed in other Chinese-room-like simulations. Otherwise, it's all a bit of a vague and wishy-washy appeal to intuition.
trutheality t1_ivq6265 wrote
>"Why not?" Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
This, I think, again falls into the same fallacy of trying to elicit understanding from the rule-following machinery. The rule-following machinery operates on meaningless symbols, just as humans operate on meaningless physical and chemical stimuli. If understanding arises in a program, it is not going to happen at the level of abstraction to which Searle is repeatedly returning.
>The point Searle is trying to make is that understanding is not exhaustively constituted by some formal relations; it also depends on how the formal relations are physically realized
This also seems like an exceptionally weak argument, since it suggests that a sufficiently accurate physics simulation of a human would, after all, hit all of Searle's criteria and be capable of understanding. Again, even here, it is important to separate levels of abstraction: the physics engine is not capable of understanding, but the simulated human is.
One could, of course, stubbornly insist that without us recognizing the agency of the simulated human, it is just another meaningless collection of symbols that follows the formal rules of the simulation, but that would be no different from viewing a flesh-and-blood human as a collection of atoms that meaninglessly follows the laws of physics.
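For what it's worth, the separation of levels can be made concrete with a toy interpreter (an illustrative sketch only, not anything from Searle's paper): the interpreter below shuffles tokens by fixed rules and has no access to what the program it runs is "about"; any aboutness lives at the level of the program.

```python
# A minimal stack machine: the interpreter follows fixed token-shuffling rules
# and knows nothing about what any particular program "means".

def run(program):
    stack = []
    for token in program:
        if token == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(token)  # to the interpreter, just a value to push
    return stack[-1]

# At the interpreter's level these are opaque tokens; at the program's level
# this happens to compute a bill: 3 items at 2 each, plus 5 for shipping.
print(run([3, 2, "MUL", 5, "ADD"]))  # -> 11
```

Asking whether the interpreter understands billing is a category mistake; the analogous question for the Chinese room is which level is the right one to ask about.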
Ultimately in these "AI can't do X" arguments there is a consistent failure to apply the same standards to both machines and humans, and, as you point out, a failure to provide falsifiable definitions for the "uniquely human" qualities being tested, be it understanding, qualia, originality, or what have you.
Nameless1995 t1_ivq9tkh wrote
> If understanding arises in a program, it is not going to happen at the level of abstraction to which Searle is repeatedly returning.
There is a bit of nuance here.
What Searle is trying to say with "programs don't understand" is not that there cannot be physical instantiations of "rule following" programs that understand (Searle allows that our brains are precisely one such physical instantiation), but that there would be some awkward realizations of the same program that don't understand. So the point is actually relevant at a higher level of abstraction.
> Ultimately in these "AI can't do X" arguments there is a consistent failure to apply the same standards to both machines and humans, and, as you point out, a failure to provide falsifiable definitions for the "uniquely human" qualities being tested, be it understanding, qualia, originality, or what have you.
Right. Searle's point becomes even more confusing because on one hand he explicitly allows that "rule following machines" can understand (he explicitly says that instances of appropriate rule-following programs may understand things, and also that we are machines that understand), while at the same time he doesn't think mere simulation of a program's functions, under any arbitrary implementation of rule-following, is enough. But then it becomes hard to tease out what exactly "intentionality" is for Searle, and why certain instances of rule-following realized through certain causal powers can have it, while the same rules simulated otherwise in the same world correspond to not having "intentionality".
Personally, I think he was sort of thinking in terms of the hard problem (before the hard problem was formulated as such: it existed in different forms). He was possibly conflating understanding with having phenomenal "what it is like" consciousness of a certain kind.
> consistent failure to apply the same standards to both machines and humans
Yeah, I notice that. While there are possibly a lot of things we don't completely understand about ourselves, there also seems to be a tendency to overinflate ourselves. As for myself, if I reflect first-personally, I have no clue what it is I exactly do when I "understand". There were times, when I was younger, when I thought that I don't "really" "understand" anything. Whatever happens, happens on its own; I can't even specify the exact rules of how I recognize faces, how I process concepts, or even what "concepts" are in the first place. Almost everything involved in "understanding anything" is beyond my exact conscious access.
red75prime t1_ivwmz24 wrote
I'll be blunt. No amount of intuition pumping, word-weaving, and hand-waving can change the fact that there's zero evidence of the brain violating the physical Church-Turing thesis. That means there's zero evidence that we can't build a transistor-based functional equivalent of the brain. It's as simple as that.
Nameless1995 t1_ivxosmw wrote
I don't think Searle denies that, so I don't know who you are referring to.
Here's a quote from Searle:
> "Could a machine think?"
> The answer is, obviously, yes. We are precisely such machines.
> "Yes, but could an artifact, a man-made machine think?"
> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question. "OK, but could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
red75prime t1_ivxqxgr wrote
Ah, OK, sorry. I thought that the topic had something to do with machine learning. Exploration of Searle's intuitions is an interesting prospect, but it fits other subreddits more.
[deleted] t1_ivpj1jf wrote
Thanks, awesome post.
I am very skeptical of our intuitions when it comes to the mind. We have gotten so much wrong that I would be surprised if it really turns out that there is something "special" going on in the brain to justify this difference.
Nameless1995 t1_ivpm198 wrote
I think as long as we are not purely duplicating the brain, there would always be something different (by definition of not duplicating). The question then becomes the relevance of the difference. I think there is some plausibility to the idea that some "formal" elements of the brain associated with cognition can be simulated in machines, but would that be "sufficient" for "understanding"? This question partly hinges on semantics. We can choose to define understanding in a way such that it's fully a matter of achieving some high-level formal functional capabilities (abstracting away from the details of the concrete "matter" that realizes the functions). There is a good case to be made that perhaps it's better to think of mental states in terms of higher-level functional roles than "qualitative feels" (which is not to say there aren't qualitative feels, but that they need not be treated as "essential" to mental states -- the roles may as well be realized in analogous fashion without the same feels, or any feels). If we take such a stance, the point of having or lacking phenomenal feels (and phenomenal intentionality) becomes moot, because all that would matter for understanding would be a more abstracted level of formal functionalities (which may as well be computational).
If, on the other hand, we decide to treat "phenomenal feels" (and "phenomenal intentionality") as "essential" to understanding (by definition -- again a semantic issue), then I think it's right to doubt whether any arbitrary realization of some higher-level abstracted behavioral form (abstracted away from phenomenal character) would necessarily lead to having certain phenomenal feels.
Personally, I don't think it's too meaningful to focus on "phenomenal feels" for understanding. If I say "I understand 1+1=2" and try to reflect on what it means for me to understand that, the phenomenality of an experience seems to contribute very little, if anything -- beyond potentially serving as a "symbol" marking my understanding (a symbol represented by my feeling a certain way; non-phenomenal "symbols" may as well have been used) -- but that "feeling" isn't true understanding, because it's just a feeling. Personally, then, I find the best way to characterize my understanding is by grounding it in my functional capabilities to describe and talk about 1+1=2, talk about number theory, do arithmetic -- it then boils down to possession of "skills" (which becomes a matter of degree).
It may be possible that biological material has something "special" that constitutes phenomenality-infused understanding, but this is hard to make out given the problem of even determining public indicators of phenomenality.
[deleted] t1_ivppbkm wrote
I love philosophy but I admit I am very out of my element here haha. Never bothered with this subject.
From my naive understanding, the "mind" (which I already think is a bit of an arbitrary notion without a clear limit/definition) is composed of several elements, say X, Y, and Z, each one with its own sub-elements. As you say, unless we build an exact copy we are not going to have all the elements (or we might even have a partial element, say understanding, without all the sub-elements that compose it).
I think, for example, that whatever elements compose moral relevance are obviously lacking in modern-day AI. That is not controversial. So apart from this, I find it very uninteresting to try to figure out whether a machine can "understand" exactly like a human or not.
So I think as long as we stick with very precise language we can talk about it in a more meaningful way.
waffles2go2 t1_ivq5apy wrote
The "special" part is what we can't figure out. It's not any math that we are close to solving and I really don't think we're even asking the right questions.
waffles2go2 t1_ivq412x wrote
Millions of years of evolution, yet we are going to figure it out with some vaguely understood maths that can currently solve some basic problems or produce bad artwork...
So pretty special....
[deleted] t1_ivq6ogt wrote
I am not claiming we are about to solve it, especially not in this field. I am claiming, though, that our intuitions have deceived us about certain concepts (see the entire personal identity debate), and it is very easy to think that we are "more" than we really are.
And we have some evidence of that: for example, in the personal identity debate, we need a certain intuition of our identity as unified across our lives, even if it turns out to be a kind of fantasy that is simply constructed this way because of its utility.
So I don't doubt that the same process is going on in our minds with concepts like consciousness and understanding.
waffles2go2 t1_ivq3j0p wrote
It's vague and wishy-washy because it's a working hypothesis that is waiting for something better to replace it, or for something to augment it.
I'll agree its primary weakness is the vague line between what makes something "thinking" and what is simply a long lookup table (a toy version of the contrast is sketched below), but this is the key question we need to riff on until we converge on something we can agree upon....
This is a way more mature approach to driving our thinking than this basic "maths will solve everything" when we freely admit we don't understand the maths....
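To be concrete about the lookup-table end of that contrast (a toy sketch with made-up numbers, nothing more): a table is silent on anything it hasn't stored, while even a trivially fitted rule extrapolates; where along that spectrum "thinking" starts is exactly the vague part.

```python
# Toy contrast: memorized pairs versus a rule that generalizes.

lookup = {0: 0.0, 1: 2.0, 2: 4.0}   # memorized input -> output pairs

def table_answer(x):
    return lookup.get(x)             # None for anything not stored

def rule_answer(x, w=2.0):           # w hand-set here; a model would fit it
    return w * x

print(table_answer(3))  # None: the table has nothing to say off its support
print(rule_answer(3))   # 6.0: the rule extrapolates to unseen inputs
```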
billy_of_baskerville t1_ivqb6e1 wrote
>I think the biggest problem with the CRA and even Dneprov's game is that it's not clear what the "positive conception" of understanding should be (Searle probably elaborates in some other books or papers). They are quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality, and so on," but they don't elaborate on what they think possessing understanding and intentionality exactly amounts to, so that we can evaluate whether it's missing.
Well put, I agree.
Professional-Song216 t1_ivp0q3x wrote
Hopefully those sitting far in the back heard you.
ayananda t1_ivp9imv wrote
This is the first thing I started to think. The same argument applies to the notion of consciousness: if we define it too well, we can easily imagine a mechanical machine which has these attributes. There clearly is something wicked going on in these biological processes...
waffles2go2 t1_ivq1w4s wrote
LOL, I think you've got it totally backwards.
The Chinese Room assumes that the person explicitly DOES NOT understand Chinese but the "system" (the wall that masks this person) behaves as if it does to the person feeding it input.
You further support the argument in your final statement...
Was that really what you intended to do?
trutheality t1_ivqhe68 wrote
Yes, the premise of the thought experiment is that the man doesn't understand Chinese but the system "appears" to understand Chinese. This is used to argue that because the man doesn't understand Chinese, the apparent "understanding" is not "real." What I'm saying is that the reason the man doesn't understand Chinese is that the level of abstraction he's conscious of is not the level at which understanding happens, and I'm asserting that this is not a convincing argument against the system "truly" understanding Chinese as a whole.
Stuffe t1_ivyg2a7 wrote
>The fallacy in the Chinese room argument in essence is that it incorrectly assumes that the rule-following machinery must be capable of understanding in order for the whole system to be capable of understanding.
No one ever cares to explain this "understanding whole system" magic. You obviously cannot build a perfect sphere out of Lego, because there are no bricks with the needed rounded shapes. And likewise you cannot build "understanding" out of Lego, because there are no bricks with the needed "understanding" nature. Just saying "whole system" is no more an explanation than saying "magic".