Submitted by timscarfe t3_yq06d5 in MachineLearning

Mods feel free to delete this if you feel it's inappropriate.

https://youtu.be/_KVAzAzO5HU

We interviewed Francois Chollet, Mark Bishop, David Chalmers, Joscha Bach and Karl Friston on the Chinese Room argument.

The Chinese Room Argument was first proposed by philosopher John Searle in 1980. It is an argument against the possibility of "strong" artificial intelligence (AI) – that is, the idea that a machine could ever be truly intelligent, as opposed to just imitating intelligence.

The argument goes like this:

Imagine a room in which a person sits at a desk, with a book of rules in front of them. This person does not understand Chinese.

Someone outside the room passes a piece of paper through a slot in the door. On this paper is a Chinese character. The person in the room consults the book of rules and, following these rules, writes down another Chinese character and passes it back out through the slot.

To someone outside the room, it appears that the person in the room is engaging in a conversation in Chinese. In reality, they have no idea what they are doing – they are just following the rules in the book.
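
To make the setup concrete, the rule book in this story behaves like a pure lookup table. Here is a minimal, purely illustrative Python sketch (the symbols and replies are invented placeholders, not taken from Searle's paper):

```python
# The room as a lookup: the operator matches the incoming symbol against the
# rule book and copies out the listed reply, without understanding any of it.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine"
    "你叫什么名字": "我叫小明",   # "What is your name?" -> "My name is Xiaoming"
}

def room_operator(incoming: str) -> str:
    """Mechanically follow the book; no Chinese is 'understood' anywhere here."""
    return RULE_BOOK.get(incoming, "请再说一遍")  # fallback: "Please say that again"

print(room_operator("你好吗"))  # prints 我很好
```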

The Chinese Room Argument is an argument against the idea that a machine could ever be truly intelligent. It is based on the idea that intelligence requires understanding, and that following rules is not the same as understanding.

TL;DR - Chalmers, Chollet, Bach and Friston think that minds can arise from information (functionalists, with some interesting distinctions on whether it's causal / strongly emergent etc.); Bishop and Searle do not - they think there is an ontological difference in "being".

26

Comments


trutheality t1_ivnxvm9 wrote

The fallacy in the Chinese room argument in essence is that it incorrectly assumes that the rule-following machinery must be capable of understanding in order for the whole system to be capable of understanding.

We know that humans understand things. We also know that at a much lower level, a human is a system of chemical reactions. Chemical reactions are the rule-following machinery: they are strictly governed by mathematics. The chemical reactions don't understand things, but humans do.

There is actually no good argument that the Chinese room system as a whole doesn't understand Chinese, even though the man inside the room doesn't understand Chinese.

32

Nameless1995 t1_ivp554d wrote

> The fallacy in the Chinese room argument in essence is that it incorrectly assumes that the rule-following machinery must be capable of understanding in order for the whole system to be capable of understanding.

This is "addressed" (not necessarily successfully) in the original paper:

https://web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf

> I. The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."

> My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

> Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still "the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.

And also in Dneprov's game:

http://q-bits.org/images/Dneprov.pdf

> “I object!” yelled our “cybernetics nut,” Anton Golovin. “During the game we acted like individual switches or neurons. And nobody ever said that every single neuron has its own thoughts. A thought is the joint product of numerous neurons!”

> “Okay,” the Professor agreed. “Then we have to assume that during the game the air was stuffed with some ‘machine superthoughts’ unknown to and inconceivable by the machine’s thinking elements! Something like Hegel’s noûs, right?”

I think the biggest problem with the CRA and even Dneprov's game is that it's not clear what the "positive conception" (Searle probably elaborates in some other books or papers) of understanding should be. They are just quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality and so on and so forth" but don't elaborate what they think exactly possessing understanding and intentionality is like, so that we can evaluate whether that's missing.

Even the notion of intentionality does not have clear metaphysical grounding, and there are ways to take (and it has already been taken) intentionality in a functionalist framework as well (in a manner such that machines can achieve intentionality). So it's not clear what exactly it is that we are supposed to find in "understanding" but missing in Chinese rooms. The bias is perhaps clearer in the professor's rash dismissal of the objection that the whole may understand, by suggesting that some "machine superthoughts" would be needed. If understanding is nothing over and beyond a manner of functional co-ordination, then there is no need to think that the systems-reply suggestion requires "machine superthoughts". My guess is that Searle and others have an intuition that understanding requires some "special" qualitative experience or cognitive phenomenology. Except I don't think these are really as special, but merely features of the stuff that realizes the forms of cognition in biological beings. The forms of cognition may as well be realized differently without "phenomenology". As such, the argument can boil down partially to "semantics": whether someone is willing to broaden the notion of understanding to remove appeals to phenomenology or any other nebulous "special motion of living matter".

> We know that humans understand things. We also know that at a much lower level, a human is a system of chemical reactions. Chemical reactions are the rule-following machinery: they are strictly governed by mathematics. The chemical reactions don't understand things, but humans do.

Note that Searle isn't arguing that rule-following machinery or machines cannot understand. Just that there is no "understanding program" per se that can realize understanding no matter how it is realized or simulated. This can still remain roughly true based on how we define "understanding".

This is clarified in this QA form in the paper:

> "Could a machine think?" The answer is, obviously, yes. We are precisely such machines.

> "Yes, but could an artifact, a man-made machine think?" Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question. "OK, but could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.

> "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

> "Why not?" Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The point Searle is trying to make is that understanding is not exhaustively constituted by some formal relations but also depends on how the formal relations are physically realized (what sort of relevant concrete causal mechanisms underlie them and so on).

Although for anyone who says this, there should be a burden to explain exactly what classes of instantiation are necessary for understanding, and what's so special about those classes of instantiation of the relevant formal relations that is missed in other Chinese-room-like simulations. Otherwise, it's all a bit of a vague and wishy-washy appeal to intuition.

12

trutheality t1_ivq6265 wrote

>"Why not?" Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

This, I think, again falls into the same fallacy of trying to elicit understanding from the rule-following machinery. The rule-following machinery operates on meaningless symbols, just as humans operate on meaningless physical and chemical stimuli. If understanding arises in a program, it is not going to happen at the level of abstraction to which Searle is repeatedly returning.

>The point Searle is trying to make is that understanding is not exhaustively constituted by some formal relations but also depends on how the formal relations are physically realized

This also seems like an exceptionally weak argument, since it suggests that a sufficiently accurate physics simulation of a human would, after all, hit all of Searle's criteria and be capable of understanding. Again, even here, it is important to separate levels of abstraction: the physics engine is not capable of understanding, but the simulated human is.

One could, of course, stubbornly insist that without us recognizing the agency of the simulated human, it is just another meaningless collection of symbols that follows the formal rules of the simulation, but that would be no different from viewing a flesh-and-blood human as a collection of atoms that meaninglessly follows the laws of physics.

Ultimately in these "AI can't do X" arguments there is a consistent failure to apply the same standards to both machines and humans, and, as you point out, a failure to provide falsifiable definitions for the "uniquely human" qualities being tested, be it understanding, qualia, originality, or what have you.

3

Nameless1995 t1_ivq9tkh wrote

> If understanding arises in a program, it is not going to happen at the level of abstraction to which Searle is repeatedly returning.

There is a bit of nuance here.

What Searle is trying to say by "programs don't understand" is not that there cannot be physical instantiations of "rule following" programs that understand (Searle allows that our brains are precisely one such physical instantiation), but that there would be some awkward realizations of the same program that don't understand. So the point is actually relevant at a higher level of abstraction.

> Ultimately in these "AI can't do X" arguments there is a consistent failure to apply the same standards to both machines and humans, and, as you point out, a failure to provide falsifiable definitions for the "uniquely human" qualities being tested, be it understanding, qualia, originality, or what have you.

Right. Searle's point becomes even more confusing because on one hand he explicitly allows that "rule following machines" can understand (he explicitly says that instances of appropriate rule-following programs may understand things, and also that we are machines that understand), while at the same time he thinks mere simulation of the functions of a program, with any arbitrary implementation of rule-following, is not enough. But then it becomes hard to tease out what exactly "intentionality" is for Searle, and why certain instances of rule-following through certain causal powers can have it, while the same rules simulated otherwise in the same world correspond to not having "intentionality".

Personally, I think he was sort of thinking in terms of the hard problem (before the hard problem was named: it existed in different forms earlier). He was possibly conflating understanding with having phenomenal "what it is like" consciousness of a certain kind.

> consistent failure to apply the same standards to both machines and humans

Yeah, I notice that. While there are possibly a lot of things we don't completely understand about ourselves, there also seems to be a tendency to overinflate ourselves. As for myself, if I reflect first-personally, I have no clue what it is I exactly do when I "understand". There were times, I thought when I was younger, that I don't "really" "understand" anything. Whatever happens, happens on its own; I can't even specify the exact rules of how I recognize faces, how I process concepts, or even what "concepts" are in the first place. Almost everything involved in "understanding anything" is beyond my exact conscious access.

2

red75prime t1_ivwmz24 wrote

I'll be blunt. No amount of intuition pumping, word-weaving, and hand-waving can change the fact that there's zero evidence of the brain violating the physical Church-Turing thesis. It means that there's zero evidence that we can't build a transistor-based functional equivalent of the brain. It's as simple as that.

2

Nameless1995 t1_ivxosmw wrote

I don't think Searle denies that so I don't know who you are referring to.

Here's quote from Searle:

> "Could a machine think?"

> The answer is, obviously, yes. We are precisely such machines

> "Yes, but could an artifact, a man-made machine think?"

> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question. "OK, but could a digital computer think?" If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.

2

red75prime t1_ivxqxgr wrote

Ah, OK, sorry. I thought that the topic had something to do with machine learning. Exploration of Searle's intuitions is an interesting prospect, but it fits other subreddits more.

2

[deleted] t1_ivpj1jf wrote

Thanks, awesome post.

I am very skeptical of our intuition when it comes to the mind. We have gotten so much wrong that I would be surprised if it really turns out that there is something "special" going on in the brain to justify this difference.

1

Nameless1995 t1_ivpm198 wrote

I think as long as we are not purely duplicating the brain, there would always be something different (by definition of not duplicating). The question then becomes the relevance of the difference. I think there is some plausibility to the idea that some "formal" elements of the brain associated with cognition can be simulated in machines, but would that be "sufficient" for "understanding"? This question partly hinges on semantics. We can choose to define understanding in a way such that it's fully a matter of achieving some high-level formal functional capabilities (abstracting away from the details of the concrete "matter" that realizes the functions). There is a good case to be made that perhaps it's better to think of mental states in terms of higher-level functional roles than "qualitative feels" (which is not to say there aren't qualitative feels, but that they need not be treated as "essential" to mental states -- the roles of which may as well be realized in analogous fashion without the same feels or any feels). If we take such a stance, the point of having or lacking phenomenal feels (and phenomenal intentionality) becomes moot, because all that would matter for understanding would be a more abstracted level of formal functionalities (which may as well be computational).

If on the other hand, we decide to treat "phenomenal feels" (and "phenomenal intentionality") as "essential" to understanding (by definition -- again a semantics issue), then I think it's right to doubt whether any arbitrary realizations of some higher level abstracted (abstracted away from phenomenal characters) behavior forms would necessarily lead to having certain phenomenal feels.

Personally, I don't think it's too meaningful to focus on "phenomenal feels" for understanding. If I say "I understand 1+1=2" and try to reflect on what it means for me to understand that, the phenomenality of an experience seems to contribute very little if anything -- beyond serving potentially as a "symbol" marking my understanding (a symbol that is represented by me feeling a certain way; alternatively, non-phenomenal "symbols" may have been used as well) -- but that "feeling" isn't true understanding, because it's just a feeling. Personally, then, I find the best way to characterize my understanding is by grounding it in my functional capabilities to describe and talk about 1+1=2, talk about number theories, do arithmetic -- it then boils down to the possession of "skills" (which becomes a matter of degree).

It may be possible that biological material has something "special" that constitutes phenomenality-infused understanding, but these things are hard to make out given the problem of even determining public indicators for phenomenality.

1

[deleted] t1_ivppbkm wrote

I love philosophy but I admit I am very out of my element here haha. Never bothered with this subject.

From my naive understanding, the "mind" (which I already think is a bit of an arbitrary notion without a clear limit/definition) is composed of several elements, say X, Y and Z, each one with their own sub-elements. As you say, unless we build an exact copy we are not gonna have all the elements (or we might even have a partial element, say understanding, without all the sub-elements that compose it).

I think, for example, that whatever elements compose moral relevance are obviously lacking in modern-day AI. That is not controversial. So apart from this, I find it very uninteresting to try to figure out whether a machine can "understand" exactly like a human or not.

So I think as long as we stick with very precise language we can talk about it in a more meaningful way.

1

waffles2go2 t1_ivq5apy wrote

The "special" part is what we can't figure out. It's not any math that we are close to solving and I really don't think we're even asking the right questions.

1

waffles2go2 t1_ivq412x wrote

Millions of years of evolution yet we are going to figure it out with some vaguely understood maths that currently can solve some basic problems or produce bad artwork...

So pretty special....

1

[deleted] t1_ivq6ogt wrote

I am not claiming we are about to solve it, especially not in this field. I am claiming, though, that our intuitions have deceived us about certain concepts (see the entire personal identity debate) and it is very easy to think that we are "more" than we really are.

And we have some evidence of that: for example, in the personal identity debate, we need a certain intuition about our identity that's unifying across our life, even if it turns out to be a type of fantasy that's simply constructed this way because of its utility.

So I don't doubt that the same process is going on in our minds with concepts like consciousness and understanding.

1

waffles2go2 t1_ivq3j0p wrote

It's vague and wishy-washy because it's a working hypothesis that is looking for something better to replace it or augment it.

I'll agree its primary weakness is the vague distinction between what makes something "thinking" and what is simply a long lookup table, but this is the key question we need to riff on until we converge on something we can agree upon....

This is a way more mature approach to driving our thinking than the basic "maths will solve everything" when we freely admit we don't understand the maths....

1

billy_of_baskerville t1_ivqb6e1 wrote

>I think the biggest problem with the CRA and even Dneprov's game is that it's not clear what the "positive conception" (Searle probably elaborates in some other books or papers) of understanding should be. They are just quick to quip "well, that doesn't seem like understanding, that doesn't seem to possess intentionality and so on and so forth" but don't elaborate what they think exactly possessing understanding and intentionality is like, so that we can evaluate whether that's missing.

Well put, I agree.

1

ayananda t1_ivp9imv wrote

This is the first thing I started to think. The same argument goes for the notion of consciousness. If we define it too well, we can easily imagine a mechanical machine which has these attributes. There clearly is something wicked going on in these biological processes...

2

waffles2go2 t1_ivq1w4s wrote

LOL, I think you've got it totally backwards.

The Chinese Room assumes that the person explicitly DOES NOT understand Chinese but the "system" (the wall that masks this person) behaves as if it does to the person feeding it input.

You further support the argument in your final statement...

Was that really what you intended to do?

1

trutheality t1_ivqhe68 wrote

Yes, the premise of the thought experiment is that the man doesn't understand Chinese but the system "appears" to understand Chinese. This is used to argue that because the man doesn't understand Chinese, the apparent "understanding" is not "real." What I'm saying is that the reason the man doesn't understand Chinese is that the level of abstraction he's conscious of is not the level at which understanding happens, and I'm asserting that this is not a convincing argument against the system "truly" understanding Chinese as a whole.

3

Stuffe t1_ivyg2a7 wrote

>The fallacy in the Chinese room argument in essence is that it incorrectly assumes that the rule-following machinery must be capable of understanding in order for the whole system to be capable of understanding.

No one ever cares to explain this "understanding whole system" magic. You obviously cannot build a perfect sphere out of Lego, because there are no bricks with the needed rounded shapes. And likewise you cannot build "understanding" out of Lego, because there are no bricks with the needed "understanding" nature. Just saying "whole system" is no more an explanation than saying "magic".

1

PassionatePossum t1_ivnplxx wrote

The Chinese room argument always seemed like philosophical bullshit to me. Right from the start, it assumes that there is a difference between „merely following rules“ and „true intelligence“.

Whatever our brain is doing, it is also following rules, namely the laws of physics that govern how electrical charges build up and are passed to the next neurons. I hope that nobody is arguing that we couldn‘t simulate a brain at least in principle. Because suggesting otherwise would be to believe in magic.

And if there is no fundamental difference between following rules and intelligence, the whole argument just becomes silly since intelligence isn‘t something that is either there or not, it becomes a spectrum and the only interesting question remains at which point we define intelligence as „human-like“.

16

doesnotcontainitself t1_ivqlssr wrote

If I remember right, Searle himself holds that understanding relies on how the system of rules is physically and biologically implemented in an environment. Part of his conclusion is that a non-biological machine can’t understand (or be conscious). But there are plenty of phenomena like this; no magic needed.

Also, the argument isn’t assuming your distinction, it’s arguing for it from other premises that seem intuitive initially.

Does the argument succeed? Probably not, for reasons others have given. But you can’t dismiss it as magic and nonsense.

4

PrivateFrank t1_ivp7i5c wrote

>Right from the start, it assumes that there is a difference between „merely following rules“ and „true intelligence“.

It depends on how flexible those rules are, right? Are the rules a one to one lookup, or are there branching paths with different outcomes?

If the man in the room sees an incoming symbol, looks it up in the book, and sees only one possible output symbol, and sends that out, then he doesn't need to understand Chinese.

If he has more than one option of output, and needs to monitor the results of his output choices, then he's no longer just a symbol translator. He's now an active participant in shaping the incoming information. To get better at choosing symbols, he's going to have to learn Chinese!
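
As a rough sketch of that contrast (hypothetical code, not anything from the thread): a static one-to-one lookup versus an operator who keeps several candidate replies per symbol and reweights them using feedback from outside the rule system:

```python
import random

# Static reading: one symbol in, exactly one symbol out.
STATIC_BOOK = {"A": "X"}

# Adaptive reading: several candidate outputs per input, with weights the
# operator adjusts when feedback arrives from outside the rule system.
class AdaptiveOperator:
    def __init__(self):
        self.options = {"A": {"X": 1.0, "Y": 1.0}}

    def reply(self, symbol: str) -> str:
        weights = self.options[symbol]
        return random.choices(list(weights), weights=list(weights.values()))[0]

    def feedback(self, symbol: str, choice: str, reward: float) -> None:
        self.options[symbol][choice] += reward  # "that last choice was good"

op = AdaptiveOperator()
choice = op.reply("A")
op.feedback("A", choice, reward=1.0)
```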

1

waffles2go2 t1_ivq71kz wrote

The whole point of this thread is that it is VERY INTERESTING at which point we define intelligence.

So you're both totally wrong and correct in one post - glad you came around.

1

geneing t1_ivnp4o0 wrote

Why are we wasting time on this? Searle made a few subtle mistakes and played a few tricks.

  1. He never defines what "understand" means. Without a clear definition, he can play rhetorical tricks to support his argument.
  2. Is it really possible to translate from English to Chinese by just following a book of rules? Have you seen "old" machine translations that were basically following rules - it was trivial to tell machine translation from human translation.
13

red75prime t1_ivoiy3c wrote

  1. Sure. Take an ML translator algorithm and the weights and do all matrix multiplications and other operations by hand.
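
For concreteness, a toy sketch of what "by hand" amounts to for a single layer (the weights and sizes below are made up, not from any real translator): every step is plain multiply-accumulate arithmetic that a person could do on paper.

```python
# One layer of a network, written out as hand-doable arithmetic.
W = [[0.1, -0.3],
     [0.7,  0.2]]        # toy 2x2 weight matrix
b = [0.05, -0.1]         # toy bias vector
x = [1.0, 2.0]           # toy input vector

h = []
for i in range(len(W)):                 # one output unit at a time
    total = b[i]
    for j in range(len(x)):
        total += W[i][j] * x[j]         # multiply-accumulate, step by step
    h.append(max(0.0, total))           # ReLU, also doable on paper

print(h)  # [0.0, 1.0]
```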
1

Merastius t1_ivp2wc2 wrote

The part of the thought experiment that is deceptive is that, the simpler you make the 'rule following' sound, the more unbelievable (intuitively) it'll be that the room has any 'understanding', whatever that means to you. If instead you say that the person inside the room has to follow billions or trillions of instructions that are dynamic and depend on the state of the system and what happened before etc (i.e. modelling the physics going on in our brain by hand, or modelling something like our modern LLMs by hand), then it's not as intuitively obvious that understanding isn't happening.

3

mjmikulski t1_ivp4c5g wrote

> it was trivial to tell machine translation from human translation.

Isn't it anymore?

1

Nameless1995 t1_ivp5o2x wrote

> He never defines what "understand" means. Without a clear definition, he can play rhetorical tricks to support his argument.

Right.

> Is it really possible to translate from English to Chinese by just following a book of rules? Have you seen "old" machine translations that were basically following rules - it was trivial to tell machine translation from human translation.

Newer ones are still following rules. It's still logic gates and bit manipulation underneath.

1

PrivateFrank t1_ivp7x8b wrote

>Newer ones are still following rules. It's still logic gates and bit manipulation underneath.

Yeah, but at the same time the translation "logic" is being continuously refined through learning.

The book of rules is static in the old example.

1

Nameless1995 t1_ivp9j9b wrote

> Yeah, but at the same time the translation "logic" is being continuously refined through learning.

Yes by following more rules (rules of updating other rules).

> The book of rules is static in the old example.

True. The thought experiments make some idealization assumptions. Current programs need to "update" partly because they don't have access to the "true rules" from the beginning (that's why the older models didn't work as well either). But in the CRA, the book represents the true rules all laid bare. One issue is that in real life the rules of translation are themselves dynamic and can change as languages change. To address that, the CRA can focus on a specific instant of time (time t), take it as given that the true rules are available for time t, and consider the question of knowledge of Chinese at time t. (But yes, there may not even be true "determinate" rules -- because of statistical variation in how individuals use language. Instead there can be a distribution of rules, each aligning with real-life usage to varying degrees of fit. The book can then be treated as a set of coherent rules that belongs to a dense area of that distribution at time t.)

1

PrivateFrank t1_ivpaaxh wrote

>Yes by following more rules (rules of updating other rules).

But those rules are about improving performance of the translation according to some other benchmark from outside of the rule system.

Unless one of the Chinese symbols sent into the room means "well done that last choice was good, do it again, maybe" and is understood to mean something like that, no useful learning or adaptation can happen.

2

Nameless1995 t1_ivpc688 wrote

Right. The book in the CRA is meant to represent the true rules (or at least something "good enough" for a human with bilingual capabilities) at a given time, so the "need" for updating rules from feedback is removed (feedback is needed in practical settings because we are not in a thought experiment which stipulates some oracle access). The point is that the practical need of contemporary ML models for refinement (given the lack of magical access to the data-generating processes) doesn't entail the 'in principle' impossibility of writing down serviceable rules of translation for a specific time instance in a book.

1

PrivateFrank t1_ivpfnm2 wrote

Then this instant-time version of the CRA doesn't need understanding.

But you have to compare that to a human for the analogy to mean anything, and an instant-time human being is as empty as the CRA of understanding and intentionality.

1

Nameless1995 t1_ivpjf1l wrote

This is a bit of a "semantics" issue. As I also said in the other comment, it all boils down to semantics: how we decide to define "understanding" (a nebulous notion) in the first place (even intentionality is a very unclear concept -- the more I read papers on it, the more confused I become because of the many diverse ways people go around this concept).

If we define understanding such that it by definition involves "updating" information and such (which I think is a bit weird of a definition in terms of the standard usage of understanding in the community. Would an omniscient being with the optimal policy set up in theory be treated as incapable of understanding?), then yes, the vanilla CRA wouldn't understand, but that would not make any interesting claim about the capabilities of programs.

Either way, Searle's point in using the CRA was more for the sake of illustration, to point towards something broader (the need for some meaningful instantiation of programs to realize understanding properly). The point mostly stands for Searle for any formal programs (with update rules or not). In principle the CRA can be modified correspondingly (think of the person following rules of state transitions -- now the CRA can be allowed to have update rules from examples and feedback signals as well). But yes, then it may not be as intuitive to people whether the CRA would count as "not understanding" at that point. Searle was most likely trying to point towards "phenomenality", and how arbitrary instantiations of "understanding-programs" would not necessarily realize some phenomenal consciousness of understanding. But I don't think it's really necessary to even have phenomenal consciousness for understanding (although again, that's partly a semantic disagreement about how to carve out "understanding").

2

PrivateFrank t1_ivplvi0 wrote

Hey I'm not an ML guy, just someone with an interest in philosophy of mind.

Intentionality and understanding are first-person (phenomenological) concepts, and I think that's enough to have the discussion. We know what it is like to understand something or have intentionality. Intentionality in particular is a word made up to capture a flavour of first-person experience of having thoughts which are about something.

I think that to have "understanding" absolutely requires phenomenal consciousness. Or the "understanding" an AI has could be the same as how much a piece of paper understands the words written upon it. At the same time, none of the ink on that page is about anything - it just is. There's no intentionality there.

It's important to acknowledge the context at the time: there were quite a few psychologists, philosophers and computer scientists who really were suggesting that the human mind/brain was just passively transforming information like the man in the Chinese room. It's important not to let current ML theorists make the same mistake (IMO).

The difference between the CRA and what we can objectively observe about organic consciousness is informative about where the explanatory gaps are.

3

Nameless1995 t1_ivpyc47 wrote

> Intentionality and understanding and first-person (phenomonologistic) concepts, and I think that's enough to have the discussion. We know what it is like to understand something or have intentionality. Intentionality in particular is a word made up to capture a flavour of first-person experience of having thoughts which are about something.

> I think that to have "understanding" absolutely requires phenomenal consciousness. Or the "understanding" in an AI has could be the same as how much a piece of paper understands the words written upon it. At the same time, none of the ink on that page is about anything - it just is. There's no intentionality there.

That's exactly Searle's bigger point with the CRA. The CRA is basically in the same line as the Chinese Nation, Dneprov's game, paper simulations, and such arguments (not particularly novel). Searle's larger point is that although machines may understand (according to Searle, we are such machines), programs don't understand simply in virtue of the characteristics of the programs. That is, particular instantiations of programs (and we may be such instantiations) may understand depending on the underlying causal powers and connections instantiating the program, but not any arbitrary instantiation may understand (for example, "write symbols on paper" or "arrange stones" instantiations).

The stress is exactly that any arbitrary instantiation may not have the relevant "intentionality" (the point becomes muddied because of the focus on the simplicity of the CRA).

However, a couple of points:

(1) First, I think at the beginning of a disagreement it's important to set up the semantics. Personally, I take a distributionalist attitude towards language. First, I am relatively externalist about the meaning of words (meanings of words are not decided by solo people in the mind, but are grounded in how the word is used in public). Second, like Quine, I don't think most words have clean, determinate "meaning", especially words like "understanding". This is because there are divergences among how different individuals use words, and there may be no consistent unifying "necessary and sufficient" rules explaining the use of words (because some usages by different people may contradict each other). Such issues don't show up that much in day-to-day discourse, but they become more touchy when it comes to philosophy. So what do we do? I think the answer is something like "conceptual engineering". We can try to make suggestions about refinements and further specifications of nebulous concepts like "understanding" when it's necessary for some debate. Then it is up to the community as a whole to accept and normalize those usages in the necessary contexts or counter-suggest alternatives.

(2) With the background set up in (1), I believe "understanding"'s meaning is indeterminate. Now from a conceptual engineering perspective, I feel like we can go multiple different mutually exclusive branches in trying to refine the concept of "understanding". However, we need to have some constraints as well (we need to ideally keep the word's usage still roughly similar to how it is).

(3) One way to make the notion of "understanding" more determinate, is to simply stipulate the need of "intentionality" and "phenomenality" for understanding.

(4) I think making intentionality essential for understanding is fair (I will come back to this point), but I'm not sure whether "phenomenality" is as essential for understanding (not needed for the word to roughly correspond to how we use it).

(4.1) First, note that it is widely accepted in Phil. of Mind that intentionality doesn't require phenomenality. There is also a debate on whether intentionality ever has a phenomenal component at all. For example, Tye may argue that in a phenomenal field there are just sensations, sounds, images, imageries, etc., no "thoughts" representing things as being about something. When thinking about "1+1=2", Tye may say the thought itself is not phenomenally present; instead you would have just some phenomenal sensory experiences associated with the thought (perhaps a vague audio experience of 1+1=2, perhaps some visual experiences of 1, +1, =2 symbols in imagination, and so on). Functionalists can have a fully externalist account of intentionality. They may say that some representation in a mental state being "about" some object in the world is simply a matter of having the relevant causal connection (which can be achieved by any arbitrary instantiation of a program with a proper interface to the IO signals - "embedding it in the world") or the relevant evolutionary history (e.g. teleosemantics) behind the reasons for the selection of the representation-producer-consumer mechanism. This leads to the so-called causal theories of intentionality. They would probably reject intentionality as being some "internal" flavor of 1st-person experience, taking it instead to be grounded in the embodiment of the agent.

(4.2) Even noting Anscombe's work on intentional theories of perception, which was kind of one of the starting points for intentional theories -- she was pretty neutral on the metaphysics of intentionality and was trying to take a very anti-reificationist stance, even close to treating it more as a linguistic device. She also distinguished intentional content from mental objects. For example, if someone's worshipping has the intentional object Zeus (they worship Zeus), then it's wrong to say that the intentional content is "the idea of Zeus" (because it's wrong to say that the subject is simply worshipping the idea of Zeus; the subject's 'intention' is to worship the true Zeus, who happens not to exist - but that's irrelevant). This raises the question: what kind of metaphysical states can even constitute or ground this "subject taking the intentional content of her worship to be Zeus"? (The intentional content is not exactly the idea of Zeus, or the Zeus-imageries that the subject may experience -- but then what does this "intentionality" correspond to?) After thinking about it, I couldn't really come up with any sensible answer besides again going back to functionalism: the intentional object of the subject is Zeus because that's the best way to describe their behaviors and functional dispositions.

(4.3) Those aren't the only perspectives on intentionality, of course. There are, for example, works by Brentano and approaches to intentionality from core phenomenological perspectives. But it's not entirely clear whether there is some sort of differentiable "phenomenal intentionality" in phenomenality. I don't know whether I distinctly experience "aboutness" rather than simply having a tendency to use "object-oriented language" in my thinking (which itself isn't distinctly or obviously phenomenal in nature). Moreover, while my understanding certain concepts may "feel" phenomenally a certain way, that seems a very poor account of understanding. Understanding is not a "feeling", nor is it clear why having the "phenomenal sense" of "understanding xyz" is necessary. Instead, upon reflecting on what, for example, my "understanding of arithmetic" constitutes, I don't find it to be necessarily associated with my qualitatively feeling a certain way along with dispositions to say or think about numbers, +, - (they happen, and the "qualitative feeling" may serve as a "mark" signifying understanding -- "signifying" in a functional, correlational sense), but it seems to be most meaningfully constituted by the possession of "skills" (the ability to think arithmetically, solve arithmetical problems, etc.). This again leads to functionalism. If I try to think beyond that, I find nothing determinate in 1st-person experience constituting "understanding".

2

Nameless1995 t1_ivpyckd wrote

(5) Given that I cannot really get a positive conception of understanding beyond possessing relevant skills, I don't have a problem with broadening the notion of understanding to abstract out phenomenality (which may play a contingent (not necessary) role in realizing understanding through "phenomenal powers"). So on that note, I have no problem with allowing understanding (or at least a little bit of it) to a "system of paper + dumb-rule-follower + rules as a whole" producing empty symbols in response to other empty symbols in a systematic manner, such that it is possible to map the input and output in a way that makes it possible to interpret the whole as a function that does arithmetic. You may say that these symbols are "meaningless" and empty. However, I don't think "meaning" even exists beyond functional interactions. "Meaning" as we use the term simply serves as another symbol to simplify communication of complicated phenomena, wherein the communication itself is just causal back-and-forth. Even qualia to me are just "empty symbols" gaining meaning not intrinsically but from their functional properties in grounding reactions to changes in world states.

(6) Note I said "a little bit of it" regarding the understanding of arithmetic by the system "paper + dumb-rule-follower + some book of rules" as a whole. This is because I am taking understanding to be a matter of degree. The degree increases with an increase in relevant skills (for example, if the system can talk about advanced number theory (or has functional characteristics mappable to such talk) and about complex philosophical topics on the metaphysics of numbers, then I would count that as "deeper understanding of arithmetic").

(7) 5/6 can be counter-intuitive. But the challenge here is to find an interesting positive feature of understanding that the system lacks. We can probably decide on some functional characteristics, or some need for low-level instantiation details (beyond high-level simulation of computational formal relations), if we want (I personally don't care either way) to restrict paper+dumb-rule-followers simulating super-intelligence. But phenomenality doesn't seem too interesting to me, and even intentionality is a bit nebulous (and controversial; I also relate to intentional-talk being simply a stance that we take to talk about experiences and thoughts, rather than something metaphysically intrinsic in phenomenal feels (https://ase.tufts.edu/cogstud/dennett/papers/intentionalsystems.pdf)). Some weaker notion of intentionality can definitely be allowed already in any system behavior (including a paper-based TM simulation, as long as it is connected to a world for input-output signals). Part of the counter-intuitive force may come from the fact that our usage of words, and our "sense" that a word x applies in context y, can be a bit rooted in internal statistical models (the "intuition" that it doesn't feel right to say the system "understands" is a feeling of ill-fittingness due to our internal statistical models). However, if I am correct that the words have no determinate meaning -- they may be too loose or even have contradicting usages -- then in our effort to clean them up through conceptual engineering it may be inevitable that some intuition needs to be sacrificed (because our intuitions themselves can be internally inconsistent). Personally, considering both ways, I am happier to bite the bullet here and allow paper+dumb-writer systems to understand things as a whole even when the individual parts don't, simply following from my minimalist definition of understanding revolving around high-level demonstrable skills. I feel like more idealized notions are hard to define and get into mysterian territory while also unnecessarily complicating the word.

2

waffles2go2 t1_ivqahzo wrote

Excellent and thoughtful :)

CRA pushes our thinking as a useful tool.

Semantics can get refined as we push our ideas - we are just starting to understand.

1

waffles2go2 t1_ivqb7pe wrote

Searle is pushing the ball forward. I love this topic but most in this thread don't seem to appreciate that this thinking is evolving, understands its weaknesses, and is trying to address them.

God and the "chemicals is math so the brain is math" logic is so "community college with a compass point on it"....

1

geneing t1_ivplj76 wrote

The new machine learning based translators don't really have a set of rules. They essentially learn probabilities of different word combinations. (e.g. https://ai.googleblog.com/2019/10/exploring-massively-multilingual.html), which we could argue should count as "understanding" (since Searle didn't define it clearly).

0

Nameless1995 t1_ivpnf6f wrote

> They essentially learn probabilities of different word combinations.

This isn't dichotomous with having a set of rules. The rules operate at a deeper (less interpretable -- some may say "subsymbolic") level compared to GOFAI. The whole setup of model + gradient descent corresponds to having some update rules (based on partial differentiations and such). In practice they aren't fully continuous either (though in theory they are) because of floating point approximations and the underlying digitization.
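
To make that concrete, here is a hedged toy sketch (one parameter, invented numbers) of how a single gradient-descent step is itself just a fixed arithmetic rule applied mechanically:

```python
# One SGD step on a one-parameter model y ≈ w * x, spelled out as a rule.
w = 0.5            # current weight
x, y = 2.0, 3.0    # one training example
lr = 0.1           # learning rate

pred = w * x                     # forward pass
grad = 2 * (pred - y) * x        # derivative of the squared error (pred - y)**2
w = w - lr * grad                # the update rule: deterministic arithmetic

print(w)  # 1.3 -- the weight moved toward fitting y = w * x
```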

2

waffles2go2 t1_ivq94uo wrote

We're "wasting our time" because you clearly don't understand the theory. We are trying to prove/disprove and what you offer is somewhat slow thinking...

1 - Captain obvious says "everyone knows this" - it seems that you somehow believe this is "done" and Searle (and others) are trying to push it forward and iterate.

Why do you hate the scientific process or did you invent something better?

2 - Derp - your own opinion, not something Searle offers, nor is it relevant if you understand the problem (it is given that you cannot tell the difference between someone who understands Chinese and the output of the "searcher").

Overall, great evidence as to why we need more discussion on this topic.

Great job!

1

Flag_Red t1_iw1hqzr wrote

This comment is unnecessarily hostile.

1

visarga t1_ivnccvy wrote

Putting an LLM on top of a simple robot makes the robot much smarter (PaLM-SayCan). The Chinese Room doesn't have embodiment - was it a fair comparison? Maybe the Chinese Room on top of a robotic body would be much improved.

The argument tries to say that intelligence is in the human, not in the "book". But I disagree; I think intelligence is mostly in the culture. A human alone, who grew up alone, without culture and society, would not be very smart or able to solve tasks in any language. Foundation models are trained on the whole internet today. They display new skills. It must be that our skills reside in the culture. So a model learning from culture would also be intelligent, especially if embodied and allowed to have a feedback control loop.

7

ReasonablyBadass t1_ivnhqt5 wrote

People still talk about the Chinese Room? But it's so nonsensical.

It's like saying: a CPU can't play Pong, therefore a CPU plus a program to play Pong can't play Pong.

6

Thorusss t1_ivnwgsq wrote

My take:

The person inside does indeed not understand Chinese, but the whole system of the room including the human and all instructions does.

The Chinese Room is a boring, flawed argument that is only considered relevant by people who get tricked into confusing parts of the system with the whole thing.

5

PrivateFrank t1_ivp8cak wrote

> The Chinese Room is a boring, flawed argument that is only considered relevant by people who get tricked into confusing parts of the system with the whole thing.

Are your fingers part of the system, or your corneas? Once you claim the "whole system does X", you need to say what is and is not part of that system.

Chalmers' "extended mind" suggests that "the system of you" can also include your tools and technologies, and other people and entire societies.

1

Nameless1995 t1_ivpf7g8 wrote

Friston's characterization of qualia as knowledge about internal states is how I treat them too. But it still doesn't explain why the knowledge "feels like" something, instead of the "information access" being realized simply by possessing blind dispositions to act and talk in a certain manner (including the use of "qualia" language serving as an information bottleneck to simplify complex internal dynamics and allow simpler communication within intra-mind components and among inter-minds).

Ultimately I don't think there is anything mysterious. The formal functional characteristics that we find in mathematical forms are ultimately realized by concrete physical things with physical properties (whatever "physicality" may be -- which may turn out to be idealistic in essence). One of the properties of certain physical configurations may be qualitativity. Qualitativity need not be "essential" for the realization of cognitive forms (or access to internal state information), but is simply one "way" the realization happens in biological (and possibly non-biological) entities. Whether similar forms of realization can happen in silicon is then up to a theory of consciousness that accounts for which physical configurations, and what exactly, lead to qualitative dispositions as opposed to non-feel dispositions.


I also don't think Bishop's answer to the need for phenomenality answers anything. Why should uncertainty reduction require phenomenality per se? You can just as well have a higher-order decision mechanism that experiments, makes discrete actions in the world, intervenes and so on to reduce the search space without any appeal to phenomenality. I think it's possible that that is how our "phenomenal consciousness" contingently works at a certain level, but it doesn't mean there could not have been alternatives without phenomenality to realize some uncertainty-reduction functionality (especially if we can make a mathematical model of this process).

2

billy_of_baskerville t1_ivqbeaj wrote

Thanks for posting!

Just in case people in the community are interested, I also wrote a blog post recently on a related subject, namely whether large language models can be said to "understand" language and how we'd know: https://seantrott.substack.com/p/how-could-we-know-if-large-language

There are at least two opposing perspectives on the question, and one of them (the "axiomatic rejection view") basically adopts the Searle position; the other (the "duck test view") adopts a more functionalist position.

2

tomvorlostriddle t1_ivo10dg wrote

>The Chinese Room Argument is an argument against the idea that a machine could ever be truly intelligent. It is based on the idea that intelligence requires understanding, and that following rules is not the same as understanding.

You're always following rules, just different ones.

For example you can also remember that on a road bike

  • small levers shift to small gears
  • big levers shift to big gears

you are following rules, but it's a parsimonious representation, easy to remember and intuitive

most people would call that more intelligent than to remember

  • small right lever makes driving harder
  • big right lever makes driving easier
  • small left lever makes driving easier
  • big left lever makes driving harder

It's more wasteful to remember it like this, counterintuitive, but is it qualitatively something completely different than the other way?
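
A toy contrast of the two encodings (illustrative only, with assumed names):

```python
# Parsimonious rule: one relation covers every lever.
def shift(lever_size: str) -> str:
    return f"shift to the {lever_size} gears"   # small -> small, big -> big

# Verbose rule book: one memorized entry per (side, size) combination.
EFFECT = {
    ("right", "small"): "driving gets harder",
    ("right", "big"):   "driving gets easier",
    ("left",  "small"): "driving gets easier",
    ("left",  "big"):   "driving gets harder",
}

print(shift("small"), "|", EFFECT[("right", "small")])
```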

1

aozorahime t1_ivry04l wrote

Intelligence is quite a vast word IMHO. We understand something because we learn from someone or some object that we get exposed to. The more we learn, the more we understand something. Does a machine have emotion? I guess not, unless we want it to (by giving it the training to do so). That's the difference between humans and machines. What I believe is that current machine learning or artificial intelligence is meant to assist humans in solving problems that humans would need time to solve.

1

Ricenaros t1_iwdldik wrote

I relate to the person in the room so hard. I often feel like I have no idea what I'm doing

1

Philience t1_ivoav4e wrote

This thought experiment is designed around the classical computational theory, where computers did follow simple rules. Modern machine learning systems are dynamic, chaotic systems that do not follow rules the way classical computers do. They learn, they make mistakes, they create representations on different levels. In many ways, they are like real biological systems that can understand.
Thus I think the Chinese Room cannot evoke intuitions relevant to modern or future artificial candidates for intelligence.

0

timscarfe OP t1_ivowy2o wrote

This is not true at all, and not relevant for this discussion

Neural networks are finite state automata (FSAs); they are very simple computation models (far simpler than normal computer programs, which are at the top of the Chomsky hierarchy, i.e. Turing machines).

See https://arxiv.org/abs/2207.02098 for a primer "Neural Networks and the Chomsky Hierarchy"

2

Philience t1_ivoywj3 wrote

What part do you think is false?

What do you mean by "simple"? Trying to predict each state of a neural network is almost impossible. Neural networks are usually seen as prime examples of complex cognitive models (and are not computational models), even though the algorithm that emulates a neural network in a computer, of course, does computation. Neural networks do not multiply matrices. Your computer does.

I admit whether they are chaotic or dynamic systems depends on the model and the scale of the model. And most models might be none of the above.

1

rx303 t1_ivownm4 wrote

Intelligence is not a binary variable. It is merely a measure of capability to discover long-range dependencies. The further the planning horizon, the smarter the agent.

0

waffles2go2 t1_ivq095j wrote

LOL, I love the Chinese Room debate, for those who think it's simplistic, I suggest they stick to coding....

Great post and I love reading the dense responses.

0