Submitted by timscarfe t3_yq06d5 in MachineLearning
Merastius t1_ivp2wc2 wrote
Reply to comment by red75prime in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
The deceptive part of the thought experiment is that the simpler you make the 'rule following' sound, the more unbelievable it is (intuitively) that the room has any 'understanding', whatever that means to you. If instead you say that the person inside the room has to follow billions or trillions of instructions that are dynamic and depend on the state of the system and on what happened before (i.e. modelling the physics going on in our brain by hand, or modelling something like our modern LLMs by hand), then it's not as intuitively obvious that understanding isn't happening.