Submitted by timscarfe t3_yq06d5 in MachineLearning
tomvorlostriddle t1_ivo10dg wrote
>The Chinese Room Argument is an argument against the idea that a machine could ever be truly intelligent. It is based on the idea that intelligence requires understanding, and that following rules is not the same as understanding.
You're always following rules, just different ones.
For example, on a road bike you can remember that
- small levers shift to small gears
- big levers shift to big gears
You are still following rules, but it's a parsimonious representation: easy to remember and intuitive.
Most people would call that more intelligent than remembering
- small right lever makes pedaling harder
- big right lever makes pedaling easier
- small left lever makes pedaling easier
- big left lever makes pedaling harder
Remembering it this way is more wasteful and counterintuitive, but is it qualitatively different from the other way?
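The point that both rule sets prescribe the same behavior can be sketched in code. This is a minimal illustration in Python; the function names, the `EFFORT` table, and the "harder"/"easier" labels are my own framing of the comment's two tables, not anything from the thread:

```python
# Two encodings of the same road-bike shifting rules.

# Parsimonious: one rule for both shifters -- lever size maps to gear size.
def shift_parsimonious(side, lever_size):
    return lever_size  # small lever -> small gear, big lever -> big gear

# Wasteful: four separate side-specific rules about pedaling effort.
EFFORT = {
    ("right", "small"): "harder",   # small rear cog
    ("right", "big"):   "easier",   # big rear cog
    ("left",  "small"): "easier",   # small chainring
    ("left",  "big"):   "harder",   # big chainring
}

# Effort implied by the parsimonious rule, given basic bike mechanics:
# a smaller rear cog or a bigger chainring makes pedaling harder.
def effort_from_gear(side, gear_size):
    if side == "right":
        return "harder" if gear_size == "small" else "easier"
    return "harder" if gear_size == "big" else "easier"

# Both representations agree on every input.
for side in ("right", "left"):
    for lever in ("small", "big"):
        assert effort_from_gear(side, shift_parsimonious(side, lever)) == EFFORT[(side, lever)]
```

The two encodings are extensionally identical; they differ only in how compactly the rules are stated, which is the comment's point.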