
The Chinese Room Experiment

Quite recently, while browsing and reading up on people's thoughts on Asperger's, I came across someone posing the question: "How do you make sense of the world?"

Hm… how do I? Perhaps I don't make sense of the world. It's not the most logical place. But all that aside, it made me think about something else that makes for an interesting comparison.

A few decades ago a philosopher by the name of John Searle thought he had an answer to Alan Turing's Turing test. That test was meant to determine whether a machine could show intelligent behaviour on a par with a human.

The experiment Searle thought up, really a thought experiment, is called the Chinese Room. It points out that there is a difference between strong and weak artificial intelligence.

The difference between strong and weak AI lies in the fact that weak AI merely follows rules, while strong AI would actually learn and adapt rather than just adhere to the rules it was given. One could probably conclude that strong AI would come closest to the human brain in terms of adaptability and the ability to learn. Currently, when people talk about AI, they usually mean strong AI. So much for a short background on strong and weak AI. The more you know…
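
To make that distinction a bit more concrete, here is a minimal sketch in Python; the phrases and rules are entirely made up for illustration. The "weak" responder only ever applies its fixed rules, while the "learning" one can pick up a new rule when corrected. Real learning systems do far more than add entries to a table, of course; this only illustrates the follow-versus-adapt contrast.

# A minimal sketch of the rule-following vs. learning distinction described
# above; the phrases and rules are invented purely for illustration.

class WeakResponder:
    """Follows a fixed rule set; anything outside the rules is a dead end."""
    RULES = {"hello": "hi", "how are you?": "fine, thanks"}

    def respond(self, prompt: str) -> str:
        return self.RULES.get(prompt, "...")  # no rule, no answer

class LearningResponder(WeakResponder):
    """Starts from the same rules, but can add new ones when corrected."""
    def __init__(self) -> None:
        self.rules = dict(self.RULES)  # its own, growable copy of the rule book

    def respond(self, prompt: str) -> str:
        return self.rules.get(prompt, "...")

    def learn(self, prompt: str, better_answer: str) -> None:
        self.rules[prompt] = better_answer  # adapt: next time there is a rule

weak, adaptive = WeakResponder(), LearningResponder()
print(weak.respond("nice weather"))        # "..." (and it always will be)
print(adaptive.respond("nice weather"))    # "..."
adaptive.learn("nice weather", "indeed")   # feedback creates a new rule
print(adaptive.respond("nice weather"))    # "indeed"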

So, with that out of the way, let’s move on to the experiment itself and what it entailed.

Imagine: a room, a person, and a manual, written in that person's native tongue, for handling Chinese symbols. The person has no knowledge of the Chinese language at all. The room also has two slots: one through which someone outside passes input in, and another that serves as the output. You take what comes in, in Chinese, do something with it according to the set of instructions, and then slide the result out through the output slot, again in Chinese.

Basically, you, as a person, would be doing the same thing a computer program does; just on a slightly bigger scale, and perhaps at a slower pace.
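
To put that in programming terms, here is a minimal sketch in Python of what the person in the room is doing, assuming a couple of invented rule pairings: pure table lookup driven by the manual, with understanding never entering into it.

# A minimal sketch of the room itself: symbols come in through one slot,
# the rule book dictates what goes out the other, and at no point does
# anyone need to understand what the symbols mean. The pairings are invented.

RULE_BOOK = {
    "你好吗?": "我很好。",          # hypothetical pairing: question -> scripted reply
    "你叫什么名字?": "我叫小王。",   # another invented pairing
}

def chinese_room(symbols_in: str) -> str:
    """Match the incoming symbols against the rule book and slide out
    whatever it dictates; comprehension never enters into it."""
    return RULE_BOOK.get(symbols_in, "对不起。")  # the fallback reply is also just a rule

print(chinese_room("你好吗?"))  # prints the scripted reply, "understood" by no one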

The room would pass the Turing test as being "intelligent", yet one could not say with certainty that whatever is inside is a human, or at least that it has the same kind of intelligence a human has, if it were a computer. After all, interpreting symbols, or more generally, interpreting images, in many cases has little to do with understanding; it's just comparing information.

Perhaps some of you have already connected the dots as to why I bring this comparison up.

Experiencing the world, or "making sense of the world", most likely works for some of us in a way similar to weak AI. We don't understand it, but interpret it according to a set of arbitrary rules. That perhaps coincides with our difficulty adapting, since the world keeps presenting new situations for which we haven't been given any rules (yet). And social dynamics, to name one area, are full of unwritten rules.

So, really, how do I make sense of the world? Obviously, I'm not hauling around a big book of rules to interpret the input and produce some output, but I find there's a certain truth to the notion of weak AI and the way AS works for me. Plenty of stuff just goes totally over my head. Being able to respond in an appropriate manner does not equal intelligence in that regard; being able to mimic, while it surely is something one needs a brain for, is actually something even a computer, or any other machine running the right software, could do. So in that regard, how "human" are you if you mimic social dynamics?

That in turn makes me wonder whether you can expect and assume everyone to be intelligent as such. I'm not discrediting myself or my intelligence, but the notion that someone's intelligence can be on par with so-called "weak AI" in specific areas, rather than overall "strong AI", makes me wonder about intelligence in general, be it in people or machines.

Comments

I suspect the Turing Test is more a measure of how easily people can be fooled than an indicator of true artificial intelligence. I have to admit some of the computers answering phones are pretty good, but I can still tell when I am talking to a machine and when I am talking to a human.
 
