It is a bit long, but it is useful for following the discussion.
Intentionality and Consciousness
An important feature of the majority of mental states is that they have an “intentional” structure: they are intrinsically about, or directed toward, something. (Intentionality in this sense is distinct from the ordinary quality of being intended, as when one intends to do something.) Thus, believing is necessarily believing that something is the case; desiring is necessarily desiring something; intending is necessarily intending to do something. Not all mental states are intentional, however: pain, for example, is not, and neither are many states of anxiety, elation, and depression.
Speech acts are intentional in a derivative sense, insofar as they express intrinsically intentional mental states: each speech act conveys a psychological state (a belief, desire, intention, and so on) with a propositional content. According to Searle, the derived intentionality of language accounts for the apparently mysterious capacity of words, phrases, and sentences to refer not only to things in the world but also to things that are purely imaginary or fictional.
Although not all mental states are intentional, all of them, in Searle’s view, are conscious, or at least capable in principle of being conscious. Indeed, Searle maintains that the notion of an unconscious mental state is incoherent. He argues that, because consciousness is an intrinsically biological phenomenon, it is impossible in principle to build a computer (or any other nonbiological machine) that is conscious. This thesis runs counter to much contemporary cognitive science and specifically contradicts the central claim of “strong” artificial intelligence (AI): that consciousness, thought, or intelligence can be realized artificially in machines that exactly mimic the computational processes presumably underlying human mental states.
The Chinese Room Argument
In a now classic paper published in 1980, “Minds, Brains, and Programs,” Searle developed a provocative argument to show that artificial intelligence is indeed artificial. Imagine that a person who knows nothing of the Chinese language is sitting alone in a room. In that room are several boxes containing cards on which Chinese characters of varying complexity are printed, as well as a manual that matches strings of Chinese characters with strings that constitute appropriate responses. On one side of the room is a slot through which speakers of Chinese may insert questions or other messages in Chinese, and on the other is a slot through which the person in the room may issue replies. The person in the room, using the manual, acts as a kind of computer program, transforming one string of symbols introduced as “input” into another string of symbols issued as “output.” Searle claims that even if the person in the room is a good processor of messages, so that his responses always make perfect sense to Chinese speakers, he still does not understand the meanings of the characters he is manipulating. Thus, contrary to strong AI, real understanding cannot be a matter of mere symbol manipulation. Like the person in the room, computers simulate intelligence but do not exhibit it.
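To make the structure of the thought experiment concrete, here is a minimal, purely illustrative sketch in Python of the room's "manual" as a lookup table. The particular strings, the RULEBOOK name, and the room_occupant function are invented for this example and are not part of Searle's paper; the point is only that such a program pairs input strings with output strings by their shape alone, which is the kind of bare symbol manipulation Searle argues cannot amount to understanding.

```python
# A toy model of the Chinese Room: the "manual" is a table pairing input
# symbol strings with output symbol strings. The entries below are invented
# placeholders chosen for illustration; nothing in the program represents,
# stores, or consults what any of the strings mean.

RULEBOOK = {
    "你好吗": "我很好",        # hypothetical rule: one string triggers another
    "今天天气如何": "天气很好",  # hypothetical rule: likewise, matched by shape only
}

def room_occupant(message: str) -> str:
    """Return whatever response the manual pairs with `message`.

    The lookup operates on the symbols alone, just as the person in the
    room matches characters against the manual without knowing Chinese.
    """
    return RULEBOOK.get(message, "对不起")  # fallback reply, also just a symbol

if __name__ == "__main__":
    for msg in ["你好吗", "今天天气如何"]:
        print(msg, "->", room_occupant(msg))
```

However fluent such a system's replies might appear to a Chinese speaker, on Searle's view nothing in it understands them, because the table lookup never involves the meanings of the characters.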
The Chinese room argument has generated an enormous critical literature. According to the “systems response,” the occupant of the room is analogous not to a computer but only to a computer’s central processing unit (CPU). He does not understand Chinese because he is only one part of the computer that responds appropriately to Chinese messages. What does understand Chinese is the system as a whole, including the manual, any instructions for using it, and any intermediate means of symbol manipulation. Searle’s reply is that the other parts of the system can be dispensed with. Suppose the person in the room simply memorizes the characters, the manual, and the instructions so that he can respond to Chinese messages entirely on his own. He still would not know what the Chinese characters mean.
Another objection claims that robots consisting of computers and sensors and having the ability to move about and manipulate things in their environment would be capable of learning Chinese in much the same way that human children acquire their first languages. Searle rejects this criticism as well, claiming that the “sensory” input the computer receives would also consist of symbols, which a person or a machine could manipulate appropriately without any understanding of their meaning.