>> In 1950, Alan Turing took on this question. He thought the question "Can machines think?" was just hopelessly vague, really difficult for us to get our heads around, and that maybe a better question to ask was: when can a machine be mistaken for a real thinking person? So, Turing proposed the following thought experiment. Imagine that you're in a room facing a barrier, and behind the barrier there's a human and there's a computer. You can ask the human and the computer questions, but you don't know which one is which. You don't know which is the human and which is the computer. Your task is to work out which is which just from the answers you get to the questions that you ask. Now, the questions can be about anything. You could ask: where is the best place to buy wallpaper? How do you feel about the current government? Do you like ducks? Whatever. And you take the answers that you get. Turing says that when we get to the point where we cannot decide which is the human and which is the machine, or when our decision about which is which becomes arbitrary, then we have managed to build a machine that's capable of mentality, a machine that can think. It can think because it's been able to trick another human being. This is a machine that's reached the appropriate level of functional complexity to count as having a mind.

Now, if you want to see a fun cultural reference to this test, you can look at Blade Runner, where Harrison Ford's character is sent to find the replicants in a human society. There are robots in this society which look and act a lot like humans, and it's Harrison Ford's job to weed them out. So, he applies something like the Turing test: he asks the machines and the humans lots of different questions to try to work out whether what he's talking to is a replicant, a robot pretending to be a human, or a human.

So, how good is the Turing test for testing whether a machine can think? There are some problems that philosophers have leveled at it. First of all, it's language-based, so all it tests for is an intelligence that can communicate via language. We couldn't, for instance, check for animal intelligence if those animals couldn't speak a language. The second problem people have raised is that perhaps it's too anthropocentric. What we're testing for is whether a machine can be mistaken for a human being. That means we're testing for human intelligence, and surely it seems very chauvinistic to think that the only intelligence worth studying is human intelligence. There could be many different forms of intelligence out there which could be realized by a machine but which we're not giving an opportunity to shine through using the Turing test. The third problem with the Turing test is that it doesn't take into account the inner states of the machine. Take, for example, a machine that's doing a sum, and you ask it: what's 2 plus 8? You could have a machine that does some sort of calculation: it takes 2 plus 8, adds the two together, and gives you the answer, 10. But you could also have a machine programmed in such a way that when it receives the input "2 plus 8", it rattles through its files, comes to a file which says "2 plus 8" on it, pulls it out, opens it up, finds "10", and throws out the answer, 10.
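To make the contrast concrete, here is a minimal Python sketch; the function names and the two-entry file store are purely illustrative, not anything from Turing. The first machine arrives at its answer by going through a process; the second only fetches a stored answer.

```python
# Two toy machines that both answer "what's 2 plus 8?":
# one computes the answer, the other merely retrieves it.

def computing_machine(a: int, b: int) -> int:
    # Arrives at the answer by actually carrying out an addition.
    return a + b

# A pure retrieval machine: every question it can ever answer must be
# stored in advance; nothing resembling calculation takes place.
file_store = {
    "2 plus 8": "10",
    "3 plus 4": "7",
    # ... one entry per question it will ever be asked
}

print(computing_machine(2, 8))   # 10, produced by a process
print(file_store["2 plus 8"])    # "10", merely fetched from a file
```

From the outside, of course, the two machines are indistinguishable: both print the same answer.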
Now, in the first instance, we want to say that the machine is doing some kind of computation, going through some kind of thought process, whereas the second machine is just retrieving a file. If it were possible to build a machine that could pass Turing's test just using a file system, then surely this is a bit worrying, because we don't seem to have any thought processes at all; we just have a machine that can retrieve lots of different files and spit out the answers.

Now, this isn't necessarily a killer flaw in Turing's plan. First of all, you could argue that a machine that just pulls out files according to the questions it was asked probably wouldn't be able to pass the test. It would have to have such a gigantic file store and such a huge database-searching system that it just wouldn't be the kind of thing that could pass the test. Secondly, for any machine that could pass the Turing test, perhaps we don't want to go all the way and say that it has mentality, that it has thought processes, that it has mental states. Maybe what we should say instead is: look, if a machine can pass the Turing test, then it must be worthwhile looking at how it's structured. What kind of thing could pass the Turing test? Surely that, in itself, is going to give us an insight into what's required for human mentality.

So, coming back to the thought about whether minds are really like computers: imagine that we did build a computer that could pass the Turing test. Would it properly count as having a mind? The philosopher John Searle had some very interesting things to say about this. He introduced the following thought experiment. Imagine that you are in a room with two slots in the wall: one labelled I, for input, and one labelled O, for output. Symbols on bits of paper come through the I slot. In your room you have a book, and the book is a list of rules about what to do depending on the type of symbol you receive. A rule might say: if you receive a symbol of type A, put out a symbol of type B. So you have a steady stream of symbols coming in, you look up in your book what you should be putting out, and, let's imagine, you have a whole stack of these symbols available to you in the room. You receive a symbol, you check in your book what you should send through the O slot, you pick up the right symbol, and you push it out through the O slot.

Now imagine that, unknown to you, these symbols are actually Chinese, and the person feeding in the symbols is a Chinese speaker who is asking you questions. What's happening is that you're receiving questions and you're giving out answers. And the rule book is such that you're actually giving out really quite coherent answers; in fact, you're realizing something like a machine that would pass Turing's test. The Chinese speaker outside really believes that she's conversing with another Chinese speaker. Now, the big question Searle asks is this: I, the person inside the room, don't understand Chinese. Even though I'm able to converse with this Chinese speaker in a way that seems perfectly coherent to the person outside, I myself have no understanding of what I'm doing. It doesn't make sense to say I understand Chinese, because I don't. All I'm doing is looking up rules in this rule book.
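The whole procedure inside the room can be pictured as a simple lookup. Here is a minimal Python sketch, with made-up placeholder symbols standing in for the Chinese characters:

```python
# A minimal sketch of the Chinese Room. The operator's whole job is a
# lookup in the rule book: no meaning is consulted at any point.

RULE_BOOK = {
    "symbol-A": "symbol-B",   # "if you receive a symbol of type A, put out type B"
    "symbol-C": "symbol-D",
}

def room_operator(incoming_symbol: str) -> str:
    # Checks the rule book and returns the prescribed output symbol.
    # The operator never knows what either symbol means.
    return RULE_BOOK[incoming_symbol]

# From outside, coherent answers come back through the O slot;
# from inside, there is only rule-following.
print(room_operator("symbol-A"))   # -> "symbol-B"
```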
And Searle points out that this is just how computers work. Computers receive inputs, and their program is like the rule book: the program tells them what sort of output they should give. They run through a list of rules and see that, given a particular input, they should give a particular output. But Searle's point is that such a machine can never itself be said to understand Chinese. That machine isn't thinking; that machine is just putting on a very good simulation of a thinker. A simulation so good that it could fool someone on the outside, but a simulation that can't possibly count as thinking itself, because there is no understanding involved.

And this is one of the most important objections that has been leveled at the computational theory of the mind. If we're saying minds are like computers, that minds take inputs, manipulate them, and send out outputs, then where does the meaning come in? This takes us back to one of our opening questions, the aboutness of thoughts. We want to say that our thoughts are about things, that our thoughts have meaning, that we can think about things. And yet, what we have on this computational view of the mind is a system that has no understanding at all. There's no meaning involved. The system just receives inputs and sends out outputs. At no point do we have any understanding.

What's made really salient by Searle's thought experiment is that computers only work on the syntactic properties of symbols. What does this mean? Well, imagine you have a square with a line through it, and that's a symbol. Let's say that symbol is Chinese for the question: do you like ducks? You're the person inside the Chinese room and you get the symbol in. All you have to work on are the syntactic properties of the symbol. The syntactic property is just the square with the line: the shape of the symbol, if you like. You look at the shape of the symbol and you give an answer depending on what that shape is. Now, the important thing about computers is that all they ever operate on are the shapes of symbols. Computers don't need to understand what those symbols stand for. The computer doesn't need to understand that a square with a line through it means the question "do you like ducks?"; all it needs to be able to do is run its program so that it knows what the correct output is.

But this is misleading, because what's very important for human thought are the semantic properties. Semantic properties are what a symbol stands for. So the square with a line through it stands for the question: do you like ducks? What it stands for, its meaning, is known as the semantic property of that symbol. As humans, we can look at that symbol and, provided we understand the code, we know that it stands for the question: do you like ducks? But the computer doesn't need to know that. The computer just needs to know how it ought to transform that symbol and which output to give. This is Searle's point. He says that a system that only ever works on the syntactic properties of symbols, the shapes it receives, will never be a proper thinking system, no matter how functionally complex it is, even if it can pass something like the Turing test, because it will never have aboutness. It will never be able to understand the semantic value of those symbols.
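One way to see the gap is another minimal Python sketch; the symbol names and the MEANINGS table here are my own illustration. The program's output is fixed entirely by a syntactic table, and the semantic table never enters the computation at all.

```python
# Syntax versus semantics, as a sketch. The program's behaviour depends
# only on the shape of the symbol it receives (here, which string it is).
# The meanings exist only for us, the human readers; the program never
# consults them.

MEANINGS = {                      # semantic properties (for our eyes only)
    "square-with-line": "Do you like ducks?",
}

SYNTAX_RULES = {                  # syntactic properties: shape in, shape out
    "square-with-line": "yes-symbol",
}

def computer(symbol_shape: str) -> str:
    # Operates on the shape alone; MEANINGS plays no role anywhere here.
    return SYNTAX_RULES[symbol_shape]

print(computer("square-with-line"))   # "yes-symbol", given without understanding
```

You could delete the MEANINGS table and the program would behave exactly the same, which is precisely Searle's worry.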
And intuitively, in order to have a thinking system, you have to have a system that understands the semantic properties of symbols as well as the syntactic ones. That's how we get the aboutness of thoughts.