There has been a rather disturbing trend of late. This trend involves technology development ‘spectators’ casually waltzing into any given field, mistakenly declaring some minor aspect of said field an issue, resolving said issue (usually through some kind of semantic redefinition), and then having the general public declare these misanthropes geniuses.
In no field do these leeches of academic spirit irritate me more than in AI. The particular argument I am well sick of reading can best be summed up by a recent article posted on ‘The Online Investing AI Blog’, aptly titled “There is no such thing as Artificial Intelligence”. The basic argument is that these machines display real intelligence; there is nothing ‘artificial’ or ‘fake’ about the intelligence they are exhibiting. Also, by calling these machines and the programs that drive them ‘artificial’ we are, as a community, somehow cheapening the intelligence they do show. (And may eventually hurt their feelings.)
The confusion here is more than just linguistic; it is a misunderstanding driven by the technology spectators’ misinterpretation of what AI is and where it genuinely stands today. The term ‘Artificial Intelligence’ was coined more than half a century ago. The field was first brought firmly into the academic environment by Alan Turing in the famous paper ‘Computing Machinery and Intelligence’. This paper described the kind of programs that were considered ‘intelligent’ at the time, and presented a testing scenario to examine what level of intelligence these programs might genuinely have. The motivation for calling this intelligence ‘Artificial’ was not to minimize the potential or reduce the relevance of said intelligence, but to separate the academic debate on computer intelligence from the complicated and much-discussed biological intelligence debate that was then, and still is now, very prevalent in philosophical literature.
After Turing’s paper was dropped on the academic world, the field of AI very quickly broke down into many different subfields dealing with many different approaches to creating AI. First, the Functionalism debate broke out amongst philosophers and cognitive psychologists, who were trying to decide whether a functional representation of the mind could be classified as an equivalent re-enactment of the mind, and hence be more than a mimic of our intelligence. This debate led to cognitive psychologists using computer programs to test their models of cognition: if their program could react to stimuli in the same way that humans do, then maybe their model of cognition was correct. This led to the distinction between two types of AI. The first is Weak AI: that which mimics intelligence or real-world events. For instance, a Weak AI program can simulate weather conditions in real time and show what will happen to terrain or buildings in the area. Such programs are very helpful in predicting and understanding real-world events.
The second category is called Strong AI. These are programs that claim to be more than just simulations or mimics of our intelligence; they are intelligent in their own right. There has been much debate over what conditions a program must satisfy to be classified as Strong AI. Some cognitive psychologists would like to claim that some of their programmed cognitive models are intelligent in their own right and satisfy the conditions of Strong AI, as you cannot tell their responses from a human response. However, this seems deeply unsatisfactory. We would not like to classify a weather simulation as being real weather, and a cognitive model that merely mimics follows the same principle, so what exactly would be an accepted basis for Strong AI?
This debate split the field even further. The philosophers divided into many factions: there were those who thought cognitive functional equivalence was enough for intelligence, and those who thought that neurological functional equivalence was enough. Then there were those, such as Searle, who didn’t really mind where you got your equivalence from, as long as it carried all the necessary conditions for intelligence with it and did not fall into the mimic trap, as demonstrated by his ‘Chinese Room’ thought experiment, which has now been debated for thirty years. But the field of AI split in another direction as well: as computer languages became more developed and more complex, and the technology improved to support faster and faster processing, software engineers started developing AI that had no basis in human intelligence at all.
These AI programs very quickly gained all kinds of status, especially after the Turing Challenge was established. To win the prize, one had to create a program that, when placed behind a screen, could not be distinguished from a human. The only interaction available to the judge was a keyboard and a text screen. The trick was that sometimes there would be a human, not a computer, behind the screen. The most popular method of trying to beat this test was simply programming a massive reference table with as many possible responses to as many questions as the programmer could think of. This was enhanced with word-recognition software that would try to string common words from previous questions together to sound like it knew what it was talking about, often with hilarious results. To this day, no program has ever satisfied the conditions of the Turing Challenge, and as such the top prizes in competitions such as the ‘Loebner Prize Competition’ remain unclaimed.
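To make that ‘reference table plus keyword matching’ approach concrete, here is a minimal sketch of how such a chatbot could be thrown together. It is not based on any particular competition entry; the canned responses, the keywords and the fallback rule are purely hypothetical examples of the technique.

```python
import re

# A minimal sketch of the "reference table" chatbot approach: canned responses
# keyed by keywords, plus a generic fallback that echoes words from the judge's
# question. All of the rules below are hypothetical examples.
CANNED_RESPONSES = {
    "name": "My name is Alex. What's yours?",
    "weather": "I haven't been outside today, to be honest.",
    "chess": "I prefer checkers, it's less stressful.",
}

FALLBACK = "Why do you say {words}?"

def reply(question: str) -> str:
    lowered = question.lower()
    # 1. Look for a keyword that has a pre-written answer.
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in lowered:
            return response
    # 2. Otherwise, echo back the longest words to sound engaged.
    words = sorted(re.findall(r"[a-z']+", lowered), key=len, reverse=True)[:2]
    return FALLBACK.format(words=" and ".join(words) if words else "that")

if __name__ == "__main__":
    for q in ["What is your name?", "Do you enjoy philosophy and mathematics?"]:
        print(q, "->", reply(q))
```

Even this toy version shows why the results were often hilarious: the fallback has no idea what the echoed words mean, it only knows that they were long.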
Confusion also came from the well-known chess match between IBM’s ‘Deep Blue’ and Kasparov. It is well known that Deep Blue won the match against the chess master, but onlookers would be mistaken in thinking that this was an example of computer ‘intelligence’ winning over mankind. Why? Because of the way that Deep Blue is programmed. The method used by the Deep Blue developers is known as the “Brute Force” method: giving a computer enough computational power to search through an enormous number of possible continuations, then choose and play the move that gives it the best odds of winning the match. This is not the computer making an intelligent choice about chess; it is a glorified number-crunching machine.
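As a toy illustration of that “Brute Force” idea, here is a sketch that exhaustively searches every continuation of a game and plays the move with the best guaranteed outcome. Chess is far too large to enumerate completely (Deep Blue’s actual search combined deep look-ahead with heuristic evaluation and special-purpose hardware), so the sketch uses tic-tac-toe, which is small enough to search fully; all the names in it are mine, not IBM’s.

```python
from functools import lru_cache

# Toy "brute force" game player: try every possible continuation, assume the
# opponent also plays perfectly, and pick the move with the best guaranteed
# outcome. A board is a 9-character string of "X", "O" and "." cells.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def score(board, player):
    """Best achievable outcome for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # draw
    opponent = "O" if player == "X" else "X"
    # Try every legal move; the opponent's best reply is our worst case.
    return max(-score(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ".")

def best_move(board, player):
    opponent = "O" if player == "X" else "X"
    moves = [i for i, cell in enumerate(board) if cell == "."]
    return max(moves, key=lambda i: -score(board[:i] + player + board[i + 1:], opponent))

if __name__ == "__main__":
    # X to move; exhaustive search finds the immediately winning square, index 2.
    print(best_move("XX.OO....", "X"))
```

The point is the same one made above: the program never ‘understands’ the game. It simply tries everything and counts.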
Ideas for developing other methods of choice that did not involve brute force kept software engineers entertained for some time. Programs that have been developed since that time and classified under the AI banner include Neural Networks, Bayesian Networks, Genetic Algorithms, Fuzzy Logic, and a wide variety of other program bases. Despite the fact that they are all very different in their conception, they are still classified as Artificial Intelligence, regardless of their results, simply because that is what they are: they are made by man, as man’s attempt at creating something else that is functionally equivalent enough to our understanding of the world to be actually intelligent. No programmer, philosopher or scientist, when facing a real and functional Artificial Intelligence, would ever think of it as “fake”; that is simply the wrong sense of the word in question. Hence it is offensive and naive to walk into the field and criticize those who care about the development and foundations of Artificial Intelligence for calling the results of their blood, sweat and tears fake. One could argue that so far no such program has been developed that can be classified as real intelligence, only a set of highly developed mimic machines; you can even have a debate as to whether this mimicking is basis enough for it to be ‘real’ intelligence, but that is beside the point. Knowing the history of their field, knowing how many areas AI now covers, and knowing how diverse and complicated it has become, AI researchers are well aware that there may come a time when some of these programs need to be renamed and re-categorized. But this relabeling will not be done by the sideline watchers, the buyers and sellers of technology, and those who love to read the paper and get excited by the iPhone’s new music-recognition software. This renaming will be done by those who understand the field and the place their projects have within it.
Your Roomba and your sorting machines and your autonomous warehouse mechanisms and your search engines and your computer viruses and your programmed computer game opponents and your spam bots may be intelligent or they may not be. It is important to remember that that which looks intelligent may not be. But claiming that your particular example of technology is intelligent, that no-one but you recognizes this, and that everyone else is giving the poor thing an inferiority complex, is plain ignorant. Next time, talk to those in the field and ask them what they mean when they call something Artificially Intelligent. They will clearly have a better idea than most of you.
1 comment:
Wow, what a wonderful in-depth post! I think the idea behind the concept is that all intelligence is really a redefinition of intelligence.
intelligence - getting a high quality solution in a short period of time.
So that means it doesn't matter if a computer is using brute force, strong AI, or weak AI. What really counts is how the results measure up against a human being.