The history of Artificial Intelligence is an interesting topic. Many consider it a new technology, yet the roots of the field date back to Aristotle. Aristotle introduced many concepts related to modern computer science, but the one that most directly affects AI was his epistemology, the science of knowing. He believed that the study of thought itself was the basis of all knowledge. In his logic, Aristotle studied whether, if something is deemed to be true, something related to it can also be concluded to be true. For example, from “all men are mortal” and “Socrates is a man”, it can be concluded that “Socrates is mortal”. This form of reasoning has been developed over the last 2000 years in the work of many different scientists and mathematicians, including:
- George Boole
- Charles Babbage
- Gottlob Frege
- and many others.
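Aristotle's syllogism, from “all men are mortal” and “Socrates is a man” to “Socrates is mortal”, can be sketched as a minimal chain of lookups. This is only a toy illustration of the inference pattern, not a real reasoning engine; the names and data structures are invented for the example:

```python
# Toy facts, invented for illustration.
is_a = {"Socrates": "man"}       # "Socrates is a man"
implies = {"man": "mortal"}      # "all men are mortal"

def conclude(individual):
    """Chain an individual's category through the rule to a conclusion."""
    category = is_a[individual]
    return f"{individual} is {implies[category]}"

print(conclude("Socrates"))  # Socrates is mortal
```

The point of the sketch is that the conclusion follows mechanically from the premises, which is exactly the property that later made syllogistic reasoning attractive to logicians and computer scientists.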
The history of AI is wide and varied.
It spans the realms of logic, philosophy, mathematics and computer science, all of which fed into the theoretical foundations of the computer laid by Alan Turing, beginning with his 1936 paper on computable numbers. In 1950 Turing published a paper on AI, Computing Machinery and Intelligence, which introduced the Turing test, a way of judging whether a machine could be deemed intelligent. The test involves a human, a machine and an interrogator, all separated; the interrogator may put any questions to either the machine or the human. If the machine's responses fool the interrogator, the machine can be deemed intelligent.
This test was later disputed by John Searle (1980).
Searle proposed the Chinese room argument. A man sits in a room; he speaks only English and understands no other language. In front of him are some Chinese symbols, and in his hand a book of instructions for manipulating them. From outside the room someone passes in questions written in Chinese. Following the instructions, the man is able to answer correctly with the symbols. Does this mean he understands Chinese? No: he is merely following instructions, without understanding the meaning of what he is doing. The argument holds that the man in the room could equally be a computer following instructions, demonstrating a form of intelligence but without understanding meaning or having consciousness.
This argument led to a whole area of debate in the field of AI.
The term “Artificial Intelligence” was first used in 1956 by a scientist called John McCarthy at the Dartmouth conference. McCarthy later developed LISP (LISt Processing language), one of the main languages used in the field of Artificial Intelligence. The conference spawned the field of AI and the development of new technology. As with most fields of science, there have been shifts in patterns of study and the emergence of new ones, and AI has been no exception. Reasoning as search, micro-worlds, connectionism, expert systems and intelligent agents are just a few of the paradigms that exist, each taking a different approach to developing artificial intelligence.
Artificial intelligence is a field that has not been fully realised. However, its study has led to the development of other fields within computer science. One of these is the Semantic Web, whose aim is to develop machine-accessible code that gives structure and meaning to the data on the Web. To achieve this, logic, knowledge representation and ontologies are required, all areas linked with Artificial Intelligence. The language recognised as a standard for modelling ontologies on the web is OWL (Web Ontology Language), a domain-specific language whose roots lie in description logics and earlier web ontology languages such as DAML+OIL. Although the concepts of AI are being applied to the Semantic Web, it must be said that the similarities end there.
The reason for this is that to realise AI is to create an agent exhibiting human-level intelligence, whereas the Semantic Web will use partially intelligent agents, leaving humans to make the decisions.
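The kind of knowledge an ontology captures, and the subsumption reasoning an OWL reasoner performs over it, can be hinted at with a toy subclass hierarchy. The class names and the dictionary structure here are invented for illustration; a real OWL reasoner works over far richer axioms:

```python
# Hypothetical subclass relations, like those OWL can express.
subclass_of = {
    "GP": "Doctor",
    "Doctor": "MedicalProfessional",
}

def is_subclass(cls, ancestor):
    """Walk the subclass chain; an OWL reasoner infers this transitively."""
    while cls in subclass_of:
        cls = subclass_of[cls]
        if cls == ancestor:
            return True
    return False

print(is_subclass("GP", "MedicalProfessional"))  # True
```

The value of such structure is that an agent which only knows about "MedicalProfessional" can still make sense of data describing a "GP", without a human spelling out the connection.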
This is where a computer will be useful outside of the office.
Using the field of Artificial Intelligence, machines, computers and other devices can be developed to aid humans in many tasks. One way of looking at this is to imagine a human who is ill and inputs the symptoms into a computer. The computer, or ‘agent’, deduces from the symptoms what type of illness it is. Because it is connected to the internet and understands the concepts of doctors and illness through ontologies, it communicates with the doctors’ agents. These confirm the symptoms and find the next available appointment in the area, also taking into consideration the pharmacies and the stock levels of any medicines that may be required. The appointment is then presented back to the human, who still confirms it and still needs to see the doctor.
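The appointment scenario above can be sketched as a toy “agent”: it matches symptoms to a likely illness and proposes the earliest appointment, while the human keeps the final decision. All symptom data, illnesses and appointment slots here are invented for illustration:

```python
# Hypothetical knowledge the agent might hold (invented for this sketch).
symptom_map = {
    frozenset({"fever", "cough"}): "flu",
    frozenset({"sneezing", "runny nose"}): "common cold",
}
appointments = ["2024-05-03 09:00", "2024-05-01 14:30", "2024-05-02 11:00"]

def diagnose(symptoms):
    """Return the first illness whose known symptoms are all present."""
    for known, illness in symptom_map.items():
        if known <= symptoms:
            return illness
    return "unknown"

def propose_appointment(symptoms):
    """Suggest an illness and the earliest slot; the human still confirms."""
    illness = diagnose(symptoms)
    earliest = min(appointments)  # ISO-style strings sort chronologically
    return illness, earliest

print(propose_appointment({"fever", "cough", "headache"}))
# ('flu', '2024-05-01 14:30')
```

Note that the agent only proposes: confirming the appointment, and actually seeing the doctor, remain with the human, which is exactly the “partially intelligent agent” division of labour described above.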
The Semantic Web is only one extension of AI research. There are many other areas related to the field, such as aviation, finance, business intelligence and pattern recognition.
One point was noticed while writing this paper: up until the first computer was invented, AI was influenced by computer science, yet it now seems that the field of AI is influencing computer science. One reason for this may be a better understanding of Aristotle’s logic and Leibniz’s law, allowing the development of powerful reasoning systems that function in a way similar to human logic. This is allowing technology to be developed that can aid humans in many different aspects of life.
However, the problem still lies in how intelligence can be defined.
Does it understand? Will it have consciousness? What about common sense? All these questions need to be answered before full Artificial Intelligence is realised.