Artificial Intelligence

There Is No Such Thing As Artificial Intelligence

As more businesses and internal audit teams reference the use of artificial intelligence, we take a look at what "AI" is, and what it is not.

5 minutes reading time
Computers and smart devices are hugely impressive - performing calculations at speeds well beyond human capability, proving theorems that we had been unable to prove, controlling our heating, lighting, refrigeration and entertainment, answering a wide range of requests and questions, diagnosing illness, even flying planes. But are they intelligent?

Intelligence is the ability to perceive information which can then be used to adapt behaviours. In perceiving information, an agent organises and interprets sensory information in order to represent and understand it. To use this adaptively is to plan, learn and reason, to solve problems and to draw inferences and understand complex ideas. Our understanding is then often reflected in our values, attitudes, and preferences.

Artificial intelligence typically assumes that human thought can be mechanised. Alan Turing wanted to know if machines could behave like thinking entities. He asserted that if a machine’s behaviour is indistinguishable from a human’s then it is “thinking”. However, it wasn’t long before a computer program fooled people into believing they were communicating with another human, and so passed this Turing Test. Critics were less convinced that the machine was actually “thinking”, since it simply took words from its inputs and inserted them into pre-formed sentences (such as “Tell me more about x”) - and if that isn’t an example of mechanised thinking, it isn’t an example of artificial intelligence.
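The trick is easy to sketch. The short Python example below is a toy of the same kind (the patterns and replies are invented for illustration, not taken from any real program): it lifts words from the input and drops them into pre-formed sentences.

    import re

    # A minimal sketch of pattern-substitution "conversation": invented rules only.
    # Words from the input are slotted into canned sentence templates.
    RULES = [
        (re.compile(r"I feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
        (re.compile(r"I am (.+)", re.IGNORECASE), "How long have you been {}?"),
        (re.compile(r"(.+)"), "Tell me more about {}."),
    ]

    def reply(user_input):
        """Return a canned sentence with words lifted from the input."""
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."

    print(reply("I feel uneasy about the audit plan"))  # Why do you feel uneasy about the audit plan?
    print(reply("The deadline has moved"))              # Tell me more about The deadline has moved.

There is no understanding here - only string matching - yet exchanges like these were enough to convince some users they were talking to a person.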

Intuitively we might expect an intelligent system to represent and manipulate the knowledge of experts in a given field. Expert systems were developed in the 1980s and 1990s to store expert knowledge in fields such as medicine. Such a system would give a diagnosis from symptoms using the stored expert knowledge. Although useful, these systems are little more than programs that search databases - they cannot be said to be thinking or reasoning like experts, but simply following instructions to link inputs (symptoms) with outputs (diagnoses).
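A minimal sketch of that idea, with invented rules, might look like the following: a lookup from stored symptom patterns to stored diagnoses, with nothing resembling reasoning beyond the match.

    # A toy "expert system": the rules are invented for illustration; a real
    # system encoded large numbers of rules elicited from human experts.
    RULES = [
        ({"fever", "cough", "fatigue"}, "possible influenza"),
        ({"sneezing", "runny nose"}, "possible common cold"),
        ({"headache", "light sensitivity"}, "possible migraine"),
    ]

    def diagnose(symptoms):
        """Return every stored diagnosis whose required symptoms are all present."""
        return [diagnosis for required, diagnosis in RULES if required <= symptoms]

    print(diagnose({"fever", "cough", "fatigue", "headache"}))  # ['possible influenza']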

In 1997 Deep Blue beat world chess champion Garry Kasparov, ostensibly demonstrating artificial intelligence. However, it achieved the feat by harnessing immense processing power - searching vast numbers of possible positions every second. If human thought required such a volume of intensive mathematical computation, chess games would last a very long time.
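The underlying idea - search every line of play and choose the move with the best guaranteed outcome - can be sketched in a few lines. This is a toy version of exhaustive game-tree search only; Deep Blue used specialised hardware and a far more sophisticated evaluation of chess positions.

    # Toy minimax search over an invented game tree. Each position lists its
    # successor positions; leaf positions carry a score for the maximising player.
    def minimax(position, depth, maximising, successors, evaluate):
        """Search every line of play to a fixed depth and return the best score."""
        moves = successors(position)
        if depth == 0 or not moves:
            return evaluate(position)
        scores = [minimax(m, depth - 1, not maximising, successors, evaluate) for m in moves]
        return max(scores) if maximising else min(scores)

    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
    leaf_scores = {"a1": 3, "a2": -1, "b1": 5}

    best = minimax("root", 2, True, lambda p: tree.get(p, []), lambda p: leaf_scores[p])
    print(best)  # 5: the maximiser heads for "b", since from "a" the opponent could force -1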

Deep Blue used traditional data-processing techniques to process a limited set of instructions one after another (serial computing) in a very specific environment. In contrast, the brain is massively parallel, with many areas active at once. This allows intelligent agents much greater scope to reason with uncertain or incomplete data by employing different processes simultaneously.

Although the use of statistics and correlation can help us reason under uncertainty, traditional data-processing techniques are unable to deal with vast, highly complex datasets. Big Data is an area of Information Technology (IT) that attempts to overcome this limitation by running tasks in parallel across multiple servers. Although superficially similar to brain activity, each server processes data sequentially - highlighting patterns in the data rather than learning and understanding. Also, as the data becomes more complex, the systems are less able to cope effectively - complexity tends to lead to a higher ‘false discovery rate’ (false positives).
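That ‘false discovery rate’ effect is easy to demonstrate. In the sketch below (illustrative figures only, generated at random), none of the variables has any real relationship with the outcome, yet testing a thousand of them still produces dozens of apparently ‘significant’ correlations.

    import numpy as np

    # Illustrative only: 1,000 random variables, none of which is related to the outcome.
    rng = np.random.default_rng(0)
    outcome = rng.normal(size=200)
    noise_variables = rng.normal(size=(1000, 200))

    correlations = np.array([np.corrcoef(v, outcome)[0, 1] for v in noise_variables])
    spurious = np.sum(np.abs(correlations) > 0.14)  # approx. 5% significance cut-off for n = 200
    print(f"{spurious} of 1000 unrelated variables look 'significant'")  # roughly 50 false positives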

Another field of IT, Machine Learning, offers a more adaptable approach. It relies upon algorithms (finite sequences of well-defined instructions) that improve through experience. These machine-learning algorithms improve their behaviour by processing lots of annotated examples, and have solved many problems that rules-based programs struggled with, though they can be poor at dealing with ‘soft data’ (images, video, sound files, unstructured text). An obvious drawback of systems using algorithms like these is that they will be as biased as the examples they are trained on (an example of ‘garbage in, garbage out’).
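As a sketch of ‘learning from annotated examples’, the toy one-nearest-neighbour classifier below (invented data and labels) simply returns the label of the closest example it has seen. Its behaviour is determined entirely by its training examples - which is exactly why biased examples produce biased predictions.

    # Toy nearest-neighbour classifier: invented training data and labels.
    TRAINING = [
        ((1.0, 1.2), "approve"),
        ((0.9, 1.1), "approve"),
        ((3.0, 2.8), "reject"),
        ((3.2, 3.1), "reject"),
    ]

    def predict(point):
        """Label a new point with the label of its closest training example."""
        def distance(example):
            (x, y), _label = example
            return (x - point[0]) ** 2 + (y - point[1]) ** 2
        _coords, label = min(TRAINING, key=distance)
        return label

    print(predict((1.1, 0.9)))  # 'approve' - the answer depends only on the examples seen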

IT systems can manage expert knowledge, solve complex problems, reason under uncertainty, formulate plans, and ‘learn’; hence they seem to exhibit intelligent behaviour. But they do so by processing data sequentially according to sets of instructions, or algorithms. The types of problems that computers have solved tend to be very specific, and the ‘knowledge’ acquired is not usually transferable to other tasks.

In contrast to conventional serial processing, Deep Learning (a subset of machine learning) is an approach to artificial intelligence that uses ‘neural networks’ (inspired by the brain). These are layers upon layers of variables (‘neurons’) that adjust themselves to the properties of the data they are trained on. Each layer detects specific features of the input data to model high-level abstractions, and such networks are capable of functions such as classifying images or converting speech to text. This requires vast amounts of training data and processing power. Among its problems are the limited availability of suitable training data, the lack of interpretability of these complex networks, and algorithmic bias: the training data will often contain biases, and the algorithms tend to inherit them. Once again, these systems are poor at generalising their function to other areas of knowledge.
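The ‘layers of neurons’ idea can be sketched with a couple of matrix operations. The weights below are random and untrained, purely for illustration; in practice they are adjusted over many passes through vast amounts of labelled data.

    import numpy as np

    # A tiny untrained 'network': each layer is a weighted sum plus a non-linearity.
    rng = np.random.default_rng(0)

    def layer(inputs, weights, biases):
        """One layer of 'neurons': weighted sum of the inputs followed by a ReLU."""
        return np.maximum(0.0, inputs @ weights + biases)

    x = rng.normal(size=(1, 8))                        # a single input with 8 features
    w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)    # hidden layer of 16 'neurons'
    w2, b2 = rng.normal(size=(16, 2)), np.zeros(2)     # output layer: scores for 2 classes

    hidden = layer(x, w1, b1)
    scores = hidden @ w2 + b2
    print(scores.shape)  # (1, 2): one score per class, meaningless until trained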

Ultimately computers process symbols (at the basic level, numbers, or 1s and 0s) according to instructions. Philosopher John Searle has argued that merely processing symbols cannot be synonymous with thinking; consciousness (and by extension, intelligence) is something more. Perhaps it is an emergent property, a ghost in the machine? On the other hand, it is also possible that by identifying intelligence as something over and above the mere functioning of a brain or computer we are guilty of reification, of creating something that isn’t there.

Developments in computer science provide interesting insights into what we mean by intelligence, and can greatly enhance our experience of the world; however, there appears to be something almost intangible that so-called intelligent systems are lacking. They can represent the world in different ways, and adapt and learn, but so far they have not demonstrated an understanding of the data that they process. They lack the values, attitudes and preferences which allow us to make judgments and adapt in novel environments. Without an understanding of the information they manipulate there is no intelligence, and so there is no such thing as artificial intelligence … yet.
This article last updated 29 June 2021
Rhodri Bowden, ThinkingAudit Ltd

Rhodri Bowden

Director

ThinkingAudit Ltd
