Recently developed artificial intelligence (AI) models are capable of remarkable feats, such as recognising images and producing human-like speech. But just because AI can perform human-like behaviours doesn’t mean it can think or understand like humans.
As a researcher who studies how humans understand and reason about the world, I think it’s important to emphasise that AI systems think and learn very differently from us, and that there is still a long way to go before AI can truly think like a human.
AI technology has produced systems that convincingly mimic human behaviour. GPT-3 is a language model that can produce text often indistinguishable from human writing. Another model, PaLM, can explain jokes it has never seen before.
Gato, a recently developed general-purpose AI, can perform hundreds of tasks, including captioning images, answering questions, playing Atari video games and controlling a robot arm that stacks blocks. DALL-E generates images and artwork from a text description.
These breakthroughs led to some bold claims regarding the potential of such AI and what it can teach us about human intelligence.
Nando de Freitas, an AI researcher at Google DeepMind, believes existing models can simply be scaled up to produce human-level AI, and others have echoed this view.
Amid all the excitement, it’s easy to assume that human-like behaviour means human-like understanding. But there are several key differences between how AI systems and humans think and learn.
Neural nets and the human brain:
Artificial neural networks (or “neural nets”) are the foundation of modern AI. The term “neural” is borrowed from the human brain, which is made up of billions of cells called neurons that form intricate webs of connections, processing information as signals pass back and forth between them.
Neural nets greatly simplify this biology. A real neuron is replaced by a simple node, and the strength of the connection between two nodes is represented by a single number called a “weight”.
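To make this concrete, here is a minimal sketch of a single artificial node in Python. The inputs, weights and bias are invented illustrative values, and the sigmoid activation is just one common choice:

```python
import math

def node(inputs, weights, bias):
    # Weighted sum of incoming signals, squashed into the range 0-1
    # by a sigmoid activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Three incoming connections, each with its own weight.
print(node(inputs=[0.5, 0.1, 0.9], weights=[0.8, -0.3, 0.4], bias=0.1))
```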
Neural nets can learn to recognise patterns and even “generalise” to stimuli that are similar, but not identical, to those they have seen before. A system’s ability to apply what it has learned from its training data to new, previously unseen data is called generalisation.
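As a toy illustration of generalisation (using a simple straight-line model rather than a neural net, with made-up data):

```python
# Fit a straight line to four training points, then predict a value
# the model has never seen. Data invented purely for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# x = 10 never appeared in training, yet the model makes a sensible guess.
print(intercept + slope * 10)  # about 19.6, close to the "true" value of 20
```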
This ability to generalise is at the core of neural nets’ success. In perceiving patterns, picking out features and extending them to new cases, neural nets superficially resemble humans. But there are important differences.
The most common method of training neural nets is “supervised learning”. The network is shown many examples of an input paired with the desired output, and the weights of its connections are gradually adjusted until it reliably produces the right output for each input.
For a language task, for example, a neural net might be shown a sentence one word at a time and gradually learn to predict the next word in the sequence.
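Here is a drastically simplified sketch of next-word prediction. It uses simple bigram counts rather than a neural net, and the training sentence is invented, but it shows the same input-to-output pairing:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Supervised pairs: each word is the input, the word after it is the target.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    # Return the continuation seen most often after this word in training.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (it followed "the" twice, "mat" once)
```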
This is very different from how humans typically learn. Most human learning is “unsupervised”: we are not explicitly told what the “right” response to a given stimulus is. We have to work this out for ourselves.
For instance, children aren’t given instructions on how to speak; they learn language through imitation and feedback.
Another difference is the sheer scale of the data used to train AI. GPT-3 was trained on 400 billion words, mostly scraped from the internet. At a reading speed of 150 words per minute, it would take a human more than 5,000 years of non-stop reading to get through that much text.
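The back-of-the-envelope arithmetic behind that comparison:

```python
words = 400_000_000_000   # size of GPT-3's training corpus, in words
wpm = 150                 # assumed human reading speed, words per minute

minutes = words / wpm
years = minutes / 60 / 24 / 365.25   # minutes -> hours -> days -> years
print(f"{years:,.0f}")    # -> about 5,070 years of round-the-clock reading
```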
Such calculations show that humans can’t possibly learn the way AI does: we have to make much more efficient use of far smaller amounts of data.
Neural nets learn in ways we can’t:
Another fundamental difference is the way neural nets learn. To match a stimulus with the desired response, they use an algorithm called “backpropagation”: errors are passed backwards through the network, layer by layer, indicating how each connection’s weight should be adjusted to reduce the error.
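Backpropagation proper chains this error signal through many layers; the sketch below shows the same error-driven weight update for a single connection, with invented values:

```python
x, target = 2.0, 10.0   # stimulus and desired response
w = 0.5                 # initial connection weight
lr = 0.1                # learning rate (step size)

for _ in range(50):
    prediction = w * x
    error = prediction - target   # forward pass: how wrong are we?
    gradient = error * x          # backward pass: error signal scaled by input
    w -= lr * gradient            # nudge the weight to shrink the error

print(round(w * x, 3))  # -> 10.0: the weight now produces the desired output
```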
Most neuroscientists agree that the brain couldn’t implement backpropagation, because it would require external error signals that simply don’t exist there.
Some researchers have proposed that the brain might use modified forms of backpropagation, but so far there is no evidence to support this.
Instead, humans seem to learn by building structured mental models of the world, in which many properties and associations are linked together. For example, our concept of “banana” includes its shape and colour, as well as knowledge of how to hold one.
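Purely as an illustration of that idea (not a claim about how the brain actually stores concepts), a concept can be pictured as a structure of linked properties:

```python
# A toy "concept" linking several properties and associations together.
banana = {
    "shape": "long and curved",
    "colour": "yellow",
    "how_to_hold": "grip gently around the middle",
    "associations": ["peel", "ripeness", "fruit bowl"],
}

# The pieces can be queried and combined, not merely pattern-matched.
print(f"A banana is {banana['colour']}; {banana['how_to_hold']}.")
```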
AI systems do not form conceptual knowledge like this. Instead, they rely on complex statistical associations extracted from their training data, which they then apply in similar contexts.
Efforts are under way to build AI that combines different kinds of input, such as text and images. But it remains to be seen whether this will be enough for these models to form anything like the mental representations humans use to understand the world.
There is still much we don’t know about how humans learn, reason and understand, but what we do know makes clear that humans perform these tasks very differently from AI systems.
Researchers believe that in order to build machines that think and learn like humans, we need new methods and a deeper understanding of the brain.