Understanding Artificial Intelligence
Everyone has heard the term artificial intelligence, or ‘AI’, but related terms such as ‘machine learning’, ‘deep learning’, and ‘neural networks’ can be difficult to distinguish. Many of these terms are used interchangeably with ‘artificial intelligence’, leading to confusion and misunderstanding about the differences.
Furthermore, Hollywood has a tendency to conflate AI with robots. Movies like Terminator, I, Robot, and Blade Runner perpetuate this stereotype that AI is typically embedded within a human-like robot, but the two are definitely not one and the same.
The term ‘artificial intelligence’ was first coined by John McCarthy back in 1956 to describe machines that can perform tasks characteristic of human intelligence. It was a very general definition, broad enough to include much of what AI systems can do today and much of what they still cannot: problem solving, recognizing objects or faces, recognizing sounds, planning, learning, and understanding language.
The term ‘machine learning’ was coined shortly thereafter, in 1959, by Arthur Samuel, who used it to describe a machine’s ability to learn without being explicitly programmed. Today, many of the rule- and logic-based systems that were previously referred to as artificial intelligence are no longer considered AI, making things even more confusing. In general, however, artificial intelligence can be broken down into three main categories:
Artificial Narrow Intelligence
Narrow or ‘weak’ AI systems are brute-force pattern matchers: they associate patterns in their input data and produce outputs that approximate the data they were trained on. Examples include IBM’s Watson, Apple’s Siri, and most neural-network-based systems.
A good specific-use example is a natural language translation system based on deep learning, such as Google Translate. Google Translate is trained by rote, over millions of iterations, on corpora of human-translated texts, so that when given an English sentence as input it can look up the corresponding French sentence, or produce an approximation through rigid, unintelligent template matching.
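To make the idea of rigid pattern association concrete, here is a deliberately simplified Python sketch. The phrase table and the translate function are invented for illustration only; a real system such as Google Translate is a vastly larger statistical model, not a literal lookup table, but the characteristic weakness is similar: anything outside the memorized patterns is handled poorly.

    # Toy illustration of rigid pattern lookup, not how any real translator works.
    # Hypothetical phrase table "learned by rote" from a parallel English-French corpus.
    PHRASE_TABLE = {
        "good morning": "bonjour",
        "thank you": "merci",
        "the cat": "le chat",
        "is sleeping": "dort",
    }

    def translate(sentence: str) -> str:
        """Greedily match the longest known phrase at each position; pass
        unknown words through unchanged. No grammar, no meaning, only lookup."""
        words = sentence.lower().split()
        output, i = [], 0
        while i < len(words):
            for length in range(len(words) - i, 0, -1):
                phrase = " ".join(words[i:i + length])
                if phrase in PHRASE_TABLE:
                    output.append(PHRASE_TABLE[phrase])
                    i += length
                    break
            else:
                output.append(words[i])  # unseen word: copied verbatim
                i += 1
        return " ".join(output)

    print(translate("The cat is sleeping"))  # -> "le chat dort"
    print(translate("The dog is sleeping"))  # -> "the dog dort" (no understanding)

The second example shows the point of the passage above: as soon as the input drifts from the memorized patterns, the output degrades, because nothing in the system understands what the sentence means.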
Narrow or weak AI methods generally have none of the conceptual fluidity and generativity of human cognition necessary for a human-level, human-style virtual assistant. IBM recently conceded the limitations of narrow AI in their decision “to shift focus [away from] machines that fully replicate human general intelligence,” basically accepting that Watson will never attain human-level linguistic competence.
“The problem with Watson,” said the editor-in-chief of the Journal of Artificial Intelligence in Medicine, who has tried to apply Watson to medical literature to recommend treatments as a virtual medical assistant, “is that it’s essentially a really good search engine… The sort of knowledge Watson [has] is very flat and very broad.”
Artificial General Intelligence
Artificial General Intelligence (AGI) is the holy grail of AI research today. Narrow AI systems can perform internet searches, translate natural language (somewhat), drive automobiles (somewhat), or play some video games, but none of these systems can perform all of these tasks with human fluidity and generality. In theory, an AGI system would be able to perform all of these cognitive tasks as well as a human could.
AGI would possess the ability to think generally and to make decisions that go beyond previous experience, drawing on both its innate and acquired knowledge.
There remain obstacles to overcome before achieving true AGI. A key challenge is human creativity. Part of the reason machine learning and deep learning require such large amounts of data to do tasks that a human can do with far less is that human brains make creative leaps that current systems cannot. If AGI research solves the creativity challenge, computers will be able to do what we do every day: skip over the minutiae (needless data) by relying on creative intuition.
Superintelligence
University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”.
Whether human intelligence can be surpassed at all is still up for debate, but Bostrom argues that a superintelligent machine could be built within the next century. If so, biological brains would be surpassed in practically every field, including scientific creativity, general wisdom, and social skills.