Defining AI is challenging, especially when looking back at its early days. Massive mechanical machines tackled simple tasks, and while they weren't labelled as AI at the time, they undoubtedly represented its primitive form.
These early machines were not practical, as humans could likely solve the same problems faster. Basic mathematical operations could be done far more efficiently with pen and paper.
As Moore's law kicked in, computing power grew ever greater and more accessible. The analytical power of AI began to advance beyond that of humans.
Games like chess could now be played by a machine able to compute far more moves ahead than any human could ever hope to. On May 11th 1997, IBM's Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, in a six-game match. It represented a real turning point: AI was finally starting to surpass human capabilities.
Deep Blue was able to “out-compute” Kasparov. Its powerful processors and memory meant it was able to search far more moves ahead than any human ever could. The computer was programmed by humans, who gave it algorithms it could execute far faster than they could.
This marks the first true iteration of AI: human-developed algorithms executed far more rapidly by computers.
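To make this first iteration concrete, here's a minimal sketch of that kind of exhaustive game-tree search (minimax), applied to the toy game of Nim rather than chess. Deep Blue's real search and evaluation were vastly more sophisticated; this is an illustration of the style of AI, not its algorithm:

```python
# Minimax on Nim: players alternately take 1-3 sticks from a pile,
# and whoever takes the last stick wins. The human supplies all the
# rules and the scoring; the computer's only contribution is raw
# speed in exhaustively searching the game tree.

def minimax(sticks, maximising):
    """Score a position: +1 if the maximising player can force a win,
    -1 if the minimising player can."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximising else 1
    scores = [minimax(sticks - take, not maximising)
              for take in (1, 2, 3) if take <= sticks]
    return max(scores) if maximising else min(scores)

def best_move(sticks):
    """Pick the move with the best minimax score for the player to move."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximising=False))

print(best_move(10))  # 2: leaving 8 sticks is a lost position for the opponent
```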
The latest and greatest iteration of AI is one where we don't tell computers how to do something, but simply show them what we want them to do and let them figure out the details.
So instead of giving the computer a strict set of rules for telling the difference between a cat and a dog, we simply show it lots of pictures of cats and dogs and let it learn to make the distinction itself.
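Here's a minimal sketch of that "show, don't tell" approach using scikit-learn. We never write rules for separating the two classes; we just hand the model labelled examples. The synthetic features below are a hypothetical stand-in for real image data, which would need a deep network:

```python
# No hand-written rules: the model infers the decision boundary
# itself from labelled examples.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X holds the "pictures" (here, synthetic features), y says cat (0) or dog (1).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Show the computer the examples and let it learn the distinction.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```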
This iteration of AI enabled many more problems to be solved by computers. But why? Why weren't human-written algorithms good enough?
In his 1988 book, Mind Children: The Future of Robot and Human Intelligence, Hans Moravec wrote:
"It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

Put simply: the things humans are really good at, we are unable to describe how we do them. And conversely, we are really good at describing how to solve the problems we ourselves are not good at.
We are all really good at recognising the faces of our friends and family; we've been doing it for thousands of years. But can we describe how we do it? We can recognise beautiful art in the form of music or paintings, but could we write an algorithm for a computer to do the same?
Conversely, a human given a list of 1,000 numbers and asked to add them all together would probably take quite a while, but it is easy to tell a computer how to do it. Similarly with Deep Blue in 1997: the programmers were pretty good at describing how to evaluate a chess position, so they simply plugged that description into the computer and let it run over very deep searches.
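Both of those descriptions translate into almost trivial code. The crude material-count evaluation below is a hypothetical simplification, not Deep Blue's actual evaluation function:

```python
# Summing 1,000 numbers: a task that takes a human a while
# but a computer one line.
numbers = list(range(1, 1001))
print(sum(numbers))  # 500500, instantly

# A crude chess evaluation: piece values in pawns.
# Positive favours White, negative favours Black.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9,
                "p": -1, "n": -3, "b": -3, "r": -5, "q": -9}

def evaluate(board):
    """Score a position given as a string of piece letters."""
    return sum(PIECE_VALUES.get(piece, 0) for piece in board)

print(evaluate("RNBQKPPPPrnbqkpppp"))  # 0: material is level
```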
In this way, it seems the problems that appear simplest to us require the most complex AI systems to solve.
So it's pretty clear from Moravec's paradox that taught AI is the only way to solve the more “human” tasks. But currently there's a problem.
Look, for example, at AlphaZero, a taught AI developed at Google DeepMind for playing chess: it takes many thousands or even millions of games to reach a basic level of proficiency. Humans, conversely, can usually get pretty good in fewer than 100 games.
It seems like AI has an inherent inefficiency in how quickly it can learn. In reality, however, humans might not be much better. When you teach a human chess, they bring a huge amount of prior data from outside the game: life lessons and logic they can apply to it, whereas a computer starts off with nothing at all.
This is where Artificial General Intelligence (AGI) comes in. AGI, the ability of an AI system to learn and apply knowledge across various domains like humans, is more than just a theoretical pipe dream; it's a necessity for scaling AI. Without AGI, we are fundamentally limited by the time it takes to train a new AI for each new problem. We need AI systems that can, like humans, transfer skills from one area to another, enabling them to tackle new challenges more efficiently.
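We don't have AGI yet, but transfer learning in today's deep networks gives a small taste of the idea: reuse what a model learned on one task as the starting point for another, instead of starting from zero. Below is a minimal sketch assuming PyTorch and torchvision are installed; the two-class target task and the random input batch are hypothetical stand-ins for a real dataset:

```python
# Transfer learning sketch: reuse features learned on ImageNet
# to get a head start on a new task, rather than training from scratch.

import torch
from torchvision import models

# Start from a network pretrained on ImageNet: its early layers already
# encode generic visual concepts like edges, textures and shapes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred knowledge...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the final layer for the new two-class problem.
model.fc = torch.nn.Linear(model.fc.in_features, 2)

# Only the new head is trained; everything else transfers for free.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)  # stand-in batch of images
labels = torch.randint(0, 2, (8,))    # stand-in labels

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```

This is not AGI, only a narrow mechanism for carrying knowledge between related tasks, but it illustrates why transferring prior experience makes learning a new problem so much faster.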