
Have you ever wondered how an Artificial Intelligence thinks? What are the similarities and the differences between robots and humans? How do they learn? How much can they learn?
I’m going to tell you how a human, a traditional computer and an artificial intelligence tackle the same problem, a very simple one. I hope it shows the profound differences between the three ways you can accomplish the same result.
Ready? The problem is simple: add two natural numbers. Let’s see how a human, a traditional computer program and an artificial intelligence each handle it. This is man vs Artificial Intelligence, round 1.
Human
You surely have experienced the human way to tackle addition. First you have to know what a number is. You accomplish this at a very young age. You extract the similarities between three ducks, three cows, three people and, very importantly, three fingers. All these sets have in common the number three. Once you acquire the concept of three, you are able to apply it to any imaginable set with three members. Even better, you are able to think of the number three without applying it to any members. The number three doesn’t exist in nature, for there isn’t a physical thing named three, but it does in your mind.
Once you have grasped the concept of the first ten numbers (as many as your fingers), including the amazing zero, you learn all the other numbers. But you don’t really know the numbers; surely you have never met 12987465729934661537 in person before. What you know is a set of rules that govern all the numbers. All of them? Yes, all of them.
Let’s move on to addition. First you learn the concept, probably by putting your hands together with some fingers extended and then counting the result. You reach the conclusion that a+b=b+a, and you learn by heart the results of adding any two numbers between 0 and 9. You learn a few rules, like when to carry over, and suddenly you can work out a sum.
Now, given sufficient time, paper and ink, would you be able to add any two numbers? Before you answer: have you considered how many numbers there are? If you counted all the grains of sand on all the beaches on all the planets around all the stars of all the galaxies, you wouldn’t even have begun to count all the numbers there are. They are infinite. And you dare to say that you are able to sum any pair of them?
Well, you’re right. What you know is an algorithm, and you know you can apply it to any pair of numbers. You have jumped from your finite mind to an infinite concept. Humans do this a lot; in fact, it is how they learn: Earth is big, my neighbors are people like me, all men are created equal… This is also why they never learn: Earth is so big that it must be flat, strangers are dangerous, all men are the same…
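To make that concrete, here is the schoolbook algorithm as a short Python sketch (the function name is ours, invented for this post): it only knows the single-digit sums and the carry rule, yet it handles numbers of any length, including our friend 12987465729934661537.

def schoolbook_add(x, y):
    # Add two natural numbers given as decimal strings, rightmost column first,
    # carrying over exactly the way we learn in school.
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad the shorter one with zeros
    result, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        carry, digit = divmod(int(a) + int(b) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(schoolbook_add("12987465729934661537", "87"))  # 12987465729934661624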
Traditional computer
A computer is a machine that can compute. Duh. What is computing? Well, that’s a tricky one; usually people say that computing is anything that a computer can do. Double duh. This is why the concept of Turing machines is interesting (more on this soon on this very same blog).
But today we’re going to explore how a computer works by mixing signals. Think of a room with two entrance doors and one exit door. Each door has a built-in doorbell. The two entrance doorbells are operated by two people and you are in charge of the exit doorbell. At certain times, the two entrance operators will ring their bells or they won’t. You don’t know what they will do, but here’s your job:
if anyone rings at the entrance doors, you will ring at the exit door;
if no one rings, you’ll keep quiet.
Your job is simple enough, don’t you think? Well, you are what is known as an OR gate. There are also AND gates, where you ring only if both people ring. There’s a special gate, called the NOT gate, with only one entrance door, in which you must do the opposite of what the entrance operator does, that is, keep quiet if he rings and ring if he keeps quiet. Maybe it should be called the teenager gate.
Take this with a grain of salt and think of it as an oversimplification, but in the end these processes, and endless combinations and permutations of these processes, are all a microprocessor does. A microprocessor contains millions of these rooms put together, with millions of people ringing and keeping quiet.
This neat trick allows a computer to perform arithmetic operations very quickly, as long as the ring/don’t-ring sequences are also fast. Computers only use 1 (someone rings their bell) or 0 (someone keeps quiet). The binary numeral system is particularly suited to this: you can write any number using just 0’s and 1’s (5 is 101). But for a computer, 5 is nothing; it has no meaning. A computer only understands the sequence rings/doesn’t ring/rings. The meaning of that sequence is given to the computer by a human being.
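To see how far these rooms can go, here is a toy sketch in Python (purely illustrative; the gate functions are written for this post): each gate is a function of rings (1) and silences (0), and a handful of them per binary digit are enough to add two numbers.

def OR(a, b): return 1 if (a or b) else 0
def AND(a, b): return 1 if (a and b) else 0
def NOT(a): return 1 - a

def XOR(a, b):                       # built only out of the rooms above
    return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a, b, carry_in):      # one binary column, gates only
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_bits(x, y):
    # Ripple-carry addition: least significant bit first.
    # Assumes x and y have the same length.
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

print(add_bits([1, 0, 1], [0, 1, 0]))  # 5 + 2 -> [1, 1, 1, 0], i.e. 111 = 7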
Of course, you only need to teach a computer once; it would not be very practical otherwise. The teachings are many, ranging from how many rings make a number to what to do when a key is pressed.
These teachings are arranged in layers so they can be reused. Maybe you have heard of some of these layers: BIOS, operating system, applications…
There are a few special applications that allow you to add your own teachings; they include what is known as a programming language. You can write your own code with it. Some of them are easier, some of them are harder, but all of them do basically the same thing. In fact, if they are Turing-complete (and the vast majority of them are), they all do exactly the same thing.
To illustrate how a computer works, we’re going to use a programming language called pidgin code. It has the same rules as pidgin English: if it sounds correct, then it is. This program will execute an addition:
define a, b, c as numbers
input a, b
c = a + b
print c
Simple enough, right? But this simple program has a lot of knowledge, human knowledge, embedded in it. We have previously defined (we have taught the computer) what a number is and what an addition is. Then, and only then, the computer is able to carry out the sum.
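For comparison, here is roughly what the same program would look like in a real language, Python; this is just a sketch, but the point stands: what a number is and what + means are already baked into the language.

a = int(input())   # "define as number": the meaning of int was taught long ago
b = int(input())
c = a + b          # the meaning of + was also taught long ago
print(c)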
Let’s ask the same questions we did for humans. Is a machine able to add any two natural numbers, given enough time and memory (that’s pen and paper for machines)?
It would seem that it is, but that’s not entirely true. Remember that sentence in our code that states define a, b, c? Such a definition includes how long a number is, that is, how many digits it has. In our former example of 5 as 101, that means that your numbers are a sequence of three rings.
You can’t define a number by saying it will have as many rings as necessary, because the computer needs to know where it ends. So you can add any two natural numbers up to the maximum number in your definition, but that’s it. The finite mind of a computer can’t grasp an infinite concept, and that’s a difference with humans.
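A quick sketch of that limit (the 8-bit width and the function name are hypothetical, chosen just for this example): if a number is defined as 8 rings, any sum that needs a ninth ring silently wraps around.

BITS = 8

def add_uint8(a, b):
    # An 8-bit machine has nowhere to put the carry out of the highest bit,
    # so the result wraps around modulo 2**8.
    return (a + b) % (2 ** BITS)

print(add_uint8(3, 4))      # 7, as expected
print(add_uint8(200, 100))  # 44, not 300: the machine ran out of rings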
Artificial Intelligence
Let’s see how we would perform an addition using an artificial intelligence or, more accurately, a neural network. The first thing we need is what is called a corpus, that is, a set containing as many already-solved operations as possible. How many, you ask? In order to reach a good proficiency, we’re talking from 15,000 to 1,000,000.
Once we have this, we set up our neural network. Here’s a video explaining how a neural network works.
The first big difference from our previous method: we don’t need to teach our neural network what a number is; she will find out by herself. In fact, it’s not important whether we write the numbers in the decimal system, in binary or even in Roman numerals, as long as they are used in a consistent way (an inconsistent way would imply a longer training time and a bigger corpus).
Our neural network will discover what is and what is not a number, and she will also figure out how to add two numbers.
The pidgin code for an artificial intelligence would be similar to this:
input (a,b)
output (c)
And we would be amazed to see how c equals a + b.
Almost always.
Yes, she fails sometimes. How many times? It depends on many things: how good our corpus is, how difficult the operation is, how good the training has been. Failure rates usually range from 3% to 15% for non-trivial operations.
But that’s not too strange: surely a human mind also makes some mistakes when adding two numbers. And we have a contraption that has magically deduced what a number is, what an addition is and how to perform the operation between two of those numbers. That’s impressive.
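For the curious, here is a minimal sketch of what that looks like in practice, assuming the TensorFlow/Keras library; the corpus size, the architecture and the training settings are illustrative choices, not a recipe.

import numpy as np
import tensorflow as tf

# The corpus: tens of thousands of already-solved additions.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(50_000, 2))
y = X.sum(axis=1)

# A tiny network; note that we never tell it what a number or an addition is.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

print(model.predict(np.array([[3.0, 4.0]]), verbose=0))  # close to 7, rarely exactly 7

The prediction lands near 7 but almost never exactly on it, which is the “almost always” from before.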
Let’s ask the same question we asked our previous contenders. Given enough time and memory, would an artificial intelligence be able to sum any two natural numbers?
The answer is no. In fact, the artificial intelligence is more limited in this respect than a traditional computer program. If the two numbers don’t appear in her corpus, she won’t be able to sum them. She will not figure out the concept of a natural number and extrapolate it the way humans do.