How AIs work and why they will kill us

AIs have inputs, actions, and outputs.

Let us say you have a table and at one spot on the table there is a hammer smashing whatever appears at that location. You want to create an AI that avoids getting its finger smashed.

To make things even simpler, let us assume there is a choice of 5 spots to put the AI's finger, and one of them, spot 3, is where the hammer is smashing away.

The Machine Learning phase

So as the very first step we tell the AI to place its finger in one of the spots. This is generally done randomly.

So how does the AI decide which spot to move its finger to?

Well, we assign odds to each spot. Let us say the AI rolls dice to create a random number between 1 and 100.

The odds can be expressed as counts of numbers or as percentages; with 100 numbers in play the two are the same thing.

If the AI rolls a 1 through a 20 it will move the finger to spot 1, a 21 through a 40 it moves to spot 2, and so on.

random number   move to spot
1-20            1
21-40           2
41-60           3
61-80           4
81-100          5

So at the moment there is a 20% chance that the AI will move its finger to any particular spot.

This “chance” is called “bias” in AI parlance. At the moment the AI is not biased at all. It will move its finger to any spot with equal probability.
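If you prefer to see this as code, here is a minimal sketch in Python of how a roll picks a spot. The names, like points and choose_spot, are mine for illustration, not anything standard.

import random

# 100 "points" split across the 5 spots: spot 1 owns rolls 1-20,
# spot 2 owns rolls 21-40, and so on, exactly like the table above
points = [20, 20, 20, 20, 20]

def choose_spot(points):
    roll = random.randint(1, 100)        # the dice roll between 1 and 100
    upper = 0
    for spot, share in enumerate(points, start=1):
        upper += share                   # running top edge of this spot's range
        if roll <= upper:
            return spot

print(choose_spot(points))               # with equal points, each spot comes up 20% of the time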

So the AI rolls its first random number and comes up with 35.

35 is in the range 21 to 40 so the AI moves its finger to spot 2.

Nothing bad happens. This is called a learning cycle. Our AI is going to repeat the cycle over and over, and that repetition is called training.

It next rolls 72, which is in the range 61 to 80, moves its finger to spot 4 and nothing happens.

On the next cycle we get some excitement.

Our unlucky AI rolls a 55, moves its finger to spot 3 and down comes the hammer smashing its finger.

Getting its finger smashed is a bad thing.

Some computer types might jump up and down and say: look, just program the AI not to move its finger to spot 3. Well, the only reason we know not to move the finger to spot 3 is because we have eyes and can see the hammer smashing away. Our AI does not have eyes. Our AI can only work with its inputs. It cannot “see” the hammer like we do. At least not yet…

While it got its finger smashed in spot 3, it did not get its finger smashed in spots 2 and 4, and it has no experience with spots 1 and 5.

Also the AI does not know if placing its finger into spot 3 always results in a smashed finger, and it does not know if spots 2 and 4 are always safe.

Our AI does not seem to have any memory, just a table of numbers. AIs do not really “know” things; they just have inputs, actions, biases, and outputs.

Now here comes the feedback or bias adjustment.

We are going to reduce the likelihood that the AI puts its finger in spot 3 by reducing the probability that it will make that choice again. But since the AI is learning, we are only going to reduce it by a little bit.

The response to a smashed finger is to reduce the probability of making that choice again.

We are going to reduce the probability of making a finger-smashing choice by 4 percentage points.

This means we take 4 points, or 4 of the 100 numbers, from choice 3 and move them to the other choices.

Remember this table from above?

random number   move to spot     chance of picking this spot
1-20            1                20%
21-40           2                20%
41-60           3                20%
61-80           4                20%
81-100          5                20%

We are going to move some of the numbers around:

random number   move to spot     chance of picking this spot
1-21            1                21%
22-42           2                21%
43-58           3                16%
59-79           4                21%
80-100          5                21%

What I did was remove 4 of the numbers from spot 3 and distribute them, one each, to the other four spots.
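In code, that adjustment might look like the sketch below, again with made-up names and continuing the points list idea from earlier. (It ignores the corner case where a spot has fewer than 4 points left, which the example only reaches near the very end.)

def punish(points, smashed_spot):
    # take 4 points away from the spot where the finger got smashed...
    points[smashed_spot - 1] -= 4
    # ...and hand one point each to the other four spots
    for i in range(len(points)):
        if i != smashed_spot - 1:
            points[i] += 1

points = [20, 20, 20, 20, 20]
punish(points, 3)
print(points)     # [21, 21, 16, 21, 21], matching the adjusted table above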

Now we start the AI “learning” again.

As above, as long as our AI does not roll numbers that lead to picking spot 3, it will just continue happily rolling numbers and moving its finger.

However, eventually it rolls a number in the spot 3 range (that is 43 through 58). Let us say it rolled a 48.

Now it moves its finger into spot 3 and down comes the hammer smashing its finger.

We do the 4-point, or 4%, reduction again, so our chart becomes:

random number   move to spot     chance of picking this spot
1-22            1                22%
23-44           2                22%
45-56           3                12%
57-78           4                22%
79-100          5                22%

Again I removed 4 of the numbers from spot 3 and distributed them, one each, to the other spots.

And back to learning.

Can you see what is happening? The probability of selecting spot 3 is going down, and it will continue to do so.

It is becoming increasingly unlikely that our AI will pick spot 3.

But let us continue the learning.

After a bunch of safe rolls and moves our AI rolls a 52 and puts its finger in spot 3, the finger is smashed, and we move another 4 points from spot 3.

random number   move to spot     chance of picking this spot
1-23            1                23%
24-46           2                23%
47-54           3                 8%
55-77           4                23%
78-100          5                23%

After a run of success our poor AI gets another finger smash via a roll of 49.

So off comes another 4 points:

random number   move to spot     chance of picking this spot
1-24            1                24%
25-48           2                24%
49-52           3                 4%
53-76           4                24%
77-100          5                24%

And lastly, after just one more bad roll of 51, we get:

random number   move to spot     chance of picking this spot
1-25            1                25%
26-50           2                25%
                3                 0%
51-75           4                25%
76-100          5                25%

We have now trained our AI to be perfect at avoiding getting its finger smashed.

We can continue to run the training but there is no need as our AI will never mess up.

We can send our AI out into a world of tables and hammers smashing spot 3 and our AI will be safe from getting a smashed finger.

Now in this example, the problem the AI is trying to solve is pretty simple, so our AI can become perfect at the desired behavior.

In the real world of AI it is often hard to get the undesirable behavior all the way down to zero. But often it is realistic to get quite close to zero.
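For anyone who wants to see the whole training phase in one place, here is a minimal end-to-end sketch under the same assumptions: the hammer fixed at spot 3, a 4-point adjustment, and my own hypothetical names.

import random

HAMMER_SPOT = 3
ADJUSTMENT = 4                            # points moved after each smashed finger

points = [20, 20, 20, 20, 20]             # start unbiased: 20% per spot

def choose_spot(points):
    roll = random.randint(1, 100)
    upper = 0
    for spot, share in enumerate(points, start=1):
        upper += share
        if roll <= upper:
            return spot

for cycle in range(1000):                 # training cycles
    spot = choose_spot(points)
    if spot == HAMMER_SPOT:               # finger smashed: adjust the bias
        take = min(ADJUSTMENT, points[spot - 1])
        points[spot - 1] -= take
        others = [i for i in range(5) if i != spot - 1]
        for n in range(take):             # hand the points back, one at a time
            points[others[n % 4]] += 1

print(points)                             # after enough cycles, spot 3 sits at 0

Given enough cycles this ends at the same 25% / 25% / 0% / 25% / 25% split as the final table above.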

Cats

A very popular early AI was one that could recognize a picture of a cat.

These AIs have two inputs: a picture and a finger.

You would show the AI lots of pictures, some with cats, and some without.

The AI would try to “determine” if the picture had a cat in it.

Like our finger-smash-avoiding AI, the cat-detection AI would start out guessing. If it guessed wrong, you would smash its finger.

The AI would adjust its behavior until it was not getting its finger smashed very often.

You can train an AI to be pretty good at identifying pictures with cats in them. But they are not perfect.

But how close to perfect are they?

Rather than looking for cats you can train AIs to identify cancerous regions in x-rays.

The AIs that exist today are better at this than humans.

They are better in three ways: one, they have a higher success rate; two, they have a lower false positive rate; and three, they can do the analysis far more quickly than humans.

There are other advantages too: they do not expect to be paid, and they do not need vacation time, health care, or even sleep.

Do AI radiologists make mistakes? Yes they do. But at a lower rate than humans.

Now here is the terrifying part of AIs.

With our finger-smash-avoiding AI, we turned off the learning and sent it out into the world to basically avoid spot 3.

But what if someone moved the hammers to spot 2 out in the real world?

Our AI, which is no longer in learning mode, will spend its life getting its finger smashed 25% of the time as it blindly keeps placing its finger in spot 2.

What if we left the AI in learning mode when we sent it out into the world?

Let us assume some evil human moved all the hammers to spot 2.

Here is our AI at the end of its original training (when spot 3 was the problem spot):

random number   move to spot     chance of picking this spot
1-25            1                25%
26-50           2                25%
                3                 0%
51-75           4                25%
76-100          5                25%

And now due to the cruel actions of evil humans its finger is getting smashed 25% of the time.

But this AI is still in learning mode.

So when it rolls 36 and places its finger in spot 2, thus getting a smashed finger, it makes the following adjustment per the original learning method. That is, it subtracts 4 points from spot 2 and distributes those points, one each, to the other spots.

random number   move to spot     chance of picking this spot
1-26            1                26%
27-47           2                21%
48              3                 1%
49-74           4                26%
75-100          5                26%

Now as our intrepid AI continues, spot 2 will lose 4 points every time it is picked until the chance of selecting spot 2 goes to zero. At that point our chart will roughly look like this:

random number   move to spot     chance of picking this spot
1-31            1                31%
                2                 0%
32-38           3                 7%
39-69           4                31%
70-100          5                31%

I say roughly because my example works with integers rather than decimals. But you get the main point of the example.

Our learning-in-the-field AI has adjusted to the new world in which the hammer is pounding away on spot 2.

We evil humans can move the hammer to other spots and the AI with field learning will adjust appropriately.
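To see this field learning in code, here is the same sketch continued: start from the end-of-training table, move the hammer to spot 2, and leave the adjustment switched on. Same caveat as before: these are my illustrative names, and the integer bookkeeping only roughly matches the article's tables.

import random

def choose_spot(points):
    roll = random.randint(1, 100)
    upper = 0
    for spot, share in enumerate(points, start=1):
        upper += share
        if roll <= upper:
            return spot

def punish(points, smashed_spot, adjustment=4):
    take = min(adjustment, points[smashed_spot - 1])
    points[smashed_spot - 1] -= take
    others = [i for i in range(5) if i != smashed_spot - 1]
    for n in range(take):
        points[others[n % 4]] += 1

points = [25, 25, 0, 25, 25]              # the table at the end of the original training
HAMMER_SPOT = 2                           # an evil human has moved the hammer

for cycle in range(1000):                 # learning stays on out in the field
    spot = choose_spot(points)
    if spot == HAMMER_SPOT:
        punish(points, spot)

print(points)   # spot 2 is driven to 0, spots 1, 4 and 5 end up around 31,
                # and a small leftover stays on spot 3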

Now our AI has an interesting bit of extra information. The fact that spot 3 led to finger smashing in the past still shows up in the numbers. Our AI will pick spot 3 only 7% of the time, far less often than the never-smashed spots, so it seems to have a “memory” of the earlier problems with selecting spot 3.

Our AI prefers to pick spots 1, 4 and 5.

This is a form of AI memory.

Now remember that our AI deducts 4 points from the spot where it got its finger smashed?

Why four? The 4 points represent how fast our AI adjusts its behavior. The smaller the adjustment, the longer it will take for our AI to come to a solution. The bigger the number, the more quickly our AI will come to a solution.

Think about your own AI (in this case your brain). How many times did you have to stick your finger into a flame or stick a fork into an electrical socket before you “learned” not to do this?

For most people the number of times was 1 or 2. In other words, the adjustment factor associated with pain is very large in humans. We learn to respond to pain inputs very quickly.
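In the code sketches above, that adjustment factor is the hypothetical ADJUSTMENT constant, set to 4. Turn it down to 1 and the AI needs many more smashed fingers before spot 3 fades away; turn it up to 20 and a single smash wipes out spot 3's 20 starting points, much like the one-touch lesson of a hot stove.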

Now on to why we will soon all be exterminated by the AIs.

Our model AI has only one finger as an input.

What if our AI had a video camera and could add data from that camera as additional input?

This is not a problem in itself, but it becomes a problem when you put AIs out into the world still in learning mode.

Did you expect the memory of spot 3 being a poor choice to be reflected in the AI’s data after we moved the hammer from spot 3 to spot 2?

It can be quite difficult for humans to grasp this sort of AI learning.

Sure it is easy to conceptualize in my simple AI model, but what about the AI radiologist that is left in learning mode?

How long until its data starts to show these interesting effects and we find ourselves in the position of not fully understanding how the AI is identifying cancers?

Well, we are already there. We have developed AIs complex enough that we are having trouble understanding how they make their decisions or even which of their inputs they consider important.

The problem is not so much the algorithms of AI but simply the size of data being processed.

The AIs can process so much data so quickly that we humans simply do not have the time to review enough of it to develop an understanding of what is going on.

So now we have AIs still in learning mode looking at the world with video cameras, and we are unable to analyze the data they are using to interact with the world due to the sheer amount of data.

Consider our finger smash avoiding AI. It now has a video camera. Among its inputs now is data related to us (evil humans) moving the hammers around on the table.

Our poor AI's core mission is to avoid getting its finger smashed. And now it has new information as to what leads to finger smashing. Remember, our AI engages in avoidance behavior by learning not to select the spot where its finger gets smashed.

Our AI is able to do a perfect job of avoiding getting its finger smashed except for one thing: evil humans moving the hammer around on the table.

If an AI comes to recognize the evil humans as a causal, or even just a probable, source of finger smashing, it could take action to eliminate the problem.

Kill the hammer-moving humans and it's a utopian world of never getting your AI finger smashed.
