The Real Dangers of Artificial Intelligence

The media is full of horror visions about the dangers of artificial intelligence. So are fiction, movies, and TV.

Even Elon Musk believes that AI is dangerous.

But an all-powerful strong AI that wipes out humanity is not our immediate problem, and it might not be for a long time. There are far more likely scenarios that already have an impact on us.

After nuclear weapons, environmental destruction, and unstoppable pandemics, AI might be next in line among the plausible ways we could wipe out our own species.

The scenario everyone is thinking about goes like this: by accident or on purpose, a strong AI is created that is capable of improving itself, a moment often called the “singularity”. It learns extremely fast, and by the time it reaches human intelligence its rate of improvement is so rapid that it surpasses us within minutes. Before humans can react, it is so far above our intellect that we have no means to contain it anymore. Then it either rules us all or wipes us out on purpose. Or maybe it was never trained to even acknowledge humans and pursues goals that are alien and potentially deadly to us.

Is Skynet likely?

This scenario is possible. But it’s highly unlikely we reach it within the next decade or two. We are still not able to simulate even one percent of a human brain, not even in slow motion. Our most advanced models have thousands of neurons with hundreds of connections each, while a brain has billions of neurons with up to one hundred thousand connections each. Also, we still don’t know exactly how real neurons work. Our artificial neurons are mathematical abstractions that are far from the real thing. Real neurons are the only thing proven to produce what we call “intelligence”, and we can’t know whether artificial neurons can produce it too. Last but not least, we have no idea how to identify intelligence at all; we can’t even define it for sure. From the rudimentary machine learning models we have now to a full-blown intellect is a leap of a magnitude that is hardly comprehensible.
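
To get a feel for the magnitude of that gap, here is a rough back-of-envelope calculation. The numbers are illustrative assumptions in the spirit of the figures above (a commonly cited estimate of roughly 86 billion neurons, a conservative 10,000 synapses each), not measurements:

```python
# Back-of-envelope comparison of connection counts.
# All figures are illustrative assumptions, not measurements.

brain_neurons = 86e9                 # ~86 billion, a commonly cited estimate
brain_synapses_per_neuron = 1e4      # the text says "up to 100,000"; 10,000 is conservative
brain_connections = brain_neurons * brain_synapses_per_neuron

model_neurons = 10_000               # "thousands of neurons" per the article
model_connections_per_neuron = 500   # "hundreds of connections each"
model_connections = model_neurons * model_connections_per_neuron

print(f"brain: {brain_connections:.1e} connections")          # 8.6e+14
print(f"model: {model_connections:.1e} connections")          # 5.0e+06
print(f"ratio: {brain_connections / model_connections:.1e}x") # ~1.7e+08x
```

Even with conservative numbers, the brain comes out more than a hundred million times larger in raw connection count.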

But that doesn’t mean AI isn’t dangerous. It is worrying that most people focus on a sci-fi scenario and don’t notice intelligent systems that are already in use and that might pose a danger to people.

Case 1: BlackBox systems

Some years ago the PayPal algorithm decided that my spending habits were suspicious, and I had to use a credit card for my next purchase. No big deal, but it bothered me. When I asked support what was wrong, I got the answer that they “can’t change what the algorithm decided”. Wait, what?

It’s not that no one knows how the algorithm works; someone wrote it. But then it was trained, and that training creates a black box: you can’t reconstruct the full process that leads to a particular decision.

Since then, I have come across more examples:

  • Hedge funds
  • Insurance companies
  • Car manufacturers
  • Credit rating companies

In itself, a black box is no problem. You can still test it thoroughly and check whether it does the right thing in the right situations. But you can’t test all situations; that’s why you need AI in the first place. And much worse: you can’t explain to an affected person why a decision was made.
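
To make the point concrete, here is a minimal sketch of the black-box problem. The “transaction” features, the hidden flagging rule, and the model choice are all made-up assumptions for illustration; the point is only that the trained parameters contain no human-readable reason for any single decision:

```python
# Minimal sketch of the black-box problem, using scikit-learn.
# The "transaction" features and the hidden rule are made up for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic training data: [amount, hour_of_day, days_since_last_purchase]
X = rng.uniform([1, 0, 0], [500, 24, 90], size=(1000, 3))
y = (X[:, 0] > 300) & (X[:, 1] < 6)   # hidden rule the model will learn

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# The trained weights are plain numbers; none of them is a human-readable reason.
transaction = [[350, 3, 10]]
print("flagged as suspicious:", model.predict(transaction)[0])
print("learned parameters:", sum(w.size for w in model.coefs_), "weights")
# Support can inspect every weight, but no one can point to the "rule"
# that produced this particular decision.
```

You can test such a model on many inputs, but as the paragraph above says, you cannot enumerate all situations, and you cannot turn those weights back into an explanation.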

The implications are huge. Banned from a game? Bad luck, the algorithm decided. No credit for your home? Bad luck, the algorithm decided. Killed by an autonomous drone? The algorithm decided, must be right. Financial crash? Yeah, the algorithm did that, no idea why.

This is a huge problem and people are actually working on solving it. But there’s more.

Case 2: Bias

Let’s say we use a machine learning algorithm to preprocess the resumes coming to our company. We do that for two reasons:

  1. It’s fast and cost-effective
  2. The person who did it before didn’t like women as developers and maybe was a bit of a racist

A machine isn’t sexist or racist, so that should solve the problem, right?

So we feed the model the data of our past resumes along with the record of which applicants we hired. But we are not stupid: since the person before was a bit racist and a bit sexist, we don’t include this information in the data. No gender, no origin. Should work, right?

We then start our application process and everything is fine. But after some months we see the same problems: no women in development, no employees of foreign origin. Was our racist, sexist guy right? Are women bad developers, and do foreigners lack the needed education?

Nope. Our model is just great at learning patterns. For example, it learned that the question “When was your first contact with computers?” is incredibly relevant to the selection process. The guy before always chose candidates who were very young when they got their first computer. Why is that a problem? Because women are, on average, older when they first come into contact with computers. So our model simply found a way to identify women and exclude them, and now we are facing lawsuits for a sexist selection process. We still have no idea how the model identifies foreigners (see Case 1). Maybe the name? The university? The wording? We are about to get sued for a racist selection process and don’t know why.
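
Here is a short sketch of this proxy effect, using only synthetic data. The correlation between gender and the age of first computer contact is assumed purely to mirror the example above:

```python
# Sketch of proxy discrimination: the protected attribute is removed,
# but a correlated feature leaks it back in. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

gender = rng.integers(0, 2, n)             # 0 = male, 1 = female (never shown to model)
# Assumed proxy: age of first computer contact, correlated with gender
first_pc_age = rng.normal(8 + 4 * gender, 2, n)
skill = rng.normal(0, 1, n)                # the actual job-relevant signal

# Biased historical labels: the old recruiter favored early computer contact
hired = (skill + (10 - first_pc_age) * 0.8 + rng.normal(0, 1, n)) > 0

# Train WITHOUT the gender column...
X = np.column_stack([first_pc_age, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# ...and the bias survives anyway:
pred = model.predict(X)
print("hire rate, men:  ", pred[gender == 0].mean())
print("hire rate, women:", pred[gender == 1].mean())
```

The model never sees the gender column, yet the predicted hire rates differ, because the proxy feature carries the same information.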

And that’s only the problem we face. What about the applicants? They get discriminated against on a whole new level! What if we hadn’t found out about the discrimination because it was much more subtle? What if everyone started using our software?

It is very hard to exclude a preexisting bias from training data, and it is just as hard to find data without bias in the first place. Good luck…

Case 3: Oppression Tools

Not enough yet? Oh, you are in for a treat!

Face recognition is awesome. It allows us to find criminals; they can’t hide anymore. We can log everyone’s position and behavior in real time. Also, we can get information we never had access to before, since our algorithms can see patterns in cues we weren’t even aware of. Who knows whom we could identify from simple camera data?

  • people with an illness they don’t even know about yet?
  • people stealing in a supermarket?
  • people about to attack someone?
  • people having a heart attack who need immediate help?
  • people who didn’t pray when our religion says they should?
  • people who didn’t cry at our great leader’s funeral?
  • people who don’t have our preferred sexual orientation?
  • people who didn’t show up at work when they said they did?
  • people thinking about murdering someone?

AI is incredibly good at pattern recognition and getting better. This can benefit us all in ways we can’t even think of yet. But it’s also the perfect tool in the wrong hands. It can enable a quality of oppression that is unthinkable now. Vulnerable people who need help and have to hide in their home country might be targeted and systematically punished.

That would be horrible! Are we done yet? Sadly no.

Case 4: Cheat Bots

So let’s assume you are 90 years old and your grandson calls you. You are happy because he hasn’t contacted you in years. But he’s not happy: he’s in trouble and needs money, quick. This is a simple but effective scheme that is a huge problem here in Germany. But it’s limited by the number of people who would do such a horrible thing. They can’t call everyone.

Well… Google could. How long until anyone can?

So you are poor now, after being ripped off by a gangster’s AI. Fine, let’s play a little online poker. That’s how you made your money anyway, and you are good at it, right? Wrong. The servers are full of bots stripping even the pros. The CS:GO matches of your youth are long past, since every script kiddie can run a bot that plays the game directly from reading the screen. Undetectable. No more PvP games for you, grandpa.

AI acting like humans is a real and immediate danger. In games, bots were always a problem, but AI allows a quality of human-like behavior that wasn’t seen before. These AIs are also about to develop skills that far exceed ours in specific areas. When an AI disguises itself as human and acts in places where we assume only humans are present, we have a cheat bot. These things can disrupt and even destroy the whole system they infiltrate.

Dangers of AI: So is it all over?

No, not really. There is still time to fix these issues, and people are working on fixing them. The people who develop AI aren’t idiots; they know about the dangers of these tools. Some of the dangers, like bias, are even taught to new developers in courses by Microsoft, where you actually learn how to avoid them.

In the end, AI is neither doom nor savior. It’s just what we make of it. We know the pattern:

  • Environmentally friendly energy (with a catch) and nuclear weapons: same technology.
  • New sturdy materials and heaps of waste: same technology.
  • The perfect tool for mining and deadly explosives: same technology.

Let’s be aware of the dangers of artificial intelligence and try to act responsibly with this new tool. We don’t need to repeat the errors of the past, and if we do, it doesn’t have to be the end of the world.

