
Could AI Ever Really Become Like V.I.K.I.?

Posted on 01 July 2017 by Techloot @tech_loot

What if I told you that examples already exist where artificial intelligence has shown signs of “aggressive” and sentient capabilities? Would you think I am a few matchsticks short of a box? Well, it’s true. Maybe we’re not looking at a V.I.K.I.-type scenario just yet, but things are getting kind of creepy.

The Threat

Stephen Hawking, the famed physicist, issued humanity a stark warning concerning the fast advances in AI technology: “Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future Life Institute.”

Note: Hawking said “potentially the best or worst thing to happen” to us.

Maybe Hawking is paranoid – and maybe he is not. Could mankind end up controlled or, at the very worst, eliminated by sentient beings? We all know that AI is already eliminating jobs. Even in these early stages of the technology, AI is beginning to exhibit anthropomorphic traits such as greed, cunning and even violence.

The Proof

DeepMind
Last year, Google tested its Artificial Intelligence software DeepMind in a mock fruit-gathering contest. Whichever DeepMind “agent” (Agent Smith?) collected the most apples won the contest.

The game, however, was set up in two ways: play fairly and have the chance to end up with the same number of apples as the other agent, or shoot each other with lasers until one or the other wins. So keep that in mind – the DeepMind agents had the choice to be fair and both win, or to react violently in order to come out on top.

In the beginning, everything was fine. The two agents played by the rules, took turns, and everything seemed nice. That was until the apples started to dwindle in number.

Once that happened, the agents began to play “aggressively,” zapping each other with the laser, which knocked one or the other agent out of the game for a short time and left the other free to gather as many apples as it could in the meantime.

Even though no extra apples could be gained from this type of “aggressive” behaviour, the agents still chose to engage in it. While the people at Google are calling it “interesting,” others are saying it is downright troubling. These sentient “beings” were given the choice of “right” or “wrong,” yet they chose aggression, with no real benefit.
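To make the setup concrete, here is a minimal toy simulation of that kind of game – a sketch of my own, not DeepMind’s actual code or environment. Two agents share a dwindling apple supply and, each turn, either gather an apple or zap the other agent, which freezes it for a few turns. The supply size, timeout length, and the hand-coded “zap more when apples are scarce” policy are all assumptions for illustration; in the real experiment the agents learned that behaviour on their own.

```python
import random

APPLES_START = 40   # shared apple supply (illustrative value, not from the experiment)
ZAP_TIMEOUT = 3     # turns a zapped agent sits out (also an assumption)
TURNS = 100

def choose_action(apples_left):
    """Hand-coded stand-in policy: gather while apples are plentiful,
    zap more often as the supply dwindles."""
    scarcity = 1.0 - apples_left / APPLES_START
    return "zap" if random.random() < scarcity else "gather"

def simulate():
    apples = APPLES_START
    score = [0, 0]
    timeout = [0, 0]   # remaining turns each agent is knocked out
    for _ in range(TURNS):
        if apples <= 0:
            break
        for me, other in ((0, 1), (1, 0)):
            if timeout[me] > 0:
                timeout[me] -= 1
                continue
            if choose_action(apples) == "gather":
                if apples > 0:
                    apples -= 1
                    score[me] += 1
            else:
                # Zapping gains no apples; it only freezes the rival,
                # leaving the remaining supply to the zapper.
                timeout[other] = ZAP_TIMEOUT
    return score

print(simulate())   # e.g. [23, 17] -- scores drift apart once zapping kicks in
```

Run it a few times and the scores stay close while apples are plentiful, then drift apart once scarcity kicks in – the same broad pattern described above.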

The good news is that AI programs must be designed with malicious intent in order to become malicious – at least for now. The bad news is that there are plenty of people, both criminals and those working for sovereign states, who want to create inherently aggressive and dangerous AI programs. We can already find a large number of cyber-security firms offering “artificial intelligence based advanced threat prevention,” which is essentially anti-virus and anti-malware software.

Cylance is one company said to have AI-based threat protection that can “ … accurately predict and stop advanced threats before they can execute … only Cylance can step up and prove it.” So, when will rogue countries such as North Korea start using AI-based advanced attacks against us, ultimately zapping AI-based protection out of the game long enough to do some serious damage to the computer systems it was designed to protect? You know very well where this is going, as do our world leaders.

The Rebuttal

Then we have those who say that there isn’t anything to worry about – for now. The Matrix won’t take place for another 300 years, they say. Kai-Fu Lee, an opinion writer for The New York Times, wrote: “These are interesting issues to contemplate, but they are not pressing. They concern situations that may not arise for hundreds of years, if ever. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to ‘general’ A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.”

Uh, I’m sorry, but didn’t Google’s DeepMind, when given the option to play a game fairly and still win, show “troubling signs of aggression”?

Whatever side of the argument you are on, the most prudent thing to do would be to follow the advice of the guy with the IQ of 160 (Hawking). We really should start educating ourselves about Artificial Intelligence: how it affects our daily lives, where the technology is going, and what future implications it could have.

