
Artificial Intelligence: Is There Peril in Deep Learning?

Posted on 12 December 2012 by Stephendeangelis @EnterraCEO

There have been cinematic depictions of evil robots ever since Fritz Lang's 1927 science-fiction film "Metropolis." More recent depictions include HAL from "2001: A Space Odyssey" and the skeletal robots of the "Terminator" movies. As I noted in my previous post, some people are taking the threat of the rise of intelligent machines very seriously. Researchers in England are so concerned about advances in artificial intelligence that "a team of scientists, philosophers and engineers will form the new Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom." ["Cambridge University team to assess the risk posed to humanity by artificial intelligence," by Chris Wood, Gizmag, 27 November 2012] The CSER will assess "extinction-level" threats to humanity, including the rise of intelligent robots.

However, before we start scaring anyone, we should remember that scientists remain a long way from developing a sentient machine. Randy Rieland reports that one of the best "synthetic brains" created so far is still quite limited. ["One Step Closer to a Brain," Smithsonian, 18 October 2012] He writes:

"A few months ago Google shared with us another challenge it had taken on. It wasn’t as fanciful as a driverless car or as geekily sexy as augmented reality glasses, but in the end, it could be bigger than both. In fact, it likely will make both of them even more dynamic. What Google did was create a synthetic brain, or at least the part of it that processes visual information. Technically, it built a mechanical version of a neural network, a small army of 16,000 computer processors that, by working together, was actually able to learn. At the time, most of the attention focused on what all those machines learned, which mainly was how to identify cats on YouTube. That prompted a lot of yucks and cracks about whether the computers wondered why so many of the cats were flushing toilets. But Google was going down a path that scientists have been exploring for many years, the idea of using computers to mimick the connections and interactions of human brain cells to the point where the machines actually start learning. The difference is that the search behemoth was able to marshal resources and computing power that few companies can."

In the last post, I also mentioned that "IBM has simulated 530 billion neurons and 100 trillion synapses – matching the numbers of the human brain – in an important step toward creating a true artificial brain." ["IBM supercomputer used to simulate a typical human brain," by Dario Borghino, Gizmag, 19 November 2012] Neither Google nor IBM claims that its synthetic brain represents true machine intelligence -- these are just very smart software programs. Rieland accepts that Google's cat recognition achievement is impressive, but he rhetorically asks, "In the realm of knowledge, is this cause for great jubilation?" In answer to his own question, he writes, "Well, yes. Because eventually all the machines working together were able to decide which features of cats merited their attention and which patterns mattered, rather than being told by humans which particular shapes to look for. And from the knowledge gained through much repetition, the neural network was able to create its own digital image of a cat's face."
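To give a sense of what "simulating neurons and synapses" involves: large-scale simulations like IBM's advance simplified neuron models through time and propagate spikes across weighted synapses. Below is a minimal sketch of a leaky integrate-and-fire network, a common model in this kind of work -- with hypothetical parameters, and five neurons standing in for IBM's 530 billion.

```python
# Minimal leaky integrate-and-fire simulation sketch. Parameters and
# network size are illustrative, not IBM's actual model.
import numpy as np

rng = np.random.default_rng(1)

n = 5                                  # neurons (IBM: 530 billion)
W = rng.random((n, n)) * 0.5           # synaptic weights (IBM: 100 trillion synapses)
v = np.zeros(n)                        # membrane potentials
threshold, decay, dt = 1.0, 0.9, 1.0   # fire threshold, leak factor, time step

for step in range(100):
    external = rng.random(n) * 0.3     # random external input current
    v = decay * v + dt * external      # leaky integration of input
    spikes = v >= threshold            # neurons that fire this step
    v[spikes] = 0.0                    # reset fired neurons
    v += W @ spikes.astype(float)      # spikes propagate through synapses
```

Rieland continues: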

"That's a big leap forward for artificial intelligence. It's also likely to have nice payoffs for Google. One of its researchers who worked on the project, an engineer named Jeff Dean, recently told MIT’s Technology Review that now his group is testing computer models that understand images and text together. 'You give it "porpoise" and it gives you pictures of porpoises,' Dean explained. 'If you give it a picture of a porpoise, it gives you "porpoise" as a word.' So Google's image search could become far less dependent on accompanying text to identify what's in a photo. And it's likely to apply the same approach to refining speech recognition by being able to gather extra clues from video. ... But now a slice of perspective. For all its progress, Google still has a long way to go to measure up to the real thing. Its massive neural network, the one with a billion connections, is, in terms of neurons and synapses, still a million times smaller than the human brain's visual cortex."

If that's the state of the art, it raises the question: What's all the fuss about artificial intelligence leading to "extinction-level" threats to humanity? The concern seems to be that research in artificial intelligence is advancing so fast that it's time to start thinking about the long-term consequences of developing machine intelligence. I discussed some of the challenges associated with the development of sentient machines in a previous post entitled Philosophy and Artificial General Intelligence. But those concerns primarily dealt with how humans will treat machines, not how machines will treat humans. The folks at CSER have decided it's time to worry about the latter. Reggie Ugwu writes:

"At the University of Cambridge in England, a newly formed department dubbed the Center for the Study of Existential Risk (CSER) aims to identify, and curb, specific developments in science and technology that could potentially endanger human civilization as we know it. 'Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole,' reads the deadpan and terrifying statement of purpose on CSER's website. 'Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change.' The center was jointly founded by the astrophysicist Martin Rees, the philosopher Huw Price and the software magnate Jaan Tallinn, a co-founder of Skype. Of particular concern to the trio are the prospects of artificial general intelligence (AGI) and the point when computers are developed that surpass the intelligence of the human brain, a phenomenon sometimes referred to as the Singularity." ["New Cambridge Research Center Aims To Save Us From Robopocalypse," Complex Tech, 27 Novmeber 2012]

Ugwu reports that Price, in an interview with The Register, discussed how a sentient computer might compete with humans. "Think how it might be to compete for resources with the dominant species. ... Take gorillas for example -- the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival. Nature didn't anticipate us, and we in our turn shouldn't take artificial general intelligence for granted. ... We need to take seriously the possibility that there might be a 'Pandora's box' moment with AGI that, if missed, could be disastrous." Prompted by the Cambridge University announcement of CSER's establishment, the folks at the blog site "if." ask, "Artificial Intelligence: are we stupid to build machines that can outwit us?" [28 November 2012] In the end, they are not so sure. The article explains:

"Artificial Intelligence has ... seeped into the social media sphere. A number of social media data mining companies claim to use AI to sift through data. When cutting data by sentiment, for example, there are tools that are apparently programmed to learn and adapt to the intricacies of colloquial language, turns of phrase, and the finer nuances of language. Much more sophisticated than the old Boolean search. So, will these tools take over and destroy the social media marketing race? There’s no question that computers have seeped into all industries over the past few decades, in many instances proving a far more cost-effective and efficient workforce than us humans. It's certainly true that social media can be automated – automated scheduled posts, automated data mining, automated follower acquisition. But isn't it the human element of social media that draws people towards it? A computer may be able to mine unimaginably Big Data, but can it truly analyze and understand the insights into human psychology that the data reveals? Can it really capture whimsicality, irony, dry wit or rhetoric? Surely it's not about replacing, but supporting – combine a computer's brain power with the human ability to understand and interpret and the real value in data is suddenly revealed."

There are researchers who believe that all of those questions will eventually be answered in the affirmative. So we are left with their original question, "Are we stupid to build machines that can outwit us?" For my part, I'm not panicking. Although Price's example of the gorilla population being indirectly affected by how humans change the environment is interesting, it seems to be a flawed analogy. The gorillas have no means of controlling humans, but humans can pull the plug on machines. Price seems to have a Terminator-like future in mind in which intelligent robots act independently of humans and start making decisions based on their own survival. That future seems highly unlikely to me. I'm much more concerned about how humans will create extinction-level events through stupidity than I am about evil robots destroying mankind through their genius. In fact, I'm hoping that those smart machines can help us humans be a little smarter in how we deal with the planet we live on and the resources we use.

