Quantum Computing and Machine Learning

Posted on 22 August 2013 by Stephen DeAngelis @EnterraCEO

"The brain performs its canonical task — learning — by tweaking its myriad connections according to a secret set of rules," writes Natalie Wolchover. "To unlock these secrets, scientists 30 years ago began developing computer models that try to replicate the learning process. Now, a growing number of experiments are revealing that these models behave strikingly similar to actual brains when performing certain tasks." ["As Machines Get Smarter, Evidence Grows That They Learn Like Us," Scientific American, 24 July 2013] How well are machines learning? The answer to that question really depends upon what you are asking them to learn. Dominic Basulto reports, "Not only are machines rapidly catching up to — and exceeding — humans in terms of raw computing power, they are also starting to do things that we used to consider inherently human. They can feel emotions like regret. They can daydream." ["Humans Are the World's Best Pattern-Recognition Machines, But for How Long?" Big Think, 24 July 2013]

Such reports might give the impression that supercomputers, powered by artificial general intelligence, are about to make humans obsolete (or at least an inferior species to machines). Iain Thomson, however, reports that, in spite of the dramatic advances being made by computer scientists, current artificial intelligence systems are about "as smart as a somewhat-challenged four-year-old child." ["IQ test: 'Artificial intelligence system as smart as a four year-old'," The Register, 16 July 2013] He explains:

"Researchers at the University of Illinois at Chicago have applied an IQ test to MIT's ConceptNet 4 artificial intelligence system, and determined it's about as smart as a somewhat-challenged four-year-old child. The team used the Weschsler Preschool and Primary Scale of Intelligence Test on the system and found it performed reasonably well on vocabulary and recognizing similarities, but scored very poorly on comprehension. 'ConceptNet 4 did dramatically worse than average on comprehension - the "why" questions,' said Robert Sloan, professor and head of computer science at UIC, and lead author on the study. 'If a child had scores that varied this much, it might be a symptom that something was wrong.' ConceptNet 4 has now been replaced with a smarter AI system, ConceptNet 5, but Sloan said its predecessor's performance highlighted one of the fundamental problems with generating true artificial intelligence. Such systems are have great difficulty generating what humans call common sense, since that all-too-rare capacity requires not only an extensive amount of factual knowledge, but also subjective facts we learn in life."

George Dvorsky sees the UIC research as little more than a publicity stunt. The fact that computers lack common sense, he insists, "is exactly why the AI is not nearly as smart as a 4-year old. It's just a glorified calculator at this point — crunching numbers, running scripts, and making probability assessments." ["No, we didn't just create an AI that’s as smart as a 4-year old," io9, 16 July 2013] He continues:

"What it's not doing are all those things that make a 4-year-old so brilliant: living in an environment and learning from experience. What's more, the AI is not embodied, nor does it have the biological inclinations that drive human tendencies. It's also important to remember that a four-year-old's brain is in full-on developmental mode; it's a work in progress that's being forged by experience. Intelligence is not something that's constructed, it's something that develops over time. Sometimes I get the feeling that AI developers simply want to create an end-product AI and say, 'voila, here's an intelligent entity right out of the box.' But that's not how intelligence comes about, and that's not how it works — at least not in the human sense of the term."

Okay, so computers aren't about to take over the world. Nevertheless, Basulto believes that as computing power increases and machines become more adept at pattern recognition, their ability to learn will improve rapidly. He explains:

"The future of intelligence is in making our patterns better, our heuristics stronger. In his article for Medium, Kevin Ashton refers to this as 'selective attention' — focusing on what really matters so that poor selections are removed before they ever hit the conscious brain. While some — like Gary Marcus of The New Yorker or Colin McGinn in the New York Review of Books, may be skeptical of [Ray] Kurzweil's Pattern Recognition Theory of Mind, they also have to grudgingly admit that Kurzweil is a genius. And, if all goes according to plan, Kurzweil really will be able to create a mind that goes beyond just recognizing a lot of words. One thing is clear — being able to recognize patterns is what gave humans their evolutionary edge over animals. How we refine, shape and improve our pattern recognition is the key to how much longer we’ll have the evolutionary edge over machines."

Wolchover reports that one promising algorithm is "used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983." She reports that it "appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle." Sue Becker, a professor of psychology, neuroscience, and behavior at McMaster University in Hamilton, told Wolchover, "It's the best possibility we really have for understanding the brain at present. I don’t know of a model that explains a wider range of phenomena in terms of learning and the structure of the brain." Wolchover notes that "the Boltzmann machine bears the name of 19th century Austrian physicist Ludwig Boltzmann, who developed the branch of physics dealing with large numbers of particles, known as statistical mechanics. Boltzmann discovered an equation giving the probability of a gas of molecules having a particular energy when it reaches equilibrium. Replace molecules with neurons, and the Boltzmann machine, as it fires, converges on exactly the same equation."
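That last point is worth unpacking. At equilibrium, a Boltzmann machine visits a network state v with probability proportional to e^(-E(v)), the same exponential form Boltzmann derived for molecules in a gas, and learning means nudging the connection weights until that distribution matches the training data. The Python sketch below uses the restricted variant of the machine trained with one step of contrastive divergence, Hinton's later and more practical approximation to the original 1983 learning rule; the layer sizes and the toy pattern are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy restricted Boltzmann machine: 6 visible units, 3 hidden units.
    # Sizes and data are made up purely for illustration.
    n_visible, n_hidden = 6, 3
    W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # weights
    b = np.zeros(n_visible)  # visible biases
    c = np.zeros(n_hidden)   # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample(p):
        # Draw binary unit states from their "on" probabilities.
        return (rng.random(p.shape) < p).astype(float)

    def cd1_update(v0, lr=0.1):
        # One step of contrastive divergence (CD-1).
        global W, b, c
        ph0 = sigmoid(v0 @ W + c)            # p(hidden | data)
        h0 = sample(ph0)
        v1 = sample(sigmoid(h0 @ W.T + b))   # the model's reconstruction
        ph1 = sigmoid(v1 @ W + c)
        # Hebbian-style rule: strengthen correlations present in the
        # data, weaken those the model produces on its own.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b += lr * (v0 - v1)
        c += lr * (ph0 - ph1)

    pattern = np.array([1, 1, 0, 0, 1, 1], dtype=float)
    for _ in range(500):
        cd1_update(pattern)

    # Reconstruction probabilities should now hug the trained pattern.
    h = sample(sigmoid(pattern @ W + c))
    print(sigmoid(h @ W.T + b).round(2))

After training, the reconstruction probabilities cluster near the pattern [1, 1, 0, 0, 1, 1]: the machine has stored the data in its weights, a toy version of the memory formation Becker alludes to.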

Devin Powell reports that a new algorithm is being added to the machine-learning toolkit. "In a series of papers posted online this month on the arXiv preprint server," he writes, "Seth Lloyd of the Massachusetts Institute of Technology in Cambridge and his collaborators have put a quantum twist on AI. The team developed a quantum version of 'machine learning', a type of AI in which programs can learn from previous experience to become progressively better at finding patterns in data. Machine learning is popular in applications ranging from e-mail spam filters to online-shopping suggestions. The team's invention would take advantage of quantum computations to speed up machine-learning tasks exponentially." ["Quantum boost for artificial intelligence," Nature, 26 July 2013] As that passage indicates, a quantum computer is necessary to take advantage of the algorithm. Powell explains:

"At the heart of the scheme is a simpler algorithm that Lloyd and his colleagues developed in 2009 as a way of quickly solving systems of linear equations, each of which is a mathematical statement, such a x + y = 4. Conventional computers produce a solution through tedious number crunching, which becomes prohibitively difficult as the amount of data (and thus the number of equations) grows. A quantum computer can cheat by compressing the information and performing calculations on select features extracted from the data and mapped onto quantum bits, or qubits. Quantum machine learning takes the results of algebraic manipulations and puts them to good use. Data can be split into groups — a task that is at the core of handwriting- and speech-recognition software — or can be searched for patterns. Massive amounts of information could therefore be manipulated with a relatively small number of qubits. 'We could map the whole Universe — all of the information that has existed since the Big Bang — onto 300 qubits,' Lloyd says. Such quantum AI techniques could dramatically speed up tasks such as image recognition for comparing photos on the web or for enabling cars to drive themselves — fields in which companies such as Google have invested considerable resources. (One of Lloyd's collaborators, Masoud Mohseni, is in fact a Google researcher based in Venice, California.) 'It's really interesting to see that there are new ways to use quantum computers coming up, after focusing mostly on factoring and quantum searches,' says Stefanie Barz at the University of Vienna, who recently demonstrated quantum equation-solving in action. Her team used a simple quantum computer that had two qubits to work out a high-school-level maths problem: a system consisting of two equations. Another group, led by Jian Pan at the University of Science and Technology of China in Hefei, did the same using four qubits. Putting quantum machine learning into practice will be more difficult. Lloyd estimates that a dozen qubits would be needed for a small-scale demonstration."

Powell's comment about Google's "considerable" investment in machine learning may be a reference to its recent purchase of a D-Wave quantum computer. As I noted in a post entitled "Quantum Computing: Is the Future Here?", Google's primary interest in quantum computing is advancing research into machine learning. As breakthroughs continue to be made in the area of quantum computing, machine learning should advance as well.