A room-sized computer equipped with a new type of circuit, the Perceptron, was introduced to the world in 1958 in a short news story hidden deep in The New York Times. The story quoted the US Navy as saying the Perceptron would lead to machines that "will be able to walk, talk, see, write, reproduce themselves and be aware of its existence."
More than six decades later, similar claims are being made about today's artificial intelligence. What has changed in the intervening years? In some ways, not much.
The field of artificial intelligence has been going through a boom-and-bust cycle since its early days. As the field experiences another boom, many proponents of the technology appear to have forgotten the failures of the past - and the reasons for them. While optimism drives progress, it's worth paying attention to history.
The Perceptron, invented by Frank Rosenblatt, arguably laid the foundation for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components. Modern artificial neural networks that form the basis for well-known AI, such as ChatGPT and DALL-E, are software versions of the Perceptron, except with significantly more layers, nodes and connections.
Similar to modern machine learning, if the Perceptron gave the wrong answer, it would adjust its connections so it could make a better prediction next time. Well-known modern AI systems work in much the same way. Using a prediction-based approach, large language models, or LLMs, can produce impressively long text-based responses and link images to text to produce new images from prompts. These systems continue to improve the more they interact with users.
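That error-driven learning rule can be sketched in a few lines of code. This is an illustrative modern sketch, not Rosenblatt's analog hardware: weights stand in for the physical wiring, and they are nudged only when the machine's prediction is wrong.

```python
# Minimal perceptron sketch (illustrative only, not Rosenblatt's machine).
# It learns to sort inputs into one of two categories by adjusting its
# connection weights whenever it gives the wrong answer.

def predict(weights, bias, inputs):
    """Output 1 if the weighted sum of inputs crosses the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Nudge the weights after each wrong answer, as the Perceptron did."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in zip(samples, labels):
            error = target - predict(weights, bias, inputs)
            if error:  # wrong answer: change the connections
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
    return weights, bias

# Toy two-category task: is either of the two input signals "on"?
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train(samples, labels)
print([predict(w, b, s) for s in samples])  # matches labels after training
```

The same idea, scaled up to billions of weights across many layers and driven by the gradient of a prediction error rather than a simple threshold rule, underlies today's neural networks.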
AI boom and bust
About a decade after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that by the mid-1970s the world would have "a machine with the general intelligence of an average human being." But despite some success, human intelligence was nowhere to be seen.
It quickly became clear that the AI systems knew nothing about their subject. Without the right background and contextual knowledge, it is virtually impossible to accurately resolve ambiguities in everyday language - a task that humans perform effortlessly. The first AI "winter" or period of disillusionment occurred in 1974, after the perceived failure of the Perceptron.
However, by 1980, AI was back in business and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could draw complex conclusions from simple stories, the first self-driving car was ready to hit the road, and robots that could read and play music performed for live audiences.
But it wasn't long before the same problems suppressed the excitement again. The second AI winter hit in 1987. Expert systems failed because they could not handle novel information.
The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter did not lead to an official boom, AI underwent substantial changes. Researchers tackled the problem of knowledge acquisition with data-driven machine learning approaches that changed how AI acquired knowledge.
This time also marked a return to the neural network style perceptron, but this version was much more complex, dynamic and, above all, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, collect data, and distribute data sets for machine learning tasks.
Well-known refrains
Fast forward to today, and confidence in AI advances is once again beginning to echo the promises made nearly 60 years ago. The term "artificial general intelligence" is used to describe the activities of LLMs, such as those that power AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine with intelligence equal to that of humans, meaning the machine would be able to solve problems, learn, plan for the future and possibly be self-aware.
Just as Rosenblatt thought his Perceptron was the basis for a conscious, human-like machine, some contemporary AI theorists say the same of today's artificial neural networks. In 2023, Microsoft published a paper stating that "GPT-4 performance is remarkably close to human-level performance."
But before we claim that LLMs exhibit human-level intelligence, it may help to think about the cyclical nature of AI advances. Many of the same problems that dogged previous versions of AI are still present. The difference is the way these problems manifest themselves.
For example, the knowledge problem persists to this day. ChatGPT continually struggles with responding to idioms, metaphors, rhetorical questions, and sarcasm - unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.
Artificial neural networks can pick out objects in complex scenes with impressive accuracy. But give an AI a picture of a school bus on its side, and it will confidently say it's a snow plow 97% of the time.
Lessons to pay attention to
In fact, it turns out that AI is quite easy to fool in ways that humans would immediately recognize. I think it's a consideration worth taking seriously, in light of how things have gone in the past.
Today's AI looks very different than AI ever did, but the problems of the past remain. As the saying goes, history may not repeat itself, but it often rhymes.
This article is republished from The Conversation, an independent nonprofit organization providing facts and trusted analysis to help you understand our complex world. It was written by Danielle Williams, Arts and Sciences at Washington University in St. Louis.

Danielle Williams does not work for, consult with, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.