
How We Got Here and Where We Are Going


Image credit: SuPatMaN / Shutterstock

With the current buzz around artificial intelligence (AI), it would be easy to assume that it is a recent innovation. In fact, AI has been around in one form or another for over 70 years. To understand the current generation of AI tools and where they could lead, it's helpful to understand how we got here.

Each generation of AI tools can be seen as an improvement on the ones before it, but none of them is heading toward consciousness.

The mathematician and computing pioneer Alan Turing published an article in 1950 with the opening sentence: "I propose to consider the question: 'Can machines think?'". He went on to propose something called the Imitation Game, now commonly known as the Turing Test, in which a machine is considered intelligent if it cannot be distinguished from a human in a blind conversation.

Five years later, the term "artificial intelligence" made its first appearance in print, in a proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

From that early beginning, a branch of AI developed in the 1960s that became known as expert systems. These systems were designed to capture human expertise in specialized domains. They used explicit representations of knowledge and are therefore an example of what is called symbolic AI.

There were many early successes that became widely known, including systems for identifying organic molecules, diagnosing blood infections, and prospecting for minerals. One of the most high-profile examples was a system called R1 that in 1982 reportedly saved Digital Equipment Corporation $25 million a year by designing efficient configurations of its minicomputer systems.

The main advantage of expert systems was that a subject matter expert, without any programming knowledge, could in principle build and maintain the computer's knowledge base. A software component known as the inference engine then applied that knowledge to solve new problems in the field, leaving a trail of evidence that provided some form of explanation.
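To make that division of labour concrete, here is a minimal sketch of a forward-chaining inference engine in Python. The rules and facts are invented purely for illustration, and real systems held hundreds of rules, but the separation between an editable knowledge base and the engine that applies it, while recording an evidence trail, is the essence of the approach.

```python
# Minimal sketch of an expert system: a knowledge base of if-then rules
# (which a domain expert could in principle edit without programming) and
# an inference engine that applies them, recording which rules fired.
# The medical rules below are invented purely for illustration.

rules = [
    ({"fever", "stiff_neck"}, "suspect_infection"),
    ({"suspect_infection"}, "order_blood_test"),
]

def infer(initial_facts):
    """Forward-chain over the rules until no new fact can be derived."""
    facts = set(initial_facts)
    trail = []  # (conditions, conclusion) pairs, in firing order
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trail.append((conditions, conclusion))
                changed = True
    return facts, trail

facts, trail = infer({"fever", "stiff_neck"})
for conditions, conclusion in trail:  # the trail doubles as an explanation
    print(f"{sorted(conditions)} -> {conclusion}")
```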

Expert systems were all the rage in the 1980s, when organizations clamored to build their own, and they remain a useful part of AI today.

Enter machine learning

While expert systems aimed to model human knowledge, a separate field known as connectionism also emerged, aiming to model the human brain more literally. The brain contains approximately 100 billion nerve cells, or neurons, connected by dendritic (branching) structures. As long ago as 1943, two researchers, Warren McCulloch and Walter Pitts, produced a mathematical model for such neurons, whereby each would produce a binary output depending on its inputs.
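The McCulloch-Pitts neuron is simple enough to sketch in a few lines of Python. The weights and threshold below are illustrative values, chosen so that the neuron behaves like a logical AND gate:

```python
# Sketch of a 1943-style McCulloch-Pitts neuron: a binary output that
# fires (1) only when the weighted sum of its inputs reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these illustrative settings the neuron acts as a logical AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron((a, b), weights=(1, 1), threshold=2))
```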

Read more: AI will soon be impossible for humans to comprehend - the story of neural networks tells us why

One of the first computer implementations of connected neurons was developed in 1960 by Bernard Widrow and Ted Hoff. Such developments were interesting, but of limited practical use until 1986, when a learning algorithm (known as backpropagation) was developed for a software model called the multilayer perceptron (MLP).

The MLP is an arrangement of typically three or four layers of simple simulated neurons, with each layer fully connected to the next. The learning algorithm for the MLP was a breakthrough. It enabled the first practical tool that could learn from a set of examples (the training data) and then generalize so that it could classify previously unseen input data (the test data).

This was achieved by attaching numerical weights to the connections between neurons and adjusting them until the training data was classified as well as possible. The trained network could then be used to classify examples it had never seen before.
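As a rough sketch of that training loop, here is a tiny MLP in Python with NumPy. The XOR function stands in for a training set, and the gradient-descent weight updates follow the standard backpropagation recipe; none of this is the historical 1986 code, just an illustration of weights being adjusted to fit examples:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training data
y = np.array([[0], [1], [1], [0]], dtype=float)              # target classes (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input-to-hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # the network's classification attempt
    # Propagate the error backwards and nudge every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # close to [[0], [1], [1], [0]] once the weights fit the data
```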

The MLP could handle a wide range of practical applications, provided the data was presented in a format it could use. A classic example was the recognition of handwritten characters, but only if the images were preprocessed to extract the most important features.

Newer AI models

After the success of the MLP, numerous alternative forms of neural networks began to emerge. Key among these was the convolutional neural network (CNN) in 1998, which was similar to an MLP except for the additional layers of neurons for identifying key features of an image, eliminating the need for preprocessing.
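The extra ingredient is the convolution operation, which is simple to sketch. Below, a small filter slides across an image to produce a feature map; the image and filter here are hand-written illustrations (a vertical-edge detector), whereas in a real CNN the filter weights are learned from the data, which is precisely what removes the need for hand-crafted preprocessing:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, producing a feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                    # left half dark, right half bright
edge_filter = np.array([[-1.0, 1.0],  # responds to dark-to-bright transitions
                        [-1.0, 1.0]])
print(convolve2d(image, edge_filter))  # strong response along the vertical edge
```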

Both the MLP and the CNN were discriminative models, meaning they could make a decision, typically classifying their input to produce an interpretation, diagnosis, prediction, or recommendation. Meanwhile, other neural network models were developed that were generative, meaning they could create something new after being trained on a large number of previous examples.

Generative neural networks can produce text, images, or music, as well as generate new sequences to support scientific discovery.

Two models of generative neural network have stood out: generative adversarial networks (GANs) and transformer networks. GANs achieve good results because one part of the network is 'adversarial': it acts as a built-in critic, demanding ever-improving quality from the 'generative' part.
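A minimal sketch of that adversarial loop, assuming PyTorch and a toy one-dimensional "real" distribution (the architectures and hyperparameters are illustrative, not any published GAN):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                  nn.Sigmoid())  # discriminator: the built-in critic

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0  # toy "real" data, mean 4.0
    fake = G(torch.randn(64, 8))            # the generator's attempts

    # Critic's turn: learn to label real samples 1 and fakes 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator's turn: improve until the critic calls its fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generated samples' mean should have drifted toward the real mean, 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```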

Transformer networks have come to prominence through models such as GPT-4 (Generative Pre-trained Transformer 4) and its chat-based offshoot, ChatGPT. These large language models (LLMs) are trained on enormous datasets drawn from the internet. Human feedback improves their performance still further, via a technique known as reinforcement learning.

Besides producing impressive generative capability, the vastness of the training data means that such networks are no longer limited to specialized, narrow domains like their predecessors, but can now turn their hand to virtually any topic.

Where is AI going?

The capabilities of LLMs have led to dire predictions of AI taking over the world. Such scaremongering is unjustified in my opinion. While current models are clearly more powerful than their predecessors, the trajectory remains firmly toward greater capacity, reliability, and accuracy, rather than toward any form of consciousness.

As Professor Michael Wooldridge noted in his 2017 testimony before the House of Lords, "the Hollywood dream of conscious machines is not imminent, and indeed I see no path that leads us there." Seven years later, his assessment still holds true.

There are many positive and exciting potential applications for AI, but a look at history shows that machine learning is not the only tool. Symbolic AI still has a role to play, as it offers the ability to integrate known facts, insights, and human perspectives.

For example, a self-driving car could be provided with the rules of the road instead of learning them by example. A medical diagnostic system could be checked against medical knowledge to provide verification and explanation of the outputs of a machine learning system.

Social intelligence can also be applied to filter out offensive or biased outputs. The future looks bright, and it will involve a range of AI techniques, including some that have been around for many years.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Adrian Hopgood is a long-standing unpaid collaborator with LPA Ltd, makers of the VisiRule symbolic AI tool.
