What It Means to Be Human is Elusive

By Elliefrost @adikt_blog

Photo: John Walton/PA

Intelligent machines have been serving and enslaving humans in the realm of imagination for decades. The omniscient computer - sometimes benign, usually malevolent - was a staple of the science fiction genre long before such an entity was feasible in the real world. That moment may now be approaching faster than societies can establish appropriate rules. In 2023, the potential of artificial intelligence (AI) came to the attention of a wide audience far beyond tech circles, thanks in large part to ChatGPT (which launched in November 2022) and similar products.

Given how quickly the field is progressing, that fascination is sure to deepen in 2024, coupled with anxiety about some of the more apocalyptic scenarios that could unfold if the technology is not adequately regulated. The closest historical parallel is humanity's acquisition of nuclear power, yet the challenge of AI is arguably greater. Getting from a theoretical understanding of how to split the atom to assembling a reactor or a bomb is difficult and expensive; malicious applications of code, by contrast, can be transmitted and replicated online with viral efficiency.

The worst outcome - human civilization accidentally programs itself into obsolescence and collapse - is still science fiction, but even the low probability of catastrophe should be taken seriously. Meanwhile, harms on a more mundane scale are not only feasible, but present. The use of AI in automated systems to manage public and private services risks entrenching and reinforcing racial and gender biases. An "intelligent" system trained on data skewed by centuries of white male dominance in culture and science will make medical diagnoses or evaluate job applications based on criteria that have biases built into them.

This is the less glamorous end of concerns about AI, which may explain why it receives less political attention than lurid fantasies of robot uprisings, but it is also the most urgent task for regulators. While in the medium and long term there is a risk of underestimating what AI can do, in the shorter term the opposite tendency - being needlessly overawed by the technology - stands in the way of prompt action. The systems currently being rolled out in many fields, producing useful scientific discoveries as well as sinister deepfake political propaganda, rely on techniques that are phenomenally complex at the level of code, but not conceptually inscrutable.


Organic nature
Large language model technology works by absorbing and processing massive data sets (much of them scraped from the internet without permission from the original content producers) and generating solutions to problems with astonishing speed. The end result resembles human intelligence, but is in fact a brilliantly plausible synthetic product; it has virtually nothing in common with the subjective human experience of cognition and consciousness.

Some neuroscientists plausibly argue that the organic nature of the human mind - the way we evolved to navigate the universe through biochemical mediation of sensory perception - is qualitatively so different from a machine's modeling of an external world that the two experiences will never converge.

That doesn't rule out robots outsmarting humans at increasingly sophisticated tasks, which is clearly happening. But it does mean that the essence of what it means to be human is not as soluble in the rising tide of AI as some dire predictions suggest. This is not just a profound philosophical distinction. To manage the social and regulatory implications of increasingly intelligent machines, it is crucial to maintain a clear sense of human agency: where the balance of power lies and how it might shift.

It is easy to be so impressed by the capabilities of an AI program that we forget the machine is executing an instruction conceived by a human mind. The speed of data processing is the muscle, but the animating force behind these wonders of computing power is imagination. The answers ChatGPT gives to tough questions are impressive because the feat itself strikes the human imagination as one of infinite possibility. The actual text is usually banal, even rather dim compared with what a capable human could produce. Quality will improve, but we must not lose sight of the fact that the sophistication on display is our own intelligence reflected back at us.

Ethical impulses
That reflection is also our greatest vulnerability. We anthropomorphize robots, projecting onto them emotions and conscious thoughts that do not really exist - which is how they can then be used to deceive and manipulate us. The better machines become at replicating and surpassing technical human achievements, the more important it becomes to study the nature of the creative impulse, and the way societies are defined and held together by shared imaginative experiences.

The more this machine capability spreads into our daily lives, the more important it becomes to understand, and to educate future generations in, culture, art, philosophy and history - fields that are called the humanities for good reason. While 2024 will not be the year robots take over the world, it will be a year of growing awareness of the ways AI has already embedded itself in society, and of growing demand for political action.

The two most powerful engines currently accelerating the development of technology are a commercial race for profit and competition between states for strategic and military advantage. History shows that these impulses are not easily restrained by ethical considerations, even when there is an explicit statement of intent to act responsibly. In the case of AI, there is a particular danger that public understanding of science will not keep pace with the questions policy makers are grappling with. That can lead to apathy and irresponsibility, or to moral panic and bad legislation. That's why it's essential to distinguish between the science fiction of all-powerful robots and the reality of brilliantly advanced tools that ultimately take instructions from humans.

Most non-experts struggle to understand the inner workings of super-powerful computers, but that is not the qualification needed to grasp how the technology should be regulated. We do not have to wait to find out what robots can do when we already know what it is to be human, and that the power for good and evil resides in the choices we make, not in the machines we build.