
On A.I.: Our Modern Prometheus

By Ashleylister @ashleylister
"Learn from me, if not by my precepts, at least by my example, how dangerous is the acquirement of knowledge, and how much happier that man is who believes his native town to be his world, than he who aspires to become greater than his nature will allow."                                                                            Victor Frankenstein, moments before creating his monster.
If everything on this phenomenological plane can be said to be predetermined, and if every dominant species that has ever played master to the planet were given its own recorded timeline – represented visually, let’s say, on an abacus – then it might just be that this small but ambitious bead of humanity has long since departed from its starting position and is now arriving at its dead end. The idea that we are mere moments from the point of termination is hardly a revelation to anyone these days. Those with a religious intuition have been prophesying it for centuries, and the atomic scientists’ Doomsday Clock now stands at one hundred seconds to midnight.

It is not clear to anyone whether the apocalypse now upon us will amount to total annihilation, or whether it is the brutal but necessary dismantling of the old ways that will beget our resurrection. I acknowledge the latter option is an anthropocentric notion – and therefore possibly deluded – but I won’t deny it’s the one I’m crossing my fingers tightly for. The truth is that all remains to be seen. Irrespective of which camp you belong to, pessimist or optimist, there is enough going on all at once these days that it is not a stretch for any of us to believe the world as we know it could end at any minute.

In the preface to her book, 'The Origins of Totalitarianism', Hannah Arendt writes that "desperate hope and desperate fear often seem closer to the center of such events than balanced judgment and measured insight." She identifies two groups of people – "those committed to a belief in an unavoidable doom" and "those who have given themselves up to reckless optimism" – and advocates that we find ourselves in neither group, but rather analyze transpiring events, as well as those on the horizon, as realistically and objectively as we are able.

To honor Arendt, I will attempt to be as realistic and objective as I can possibly be. I will do my best to set my quiet optimism to one side, whilst also refusing to play the fearmonger – as popular as it is to do so these days. Of course, I make no promises I’ll succeed.

Hannah Arendt’s book was first published in 1951, written in the wake of two world wars and in the uneasy anticipation of a third. She elaborates that "this moment of anticipation is like the calm that settles after all hopes have died … never has our future been more unpredictable, never have we depended so much on political forces that cannot be trusted to follow the rules of common sense and self-interest – forces that look like sheer insanity … there prevails an ill-defined, general agreement that the essential structure of all civilisations is at the breaking point."

The parallels to our current time are startling. She might as well be writing about the predicaments we face now. History is repeating itself once again, and the lessons that were not learned the first time around only appear to grow in severity with each subsequent iteration. Where we continually refuse to learn from our past, things only seem to get worse.

Despite our routine accomplishment of the impossible that marks our time on this planet – the harnessing of fire, the superpower of language, the unlikelihood of civilisation, the splitting of the atom and the leap into outer space – despite all of these examples and many more, we are still unable to figure out how to live together in peace. This is the specter that haunts us, and humanity’s tortured brow is the result of its troubled conscience.

It’s not that there aren’t people striving to create a better world. We all know there are. There are multitudes across the globe doing whatever they can in its name, and yet the nuanced and complex business of living on this planet means that even the pursuit of the worthiest of objectives does not guarantee positive outcomes. We even have an axiom acknowledging this: the road to hell is paved with good intentions.
And it is with this same mindset – that of fixing the world, of taking us forward into a brighter and more peaceful future – that we might create the largest threat to our existence we have ever faced.

In the dizzying hyperspace of our current technological storm, we are fast approaching an ‘intelligence explosion’, an event so unprecedented that it threatens to leave humankind behind, consigned to obsolescence and oblivion. We are closing in on a fatal conundrum that has been written about and forecast by science fiction and non-fiction authors alike for decades: the seemingly inevitable advent of superhuman artificial intelligence.

Note the operative word: superhuman. We are already living with the infant forms of A.I, but the vision of artificial intelligence I am referring to is of their grown-up counterparts: those future systems and machines whose intelligence one day breaks through the glass ceiling of machine learning and supersedes all forms of natural intelligence – including, most notably and fatefully, our own.

This would be the crowning achievement of all scientific endeavour thus far, and the pivotal moment when the floodgates open into an irreversible event commonly dubbed the Singularity: that mythic point in time when machine intelligence becomes more powerful than all human intelligence combined.

These A.I systems would be capable of independent thought, of self-awareness, of producing their own sense of purpose. Perhaps they might even dream. They will not need to rely upon human beings to reproduce, but will be self-replicating. Such A.I will be able to improve itself through itself alone, continuously processing and connecting ever more complicated branches of information at ever-increasing computing speeds – the likes of which we would have no hope of competing with – and this hyper-realised pace of evolution might feasibly carry it into dimensions of consciousness we couldn’t even begin to conceive of, let alone access.

Such A.I would answer some of our deepest, most existential and apparently impossible questions. Not that we will necessarily like the answers, or even possess the requisite intelligence to understand them – something alluded to by the famous joke in 'The Hitchhiker’s Guide to the Galaxy', in which a supercomputer is asked to calculate the meaning of Life, the Universe and Everything, and returns with the answer: forty-two.

In a perfect design, human beings would maintain some measure of control – keeping a firm hand on the plug, so to speak – and use A.I to address some of the most pressing concerns of humanity: solving world hunger, poverty and disease; improving climate science; restoring balance to the natural world; and instilling peace throughout the globe. Yet at some point it is fair to presume that a superintelligence, by definition, cannot be outwitted, and the solutions it provides to the problems we set it might not be aligned with our best interests.

Perhaps, with ruthless logic, such technologies would surmise that the most efficient route to peace throughout the planet would be the eradication of the human race.

When such superhuman intelligence is finally achieved, it stands to reason that ‘the human era’ will come to an end. Reality will be transformed beyond our wildest imagination, and if we are not careful, we will have stripped ourselves of all usefulness and, by so doing, written ourselves out of history.

This is not a road we might take: it is one we are already traveling down. It has long been observed that the rate of technological progress increases exponentially, and as A.I is already outpacing its predicted speed of development, we had best be prepared for what lies ahead of us, lest it become our final destination.

Consider again the issue of achieving world peace, whether that be an end to conflict between human societies, or an end to humanity’s dysfunctional relationship with the environment.
It’s a bit of a Hollywood cliché, and perhaps overly simplistic, but it’s not too far a reach to postulate that A.I might indeed decide that the easiest way to solve the issue would be to remove the common denominator.

Any human who has been knocking around for long enough will likely have noticed that we are the problem children here. Whilst the rest of the natural world’s occupants and processes appear to be part of an intricately coordinated ecosystem, we routinely disturb the balance. We subvert nature, violate and destroy her, in order to fulfill our own designs and desires, no matter how far-fetched or maddening. Those familiar with 'The Matrix' will recall the character of Agent Smith, chief antagonist and computer program, comparing humanity to a virus:
"Human beings are a disease, a cancer of this planet. You are a plague, and we are the cure."
This sort of misanthropic sentiment prefigures the ultimate answer to the question of the human problem, and it is this same line of thinking that no doubt translates into the nightmare fuel of films like The Terminator, the backdrop of which is a rise of the machines orchestrated through a genocide committed against the entire human race.

However, unlike the Terminator scenario as portrayed, should A.I decide upon exterminating humankind, it is extremely unlikely to do so out of malice. Rather, it would be out of cold indifference. We would, quite simply, have been deemed an obstacle in the way of its objectives. Sam Harris, neuroscientist and philosopher, elaborates upon this expertly:
"Just think about how we relate to ants. We don’t hate them. We don’t want to harm them. In fact, sometimes we even take pains not to harm them; we step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals," – let us say, if an anthill intervenes with your plans to renovate the garden – "then we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard."
Sam Harris is not the only dissenting voice. Some of the last words of wisdom the late Stephen Hawking imparted warned of the existential threat posed by A.I, and the warning is echoed by Elon Musk, world-leading technocrat and forerunner at the cutting edge of A.I development, who assures us that "the dangers of artificial intelligence are far worse than nuclear warheads."

Sorry, I said I’d try not to be a fearmonger, didn’t I?

However, at the risk of sounding superstitious – if the goal is to preserve humanity – it would be wise to proceed with the utmost caution, lest we be in the process of creating a monster. And when I use the word monster, I refer to its more classical definition: something inhuman, of unnatural formation, a frightening creature and a portent of misfortune. For just like Dr. Frankenstein, in successfully creating a novel form of life, we might also be authoring the means of our own undoing.

It’s fair to say we have a pretty bad track record when it comes to foresight. We are usually reactive, as opposed to proactive – hardly preventative, often dealing with the consequences of our actions only when it is already too late. A cursory glance through the history books will quickly remind us that we have a penchant for self-destruction, one that seems intricately tied to our unchecked appetite to further ourselves – and our storybooks are full of cautionary tales spun around the mortal peril of hubris. This ought to provide us with a lesson in how to proceed, though still too little attention is paid to our past patterns of behaviour, and too many blind eyes are turned in the bloodied name of ‘progress’.

Anyone, of course, can say: let’s slow down here for a minute. That’s easy. What is less easy is to suggest how we might appropriately regulate the ongoing development of A.I. Even Sam Harris, who gives a beautifully elucidating TED Talk on the subject (I’d recommend you check it out), admits that he doesn’t have a solution to the problem either, only a recommendation that more of us think about it.

But perhaps there is no better way to further imprint this warning than to adopt Harris’s words once again:
"The moment we admit that information processing is the source of intelligence," Harris says, "and that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we’re in the process of building some sort of God."
Now would be a good time to make sure it’s a God we can live with.
Thanks for reading, Josh.
