Over the past few weeks I have been holding a conversation with Luke Radford, Tech Consultant & Futurist, on developments in future technology. You can read the post on LinkedIn here too.
_____
As a technologist involved with innovation and the impact of new capabilities on existing business models, it isn't unusual to be asked to make predictions about the future. When I do so I quote Chris Yapp, who said:
"the more certain anyone is about a particular future the more likely they are to be wrong".
Many consider technological development to be happening at an increasingly alarming pace, without the consequences for humankind being fully considered. As these developments shift from early adopters to the mainstream marketplace, it often becomes apparent that the existing legal and ethical frameworks within which society operates are no longer fit for purpose. The development of new frameworks isn't keeping pace with technological innovation, and a decision vacuum is created.
The development of technology should not, and perhaps cannot, take place independently of society, and so in this discussion the future state is responded to by Richard Littledale, a Christian theologian.
Development of "output" capability:
Even those who hold the view that humans developed through evolution from the animal kingdom would usually accept that there is a distinction to be made between the output from a human and the output from an animal. This is identified as Stage 1.
During the first industrial revolution a third type of output came into existence: the output of the created machine, which is Stage 2.
In many situations we are on the verge of Stage 3, the augmentation of human ability with machine capability to create the "humanchine". The prediction is that Stage 4 builds on this further, to the point where aspects of human capability are absorbed into the machine domain and the intelligent, autonomous machine is created. This will be linked to what we are observing as the fourth industrial revolution.
The distinction between Stages 3 and 4 will be clearer than those between the stages that have come before, and will be evident in some industries and aspects of life more quickly than in others. It may only be at a future point that we will be able to look back and see where the shift happened: from a world in which human thinking was augmented with machine capability to one in which machine capability began to encroach on original and creative thought, occurring independently of human activity.
One possible scenario for the future is that there will come a time when a machine can be created that exhibits the same qualities and capabilities as a human does today. It is normally the case that the created is less than the creator, but a potential tipping point comes when a second- or perhaps third-generation machine is able to create something greater than what came before.
These four stages of output capability then feed into the development of robot capability described in the next section.
Stage 1 sees machines that are programmed to perform a specific function; they are created for a role and set of tasks which they are capable of doing. These tasks may be complex (for example picking random objects) or simple (cleaning a given space). Their primary characteristic is that they are designed to do a job and they do not learn, change or develop whilst performing it.
Stage 2 machines are created with a defined and known set of input criteria and an anticipated application. The machine is capable of learning and evolving, but only within given and known parameters. Just as an apple tree will always be an apple tree, so this machine will always perform in a known and understood way.
The most advanced machines are seen at Stage 3; they are created, or create themselves, in an open dimension without constraint. They will learn to perform new functions based on what they identify as needing to be done, rather than within the confines of what a human maker considered their scope. These machines will be capable of evolution and self-regeneration.
In the next section there are four scenarios of development that build on what has been observed so far and become the foundation of the predictions for the future.
Mind Hacking - Experience of the World:
Access to data and information will be restricted by those who hold it. The search results you see will be different from the ones someone else sees, based on what you and they have done previously. The conscious and unconscious bias you show in your choice of news sources will shape which products you see in an online shop, and the price you pay will fluctuate based on the predictions that algorithms are making. The machines will learn, and with each iteration the ability to influence or manipulate will increase.
It will be almost impossible to tell whether your view of the world is the whole picture or a version of the truth that marketers, governments and others have decided for you. The recent emergence of post-truth perspectives will become so specific to each individual that truth will become entirely subjective.
Opinion Convergence - Decision Making:
Robotic and machine capability will advance to the point where machines are considered capable of making decisions based on the data sets available, without irrational emotion or personal bias. The decision to fund certain medical treatments will be determined by algorithms and computational models. The human element will be discarded, and humans will not have the skill or capability to challenge the answer. The value of diversity, of an aging population and of different perspectives will be squeezed out as humans find themselves subject to decisions that are factually, if not morally, correct.
The frustration experienced today when the "computer says no" will seem minor compared to what this possible future capability will look like. Unless it is baked into the design, there will be no option for a human to overrule the machine, and even where there is, the unintended consequences for future events could be so significant and unpredictable that no one is prepared to allow it.
Digital Me (DigiMoi) - Missed Opportunity:
In time there will be a "DigiMoi" that hears and sees everything that I do and acts on it. It will overhear a conversation with a family member about travel plans for the weekend and, without involving either human, the arrangements will be made. The alarm clock will be set to allow time to get up and into a driverless autonomous car at the right time to arrive at the destination (which could have been visited in virtual reality anyway) at exactly the right point. Others who "DigiMoi" thought might be interested in coming along will make their own plans, dealing with conflicts in schedules as each "DigiMoi" deems appropriate.
What none of the humans involved will realise is that there was an opportunity to do something they had never done before, something the data didn't predict would be of interest to them. My digital me places value on keeping me happy, but the problem is that unless I experience the lows of disappointment I'm unlikely to experience the highs of euphoria. I do what I've always done, and variations of it, because "DigiMoi" calculates happiness as the absence of disappointment. Many will accept this stage because it takes away the frustrations, but it is these frustrations, combined with the disappointment of failure, that give birth to innovation.
There is recognition that the ability to communicate at a human level is different and specific. The Turing Test has been passed: a robot can identify both robot and human, while a human can no longer tell the difference. Second- and third-generation robots are able to mimic what are considered the flawed traits of humans in order to exist more easily in society. Initially robots are accepted alongside humans, as long as they remain in their place and under their creator (the human), but as the robots advance they go further and, in order to gain acceptance, begin to take on human form.
At this point I bring in the reflections and responses of Richard, who approaches the future from the position of a Christian theologian:
Babel revisited:
When Douglas Adams created his "Hitchhiker's Guide to the Galaxy" trilogy, peopled by just the kind of intelligent robots described above, one of his more bizarre inventions was the 'babel fish'. This fish, inserted into a human ear, could consume any language it heard and 'excrete' a translation into the brain of the hearer. Quite apart from issues of hygiene and animal cruelty, this was a misunderstanding of the Babel term. The account of the Tower of Babel (Genesis 11:1-9) is one of arrogance rather than language. It depicts a humankind so keen to better itself that it overreaches into the heavens where it does not belong, and comes crashing back to earth as a result. The view of the Creator was that the move by the created to usurp his role would benefit no one. In an act of second-generation creation, where people fashioned in the imago dei fashion robots in an imago homo, will they feel the same way?
The steps of technological evolution described by Luke Radford above are eminently possible. Taken to their logical conclusion they could eradicate poor decisions on a global and local level within a generation. Political decisions could be made according to an agreed set of shared outcomes, and medical ethics would be freed from the shackles of unhelpful emotion. The contentment of the greatest number would be guaranteed.
That all depends on the definition of contentment agreed at the outset, though. Once the robots start making robots who design other robots to serve the needs of their human progenitors, who is to say what parameters will guide them? When Pol Pot reset the clock to Year Zero in Cambodia, it was supposedly to further the contentment and well-being of his people. The rows of skulls in the killing fields tell another story. The Judeo-Christian ethical heritage has always assumed the benefit to humankind of a set of ethics drawn from beyond personal preference. If a robot owes its allegiance only to its human creator, then the chain of command does not go high enough. It would not take much imagination to conceive of a situation where the 'ethics' of one robot, inherited from its designer, clash with the ethics of another, similarly inherited from a different designer. This would be the software wars of this century writ large on the landscape of the next.
Not only that, but there is a sense in which humanity's highest intelligence is the intelligence of the 'hive', where the total of acquired wisdom and intelligence is greater than the sum of all its parts. Generational wisdom and inherited instinct allow us to assess human potential and to spot the flash of genius in a way which artificial intelligence could not.
Dangerous freedom and the threat of Utopia:
So far as we know, there is nothing to stop anybody inventing any of the things described above, save the limitations of technology and engineering. However, as the Apostle Paul pointed out to his friends in Corinth, the fact that a thing can be done does not mean that it should be done (1 Corinthians 10:23). Our expanding technological capability makes us more, rather than less, morally responsible. If robots liberate us from the need to make complex decisions, why should we assume that is a good thing? If robots reduce our need for work to a negligible level, why should we assume that will make us happy? If my 'digital me' suggests trips, plans and projects based only upon my experiences to date, how am I ever to experience anything truly new? The story of salvation would look very different if Moses had never left the palace, Jonah had never crossed the sea, and Paul had never left for Europe. A freedom from experiences which challenge us, and for a robotically analysed state of anodyne Utopia, may not be such a good thing. Divine foreknowledge leads us to believe that the human race was designed to thrive in the adversity of life after the Fall. Take that adversity away and we may find ourselves like an astronaut whose muscles begin to fail in zero gravity because there is nothing against which to push.
A right time for the right questions:
Sometimes technology has been to the Christian church like a juggernaut. As it rolls inexorably towards us we tend to react in one of two ways. Either we turn away and admire the view in the opposite direction as if nothing were happening, or we find ourselves frozen in a kind of shock like the rabbit in the proverbial headlights. Either response may be a shirking of our responsibility to live as ethically conscious beings in an evolving world.
By the time the juggernaut has rolled over us and we are extricating ourselves from the tarmac into which we have been crushed, our pleas that it might be dangerous fall on deaf ears. To point out that things are changing is as redundant as pointing out the advancing juggernaut. Better, perhaps, to step to the side of the road whilst it is still behind the brow of the hill and ask questions about where it is going and who is driving it. For example:
- Who is contributing towards the development of artificial and augmented intelligence, and why?
- What do we want to ask about the impact of increasing automation on human well-being?
- At what point does the yielding of our trust to automated (or augmented) systems undermine our trust in God?
- Instead of talking about Asimov's three laws of robotics, should we perhaps be designing a set of 'robeatitudes' for the machines and their makers?
Predictions about the future may turn out to be wrong, but they could just as easily be right. One of the greatest dangers we face is not that these predictions come true, but that they do so without the social debate and engagement taking place. When we fail to imagine what the future state could be, we not only miss the opportunity to benefit from it but also fail to be in a position to address the unintended consequences.
In this discussion Richard provides an initial response from a Christian perspective, but it may not be the most significant one. What it does, however, is bring a theological perspective into a debate on technological change. It should cause us to question our reason for existence and how that links to the pursuit of new capabilities.
If life itself has no meaning or purpose then none of these points matters. However, if there is a purpose to our being here, then we each need to be aware of, and engage with, our future world and the changes that take place each day.
We'd love your input on this debate in the comments section, as well as your responses to the questions posed by Richard at the end of his section.
Please use the comments box below, or reply to @radfordln or @richardlittleda on Twitter using #TechTheo