There is much chatter about corporate social responsibility but little deep thinking about more complex moral concepts. This is what struck me as I read a polemical book about the troubling implications of living in a world ‘controlled’ by algorithms – Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil.
The author’s Big Message is that the clever models behind decisions to evaluate teachers, job candidates, prospective insurance customers, consumers and so on are not as objectively fair as we might think: they often capture the biases of their creators and, more importantly, create negative feedback loops that reinforce social divides. Poor people living in bad neighbourhoods pay more for insurance because they are deemed higher risk; thanks to accurate targeting, they can also be more easily identified and sold payday (or equivalent high-cost/poor-value) loans.
Whilst this is indeed troubling, my overall response to the book was to feel glad that I don’t live in the US and that, in the UK (I think!), there are more checks and balances in place to prevent the level of exploitation seen across the Atlantic.
However, after reading the book, I did start to notice other examples of concerns being raised about the moral implications of business approaches.
First example: an article widely circulated among the senior management at a major international marketing powerhouse. This article raises far more worrying concerns – how search engines are effectively being ‘gamed’ by organisations that wish to propagate ideas that would normally be dismissed out of hand in a liberal democracy. The journalist tried typing “are muslims…” into Google and watching what Google Instant suggested (though I must confess, I didn’t get anything as bad). She observes, “I feel like I’ve fallen down a wormhole, entered some parallel universe where black is white, and good is bad.”
Second example: an interesting piece in a recent edition of 1843. A writer for the magazine went to California to ‘meet the scientists who make apps addictive’. In a way, this article provides a much-needed human face to the O’Neil book. It seems that the clever people behind all the clever new apps and algorithms are not actually evil. They are described as ‘hipsters from San Francisco – all nice people’.
However, some of them have realised that what they are unleashing on the world may not be so straightforwardly ‘good’ after all. The founding father of ‘behaviour design’, B.J. Fogg, is quoted as saying, “I look at some of my former students and I wonder if they’re really trying to make the world better, or just make money. What I always wanted to do was un-enslave people from technology.” Let’s see what some of these students have been up to:
- One of Fogg’s alumni, Nir Eyal, went on to write a successful book, aimed at tech entrepreneurs, called “Hooked: How to Build Habit-Forming Products”.
- Another, Tristan Harris, resigned after working for Google for a year in order to pursue research into the ethics of the digital economy. “I wanted to know what responsibility comes with the ability to influence the psychology of a billion people? What’s the Hippocratic oath?” Whilst Harris was convinced to stay on temporarily as design ethicist and product philosopher, he soon realised that, although his colleagues were listening politely, they would never take his message seriously without pressure from the outside. He left Google for good to become a writer and advocate, on a mission to wake the world up to how digital technology is diminishing the human capacity for making free choices.
My final example is a film, but it succeeded in making me think the most, as it captured my imagination and brought the moral dilemmas at play to life most powerfully. Eye In The Sky explores what happens when a drone is to be used to launch a bomb into a crowded street in Kenya in order to kill a wanted terrorist. Clever algorithms make use of Big Data to calculate the likelihood that a small girl selling bread on this street might also be killed by the bomb. For the ministers approving the mission, it is only acceptable for the bomb to be launched if the likelihood is below 50%. Initial calculations suggest the risk is over 50% (that’s what the model says), but in the film we can see how human actors can override and manipulate models. It is clear that ultimately humans need to be ready to make difficult decisions – and live with the consequences.