
Book Review: Non-Computable You

by Stuart H. Gray

Robert J. Marks is Distinguished Professor of Electrical and Computer Engineering at Baylor University. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He is also a Senior Fellow and Director of the Bradley Center at the Discovery Institute, and Editor-in-Chief of BIO-Complexity.[1]

Overview

In his book Non-Computable You,[2] Marks seeks to bring perspective to the hype surrounding AI. He brings a career's worth of work in Computer Science to bear and presents a number of helpful insights that are not commonly discussed by those speaking and writing about AI today. He writes in ordinary language, although some parts of his argument become quite technical later on.

Computers are algorithm machines. They follow instructions that can be described as a precise, step-by-step series of operations. Think “recipe” when cooking, and you have a reasonable analogy for what an algorithm is. Humans can follow recipes; computers can run algorithms. But the software running the algorithm shows no understanding of what it is doing. It has no insight into meaning or symbolism. Algorithms cannot detect ambiguity of meaning, and they do not show creativity. What they can do is mimic what conscious humans do. But they cannot think outside the box.
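To picture how mechanical this is, here is a minimal sketch of an algorithm as a “recipe” (the function and its steps are my own illustration, not an example from the book):

```python
# A minimal sketch of an algorithm as a "recipe": a fixed sequence of steps.
# The recipe and its steps are illustrative only, not taken from Marks's book.

def make_coffee():
    """Follow the recipe exactly; the program has no idea what 'coffee' means."""
    steps = [
        "boil water",
        "grind beans",
        "pour water over the grounds",
        "wait four minutes",
        "serve",
    ]
    for step in steps:
        print(f"Executing: {step}")  # mechanical execution, no understanding

make_coffee()
```

The program executes flawlessly, yet nothing in it grasps what coffee, waiting, or serving actually mean.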

Humans are conscious. We know what it is like to feel pain as we stub a toe, take a sip of coffee, or bite into a sharp lemon. We experience things. A computer can mimic the outward effects of such an experience, but it cannot share it. It turns out that qualia (what it is like to experience something) are uniquely human and non-computable. But could more complex computers become conscious in the future and experience qualia? We need to think about what we are suggesting here. The engineers who develop AI systems bring their creativity to the task. What they cannot do is impart creativity to the code they write. Minds are creative; algorithms are not. They do what they are designed to do and nothing more.

Alan Turing was a founding father of Computer Science. Turing’s skill and creativity set this science on a path to develop amazing things. He thought the ultimate test of machine intelligence was a computer that could fool its users into thinking they were talking to a real person. This Turing Test was groundbreaking in the mid-20th century. Yet sophisticated chatbots today regularly fool their users. Marks observes that these software algorithms are coded to mimic particular human qualities, and so are designed to fool their users. There is intelligence exhibited here, but the intelligence lies in the developer of the code, not in the chatbot software application itself.
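To see where that intelligence actually lives, consider a toy, ELIZA-style chatbot sketch (the rules and wording are my own illustration, not code discussed in the book):

```python
# Toy rule-based chatbot: every apparently "clever" reply was written in
# advance by the programmer. The rules below are illustrative only.
import re

RULES = [
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r"\byes\b", "I see. Please go on."),
]

def reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, message.lower())
        if match:
            return template.format(*match.groups())
    return "That is interesting. Say more."  # canned fallback, no understanding

print(reply("I feel anxious about exams"))  # -> Why do you feel anxious about exams?
```

Any appearance of empathy or insight in the replies traces straight back to the developer who wrote the rules.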

A truer test for intelligence would be a computer that passes the Lovelace Test: in other words, one that does something that cannot be explained by its programmer or by an expert examining its code. If it did that, it would exhibit creativity, and therefore intelligence.

AI can be developed as either a traditional algorithm or as a neural network (NN). Marks summarizes the history of NNs, contrasting the benefits and drawbacks of each approach. The NN is more of a statistical estimation system that must work through training data in order to be useful for the task to which it is applied. This is impressive technology, but it has serious limitations. For example, the dimensionality of real-world objects is a problem for deep-learning vision systems: because they have no understanding of what they detect, they are simply reacting to the pixels and regions shown to human users on a screen.

Further, because AI lacks creativity, it cannot be applied to every situation. In a game such as Go or chess, experience from the past can be used to help the algorithm predict what could happen in the future; this property is called ergodicity. But not every area is ergodic. In a theater of war, for example, the creativity of combatants and strategic actors means that a fixed strategy is unlikely to be helpful. The AI cannot creatively think outside the box; it is trapped in the strategies of past conflicts, and so it is easily undermined by human intelligence and the fresh strategies brought to bear in the conflict. AI is trapped in the box it has been placed in. ChatGPT and deepfake technologies are impressive, but they write and produce content in the style of the material used to “train” them.
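The point that a neural network is a statistical estimator, bound to whatever training data it is shown, can be sketched in a few lines (a deliberately tiny, hypothetical example of my own, not taken from the book):

```python
# A single-parameter "neural network" trained by gradient descent.
# It is never told the rule y = 2x; it only estimates a weight from examples.
training_data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, target) pairs

weight = 0.0          # the model's only parameter
learning_rate = 0.01

for epoch in range(200):
    for x, target in training_data:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # nudge the weight to reduce squared error

print(f"Learned weight: {weight:.3f}")  # close to 2.0, fitted statistically, never understood
```

Everything the model “knows” comes from the examples it was trained on; present it with a situation outside that data and it has nothing creative to fall back on.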

Marks has already established that humans can do many things AI cannot: exhibit consciousness, experience qualia, display creativity, and so on. He then surprises us by demonstrating that there are many things in the computer’s own domain that cannot be reduced to algorithmic calculation. There are certain things that computers cannot calculate about themselves, never mind about their human users.

For example, Gödel showed that no algorithm can take an arbitrary mathematical statement and prove it true or false. Turing’s Halting Problem demonstrates that we cannot write an algorithm that predicts whether an arbitrary piece of computer code will eventually halt or run forever. Further, if we define an elegant program as one written with the fewest possible instructions, we cannot reduce the creation of elegant programs to an algorithm. We can never prove that we have found the most elegant version of a program; we can only ever stumble upon it. So we cannot write an algorithm to discover elegance of implementation.
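The reasoning behind the Halting Problem can be sketched as a classic proof by contradiction (a textbook-style illustration in Python, not Marks’s own presentation): suppose a perfect halting-decider existed; then a program built on top of it defeats it.

```python
# Sketch of Turing's diagonal argument. `halts` cannot actually be written;
# it stands in for a hypothetical, always-correct halting-decider.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True if program(argument) eventually halts."""
    raise NotImplementedError("no such algorithm can exist")

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about the program run on itself.
    if halts(program, program):
        while True:          # loop forever if the oracle says it should halt
            pass
    return "halted"          # halt if the oracle says it should loop forever

# Feeding contrarian to itself forces a contradiction either way:
# contrarian(contrarian) halts exactly when halts() says it does not.
```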

Finally, Marks assesses the claims of the AI church, which promotes the concept of a divine AI and holds that godlike AI must be the ultimate goal of the computer scientists working today. He judges this to be a shallow and unachievable pursuit, and his book shows why. If the algorithm cannot hope to cope with the subtleties of its own digital domain, never mind the domain of conscious humans, the notion that AI can achieve God-like status is an empty one.

Positives

Marks brings great insight and experience to his argument. The notion of conscious AI is quickly dealt with, but the issues around Turing’s Halting Problem and the notion of algorithmic elegance are less well understood when it comes to AI. Creative human agents are the ones who code machines and achieve bug-free(ish), elegant code. He shows this to be a uniquely human pursuit, not one accessible to the AI algorithm itself: the computer is mathematically unable to achieve the task, which is open only to the insights of the conscious human mind. This helpfully nuances his argument about the limitations of AI.

Also, his section on ethics is helpful and well worth reflecting on. He asks some very piercing questions. Who is ethically responsible when AI fails? Who is responsible for making sure AI is used in ethical applications? Who decides what is ethical anyway? As AI systems appear in more and more areas of life, from transportation to education to the latest iPhone, the notion of right and wrong is a vital one. Ethics is a human pursuit. If I am late in writing a paper, what are the ethical considerations around using ChatGPT to write it? Is it morally permissible to let ChatGPT do all my research for me, and even craft chunks of text on my behalf? Why do I think this is right or wrong? At the very least, those who do such a thing rob themselves of the opportunity to learn and develop their own potential. Their use of AI tools threatens to diminish their skills and may in fact be fundamentally unethical. It also opens an unfortunate opportunity to engage in digital plagiarism.

Negatives

The AI field is moving so fast that Marks’ book almost seems a step behind in places when it discusses ChatGPT. Yet this is hardly a drawback. Whatever advancements are barrelling towards us, they have a background and a history that we must learn if we are going to understand these new developments. This book is a powerful resource for those seeking an understanding of the past and predictions about the future of AI.

Also, his sections on non-algorithmic computer science problems get quite abstract. I have a degree in Computer Science, and even I found them challenging. It might have been good if Marks had warned his non-technical readers about this and offered an alternative, simpler summary for that audience. In the final analysis, though, you don’t need to grasp all the subtleties of these computer science problems to see how they fit into his argument. In the same way that much of human life is non-algorithmic, the domain of computer science is itself non-algorithmic. This makes the notion of superintelligent, God-like AI more than unlikely; he shows it to be a mathematical and practical impossibility.

Conclusion

AI is useful for many tasks and can act as a supporting tool for human beings; Marks applauds this. After all, he has spent his career working in the field. But the idea that AI could replace and even supersede human beings is shown in this book to be impossible from a mathematical, engineering, and biological perspective. There are many more ways this argument could be developed from philosophy, but Marks does a brilliant job of showing us in practical and mathematical terms what the limitations of computers are.

Enjoy computers and the way they support and enhance our lives. Seek to apply them ethically like any other tool. But don’t be fooled by the hype that warns of the risk of superintelligent AI. 


[1] “Robert J. Marks II,” Discovery Institute, accessed September 22, 2024, https://www.discovery.org/p/marks/.

[2] Robert J. Marks, Non-Computable You: What You Do That Artificial Intelligence Never Will (Seattle: Discovery Institute Press, 2022).

