
Could Machines Make Discoveries No Human Could Possibly Understand?

Posted on 28 February 2013 by Reasoningpolitics @reasonpolitics

An article at Slate floats a fascinating question: “what happens when machines are so powerful they can make discoveries no human could possibly understand?” What would the implications be if (or when) we reach that point?

But what if it were possible to create discoveries that no human being can ever understand? For example, take a set of differential equations: while we have numerical and computational methods for handling them, not only can they be difficult to solve by hand, there is a decent chance that no analytical solution even exists.
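To make that concrete, here is a minimal sketch (assuming Python with NumPy and SciPy available) that numerically integrates the undamped nonlinear pendulum, an equation whose exact solution cannot be written in elementary functions; the parameter and initial values are arbitrary, chosen only for illustration:

```python
# Minimal sketch: numerically integrating an ODE with no elementary
# closed-form solution (the undamped nonlinear pendulum). We can compute
# the trajectory to any accuracy we like without ever writing a formula for it.
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, state, g=9.81, length=1.0):
    """theta'' = -(g / L) * sin(theta), rewritten as a first-order system."""
    theta, omega = state
    return [omega, -(g / length) * np.sin(theta)]

# Start from a large swing angle, where the small-angle approximation breaks down.
sol = solve_ivp(pendulum, t_span=(0.0, 10.0), y0=[2.5, 0.0],
                t_eval=np.linspace(0.0, 10.0, 200))

print(sol.y[0][:5])  # the computed angles; no analytic expression required
```

The solver hands back numbers we can use and trust, even though no tidy formula for the motion exists.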

So what of this? Does such a hint of non-understandable pieces of reasoning and thought mean that eventually there will be answers to the riddle of the universe that are going to be too complicated for us to understand, answers that machines can spit out but we cannot grasp? Quite possibly. We’ve already come close. A computer program known as Eureqa that was designed to find patterns and meaning in large datasets not only has recapitulated fundamental laws of physics but has also found explanatory equations that no one really understands. And certain mathematical theorems have been proven by computers, and no one person actually understands the complete proofs, though we know that they are correct. As the mathematician Steven Strogatz has argued, these could be harbingers of an “end of insight.” We had a wonderful several-hundred-year run of explanatory insight, beginning with the dawn of the Scientific Revolution, but maybe that period is drawing to a close.
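For a sense of what “finding explanatory equations in data” means, here is a toy sketch in Python. It is not Eureqa’s actual genetic-programming algorithm; it simply fits a few hand-picked candidate formulas to simulated free-fall data and keeps the one with the smallest error, and every name and constant in it is made up for illustration:

```python
# Toy sketch of the idea behind Eureqa-style symbolic regression: given data,
# search a space of candidate formulas and keep the one that fits best.
# Eureqa itself evolves expressions with genetic programming; this brute-force
# search over a tiny hand-picked list is only an illustration of the idea.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 5.0, size=200)
y = 9.81 * x**2 / 2 + rng.normal(0.0, 0.05, size=200)  # "hidden law": d = g*t^2/2

# Candidate explanatory forms: each is a function of x with one fitted constant c.
candidates = {
    "c * x":       lambda x, c: c * x,
    "c * x**2":    lambda x, c: c * x**2,
    "c * sqrt(x)": lambda x, c: c * np.sqrt(x),
    "c * exp(x)":  lambda x, c: c * np.exp(x),
}

def fit_constant(f, x, y):
    """Least-squares fit of the single constant c for one candidate form."""
    basis = f(x, 1.0)
    c = float(np.dot(basis, y) / np.dot(basis, basis))
    err = float(np.mean((f(x, c) - y) ** 2))
    return c, err

best = min(((name, *fit_constant(f, x, y)) for name, f in candidates.items()),
           key=lambda t: t[2])
print(f"best form: {best[0]} with c = {best[1]:.3f} (mse {best[2]:.4f})")
```

Run on this data, the search recovers the quadratic form with a constant near g/2, which is the sense in which such systems “rediscover” physical laws; the unsettling cases are when the best-fitting expression is one nobody can interpret.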

So what does this all mean for the future of truth? Is it possible for something to be true but not understandable? I think so, but I don’t think that that is a bad thing. Just as certain mathematical theorems have been proven by computers, and we can trust them, we can also at the same time endeavor to try to create more elegantly constructed, human-understandable, versions of these proofs. Just because something is true, doesn’t mean that we can’t continue to explore it, even if we don’t understand every aspect.
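As a small-scale illustration of how a machine-checked result can be trusted without anyone tracing every step by hand, here is a toy Lean 4 example; real computer-assisted proofs such as the four-color theorem are enormously larger, but the trust model is the same:

```lean
-- Toy illustration of a machine-checked fact in Lean 4: the proof checker
-- verifies the claim mechanically, which is how much larger computer-assisted
-- proofs are trusted even when no one person reads every step.
theorem sum_of_first_five : 1 + 2 + 3 + 4 + 5 = 15 := by decide

-- The same statement, closed by direct computation rather than proof search;
-- a more "human-readable" route to the identical truth.
example : 1 + 2 + 3 + 4 + 5 = 15 := rfl
```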

But even if we can’t do this, and we have truly bumped up against our constraints, our limits shouldn’t worry us too much. The non-understandability of science is coming, in certain places and in small bits at a time. We’ve grasped the low-hanging fruit of understandability and explanatory elegance, and what’s left might be possible to exploit, but not necessarily to understand completely. That’s going to be tough to stomach, but the sooner we accept it, the better our chance of allowing society to appreciate how far we’ve come and of applying non-understandable truths to our technologies and creations.

What do you think? Could you trust the findings of machines that no human could ever verify?
