
How to Hit Pause on AI Before It’s Too Late

By Elliefrost @adikt_blog

Only 16 months have passed, but ChatGPT's November 2022 release already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.

But the real prize of human-level AI - or artificial general intelligence (AGI) - has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, collaborate with others, do science, build and maintain social networks, engage in politics, and conduct modern warfare. The main constraint on all these tasks today is cognition. Removing this constraint would change the world. Yet many of the world's leading AI labs believe the technology could become a reality before the end of this decade.

That could be a huge boon to humanity. But AI can also be extremely dangerous, especially if we have no control over it. Uncontrolled AI could worm its way into online systems that power much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create customized manipulations for large numbers of people. Worse still, military personnel responsible for nuclear weapons could be manipulated by an AI into sharing their credentials, posing a huge threat to humanity.

It would be a constructive step to make such a takeover as difficult as possible by strengthening the world's defenses against hostile online actors. But if AI can persuade humans, something it is already better at than we are, there is no known defense.

For these reasons, many AI safety researchers at labs such as OpenAI, Google DeepMind, and Anthropic, as well as at safety-focused nonprofits, have given up on trying to limit what future AI can do. Instead, they focus on creating "aligned" or inherently safe AI. Aligned AI could become powerful enough to wipe out humanity, but it should not want to do so.

There are big questions about aligned AI. First, the technical side of alignment is an unsolved scientific problem. Recently, some of the top researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it were an academic value system, such as utilitarianism, we might quickly find out that most people's values don't actually match these abstract ideas, after which the unstoppable superintelligence could act against most people's will forever. If the alignment were instead to people's actual intentions, we would need a way to aggregate those very different intentions. While idealistic solutions such as a UN council or AI-powered decision-aggregation algorithms are possibilities, the concern is that the superintelligence's absolute power would be concentrated in the hands of very few politicians or CEOs. This would obviously be unacceptable to - and pose an immediate danger to - all other people.


Dismantling the time bomb

If we can't find a way to at least protect humanity from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable should not be created in the first place. This solution, which postpones human-level or superintelligent AI until the safety problems are solved, has the downside that AI's great promises - ranging from curing diseases to creating massive economic growth - will have to wait.

Pausing AI may seem like a radical idea to some, but it will be necessary if AI continues to improve without our arriving at a satisfactory alignment plan. When AI's capabilities reach near-takeover levels, the only realistic option is for governments to firmly require that development be paused. To do otherwise would be suicidal.

And pausing AI may not be as difficult as some say. Currently, only a relatively small number of large companies have the resources to carry out industry-leading training runs, meaning that enforcing a pause is mostly a matter of political will, at least in the short term. In the longer term, however, hardware and algorithmic improvements would make a pause harder to enforce. Enforcement between countries would be necessary, for example through a treaty, as would enforcement within countries, with steps such as strict hardware controls.

In the meantime, scientists need to better understand the risks. Although there is widely shared academic concern, there is no consensus yet. Scientists should formalize their points of agreement, and show where and why their views diverge, in the new International Scientific Report on the Safety of Advanced AI, which should evolve into an "Intergovernmental Panel on Climate Change" for AI risks. Leading scientific journals should open themselves further to research on existential risks, even if it seems speculative. The future doesn't provide data points, but looking ahead is as important for AI as it is for climate change.

In turn, governments have a huge role to play in how AI unfolds. This starts with officially recognizing the existential risk of AI, as the US, the UK, the EU, and other countries have already done, and with setting up AI safety institutes. Governments must also draw up plans for what to do in key conceivable scenarios, as well as for how to deal with AGI's many non-existential problems, such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies public to allow for scientific, industry, and public evaluation.

It is a major step forward that the major AI countries are constructively discussing common policies at twice-yearly AI safety summits, including the one in Seoul on May 21 and 22. This process, however, needs to be monitored and expanded. Working toward a shared truth about the existential risks of AI and voicing shared concerns with all 28 invited countries would already be major progress in that direction. In addition, relatively simple measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking of AI hardware, expanded liability for AI labs, and the exclusion of copyrighted content from training. An international AI agency should be established to monitor implementation.

It is fundamentally difficult to predict scientific progress. Yet superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a viable strategy. Let's use the time we have as wisely as possible.

Contact us at [email protected].
