Some of the world's biggest companies and richest people are arguing over a question that will help shape the future of AI: Should companies reveal exactly how their products work?
Elon Musk, the CEO of Tesla and SpaceX, turned the debate on its head in recent days by choosing to release the computer code behind his AI chatbot Grok.
This move contrasts with the approach of OpenAI, the company behind the popular AI text bot ChatGPT. OpenAI, partly owned by technology giant Microsoft, has chosen to release relatively few details about the latest algorithm behind its products.
Elon Musk did not respond to ABC News' request for comment. Neither did OpenAI.
In a statement earlier this month, OpenAI rebutted claims that the company has kept its AI models secret.
"We advance our mission by building widely available, useful tools. We make our technology broadly useful in ways that empower people and improve their daily lives, including through open source contributions," the company said. "We provide broad access to today's most powerful AI, including a free version that hundreds of millions of people use every day."
Here's what you need to know about Grok, why Elon Musk revealed the computer code, and what it means for the future of AI:
What is Musk's AI chatbot, Grok?
Last year, Musk launched an artificial intelligence company called xAI, promising to develop a generative AI program that competes with established offerings like ChatGPT.
MORE: Disputes over the threat of extinction due to artificial intelligence loom over the growing industry
On several occasions, Musk has warned about the risk of political bias in AI chatbots, which help shape public opinion and risk the spread of misinformation.
However, content moderation itself has become a polarizing topic, and Musk has expressed opinions that place his approach within that hot political context, some experts previously told ABC News.
In November, xAI debuted an early version of its first product, Grok, which responds to user questions with humorous comments modeled after the classic science fiction novel "The Hitchhiker's Guide to the Galaxy."
Grok is powered by Grok-1, a large language model that generates content based on statistical probabilities learned by scanning large amounts of text.
To access Grok, users must first purchase a premium subscription to X, Musk's social media platform.
"We believe it is important to design AI tools that are useful to people of all backgrounds and political views. We also want to empower our users with our AI tools, while adhering to the law," xAI said in a blog post in November. "Our goal with Grok is to explore and demonstrate this approach publicly."
Why did Musk make the code openly available?
The decision to release the code behind Grok touches on two issues important to Musk: the threat posed by AI and an ongoing battle with rival company OpenAI.
Musk has been warning for years that AI risks significant social damage. In 2017, he tweeted: "If you're not concerned about the safety of AI, you should be." More recently, in March 2023, he signed an open letter warning of the "profound risks to society and humanity" that AI poses.
In his comments on Sunday, Musk appeared to frame the open-source decision as a means to ensure transparency, protect against bias and minimize the danger posed by Grok.
"There is still work to be done, but this platform is already by far the most transparent and truth-seeking," Musk said in a post on X.
The move is also directly related to a public feud between Musk and OpenAI.
Musk, who co-founded OpenAI but left the organization in 2018, sued OpenAI and its CEO Sam Altman earlier this month, saying the company abandoned its mission to benefit humanity in a sprint for profits.
Days after filing the lawsuit, Musk said on X that he would drop the case if OpenAI changed its name to "ClosedAI."
In a statement earlier this month, OpenAI said it plans to move to dismiss all of Musk's legal claims.
"While we were discussing a for-profit structure to further the mission, Elon wanted us to merge with Tesla, or he wanted complete control. Elon left OpenAI saying there needed to be a relevant competitor to Google/DeepMind and that he would do it himself. He said he would support us in finding our own way," OpenAI said.
What are the stakes of the battle over open versus closed source AI?
The debate over whether to release the computer code behind AI products revolves around two competing views on how to limit harm, eliminate bias, and optimize performance.
On the one hand, open source advocates say that publicly available code allows a broad community of AI engineers to identify and fix bugs in a system, or adapt it for a purpose separate from its original intended function.
In theory, open source code gives programmers the ability to improve the security of a given product while ensuring accountability by making everything visible to the public.
MORE: Is TikTok Different in China? Here's what you need to know
"Anytime someone creates a piece of software, there may be bugs that can be exploited in ways that could cause security issues," Sauvik Das, a professor at Carnegie Mellon University who focuses on AI and cybersecurity, told ABC News. "It doesn't matter if you're the most brilliant programmer in the world."
"When you open source, you have a whole community of practitioners drilling holes and gradually building up patches and defenses," Das added.
Closed source advocates, on the other hand, argue that the best way to protect AI is to keep the computer code private, keeping it out of the hands of bad actors who could reuse it for malicious purposes.
Closed-source AI also offers an advantage to companies that may want to take advantage of cutting-edge products that are not available to the general public.
"The closed-source systems are harder to redeploy for nefarious reasons simply because they already exist and you can only do certain things with them," Kristian Hammond, a professor of computer science at Northwestern University who studies AI, told ABC News.
Last month, the White House announced it was seeking public comment on the benefits and dangers of open-source AI systems. The move was part of a sweeping set of AI rules issued by the Biden administration via executive order in October.
Carnegie Mellon's Das said Musk's open source release may be motivated by both public and personal interests, but the move has sparked a much-needed discussion about this facet of AI safety.
"Even if the motives aren't necessarily completely pure, the fact that this raises public awareness around this idea of open versus closed, and the benefits versus risks of both, is exactly what we need in society right now to get our act together," Das said.
"Elon Musk releases code for his AI chatbot Grok. Here's why it matters" originally appeared on abcnews.go.com.