Meta made its AI tech open source. Rivals say it’s a risky decision.
In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its AI crown jewels.
The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an AI technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code into the wild. Academics, government researchers and others who gave Meta their email address could download the code once the company had vetted them.
Essentially, Meta gave away its AI technology as open source software – computer code that can be freely copied, modified and reused – giving outsiders everything they needed to quickly build their own chatbots.
“The platform that will win will be the open one,” Yann LeCun, Meta’s chief AI researcher, said in an interview.
As a race to lead AI heats up across Silicon Valley, Meta is setting itself apart from its rivals by taking a different approach to the technology. Run by its founder and CEO, Mark Zuckerberg, Meta believes the smartest thing to do is to share its underlying AI engines as a way to spread its influence and ultimately move faster toward the future.
The actions contrast with Google and OpenAI, the two companies leading the new AI arms race. Concerned that AI tools like chatbots will be used to spread disinformation, hate speech and other toxic content, these companies are becoming increasingly secretive about the methods and software that underpin their AI products.
Google, OpenAI and others have been critical of Meta, saying an unfettered open source approach is dangerous. AI’s rapid growth in recent months has raised alarm bells about the technology’s risks, including how it could disrupt the job market if not deployed carefully. And within days of LLaMA’s release, the system leaked onto 4chan, the online message board known for spreading false and misleading information.
“We want to think more carefully about giving away details or open source code” for AI technology, said Zoubin Ghahramani, a Google vice president of research who helps oversee AI work. “Where can it lead to abuse?”
Some within Google have also wondered whether open source AI technology could pose a competitive threat. In a memo leaked this month to the online publication Semianalysis.com, a Google engineer warned colleagues that the rise of open source software like LLaMA could cause Google and OpenAI to lose their lead in AI.
But Meta said it saw no reason to keep its code to itself. The increasing secrecy at Google and OpenAI is a “big mistake,” Dr. LeCun said, and a “really bad view of what’s going on.” He argues that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta.
“Do you want every AI system to be under the control of a few powerful American corporations?” he asked.
OpenAI declined to comment.
Meta’s open source approach to AI is not new. The history of technology is full of battles between open source and proprietary, or closed, systems. Some companies have hoarded the most important tools used to build tomorrow’s computing platforms, while others have given those tools away. Most famously, Google open sourced the Android mobile operating system to take on Apple’s dominance in smartphones.
Many companies have openly shared their AI technologies in the past, at the insistence of researchers. But their tactics are changing due to the furor around AI. That shift began last year when OpenAI released ChatGPT. The chatbot’s wild success wowed consumers and kicked up competition in the AI field, with Google quickly moving to incorporate more AI into its products and Microsoft investing $13 billion in OpenAI.
While Google, Microsoft and OpenAI have received most of the attention in AI recently, Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and hardware needed to power chatbots and other “generative AI” products, which produce text, images and other media on their own.
In recent months, Meta has been working furiously behind the scenes to weave its years of AI research and development into new products. Mr. Zuckerberg is focused on making the company an AI leader, holding weekly meetings on the topic with his executive team and product managers.
On Thursday, in a sign of its commitment to AI, Meta said it had designed a new computer chip and improved a new supercomputer specifically for building AI technologies. It is also designing a new data center with an eye toward the creation of AI.
“We’ve been building advanced infrastructure for AI for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do,” Zuckerberg said.
Meta’s biggest AI move in recent months was the release of LLaMA, which is what’s known as a large language model, or LLM. (LLaMA stands for “Large Language Model Meta AI.”) LLMs are systems that learn skills by analyzing large amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google’s Bard chatbot are also built on top of such systems.
LLMs identify patterns in the text they analyze and learn to generate their own text, including term papers, blog posts, poetry and computer code. They can even carry on complex conversations.
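The pattern-learning idea can be illustrated with a vastly simplified, hypothetical sketch: a table that records which word follows which in training text, then generates new text by sampling from those learned patterns. Real LLMs like LLaMA learn billions of neural-network weights rather than a lookup table, but the learn-patterns-then-generate loop is the same in spirit.

```python
# Toy illustration only: a bigram "language model" that counts
# which word follows which, then samples from those counts.
# Real LLMs use neural networks, not lookup tables.
import random
from collections import defaultdict

def train_bigram(text):
    """Learn which words tend to follow each word in the text."""
    words = text.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=5, seed=0):
    """Generate text by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

table = train_bigram("the model reads text and the model writes text")
print(generate(table, "the"))
```

Every word pair the toy model emits was seen in its training text, which is the bigram analogue of an LLM reproducing statistical patterns from the books and articles it analyzed.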
In February, Meta openly released LLaMA, allowing academics, government researchers and others who provided their email addresses to download the code and use it to build their own chatbots.
But the company went further than many other open source AI projects. It allowed people to download a version of LLaMA after it had been trained on huge amounts of digital text pulled from the internet. Researchers call this “freeing the weights,” referring to the special mathematical values that the system learns as it analyzes data.
This was important because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those with the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days someone released the LLaMA weights on 4chan.
At Stanford University, researchers used Meta’s new technology to build their own AI system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one case, the system provided instructions to dispose of a corpse without being caught. It also generated racist material, including comments supporting the views of Adolf Hitler.
In a private chat among the researchers, seen by The Times, Doumbouya said distributing the technology to the public would be like “a grenade available to everyone in a grocery store.” He did not respond to a request for comment.
Stanford immediately removed the AI system from the internet. The project was designed to provide researchers with technology that “captured the behavior of cutting-edge AI models,” said Tatsunori Hashimoto, the Stanford professor who led the project. “We took down the demo as we became increasingly concerned about abuse potential beyond research environments.”
Dr. LeCun argued that this type of technology is not as dangerous as it may seem. He said a small number of individuals could already generate and spread disinformation and hate speech, and added that toxic material could be closely policed by social networks such as Facebook.
“You can’t stop people from making nonsense or dangerous information or whatever,” he said. “But you can stop it from being spread.”
For Meta, more people using open source software can also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world builds applications using Meta’s tools, it can help cement the company’s place in the next wave of innovation and stave off potential irrelevance.
Dr. LeCun also pointed to recent history to explain why Meta was committed to open source AI technology. He said the development of the consumer Internet was the result of open, common standards that helped build the fastest, most widespread knowledge-sharing network the world has ever seen.
“Progress is faster when it’s open,” he said. “You have a more vibrant ecosystem where everyone can contribute.”