Warren Buffett and Charlie Munger doubt AI and ChatGPT
Billionaire investors Warren Buffett, 92, and Charlie Munger, 99, are not jumping on the artificial intelligence (AI) hype train.
During this year’s annual Berkshire Hathaway shareholder meeting, the two top executives expressed doubts when asked how robotics and AI developments will affect the stock market and society as a whole.
“I’m personally skeptical of some of the hype that has gone into artificial intelligence,” Munger said. “I think old-fashioned intelligence works pretty well.”
Buffett said that Bill Gates, Microsoft’s co-founder and his close friend, helped him try out AI chatbot ChatGPT. Although the technology is doing “remarkable things,” he still has concerns.
“When something can do all kinds of things, I get a little worried because I know we won’t be able to un-invent it,” Buffett said.
One of Buffett’s concerns: We may not yet be aware of the unforeseen consequences of unleashing this new technology on society. He pointed to the atomic bomb as an example: the weapon was invented for a specific purpose during World War II, but he questioned whether it was ultimately “good for the next 200 years of the world.”
Buffett has previously questioned whether AI technology like ChatGPT is beneficial to society, but has also said the technology is outside his area of expertise.
Munger has previously described artificial intelligence as a “mixed blessing.” While AI is important, there’s also “a lot of crazy hype” around it, and the technology won’t be able to “do everything we want,” he told CNBC in February.
“Artificial intelligence is not going to cure cancer,” he added.
Buffett and Munger are not the only ones concerned about AI’s rapid development.
In March, Apple co-founder Steve Wozniak, Tesla CEO Elon Musk and thousands of others signed an open letter from the Future of Life Institute urging AI labs to immediately pause training of AI systems more powerful than GPT-4, OpenAI’s latest model, for at least six months.
“Contemporary AI systems are now becoming human-competitive on general tasks, and we must ask ourselves: Should we allow machines to flood our information channels with propaganda and falsehood?” the letter read.
In addition, the letter said AI labs and independent experts should use the pause to develop and implement safety protocols for advanced AI design.
Sam Altman, CEO of OpenAI, said during an event at the Massachusetts Institute of Technology that he did not think the letter was “the optimal way” to address safety concerns around AI.
However, Altman agreed that “moving with caution and increasing rigor on safety issues is very important.”