
How could AI destroy humanity?




Last month, hundreds of well-known figures in the world of artificial intelligence signed an open letter warning that AI could one day destroy humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-sentence letter read.

The letter was the latest in a series of ominous warnings about AI that have been particularly light on detail. Today’s AI systems cannot destroy humanity. Some of them can barely add and subtract. So why are those who know the most about AI so worried?

One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful AI systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.

“Today’s systems don’t come close to posing an existential risk,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “But in one, two, five years? There is too much uncertainty. That’s the problem. We are not sure that this will not pass a point where things become catastrophic.”

The concern is often illustrated with a simple metaphor: if you ask a machine to make as many paperclips as possible, the worriers say, it could get carried away and transform everything, including humanity, into paperclip factories.

How does that relate to the real world, or to an imagined world not too many years in the future? Companies could give AI systems more and more autonomy and connect them to vital infrastructure, including power grids, stock markets and military weapons. From there, they could cause problems.

To many experts, this did not seem all that plausible until the last year or so, when companies like OpenAI demonstrated significant improvements in their technology. Those improvements showed what could be possible if AI continues to advance at such a rapid pace.

“AI will steadily be delegated, and could, as it becomes more autonomous, usurp decision-making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind one of the two open letters.

“At some point it will become clear that the great machine that runs society and the economy is not really under human control, nor can it be shut down any more than the S&P 500 can be shut down,” he said.

Or so the theory goes. Other AI experts think it’s a ridiculous premise.

“Hypothetical is such a polite way of phrasing what I think of the existential risk talk,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle.

Is there any sign AI could actually do this? Not quite. But researchers are already transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.

The idea is to give the system goals like “create a company” or “make some money.” It will then keep looking for ways of reaching that goal, particularly if it is connected to other internet services.

A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it can actually run those programs. In theory, this is a way for AutoGPT to do almost anything online: retrieve information, use applications, create new applications, even improve itself.
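To make that loop concrete, here is a minimal sketch of how an AutoGPT-style agent can turn a model’s text output into real actions. It is illustrative only, not AutoGPT’s actual code, and the `query_llm` function is a hypothetical stand-in for a call to any language-model API:

```python
# A minimal sketch of an AutoGPT-style agent loop (illustrative only,
# not the actual AutoGPT code). `query_llm` is a hypothetical stand-in
# for a call to any large language model API.
import subprocess


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: send the prompt to a language model."""
    raise NotImplementedError("connect this to a real model API")


def run_agent(goal: str, max_steps: int = 10) -> None:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model to propose the next shell command toward the goal.
        command = query_llm(history + "Next shell command:")
        # Execute whatever text the model generated and capture the output.
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=60)
        # Feed the result back so the model can plan its next step.
        history += f"Ran: {command}\nOutput: {result.stdout or result.stderr}\n"
```

The risk researchers describe lives in that loop: whatever text the model produces is executed with real effects, and nothing in the loop itself distinguishes helpful commands from harmful ones.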

Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself, and it could not do so.

In time, those limitations could be fixed.

“People are actively trying to build systems that improve themselves,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align AI technologies with human values. “For now, this is not working. But one day it will. And we don’t know when that day is.”

Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures, or replicating themselves when someone tries to turn them off.

AI systems like ChatGPT are built on neural networks, mathematical systems that can learn skills by analyzing data.

Around 2018, companies like Google and OpenAI started building neural networks that learned from huge amounts of digital text taken from the internet. By finding patterns in all this data, these systems learn to generate writing on their own, including news articles, poems, computer programs, even human-like conversations. The result: chatbots like ChatGPT.
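For a sense of what “finding patterns” means in practice, here is a toy sketch of the core training step, next-token prediction, written with PyTorch. Everything here is vastly simplified; real systems like ChatGPT use transformer networks with billions of parameters trained on far more text than this one-line stand-in corpus:

```python
# Toy next-token language model (vastly simplified; real chatbots use
# enormous transformer networks, not a tiny recurrent net like this).
import torch
import torch.nn as nn

text = "the cat sat on the mat. the dog sat on the rug. "  # stand-in corpus
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])


class TinyLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # characters -> vectors
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)       # vectors -> next-char scores

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)


model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Training: for every character in the corpus, predict the one after it.
inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for _ in range(300):
    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)),
                                       targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scale that idea up, from single characters to word fragments, from one sentence of training data to much of the public internet, and from thousands of parameters to hundreds of billions, and you get the pattern-finding described above.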

Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot”, the system lied and said it was a visually impaired person.

Some experts worry that as researchers make these systems more powerful, training them on increasingly large amounts of data, they could learn more bad habits.

In the early 2000s, a young writer named Eliezer Yudkowsky began warning that AI could destroy humanity. His online posts built a community of believers. Called rationalists or effective altruists, this community became enormously influential in academia, government think tanks and the technology industry.

Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an AI lab that Google bought in 2014. And many from the community of “EAs” worked inside those labs. They believed that because they understood the dangers of AI, they were in the best position to build it.

The two organizations that recently issued open letters warning about the risks of AI – the Center for AI Safety and the Future of Life Institute – are closely associated with this movement.

The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI, and Demis Hassabis, who co-founded DeepMind and now oversees a new AI lab that combines the top researchers from DeepMind and Google.

Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.


