
OpenAI leaders behind ChatGPT call for regulation of AI and ‘superintelligence’




The leaders of OpenAI, the creator of the viral chatbot ChatGPT, are calling for regulation of “superintelligence” and artificial intelligence systems, suggesting that an equivalent of the world’s nuclear watchdog would help reduce the “existential risk” posed by the technology.

In a statement published on the company’s website this week, co-founders Greg Brockman and Ilya Sutskever, as well as CEO Sam Altman, argued that an international regulator would eventually be needed to “inspect systems, require audits, test for compliance with safety standards, (and) place limitations on deployment rates and security levels.”

They drew a comparison with nuclear power as another example of a technology with “the possibility of existential risk,” which increases the need for an authority similar to the International Atomic Energy Agency (IAEA), the world’s nuclear watchdog.

Over the next decade, “it is conceivable that … AI systems will surpass the level of expertise in most domains, performing as much productive activity as one of today’s largest companies,” the OpenAI team wrote. “In terms of both potential advantages and disadvantages, superintelligence will be more powerful than any other technology humanity has had to contend with in the past. We may have a dramatically more prosperous future; but we will have to deal with risk to get there.”

The statement echoed Altman’s comments to Congress last week, where the US-based company’s CEO also testified about the need for a separate regulatory body.


Critics have warned against relying on calls for regulation from tech industry leaders who stand to profit from continued development without restrictions. Some say OpenAI’s business decisions are at odds with its own safety warnings; its rapid rollout has set off an AI arms race, pressuring companies such as Google’s parent, Alphabet, to release products while policymakers are still grappling with the risks.

Few lawmakers in Washington have a deep understanding of emerging technologies such as AI, and AI companies have engaged in extensive lobbying, The Washington Post previously reported, as both supporters and critics seek to influence technology policy discussions.

Some have also warned of the risk of hampering America’s ability to compete on the technology with rivals — especially China.

The OpenAI leaders warn in their note against pausing development, adding that “it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the benefits are so huge, the cost of building it is decreasing every year, and the number of players building it is increasing rapidly.”


In his first congressional testimony last week, Altman issued warnings about how AI could “cause significant damage to the world,” while asserting that his company would continue to roll out the technology.

Altman’s message of willingness to work with lawmakers received a relatively warm reception in Congress, as countries including the United States recognize that they must balance supporting innovation with managing a technology that raises concerns about privacy, security, job losses and misinformation.

One witness at the hearing, New York University professor emeritus Gary Marcus, highlighted the “staggering” sums of money at stake and described OpenAI as “the preserve” of its investor Microsoft. He criticized the company for what he described as a deviation from its mission of advancing AI to “benefit humanity as a whole,” unconstrained by financial pressures.


The popularization of ChatGPT and other generative AI tools, which produce text, images or sounds, has dazzled users and added urgency to the debate over regulation.

At a G-7 summit on Saturday, leaders of the world’s largest economies made clear that international standards for AI advances are a priority, but they have yet to reach any substantive conclusions about how to manage the risks.

The U.S. has so far moved more slowly than others, especially regulators in Europe, although the Biden administration says it has made AI a key priority. Washington lawmakers have not passed comprehensive technology legislation in years, raising questions about how quickly and effectively they can develop regulations for the AI industry.


The ChatGPT creators urgently called for “some degree of coordination” among companies working on AI research “to ensure that the development of superintelligence” allows for the safe and “smooth integration of these systems with society.” For example, the companies could “collectively agree … that the growth in AI capability at the frontier is limited to a certain rate per year,” they said.

“We believe that people around the world should democratically decide the limits and standards of AI systems,” they added – while admitting that “we do not yet know how to design such a mechanism.”

Cat Zakrzewski, Cristiano Lima and Will Oremus contributed to this report.


