
Google CEO Sundar Pichai warns society to prepare for the impact of AI acceleration




  • In an interview with CBS’ “60 Minutes” that aired Sunday, Google CEO Sundar Pichai suggested that society is not prepared for the rapid development of AI.
  • Pichai said how society regulates advancing AI is “not up to a company to decide” on its own.
  • Warning of the consequences, he said AI will affect “every product in every company.”

Google CEO Sundar Pichai speaks on a panel at the CEO Summit of the Americas hosted by the US Chamber of Commerce on June 9, 2022 in Los Angeles, California.

Anna Moneymaker | Getty Images

Google and Alphabet CEO Sundar Pichai said that “every product in every company” will be affected by the rapid development of AI, warning that society needs to prepare for technologies like those already launched.

In an interview with CBS’ “60 Minutes” that aired Sunday and struck a concerned tone, interviewer Scott Pelley tried several of Google’s AI projects and said he was “speechless” and felt it was “unsettling,” referring to the human-like qualities of products such as Google’s chatbot Bard.

“We have to adapt as a society to that,” Pichai told Pelley, adding that jobs that would be disrupted by AI would include “knowledge workers,” including writers, accountants, architects and, ironically, even software engineers.

“This is going to affect every product across every company,” Pichai said. “For example, you could be a radiologist, if you think five to ten years from now, you’re going to have an AI collaborator with you. You come in the morning, let’s say you have a hundred things to go through, it can say, ‘these are the most serious matters you must look at first.’”

Pelley toured other areas of advanced AI work within Google, including DeepMind, where robots played soccer they had taught themselves rather than learning from humans. Another demonstration showed robots recognizing objects on a countertop and fetching Pelley an apple he asked for.

Warning about the consequences of AI, Pichai said the scale of the problem of disinformation and fake news and images will be “much bigger,” adding that “it can cause harm.”

Last month, CNBC reported that internally, Pichai told staff that the success of the newly launched Bard program now depends on public testing, adding that “things will go wrong.”

Google launched its AI chatbot Bard as an experimental product to the public last month. It followed Microsoft’s announcement in January that its Bing search engine would include OpenAI’s GPT technology, which gained international attention after ChatGPT launched in 2022.

However, fear of the consequences of this rapid progress has also spread among the public and critics in recent weeks. In March, Elon Musk, Steve Wozniak and dozens of academics called for an immediate pause in training “experiments” related to large language models that were “more powerful than GPT-4,” OpenAI’s flagship LLM. Over 25,000 people have signed the letter since then.

“Competitive pressures among giants like Google and startups you’ve never heard of are driving humanity into the future, ready or not,” Pelley commented in the segment.

Google has released a document outlining “recommendations for regulating AI,” but Pichai said society must quickly adapt with regulation, laws to punish abuse and treaties among nations to make AI safe for the world, as well as rules that “align with human values, including morality.”

“It’s not up to a company to decide,” Pichai said. “This is why I think the development of this must include not only engineers, but social scientists, ethicists, philosophers and so on.”

When asked if society is prepared for AI technology like Bard, Pichai replied: “On the one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which technology develops, seems to be a mismatch.”

However, he added that he is optimistic because compared to other technologies in the past, “the number of people who have started to worry about the implications” did so early.

From a six-word prompt from Pelley, Bard created a tale with characters and a plot it invented, including a man whose wife could not conceive and a stranger grieving a miscarriage and longing for closure. “I’m rarely speechless,” Pelley said. “The humanity at superhuman speed was a shock.”

Pelley said he asked Bard why it helps people, and it replied “because it makes me happy,” which Pelley said shocked him. “Bard seems to think,” he told James Manyika, an SVP Google hired last year to head “technology and society.” Manyika replied that Bard is not sentient and not aware of itself, but that it can “act like” it.

Pichai also acknowledged that Bard has a lot of hallucinations, after Pelley explained that he had asked Bard about inflation and received an instant response suggesting five books that, when he checked later, did not actually exist.

Pelley also seemed concerned when Pichai said it’s “a black box” with chatbots, where “you don’t quite understand” why or how it comes up with certain responses.

“You don’t fully understand how it works, and yet you’ve unleashed it on society?” Pelley asked.

“Let me put it this way, I don’t think we fully understand how a human mind works either,” Pichai replied.


