OpenAI, Google, others pledge to watermark AI content for security – White House

WASHINGTON/NEW YORK, July 21 (Reuters) – Top AI companies including OpenAI, Alphabet (GOOGL.O) and Meta Platforms (META.O) have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the Biden administration said on Friday.

The companies — which also include Anthropic, Inflection, Amazon.com (AMZN.O) and OpenAI partner Microsoft (MSFT.O) — pledged to thoroughly test systems before releasing them, to share information on how to reduce risk and to invest in cybersecurity.

The move is seen as a victory for the Biden administration’s efforts to regulate the technology, which has seen a boom in investment and consumer popularity.

Since generative artificial intelligence, which uses data to create new content such as ChatGPT's human-like prose, surged in popularity this year, lawmakers around the world have been considering how to mitigate the technology's dangers to national security and the economy.

US Senate Majority Leader Chuck Schumer, who has called for "comprehensive legislation" to promote AI and ensure safeguards against its risks, praised the commitments on Friday and said he would continue working to build on and expand them.

The Biden administration said it would work to establish an international framework to govern the development and use of AI.

AI (Artificial Intelligence) letters and a robot hand are seen in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration

Congress is considering a bill that would require political ads to disclose whether AI was used to create images or other content.

President Joe Biden, who is hosting executives from the seven companies at the White House on Friday, is also working to develop an executive order and bipartisan legislation on AI technology.

As part of the effort, the seven companies committed to developing a system to "watermark" all forms of AI-generated content — text, images, audio and video — so that users know when the technology has been used.

This watermark, embedded in the content in a technical way, is intended to make it easier for users to detect deepfake images or audio files that, for example, depict violence that never happened, bolster a scam, or distort an image of a politician to cast the person in an unflattering light.

It is unclear how the watermark will remain evident when the content is shared.

The companies also pledged to focus on protecting user privacy as AI develops and on ensuring that the technology is free of bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions to scientific problems such as medical research and climate change mitigation.

Reporting by Diane Bartz in Washington and Krystal Hu in New York; Editing by Matthew Lewis and Nick Zieminski

Our standards: Thomson Reuters Trust Principles.

Focused on US antitrust as well as corporate regulation and legislation, with experience covering war in Bosnia, elections in Mexico and Nicaragua, as well as stories from Brazil, Chile, Cuba, El Salvador, Nigeria and Peru.

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, focusing on growth-stage startups, technology investments and AI. She previously covered M&A for Reuters, with stories on Trump’s SPAC and Elon Musk’s Twitter funding. Previously, she reported on Amazon for Yahoo Finance, and her investigation into the company’s retail practices was cited by lawmakers in Congress. Krystal started a career in journalism by writing about technology and politics in China. She has a master’s degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.
