OpenAI is aware of the fears about fake news, says Jack Clark, the organization's policy director
OpenAI, an artificial intelligence group co-founded by Elon Musk, has demonstrated a program that can produce convincing fake news articles after being given only a few pieces of information.
In an example published on Thursday by OpenAI, the system was fed a snippet of text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown." From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely fabricated.
"The text it can generate from prompts is just fantastic," said Sam Bowman, a computer scientist at New York University who specialises in natural language processing and who was not involved in the OpenAI project, but was briefed on it. "It is capable of doing things that are qualitatively much more sophisticated than anything we have seen before."
OpenAI is aware of the concerns about fake news, said Jack Clark, the organisation's policy director. "One of the not-so-good uses would be disinformation, because it can produce things that sound coherent but are not accurate," he said.
As a precaution, OpenAI decided not to publish or release the most sophisticated versions of the software. However, it has created a tool that allows policymakers, journalists, writers and artists to experiment with the algorithm to see what type of text it can generate and what other types of tasks it can perform.
The potential for software to create virtually authentic-looking fake news articles comes amid global concerns over the role of technology in spreading disinformation. European regulators have threatened action if technology companies do not do more to prevent their products from being used to sway voters, and Facebook has been working since the 2016 US election to curb the spread of false information on its platform.
Clark and Bowman both said that, for now, the system's capabilities are not consistent enough to pose an immediate threat. "This is not a shovel-ready technology today, and that's good," Clark said.
Unveiled in a paper and a blog post, OpenAI's creation is trained for a task known as language modelling, which involves predicting the next word in a passage based on knowledge of all the words that came before it, similar to how autocomplete works when writing an email on a mobile phone. It can also be used for translation and open-ended question answering.
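To make the idea concrete, here is a minimal sketch of next-word prediction using simple bigram counts over a toy corpus. This is an illustration of the general language-modelling task only, not OpenAI's method, which uses a large neural network; the corpus, function names and example words are invented for demonstration.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy training data (invented for this example).
corpus = [
    "the train was stolen today",
    "the train was found later",
    "a carriage was stolen yesterday",
]
model = train_bigram_model(corpus)
print(predict_next(model, "was"))  # "stolen" follows "was" most often
```

A system like OpenAI's does the same kind of prediction, but with a neural network trained on millions of web pages rather than a handful of counted word pairs, which is what lets it continue a prompt with whole coherent paragraphs.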
One potential use is helping creative writers generate ideas or dialogue, said Jeff Wu, an OpenAI researcher who worked on the project. Others include checking texts for grammatical errors, or hunting for bugs in software code. In the future, the system could be fine-tuned to summarise text for business or government decision-makers, he said.
In the past year, researchers have made a series of sudden leaps in language processing. In November, Alphabet Inc.'s Google revealed a similar multipurpose algorithm called BERT that can understand and answer questions. Earlier, the Allen Institute for Artificial Intelligence, a research laboratory in Seattle, achieved landmark results in natural language processing with an algorithm called ELMo. Bowman said BERT and ELMo were "the most effective developments" in the field over the past five years. He said OpenAI's new algorithm was "significant", but not as revolutionary as BERT.
Although OpenAI was founded by Musk, he stepped down from its board last year. He had helped kickstart the non-profit research organisation in 2016 with Sam Altman and Jessica Livingston, the Silicon Valley entrepreneurs behind the start-up incubator Y Combinator. Other early backers of OpenAI include Peter Thiel and Reid Hoffman.
(This story has not been edited by NDTV employees and is automatically generated from a syndicated feed.)