OpenAI built a text generator so good, it's considered too dangerous to release – TechCrunch

A storm is brewing over a new language model, built by nonprofit artificial intelligence research firm OpenAI, which the company says is so good at generating convincing, well-written text that it's worried about potential abuse. That has angered some in the community, who have accused the company of reneging on a promise not to close off its research.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was a system that generates text that "adapts to the style and content of the conditioning text," allowing the user to "generate realistic and coherent continuations about a topic of their choosing." The model is a vast improvement on the first version, producing longer text with greater coherence.
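The training objective described above, predicting the next word from the words that came before, can be illustrated with a deliberately tiny sketch. The snippet below is not OpenAI's model; it stands in a billion-parameter neural network with simple bigram counts over a toy corpus, purely to show how "predict the next word" yields a continuation of a prompt.

```python
from collections import Counter, defaultdict

# Toy illustration of the next-word-prediction objective: instead of a
# neural network, count which word follows each word in a tiny corpus.
corpus = "recycling is good for the world and recycling is good for us".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def continue_text(start, n_words=4):
    """Greedily extend a one-word prompt by repeatedly predicting the next word."""
    words = [start]
    for _ in range(n_words):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(continue_text("recycling"))  # → "recycling is good for the"
```

GPT-2 does the same thing at vastly greater scale, with a learned probability distribution over tens of thousands of tokens rather than raw counts, which is what lets it stay coherent over long passages.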

But for every good use of the system, such as bots capable of better dialogue and better speech recognition, the nonprofit found several bad ones, like generating fake news, impersonating people, or automating abusive or spam comments on social media.

To wit: when GPT-2 was tasked with responding to the prompt "Recycling is good for the world," a statement almost everyone agrees with, the machine spat back: "Recycling is not good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I'm not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases such as heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world's most advanced and efficient recycling system. Recycling is a huge waste of time, energy, money and resources."

No wonder OpenAI was worried about releasing it.

For that reason, OpenAI said it is only releasing a smaller version of the language model, citing its charter, which noted that the organization expects "safety and security concerns will reduce our traditional publishing in the future." Admittedly, the organization said it wasn't sure of the decision: "we believe the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas."

Not everyone took it well. OpenAI's tweet announcing GPT-2 was met with anger and frustration, with critics accusing the company of "closing off" its research and doing the opposite of "open," in a jab at the company's name.

Others were more forgiving, calling the move a "new bar for ethics" for thinking ahead of possible misuses.

Jack Clark, policy director at OpenAI, said the organization's priority is "not enabling malicious or abusive uses of the technology," calling it "a very tough balancing act for us." Elon Musk, one of OpenAI's early backers, was dragged into the controversy, confirming in a tweet that he has not been involved with the company "for over a year" and that he and the company parted "on good terms."

OpenAI said it has not settled on a final decision about GPT-2's release and that it will revisit the question in six months. In the meantime, the company said that governments "should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems."

Only this week, President Trump signed an executive order on artificial intelligence. It comes months after the U.S. intelligence community warned that artificial intelligence was one of many "emerging threats" to U.S. national security, along with quantum computing and autonomous unmanned vehicles.
