It's time for another installment in our ongoing look at the future, brought to you by the increasingly worrying capabilities of artificial intelligence. Everyone is aware of the problem of fake news online, and now OpenAI, the nonprofit backed by Elon Musk, has developed an AI system that can create fake news content so convincing that the group is too wary to release it publicly, citing fear of abuse. It has let researchers see a small part of what it has built, so the work isn't being hidden altogether, but the group's trepidation is the real story here.
"Our model, called GPT-2, was trained to predict the next word in 40 GB of internet text," reads a new OpenAI blog about the effort. "Because of our concerns about malicious applications of the technology, we do not release the trained model. As an experiment in responsible disclosure, we instead release a much smaller model for researchers to experiment with, as well as a technical paper."
Basically, the GPT-2 system was "trained" by being fed 8 million web pages, until it reached the point where it could look at a given passage of text and predict the words likely to come next. Per the OpenAI blog post, the model is "chameleon-like," adapting to the style and content of whatever text it is given.
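The article itself includes no code, but as a rough illustration of what "predict the next word" means in practice, here is a minimal sketch that loads the small, publicly released GPT-2 checkpoint through the Hugging Face transformers library (a third-party package not mentioned in the article, used here purely as an assumption) and samples a continuation of a prompt one token at a time.

```python
# Minimal sketch of next-word prediction with the small, publicly released GPT-2 model.
# Assumes the Hugging Face "transformers" package (plus PyTorch), which the article does not mention.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, a researcher discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate a continuation by repeatedly predicting the next token and sampling from the result.
output_ids = model.generate(
    input_ids,
    max_length=60,                          # total length of prompt plus continuation, in tokens
    do_sample=True,                         # sample rather than always take the single most likely word
    top_k=40,                               # restrict sampling to the 40 most likely next tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Running this a few times produces different continuations of the same prompt, which is why OpenAI notes that some of its showcased samples were picked from multiple attempts.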
Here's an example: the AI system was given this human-written prompt:
" In a shocking discovery, a researcher discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes. Even more surprising to the researchers was that unicorns spoke perfect English. "
From this, the AI system (after 10 attempts) continued the "story," which began with this AI-generated text: "The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved." (You can check out the OpenAI blog post at the link above to read the rest of the unicorn story the AI system spun out.)
Imagine what such a system could do if, say, let loose on a campaign story. That prospect is why OpenAI says it is publishing only a much smaller version of GPT-2 along with sampling code, and releasing no dataset, training code, or GPT-2 model weights. Again, from the OpenAI blog post announcing this: "We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do so, and gives the AI community more time to discuss the implications of such systems.
"We also believe that governments should consider expanding or initiating measures to more systematically monitor societal impact and dissemination of AI technologies, and to measure the progress of the properties of such systems, the OpenAI blog post concludes.