OpenAI releases tool for detecting AI-written text
ChatGPT creator OpenAI today launched a free web-based tool designed to help educators and others determine whether a particular piece of text was written by a human or a machine.
Yes, but: OpenAI cautions that the tool is imperfect and that performance varies based on how similar the text being analyzed is to the kinds of text the tool was trained on.
- “It has both false positives and false negatives,” OpenAI’s Jan Leike told Axios, warning that the new tool should not be relied on alone to determine the authorship of a document.
How it works: Users copy a piece of text into a box, and the system will assess how likely it is that the text has been generated by an AI system.
- It offers a five-point scale of results: Very unlikely to have been AI-generated, unlikely, unclear, possible or likely.
- It performs best on text samples of more than 1,000 words and in English, with significantly worse performance in other languages. And it can’t reliably distinguish computer code written by humans from code written by AI.
- That said, OpenAI says the new tool is significantly better than a previous one it had released.
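The five-point scale described above can be sketched as a simple threshold mapping from a likelihood score to a label. The cutoff values below are illustrative assumptions for the sketch, not OpenAI’s published thresholds:

```python
def label_score(p: float) -> str:
    """Map an AI-likelihood score in [0, 1] to a five-point label.

    The thresholds are hypothetical, chosen only to illustrate the
    banding idea; OpenAI has not published the exact cutoffs here.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if p < 0.10:
        return "very unlikely"
    if p < 0.45:
        return "unlikely"
    if p < 0.90:
        return "unclear"
    if p < 0.98:
        return "possible"
    return "likely"
```

Banding a raw score this way trades precision for interpretability, which matches the tool’s stated goal of giving educators a rough signal rather than a definitive verdict.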
The big picture: Concerns are high, especially in education, about the rise of powerful tools like ChatGPT. Schools in New York, for example, have banned the technology on their networks.
- Experts are also concerned about an increase in AI-generated misinformation as well as the potential for robots to pose as humans.
- A number of other companies, organizations and individuals are working on similar tools to detect AI-generated content.
Between the lines: OpenAI said it is looking at other approaches to help people distinguish AI-generated text from that created by humans, such as including watermarks in works produced by the AI systems.