There’s a new AI bot in town: ChatGPT. And it deserves your attention.
The tool, from an artificial intelligence powerhouse, lets you type questions in natural language, which the chatbot answers in conversational, if somewhat stilted, prose. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses.
It’s a big deal. The tool seems quite knowledgeable, if not omniscient. It can be creative, and its answers can sound downright authoritative. Within a few days of its launch, more than a million people had tried out ChatGPT.
But its creator, the for-profit research lab called OpenAI, warns that ChatGPT “may occasionally generate incorrect or misleading information,” so be careful. Here’s a look at why ChatGPT is important and what’s going on with it.
What is ChatGPT?
ChatGPT is an AI chatbot system that OpenAI launched in November to showcase and test what a very large, powerful AI system can accomplish. You can ask it countless questions and will often get an answer that is useful.
For example, you might ask it an encyclopedia question like “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that shows all the different ways you can arrange the letters in a word.
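For that last request, the kind of program ChatGPT might produce looks something like this (a hypothetical Python version for illustration; ChatGPT’s actual output varies from run to run):

```python
from itertools import permutations

def letter_arrangements(word: str) -> list[str]:
    """Return every distinct ordering of the letters in `word`."""
    # permutations() yields tuples of characters; join them back into
    # strings, and use a set to drop duplicates caused by repeated letters.
    return sorted({"".join(p) for p in permutations(word)})

print(letter_arrangements("cat"))
# → ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```

A 3-letter word with distinct letters has 3! = 6 arrangements; a word with repeated letters has fewer, which is why the deduplication matters.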
Here’s the catch: ChatGPT doesn’t exactly know anything. It is an artificial intelligence that is trained to recognize patterns in large chunks of text taken from the internet, and then further trained with human help to deliver more useful and better dialogue. The answers you get may sound plausible and even authoritative, but they may well be completely wrong, as OpenAI warns.
Chatbots have been of interest for years to companies looking for ways to help customers get what they need, and to AI researchers trying to tackle the Turing test. That’s the famous “imitation game” that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human judge conversing with both a human and a computer tell which is which?
What kind of questions can you ask?
You can ask anything, although you may not get an answer. OpenAI suggests some categories, such as explaining physics, asking for birthday party ideas, and getting programming help.
I asked it to write a poem and it did, although I don’t think any literary experts would be impressed. Then I asked it to make it more exciting, and lo and behold, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.
One wild example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a prompt to write “a folk song about writing a rust program and struggling with lifetime errors.”
ChatGPT’s expertise is broad and its ability to follow a conversation is remarkable. When I asked it for words that rhymed with “purple,” it gave some suggestions, then when I followed up with “What about pink?” it didn’t miss a beat. (There are also many more good rhymes for “pink.”)
When I asked, “Is it easier to get a date by being sensitive or being tough?” ChatGPT responded in part: “Some people may find a sensitive person more attractive and appealing, while others may be attracted to a tough and assertive person. In general, being genuine and authentic in your interactions with others will be more effective in getting a date than trying to fit a certain mold or persona.”
You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is flooded with users showing off the AI’s prowess at generating art prompts and writing code. Some have even proclaimed “Google is dead,” along with the college essay. More on that below.
Who built ChatGPT?
ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and useful” artificial general intelligence system or to help others do so.
It’s made a splash before, first with GPT-3, which can generate text that sounds like a human wrote it, and then DALL-E, which creates what’s now called “generative art” based on text prompts you type.
GPT-3, and the GPT-3.5 update on which ChatGPT is based, are examples of an AI technology called large language models. They’re trained to create text based on what they’ve seen, and they can be trained automatically — usually with massive amounts of computing power over a period of weeks. For example, the training process might find a random section of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original, and then reward the AI system for getting as close as possible. Repeating this process over and over can produce a sophisticated ability to generate text.
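That fill-in-the-blank loop can be sketched in miniature. The toy “model” below is just a bigram lookup table — nothing like the billion-parameter neural networks behind GPT — but the loop itself (hide a word, predict it, score the prediction) illustrates the same training objective:

```python
import random

# A tiny text corpus standing in for "large chunks of text from the internet."
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Train": count which word most often follows each word.
follows: dict[str, dict[str, int]] = {}
for prev, word in zip(corpus, corpus[1:]):
    follows.setdefault(prev, {}).setdefault(word, 0)
    follows[prev][word] += 1

def predict(prev: str) -> str:
    """Guess the word that follows `prev`, based on corpus counts."""
    candidates = follows.get(prev)
    if not candidates:
        return "<unk>"
    return max(candidates, key=candidates.get)

# "Evaluate": hide a word, ask the model to fill in the blank, and
# score a point each time the guess matches the original word.
random.seed(0)
score, trials = 0, 100
for _ in range(trials):
    i = random.randrange(1, len(corpus))
    hidden = corpus[i]
    score += int(predict(corpus[i - 1]) == hidden)

print(f"filled the blank correctly {score}/{trials} times")
```

A real large language model replaces the lookup table with a neural network and adjusts billions of parameters to push that score higher, but the reward signal — how close the filled-in text is to the original — is the same idea.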
Is ChatGPT free?
Yes, for now anyway. OpenAI CEO Sam Altman warned Sunday: “We’re going to have to somehow monetize it at some point; the computational cost is eye-watering.” OpenAI charges for DALL-E art when you exceed a basic free usage level.
What are the limits of ChatGPT?
As OpenAI points out, ChatGPT can give you wrong answers. Sometimes, helpfully, it will warn you specifically about its own shortcomings. For example, when I asked who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I am not able to browse the internet or access any external information beyond what I was trained on.” (The phrase is from Wallace Stevens’ 1942 poem “Connoisseur of Chaos.”)
ChatGPT was willing to take a stab at the meaning of that phrase: “a situation where facts or information are difficult to process or understand.” It sandwiched that interpretation between caveats that it’s hard to judge without more context and that it’s just one possible interpretation.
ChatGPT’s answers may look authoritative but be wrong.
Software developer site Stack Overflow banned ChatGPT answers to programming questions. Administrators warned, “because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”
You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. When I asked twice whether Moore’s Law, which tracks the chip industry’s progress in packing ever more transistors onto processors, is running out of steam, I got two different answers. One pointed optimistically to continued progress, while the other pointed more gloomily to the slowdown and the belief “that Moore’s Law may be reaching its limits.”
Both ideas are common in the computer industry itself, so this ambiguous attitude may reflect what human experts believe.
With other questions that don’t have clear answers, ChatGPT often won’t be pinned down.
However, the fact that it provides an answer at all is a remarkable development in computing. Computers are notoriously literal, and refuse to work unless you follow exact syntax and interface requirements. Large language models reveal a more human-friendly style of interaction, not to mention an ability to generate responses that are somewhere between copying and creativity.
What is prohibited?
ChatGPT is designed to weed out “inappropriate” requests, a behavior in line with OpenAI’s mission “to ensure that artificial general intelligence benefits all of humanity.”
If you ask ChatGPT itself what is prohibited, it will tell you: any questions “that are discriminatory, offensive or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.
Is this better than Google search?
Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.
Google often provides you with suggested answers to questions and links to websites that it thinks will be relevant. Often, ChatGPT’s answers far exceed what Google will suggest, so it’s easy to imagine GPT-3 as a rival.
But you should think twice before trusting ChatGPT. As with Google itself and other sources of information such as Wikipedia, it is best practice to verify information from original sources before relying on it.
Checking the veracity of ChatGPT’s responses takes some work, because it just gives you raw text with no links or citations. But it can be useful and, in some cases, thought-provoking. You may not see anything quite like ChatGPT in Google’s search results, but Google has built large language models of its own and already uses AI extensively in search.
So ChatGPT undoubtedly points the way towards our technological future.