5 impressive things GPT-4 can do that ChatGPT couldn’t

(CNN) The first day after it was unveiled, GPT-4 wowed many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams, and build a working website from a hand-drawn sketch.

On Tuesday, OpenAI announced the next-generation version of the artificial intelligence technology that underpins its viral chatbot tool, ChatGPT. The more powerful GPT-4 promises to blow previous iterations out of the water, potentially changing the way we use the internet to work, play and create. But it also raises challenging questions about how AI tools can boost professions, enable students to cheat and change our relationship with technology.

GPT-4 is an updated version of the company’s large language model, which is trained on massive amounts of online data to generate complex responses to user requests. It’s now available via a waiting list and has already made its way into some third-party products, including Microsoft’s new AI-powered Bing search engine. Some users with early access to the tool have shared their experiences, highlighting some of its most compelling use cases.

Here’s a closer look at the potential of GPT-4:

Analyze more than text

At its core, the biggest change to GPT-4 is the ability to work with images that users upload.

One of the most striking use cases so far came from an OpenAI video demo that showed how a drawing can be turned into a functional website within minutes. The demonstrator uploaded the image to GPT-4 and then pasted the resulting code into a preview that showed what a working website could look like.

In its announcement, OpenAI also showed how GPT-4 was asked to explain a joke from a series of images – which featured a smartphone with a faulty charger – and described why it was funny. Although it may sound simple, dissecting a joke is more complicated for artificial intelligence tools to pick up on because of the context required.

In another test, The New York Times showed GPT-4 a picture of the interior of a refrigerator and had it come up with a meal based on the ingredients.

The image feature is not live yet, but OpenAI is expected to roll it out in the coming weeks.

Coding made even easier

Some early GPT-4 users with little or no prior coding knowledge have also used it to recreate iconic games like Pong, Tetris or Snake by following step-by-step instructions from the tool on how to do so. Others have made their own original games. (GPT-4 can write code in all major programming languages, according to OpenAI.)

“The powerful language capabilities of GPT-4 will be used for everything from storyboarding, character creation to game content,” said Arun Chandrasekaran, analyst at Gartner Research. “This could give rise to more independent game providers in the future. But beyond the game itself, GPT-4 and similar models can be used to create marketing content around game previews, generate news articles and even moderate game discussion boards.”

Like games, GPT-4 could change the way people develop apps. One Twitter user said they made a simple drawing app in minutes, while another claimed to have coded an app that recommends five new movies each day, along with trailers and details on where to watch them.

“Coding is like learning to drive a car – as long as the beginner gets some guidance, anyone can code,” said Lian Jye Su, an analyst at ABI Research. “AI can be a good teacher.”

Passes tests with flying colors

Although OpenAI said the update is “less capable” than humans in many real-world scenarios, it shows “human-level performance” on various professional and academic tests. The company said GPT-4 recently passed a simulated law school exam with a score in the top 10% of test takers. In contrast, the previous version, GPT-3.5, scored around the bottom 10%. The latest version also performed strongly on the LSAT, GRE, SAT and many AP exams, according to OpenAI.

In January, ChatGPT made headlines for its ability to pass prestigious graduate-level exams, such as one from the University of Pennsylvania’s Wharton School of Business, but not with particularly high grades. The company said it spent months applying lessons learned from the test program and ChatGPT to improve the system’s accuracy and ability to stay on topic.

Gives more precise answers

Compared to the previous version, GPT-4 is capable of producing longer, more detailed and more reliable written responses, according to the company.

The latest version can now provide answers of up to 25,000 words, up from approximately 4,000 in the past, and can provide detailed instructions for even the most unusual scenarios, from cleaning a piranha’s fish tank to extracting DNA from a strawberry. One early user said it provided in-depth pick-up line suggestions based on a question listed on a dating profile.

Streamlining work across different industries

Joshua Browder, CEO of legal services chatbot DoNotPay, said his company is already working on using the tool to generate “one-click lawsuits” to sue robocallers – an early indication of GPT-4’s huge potential to change how people work across industries.

“Imagine receiving a call, clicking a button, [the] conversation is transcribed and a 1,000-word lawsuit is generated. GPT-3.5 wasn’t good enough, but GPT-4 does the job extremely well,” Browder tweeted.

Meanwhile, Jake Kozloski, CEO of dating site Keeper, said his company is using the tool to better match users.

According to Su at ABI Research, it’s possible we’ll also see big advances in the “connected car [dashboard], remote diagnosis in the healthcare system and other AI applications that were previously not possible.”

A work in progress

Although the company has made major improvements to its AI model, GPT-4 has similar limitations to previous versions. OpenAI said the technology lacks knowledge of events that occurred after its training data was cut off (September 2021) and does not learn from its experiences. It can also make “simple reasoning errors” or be “overly credulous in accepting obvious false statements from a user,” and fail to double-check its work, the company said.

Gartner’s Chandrasekaran said these limitations are shared by many AI models today. “Let’s not forget that these AI models are not perfect,” Chandrasekaran said. “They can produce inaccurate information from time to time and can be black-box in nature.”

For now, OpenAI said GPT-4 users should exercise “great caution,” especially “in high-stakes contexts.”
