Generative AI will upgrade the professions

The writers are the authors of ‘The Future of the Professions’
ChatGPT opened a new chapter in the story of artificial intelligence we have been working with for more than a decade. Our research has focused on the impact of AI on professional work, looking at technologies across eight sectors, including medicine, law, education and accounting.
Overall, the narrative laid out in our book, The Future of the Professions, has been optimistic. At a time when professional advice is too expensive, and our health, justice, education and audit systems often fail us, AI offers the promise of easier access to the best expertise. Understandably, some professionals find this threatening because the latest generative AI systems are already outperforming human professionals in some tasks – from writing efficient code to drafting persuasive documents.
Contrary to many predictions that AI would remain “narrow” for many years yet, the latest systems are far more expansive than those that came before, as happy to diagnose diseases as they are to design beautiful buildings or devise lesson plans.
They emphatically rebut the idea that AI systems need to “think” to undertake tasks that require “creativity” or “judgment” – a common line of defense of the old guard. High-performing systems don’t need to “reason” about the law like a lawyer to produce a solid contract, nor “understand” anatomy like a doctor to provide useful medical advice.
How do professionals react? Our original research and recent work suggest a familiar response pattern. Architects are inclined to embrace new possibilities. Auditors dive for cover because the threats to their data-driven activities are clear. Doctors can be dismissive of non-doctors, while management consultants prefer to advise on transformation rather than change themselves.
With generative AI, however, business leaders appear to be less dismissive than in the past.
Some are interested in how to use these technologies to streamline existing operations: a recent study by researchers at MIT found ChatGPT increased the productivity of white-collar writing tasks, such as writing a sensitive company-wide email or a powerful press release, by nearly 40 percent. Others are keen to downsize: US online learning company Domestika, for example, is reported to have fired almost half of its Spanish staff in the hope that those working on content translation and marketing materials can be replaced with ChatGPT.
Although such cuts seem hasty, research from Goldman Sachs predicted that as many as 300 million full-time jobs around the world could be threatened by automation. However, few professionals accept that AI will take over their most complex work. They continue to imagine that AI systems will be limited to their “routine” activities, the simple, repetitive parts of their jobs – document review, administrative tasks, everyday grunt work. But for complex activities, many professionals argue, people will always want the personal attention of experts.
Every element of this claim is open to challenge. The capabilities of GPT systems already extend far beyond the “routine”. When it comes to personal attention, we can learn from taxes.
Few people who file their tax returns using online tools instead of human experts lament the loss of social interaction with their tax advisors.
To claim that clients want expert, trusted advisors is to confuse process and outcome. Patients don’t want doctors, they want good health. Clients don’t want litigation, they want to avoid pitfalls in the first place. People want reliable solutions, whether they rely on flesh-and-blood professionals or AI.
This leads to broader questions. How are existing professionals adapting and what are we training younger professionals to become? The concern is that we are fostering 20th-century craftsmen, whose knowledge will soon be redundant. Today’s and tomorrow’s workers should acquire the skills needed to build and operate the systems that will replace their old ways of working – knowledge engineering, data science, design thinking and risk management.
Some argue that teaching people to code is the priority. But this is an activity where AI systems are already impressive – AlphaCode, developed by DeepMind, outperformed almost half of the participants in major coding competitions. Instead, we should be alive to the emergence of as-yet-unknown roles, such as prompt engineers – those currently most skilled at instructing generative AI systems and eliciting the best responses from them.
There are, of course, risks with the latest AI. A recent technical paper on GPT-4 acknowledges that such systems can “reinforce biases and perpetuate stereotypes”. They can “hallucinate”. They can also be wrong, and they raise the specter of technological unemployment. Hence the frenzy of ethical and regulatory debate. But at some point, as performance improves and the benefits become indisputable, the threats and shortcomings will likely be outweighed by the enhanced access AI provides.
The professions are unprepared. Many companies are still focused on selling the time of their people, and their growth strategies are based on building larger armies of traditional lawyers, accountants, tax advisors, architects and the rest.
The big opportunities certainly lie elsewhere – not least in becoming actively involved in developing generative AI applications for their customers.