A.I. can't take your job, but it can be your boss
When Conor Sprouls, a customer service representative at the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom right corner of his screen. There, in a small blue box, A.I. tells him how he's doing.
Speaking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.
Sound sleepy? The software displays an "energy" cue, with an image of a coffee cup.
Not empathetic enough? A heart icon appears.
For decades, people have fearfully imagined armies of hyper-efficient robots invading offices and factories, gobbling up jobs once done by humans. But in all the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility that it will replace the bosses, too.
Mr. Sprouls and the other call center workers at his office in Warwick, R.I., still have plenty of human supervisors. But the software on their screens – made by Cogito, an A.I. company in Boston – has become a kind of adjunct manager, always watching them. At the end of every call, Mr. Sprouls's Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimizing it, the program notifies his supervisor.
Cogito is one of several A.I. applications used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito's CEO, is to make workers more effective by giving them real-time feedback.
"There is variation in human performance," said Mr. Feast. "We can derive from how people talk to each other whether it goes well or not."
The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized. Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who fail to meet their targets, as The Verge uncovered this year. (Amazon disputes that it fires workers without human input, and says managers can intervene in the process.) IBM has used Watson, its A.I. platform, during employee reviews to predict future performance, and claims it has a 96 percent accuracy rate.
Then there are the startups. Cogito, which works with major insurance companies like MetLife and Humana, as well as finance and retail firms, says it has 20,000 users. Percolata, a Silicon Valley company that counts Uniqlo and 7-Eleven among its customers, uses in-store sensors to calculate a "true productivity" score for each worker, and ranks workers from most to least productive.
Management by algorithm is not a new concept. In the early 20th century, Frederick Winslow Taylor revolutionized manufacturing with his theory of "scientific management," which attempted to wring inefficiency out of factories by counting and measuring every aspect of a job. More recently, Uber, Lyft and other on-demand platforms have built enormous businesses by outsourcing conventional management tasks – scheduling, payroll, performance evaluations – to computers.
But using A.I. to manage workers in conventional 9-to-5 jobs has been more controversial. Critics have accused companies of using algorithms for management tasks, saying that automated systems can dehumanize and unfairly punish employees. And while it is clear why executives would want A.I. that can track everything their workers do, it is less clear why workers would.
"It is surreal to believe that any company can burn their own workers without any human involvement," Marc Perrone, president of the United Food and Commercial Workers International Union, representing food workers, said in a statement on Amazon in April.
In the gig economy, algorithmic management has also been a source of tension between workers and the platforms that connect them with customers. This year, couriers for Postmates, DoorDash and other on-demand delivery companies protested a method of calculating their pay that used an algorithm to count customer tips against guaranteed minimum wages – a practice that was nearly invisible to drivers because of the way the platforms hid the details of workers' pay.
There were no protests at MetLife's call center. Instead, the employees I spoke with seemed to regard their Cogito software as a mild annoyance at worst. Several said they liked getting the pop-up notifications during their calls, although some said they had struggled to figure out how to get the "empathy" notification to stop appearing. (Cogito says the A.I. analyzes subtle differences in tone between the worker and the caller and encourages the worker to try to mirror the customer's mood.)
MetLife, which uses the software with 1,500 of its call center employees, says the app has increased customer satisfaction by 13 percent.
"It actually changes people's behavior without knowing about it," said Christopher Smith, MetLife's Global Operations Manager. "It becomes a more human interaction."
Still, there is something of scary science fiction in a situation where A.I. monitors human workers and tells them how to relate to other people. And it is reminiscent of the "workplace gamification" trend that swept through corporate America a decade ago, when companies used psychological tricks borrowed from video games, such as badges and leaderboards, to try to spur workers to perform better.

Phil Libin, CEO of All Turtles, an A.I. startup studio in San Francisco, recoiled in horror when I told him about my call center visit.
"It's a dystopian hell," said Libin. "Why would anyone build this world where you're being judged by an opaque black box computer?"
Defenders of workplace A.I. might argue that these systems are not meant to be overbearing. Instead, they are meant to make workers better by reminding them to thank the customer, to empathize with the frustrated claimant on line 1, or to avoid slacking off on the job.
The best argument for workplace A.I. may involve situations where human bias skews decision-making, such as hiring. Pymetrics, a New York startup, has made inroads into the corporate recruiting world by replacing the traditional résumé screening process with an A.I. program that uses a series of games to test for relevant skills. The algorithms are then analyzed to make sure they are not producing biased hiring outcomes, or favoring one group over another.
"We can adjust data and algorithms until we can remove bias. We can't do that with a human," says Frida Polli, Pymetrics & CEO.
Using A.I. to correct for human biases is a good thing. But as more A.I. enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis. If that happens, it won't be the robots staging the rebellion.
Follow Kevin Roose on Twitter: @kevinroose.