There is a large, important side of AI hidden from the public

Of all the stories about artificial intelligence that have appeared recently, a new one by Josh Dzieza, published as a collaboration between New York magazine and The Verge, is equal parts compelling and surprising. He explores a simple premise: for AI models to work, they need to be fed data—lots of data, almost unfathomable amounts of data. Enter the “annotators.” Millions of people around the world, generally working for low wages, grind through monotonous tasks such as tagging pictures of clothing, all so that AI models can get smarter and smarter. Behind “even the most impressive AI system are people—a large number of people labeling data to train it and clarifying data when it gets confused,” Dzieza writes.
In what he calls a growing “global industry,” these annotators work for companies that sell that data to the big AI players at a high price, all of them fostering a culture of secrecy.
Indeed, annotators are typically prohibited from talking about their work, though they are usually kept in the dark about the big picture anyway. (A major player is Scale AI, a Silicon Valley data provider.) “The result is that, with few exceptions, little is known about the information that shapes these systems’ behavior, and even less is known about the people who do the shaping.” Dzieza interviewed two dozen annotators around the world, and he even worked as one himself to get the whole picture. At one point, describing the entire human-machine feedback loop, he offers this thought-provoking gem: “ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better AI that was trained on human writing.” (The whole story is well worth reading.)