ChatGPT, the wildly popular AI chatbot, is powered by machine learning systems, but those systems are guided by human workers, many of whom aren’t paid particularly well. A new report from NBC News shows that OpenAI, the startup behind ChatGPT, has been paying droves of U.S. contractors to perform the essential task of data labeling: tagging the data used to train ChatGPT’s software to respond better to user requests. The compensation for this pivotal work? $15 per hour.
“We are grunt workers, but there would be no AI language systems without it,” one worker, Alexej Savreux, told NBC. “You can design all the neural networks you want, you can get all the researchers involved you want, but without labelers, you have no ChatGPT. You have nothing.”
Data labeling, the task that Savreux and others have been saddled with, is the process of parsing data samples to help automated systems identify particular items within a dataset. Labelers tag individual items, whether distinct images or sections of text, so that machines can learn to recognize them on their own. In this way, human workers help automated systems respond more accurately to user requests, playing a key role in training machine learning models.
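The label-then-learn loop can be sketched with a toy example. The data, labels, and word-vote "model" below are all hypothetical and deliberately simplistic, a stand-in for the far larger neural systems involved, not OpenAI's actual pipeline:

```python
from collections import Counter, defaultdict

# Hypothetical human-labeled samples: in a real pipeline, labelers tag
# each item so the model can learn the word-label associations.
labeled_data = [
    ("the movie was wonderful", "positive"),
    ("a truly wonderful experience", "positive"),
    ("the food was terrible", "negative"),
    ("terrible service and long waits", "negative"),
]

def train(samples):
    """Count how often each word co-occurs with each human-assigned label."""
    word_labels = defaultdict(Counter)
    for text, label in samples:
        for word in text.split():
            word_labels[word][label] += 1
    return word_labels

def predict(model, text):
    """Each known word votes for the labels it was seen with."""
    votes = Counter()
    for word in text.split():
        for label, count in model[word].items():
            votes[label] += count
    return votes.most_common(1)[0][0] if votes else "unknown"

model = train(labeled_data)
print(predict(model, "wonderful food"))  # prints "positive"
```

Without the human-supplied labels in `labeled_data`, the counting step has nothing to learn from, which is the point Savreux is making.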
But despite the importance of this position, NBC notes that most of these workers are not compensated particularly well. In OpenAI’s case, the data labelers receive no benefits and are paid little more than what amounts to minimum wage in some states. Savreux is based in Kansas City, where the minimum wage is $7.25 an hour.
As bad as that is, it’s still an upgrade from how OpenAI used to staff its moderation teams. Previously, the company outsourced the work to moderators in Africa, where, thanks to depressed wages and limited labor laws, it could get away with paying workers as little as $2 per hour. It collaborated with Sama, an American firm that says it’s devoted to an “ethical AI supply chain” but whose main claim to fame is connecting big tech companies with low-wage contractors in the developing world. Sama has previously been sued over allegations of poor working conditions. Kenya’s low-paid moderators ultimately helped OpenAI build a filtration system that could weed out nasty or offensive material submitted to its chatbot; to accomplish this, however, they had to wade through screenfuls of that material, including descriptions of murder, torture, sexual violence, and incest.
Artificial intelligence may seem like magic—springing to life and responding to user requests as if by incantation—but, in reality, it’s being helped along by droves of invisible human workers who deserve better for their contribution.