You Could Be Next

Technology | By David Park | March 10, 2026 | 20 min read

Last updated: April 2, 2026, 6:55 AM


The LinkedIn post seemed like yet another scam job offer, but Katya was desperate enough to click. After college, she’d struggled to make a living as a freelance journalist, gone to grad school, then pivoted to what she hoped would be a more stable career in content marketing — only to find AI had automated much of the work. This company was called Crossing Hurdles, and it promised copywriting jobs starting at $45 per hour.

Katya clicked and was taken to a page for another company, called Mercor, where she was instructed to interview on-camera with an AI named Melvin. “It just seemed like the sketchiest thing in the world,” Katya says. She closed the tab. But a few weeks later, still unemployed, she got a message inviting her to apply to Mercor. This time, she looked up the company. Mercor, it seemed, sold data to train AI, and she was being recruited to create that data. “My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable,” she says. The idea depressed her. But her financial situation was increasingly dire, and she had to find a new place to live in a hurry, so she turned on her webcam and said “hello” to Melvin.

It was a strange, if largely pleasant, experience. Manifesting on Katya’s laptop as a disembodied male voice, Melvin seemed to have actually read her résumé and asked specific questions about it. A few weeks later, Katya, who like most workers in this story asked to use a pseudonym out of fear of retaliation, received an email from Mercor offering her a job. If she accepted, she should sign the contract, submit to a background check, and install monitoring software onto her computer. She signed immediately.

She was added to a Slack channel, where it was clear she was entering a project already underway. Hundreds of people were busy writing examples of prompts someone might ask a chatbot, writing the chatbot’s ideal response to those prompts, then creating a detailed checklist of criteria that defined that ideal response. Each task took several hours to complete before the data was sent to workers stationed somewhere down the digital assembly line for further review. Katya wasn’t told whose AI she was training — managers referred to it only as “the client” — or what purpose the project served. But she enjoyed the work. She was having fun playing with the models, and the pay was very good. “It was like having a real job,” she says.

Two days after Katya started, the project was abruptly paused. A few days after that, a supervisor popped into the room to let everyone know it had been canceled. “I’m working assuming that I can plan around this. I’m saving up for first and last month’s rent for an apartment,” Katya says, “and then I’m back on my ass. No warning, no security, nothing.” Several days later, she got an email from Mercor with another offer, this one for a job evaluating what seemed to be conversations between chatbots and real users — many appeared to be from people in Malaysia and Vietnam practicing English — according to various criteria, like how well the chatbot followed instructions and the appropriateness of its tone. Sign the contract, the email said, and you’ll have a Zoom onboarding call in 45 minutes. It was 6:30PM on a Sunday night. Scarred from the abrupt disappearance of the previous gig, she accepted the offer and worked until she couldn’t stay awake.

Machine-learning systems learn by finding patterns in enormous quantities of data, but first that data has to be sorted, labeled, and produced by people. ChatGPT got its startling fluency from thousands of humans hired by companies such as Scale AI and Surge AI to write examples of things a helpful chatbot assistant would say and to grade its best responses. A little over a year ago, concerns began to mount in the industry about a plateau in the technology’s progress. Training models based on this type of grading yielded chatbots that were very good at sounding smart but still too unreliable to be useful. The exception was software engineering, where the ability of models to automatically check whether bits of code worked — did the code compile, did it print HELLO WORLD — allowed them to trial-and-error their way to genuine competence.

The problem was that few other human activities offer such unambiguous feedback. There are no objective tests for whether financial analysis or advertising copy is “good.” Undeterred, AI companies set out to make such tests, collectively paying billions of dollars to professionals of all types to write exacting and comprehensive criteria for a job well done. Mercor, the company Katya stumbled upon, was founded in 2023 by three then-19-year-olds from the Bay Area, Brendan Foody, Adarsh Hiremath, and Surya Midha, as a jobs platform that used AI interviews to match overseas engineers with tech companies. The company received so many inquiries from AI developers seeking professionals to produce training data that it decided to adapt. Last year, Mercor was valued at $10 billion, making its trio of founders the world’s youngest self-made billionaires. OpenAI has been a client; so has Anthropic.

Each of these data companies touts its stable of pedigreed experts. Mercor says around 30,000 professionals work on its platform each week, while Scale AI claims to have more than 700,000 “M.A.’s, Ph.D.’s, and college graduates.” Surge AI advertises its Supreme Court litigators, McKinsey principals, and platinum recording artists. These companies are hiring people with experience in law, finance, and coding, all areas where AI is making rapid inroads. But they’re also hiring people to produce data for practically any job you can imagine. Job listings seek chefs, management consultants, wildlife-conservation scientists, archivists, private investigators, police sergeants, reporters, teachers, and rental-counter clerks. One recent job ad called for experts in “North American early to mid-teen humor” who can, among other requirements, “explain humor using clear, logical language, including references to North American slang, trends, and social norms.” It is, as one industry veteran put it, the largest harvesting of human expertise ever attempted.

These companies have found rich recruiting ground among the growing ranks of the highly educated and underemployed. Aside from the 2008 financial crash and the pandemic, hiring is at its lowest point in decades. This past August, the early-career job-search platform Handshake found that job postings on the site had declined more than 16 percent compared with the year before and that listings were receiving 26 percent more applications. Meanwhile, Handshake launched an initiative last year connecting job seekers with roles producing AI training data. “As AI reshapes the future of work,” the company wrote, announcing the program, “we have the responsibility to rethink, educate, and prepare our network to navigate careers and participate in the AI economy.”

There is an underlying tension between the predictions of generally intelligent systems that can replace much of human cognitive labor and the money AI labs are actually spending on data to automate one task at a time. It is the difference between a future of abrupt mass unemployment and something more subtle but potentially just as disruptive: a future in which a growing number of people find work teaching AI to do the work they once did. The first wave of these workers consists of software engineers, graphic designers, writers, and other professionals in fields where the new training techniques are proving effective. They find themselves in a surreal situation, competing for precarious gigs pantomiming the careers they’d hoped to have.

Each of the more than 30 workers I spoke with occupied a position along a vast and growing data-supply chain. There are people crafting checklists that define a good chatbot response, typically called “rubrics,” and other people grading those rubrics. Others grade chatbot answers according to those rubrics, and still others take the rubrics and write out what’s often described as a “golden output,” or the ideal chatbot answer. Others are asked to explain every step they took to arrive at this golden output in the voice of a chatbot thinking to itself, producing what’s called a “reasoning trace” for AI to follow later when it encounters a similar task out in the real world.

Sometimes the labs want only rubrics for prompts their AI can’t already do, which means companies like Mercor ask workers to produce “stumpers,” or requests that will make the model fail. “It sounds easy, but it’s really hard,” says a worker who was trying to stump models by asking them to make inventory-management dashboards. Models fail in counterintuitive ways. They may be able to solve advanced-physics exam questions, but ask them for transit directions and they’ll recommend transferring between train lines that don’t connect. Finding these weak spots takes time and creativity.

One type of project gathers groups of lawyers, human-resources managers, teachers, consultants, or bankers for something Mercor calls world-building. “You and your team will role-play a real-life team within your profession,” the training materials read. The teams are given dedicated emails, calendars, and chat apps and asked to create a hundred or more documents that would be associated with some corporate undertaking, like a fictional mining company analyzing whether to enter the data-center business.

After several 16-hour days of fantasy document production, one worker recounts, the resulting slide decks, meeting notes, and financial forecasts are sent to another team, which uses them as grist in their attempts to stump a model operating in this simulated corporate environment. Then, having stumped the model, that team writes new, more nuanced rubrics, golden answers, and so on. Workers can only guess who the customer is or how many others are working on the project — based on references to teams like Management Consulting World No. 133, there could be hundreds, maybe thousands.

There are people hired to evaluate the ability of image models to follow their prompts and others who summarize video clips in extraordinary detail, presumably to train video models. Efforts to improve AI’s ability to have spoken conversations have resulted in a surging demand for voice actors, who might find themselves recording “authentic, emotionally resonant” speeches, according to one listing. “I just tell people I’m an AI trainer, then it sounds more professional than what I’m doing,” says an aspiring screenwriter who was instructed to record himself pretending to ask a chatbot for a fitness plan while pots and pans clanged in the kitchen. Another time, he was told to record himself dispensing financial advice over the phone to a parade of people he assumed were other workers.

This audio might then be broken down and sent to someone like Ernest, who used to make a living as an online tutor until the company he worked for replaced him with a chatbot. When we spoke, he was listening to minutelong clips of random dialogue slowed to 0.1x speed and marking when someone started and stopped speaking down to the millisecond. Many of the clips included a person talking with a chatbot and interjecting “huh” or “I see,” so he assumes he was improving AI’s ability to have naturally flowing conversation, but he has no actual idea.

As is standard practice in the field, the project was referred to by a codename and the client only ever as “the client.” The entire system is designed so that workers have minimal insight into the supply chain they are part of. If they find out who the customer is, they are contractually forbidden from telling anyone, even their own colleagues. Nor are they allowed to describe the details of their work beyond broad generalities like “providing expertise in XYZ domain to improve models for a top AI lab,” according to one Mercor agreement. So afraid are workers of inadvertently violating their confidentiality agreements and getting fired that when they discuss their work in public forums, they mask their already codenamed projects with additional codenames, for example by referring to a project called “Raven” as “Poe.”

Katya’s second project with Mercor was far more stressful. There was less work to go around, and it came in fits and starts. Managers would drop a message in the Slack channel saying new tasks were incoming in half an hour, and, she says, “everyone in Slack would drop what they were doing and jump on them like piranhas,” working as fast as they could while the bar showing how many tasks remained slid toward zero. Then they were back in Slack again, politely begging supervisors for more work and more hours, talking about their kids’ birthdays or their need to pay rent, or telling anyone who might be listening that their availability was wide open in case there was more work to be done. Soon, Katya was dropping everything at the sound of a Slack ding too. “Sometimes I’m on the toilet or at dinner and I get the Slack notification. I’m like, ‘Oh, sorry, I gotta work now.’”

That project soon ended and then came another. It was nearly identical to the first, which she had enjoyed, but now, on top of writing rubrics, she had to stump the model and complete the more difficult task in the same amount of time. She was also getting paid $8 an hour less. This is common at Mercor. Nearly every worker I spoke with reported that demands increased, time requirements shrank, and pay decreased as projects continued. Those who couldn’t meet the new demands got “offboarded” and replaced by new recruits.

Chris joined Mercor last year, after a difficult few months struggling to find film work. Unlike many people who suspect they’re casualties of automation, he knew for certain that this was the case. He’d had a recurring job drafting episodes for an unscripted television show — doing preinterviews, sketching scenes, writing the reality TV equivalent of a screenplay. But in late 2024, he was told the show would be running on a “skeleton crew” and his work was no longer needed. He found out later the company was using ChatGPT to draft new episodes. So that October, when Chris received an offer to write an entire sci-fi screenplay for a major AI company, he said “yes,” grim as the prospect was. Since then, he has gone from gig to gig. “This is my only source of income right now,” he says. “I know people who are award-winning producers and directors, and they’re not advertising that they’re doing this work, but that’s how they’re putting food on the table.”

His first jobs with Mercor were, like Katya’s, relatively pleasant and well paid, but soon came the 6PM fist-bump-emoji Slack exhortations to “come on team, let’s push through this,” followed by sudden halts and months of silence. “You were just constantly waiting for the crack of the starting gun at any hour of the day,” Chris says. Then it was crunch time again and managers, increasingly panicked as deadlines neared, started threatening workers with offboarding if they didn’t complete tasks quickly enough.

The time he spent working was tracked to the second by software called Insightful, which monitored everything he did on his computer. Time the software deemed “unproductive” could be deducted from his pay, and if a few minutes passed without him typing, the system pinged him to ask whether he had been working. Sometimes Chris saw people post in Slack that they’d gone over the target time on a particularly tricky task and that they hoped it would be okay; the next day, they would be gone.

Increasingly worried he would be offboarded too, he started working off the clock, deactivating Insightful while reading instructions so he could move faster. If he went over the target time, he turned the clock off and kept working for free.

Companies say this software is necessary to accurately track hours and prevent workers from cheating, which, in this case, means using AI, something all data companies strictly forbid. The ground truth of verified human expertise is what they’re selling, and when AI trains on AI-generated data, it gradually degrades, a phenomenon researchers call “model collapse.” Employees of data companies say it is a constant battle to screen out AI slop. For workers, AI is a particular temptation as pressure increases. When the retail expert trying to stump models with analytics dashboards saw her target time cut from eight hours per task to five, then to three and a half, she turned off Insightful and sought outside help. “To be honest, I went into Copilot and ChatGPT and put my prompt in there and said, ‘How can I work this so you guys can’t answer it?’” Then she went to another chatbot and asked if the prompt sounded AI generated and, if so, to make it sound more human.

“It’s just so horrible, the mental effect of it,” says Mimi, a screenwriter who has worked on multiple streaming shows and has been training AI for Mercor for several months. She found out about Mercor from a fellow screenwriter who dropped one of its job links in a Writers Guild of America Facebook group.

Like a lot of people in this line of work, Mimi is conflicted. “One documentary-maker who’s won Emmys, he messaged me and he was like, ‘I’m being handed a shovel and told to dig my own grave,’ and that’s exactly how everyone thinks about it,” she says. Still, as a single mom, she needed the money. She was thankful for the work at first; then the project was paused, unpaused, and paused again. For five weeks, she was told a project would be starting imminently. When it finally did, requirements had been added while the expected time shrank, and she raced to keep up under the watchful eye of Insightful. Someone on Slack put it well, she felt: they were living in a fishbowl, waiting for their human masters to drop in food, and only the ones fast enough to swim to the top could eat.

“Last night, I got so fucking stressed because my kid came home and it was 7PM, and I get this message, ‘The tasks are out!’ and I’m just working, just trying to get as many hours in before I can go to bed,” Mimi says, choking up. “I spend no time with my kid, and at one point, he can’t find something for school and I just start screaming at him. This work is turning me into a fucking demon.” She’s especially disturbed by the surveillance: “The idea that somebody can measure your time and that all the little bits that go into being a human are taken away because they’re not profitable, that you can’t charge for going to the toilet because that’s not time you’re working, you can’t charge for making a cup of coffee because that’s not time you’re working, you can’t charge for having a stretch because your back hurts. This is why unions were formed, so people could have guaranteed hours and guaranteed lunch breaks and guaranteed holidays and sick pay. This is the gig economy to the very extreme.”

This is what concerns her more than the AI itself: that it’s bringing to knowledge work the sort of precarious platform labor that has transformed taxi driving and food delivery. Meanwhile, she watches in horror as her colleagues, desperately grateful, rejoice at the 7PM announcement of incoming work.

“How long are these tasks expected to last?” one worker asked in Slack.

“I’m wondering too, I’d like to know whether I can sleep or not.”

With no answer forthcoming, they swapped tips on how to stave off sleep.

When Mercor began recruiting aggressively last year, it framed itself as a more worker-friendly version of the platforms that had come before it. Criticizing his rival Scale AI on a podcast, Foody, Mercor’s CEO, said, “Having phenomenal people that you treat incredibly well is the most important thing in this market.” Workers who joined during this time do report being treated well; the pay was better than elsewhere, and instead of being managed by opaque algorithms, as is common, there were actual human supervisors they could go to with questions.

But people who have worked in management at data companies say they often start out this way, wooing workers off incumbent platforms with promises of better treatment, only for conditions to degrade as they compete to win eight-figure contracts doled out by the half-dozen AI companies who are interested in buying this data in bulk. At Mercor, there was the additional complication of management largely consisting of people in their 20s with minimal work experience who had been given hundreds of millions of investor dollars to pursue rapid growth.

“I don’t care if somebody’s 21 and they’re my manager,” says Chris, the reality TV producer. “But they’ve never worked at this scale. When you try to find some kind of guidance in Slack, very maturely and clearly explaining what the situation is, you get a meme back with a corgi rolling its eyes and it says, ‘Use your judgment.’ But it’s like, ‘Use your judgment and fuck it up, and you get fired.’ You went to Harvard, you graduated last year, and your guidance for a group of people, many of whom are experienced professionals, is a meme?”

Lawyers, designers, producers, writers, scientists — all complained of inexperienced managers giving contradictory instructions, demanding long hours or mandatory Zoom meetings for ostensibly flexible work, and threatening people with offboarding for moving too slowly, threats that were particularly galling for mid-career professionals who felt their 20-year-old bosses barely understood the fields they were trying to automate.

“The founders pride themselves on ‘9-9-6,’” says a lawyer, referring to a term that originated in China to describe 72-hour workweeks associated with burnout and suicide but has been appropriated by Silicon Valley as aspirational. “You need to be accessible at all hours, and they’re going to pump out messages at 6AM, and you better jump because the perception is you will be offboarded and another person will replace you.”

“It’s not just that team leads are young, project managers are young, senior project managers are young. It’s that the senior-senior project managers, the ones responsible for the project in its entirety, are young. I guess that comes from the top because they’re young, right?” says Lindsay, a graphic designer and illustrator in her 50s who came to Mercor after 85 percent of her work evaporated over the past year, owing, she believes, to improvements in generative AI.

Increasingly desperate for work, she scoured job boards; it seemed the only listings matching her expertise were offers to help build the technology she blamed for demolishing her career. “I swallowed my hatred and signed up,” she says. After some initial work producing graphic-design data, she was invited to join a job for Meta grabbing videos from Instagram Reels and tagging whatever was in them. It was boring, and at $21 per hour, the pay was middling, but Lindsay needed the money. So, she discovered when she was brought into the project’s Slack, did approximately 5,000 others.

In early November, a Mercor representative announced that Lindsay’s project would be ending owing to “scope changes,” though workers had previously been told the project would run through the end of the year. Lindsay and thousands of others found themselves removed from the company’s Slack.

Soon, an email arrived in their inboxes, inviting them to a new project called Nova, paying $16 per hour.

David Park

Technology Editor

David Park covers the tech industry, startups, and digital innovation for the Journal American. Based in Silicon Valley for over a decade, he has tracked the rise of major tech companies and emerging platforms from their earliest stages. He holds a degree in Computer Science from Stanford University.
