🔄 Turkish version of this article (4-minute read): "Yapay Zeka Gibi Davranan 700 Kişi"
Her name was Natasha.
She never ate. Never slept. It was said that she could design an app for your business while chatting with you in real time. All without writing a single line of code. Natasha was efficient, polite, and always available.
There was just one problem.
Natasha was not an artificial intelligence.
According to Business Standard, she was the camouflage for 700 Indian contract workers operating behind the scenes. These individuals, hired by Builder.ai (also known as Engineer.ai), were imitating what investors, clients, and headlines desperately wanted to believe: AI had arrived, and it worked.
In reality, Builder.ai had built a complex illusion system. The assistant named “Natasha” was not based on large language models or machine autonomy. She was powered by people writing code by hand, simulating automation behind the scenes (Divyansh Bhatia, Medium.com). The company’s marketing leaned entirely on the “AI” label, but its actual operation resembled a 700-person call center.
But the problem wasn’t just this deception.
The real problem was how easily everyone believed it.
The Hype That Erased the Human
Builder.ai was backed by major institutions, including Microsoft and the Qatar Investment Authority. According to Bloomberg, the company had also faked partnerships to exaggerate its growth. Yet no one seemed eager to look behind the curtain. The product was labeled “AI,” the assistant had a human-like name, and the interface looked sleek.
And that was enough.
This wasn’t just a case of deception. It was also a case study in our collective desire to believe, and in society’s shared hopes. Investors and customers had become co-authors of an illusion in the name of automation. Everyone wanted a faster, cheaper, harder-working intelligence.
What they got instead was a room full of overworked employees pretending to be the future.
When Labor Becomes Shameful
There’s a deeper asymmetry in this story. One that goes beyond commercial fraud.
By presenting 700 Indian workers as “AI,” Builder.ai did something far more destructive than exaggeration. It rendered human labor invisible.
Because admitting that the product was built with human effort would reduce its perceived value.
In this model, the labor of trained and skilled developers turned into something that had to be hidden. Not for reasons of privacy or intellectual property, but because of perception. Human developers were seen as error-prone, expensive, and slow. But a machine? It was the embodiment of efficiency.
Thus, labor was not only outsourced; it was rebranded as a flaw.
This is a new form of erasure of human labor. Moreover, it is dangerous not just for workers, but for society’s very understanding of intelligence, work, and value.
Governance That Forgot to Ask
Throughout this process, no financial, legal, or technical institution asked the simple question: Who is actually doing this work?
There is no legal definition of what can be labeled “AI.” There is no standard requiring disclosure when human labor forms the core of an automation system. And there is no consistent framework to assess whether a product’s performance comes from computation, labor, or a hybrid process.
This is a governance blind spot.
It is not only a legal void, but a linguistic one as well.
We embrace AI so enthusiastically that we have stopped asking what it actually is. We see its branding, hear its voice, and assume the future is already working behind the scenes. Yet behind the scenes may lie tired fingers and unnamed human minds.
Not a Scandal, But a Mirror
This story isn’t just about a bankrupt company. It reveals how we collectively feed our desire to believe in machine intelligence, even when that intelligence does not exist. It concerns our tendency to reduce human complexity to “backend support services.” And it reveals a deeper confusion as well: When does assistance become autonomy? When does delegation become deception?
If we cannot answer these questions, it won’t be the machines that lose their credibility. It will be us.
And if this illusion continues unchecked, the consequences won’t remain confined to tech circles. Employment data will be distorted. Legal responsibility will shift. Safety audits may miss critical risks. When we erase human labor, we also erase the means to govern it.
Maybe what we need right now is not more artificial intelligence branding, but a deeper respect for the labor behind the facade. This means recognizing what we are willing to fake in the name of progress, and whose labor we erase to keep the illusion alive.
Because in the end, Natasha never lied.
We did.
🧰 Alignment Toolbox
If the market can’t tell a chatbot from a call center, disclosure isn’t optional. The following steps may help build public trust in intelligent systems and create a healthier market structure:
Disclose the human share of every “AI” workflow: percentage of tasks done by people must appear in marketing and term sheets.
Issue an “AI Transparency Seal” (ISO/IEEE) that grades systems: Automated, Augmented, or Manual Behind Interface (see the sketch after this list).
Audit automation claims before funding or IPO: independent reviewers verify the seal just as accountants verify revenue.
Clear labels protect investors, spare workers from invisibility, and keep safety reviewers from chasing phantom algorithms.
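To make the first two items concrete, here is a minimal sketch of what a machine-readable transparency label could look like. Everything in it is a hypothetical illustration: the schema, the field names, the grade categories, and especially the thresholds are assumptions, not an existing standard.

```python
from dataclasses import dataclass
from enum import Enum


class AutomationGrade(Enum):
    AUTOMATED = "automated"              # model-driven end to end
    AUGMENTED = "augmented"              # humans and models share the work
    MANUAL_BEHIND_INTERFACE = "manual"   # people do the work behind an AI-branded UI


@dataclass(frozen=True)
class TransparencyLabel:
    """Hypothetical disclosure record: the human share is a first-class field."""
    product: str
    human_task_share: float  # fraction of tasks completed by people, 0.0 to 1.0
    grade: AutomationGrade
    audited_by: str          # independent reviewer, as accountants verify revenue

    def __post_init__(self):
        if not 0.0 <= self.human_task_share <= 1.0:
            raise ValueError("human_task_share must be between 0.0 and 1.0")


def grade_from_share(share: float) -> AutomationGrade:
    # Illustrative cutoffs only; a real standard would have to define these.
    if share < 0.1:
        return AutomationGrade.AUTOMATED
    if share < 0.5:
        return AutomationGrade.AUGMENTED
    return AutomationGrade.MANUAL_BEHIND_INTERFACE


# Under this scheme, a "Natasha" built by 700 people gets labeled plainly:
label = TransparencyLabel(
    product="Natasha",
    human_task_share=0.95,
    grade=grade_from_share(0.95),
    audited_by="Hypothetical Independent Reviewer",
)
print(label.grade.value)  # -> "manual"
```

The design point is simply that the human share becomes an auditable number rather than a marketing footnote: a reviewer verifying the seal checks one field, the same way an accountant verifies revenue.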
This article is part of The Alignment Periphery, a biweekly series of short reflections at the intersection of AI policy, ethics, and economics. Each piece explores real-world signals that reveal the risks and opportunities we must confront to shape intelligent systems, before they begin to shape us.