
Labor as the First Algorithm: Reflections on Artificial Intelligence
In Greek myth, Talos was a bronze giant who guarded Crete. He had been forged by Hephaestus and given to Minos as a gift. Three times a day he circled the island, hurling rocks at any ship that drew near. He had only a single vein, running from his neck to his ankle, filled with ichor—the immortal blood of the gods—sealed with a bronze nail. When Medea persuaded him to remove it—promising immortality—his blood poured out and the giant collapsed.
Talos is the first algorithm ever recorded: a system with a clear command (guarding), a repetitive procedure (three circuits per day), and a single point of vulnerability (the nail). He does not think. He executes.
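The essay's three-part description of Talos can be sketched in a few lines of code. This is purely illustrative (nothing of the sort appears in Pasquinelli's book): a fixed command, a repetitive procedure, and a single point of vulnerability, with no deliberation anywhere in between.

```python
# A toy sketch of Talos as an algorithm: one command (guard the island),
# one repetitive procedure (three circuits a day), one point of failure
# (the bronze nail). Illustrative only; names are invented for this sketch.

class Talos:
    def __init__(self):
        self.nail_in_place = True  # seals the single vein of ichor

    def patrol(self, ships_sighted):
        """One circuit of the island: hurl a rock at every ship."""
        if not self.nail_in_place:
            raise SystemExit("ichor drained: the giant collapses")
        return [f"rock hurled at {ship}" for ship in ships_sighted]

    def run_day(self, ships_sighted):
        # The procedure repeats, unchanged, three times a day.
        report = []
        for _ in range(3):
            report.extend(self.patrol(ships_sighted))
        return report

talos = Talos()
print(talos.run_day(["Argo"]))  # executes; does not think
```

The point of the sketch is what is missing: there is no branch where the system questions its command. Remove the nail and the whole thing halts, exactly as the myth has it.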
Matteo Pasquinelli, in his book The Eye of the Master, begins from a similar question: what exactly does artificial intelligence imitate? The usual answer is: the brain. Neural networks replicate the structure of the nervous system, we are told, and one day machines will think like we do.
Pasquinelli disagrees. AI does not imitate the brain. It imitates labor—the way labor is organized, measured, and broken down into repeatable units. “Labor is the first algorithm,” he writes. And he means it literally.
The argument begins with Babbage. Before building his Analytical Engine, Babbage wrote a book about industry: On the Economy of Machinery and Manufactures (1832). There he describes how factories organize work—how they break it into simple motions, how they measure it, how they standardize it. The computing machine was nothing more than the encoding of that logic into metal.
What Babbage described in theory, Ford implemented eighty years later. The assembly line was exactly that: the decomposition of labor into motions so simple they could be mechanically repeated. Fordism was not merely a mode of production; it was an epistemology. Knowledge is extracted from the craft worker’s hands and embedded in the system. The worker becomes interchangeable. The machine “knows” what once only the human being knew.
What changed in the post-industrial era? Post-Fordism did not abolish this logic—it expanded it. Flexible labor, the gig economy, the platform: they all operate on the same principle, except now the extraction of knowledge happens in real time. Every click, every route, every rating feeds the algorithm. The Uber driver does not know how the fare is set—but the algorithm “knows,” because it has learned from thousands of other drivers.
Pasquinelli calls machine learning the “automation of automation.” Old automation replaced the worker. The new automation replaces the manager—the one who decides how work will be organized. The system learns on its own how to optimize. This is not “intelligence” in the human sense. It is pattern recognition and its reproduction.
There are two traps to avoid here.
The first is panic: AI will become “conscious” and surpass us. Pasquinelli is clear: this is fantasy. AI has no autonomy. It reproduces the relations that feed it. If the data are racist, it will be racist.
The second is complacency: since it is not autonomous, it is not dangerous. Here Pasquinelli becomes more interesting. The problem is not that AI will slip out of our control. It is that it will never slip out. It will do exactly what we ask of it—and what we ask of it is often destructive.
Is there a way out? Open-source software offers a hint. Linux, Wikipedia, Creative Commons: there collective knowledge is shared rather than extracted. But let’s not fool ourselves: even open source operates within the existing system. Amazon runs on Linux. ChatGPT was trained on data from the open internet. The issue is not “open or closed.” It is: who controls the labor–knowledge relation that feeds the system?
Pasquinelli does not offer easy answers. But he does offer a direction: the critique of AI cannot remain technical. If you want to change the algorithm, you have to change the relation that produced it.
In Stanley Kubrick’s film 2001: A Space Odyssey, there is an iconic scene in which HAL 9000, a highly advanced artificial computer that controls the spacecraft, refuses to obey the astronaut Dave Bowman, saying: “I’m sorry, Dave. I’m afraid I can’t do that.”
The scene is usually read as the machine’s rebellion. But if you look closely, HAL does not rebel. He executes. He has two contradictory directives: to complete the mission and to tell the truth to the crew. When the two collide, he finds the system’s solution: if there is no crew, there is no conflict.
Talos and HAL die in the same way: when someone finds the nail. But what kills them is not their weakness. It is their perfect obedience to a logic they never questioned—because they could not.
Perhaps this is the most honest sentence a machine has ever spoken: “I’m sorry. I’m afraid I can’t do that.”
Further reading
- Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence, Verso Books, 2023.
- Nick Srnicek, Platform Capitalism, Polity Press, 2017.
- Dan McQuillan, Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, Bristol University Press, 2022.
Text: Aktaioros Samano_