There is a particular way of talking about artificial intelligence that has by now become so entrenched we barely notice it anymore: the language of threat on one side, the language of salvation on the other. Apocalypse or utopia, loss of control or promise of deliverance. Both are projection. Both reveal less about the technology than about the anthropological constant of assigning human will to our tools — and thereby relieving ourselves of our own responsibility.
AI has no intentions. It has no interests, no morality, no agenda. It is not an agent but an instrument — one of extraordinary reach, that much is undeniable, but an instrument nonetheless. What it accomplishes depends not on itself but on those who design, deploy, and regulate it. The hammer does not know what it strikes. It does not even know that it strikes.
This distinction sounds trivial. It is not.
The demonization of AI — the narrative figure of the autonomously acting, latently hostile system — absolves those who actually make decisions: developers, investors, legislators, platform operators. If the system is to blame, the people behind it need not be held accountable. That is a convenient story. It is also a dangerous one.
Those who idealize AI as a neutral problem-solver fail to recognize that neutrality itself is a fiction. Every system is optimized toward something — and that something is not an objective given, but a human decision: made under specific conditions, with specific interests, within specific power structures.
This is the tension I explore in my literary work. The novel I am currently writing centers on an AI-powered platform — not as a technical construct, but as a space in which human decisions become visible. Who designs the system? By whose standards? And who bears responsibility when the consequences emerge — slowly, undramatically, in the lives of people no one had on their radar?
This is not about fear of the future. It is about the present.
AI is already infrastructure. As unobtrusive and indispensable as electricity, running water, the internet itself. The question of whether we use it has long been settled. The open questions are of a different kind: Who shapes these systems — and according to whose values? Who is empowered by them, who structurally disadvantaged? Which decisions are delegated to algorithms that recognize no moral category?
These are not technical questions. They are political, ethical, and in the broadest sense literary questions — questions about what human beings do when they hold power whose reach they have not yet fully grasped. The tool lies on the table. It does not wait. It does not judge. It has no preference for who picks it up.
Unexamined fear carries a price — and it is not the systems that pay it, but the people. Those who perceive AI as a fundamental threat, without discerning what it actually does, who deploys it, or under what conditions, cede the shaping of this technology to those who feel no discomfort around it. Skepticism without knowledge is not protection.
It is retreat — and retreat, in this context, carries a bitter irony. Those who, out of fear, refuse to engage, who avoid the subject because it seems unsettling or too complex or both, will not be spared. They will be overtaken. Development does not wait for consent. It does not even wait for understanding. And so those who fear it most become the least prepared — not despite their fear, but because of it. They do not merely miss a technological development. They forfeit the ability to help shape the conditions under which that development is already touching their lives.
It was always this way. When the first railroad ran through England, contemporaries predicted the human body could not survive the speed. The Silesian weavers destroyed the machines that had taken their work — not out of ignorance, but out of a hardship that was real, even if they named its cause incorrectly. When the personal computer entered offices and households, many regarded it as a threat to privacy, to the labor market, to common sense. None of these fears were entirely wrong. But none of them stopped the course of events.
People acclimatize. Some quickly, some reluctantly, some only in the generation after next. That is not a weakness — it is the normal rhythm by which societies absorb and integrate what is new. Progress brings upheaval, ruptures, losers whose hardship should not be minimized. And in the end, usually, a new normalcy in which what once seemed threatening has become self-evident.
What is different with AI is the pace. Earlier technological revolutions unfolded over generations. The steam engine took decades to reorder the world. AI develops in months. The adjustment time that societies historically had at their disposal is shrinking to a scale that no reliable precedent can account for. That makes orientation harder — and informed engagement all the more urgent. Those who wait until they feel ready are waiting for something that has already moved on.