The more we delegate to AI and related technologies tasks that until very recently were strictly human, the more we reveal our own fragility: a fragility usually hidden behind a false sense of omnipotence, masked by grand words like “progress” or “future” that have lost their meaning.
In fact, at a recent conference, Argentine philosopher Darío Sztajnszrajber argued that AI reveals our fragility precisely because it exposes the limits of our knowledge and, in many cases, the limits of the control we have over our own lives.
In the face of AI’s capacity to process immense amounts of data, predict trends, and provide responses at a speed that exceeds our abilities, humans must accept our limits and vulnerability: we are finite because we depend on time, the body, and memory.
Another philosopher, South Korean Byung-Chul Han, had already warned that modern technologization exposes human powerlessness in the face of systems that promise total efficiency while at the same time laying bare the precariousness of human life.
Last century, the (controversial) German philosopher Martin Heidegger had already pointed out that modern technology is not neutral; that is, it is not simply a matter of “how it is used.” Today’s technoscience confronts us and reveals our condition of being “thrown” into a world that we always seek to control but never manage to, a world in which certainties no longer exist.
In other words, AI functions as the famous “black mirror” (created by ourselves) that amplifies what we refuse to see: the fragility of the human condition. This psychological self-deception appears when we project a false self-sufficiency, believing that “we can do everything” because we live surrounded by devices and systems that make us (almost) all-powerful.
In the context of that mirage, we forget that, despite all technology, we are still finite beings: we get sick, we depend on chance, on others, and on the natural world. And in the end, we all die.
That “omnipotence” is not just a mistake in perception but a cultural mask: it translates into discourses of unlimited progress, infinite growth, and technological perfection. In reality, as Nietzsche would say, it is a new form of idolatry that hides our fragility behind the myth of technological self-sufficiency.
How do we overcome this situation? Sztajnszrajber proposes that love cracks the illusion of omnipotence because it implies recognizing that something is missing and that we are incomplete. To love is to expose oneself, to depend, to accept one’s own vulnerability and that of the other. I cannot “program” the other to love me as I want, nor can I guarantee or control their presence.
Paraphrasing the words of French philosopher Emmanuel Lévinas, the face of the other challenges me and decenters me because it reminds me that I am not absolute. Love, unlike AI, returns us to the ground of shared fragility.
AI “strips bare” our fragility. Love makes it livable. The danger lies in taking refuge in the fantasy of technological omnipotence and forgetting that we humans flourish precisely in shared vulnerability.