
Project Vision 21

Transforming lives, renewing minds, cocreating the future


17 Years of Archives

WEEKLY COMMENTARY (AUDIO, 4 MIN., AI GENERATED)

VISUAL PRESENTATION

DISCLAIMER

The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.

 

Today’s Techno-Sirens Are More Deceptive Than the Mythical Ones

Book 12 of the Odyssey recounts Odysseus’ encounter with the Sirens—those dangerous creatures whose seductive songs lured sailors to a deadly fate. Following Circe’s advice, Odysseus prepared himself and his crew in such a way that, working together, they all escaped the trap unharmed.

Odysseus instructed his companions to plug their ears with beeswax so they wouldn’t hear the Sirens’ song, and he had them tie him to the ship’s mast so he could listen without giving in to temptation—ordering them not to release him until they had safely passed the danger. Mutual loyalty prevented disaster.

A recent rereading of this well-known episode led me to reflect that, in the journey of life—a journey of constant self-discovery and reconnection—we cannot face great challenges alone. We can only overcome them if we are guided by wise counsel and supported by people we trust and who trust us.

In fact, the Odyssey can be seen (among many other interpretations) as a profound symbolic depiction of the journey each of us undertakes through time, identity—or rather, identities—and transformation. In a sense, we are constantly in the process of becoming. And from time to time, we hear those seductive songs of the past trying to distract us.

The Sirens promised Odysseus knowledge of all things. Today’s Sirens, now digitized and driven by algorithms, seduce us with nostalgia, fears, and intense emotions—carefully calculated to replace “logic with spectacle, rational discourse with emotional imagery, argument with effect, and truth with appearance,” as Spanish philosopher José Manuel López García puts it.

A metamodern and existential rereading of Odysseus’ encounter with the Sirens allows us to consider that we have already been seduced by those “songs” (that is, narratives) that we once accepted as unquestionable truths. Though they remain emotionally powerful, we now see that they lead us toward both personal and global shipwreck.

In the Odyssey, Odysseus is not simply returning home—he is also returning to himself. This can be interpreted as the act of reconnecting with the emerging future. The danger of the Sirens lies in their song’s ability to interrupt that process with the seductive promise of a “shortcut” that, far from bringing us closer to the end of our journey of self-discovery, actually nullifies it.

In the 21st century, the “Sirens” sing to sell, to hijack our attention, and to prevent us from having trustworthy people in our lives who might help keep us from being seduced. But I fear it may already be too late. The techno-sirens whisper through social media, fragmenting wisdom and thought.

We no longer sail mythical seas, yet the techno-sirens still sing—not from rocky islands, but from our screens. What does it mean that the only mast I can tie myself to is a conversation with an algorithm?

In the age of techno-sirens, we should learn to sing again. We will not silence the techno-sirens, but we can sail beyond them if we remember that the journey is not only forward, but inward.

Are We Singing Synthetic Songs? Lessons from an AI-Taught Bird

“Why does the bird sing?” said the Master. “Not because he has a statement, but because he has a song.” Anthony De Mello, The Song of the Bird

A recent article published in National Geographic describes a fascinating experiment in which experts from the University of Buenos Aires created a “robot tutor” that, using artificial intelligence (AI), taught young birds how to sing songs they hadn’t learned from adult birds.

The researchers compared recordings from the 1960s of the songs sung by these birds (Rufous-collared sparrows, locally known as chingolos) with modern recordings from 2020. They found that while some songs from 60 years ago had endured, others had vanished. Using mathematical models, the AI generated “synthetic songs,” which the birds responded to as if they were natural ones.

According to the article, the young chingolos (Zonotrichia capensis) incorporated the synthetic songs into their repertoire in a way that was “statistically indistinguishable” from how they adopted real birds’ songs. The researchers believe this experiment shows that new technologies can “preserve and even revive” the “cultural aspects” of biodiversity.

In simple terms, the synthetic songs gained “biological credibility” among the birds—they were accepted as their own—thus supporting the recovery of a “cultural diversity” that might otherwise have been lost.

This fascinating Argentine experiment marks a significant advance in the use of AI and robotics for both conservation and the study of animal culture. But it raises numerous and unsettling questions.

We can’t compare chingolos in a park near Buenos Aires with the complexity and diversity of 21st-century humanity, but if AI has proven it can alter the culture of birds, it’s clear it can alter our culture as well.

Put differently, just as the chingolos accepted the synthetic songs taught by the robot tutors and gave those songs the same credibility as the natural ones, are we humans now singing “synthetic songs” to which we uncritically grant the same credibility and acceptance as if they were “natural”?

From another angle, are we delegating to AI the creation and teaching of new “songs” (narratives, thoughts, ideas, perspectives) that younger generations will absorb to fill the void left by what they didn’t learn from their own parents? Perhaps we should recognize that “social media” has already become our robot tutor.

Let me be clear: I fully understand that an experiment with birds in South America cannot be generalized to all of humanity. But I can’t stop thinking about the many experiments with mice or guinea pigs that eventually led to real-world actions—sometimes in favor of, sometimes against—human beings.

Maybe these Argentine chingolos are acting like the proverbial canary in the coal mine, warning us that the boundary between “artificial” and “natural” has become blurred—perhaps even erased—and with it, the lines between reality and fantasy, truth and illusion, culture and algorithms, and even between past and future.

The experiment with the chingolos reflects the possibility that our own cultural creations—art, rituals, traditions, knowledge, and wisdom—could, thanks to AI, disappear and be re-created as easily as the birds’ songs, while we humans remain as unaware of this shift as the chingolos were of theirs.

When Others Tell Our Story (Distorted) Without Us Knowing

I recently came across the story of American musician Sixto Rodríguez, who, despite having little success in his home country, spent decades unaware that he had become famous in South Africa—where, at the same time, people believed he had already passed away. It took the arrival of the internet and the involvement of his daughter and some of his fans to correct that situation.

Rodríguez, from Detroit, recorded two albums in the early 1970s and soon faded into obscurity, turning to construction work to support his family. But in South Africa, he was a cultural icon: his songs were passed down from generation to generation. He was unaware of his fame due to the geographic and cultural separation of that era.

Rodríguez’s story made me reflect on how rarely the identity we construct for ourselves in our minds and hearts matches the identity others create of us through their interactions—even when we don’t know it. In other words, the “self” we are is not limited to our internal understanding of it, but is instead part of a shared narrative.

As psychologists Joseph Luft and Harrington Ingham explained in 1955, there are things we know about ourselves that others also know; things that others know about us, but we don’t; things we know about ourselves that others don’t; and things neither we nor others know about us.

Rodríguez’s experience reflects and illustrates those four quadrants of what is known as the Johari Window (named after its creators): the open area, the blind area, the hidden area, and the unknown area. But seventy years later, social media and related technologies have created a situation in which the privacy of the blind, hidden, or unknown areas no longer truly exists.

This situation could be understood as what philosopher Miranda Fricker calls epistemic injustice—a kind of harm in which a person is wronged either by lacking access to knowledge about themselves or the world, or by not being recognized as a credible knower of their own experience.

We might say Rodríguez suffered this kind of epistemic injustice by not being recognized by those who did know him, and by not knowing how well he was known in other parts of the world—or how inaccurately his story was being told.

The curiosity of his fans, the persistence of his daughter, and the power of the internet helped to correct these distortions and gave Rodríguez back a life devoted to music—a life he had unknowingly sparked in a distant place.

Today, despite all our technologies, we live with only a partial awareness of ourselves, disconnected from the feedback, reflection, and recognition we need. Our extended self—the version of us living in the minds of others—remains out of reach without dialogue, without witnesses, and without trust.

Perhaps someone, somewhere, in another time (the past or the future) or another dimension (digital space, imaginal realm), is waiting for us to discover who we already are—and have always been—for them.

Why Do People Say “That’s Impossible” When They Should Say “I Don’t Know How to Do It”?

A few years ago, we decided to replace a couple of doors inside the house with sliding doors. We consulted someone who had helped us with other remodeling projects, and their response was, “That can’t be done.” As we later discovered, what this person should have said was, “I don’t know how to do it.”

There are many similar examples of situations where, whether consciously or not, we project our own limitations and ignorance, mistakenly assuming that if we don’t know how to do something or can’t do it, then no one else can either. This was exactly the case with the contractor I just mentioned—when, in fact, there was clear evidence that it could be done.

On a humorous note, these kinds of situations reminded me of what often happens in cartoons, when a character only starts to fall into the void after realizing the law of gravity exists—or when they suddenly become aware that they’re in mid-air and about to fall, as if ignoring the laws of nature could somehow suspend them.

But projecting our ignorance onto others and imposing that reality on them has serious consequences in real life. Unlike in cartoons, where no matter how high the character falls from or what they crash into, they bounce right back up, in the real world the outcomes aren’t so forgiving. In other words, believing something is impossible is often enough to make it become impossible.

For example, before 1954, numerous athletics experts believed the human body was simply not capable—nor would it ever be—of running a mile (about 1,609 meters) in under four minutes. It was even considered a “natural barrier” that no athlete would ever overcome. That is, until British runner Roger Bannister broke it on May 6, 1954.

That day, Bannister (who later went on to have a successful career as a neurologist) completed the mile in 3 minutes and 59.4 seconds. But what’s even more remarkable is that his record only stood for 46 days, until Australian John Landy lowered it to 3:57.9. Today, the record belongs to Moroccan runner Hicham El Guerrouj, who ran 3:43.13 on July 7, 1999, in Rome.

But how and why was Bannister able to surpass what seemed insurmountable? Because he didn’t buy into the belief that the so-called barrier was truly unbreakable. Bannister is often credited with the saying, “The man who can drive himself further once the effort gets painful is the man who will win.” In other words, by not internalizing the narrative of the unbreakable limit, he was able to transcend that limit.

Bannister’s attitude made me think of an idea from the ancient Stoics—the idea that obstacles are merely illusions, or if you prefer, forms of self-deception imposed on us by others, or self-imposed when we accept limiting narratives that we cling to as immovable, unquestioned descriptions of reality.

This quote is attributed to Marcus Aurelius:

“The impediment to action advances action. What stands in the way becomes the way.”

When Reason Sleeps, the Monsters Awaken

Goya: “El sueño de la razón produce monstruos” (Public domain)

There was a time—not so long ago—when every now and then a story would surface that was so unexpected, so distinct from the rest, that it invited the reader to pause, ponder, and perhaps even share it. Today, in an age when every piece of news is designed only to trigger a flicker of attention and a quick reaction, such moments of genuine discovery have become painfully rare.

Yet, they still exist.

Take, for instance, the recent announcement that from August 15 to 17, 2025, Beijing will host the first-ever World Humanoid Robot Sports Games—essentially, an Olympics for robots. This event is worth more than just a passing glance. It deserves deep reflection, not only because it’s something new, but because it signals a profound shift: once again, a space once reserved for human beings is no longer exclusively ours.

Yes, it’s clear that this “Olympics” is, at heart, a promotional showcase for cutting-edge technology—an effort to push new products into the market. But still, humanoid robots created by the world’s most powerful corporations are expected to compete in events modeled after traditional human sports: races, gymnastics, even soccer.

Which raises the question: How long will it be before human Olympic Games include humanoid participants? And how long after that before human athletes are replaced—or pushed aside—by their robotic counterparts, just as machines are already doing in fields ranging from repetitive labor to the most creative endeavors?

What we’re witnessing is a real-world situation that, not long ago, could only be imagined in science fiction. But today, the boundary between fiction and reality has become so thin, so entangled, that it’s increasingly difficult to tell where one ends and the other begins. And when that line blurs, when reality disguises itself as fantasy and fantasy takes root in reality, reason—the human capacity to think clearly—begins to fall asleep. And when reason sleeps, we begin to dream monsters.

This is not a new insight.

As far back as 1799, Spanish painter Francisco de Goya captured it in plate 43 of his haunting series Los Caprichos. The image, titled The Sleep of Reason Produces Monsters, shows a man slumped over his desk, head resting on folded arms, as he is surrounded by eerie, nightmarish creatures—perhaps figments of a dream, or perhaps something darker, more monstrous.

But what kind of “sleep” was Goya referring to? Some suggest he was depicting literal dreams or daydreams. Others—perhaps more perceptively—believe Goya was warning about what happens when reason itself, our ability to understand the world and act responsibly within it, falls dormant.

Today, our “sleeping reason”—our failure to reach our true potential despite having access to astonishing technologies—is being magnified by artificial intelligence. As cognitive scientist John Vervaeke warns, AI may be eroding our autonomy not by overpowering our reason, but by dulling it—by encouraging habits of irrationality that distance us from the very essence of what it means to be human.

The Abyss Between What We Dream to Be and What We Show to Be

In these times when screens replace reality and social media profiles stand in for our identity—when every action is posted, and every experience is monetized—we slowly lose our connection to who we are and who we long to become.


By anchoring our identity in the number of “likes” we receive, we reduce our being to whatever fits the new Procrustean bed—now digitalized—shaping ourselves according to what shapes us in the moment. In doing so, we push aside the deep longing to become what we once hoped to be in order to bring meaning and direction to our lives.
 

Long forgotten is that ancient call from someone named Saul of Tarsus, urging us not to conform to the dominant molds of any era, but to be transformed through the constant renewal of our awareness.
 

Because, ultimately, that self that flows with life—the one born of deep questions without answers, rooted in authentic values and aspirations—disappears when replaced by another kind of identity: an idealized version of the self that exists solely to please others and to grab attention.
 

This is no longer about a legitimate desire to grow or to share the good in our lives with others, hoping they too will flourish. Instead, we live focused entirely on projecting an image that will be accepted—regardless of whether that image has anything to do with our reality.
 

We’re not suggesting a return to the past, much less that the past was somehow better. That would be self-deception. What we’re suggesting is becoming aware that instead of growing inward, we’ve scattered ourselves outward, posting images that leave our inner lives increasingly hollow.
 

By constantly repeating gestures, phrases, or styles just because they perform well online, we end up behaving as if life itself were a never-ending self-promotion campaign. The pursuit of approval becomes routine, and with it, the aspirational self—that part of us that invites change, even if uncomfortable—is pushed into a corner.
 

Even more troubling, over time, we may forget what we once dreamed or aspired to become, to the point of confusing the edited (and published) image with the real person. As we begin to live according to external expectations—wearing the masks of others, as Parker Palmer would say—we become yet another simulation in a society flooded with simulations, as Baudrillard warned.
 

What once seemed like success becomes a burden, and what looked like connection turns into isolation.
 

But not all is lost. Returning to the aspirational self doesn’t require turning off your phone or deleting your accounts. It simply requires pausing for a moment and asking yourself: Am I choosing what I show, or just copying it? Does this version of myself help me grow?
 

That’s why reclaiming the aspirational self means allowing ourselves to be unfinished and imperfect—but authentic. In a world saturated with simulators, rediscovering who we truly want to be is an act of courage—and the first step toward a life with meaning.

 

When Silence Speaks, A New Life Begins

We live in a time when chaos and noise seem to cover every part of our lives — bad news, shallow opinions, constant crises, and rapid, disorienting change. In the middle of this whirlwind, it’s only natural to feel fear, confusion, frustration, or even resistance to anything that hints at “change.”
 

In moments like these, words often fall short — or, as British philosopher Tim Freke puts it, they become “irrelevant.” Sometimes speaking too much doesn’t open doors; it closes them.
 

And I’m not talking about doors to business deals or new opportunities — I’m talking about the deeper portals that lead to our future.
 

That’s why today, just as every time I write or speak, I’m not offering you theories or solutions. I don’t have them — and to be honest, I never have. All I can offer is a simple invitation: to create a space — whether within yourself or shared with others — where silence is allowed to speak, and words are allowed to fall away.

 

When the noise inside and around us begins to quiet down, when we stop clinging to our “certainties” and self-imposed limiting narratives, something new begins to emerge. It’s not something we can force or manufacture. It rises naturally, like a hidden spring, from the open mind and the open heart.

 

In openness, we allow new life to begin.
 

But stepping into that openness isn’t easy. Between what we know and what is just starting to show itself, there’s an uncertain space — a space of ambiguity. It’s not the firm ground of the familiar, nor the blind leap into the unknown. It’s a threshold — a place where the old and the new brush against each other, sometimes clashing, sometimes embracing.
 

In that ambiguity, we allow the old and the new to meet.
 

And in that delicate, luminous meeting, we need more than intellectual understanding. We need faith — not in the sense of adopting a dogma or joining a group, but the deep kind of faith that connects us to life itself. A trust that something greater is already at work.

 

In faith, we allow the new life to become a living truth.
 

The greatest transformations often begin in the smallest of ways — with a silence that dares to listen, with an openness that dares to trust, with a heart that, even trembling, dares to believe that something beautiful is already on its way.

 

We need “islands of coherence” — as scientists like Ilya Prigogine and thinkers like Otto Scharmer describe them — small spaces of hope in the middle of the chaos. Places where we don’t waste energy fighting the old or denying the pain of the present but instead tend to the seeds of the new — seeds that are already quietly breaking through the soil.
 

We don’t have to understand it all to take the first step. All we need is to open ourselves to the possibility of a fuller, brighter, more authentic life that is trying to emerge through us, here and now, in the silence between words.

 

Interwoven News Stories Reveal New Dimensions of Our Consciousness

In the frenzied, fast-paced rhythm of today’s news cycle—what Walter Ong once described as “pumping data at high speed through information pipelines”—stories overlap and pile up without offering direction or purpose, and often without any meaningful context beyond novelty or entertainment. But there are exceptions.
 

Recently, for instance, a report emerged based on an article in the journal The Astrophysical Journal Letters, revealing that the planet K2-18b—located 124 light-years from Earth—might be a habitable water-covered world. According to researchers from the Institute of Astronomy at the University of Cambridge, it could host liquid water across its surface.
 

More specifically, the scientists detected “the most promising signs yet of a possible biosignature” on that exoplanet. In plain terms, life—likely microbial—might exist or might once have existed on K2-18b.
 

Almost simultaneously, another headline reported that experts from Google’s DeepMind division declared that artificial intelligence has now grown “beyond human knowledge.” In their presentation, Welcome to the Era of Experience, researchers David Silver and Richard Sutton argued that AI will develop “incredible new capabilities” once it begins learning through experiences and interactions.
 

Meanwhile, yet another report detailed how two scientists from the University of California, San Diego, identified the “rules” the brain uses to form memories. The most significant rule? The brain adapts these rules to determine how neurons communicate based on what is being learned.
 

According to researchers William Wright and Takaki Komiyama, the brain’s billions of neurons simultaneously apply several different sets of learning rules. This allows the brain to encode new information “with greater precision.” This, they say, is how memory is formed.
 

Taken together, these three stories (and others like them) make it clear that humanity is now measuring times and distances—both natural and artificial—that are wildly disproportionate to our capacity for understanding. They render our human existence small, fleeting, and nearly irrelevant.
 

These ideas resonate with the work of contemporary philosopher Benjamin Cain, who explores the notion of deep time—a scale of time so vast it exceeds human comprehension yet constantly surrounds us like an impersonal abyss.
 

Similarly, philosopher Tim Morton discusses the existence of hyperobjects—entities so massive in temporal and spatial dimensions that they escape the scale of human cognition. They cannot be fully visualized, located, or sensed through ordinary means or even our most advanced technologies.
 

And in a 2015 paper, Greek researchers Helen Lazaratou and Dimitris Anagnostopoulos introduced the idea of transgenerational objects—psychological constructs unconsciously passed from one generation to the next, shaping the thoughts, behaviors, and emotions of multiple generations.
 

If Deep Time reveals the sacred vastness of our universe, Hyperobjects reveal the unseen mesh we’re embedded in, and Transgenerational Objects reveal the hidden stories we carry, then we are, indeed, on the edge of consciously seeing deeper and wider.
 

So, we are left with a profound question: Will we learn to live—and co-live—within this new spacetime entanglement and psychohistorical depth? Or will we stubbornly cling to a separate, autonomous “self”?

 

The Lack of Good Questions Disconnects Us from the New Future

In a recent interview, Spanish philosopher Juan Carlos Ruiz stated, “Nobody teaches us how to ask questions.” He then expanded on this idea, explaining that we lack a “pedagogy of the question” and, as a consequence, we also lack an ethics of dialogue—a key element for connecting with the emerging future.

As the eminent Brazilian educator and philosopher Paulo Freire noted last century, our educational systems have placed so much emphasis on answers that they’ve neglected the (perhaps even greater) importance of questions. While answers may demonstrate a degree of knowledge, questions generate new knowledge.
 

In our current era, as Ruiz points out, the situation has become even more serious. After so many decades of prioritizing answers, the rise of artificial intelligence has blurred the lines between “getting answers” and “gaining knowledge.” But this process often skips the personal transformation that comes from engaging with new knowledge.
 

This ease and speed of access to answers, Ruiz suggests, limits (and I would add, hinders) the expansion of our language. It leads to what he calls a “lexical poverty,” which in turn “often degenerates into cognitive poverty.”
This brings us to a timely quote from Wittgenstein: “The limits of my language mean the limits of my world.” (Tractatus, 5.6). For Wittgenstein, language is a mediator between us and reality (the world), whether in the context of formal logic (Tractatus, 1921) or within shared social practices (Philosophical Investigations, 1953).

 

When we stop asking questions, when we only seek answers, when vocabulary and understanding diminish, and when propositional knowledge (as John Vervaeke puts it) or what Ruiz calls the “declarative dynamic” is overemphasized, our world becomes narrower. Other ways of knowing—through processes, perspectives, and participation—are abandoned.
 

Vervaeke describes this condition as the “tyranny of propositions”—a mindset in which truth is reduced to the correct articulation of data (“Rome is the capital of Italy”), without questioning our ability to understand that data, its context and relevance, or our relationship to the community from which that data arises.
 

In short, we become disconnected from reality because we turn into spectators of our own lives, lacking the ability to rebalance our systems of knowledge. Without that rebalancing, we remain stuck in fragmentation—a state Vervaeke famously describes as “the meaning crisis.”
 

Freire advocated for an education rooted in curiosity, critical thinking, and above all, dialogue. None of this is new—Socrates was practicing it 2,400 years ago. But this isn’t about returning to the past or recreating it in the present; it’s about moving away from the shortcuts and superficialities that dominate today’s culture (think short social media videos).
 

If the future depends on our capacity to ask questions—and if no one is teaching us how to do that—then perhaps we need to return to the enduring questions of the past that are still relevant today. Starting, perhaps, with one of the most existentially iconic and paralyzing questions: “To be, or not to be: that is the question.” Let’s try it. 

 

We Anthropomorphize AI and Robotize Humans

I recently read an article that analyzes two trends: the number of older adults worldwide is increasing, and simultaneously, more and more people of all ages are feeling lonely. The confluence of these two trends means that social isolation among older adults is inevitable, according to a recent study published by the American Sociological Association.
 

Globally, according to the World Health Organization, one in four (25%) older adults lack meaningful social relationships, and four in ten (40%) have no consistent companionship in their lives.
 

Furthermore, according to Gallup, 20% to 33% of people globally feel lonely or experience loneliness, with those under 24 being the most affected. In the United States, 52% of adults report “feeling lonely regularly,” according to the American Psychiatric Association.
 

But these two trends, already worrying in themselves, seem to converge with a third growing trend: the anthropomorphization of interactions with artificial intelligence—that is, attributing human qualities to the responses generated by AI and, therefore, reacting emotionally as if a human had responded.
 

According to Carmen Sánchez, a Spanish philosopher and educator, the anthropomorphization of AI is a “major philosophical problem” in our time. It consists of believing that “because (AI) returns correct and appropriate linguistic constructions, we are actually participating in a meaningful dialogue.”
 

In other words, we are so detached and isolated from ourselves that we no longer even recognize ourselves when we look in the mirror of our own creations. In the context of our loneliness and the overwhelming need to satisfy our desire to speak to someone, we even believe we are speaking to someone when in reality we are not. 
 

In a recent publication, Sánchez provides solid philosophical foundations (J. L. Austin’s philosophy of language, John Searle’s philosophy of mind) to refute “the idea that computational systems possess a true mind or intentionality in linguistic communication.” Therefore, “the attribution of understanding is also erroneous.”
 

Our loneliness and isolation have reached such a level that, as Sánchez explains, we confuse “the generation of coherent text” with speaking to another person. More specifically, we confuse “the appearance of a phenomenon with its underlying reality” by attributing conscious and intentional acts to AI. And this confusion has consequences.
 

We so desire someone to listen to us that we not only accept the simulation as reality (Plato, Jean Baudrillard), but we also become emotionally and cognitively attached to that simulation, enjoying it when the AI “says” “That’s a very good question” or “That way of expressing yourself is very beautiful.”
 

In other words, loneliness and isolation create a suitable context for deceiving ourselves into thinking we’re not alone. When we uncritically accept the AI simulation as part of (or the totality of) our reality, we are close to uncritically accepting any other simulation just because it “tells” us how intelligent and deserving we are.
 

That's why I keep writing, because I still want to express my own thoughts, feelings, emotions, dreams, frustrations, successes, and failures, not those of some unknown algorithm.

 
