
Project Vision 21

Transforming lives, renewing minds, cocreating the future

16 Years of Archives



The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.


What will emerge once all the new technologies now scattered are merged?

A few decades ago, looking at the telephone of that time, and then at the radio, the television, the camera, the video recorder, the maps, the flashlight, and many other artifacts, I could never, not even in a moment of high imagination, have anticipated that one day all these devices would be merged into what we today call a smartphone.

But now, with the previous experience of seeing how a single device or artifact emerges from different technologies, we can and must ask ourselves what will emerge once quantum computing, neurological computers, artificial intelligence, new forms of energy, robotics, and other advanced technologies merge into a single “reality.”

Everything points, first of all, to the arrival of an almost immortal synthetic human, with physical, mental, and cognitive capacities and abilities unthinkable for us, mere biological humans, mortal and certainly limited and finite.

In other words, just as the disparate elements mentioned above merged into smartphones, so too (dare we suggest) will the disparate elements of the new technologies merge, but no longer into something so small that we can carry it in one hand, but into something so big, possibly on a planetary scale, that we will no longer be able to understand it.

Certainly, I am not talking about science fiction or conspiracy theories, but about a careful and constant reading of scientific reports and articles, published by serious, respected, and verifiable sources, which indicate that new entities, never before seen in the known history of humanity, are already emerging.

Again: it's not science fiction. The global network of supercomputers is already underway. Artificial intelligence capable of anticipating the actions of human beings (and even correcting them before they act) is already a reality. Prototypes of artificial brains have already been developed. Synthetic skin and muscles have been in development for years. And that list could be expanded almost indefinitely.

So, what is emerging? And another question: how prepared are we to respond to whatever emerges from the union of technologies that, as Arthur C. Clarke said, already seem indistinguishable from magic?

The arrival of synthetic humans and super-intelligent robots will mean coexisting with non-human intelligent entities (although not necessarily persons). How will this unprecedented situation affect our brains, our hearts, and even our decisions? We can barely live among ourselves; how are we going to interact with these new thinking beings?

But this new reality includes another perspective, that of “them.” How will synthetic humans and super-intelligent robots treat us? Because, although they are the result of our experiments, we will be able to do little or nothing to stop them if, as anticipated, all the technologies already available but still separate are merged in each of them.

And even if none of the above ever happens, the exercise of thinking about it and anticipating it is valuable in itself, because it prepares us for a future we cannot anticipate.

Closing ourselves to the present means excluding ourselves from the new future

I recently witnessed a situation in a local supermarket that exemplifies the mental, emotional, and psychological closure that, by keeping us locked in the present, prevents us from seeing the new future and, therefore, from connecting with that future. That is, consciously or not, we exclude ourselves from the emerging reality.

It turns out that a couple, elderly and clearly newly arrived in the country, chose a package of meat and then asked to speak to someone at the supermarket. A few minutes later, an employee who spoke the couple's language arrived. The new immigrant told the employee that the meat was “badly cut.” For this reason, she added, she and her husband were offering to “teach” how the meat should be cut.

In the minds of these people, the way they were accustomed to seeing meat cut, that is, the “normal” way, must be the only proper way to cut meat. Any other way of doing it was “wrong.” Even worse, anyone who did not cut the meat the way they (the newcomers) expected was, at best, an ignoramus who should be properly educated.

The supermarket employee, clearly understanding the psychological and cultural reasons behind the couple's attitude, patiently explained to them that this is how meat was sold in that supermarket, that the cuts of meat were not bad, that the local butchers did not need to be educated, and that there were specialized butcher shops where they could buy the cuts they wanted.

This example reveals a highly prevalent psychological and existential attitude in our society, in which clinging to the past (more specifically, the past that one knows and lived) is considered the best and, in many cases, the only strategy when encountering a reality different from that past and for which one is not prepared.

Obviously, the example of a newly immigrated couple shopping in a supermarket in their new country and expecting to find exactly what they saw in supermarkets back home is superficial and, in itself, trivial. However, the attitude of intransigence toward the new future and the intense (and harmful) desire to perpetuate the past and repeat the present are not.

After all, just as this couple demanded that the meat be cut the way they wanted and considered any alternative “bad,” a similar attitude is seen in social, political, and religious groups who consider their “truth” (please note quotes) to be the only and authentic one and that any other option is something “bad” that must be eliminated or modified.

Intransigence can often be detrimental when it prevents individuals, organizations, or societies from adapting and embracing a better future.

Both the history of humanity and recent news in the media confirm beyond doubt that this is exactly what has often happened, and still happens, in relationships between human beings. However, a trivial disagreement about “badly cut” meat is one thing; a disagreement that endangers all of humanity is quite another.

How can you think and act when science fiction becomes reality?

As a child, I liked to watch Star Trek, the landmark science fiction series that still continues to impact current culture and serves as inspiration for technological creations. However, I never anticipated, neither at that time nor in the recent past, that Star Trek, far from being mere entertainment, was actually a documentary of the future.

I'm not exaggerating. On April 4, Arizona State University announced the start of a class titled “Star Trek and the Future of Humanity in Space,” which blends science fiction and academic inquiry to prepare students (and all of us) for the challenges of venturing beyond Earth.

The class will be taught by Dr. David Williams, a research professor in ASU's School of Earth and Space Exploration and a member of the science team for NASA's Psyche mission. Williams stated that Star Trek will serve as “a valuable mechanism for investigating the profound questions surrounding humanity's fate in space.”

In other words, the foundations of Starfleet Academy have already been established. And that's not all.

At the end of March 2024, it was announced that Microsoft plans to build a $100 billion supercomputer called “Stargate” (the name of another science fiction series). The new supercomputer would power OpenAI's next generation of artificial intelligence systems, according to a report by The Information.

Stargate is supposedly the fifth and final phase of Microsoft and OpenAI's plan to build several supercomputers across the United States. This supercomputer network is rumored to be one of the largest and most advanced data centers in the world.

As if that were not enough, a new big data processing system developed by researchers at the Chinese Academy of Sciences makes it possible to analyze large-scale neural activity throughout the brain in real time.

The FX System is based on a virtual reality environment generated by an optical interface that extracts the activity of more than 100,000 neurons through a brain-machine interface, thus allowing researchers to analyze neuronal activity throughout the brain in real time.

Thanks to this development, the artificial brain, perhaps similar to the positronic brain of the android Data from Star Trek, does not seem as distant or impossible as it seemed until very recently. But there is still more.

A study published last March in Scientific Reports (Nature) by Professor Takaya Arita and Associate Professor Reiji Suzuki of the Graduate School of Computer Science at Nagoya University (Japan) reveals the emergence of various AI personalities, similar to human personalities, marking a significant milestone in the field of AI.

So how can you think and act when science fiction becomes reality? Perhaps the first step should be to realize and accept that the future is no longer a continuation of the past or a perpetual repetition of the present. Perhaps we should also acknowledge that we are attached to patterns of thinking that prevent us from connecting with the emerging future.

Whatever the case, what was once thought to be entertaining science fiction is now real.

A very dangerous limiting narrative: the techno-determinist narrative

Searching for recent news related to the future, I came across the article “How Much of Our Humanity Are We Willing to Outsource to AI?” by Sage Cammers-Goodwin and Rosalie Waelen (The Nation, March 27, 2024), in which the authors question the passive acceptance of artificial intelligence (AI) and advanced AI systems such as AGI (artificial general intelligence).

Such uncritical acceptance of a technological future is known as a techno-determinist narrative, that is, a belief system that views technological progress as inevitable and inherently desirable, often overshadowing critical reflections on its broader implications for society.

The techno-determinist narrative, by suggesting that the only topic of debate is the ethical, social and existential implications of AI, prevents another deeper, urgent and necessary dialogue on how to regulate and optimize AI systems. In other words, the narrative of technological determinism is self-reinforcing.

This narrative, presented and accepted as the only possible alternative, subtly shapes our collective consciousness, fostering a mindset that passively accepts the trajectory of technological advancement without questioning its underlying assumptions or potential consequences.

By framing AI and AGI as inevitable forces of progress, we risk overlooking alternative futures and giving up agency in shaping the role of technology in our lives.

As Cammers-Goodwin and Waelen highlight, this techno-determinist perspective urges us to reevaluate our priorities and values as a society. Are we willing to sacrifice elements of our humanity for the sake of technological advancement? Is the relentless pursuit of efficiency and automation coming at the expense of human connection, creativity, and meaning?

Furthermore, the authors question the wisdom of blindly accepting generative AI systems. While these systems may offer tantalizing promises of innovation and convenience, we must critically examine their implications for privacy, autonomy, and societal well-being. In fact, such systems could aggravate existing inequalities and erode the fabric of social cohesion.

To navigate this complex landscape, we must transcend the limitations of the techno-determinist narrative and cultivate a wisdom, intelligence, and understanding that does not reduce reality and the future to just more and more technology, no matter how tempting it may be to delegate our lives and our futures to AI.

Ultimately, the techno-determinist narrative presents both opportunities and challenges in shaping our future with AI. By interrogating its underlying assumptions and implications, we can chart a course toward a future where technology serves as a catalyst for human flourishing rather than a determinant of our destiny.

“We should not let the promise of productivity or narrow debates about AI's ethical implications distract us from the bigger picture. Under the guise of improving humanity by increasing productivity, we risk releasing our ultimate replacement. We should not overestimate the durability of human skills,” Cammers-Goodwin and Waelen wrote.

It is time to answer the call, the call to critically evaluate and re-evaluate the impact of technology on our lives and actively participate in shaping a future that aligns with our values, aspirations, and collective well-being. Human life is too valuable for the human future to no longer be human.

Will we survive the threshold of 2030? Maybe yes, but we must prepare

For some reason, 2030 is presented as an interesting year in the history of humanity, a pivotal moment in which, apparently, we will cross a threshold into a new reality for which we are not prepared and which we can barely describe. And this is neither speculation nor science fiction, but just paying attention to recent advances in science and technology.

For example, in April 2030, NASA's Europa Clipper spacecraft will begin orbiting Jupiter (a 1.6-billion-mile journey from Earth), passing about 49 times near Europa, one of Jupiter's moons, to use advanced instruments to study the possibility of life on that moon, where an ocean exists.

According to Fabian Klenner, an astrobiologist and expert in planetary sciences at the University of Washington, it is anticipated that the space probe “will detect life forms similar to those on Earth,” either on Europa or on other ocean-bearing moons orbiting Jupiter or Saturn.

For his part, the well-known futurist Ray Kurzweil recently declared that, as he had already anticipated in 1999, by 2029 artificial intelligence will reach a level of intelligence similar to that of humans "due to the exponential growth of technology."

And Kurzweil himself, who over the last 30 years has seen 86 percent of his predictions come true, stated two weeks ago that 2030 could be the year in which humans achieve immortality, thanks to a combination of advances in genetics, nanotechnology, and robotics that will allow us not only to cure now-incurable diseases but also to rejuvenate people.

In addition, according to Nick Spencer and Hannah Waite, authors of the new book Playing God, extraterrestrial life, human immortality, and truly intelligent (and perhaps conscious) artificial intelligence are joined by other potentially irreversible advances, such as genetic engineering, and by great challenges still unanswered, such as climate change and the destruction of the planet.

All these elements together “make us think” (and, I add, doubt) “about the nature and destiny of humanity,” say Spencer and Waite. And they are right. In a world where corruption reigns and authoritarianism and populism expand, where wars are endless, hunger grows and natural resources are reduced, how will we respond to the great challenges mentioned?

Will artificial general intelligence help us solve our problems? Will immortality give us more time to restore the planet? Will the discovery of extraterrestrial life be the inspiration for global cooperation? Maybe, but history is not in favor of positive results.

While these advancements offer opportunities for progress and innovation and have the potential to reshape how we approach global challenges, they also present ethical, social, and existential considerations that must be carefully navigated to ensure a positive impact on humanity and the world at large.

I believe there is a greater likelihood that artificial intelligence will exacerbate authoritarianism and social inequality, immortality will spark a desire to “reduce” the planet's population, and extraterrestrial life will trigger social unrest and global instability.

Will we survive the threshold of no return of 2030? Maybe so, but we must prepare.

Will we leave our decisions and our future in the hands of “silicon sages”?

The rapid advance of artificial intelligence (created by us, it is worth remembering), added to the constant evidence of our inability to live in harmony with the planet and with one another, has motivated a growing number of people to insist that AI should make important decisions about our future and perhaps even govern our lives.

Dr. John Vervaeke, neuroscientist and philosopher at the University of Toronto, has described the new situation as the arrival of the “silicon sage.” For his part, the Spanish science popularizer Ignacio Crespo describes the new trend as the arrival of the “binary augur” (an excellent description, without a doubt).

Regardless of the name used, it is clear that, in the face of our own evident inability as humans to solve our own problems, many people (how many is unknown) assume that it would be better for AI to make the decisions. And, when it comes to political decisions, there are plenty of reasons and examples suggesting that it would be better for politicians not to decide.

But where are we humans? I mean: what good is it for us to be human if we can no longer or do not want to decide for ourselves? In other words, what have we become (or are we about to become) if we even have to delegate, or intend to delegate, our most important decisions to AI?

It seems that it is not enough for us that algorithms decide what we should buy online or what movie we should watch or what message on social media is or is not for us. It seems that it is not enough for us that AI monitors our emails or generates texts and images (almost) at the level of human creators. Now we want to leave our entire lives in the hands of AI.

This situation, this tendency, represents little progress and much regression, because it seems to grant the binary augur, the silicon sage, a level of wisdom and justice above any human being, and therefore it is considered appropriate and even necessary to place all our trust (and bet our future) on the decisions made by AI, that is, by our own creation.

Where, then, are the great wisdom traditions that for millennia have been transmitted, written, and rethought in almost every culture around the world? I dare say they have been trapped (that is, devalued and distorted) within countless “videos” published on social networks, mostly by those who know nothing about these great traditions.

I'm not suggesting going back in time or turning off AI. But, at the same time, I dislike the idea of humanity reaching the point of surrendering to its own creation, of abandoning all ability to remember, live, and think. In fact, that situation terrifies me.

As Dante wrote in Canto 3 of the Inferno, those in hell are those who have lost the good of the intellect, those who stopped thinking.

Little has changed in our society in the last two and a half millennia

I recently read that in our society “everything is lost” because “bad people serve as a good example and good people serve as mockery.” That complaint sought to reflect the “disintegration of the fundamental pillars” of current society, and, more specifically, the great “ethical challenges” facing the world at this time.

But that was not the only complaint I found published in the media in recent times.

Someone else complained, for example, that we live in a time in which “those who have not yet been humiliated by life or know their own limitations,” in the best narcissistic style, “exalt themselves” and believe they are “equal to the best”, although in reality they are not and hardly ever will be.

Another person, focusing on young people, maintained that today's youth respect “neither authority, nor elders, nor teachers,” adding that young people prefer to “chat” instead of working or exercising. For this reason, the youngest have become the “dictators of parents and teachers.”

All these observations (even recognizing that they are generalizations and that the exceptions are many) seem to represent appropriately the current situation of our society, where the consequences of one's actions matter little, where “a donkey is the same as a great professor” (as the tango “Cambalache” says), and where everyone believes they are better than everyone else and has the right to demean the other.

Furthermore, new technologies, such as the Internet and social networks, instead of facilitating dialogue, prevent it, while restricting communication to short texts, funny images, or simply a “Like” (in the best of cases). The observations about current society shared in the previous paragraphs therefore seem accurate, which is why we must offer an important detail:

The three complaints mentioned above were expressed more than 2000 years ago.

The first quote is from the Greek philosopher Democrates, probably from the 1st century BC, that is, a contemporary of Julius Caesar and the emperor Augustus. Democrates, who some say had thoughts similar to those of modern democracy, complained about the high level of corruption in the society of his time.

The second quote is from the well-known philosopher Aristotle, from the 4th century BC. In this case, the complaint focuses on those who believe they already know everything because they know something. This is (I add) a situation worse than ignorance: ignorance can be remedied with knowledge, but self-deception rarely has a remedy.

The third quote is from Socrates talking about the young people of Athens about 2,400 years ago, although it could apply to young people almost anywhere in the world in our own day. But parents are also responsible for their inability to accept the identity of a new generation.

Let's keep in mind that Socrates was accused of corrupting the young precisely because he taught them philosophy, thus teaching them how to develop their own ideas.

In short, in 2,500 years of Western “civilization,” we have advanced and improved (almost) nothing.

Reality not only does not kill the narratives, but it does not even make a dent in them

The saying that “reality kills narratives” is repeated with some frequency, seeking to express that there are certain irrefutable facts or data that, when presented or when we become aware of them, nullify unfounded or unverifiable stories about reality. Unfortunately, that is not the case.

For example, data and warnings about the harmful consequences of smoking, even if based on solid scientific evidence, do little to change the behavior of those who want to smoke. And the same could be said of many other products and activities that, although harmful, continue to be consumed, used, or practiced.

In the same way, rational arguments, historical research, archaeological evidence, or whatever one presents do little or nothing to change the stories of those who prefer to remain attached to their beliefs, dogmas, and doctrines instead of opening their minds and hearts to curiosity and wonder.

And therein lies the heart of this issue: our worst addiction is not addiction to drugs, money, or immoral activities. Our worst addiction is that we have become addicted to ourselves, as Father Richard Rohr once expressed (if I remember correctly).

We have become so addicted to ourselves that any thought that does not conform to our beliefs or expectations is immediately rejected, and the source of that unwanted thought is branded a heretic, traitor, or liar, to be expelled, anathematized, excommunicated, and sent into a real or social exile, so typical of other times.

In this context, there is little place (in fact, there is no place left) for that attitude of curiosity, acceptance, and healthy indignation that Paulo Freire proposed as the basis of an education for liberation. And, as a consequence, the same stories are repeated over and over again with no other basis or support than a mind and heart addicted to themselves and separated from others and the universe.

These stories, or rather, these limiting narratives not only shrink the world of individuals, but are also immune to creative dialogue and empathy, thus perpetuating (and even reproducing and expanding) the uncritical, domesticated, and superficial thinking on which the current “unjust social orders” that Freire spoke of are based.

As this influential Brazilian pedagogue and thinker emphasized (paraphrasing), there is no change in education without a change first occurring in the level of consciousness of educators. Similarly, Otto Scharmer's Theory U maintains that all change depends on the level of consciousness of the change agent.

But limiting stories do not allow any change; they only lead to repeating the past or perpetuating the present, refusing any dialogue with “facts” or “data” because that would require an act of introspection and an attitude of humility.

As we already indicated, data does not kill stories, no matter how far-fetched those stories may be. It does not even leave a small scratch. But stories can silence data and, even worse, can reduce the totality of reality to just a few slogans. At a time when the future of humankind is at stake, that is painful to see.

When talking about serious topics, humor yes, giggles no

I recently participated in a meeting of community leaders, businessmen and students convened by the organizers to talk about a topic of undeniable importance: the great challenges facing humanity in this historic moment of transition to a new era. To my amazement (and annoyance), the conversation was almost immediately filled with giggles.

A few days later (not by chance but by synchronicity), I read an article written by Dr. Eric Haseltine (neuroscientist) and published by Psychology Today, where Haseltine analyzes the dangers of the so-called “giggle factor” when the “giggles” are used as a defense mechanism to not talk about complicated or threatening topics.

According to Haseltine, the giggle factor is activated when one finds oneself in a situation “very removed from normal experience,” so removed that it produces “tensions by moving us away from our comfort zone” and, for that reason, makes us lose “the illusion of control and predictability of our future.”

In other words, giggles arise when we are faced with undeniable evidence of “unpredictable and uncontrollable changes” in our lives, so that we simply dismiss that evidence, whether it is climate change, social injustice, artificial intelligence, or the possibility of extraterrestrial life. We do not laugh out of happiness or joy, but out of fear.

Two examples come to mind. Several decades ago, I traveled with a group of friends to another country, and upon arriving at a certain city where people dressed completely differently than we did, one of the members of the group began to laugh. His initial giggles turned into uncontrollable laughter.

And, closer in time, when I entered the classroom of a private university to teach a philosophy class, one of the students looked at me and started giggling, then laughed so hard that she had to leave the classroom to calm down. It was not a lack of respect; as she explained to me, she had never had a Latino teacher in all her years in college.

In neither of those two cases was there any danger to anyone, but the danger of laughter arises when the issues are so serious that they affect entire countries and even humanity in general, such as climate change, the recent pandemic, the current wars, and numerous other similar challenges.

In these contexts, giggling is the expression of “an unconscious adjustment of our perceptions to reduce the stress associated with a potentially disruptive phenomenon,” such as artificial intelligence replacing and displacing humans. Instead of responding to the challenge, we laugh and add phrases such as “That will never happen” or “God will not allow it.”

However, in our time, “unanticipated and uncomfortable disturbances” already happen almost daily, as Haseltine rightly says. That is why, in addition to giggles, people now also ridicule and dismiss those who share serious questions about serious problems. However, let us be mindful of who may enjoy the last laugh.


We live in such a confusing time that it is difficult for us to even live

I recently read an article on a well-known international news site saying that we live in a time, probably without historical precedent, in which rules, laws, and agreements are no longer respected and in which everything is insatiably focused on achieving more money, more attention, more likes as the goal of life.

In other words, we live in the age of hypernarcissism, in which the other is not recognized as another like me; in fact, the very existence of the other is not recognized. While the individualist says “I am the center of the universe,” the narcissist says “I am the entire universe.”

In that context, social rules, laws, and customs, whether paying taxes, respecting traffic signs, or holding the door open for someone to enter first, always apply solely and exclusively to others, never to us.

And, for this reason, each person feels that they no longer need to participate in a collective reality, creating instead their own personal “reality,” which has little or nothing in common with the shared reality. This capacity for extreme self-deception (so old that Heraclitus was already talking about it) obviously prevents any genuine and creative dialogue.

For this reason, every encounter with another person becomes a competition, a conflict and, in many cases, a fight. It is not about listening and learning, but about listening to respond, to win an argument. In the absence of humility and respect, each interaction is seen as an opportunity to show oneself as superior to the other person.

At the same time, and as a consequence, practically no one takes responsibility for their actions, much less for their own lives. There is no such thing as being accountable to anything or anyone, and if, by some twist of fate, someone demands that we be responsible, we consider it an injustice or persecution, and we look for someone to “blame.”

If I remember correctly, a 2012 study published by Harvard indicated that the psychological attitude we have just described had by then become the prevalent attitude among adults in the United States, correctly anticipating that in the near future (that is, now) it would become globalized, as it actually did, with tragic consequences.

In the context of Theory U (a theory of change based on the self-awareness of the change agent), the situation described here is known as “inner absencing,” that is, an existence based on closing one's eyes to reality, seeking someone to blame, and (in many cases) using physical or psychological violence to destroy (literally and figuratively) the other.

This social pathology represents a dynamic of destruction and self-destruction (clearly visible to anyone who wants to see it) because it blocks all access to living a life based on reaching our true potential. In other words, we ourselves block the possibility of creating a different future. So much so, that we are collapsing internally without even knowing it.
