
Project Vision 21

Transforming lives, renewing minds, cocreating the future


16 Years of Archives

WEEKLY COMMENTARY

DISCLAIMER

The commentaries we share here are merely our thoughts and reflections at the time of their writing. They are never our final word on any topic, nor do they necessarily guide our professional work.

 

Has AI become a kind of almost inquisitorial censorship?

A few days ago, I read an article (I am intentionally not going to give further details) in which an expert on AI stated that, due to algorithms, less and less information can be shared on social networks, since almost any message one shares can be automatically removed. I wanted to share that article, and my message was automatically removed.

I don't know if that unexpected and instantaneous end to my minuscule attempt to say something about AI and social networks was due to the content of the article (which had no controversial content and was highly professional, I must add), but I will never know, because it turns out that the decision to remove it is not only inexplicable (that is, no explanation is offered), but also unappealable.

So, one is left not knowing what happened, what element of the message was not accepted, why it was not accepted, and who made the decision not to accept it. I only know that it is some kind of power or force, invisible to mere mortals, that does what it does (censor) for the “benefit of the community.” Obviously, neither the benefits nor what is meant by “community” are explained.

This idea of a hidden yet quasi-omnipresent, omniscient, and omnipotent AI that decides what can and cannot be said or thought is an idea I find not only worrying but decidedly horrifying, because it amounts to reducing thought through coercion, eliminating all “undesirable” thoughts, something we already know from the history of humanity.

At another level, and setting aside all distances and comparisons, training or programming AI so that, based on existing data, it perpetuates social prejudices about what is and is not acceptable, thereby promoting a single way of seeing reality (a kind of “algorithmic orthodoxy”), is too much like an Inquisition.

Obviously, I am not saying that algorithms are a new incarnation of the Inquisition, but it bothers and alarms me that the methods used to compile information have too many elements in common, such as anonymous complaints, secret surveillance, and immediate decisions that cannot be appealed, all of them practices opaque to the general public, but with the potential to irreversibly ruin lives and futures.

It should be clear that we are not equating social media with the detestable oppression and religious brutality of other times, but it is clear that the algorithms are biased, and it is also clear that the ethical principles and governance models of AI are not entirely clear, perhaps because they are all designed and supervised by only a handful of companies.

Perhaps it is time to revisit the past to learn what kind of sacred relics to use to avoid falling into either unhealthy paranoia or repressive self-censorship in this new context, in which the capabilities and abilities of new technologies seem aimed at reflecting and enforcing the prejudices of those who “govern” these technologies.

We are trapped inside an infinite lie, that of being ourselves

Reality seems so real to us, and dreams so unreal, that we often lose sight of the fact that we declare the real to be real precisely by comparing it with dreams, which we declare unreal because, when we wake up, their unreality becomes evident. However, what if reality were a dream from which we never woke up and, therefore, dreams were the only reality?

Separating dreams (and even nightmares) from reality is not as simple as one might initially assume because, in fact, they cannot be separated: one and the other are intertwined not only from a psychological and biological point of view (that is, we need to dream in order to perceive reality correctly) but also from an existential one.

In that context, it seemed appropriate to reread the story “The Night Face Up” by Julio Cortázar, published in 1956. The suggestion to read this story came, not by coincidence, at just the right time, from a couple of friends who had just finished reading it. Be that as it may, Cortázar transports us to a narrative in which the protagonist lives in two worlds, in two times, at the same time.

It would be disrespectful to try to summarize the story, so we will only note that the protagonist must decide whether he is being treated in a 20th-century hospital or is about to be sacrificed by the Aztecs centuries ago. The (quantum?) entanglement between the two realities prevents him from reaching a final decision, because what at one moment seems to be a horrible dream later appears to be reality.

This inability of ours to distinguish what is real from what is imaginary (especially in those moments that seem to tear us from the “center of life,” as Cortázar says) leads us to live in a state of constant epistemological ambiguity, which Cortázar aptly describes as an “infinite lie.” However, this infinite lie goes far beyond self-deception or cognitive limitations.

It could be said that reality itself (however it is described) is an infinite lie because when reality presents itself, it also, at the same time (although this is not a temporal matter), hides itself. By hiding itself, it hides its own concealment. Therefore, we never manage to perceive all of reality, and that partial perception, if we believe it to be the totality of reality, becomes a lie.

In other words, in those moments in which our own being or existence is at stake, we suddenly find ourselves facing levels and dimensions of reality as disconnected, overwhelming, and incomprehensible as our nocturnal dreams, which in many cases disappear from our consciousness and which, if they persist in memory, cannot be adequately translated into words.

In those moments, as Cortázar masterfully described, we glimpse that reality is denser and more mysterious than what we consider “real” in everyday life, which now appears as a one-dimensional “lie” lacking magic, reduced to a “lie” by our thoughts, our beliefs, and our conventional conception of reality.

Ultimately, the infinite lie is ourselves.

AI generates fear, or maybe we are afraid of ourselves

To the growing fear (real or imaginary) that artificial intelligence (AI) will soon leave us all without jobs, a new fear is now added: that AI will soon take away our free will and our ability to act, according to recent statements by Jack Dorsey, cofounder of Twitter (now called X).

In an interview at the Oslo Freedom Forum, Dorsey argued that AI (including social media and its algorithms), in its current version, has a negative impact on our free will (that is, our freedom), because those algorithms limit our options to a choice between algorithms, without there being any true option or choice.

In other words, in my words, AI creates the illusion that we are choosing, when in reality the decisions have already been made because, according to Dorsey, “these systems (AI, internet, social networks) control every aspect of our lives.” Every day, “they tell us what to do and what not to do.”

Even worse, Dorsey maintains that it is “truly scary” that these tools “are in the hands of only five companies,” all of them global, highly influential, and well known.

Several questions then arise: what are we truly afraid of when we fear AI? Of being left without work? Of losing our freedom? Or of something even deeper, more terrifying, and even more existential? How soon before AI becomes sentient and surpasses human intelligence?

Perhaps our real fear of AI lies in the fact that the artificiality of the intelligence we ourselves have created (that is, the externalization of our own intelligence) reveals, for that very reason, the artificiality of our natural intelligence and, as a consequence, the unreal and illusory nature of our supposed freedom of choice.

Perhaps we are afraid of discovering that we are not what we think we are, that our freedom is just a fantasy and that what we, as humans, believe we are, we are not and never were. We confused the mask with the person, the map with the territory and the illusion with reality.

As Jorge Luis Borges expressed (I do not know where), freedom is a deception that arises from the ignorance of being manipulated from the outside. For this very reason, Borges suggested in In Praise of Darkness that freedom, in its full sense, is illusory.

Perhaps this means that we live in a constant state of falsehood, as Calderón de la Barca said centuries ago: “The king dreams that he is king, and lives with this deception, commanding, arranging, and governing.” Both Borges and Calderón propose that what frees us from illusion and awakens us from self-deception is death.

Therefore, it could be said that the fear that AI generates is the fear (better yet, the anguish) that, looking at ourselves in the mirror of AI, we must recognize and accept our finitude, our mortality, and our inauthenticity. Perhaps the shadow praised by Borges (and described by Carl Jung) lurks deep inside the AI of our own creation.

We are perhaps the only humans in the galaxy, but not on our planet

A recent revision of the famous Drake Equation (used since the middle of the last century to estimate how many intelligent civilizations exist in our galaxy) seems to indicate that we are probably the only humans in the Milky Way. Whether that conclusion is true or not, the truth is that we are no longer the only humans on Earth.

The study on the Drake Equation, by Robert J. Stern and Taras V. Gerya, was published last April in Scientific Reports. Stern and Gerya modified the original equation by adding elements such as time and possibilities for the formation of continents and oceans on exoplanets as well as the movement of tectonic plates.

The researchers concluded that, although “primitive” life may be abundant in the Milky Way, there are about 500 Earth-like planets in the entire galaxy, that is, “suitable for the accelerated development of advanced life.” In the best-case scenario, that number would reach one million, a small fraction of the 10 billion civilizations in our galaxy that Frank Drake anticipated in 1961.
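For reference (the equation itself is not quoted in the study summary above), the classical 1961 form of the Drake Equation that Stern and Gerya modified is usually written as:

```latex
% Classical Drake Equation (Frank Drake, 1961)
% N = expected number of detectable civilizations in the Milky Way
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% R_* : rate of star formation in the galaxy
% f_p : fraction of stars with planets
% n_e : habitable planets per planetary system
% f_l, f_i, f_c : fractions of those on which life, intelligence,
%                 and detectable technology respectively develop
% L   : average lifetime of a communicative civilization
```

Stern and Gerya's revision, as described above, effectively adds geological factors (the formation of continents and oceans, and plate tectonics) to these probabilistic terms.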

While we wait for our galactic cousins to call us or discover us, or for us to discover them, we humans of the 21st century are no longer the only humans on Earth, as we had been since the disappearance, many thousands of years ago, of the Neanderthals, the Denisovans, and other relatives of ours who were human, but not us. From now on, digital humans will accompany us.

This is neither science fiction nor a future possibility: digital humans are already a reality and, whether we like it or not, whether we are ready or not, in a short time we will interact with them as frequently or more frequently than we now interact with our cell phones.

A few weeks ago, the company Altera announced that it has raised $9 million to develop “digital humans with artificial intelligence.” According to Altera, digital humans will be “the bridge” between biological humans and artificial intelligence. Interacting with digital humans will be “like interacting with a human friend”; that is, “they will live and love like us.” And they will be empathetic.

For its part, a few days ago, NVIDIA announced new technologies and programs focused on digital humans. In fact, that company launched a platform and a series of services to create and interact with digital humans who, unlike what happens with us, can change their face and language as many times as they want.

So, although the chances of encountering other humans in our galaxy have been significantly reduced, the opportunity to encounter other humans (in this case, digital ones) on this planet already exists. And that means that we will have to adapt to them and they to us, perhaps through protocols that regulate the rights and responsibilities of digital humans.

These two questions then arise: how do we define the limits of personhood in a world where consciousness and agency are no longer exclusive to biological organisms? And how will we live if our galactic cousins contact us?


On May 24, Mitsubishi engineers announced that an intelligent robot they created solved the famous Rubik's Cube in just 0.3 seconds. It is worth mentioning that in 2016 the fastest time for a robot to solve the Rubik's cube was one minute, compared to 1:04 minutes in 2009.

Among humans, the best record belongs to Max Park, with 3.13 seconds on June 20, 2023. And when the Hungarian Ernő Rubik created his cube in 1974, he himself, even though he was its creator, needed a month to get each color back onto its own face of the cube, unmixed.

Using round numbers, in 50 years we went from solving the Rubik's Cube in 2.6 million seconds (the approximate number of seconds in a month) to just over 3 seconds for humans in 2023, and to only three tenths of a second (that is, ten times faster than the best human) for intelligent robots in 2024.
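As a quick sanity check of the round numbers above (all figures are approximations taken from the text; 0.305 seconds is the commonly reported time behind the "0.3 seconds" robot solve):

```python
# Rough arithmetic behind the "50 years" comparison above.
SECONDS_PER_MONTH = 30 * 24 * 3600   # = 2,592,000, i.e. ~2.6 million seconds

rubik_1974 = SECONDS_PER_MONTH       # Ernő Rubik's first solve: about a month
human_2023 = 3.13                    # Max Park's human record, in seconds
robot_2024 = 0.305                   # Mitsubishi robot's time, in seconds

print(f"1974 human -> 2023 human: {rubik_1974 / human_2023:,.0f}x faster")
print(f"2023 human -> 2024 robot: {human_2023 / robot_2024:.1f}x faster")
```

The second ratio confirms the "ten times faster than the best human" claim in the text.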

These types of examples can be multiplied almost indefinitely with many other technologies, be they automobiles, airplanes, telephones, or computers, among other devices of rapid and irreversible advancement and transformation. Consider how quantum computers promise, for certain computations, exponential speedups over classical computers, with potential implications for cryptography, simulation, optimization, and machine learning.

Unfortunately, the same does not happen with our level of intelligence or maturity, whether at the individual level or at the level of all humanity.

About 2,500 years ago, almost at the very beginning of Western civilization, Heraclitus complained at the start of his book that humans do not know the universal reason (logos in Greek, something like “unifying principle”), not even when someone explains it to them. Therefore, Heraclitus said, we remain perpetually “inexperienced” and live life “as if we were asleep.”

Five centuries later, at the beginning of our era, the Jewish philosopher Philo of Alexandria complained in the first lines of his work “On the Embassy to Gaius (Caligula)” that his contemporaries, “who are already advanced in age, still act like children, even though they have truly had gray hair for a long time” (non-literal translation).

Skipping ahead some 16 centuries, closer to our time, in his 1784 essay “What Is Enlightenment?” Kant argued that most people live (we live) in a state of “self-imposed immaturity” and are, therefore, “incapable of using their own intellect without the guidance of someone else.” In other words, out of cowardice or fear, we perpetuate our immaturity and let others “rule” us.

In our own time, the French philosopher Bernard Stiegler (1952-2020) spoke of the “infantilization of adults” to the point that we are “incapable of reaching the maturity of critical thinking.” And the German thinker Theodor Adorno warned of a society that “promotes ignorance,” as well as “the infantile state of passivity and thoughtless consumption.”

In short, from Heraclitus to the present, nothing has changed. As the American sociobiologist Edward Wilson rightly pointed out, we have a “quasi-divine technology” and, at the same time, “a paleolithic brain and emotions.”

Our foolish abuse of new technologies threatens our very future

We live in a time of such scientific and technological advancement that we can now (almost) detect megastructures of extraterrestrial civilizations in our galaxy, and we can now (without the “almost”) digitally duplicate any person, living or dead, and interact with that duplicate. But so much technology creates immense risks for the future of humanity (assuming there is still a future).

The German-British economist Ernst Schumacher (1911-1977) claimed that humanity is in “mortal danger,” “not because we lack scientific and technological knowledge, but because we tend to use it destructively, without wisdom.”

In other words, what is so helpful to us and opens up so many new opportunities for us is the same thing that we foolishly use to self-destruct. Put differently, the mechanism that makes us intelligent is simultaneously the mechanism we use to deceive ourselves. When that happens, wisdom disappears, and only arrogant ignorance remains.

By confusing “knowledge” with “wisdom” and, at the same time, confusing “knowledge” with “information” (“I already know, I saw it in a movie”), all possibility of reconnecting with the source of wisdom disappears because, due to the aforementioned confusion, we will look for “wise” answers by increasing our knowledge, but without ever reaching wisdom.

Acquiring knowledge solves the problem of ignorance, but it does not solve our foolishness. It is possible to have acquired an impressive amount of knowledge and, at the same time, be impressively foolish. Wisdom is the antidote to foolishness. And the constant search for that wisdom (accepting that we will never find it in its fullness) is philosophy.

I agree with what the Spanish philosopher Carlos Javier González Serrano recently expressed when he said that “never before has philosophy been so necessary to know.” But know what? González Serrano proposes that, at this moment, “knowing” can be understood as “thinking and acting amid the (emotional and psychological) manipulation” to which we are subjected precisely by technology.

In this context, folly consists, paraphrasing González Serrano, in seeking a way to live “in an uninhabitable world.” Or, if I may, we seek to live peacefully in a world that we ourselves have made uninhabitable, a totally artificial world that we believe to be real, a technological Platonic cave that manages and governs all our desires and our attention.

To quote the Spanish philosopher again, “infinite scrolling” has become the prevalent way of “existing” in the world. We think without questioning what we think, and we confuse “normal” (for us) with what is “real” and, even worse, with the only possible reality. Therefore, not even a pandemic can make us reflect on our lives, our culture, and our society.

What can we do, then? Obviously, I don't know. I am not wise, and I never will be. I am a perpetual seeker of wisdom. Therefore, I dare to suggest that what we should do is talk with truly wise people (not “influencers”), regardless of what era they lived in and what tradition they belong to.

Do we sleep and dream to prepare for the future? It seems so

A new study published in the prestigious journal Nature on May 1 indicates that, during the first half of sleep, the brain “reboots” neuronal connections, apparently with the purpose of preparing for the future or, more specifically, of being ready to learn what needs to be learned in the near future.

The study, led by scientists Anya Suppermpool and Jason Rihel, from University College London, suggests that “remodeling” of the brain during sleep allows new connections to emerge between brain cells the next day. In other words, during sleep the “strong connections” that brain cells have when awake are “deactivated” or “relaxed” (so to speak).

In short, according to the researchers, it seems that sleep prepares the brain to “generate new connections the next day” (quoting Dr. Suppermpool); that is (I would add), the brain prepares itself for a future that is neither a continuation of the past nor a repetition of the present.

It is worth mentioning that the study did not include experiments or observations of human brains, but, according to the aforementioned scientists, it is possible that these same brain patterns during sleep will eventually be observed in humans.

Be that as it may, the idea that we sleep and dream to better connect with the future is a fascinating idea. And this in turn connects with myths since myths are shared dreams and dreams are personal myths. If we accept this correlation, perhaps, on a global level, we should “reset” our culture and our collective “brain” in order to learn and access a new future.

But, unfortunately, both on a personal and social level, we live in a world that practically does not give us time to even breathe, much less think and much less reflect or meditate (which, in short, is "dismantling" the petrified connections between our thoughts and emotions). Therefore, over and over again we repeat the same thing expecting different results... until we no longer expect anything.

We stay awake watching movies or videos “to distract ourselves” or “to be able to sleep better” and, in this way, we train the brain precisely not to sleep, rest, or disconnect. We don't give the brain time to disconnect from the past to reconnect with the future. Therefore, the next day, even if it is a new day and the future has arrived, we remain the same as before.

In scientific terms (used in the aforementioned article), by sleeping poorly or not at all, we strip the brain of “synaptic plasticity.” In fact, each neuron loses the “plasticity” that would otherwise allow it to develop new ways of understanding reality.

Perhaps the ancients knew something or, better yet, lived all of this. After all, for them, dreaming, sleeping, having visions, and sharing myths (in the truest sense of the word) were practically a single activity, an activity focused on the future.

Paradoxically, perhaps we need to reconnect with that past in order to finally be able to reconnect wide awake with the new future.

Ignorance and pride prevent us from seeing the signs that the future sends us

Recently I witnessed (from a distance) an accident on a busy street north of the city where I live. It turns out that one lane was closed for construction, with signs, flashing arrows, and orange cones warning of this situation. But a driver ignored all these signs and, after suddenly braking, collided with the cones. Fortunately, there were no injuries.

The traffic was stopped for a few minutes, and when it was my turn to slowly pass the scene of the accident, I lowered my car window and could hear the driver of the crashed vehicle say something like “I didn't know what those signs meant.” In other words, for him, the signs, arrows, and cones had no meaning whatsoever, much less the meaning of changing lanes in time.

Another day, driving on a highway, a car sped past, ignoring both a construction zone and signs alerting the driver that he was speeding. In this case, the driver clearly knew what the maximum speed limit signs indicated, but simply chose not to obey them. Shortly after, the police stopped him to fine him.

These situations led me to reflect that many times (almost every time), when the future sends us signals, we quickly discard them, either because we do not understand them or because, although we understand them, we do not want to change our current behavior. And then we suffer the consequences: we collide with reality, or something makes us pay for our actions.

But whether or not we pay attention to the signs that the future sends us, whether or not we understand those signs, or simply ignore them, the future constantly continues to send us signs, be it a sign, a luminous arrow, a thought, a phrase, some news, or whatever. Obviously, it does not matter how many signs we receive from the future if our ignorance and arrogance prevent us from seeing or understanding them.

In my experience, signs of the future are always unexpected, instantaneous, and fragmentary. They are flashes that appear and disappear quickly, mere indications of something new that is about to enter our consciousness. They are something like a fleeting glimpse of the reality adjacent to ours, which is already there (the future is always already there), but which we have not yet accessed.

As Heraclitus said two and a half millennia ago: “He who does not expect the unexpected will not find it” (Fragment 18). The “unexpected,” what “we do not find” in the present, what is difficult to discover and reach, the unexplored, that which leaves no traces we can follow, is, ultimately, the future, which should not be confused with tomorrow or with what is yet to come.

The future is the expansion of consciousness, and that expansion only occurs when the mind opens to new possibilities and opportunities, when the heart connects with those possibilities, and when the will activates them. For the closed mind (ignorance) and the closed heart (pride) there is no future.

It seems that in a short time AI will be able to think for itself. When will we humans think?

In a recent article (April 11, 2024), Joelle Pineau, the vice president of artificial intelligence (AI) research at Meta, stated that “we are working hard to find a way for (AI) not only to talk, but to really reason, plan, and remember.”

In fact, according to that same article, Meta and OpenAI were at that time “on the verge” of their new AI models “being able to reason and plan.”

Two weeks later, on April 26, 2024, OpenAI announced that its new version of ChatGPT can now “remember and plan,” although, perhaps out of modesty or prudence, there is no mention that ChatGPT 5 (or whatever it will be called) can already truly reason.

In other words, in approximately a year and a half ChatGPT went from being just a novelty, almost just a toy (as the first telephones and the first airplanes were considered), to transforming itself into an AI that speaks, remembers, and plans. One can speculate that ChatGPT also already reasons for itself, or soon will.

Nevertheless, these new advances invite a closer, more detailed, and careful examination of the impact of the imminent arrival of artificial general intelligence (AGI), which, unlike current AI, will no longer be a purely reactive system, but rather a system capable of sophisticated cognitive processes. How sophisticated will they be? We will soon find out.

While all this happens, that is, while AI learns to think and reason (what's next? Self-awareness?), we humans think less and less. And, as a consequence, we know less and less and, for this reason, it is increasingly easier for us to accept any type of misinformation, pseudo-theory, or funny video, while rejecting “reality.”

As the American philosopher Daniel Dennett (recently deceased) said in his memoir “I've Been Thinking,” the real problem we face is not the arrival of AGI or some other type of superintelligence. The existential threat that could even end civilization is turning AI into “a weapon for disinformation.”

The consequences of this situation, according to Dennett, will be devastating for our society because “we will not be able to know if we really know, we will not know who to trust and we will not know if we are well informed or misinformed.” Furthermore, “We could become paranoid and hyperskeptic, or simply apathetic and impassive. Both are extremely dangerous routes.”

Seeing what is seen and hearing what is heard in the so-called “media” and “social networks” (names reminiscent of the “Ministry of Truth” in Orwell's 1984), what Dennett warned us about is already happening. In a way, if that's true, we are all already doomed.

If this were the case, our situation would be remarkably similar (if not identical) to that of the souls described at the beginning of Canto 3 of the Inferno, in Dante's Divine Comedy. Those souls are in Hell and have lost all hope, condemned to misery for not thinking, for having lost and forgotten “the good of the intellect.”

What will emerge once all the new technologies now scattered are merged?

A few decades ago, looking at the telephone of that time, and then at the radio, the television, the camera, the video recorder, the maps, the flashlight, and many other artifacts, I could never, not even in a moment of heightened imagination, have anticipated that one day all these devices would be merged into what we today call a smartphone.

But now, with that prior experience of seeing how a single device or artifact emerges from different technologies, we can and must ask ourselves what will emerge once quantum computing, neuromorphic computers, artificial intelligence, new forms of energy, robotics, and other advanced technologies merge into a single “reality.”

Everything points, first of all, to the arrival of an almost immortal synthetic human, with physical, mental, and cognitive capacities and abilities unthinkable and unimaginable for us, mere biological humans, mortal and certainly limited and finite.

In other words, just as the disparate elements mentioned above merged into smartphones, so too (dare we suggest) will the disparate elements of new technologies merge, but no longer into something so small that we can carry it in our hand, rather into something so big, possibly on a planetary scale, that we will no longer be able to understand it.

Certainly, I am not talking about science fiction or conspiracy theories, but about a careful and constant reading of scientific reports and articles, published by serious, respected, and verifiable sources, which indicate that new entities never before seen in the known history of humanity are already emerging.

Again: it's not science fiction. The global network of supercomputers is already underway. Artificial intelligence capable of anticipating the actions of human beings (and even correcting them before they act) is already a reality. Prototypes of artificial brains have already been developed. Synthetic skin and muscles have been in development for years. And that list could be expanded almost indefinitely.

So, what is emerging? And another question: how prepared are we to respond to whatever emerges from the union of technologies that, as Arthur C. Clarke said, already seem indistinguishable from magic?

The arrival of synthetic humans and super-intelligent robots will mean coexisting with non-human intelligent entities (although not necessarily people). How will this unprecedented situation affect our brains, our hearts and even our decisions? I mean, we can barely live among ourselves, how are we going to interact with the new thinking beings?

But this new reality includes another perspective, that of “them.” How will synthetic humans and super-intelligent robots treat us? Because, although they are the result of our experiments, we will be able to do little or nothing to stop them if, as anticipated, all the technologies already available but still separate are merged in each of them.

And even if none of the above ever happens, the exercise of thinking about it and anticipating it is valuable in itself, because it serves to prepare us for a future we cannot anticipate.
