
Project Vision 21

Transforming lives, renewing minds, cocreating the future


Blindly trusting AI deprives us of telling and sharing our own stories

Last week, according to newspaper reports, two sisters visiting an area to the west of the Island of Hawaii as tourists, unfamiliar with both the route and their destination, decided to follow the instructions of their satellite navigator (GPS). And they followed them so faithfully that they ended up sinking their vehicle in the water.

The sisters trusted their GPS so much that they complied strictly with its instructions, even driving at high speed down a boat ramp into the sea, surprised (they later said) by the number of boats on the ramp and by the signs and gestures people were making at them.

Worse still, the sisters said they did not understand why their vehicle had suddenly filled with water, so much so that when other tourists came forward to help them out of the car, the sisters initially refused. Only later, when the situation became untenable, did they abandon the vehicle, just minutes before it sank.

In interviews with the local media, the sisters' explanation was simple: they wanted to go to the other side of the harbor and the GPS told them that this was the shortest way. Therefore, they blindly obeyed what the GPS said. And that is, dear readers, our problem: blind obedience to technology.

The case of these two sisters is far from unique. For example, a few years ago dozens of motorists were stranded for long hours on muddy rural roads when their GPS told them those roads were a faster route to Denver International Airport. Clearly, they were not. There are dozens of similar cases.

But that blind faith in GPS, so blind that it overrides common sense and prevents us from processing both warning signs and real danger, is no longer limited to GPS. Now ChatGPT, too, is trusted uncritically. And before you think I'm exaggerating, consider what historian and philosopher Yuval Noah Harari recently wrote on YNet News.

We already know that GPS and social networks dominate our minds and emotions to the point that we stop thinking. To that unavoidable reality, Harari adds another (perhaps) unavoidable one: that ChatGPT and its friends, relatives, and descendants manipulate human language in such a way as to create stories so compelling that we will accept them without thinking.

In other words, while in the past the poets of Greece wrote their myths and the prophets of Israel their scriptures, now ChatGPT (an AI, not us) will write the new myths and scriptures that we humans will use to guide every aspect of our lives: relationships, art, music, laws, religions, politics, ideas, and cultures.

As Harari rightly says, AI is an ET that now inhabits the earth. And there are two options: either we regulate AI, or AI will regulate us. And what if AI regulates us by making us believe that we are regulating it? Plato's cave and Buddha's Maya seem painfully real.
