We live in a world of such rapid technological advancement that the landscape of reality around us changes long before we’re able to understand that change or adapt to the new reality. In this context, a challenge arises: analyzing whether new technologies are compatible with our human capacity to make decisions on our own.
There’s no doubt that artificial intelligence systems are becoming increasingly sophisticated and, as leading experts like Shelly Palmer and John Vervaeke have warned, synthetic humans and general AI seem to be just around the corner. As Hong Kong philosopher Yuk Hui aptly states, it sounds like science fiction, but it’s not.
Because of these advancements, AI and its offshoots now appear capable of making decisions that once belonged exclusively to human judgment, whether in medicine, education, justice, or many other fields. This shift raises a fundamental question: Who is really deciding when a machine appears to decide for us?
One could argue that these systems have the potential to complement—and therefore improve—human decision-making by reducing the impact of our cognitive biases and enhancing the efficiency and effectiveness of decision-making processes.
But it can also be argued that the absence of human emotions and the inability of these systems to navigate the subtle complexities of ethics raise serious concerns about the potential unintended consequences of outsourcing our decisions to AI—as well as the seemingly inevitable erosion of human agency.
Scholars from many disciplines have long grappled with the ethical risks associated with AI decision-making. The core philosophical dilemma lies in the fact that AI systems, despite their advanced capabilities, lack inherent human qualities such as empathy, moral reasoning, and the ability to consider the broader social implications of their choices.
So, to what extent can we consider these systems responsible for their decisions—or for the consequences of their decisions?
One of the main concerns is the potential for AI systems to make decisions that conflict with human values and social norms, a theme that has been repeatedly explored in science fiction books, movies, and series.
AI could perpetuate existing biases or even introduce new forms of discrimination, as explored in films like 2001: A Space Odyssey (1968), Colossus: The Forbin Project (1970), and I, Robot (2004, based on Isaac Asimov’s work), in the Star Trek: The Original Series episode “The Ultimate Computer” and the Star Trek: The Next Generation episode “The Measure of a Man,” and in William Gibson’s novel Neuromancer (1984).
And what level of transparency or explainability can—or cannot—exist in the decision-making processes of AI systems? In fact, throughout history and even today, it has often been difficult to explain the reasons behind human actions and decisions.
Will these challenges be solved through even newer technologies or by enacting new laws? Probably not. And here another paradox arises: if we want AI to make decisions based on our ethical values, why aren’t we making those decisions ourselves? Have we already forgotten what it means to be human?