I recently read an article analyzing two converging trends: the number of older adults worldwide is increasing, and, at the same time, more and more people of all ages report feeling lonely. According to a recent study published by the American Sociological Association, the confluence of these two trends makes social isolation among older adults all but inevitable.
Globally, according to the World Health Organization, one in four older adults (25%) lacks meaningful social relationships, and four in ten (40%) have no consistent companionship in their lives.
Furthermore, according to Gallup, between 20% and 33% of people worldwide experience loneliness, with those under 24 the most affected. In the United States, 52% of adults report “feeling lonely regularly,” according to the American Psychiatric Association.
But these two trends, already worrying in themselves, seem to converge with a third growing trend: the anthropomorphization of interactions with artificial intelligence—that is, attributing human qualities to the responses generated by AI and, therefore, reacting emotionally as if a human had responded.
According to Carmen Sánchez, a Spanish philosopher and educator, the anthropomorphization of AI is a “major philosophical problem” in our time. It consists of believing that “because (AI) returns correct and appropriate linguistic constructions, we are actually participating in a meaningful dialogue.”
In other words, we are so detached and isolated from ourselves that we no longer even recognize ourselves when we look in the mirror of our own creations. In the context of our loneliness and the overwhelming need to satisfy our desire to speak to someone, we even believe we are speaking to someone when in reality we are not.
In a recent publication, Sánchez lays out solid philosophical foundations (J. L. Austin's philosophy of language, John Searle's philosophy of mind) to refute “the idea that computational systems possess a true mind or intentionality in linguistic communication.” Therefore, “the attribution of understanding is also erroneous.”
Our loneliness and isolation have reached such a level that, as Sánchez explains, we confuse “the generation of coherent text” with speaking to another person. More specifically, we confuse “the appearance of a phenomenon with its underlying reality” by attributing conscious and intentional acts to AI. And this confusion has consequences.
We so desire someone to listen to us that we not only accept the simulation as reality (Plato, Jean Baudrillard), but also become emotionally and cognitively attached to it, enjoying it when the AI “says” things like “That's a very good question” or “That's a very beautiful way of expressing yourself.”
In other words, loneliness and isolation create a suitable context for deceiving ourselves into thinking we're not alone. When we uncritically accept the AI simulation as part of (or the totality of) our reality, we are close to uncritically accepting any other simulation simply because it “tells” us how intelligent and deserving we are.
That's why I keep writing, because I still want to express my own thoughts, feelings, emotions, dreams, frustrations, successes, and failures, not those of some unknown algorithm.