
Remember it’s just a machine

“Memento mori” is a Latin expression that means “remember that you will die.” When a general paraded in triumph through ancient Rome after a victory, a servant was assigned to remind him of it, so that he would not grow vain and, out of pride, break law and custom. The phrase was a little more elaborate, but that is the gist, and that is how the expression became popular.

Today, this reminder would be far from useless for those who, much more than the Roman generals, deify themselves in the exercise of their offices or responsibilities, whether in politics, in business, or even in academia, although there with far less power.

Memento mori, Elon Musk; memento mori, Milei; memento mori, Kary Mullis. Mullis, less well known than the others, was an American biochemist who won the 1993 Nobel Prize in Chemistry, shared with Michael Smith, for inventing the polymerase chain reaction (PCR), a technique that allows a specific segment of DNA to be replicated rapidly, making it easier to study and use. According to those close to him, the prize went to his head. Delete or add names at your discretion; you will surely not lack examples of vain people.

It is not yet necessary to tell machines to remember that they are machines, since they lack consciousness and desires of their own, but it would do us, their users, good to be reminded from time to time that we are interacting with a machine and not with another human being, given our tendency to anthropomorphize everything. Anthropomorphization is the attribution of human form or qualities to something that is not human, whether a living being, a supernatural being, or a thing.

People are inclined to do this because of psychological, evolutionary, and cognitive factors. For example, attributing human characteristics to objects, animals, or abstract phenomena helps us make them more understandable. It is easier for us to interpret the world based on what we know, and in particular on our own thinking and behavior. Recognizing intentional agents is also crucial for human survival, as it helps us identify threats or allies.

In the case of machines, this also works as a cognitive simplification. It is easier to interact with a robot, or with an intelligent machine in general, if we understand it as similar to ourselves, without having to know its inner workings in detail. That is why, when we interact with intelligent technologies such as virtual assistants, robots, or chatbots, we commonly attribute human qualities to them, such as emotions, intentions, or reasoning abilities, even though we know, or should know, that they do not possess them.

This process is most evident when machines simulate behaviors that we consider most human. Conversational AI and humanoid robots are the most obvious examples, since they are the ones best placed to give us the illusion that they are, or contain, something like a person. Perhaps the first example was the computer program ELIZA, designed in the mid-1960s by Joseph Weizenbaum. It simulated a conversation with a psychologist, and it did so in a very simple way: it produced predefined questions and expressions based on the presence of certain keywords in the user’s replies. Compared to ChatGPT, it would be like comparing the Wright brothers’ plane to an Airbus A380. Even so, many users confessed that they felt ELIZA “understood” their emotions.
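
To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of keyword-driven pattern matching ELIZA relied on. The keywords and canned responses below are invented for illustration; they are not Weizenbaum’s actual script, which was considerably richer.

```python
import re

# A few illustrative keyword -> response rules, loosely in the spirit of
# ELIZA's scripted patterns (these examples are invented, not Weizenbaum's).
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."  # fallback when no keyword matches

def reply(user_input: str) -> str:
    """Return a canned response triggered by the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

if __name__ == "__main__":
    print(reply("I feel tired lately."))  # -> How long have you felt tired lately?
    print(reply("Nothing much."))         # -> Please, go on.
```

Even something this crude can feel eerily attentive in conversation, which is precisely the point: the “understanding” is entirely on the user’s side.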

You may remember Tamagotchis, the virtual pets sold as pendants, watches, or keychains that became very popular in the 1990s. Although they were electronic devices with no human form and very basic algorithms, users, especially children, became attached to them and felt responsible for their “well-being”, sometimes to the point of obsession.

Sophia is a humanoid robot with the appearance of a woman, developed some years ago by the Hong Kong-based company Hanson Robotics. She can reproduce almost-human gestures and speak fluently, I assume now connected to an advanced language model. In 2017, Saudi Arabia granted her citizenship: a publicity and promotional stunt, but above all a paradoxical one in a country that denies women even their most basic rights.

The progress of recent years, especially in language technologies, has been astonishing, even for those of us who do research in the field. It is very likely that you have used applications such as ChatGPT. Its dialogue is so natural that it makes anthropomorphization even easier (yes, I am aware that this is the second time I have written that word, and that it is easier to write than to read). Replika is one such chatbot, designed specifically to be a “virtual friend”. Many of its users end up forming emotional bonds with the program, recounting how it has helped them through difficult or lonely moments.

Attributing human qualities to machines can be advantageous in some cases, but we must never forget that these are systems devoid of consciousness, desires, or emotions (at most, they simulate them), so that we do not create false expectations about what machines can do or blur the boundary between the human and the artificial.

Moreover, when interactions with intelligent systems feel “human,” we risk eroding our understanding of what it really means to be human. Machines can simulate many things, including feelings, but they still lack the emotional, ethical, and cognitive complexities that define our relationships.

So, at least for now, remember that they are just machines.
