
Artificial intelligence is not like the atomic bomb

At the start of 2023, the Future of Life Institute published an open letter asking artificial intelligence labs around the world to immediately pause, for at least six months, the training of AI models larger than those already in existence, taking OpenAI's GPT-4 as the reference. The letter urged governments to step in and establish a moratorium. It was signed by thousands of people, including names as prominent as Yoshua Bengio, winner of the 2018 Turing Award, Steve Wozniak, co-founder of Apple, and the well-known historian, thinker and writer Yuval Noah Harari. I published my opinion on the subject shortly afterwards in this same outlet, so I will not comment on it here.

The letter does not warn of an imminent existential danger to the human species, but many believe that such a danger could materialize and that AI could spiral out of our control, with serious consequences for humanity. In the summer of 2022, the statements of a Google engineer, Blake Lemoine, were widely commented on. He claimed that the LaMDA model (Language Model for Dialogue Applications) developed by his company was "sentient" and that, therefore, its "wishes" had to be respected. Google denied the claims and fired him (Lemoine, that is, not LaMDA, which was neither sentient nor under contract with the company).

You might think that Lemoine was not of sound mind if he really believed what he was saying, but apparently well-informed and far-sighted people, such as Sam Altman, CEO of OpenAI and creator and marketer of ChatGPT, declared in May 2023 that his worst fear is that AI will go wrong, because "if it goes wrong, it can go very wrong". Beyond the fact that this sounds like a phrase from Rajoy in his moments of greatest linguistic creativity, I do not know whether Altman was referring to things going wrong for everyone or just for OpenAI, which will close this year with some $5 billion in losses. In any case, he seemed genuinely worried when he said it.

Although these and other voices occasionally warn us about AI and point to possible existential risks to humanity if we continue to develop it, nothing on the foreseeable horizon, and certainly nothing in the short or medium term, suggests that this will be the case. AI does not pose the kind of risks that nuclear energy did in its early days, when almost everything about it was unknown. Consider the Manhattan Project, created to develop the atomic bomb. Before the first test, known as "Trinity", some of the scientists involved in the project feared that the bomb's explosion would cause the nitrogen nuclei in the atmosphere to fuse, triggering a chain reaction that would destroy life on our planet. The test was nevertheless carried out on July 16, 1945. Theoretical calculations indicated that this atmospheric ignition was extremely unlikely, but until the actual test took place, the doubts and fears were never completely dispelled.

I do not want to underestimate the consequences of AI getting out of control, to use a colloquial expression, but if that happens it will not be because the AI becomes conscious. Nor because it will enslave us, as we have done with the rest of living beings. Certainly not yet. The problems we face with the development and use of AI are very different, apparently less transcendent, but worrying because they are real and current. These problems have at least two points in common: it is in our hands to minimize or even eliminate them; and, in general, they are not exclusive to AI, which did not create them, although it can amplify and accelerate their impact. A frequently cited example is AI bias. This problem is not new, although it may now become more evident and cause us greater concern. In fact, bias in Internet content has been discussed for many years. The Internet reflects a partial reality, closely identified with the profile of the companies that control AI and with rich countries, especially the English-speaking ones. The pollution, or even toxicity, of Internet content is not new either, nor is the systematic violation of copyright, which brings to the table other particularly worrying issues that the emergence of generative AI seems to amplify.

Smart technologies have a great social and economic impact, and not everything about their development and application is positive. In addition, companies are under enormous pressure to monetize the huge investments being made in AI, which may tempt them to skip steps on the way from the research lab to the market. We must keep this in mind and stay attentive to what is happening, and even try to anticipate possible harms and excesses. Just as the research, development, manufacturing and marketing of a drug are subject to strict laws and controls, the same must be true of AI, especially for potentially high-risk systems, and such systems do exist.
