
No, AI Is Not Going to Wipe Out Humans. At Least According to the Most Recent Scientific Study

AI has evolved at a rapid pace over the past two years, but not enough to pose a risk to humans.

AI-generated image of a robot in a classroom

GPT-4 and Claude are just the tip of the iceberg of what artificial intelligence has become for humans. They make our daily lives easier, at work and in nearly every other area that matters. In this context, we have often heard that AI could be a danger to humanity, but is that true? According to a study on the ability of AI to learn new skills on its own, it is not a danger at all.

Emergent capabilities and the danger they supposedly pose

A recent study questions the existence of so-called emergent abilities in large language models (LLMs). Emergent abilities are those an AI could acquire without any specific training, simply by interacting with its environment. Until now, they had been interpreted as a danger, since they are abilities obtained unexpectedly that could be harmful to humans. However, it now seems that this is not the case.

The researchers conducted over 1,000 experiments across different model types, sizes, and tasks to rigorously examine this phenomenon. Their results leave no doubt about the supposed danger: once in-context learning and instruction tuning are accounted for, they found no evidence of emergent functional language abilities, such as human-like reasoning. For this reason, they believe that milestones such as artificial general intelligence (AGI) are unthinkable today.

The apparently emergent abilities can be explained by a combination of in-context learning, model memory, and linguistic knowledge, rather than by the model acquiring genuinely new skills. In the case of instruction-tuned models, the authors propose that this tuning allows them to take advantage of in-context learning implicitly, rather than developing new abilities. You may well have used an LLM and felt that the model had evolved based on what you taught it, but the truth is that it has not: it is simply applying the abilities it already has more effectively.
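To make the idea of in-context learning concrete, here is a minimal sketch, purely illustrative and not taken from the study: the "teaching" happens entirely inside the prompt, as a handful of solved examples, and no model weights are ever updated. The word list and prompt format are invented for this example.

# Minimal sketch of in-context (few-shot) learning: the "teaching" lives
# entirely in the prompt text; the model's weights are never updated.
# No API is called here -- the resulting prompt could be sent to any LLM.

few_shot_examples = [
    ("cold", "hot"),
    ("small", "large"),
    ("slow", "fast"),
]

def build_prompt(query: str) -> str:
    """Build a few-shot prompt: solved examples first, then the new case."""
    lines = ["Give the opposite of each word."]
    for word, opposite in few_shot_examples:
        lines.append(f"Word: {word} -> Opposite: {opposite}")
    lines.append(f"Word: {query} -> Opposite:")
    return "\n".join(lines)

print(build_prompt("dark"))
# A model that completes this prompt with "light" is reusing patterns it
# already knows, guided by the examples in context, not learning a new skill.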

These results indicate that the capabilities of LLMs should not be overrated, and they help explain why models can excel at some tasks and fail at others. The study achieves its aim by emphasizing that we are not in a situation in which this technology poses a problem for us, and that everything indicates it is not going to move in that direction. Scaling current language models will not lead to new, unpredictable capabilities; quite the opposite.

In fact, the study ends on a very interesting note by pointing out that we need to be more critical about how we analyze the capabilities and characteristics of these technologies, since we normally tend to prejudge them as if they were far more advanced than they really are, which is, after all, understandable.


Raquel Jimenez
Experienced Writer Specializing in Professional Articles and Blogs