Saturday, October 12, 2024 - 10:52 pm

Do you know what a baby peacock looks like? Artificial intelligence is not and this is a serious problem for the Internet

Peacocks are among the animals that most captivate humans. They were a symbol of beauty in India, where they originated, and later appeared in Persian, Greek and Roman temples and works of art. Today the peacock is one of the most photographed animals, particularly for the male’s iridescent plumage and the fanning of his tail during the courtship dance. What their chicks look like was long less well known, until in recent days they became an Internet phenomenon, with hundreds of images and videos of “baby peacocks”. There’s just one problem: they’re all fake.

These are artificial intelligence hallucinations. The algorithms, with little information in their training data about what peacock chicks actually look like, mix the characteristics of adult males with those of real chicks, which are brown and resemble quail chicks. The result was unreal hybrids that quickly caught users’ attention. Countless accounts dedicated to monetizing content began sharing these creations with captions like “baby peacock moment of the day” or “a little peacock born to shine”.

Some publications warned that the images were made with AI; most did not. None of them pointed out that real baby peacocks look nothing like these images. Although many users noted that the images were AI-generated, many others fell into the trap and asked where they could adopt such peacocks, as in one TikTok post that racked up almost 30,000 comments. But the worst was yet to come.

@ku_13js: “Rare white peacock ~ Good luck 🍀” ♬ original music – ku_13Js

The explosion in popularity of fake baby peacocks transcended social media and spread to Google, whose image search engine favored the artificial intelligence hallucinations over real photographs. “I was curious, so I Googled ‘baby peacock’ to see what it looked like and half of the results were AI. We’re screwed, aren’t we?” one user replied on one of the posts containing fake images.

“I had to search ‘baby peacock’ earlier to prove to someone on Facebook that what they were posting was AI, and about 60% of the images are AI-made. Google is dying. DuckDuckGo is only slightly better. AI is making the internet worse and making people dumber,” commented another. The trend is more pronounced in English-language searches than in Spanish ones.

New EU rules on content generated with artificial intelligence require platforms to clearly label it as such so that it cannot mislead users. EU sources explained to elDiario.es that Google is also obliged to do this. However, no such notice appears in its search engine. This outlet put the question to the multinational, which did not send a response on this specific case.

Google did explain that, in general terms, its systems may “not always show the best images” despite its control measures, but it stressed that this is not a problem arising from the use of AI. “This was true long before the advent of AI-generated content, and people have never hesitated to report us when we show inaccurate, low-quality, or offensive images in search,” the company says.

The “slop” of artificial intelligence

Social media algorithms have been boosting the visibility of AI-generated content for months. The result has been a flood of artificial images and videos promoted by accounts seeking to monetize their content or inflate their follower counts. This content shows people, places or animals created with the technology, but also outright hallucinations: the phenomenon in which an AI combines fragments of information into something that may seem plausible but is completely false.

One of the best examples of this latest trend is shrimp Jesus, which went viral on Facebook when AI began mixing the image of Jesus with elements of nature, producing surreal images. The case of the “baby peacocks” is an example running in the opposite direction, in which artificial intelligence hallucinations fill the gaps in many people’s knowledge.

This avalanche of algorithmically created garbage already has a name: “slop”, roughly translatable as “swill”, a term that could soon become the twin of spam. Its only goal is to capture human attention with realistic fakes or impossible creations. It is so easy and cheap to generate that it is used massively to drive interactions and build a junk AI economy. In many cases the process is fully automated, and the algorithms themselves detect what goes viral, generate hundreds of copies and flood digital spaces with false or meaningless content.

Just as spam can fill an inbox with junk mail, making relevant messages much harder to find, slop can do the same to social media and search engines. What happened around Hurricanes Helene and Milton, which devastated the United States in recent days, has become the first evidence of what is to come.


“I’ve been working in the disaster field for almost 20 years and I can’t think of any other serious disaster where there has been this much misinformation,” a professor of emergency management at the Massachusetts Maritime Academy told the New York Times. The problem for emergency services was that calls from citizens reporting false situations they had seen on social networks wasted time in rescue efforts. The Red Cross warned that these hoaxes discouraged survivors from seeking help, because they believed false claims that neither the organization nor the authorities had visited the area.

One of the main debates in the country has focused on how opportunists have exploited the enormous attention these disasters generate to spread huge quantities of fake images for economic and political gain. On one side, well-known far-right influencers, close allies of Donald Trump and even Republican politicians have spread a fake image of a young hurricane victim hugging a puppy. The picture is very low quality and clearly artificial, but when other users criticized them, these profiles refused to take it down.

Although these kinds of disasters have always spawned Internet hoaxes, the information landscape after Milton and Helene has been different. Not only has AI made creating disinformation easier and faster than ever, but large numbers of people have shared this artificial content knowing it was false, or have refused to remove it even after learning that it was. “I don’t know where this photo came from and honestly, it doesn’t even matter. It’s forever engraved in my mind,” said Republican activist Amy Kremer. “There are people going through things much worse than what this photo shows. So I’m leaving it up, because it is emblematic of the trauma and pain people are experiencing right now,” she said.

Images of the girl supposedly affected by the hurricanes also appeared on Google with no indication that they were AI-generated. The multinational did not respond to elDiario.es’ questions on this subject either.

Other accounts slip slop into their feeds simply to boost their visibility. One documented case is a page called Coastal Views, which before the hurricanes arrived shared AI-generated images of beaches or the northern lights. With the explosion of information about Helene and Milton, it became a hotbed of misinformation, passing off fake photographs as real.

“One of the most sinister and disgusting uses of AI I have found are these fake photos of the Appalachian floods in North Carolina and Tennessee,” lamented the X user who brought them to light. “These posts are intended to generate engagement and ad dollars for the page owner, at the expense of all the immeasurable human suffering occurring in and around Asheville. These pages offer no up-to-date information on the situation, no donation links and no lists of missing people,” he continued.

Slop is a new phenomenon and its impact on users remains to be measured. Meanwhile, the first analyses of its effects on artificial intelligence technology itself have already been published. These systems rely on data extracted from the Internet for their training, but the growing presence of junk content generated by other AIs may lead them to “collapse”, according to a recent study.

Training on synthetic or slop content causes AIs to lose diversity, repeat certain elements or phrases more and more, and handle situations unseen in their training significantly worse. “When they are trained with contaminated data, they end up with a distorted perception of reality,” the researchers point out. A funnel of fake content that can grow bigger and bigger.

—-

This article was updated on October 12 to include Google’s response, sent after the original publication.


Jeffrey Roundtree
I am a professional article writer and a proud father of three daughters and five sons. My passion for the internet fuels my deep interest in publishing engaging articles that resonate with readers everywhere.