in General Factchecking by Newbie (300 points)

As people increasingly use AI chatbots for tasks like reading the news, the question of where these bots get their information becomes pressing. Earlier this year, researchers asked the ten most popular chatbots questions on topics likely to be poisoned with Russian misinformation, and a third of the responses were lies. This happens because the bots have begun learning from Russian propaganda: a Russian news site posts a false statement, allegedly independent news sources then pick that statement up through a process called information laundering, and the laundered material becomes part of what the AI bots learn from.

Because AI chatbots are used so casually, it's easy for people to forget that they only repeat what they've been trained on. So if a wave of coordinated misinformation gets posted across multiple sites, the bots might treat it like real news. This creates a situation where users think they're getting neutral, unbiased answers when they're actually hearing recycled propaganda without realizing it. That's why people are worried: if AI tools keep learning from poisoned sources, misinformation can spread even faster than before.

1 Answer

by Novice (680 points)
For starters, the information in the claim matches the article provided, which is good. The source being The Washington Post is also a point in its favor, since it is known to be a credible news outlet. So the question now is whether other sources back up this information. The Center for European Policy Analysis posted an article back in January that supports the claim as well. It describes how AI models repeat information learned from Russian sources, leading them to give biased and incorrect answers to people who ask about the war between Russia and Ukraine. The article also explains that large language models have no internal understanding of truth, so they treat the information they are given as fact, which poses a major problem, as we are seeing. I'll put that article here: https://cepa.org/article/russian-propaganda-infects-ai-chatbots/

This journal from Sage Journals also supports the claim: https://journals.sagepub.com/doi/10.1177/29768640251377941

This source as well also said something similar: https://www.atlanticcouncil.org/blogs/new-atlanticist/exposing-pravda-how-pro-kremlin-forces-are-poisoning-ai-models-and-rewriting-wikipedia/

Beyond the few examples I provided, there appear to be quite a few other articles reporting the same thing. As such, this claim is true, and the fact that many independent sources say the same thing increases its reliability. Many of them also include exact examples of the false AI responses they received, providing further evidence.
True
