
As people increasingly turn to AI chatbots for things like reading the news, the question of where these bots get their information becomes more pressing. Earlier this year, researchers asked the ten most popular chatbots questions on topics likely to be poisoned with Russian misinformation, and roughly a third of the responses repeated the false claims. This happens because the bots have begun learning from Russian propaganda. The process works through what is called information laundering: a Russian news site publishes a false statement, supposedly independent news outlets then pick it up and repeat it, and that laundered content ends up in the material the AI bots are trained on.

Because AI chatbots are used so casually, it's easy for people to forget that they only repeat what they've been trained on. So if a wave of coordinated misinformation gets posted across multiple sites, the bots may treat it like real news. This creates a situation where users think they're getting neutral, unbiased answers when they're actually hearing recycled propaganda without realizing it. That's why people are worried: if AI tools keep learning from poisoned sources, misinformation can spread even faster than before.

