As people increasingly turn to AI chatbots for things like reading the news, the question of where these bots get their information becomes more pressing. Earlier this year, researchers asked the ten most popular chatbots questions on topics likely to be targeted by Russian misinformation, and about a third of the responses repeated false claims. This happens because the bots have begun learning from Russian propaganda. The process works like this: a Russian news site posts a false statement, that statement is then picked up by allegedly independent news sources through a process called information laundering, and the resulting articles become part of the material the AI bots learn from.
Because AI chatbots are used so casually, it's easy to forget that they only repeat what they've been trained on. If a wave of coordinated misinformation gets posted across multiple sites, the bots may treat it like real news. This creates a situation where users believe they're getting neutral, unbiased answers when they're actually hearing recycled propaganda without realizing it. That's why people are worried: if AI tools keep learning from poisoned sources, the misinformation can spread even faster than before.