1 like 0 dislike
in General Moderation by Newbie (220 points)
In the past couple of months, posts on many social media platforms have claimed that some major news websites are publishing AI-generated articles without labeling them as AI-written. These posts often point to generic "breaking news" stories that lack an author byline or contain odd phrasing, frequently from lesser-known outlets. Some of this talk has been amplified by users worried that AI might erode journalism standards.

While some of this may be true, the evidence points mainly to smaller sites: many small sites, and some news websites, are using AI-generated content. For example, a study from Cornell shows that synthetic news articles increased drastically on smaller sites between 2022 and 2023, while many mainstream news sources still showed much lower usage of AI content. The claim is therefore misleading: it conflates the real question of whether AI-written content is spreading with a broad assertion that major outlets are doing it in secret and at scale, when evidence for that is lacking.

1 Answer

0 like 0 dislike
by Newbie (310 points)

This headline is somewhat misleading. Data suggests that AI-generated content is much less common in major news articles than in opinion pieces and smaller news outlets. Research led by University of Maryland computer scientists found that more than 9% of articles in U.S. newspapers contain some form of text created by AI. However, only 1.7% of articles in papers with a circulation of more than 100,000 were partially or fully AI-generated. They also found that AI-generated content is more common on opinion pages than on the news pages themselves, with this data pulled from articles in The New York Times, The Wall Street Journal, and The Washington Post.

Additionally, they found that this AI content is most common in smaller, local news outlets. The researchers suggest this could stem from "collapsing news economies" in these smaller communities, along with news deserts, where access to credible local news coverage is limited. This creates trust problems for readers, since it becomes hard to know what is true, and it undermines the truth-telling and transparency that journalism depends on.

Yes, AI usage is apparent throughout the journalism industry. But most AI-generated content comes from smaller news outlets or opinion articles, which show significantly higher rates than hard-news articles at major papers. In my view, the bigger concern is that 95 percent of these articles use AI without disclosing it.

https://today.umd.edu/report-ai-use-in-newspapers-is-widespread-uneven-and-rarely-disclosed

Exaggerated/Misleading
