The Ashland Daily Tidings in Ashland, Oregon, has recently been implicated in a troubling practice: publishing AI-generated articles under bylines attributed to people who either did not exist or had no real connection to the region. The issue was uncovered in an Oregon Public Broadcasting (OPB) report, which found that the website's content was being produced at a pace that seemed humanly impossible, because it wasn't human at all. The outlet's staff list included people like Joe Minihane, a U.K.-based journalist who had never been to Ashland, along with reporters allegedly based in South Africa. The use of AI bots, or "zombie bots," to write articles under fake identities appears to have begun after the paper's print edition shut down, enabling the mass production of content that appeared to come from human writers.
While the use of AI in journalism is not inherently illegal, it raises concerns about transparency, accountability, and the potential for plagiarism. A 2023 Congressional Research Service (CRS) report noted that AI-generated content could violate copyright law if it mirrors existing work too closely. Using AI in newsrooms to generate articles, particularly when the output draws on pre-existing content without proper attribution, raises serious ethical questions. Major news outlets such as The New York Times also use AI, but in a more controlled and transparent manner, to assist journalists rather than replace them.
For smaller publications like the Daily Tidings, AI may offer a way to produce content more cheaply and efficiently, but it also risks damaging public trust in journalism. As the trend continues, there are growing calls for transparency about when and how AI is used in newsrooms, to prevent identity theft, deception, and the further erosion of journalistic integrity.