- Websites filled with AI-generated content are popping up thick and fast.
- These websites serve programmatic advertising from the likes of Google Ads.
- One website seen by NewsGuard was generating 1,200 articles a day using AI.
Websites that use artificial intelligence to create “news” content are serving programmatic advertising from some of the world’s biggest brands, signalling a growing problem for newsrooms and journalists.
Research published by NewsGuard this month highlights how “Unreliable Artificial Intelligence-Generated News” (UAIN) websites are leveraging the speed at which AI can produce content to take advertising revenue from the likes of Google Ads and similar programmes.
In its Misinformation Monitor for June, NewsGuard reports that the number of UAIN websites it has detected climbed from 49 to 217, with many of these websites funded solely by programmatic advertising. One website the organisation observed generated 1,200 articles a day, and because these websites are easy to spin up, exclusion lists can’t keep pace.
The trouble here is that these UAIN websites feature advertising from the likes of Google Ads; in fact, NewsGuard says that 356 of the 393 ads it observed were served by Google Ads. Worse still, much of the content was straight-up plagiarised. To be clear, setting up a Google Ads account is simple, and one need only embed code in a website to start earning revenue from that advertising. Advertising served in this way often employs an algorithm to showcase adverts relevant to the viewer.
“Some UAIN websites on which ads for major brands appeared seemingly used AI tools to rewrite articles, often without credit, from mainstream news outlets. For example, ads for a U.S. apartment rental site, a global e-commerce site, a Japanese office printer, and a Chinese electronics manufacturer appeared alongside an article published by UAIN AlaskaCommons.com that appeared to be an AI-rewritten version of a story published by the U.S. edition of the British tabloid The Sun. The article included the same pictures and similar wording,” writes NewsGuard.
Thankfully, NewsGuard found only two UAIN websites promoting misinformation, in the form of unproven and potentially harmful natural health remedies. Of course, this could change: as AI improves, UAIN websites could push misinformation on a massive scale.
But what makes this all the more alarming is that UAIN websites could eat into the advertising revenue of legitimate websites running Google Ads. The likes of Google and Facebook already dominate the advertising market, and if advertising spend is being drained away by UAIN sites, legitimate websites are left with an even smaller slice of an ever-dwindling pie.
At present these websites can be spotted fairly easily. NewsGuard reports that the articles it found contain error messages typical of chatbot responses, signalling a lack of editing. What is concerning is that in future, news organisations could make use of UAIN-style content generation with little human oversight. This is a concern because AI doesn’t have the ability to reason or form an original thought; it simply regurgitates information based on the prompt it is fed. Worse still, these AI platforms can also simply make things up, as a lawyer in the US recently discovered.
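To illustrate the kind of tell that NewsGuard describes, a crude filter could simply scan articles for unedited chatbot boilerplate. This is a hypothetical sketch, not NewsGuard’s methodology, and the phrase list below is an assumption for illustration only:

```python
# Hypothetical sketch: flag articles that contain tell-tale chatbot
# residue of the kind NewsGuard reports finding on UAIN sites.
# The phrase list is illustrative, not NewsGuard's actual criteria.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "my knowledge cutoff",
]

def looks_unedited(article_text: str) -> bool:
    """Return True if the article contains obvious chatbot boilerplate."""
    text = article_text.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

print(looks_unedited("As an AI language model, I cannot provide..."))  # True
print(looks_unedited("The council approved the new budget on Tuesday."))  # False
```

A real classifier would need far more signals, of course; the point is simply that unedited machine output leaves fingerprints that are trivial to match.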
But in spite of all of these problems, AI is already finding its way into newsrooms. Technology news outfit CNET is making use of AI in its newsroom, prompting some at the publisher to consider unionising with the Writers Guild of America. The publisher says its use of AI is limited to certain functions, such as writing up spec sheets, but others may not be as cautious in their use of the technology.
And as publishers see their revenue eaten into by UAIN websites, they may just have to use AI to create content.
How we address this problem isn’t clear, especially as lawmakers often move too slowly to address emerging tech in any meaningful way, and the AI trend doesn’t appear to be any different.
The creators of AI also don’t appear to be slowing down development unless their technology leads to something catastrophic happening to the human species.