Viewpoint: Written by a Human

Last week, Google announced that it would give out a total of nearly €22m (£19m) to 107 projects in 27 countries across Europe as part of the third round of its Digital News Initiative Fund, which supports quality journalism. This coincided with Google handing £622,000 in funding to the Press Association (PA) for its robot journalism project, the Reporters and Data and Robots scheme.

Setting aside the obvious point of how bad robot journalism could potentially be for someone like me, the two areas in which Google has spent money seem very contradictory.

The Digital News Initiative Fund was introduced to help support high-quality journalism through technology and innovation in response to the rise in ‘fake news’ – emphasis on the high-quality part. Technologies supported thus far include tools to ensure news is thoroughly and correctly fact-checked, support video production, improve reader engagement, and more.

Now, you may think that the introduction of robots in journalism would only support the work of Google’s Digital News Initiative Fund, but I would have to disagree.

Going back to the point of high-quality journalism, studies have shown that the quality of automated story writing, for the most part, falls below the industry standards – which says a lot when this is an industry where, even amongst some of the biggest publications, those standards aren’t always so high.

Studies are all well and good, but you want some real examples of robots failing at journalism, right? Of course you do.

Just last month, the LA Times – which uses AI software to send out alerts from the US Geological Survey – ended up reporting an earthquake in California that never happened. Well, not in this century at least. In this case, the earthquake reported actually occurred in 1925 and, for an even bigger fail, the report carried a date of 29 June 2025 (precisely 100 years after the incident, and a date that has not occurred yet). Most other publications, which still rely solely on humans, would likely have seen the date and had the common sense to confirm the incident with the relevant authority first – a robot simply doesn’t have this common sense yet.

I am not completely ruling out AI ever having this kind of common sense, nor am I saying it will never have the ability to report news. The issue is this: with all the uproar and panic surrounding fake news in the industry, is this really the right time to actively fund robot journalism? The PA scheme itself will continue to use human journalists, who will draw on official open data sources to automate 30,000 stories a month – a large feat even with human/AI collaboration, and one that could still result in many mistakes.

In future, as the tech improves, these robots may begin to work independently, and that is where robot journalism will really be put to the test. It has been suggested that AI could be used across the board for short, instant breaking news alerts that can eventually be built upon by human journalists – the same kind of thing that failed the LA Times last month – as opposed to allowing a robot to publish full reports. This is something I could support, if the tech improves enough that we can avoid another non-existent earthquake.

As of right now, though, I still can’t help but feel that Google has contradicted itself: its part in the battle against fake news sits at odds with its desire to further automation.