Guest post: News organizations that use AI need updated ethics policies

Chris Rogers is an editor and writer with a background in academic and technical publishing. He is also the author of a novel called “Starlight on Silver.” Rogers holds a master’s degree in technology and communication from UNC-Chapel Hill. This post is based on his master’s thesis, which explored the use of artificial intelligence by news organizations.

Journalism is changing. This simple truth is routinely downplayed, but the news industry has experienced massive shifts in the past decade. Newspaper circulation is down at the local level and unstable nationally, and the internet, now consumers' preferred mode of news consumption, has changed everything.

The variety of options for digital media consumption has increased demand for news content, and the prevalence of big data has made it nearly impossible for writers to keep up with the flow of news. Instead, news outlets have turned to artificial intelligence, or AI, for help. AI is an appealing solution because it can not only collect data but also find patterns in those data and even write the articles themselves. At many major news outlets, this practice is now commonplace.

Putting the technology to work was initially simple. As artificial intelligence became adept at handling numbers, numerical stories seemed like a natural fit.

Using programs such as Wordsmith, a natural language generation platform developed by Automated Insights in North Carolina, companies have tasked artificial intelligence with writing straightforward accounts of sporting events, business news, fantasy sports league outcomes and so on. The Washington Post has assigned a machine to write more than 800 articles. The Los Angeles Times uses a bot reporter to craft stories about earthquakes.
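To make the mechanics concrete, here is a minimal sketch of the template-based approach such bots generally take: structured data arrives from a feed and is slotted into prewritten sentences. The field names and template below are illustrative assumptions, not the actual code of Wordsmith or the Times' earthquake bot.

```python
# Illustrative sketch of template-based story generation, the general
# technique behind automated news bots. The template and field names
# are hypothetical, not taken from any real system.

QUAKE_TEMPLATE = (
    "A magnitude {magnitude} earthquake struck {distance} miles "
    "{direction} of {place} at {time}, according to the U.S. "
    "Geological Survey."
)

def write_quake_story(event: dict) -> str:
    """Fill the prewritten template with structured data from a feed."""
    return QUAKE_TEMPLATE.format(**event)

if __name__ == "__main__":
    # Hypothetical structured data, as might arrive from a quake feed.
    event = {
        "magnitude": 4.2,
        "distance": 6,
        "direction": "northwest",
        "place": "Ridgecrest, California",
        "time": "8:04 a.m. Tuesday",
    }
    print(write_quake_story(event))
```

The point of the sketch is that the machine never verifies anything: whatever the feed supplies, true or false, goes straight into publishable prose.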

But new technology can bring new problems. Journalism is fraught with legal and ethical challenges.

In a world already plagued by accusations of "fake news" and concerns about credibility, the timing of the introduction of robot reporters is critical. Avoiding libel, plagiarism and other pitfalls is a core part of the job. While a machine might get the words right, the industry must be prepared for the minefield of consequences an otherwise naïve robot may stumble into.

The responsibility then falls to the news organization employing the machines to ensure that ethical standards are followed. Ethics guidelines must be updated to cover these technological changes, yet the vast majority of them haven't been.

Nearly every prominent news organization has written an ethics manual of some kind, both for journalists to abide by and for readers to understand what to expect. This is to ensure that only factual information is being disseminated and that human subjects are treated with care and thoroughness.

The problem is that few news organizations address changing technology in their ethics guides, and none of them addresses the prevalence of AI reporters. While this may seem a simple omission, bot reporters may collect false or misleading information about human subjects and then craft an article that goes entirely unchecked.

Any false article will reflect poorly on the organization and may bring legal consequences and financial penalties. In other words, such mistakes are costly and must be avoided.

Newsrooms must be prepared for the reality that machines, though efficient, aren’t perfect. Machines are created by humans, and they are prone to programming errors that may result in significant problems.

If a news organization publishes an AI-written story, it is responsible for it. Current ethics policies are sorely lacking and out of date in this regard, leaving journalists with no clear standard for what to expect of an AI colleague. Additionally, ethics policies are often not made publicly available, creating an avoidable transparency problem. As long as newsrooms insist they are delivering the truth, there must be no confusion about who, or what, is writing an article.

The solution is simple and necessary. There must be a uniform, industry-wide set of ethics standards regarding artificial intelligence and its use in the newsroom. These standards can take the form of an ethics guide made publicly available on a news organization's website.

This guide would hold AI reporters to the same ethical standards as their human counterparts and help guarantee that any story crafted by an AI is entirely factual. It would also be prudent for the new standards to outline a way forward for addressing legal concerns while also explaining to the public why the newsroom trusts its AI reporters.

Newsrooms are right to employ any and all technology, but they must get out in front of potential issues before they become problems. New ethics standards would go a long way toward restoring the trust between reporter and reader.