Student guest post: Can editors and aggregators coexist?

by andybechtel

Students in JOMC 457, Advanced Editing, are writing guest posts for this blog this semester. This is the sixth of those posts. Emily Evans is a senior at UNC-Chapel Hill majoring in English and journalism (reporting). She was the copy desk co-editor at The Daily Tar Heel for three semesters, and she has interned both on the CNN Wire desk and at People magazine.

“Aggregation” is a big buzzword in the journalism industry these days. And everyone seems to have an opinion about it, whether praising it as a way to personalize news or criticizing it as a way to replace humans with machines.

But what exactly does “aggregation” mean? Does it require editing — or replace it? And is it really the downfall of news as we know it?

The term “aggregation” can be difficult to pin down. It is frequently used to describe everything from human efforts to gather the best of what’s freely available online into one place to computer programs trawling the Internet for any headline that contains a few good keywords.

In the broadest sense, and for the purposes of this blog post, I’ll define aggregation as a newsgathering technique that compiles pieces of information — quotes, facts, pictures, even entire stories — from various sources, almost always online, and brings them together into a new whole. This might be a website featuring headline links with related subject matter, like The Drudge Report, which gathers top political headlines. By linking to the original stories, sites like this provide clear attribution.

Aggregation can also refer to sites like The Huffington Post that purport to create and summarize original stories based on others floating around the web. This second type tends to be much more contentious: Humor website The Onion even once described the Huffington Post as possessing an “aggregation turbine.”

The Huffington Post has, ironically, made headlines of its own over complaints that it aggregates too much, and it has sparked much debate about the ethics and practice of aggregation. New York Times Magazine writer Bill Keller unleashed an attack on The Huffington Post, claiming its content was nothing but “celebrity gossip, adorable kitten videos, posts from unpaid bloggers and news reports from other publications.” This, of course, prompted a retaliation from Arianna Huffington herself, who cited the Columbia Journalism Review’s praise of the site’s work in defense of the journalists she employs.

Clearly, even at the top levels of the journalism industry, there’s disagreement about aggregation, but equally clear is the fact that it’s not going away. Even the Washington Post has gotten into the game with its Trove service that aims to personalize news for each reader. Its free Social Reader app lets users see which stories their Facebook friends have read. And a company called News.me just launched an aggregating app meant to corral users’ news interests and social networking into a streamlined, personalized feed.

Because aggregation is so nebulous and hard to define, it’s difficult to get everyone to agree on a common set of standards to govern its use and practice. That makes it difficult to answer many of the questions it raises, such as how much borrowing from a story is too much.

Even media law has yet to catch up to aggregation: Is it OK to take an entire story if the original author is credited? What if the new story takes away the original author’s pageviews, depriving that author of revenue for the work? What about aggregating done entirely by machine — is that even ethical?

I think that there is a place for aggregation in the journalism of today. But in order for it to be effective and accurate, it needs to be treated just like any other type of story.

Human aggregators need to be editors in the truest sense — curators of the words, pictures or even tweets they combine to craft a narrative, or of the links they select for a news website. I see many opportunities for multimedia aggregation stories: Sites like Storify, which lets users of all types (from readers to media outlets) search the web and social media and build a story from what they find, are great examples.

But care must be taken. It is just as easy to take a quote out of context from someone’s Twitter feed or Facebook timeline as it is to misquote someone in an interview, and the same goes for factual errors. Both can be nearly impossible to correct once a story has traveled far and wide across the Internet — and perhaps been re-aggregated along the way. In an ideal world, aggregated stories and sites would involve a collaboration among seasoned reporters, editors and graphic designers, just as a front-page package would, to ensure accuracy and strength in storytelling.

As far as computer-based aggregators go, there is something to be said for the benefit of personalized news, and there’s certainly a market for it. So long as the compiling credits (and drives traffic to) stories’ original authors, and so long as readers are clearly informed that a computer is doing it, such aggregators are OK in my mind.

But I don’t think they’re the future of news — or even the future of most aggregation. Aggregation has great potential to eliminate redundancies, expand creativity, encourage collaboration and offer a broader worldview.

Aggregation at its core is not so different from traditional reporting or from using wire services to compile a story. For traditional media outlets looking to keep up with the times and take advantage of technology, it makes a lot of sense as one more tool in the journalistic tool kit. It just requires a great editor.
