When commenting on political advertising on Facebook and Twitter, I cited Jeff Jarvis’s Unpopular Decisions, but never responded to his comments on the Facebook news tab. I generally follow Jarvis’s blog precisely because I find his commentary on news insightful and well thought-out, but this is a rare instance where his passion for the ideal of journalism seems to eclipse his typically careful analysis. In doing so, he missed a great opportunity to use his personal thoughts and opinions to publicly evaluate, and potentially update, the scope and work of the News Integrity Initiative (NII) that he helped launch.
Criticisms about what sites are included in Facebook’s News tab sound like exactly the type of thing that the News Integrity Initiative is designed to help with. It’s intended (in theory) to provide a resource for distinguishing accurate and reputable news sources as opposed to unhinged conspiracy theory sites. It’s also the type of resource that, if built in public with publicly-accepted (and objectively evaluatable) criteria for integrity, can help people get past their personal feelings and biases and rest assured that while they may not agree with a channel’s editorial position (or reporting angles), if a story appeared in that site’s news section then it was likely factually accurate.
So, now that Facebook has announced that Breitbart will be part of the News tab, and Jarvis has written that he has problems with that, what’s the NII’s verdict? I have no idea – there isn’t one listed. Granted, it’s a new organization, and Jarvis has stated that they’re still in the process of figuring out what should be taken as signals of quality. That said, he fails to acknowledge that this decision is precisely the type of concern that shows why the NII is necessary. Either the site emits enough signals to be regarded as “reliable” (meaning anything it labels as a news article should be considered accurate information and reliable enough to base your decision-making on), it emits enough signals to be regarded as “inaccurate” (regardless of what they call their content, you should treat it as supermarket tabloid material unless otherwise noted), or the NII should admit there aren’t enough signals to make a determination either way (meaning you shouldn’t trust the reporting unless you see it in an “accurate” source).
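As a rough sketch of that three-way verdict, here’s what the logic could look like in Python. Everything here is hypothetical – the NII hasn’t published its signals or thresholds, so the signal counts and the cutoff of five are invented purely for illustration:

```python
from enum import Enum


class Verdict(Enum):
    RELIABLE = "reliable"          # anything labeled news can be treated as accurate
    INACCURATE = "inaccurate"      # treat content as supermarket tabloid material
    UNDETERMINED = "undetermined"  # don't trust without an "accurate" source's corroboration


def classify_source(quality_signals: int, warning_signals: int,
                    min_signals: int = 5) -> Verdict:
    """Return a verdict only once enough signals of one kind accumulate.

    The signal counts and min_signals threshold are placeholders; the
    real work of the NII is deciding what counts as a signal at all.
    """
    if quality_signals >= min_signals and quality_signals > warning_signals:
        return Verdict.RELIABLE
    if warning_signals >= min_signals and warning_signals > quality_signals:
        return Verdict.INACCURATE
    return Verdict.UNDETERMINED
```

The point of the third branch is the part Jarvis skips: until there’s enough evidence either way, the honest answer is “undetermined,” not a gut-level ruling.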
Almost immediately into his criticism of Breitbart as a source in Facebook’s news tab, Jarvis states “This is outdated…but here’s a good roundup from Rolling Stone of awful things Breitbart has published. I’d say 10 strikes and you’re out.” None of this is a ruling on accuracy or the quality of their “reporting” process. I’m sarcastically quoting here because I haven’t read the stories, but the very headlines indicate that they’re opinion pieces. Multiple articles were written by Milo Yiannopoulos, whose job description can best be described as “professional troll.” I’m not going to speak to Breitbart’s separating or labeling of opinion vs. news articles – I don’t read much of what they publish, so I have no idea how well they do with that. I’m assuming the presence of opinion pieces on a news site isn’t disqualifying, otherwise a lot of media organizations are going to fail the NII’s integrity test.
Another issue I have with Jarvis’s pronouncement that Breitbart doesn’t belong, aside from the lack of evidence about quality signals, is that he’s also effectively stating that there comes a point where a site is incapable of improving. In other words, standards and practices can’t improve at media organizations, so there’s no point in an organization trying to get better. In that case, I guess all the NII is really doing is identifying which news organizations should just go ahead and close down. I wonder, does this also mean that organizations that were accurate and reliable at one point can never devolve into inaccurate and untrustworthy sources?
Jarvis also writes that Facebook allowing “noxious speech” without at least posting that it disapproved means users must presume Facebook condones the speech. Jeff Jarvis knows full well the volume of content that gets submitted to sites like Facebook every second. He at least appreciates the difficulty of trying to analyze content at that volume correctly. Claiming that Facebook implicitly condones all content on their site unless they say otherwise implies a greater level of involvement on Facebook’s part with every post than they actually have, and I know Jarvis knows better. I’m not saying they should ignore the rules on promoted posts (they shouldn’t, as I’ve already mentioned), but for someone who passionately defends Section 230 the way he does, the claim that Facebook condones every piece of content on its site unless otherwise indicated is wildly out of character.
It’s important to remember, Facebook views itself as a company that offers a platform. They really don’t want to take more of a stand on the content that appears on their site than they absolutely have to. They don’t ban the things they do because such content is “noxious,” distasteful, or even morally reprehensible. They do it because if they let it stay up on their site it would be bad for their brand. Facebook has to balance taking down stuff that’s so horrible everyone would leave if they saw it against not taking things down, because users don’t stay with sites where they can’t post stuff. Facebook, like all services, wants users. Getting people to be willing to use the site means they only ban content that’s near-universally agreed to be despicable. Anything else would cause people to question Facebook’s legitimacy as a platform, and if people don’t view Facebook as being a (mostly) neutral place to post things, they’ll stop interacting with the application, and that means Facebook would start to die (don’t get your hopes up – it’d be an incredibly slow process, which means they’d have plenty of time to salvage everything).
The irony in all of this lies in Jarvis’s insistence that “…at some point, we must trust the public, the electorate, ourselves.” Yet in demanding that Facebook post notices condemning the comments people make on its application, Jarvis is refusing to grant us that same courtesy – to assume that we can judge for ourselves what is “noxious” and what isn’t, and react appropriately. Instead, we apparently need sites telling us that “this post is bad.” Either a post violates the official policy of the application or it doesn’t. If it does, take it down. If not, then the public deserves the same trust in its ability to respond to tasteless posts that he’s asking it be shown in being able to handle untrue political ads.
So about that News Integrity Initiative
So while the NII is putting together its methodology, it’s a good time to think about what the outcome of its efforts should be. Since figuring out the most reliable signals of future quality is going to take a lot of research to filter out correlation from causation, this is all speculation without any real specifics, but I think these ideas offer some good starting points for future consideration.
First of all, it should emphasize best practices and policies. I’m willing to bet that an organization’s (or individual’s, see below) set of practices and policies is going to prove to be the best predictor of report accuracy. These encapsulate all the steps to ensure that facts are correct and complete, and represent the filter process that kicks crap out of the pipeline. There are a lot of different options out there, so identifying the collection that, taken together, maximizes accuracy is going to take years, but hopefully the NII can come up with an early set that has a positive impact on accuracy pretty quickly.
Another area that I hope the NII focuses on is different standards for different-sized media operations. A new news blog with only 1 person working on it doesn’t have resources to devote to accuracy comparable to a large organization’s. That shouldn’t automatically mean they aren’t accurate, just that it’s unreasonable to expect them to be able to go through the exact same set of steps as CNN or The Washington Post. Ideally, the NII won’t develop just 1 list of best practices, but a series of lists of best practices for different-sized organizations. Otherwise, the NII’s work runs the risk of falling into the same trap of favoring incumbents and disadvantaging new competitors that a lot of regulation falls into.
The NII should also have some means of attempting to quantify the impact of a mistake in a news report. I know the idea is to avoid making mistakes, but this is a system with people at its core and people make mistakes – and all mistakes need to be corrected. But there’s a big difference between corrections reading “We incorrectly listed Mr. X’s title as Assistant Director of The Thing. It’s actually Deputy Director of The Thing,” or “In our interview series with millennials, we incorrectly listed Ms. Y’s age as 29, she’s actually 28,” and corrections reading “We incorrectly listed Mr. X’s title as Vice President of The Thing. It should be Deputy Assistant Staffer of The Thing,” or “In our interview series with millennials, we incorrectly listed Ms. Y’s age as 29, she’s actually 83.” The first 2 are minor tweaks that change nothing about the substance of the story. The latter 2 wildly change the context and authority you can ascribe to the quotes, and call a lot more of the report’s accuracy into question, even if the publishing organization followed all of the NII’s guidelines. They may have tried to be accurate, but they failed, and their reliability metrics should suffer accordingly.
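Weighting corrections by impact could be as simple as the sketch below. To be clear, the severity weights and the scoring scheme are entirely made up for illustration; the hard part, which the NII would actually have to solve, is deciding which bucket a given correction falls into:

```python
# Hypothetical severity weights for corrections.
MINOR = 0.1  # title off by one rank, age off by a year: substance unchanged
MAJOR = 1.0  # correction that changes the story's substance or a source's authority


def reliability_penalty(correction_severities: list) -> float:
    """Aggregate the severities of a report's corrections into one penalty."""
    return sum(correction_severities)


def adjusted_score(base_score: float, correction_severities: list) -> float:
    """Reduce an outlet's reliability score by its correction penalty,
    never dropping below zero. A pile of minor fixes barely moves the
    needle; a couple of substance-changing errors should hurt."""
    return max(0.0, base_score - reliability_penalty(correction_severities))
```

Under this scheme, the “Assistant Director vs. Deputy Director” correction barely dents a score, while “29 vs. 83” takes a real bite, which matches the intuition above.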
One potential output of the NII’s work could be a tool that newsrooms could use to ensure accuracy and quality. Similar to how automation pipelines and tests are used in software development, I’d like to see the NII build a tool that lets news organizations build a report pipeline that encompasses the best practices for report accuracy for that organization’s size. Organizations could use this tool to ensure every report goes through the process, helping prevent things from falling through the cracks, and creating an auditable log of who signed off on each step, ensuring accountability should an error ever actually slip through. Use of this tool could “phone home” to the NII itself, as a signal that an organization has a workflow built on top of best practices for quality, and that it’s being used.
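The core of such a tool is small. Here’s a minimal sketch of the pipeline-with-sign-offs idea; the class names and the particular steps (“fact-check,” “source verification”) are my own invention, standing in for whatever best practices the NII actually lands on:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class Step:
    """A single accuracy check in the report pipeline."""
    name: str
    signed_off_by: Optional[str] = None
    signed_at: Optional[datetime] = None


class ReportPipeline:
    """A checklist of accuracy steps for one story, with an auditable
    log of who signed off on each step and when."""

    def __init__(self, story: str, step_names: List[str]):
        self.story = story
        self.steps = [Step(name) for name in step_names]

    def sign_off(self, step_name: str, editor: str) -> None:
        """Record that an editor approved a step, timestamped for the audit log."""
        for step in self.steps:
            if step.name == step_name:
                step.signed_off_by = editor
                step.signed_at = datetime.now(timezone.utc)
                return
        raise ValueError(f"unknown step: {step_name}")

    def ready_to_publish(self) -> bool:
        """A story only ships once every step has a named sign-off."""
        return all(step.signed_off_by is not None for step in self.steps)
```

The “phone home” signal would just be this audit log (or a summary of it) reported back to the NII, showing not only that a best-practice workflow exists, but that it’s actually being exercised.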
This leaves us with the question of what to do about the ultimate accuracy of an organization’s news reports. We absolutely want to measure it – the biggest indication of quality and reliability is “are the contents of their news reports correct?” But nothing covered so far stops news organizations from saying they’re following established best practices to game the NII and juice their reputation in hopes of being treated as a more legitimate news source than they actually are. So while it’s easy to say a measurement of accuracy should be considered an output of the NII, it’s also useful as a checksum on news organizations. “If an organization follows these practices, they should be overwhelmingly accurate. In fact, the overall average accuracy for organizations following these guidelines is {x}%.” Now we can take a specific organization’s accuracy and compare it to the overall accuracy rating of organizations with similar practices and size. Is it consistent with what we expect? Is it significantly lower? If so, perhaps they’re not as committed to best practices as they say, or at the very least not implementing them correctly. And that’s a red flag that you should likely avoid them until they clean up their act.
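The checksum idea above amounts to an outlier test: does this organization’s measured accuracy sit where its claimed practices predict it should? A rough sketch, with the two-standard-deviation threshold chosen arbitrarily for illustration:

```python
def accuracy_red_flag(org_accuracy: float, cohort_mean: float,
                      cohort_stddev: float, threshold: float = 2.0) -> bool:
    """Flag an organization whose measured accuracy falls well below the
    average for similarly sized orgs claiming the same best practices.

    A large gap suggests the org isn't as committed to those practices
    as it says, or isn't implementing them correctly.
    """
    if cohort_stddev <= 0:
        # Degenerate cohort: fall back to a simple below-average check.
        return org_accuracy < cohort_mean
    deviations_below = (cohort_mean - org_accuracy) / cohort_stddev
    return deviations_below > threshold
```

So an outlet at 80% accuracy in a cohort averaging 95% ± 3% gets flagged, while one at 94% doesn’t; the comparison is always against peers of similar size and claimed practices, not against the whole industry.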
The biggest challenge for the NII, and a problem I’m glad I don’t have, is going to be coming up with a measurable, widely-accepted definition of accuracy and reliability. It’s more than a question of “did this report contain any information that wasn’t correct?” The report should also be complete, and contain the full context. Believe it or not, the biggest driving force behind Trump’s whole “fake news” schtick isn’t so much a claim that news organizations are outright lying, but rather that they’re spinning their reports hard to fit a favored perspective, and go easy on Democrats but are incredibly hard on Republicans (I’ve even seen some claims that they’re basically a PR branch for the Democratic party). Just gauging news outlets on a metric of “number of things in report that weren’t true” isn’t going to cut it for a lot of people. The NII needs to be able to show its work and justify every decision they’re making when it comes to determining news integrity.
Jarvis’s comments on Facebook’s news tab inclusions underscore the need for the NII that he helped found. While the NII has great potential to settle hysterics over news sources, Jarvis’s objection to Breitbart’s inclusion in Facebook’s news tab clouds his judgment at exactly the point that illustrates what role the NII should serve and what it could grow to be. I think if care is taken in defining just what the NII looks for, it has the potential to be a solid indicator of trustworthiness in news without favoring large or established players. It could provide guidelines and support for small, up-and-coming media outlets to build habits that promote high quality, giving the public trust in their work even if they don’t have much history. I’ll be watching the NII because I find its potential interesting, but I’m worried it’ll be squandered fixating on idealism about journalism rather than practicalities.