Australia has suddenly become a hotbed for political factchecking. In May, PolitiFact Australia launched as the first international affiliate of the Pulitzer Prize-winning website PolitiFact, bringing the site’s signature Truth-O-Meter ratings to the country’s ongoing election campaign. And in early July, an independent Australian website called The Conversation launched its own dedicated Election FactCheck site, which departs from the approach taken by elite factcheckers in the United States in several interesting ways. I interviewed the site’s editor, Gay Alcorn, by email. What follows is a lightly edited transcript of our conversation.
Gay, congratulations on the launch of the site. I’d like to start by learning more about The Conversation, which is an interesting idea in its own right, and how the Election FactCheck section came about. Could you briefly tell our American readers about the concept for The Conversation and why you decided to launch a factchecking website as a new feature?
The Conversation was launched in March 2011. I wasn’t involved with it at that stage, but it was founded by Andrew Jaspan, a former editor of The Observer in London and The Age in Melbourne. It is funded mostly by universities, as well as government, corporate and individual donations, and the idea was to get academic expertise and research into the broader public conversation.
Mostly, the site runs opinion and analysis pieces from academic experts and news stories on academic research. It’s a collaboration between the academics and professional journalists—editors commission articles and edit them to ensure they can be easily understood by a non-expert reader. The site now has more than a million unique browsers a month, a great achievement for a new venture. This year, it launched a UK site, and there is talk of a US version. Some background on The Conversation is available on its site.
Factchecking seemed a natural extension to what The Conversation was already doing. The site has published factchecks before, but our Election FactCheck page is far more extensive. We watched what was happening with factchecking in the US, and thought it would be worthwhile to try our method for Australia’s national election to be held later this year.
One of the most interesting aspects of the site is the changes you have made to the standard factchecking format used by elite US factchecking sites. Can you explain what you changed, why, and how it fits in with the mission and approach of The Conversation?
We’re using The Conversation’s greatest resource—Australia’s academics, who write opinion and analysis for the general site all the time. The editors commission an academic with subject expertise to check a statement from a politician, a political party, an interest group, or the media. We then have a “blind” review process. A second academic with subject expertise reviews the check without knowing the identity of the original author. We just wanted to add an extra layer of rigor to the process.
So the journalists and editors will do some basic journalistic work—ringing up the politician’s office, for instance, to ask for the source of the statement—but the academic writes the factcheck based on his or her knowledge and research. We’ve found what works best for us are checks on statements where academic expertise can add value. One example is that some readers have asked us to check how many media interviews our opposition leader, Tony Abbott, has granted this year because there has been some criticism that he is avoiding the media. That’s not really one for us because it’s more a reporting job.
There are a couple of other differences in our approach. We intended to use a rating system similar to PolitiFact, but we abandoned it before we launched. We ended up deciding it was “Mostly Meaningless.” We still give a one or two line Verdict, and it’s often a strong verdict, but we don’t want to distort issues by giving all checks a one or two-word rating.
An example was our check on a prominent MP’s claim that 80 percent of Australia’s grocery market is controlled by two big supermarket giants. Our academic looked into that and said, well, if you just include dry packaged groceries, it’s up to around 80 percent, but if you include fresh fruit and vegetables and meat and other groceries it’s more like 55 to 60 percent, which is the industry estimate. We talked around that and discussed giving the statement a “Mostly False” rating, a “Mostly True” rating, or a “Half True” rating. It depends on how literally you are going to take the statement.
Most statements need context to give them meaning, we found, and we saw that other factcheckers in Australia were getting pushback about their ratings, which takes the debate away from the substance of the piece. We’re not criticizing that approach, it just didn’t suit us. The aim of our site is to be useful to voters, to help them understand the basic information about contentious issues, and we found that bald ratings weren’t especially useful.
Our other difference is that The Conversation, which is a non-profit, operates under what’s called a “Creative Commons” policy, which means anyone can run our pieces at no cost. So a big or small media player, a blogger or small website, even someone running a community newsletter, can publish our factchecks if they think their readers might be interested. They just have to give us credit and to check with the author if they want to make substantial changes.
I love the way you are blending academic and journalistic approaches. What substantive difference do you think it makes in practice to the content you produce? Have your expert authors come to different conclusions than PolitiFact Australia (which uses a journalistic approach) or traditional media outlets in any cases yet? Are there issues where you expect the academic approach to add value?
We’ve been going for three weeks. So far, I think the difference has been in the sorts of statements we check. We’re avoiding ones that are clearly more suited to a journalistic approach. It doesn’t limit us too much, but it does mean we are policy-focused so far. We have checked three statements that have also been checked by PolitiFact Australia. One reached the same conclusion as PolitiFact did using similar arguments; another checked the statement in a different way. The latter case was a claim by an opposition politician that Australia was losing one manufacturing job every 19 minutes. PolitiFact rated that “half true”—they said while the figure was correct, it was wrong to blame the government for that. Ours also crunched the numbers and found the statement to be true, but put it in a historical and international context and said that the loss of manufacturing jobs was common in developed nations. So both PolitiFact and The Conversation’s Election FactCheck gave the statement context, which I think is valuable. The third one was about opposition claims that it now took more than three years to get approval for a mine in Australia, up from 12 months a few years ago. PolitiFact found that Half True; we found it false.
The traditional media, as you say, check facts all the time and journalists often rely on academics for help with that. But because the news is so fast-moving in this 24/7 news cycle, stories can rush by or develop quickly and sometimes there is not enough time for journalists to check the accuracy of statements as thoroughly as they might have in the past. But there have been a few statements we have considered checking that have been looked at by the established news sites, so we have tended to leave those alone. Due to our review process, we can’t do instant checks—they tend to take a day or two—so if the story is really a one-day wonder, we avoid it.
In this era, we in the media have to think seriously about collaborating with other players because we don’t have the resources we once did. Collaborating with academics makes sense in some circumstances. Journalists have particular skills that academics don’t have. We tend to be finely tuned to the news, and we are able to edit stories to ensure they make sense to a non-expert reader. The academics bring enormous and detailed knowledge. I think that journalists and academics working together are potentially better than either of us would be working alone.
So far, the academics have particularly added value on complex issues, where claims and counter-claims can get thrown about and where “facts” are used to argue for a policy position in a way that’s distorted. Manufacturing and how we compare internationally. Asylum seeker policy. Education policy. Even elections and voting behavior—we did one on whether swinging voters were “disengaged” with politics.
I’m also intrigued by the blind peer review model and the move away from ratings scales. Both should make the information on the site more credible to skeptics who often object to ratings or the omniscient voice often used in fact-checking, but they are also more transparent in acknowledging the ambiguity and subjectivity that factcheckers often encounter in practice. Do you think this undermines the authority of the site or strengthens it?
We’ll see. The peer review has definitely added authority, but it isn’t always easy. We’ve had instances where the reviewer disagrees to some extent with the author, and we’ve taken the concerns back to the author who has changed the factcheck slightly—that’s how it’s supposed to work, I guess. The reviewer doesn’t just tick off the check; they often emphasize something different or raise a new point. Regarding ratings, I wasn’t sure we had made the right decision, and I asked readers via Twitter and comments to let us know what they thought about it. So far, the feedback has been good. People who are interested in politics or particular issues don’t seem to need or want a one- or two-word rating about something and, as I said, we do have a Verdict, which is usually one or two lines. It gets back to the purpose of factchecking.
We are not thinking of this as something for politicians or political insiders or journalists. We are trying to be a site for voters, providing information they might find useful. Our opening question to ourselves when we think about whether to check a statement is: “I wonder if that’s true?” We have all wondered that, but I don’t necessarily want someone to tell me something is literally true if the real question is whether the conclusion the politician is drawing from the factual assertion is invalid. So it depends on the purpose of factchecking. We are learning as we go about its strengths and weaknesses, what factchecking can do, and what it can’t.
One last question—the rise of factchecking here in the US is in part a response to frustrations with the “objective” model of news reporting, which in practice often leads journalists from mainstream outlets to refuse to arbitrate between competing factual claims about controversial issues. To what extent does a similar critique apply in Australia and do you think it has played an important role in motivating the growth of factchecking there?
It hasn’t been cited as a key reason for factchecking sites here. Our debates around journalism are influenced by US debates, and there has certainly been discussion about “false balance” and “the view from nowhere.” Journalism here is completely in flux, and established and new media are trying new things, which is heartening. “Objectivity” is certainly under challenge, and has been for many years, yet there is controversy about the merging of news and opinion, too. Social media has pulled up professional journalists for errors and bias, and insisted on accountability, but it has also helped fan a “perpetual outrage” style of political debate in Australia. I hope factchecking sites are about a hunger for substance. There is ample evidence that people are disgruntled with the poll-driven, spin-driven, personality-driven style of political reporting. Factchecking sites may be one way to help address that.