To download or read a PDF of this report, visit the Tow Center’s Gitbook page.
Executive Summary
With the rise of nonprofit, foundation-funded newsrooms, the field of Monitoring and Evaluation (M&E), which emerged in the international development community, has taken a strong foothold in journalism. As nonprofit newsrooms apply for grants and appeal to donors for funding, they often need to explain in formal reports "how well" their stories performed – not just in terms of impressive traffic but in qualitative evaluations of the impact their reporting had on the world: Did it change a law? Did it move the needle in the conversation? Did it meet the expectations, however defined, the organization had for it? Based on survey research and interviews with newsrooms regarding current impact measurement practices, the researchers designed and built a new analytics platform called NewsLynx to improve upon existing methods of displaying quantitative metrics and to add qualitative information that was previously nonexistent in such tools. Many newsrooms found current analytics tools insufficient for fully capturing their output's performance. They had trouble comparing audience reactions across stories or gauging the effects of their social media and promotional efforts. While they often had multiple data sources (Google Analytics, Omniture, etc.), putting these numbers into context was still difficult.
The NewsLynx Project Implements Three Key Ideas
- NewsLynx seeks to augment metrics with context. It shows how an article performs in comparison to the average of all a publication's articles and allows comparisons within subsets – all immigration articles, for example, or any user-defined category.
- NewsLynx also provides efficient tools for tracking, categorizing, and assessing indicators of impact aside from audience reach. Such impact indicators might be legislative reform or community action, which have previously proved extremely difficult and time-consuming to track. NewsLynx's "Approval River" functionality aims to reduce the effort associated with managing the traditional clip searches and social media searches that newsrooms use to monitor impact. Crucially, it allows users to apply consistent (and therefore comparable) metadata to impact indicators.
- The NewsLynx developers propose an impact framework that allows for the fact that real-world impact measures are often messy and hard to categorize. NewsLynx implements a framework that offers newsrooms enough structure to categorize "impactful events" across similar boundaries, while also providing enough freedom for them to create their own impact definitions to match particular goals. Importantly, the researchers believe that successful, long-term impact measurement can only result from identifying such organizational goals.
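The first idea above – showing a story's numbers relative to a baseline rather than in isolation – can be sketched as a simple calculation. The following is a minimal illustration of comparing an article's page views to the average of all articles, or of one category; it is our own sketch, not NewsLynx's actual implementation, and the field names and figures are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical article records; in practice these numbers would come from
# an analytics source such as Google Analytics.
articles = [
    {"title": "Border policy shift",       "tags": {"immigration"}, "pageviews": 12000},
    {"title": "Visa backlog grows",        "tags": {"immigration"}, "pageviews": 4000},
    {"title": "City budget passes",        "tags": {"local"},       "pageviews": 6500},
    {"title": "Detention centers audited", "tags": {"immigration"}, "pageviews": 8000},
]

def relative_performance(article, corpus, tag=None):
    """Return how many standard deviations an article's page views sit
    above or below the mean of a comparison set (optionally one tag)."""
    pool = [a["pageviews"] for a in corpus if tag is None or tag in a["tags"]]
    mu, sigma = mean(pool), stdev(pool)
    return (article["pageviews"] - mu) / sigma if sigma else 0.0

story = articles[0]
print(f"vs. all articles:  {relative_performance(story, articles):+.2f} sd")
print(f"vs. 'immigration': {relative_performance(story, articles, 'immigration'):+.2f} sd")
```

The same story can look exceptional against the whole site yet merely typical within its own category, which is exactly the kind of context a raw page-view count hides.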
Key Observations and Recommendations
- Effective impact measurement must be tied to an organization's goals. No amount of technology can help an organization measure what it hasn't defined as important. Should a newsroom's reporting seek to change the narrative around an issue? Does it want to reach certain stakeholders or effect lasting reform? Only after an organization has understood what it wants to achieve can quantitative and qualitative tools assess how close it is to that goal.
- Both quantitative and qualitative metrics have a place in impact measurement. While quantitative metrics are often vilified for leading journalism astray from its true purpose, the researchers found they do help tell the story of a newsroom's performance. Although this project began with an interest in giving more visibility to qualitative measurements, its founders repeatedly heard from newsrooms that quantitative measurements play an important role for organizations wanting to tell a long-term story of audience growth.
- Newsrooms should tag their articles better. Newsrooms that want to properly understand their own performance over time should put more care into tagging and cataloging their stories. These practices can give an organization a better understanding of its own operations and how much space it devotes to each subject. Tags also let staff perform myriad analyses comparing stories and packages. Without differentiating and labeling content, it is difficult to understand patterns in traffic or impact.
- Newsrooms have metrics, but they also still have many questions, particularly about audience. As one newsroom put it, "Google Analytics feels both too complicated and not powerful enough for the questions we want to answer about readers." Many existing tools aren't designed to analyze metrics from readers' perspectives: What did they think about the story? Did they leave after the fifth graf because they understood the newsy part and didn't need any more, or was the site design wrong or the prose too dense? Nor do common tools provide enough insight into the relationship between the news organization and its audience.
- Custom analytics solutions have recently become more feasible. With the continuing maturation of open source analytics pipelines, it is now possible for news organizations to own their entire analytics stack and not rely on third-party vendors for the data-collection portion of their metrics. In other words, the next few years could see newsrooms access much more diverse offerings, providing faster analysis and greater detail more relevant to journalism. That said, these pipelines are largely for data collection, so most newsrooms will need to design and implement their own custom interfaces to interpret this data for the average reporter and editor.
Introduction
The idea for this project began on a blustery spring day on a visit to an office inside the recesses of The New York Times. The day's agenda was to better understand the process whereby Glenn Kramon, then the assistant managing editor for enterprise, helped decide which of the Times's many stories from the previous year should be nominated for the Pulitzer Prizes. His desk was littered with books, printouts, envelopes, and handwritten notes. On the wall hung a proudly framed full-page ad that the Ford Motor Company had taken out, promising improvements in response to the paper's investigation into SUV rollover deaths in the late 1990s.1 Sights like this weren't entirely unusual in a building where you can easily find yourself in a hallway covered with portraits of award-winning teams and their front pages. But the scene was striking for the simple fact that here, for lack of a better description, unfolded a crucial step in how the impact sausage was made.
The process was this: Whenever a Times investigation was mentioned in a meaningful way, whether it be a citation by a competing publication, a complimentary letter from a senator, or an official response by a corporation or government, a slip of paper would make its way from Kramon's desk into one of dozens of large manila envelopes that were filled, hand-labeled, and filed in boxes under his desk. Kramon pulled a seemingly random scrap from one of the envelopes on the iEconomy series – an explanatory series that would later win that category's 2013 Pulitzer Prize2 – and his eyes lit up. It was a note he had written describing pickup from an unusual source. "I knew 'iEconomy' was big when Saturday Night Live spoofed it,"3 he said.
As the conversation progressed to the question of how one might actually measure impact, Kramon reached under his desk again, pulled out an overloaded envelope, and squeezed it to demonstrate its thickness. "Do you want to know what impact is? It's this right here." In other words: at the end of the year, the stories with the thickest envelopes – the ones that resonated most with the outside world – were the likeliest candidates for submission.
How newsrooms conceive of and measure the impact of their work is a messy, idiosyncratic, and often rigorous process – based at times on what simply seems worth remembering during the life of a story; at others, on strict guidelines of what passes the "impact bar."
As a result, we conceived of our related research project in two parts: first, as an attempt to understand how, and through which processes, news organizations both large and small currently approach the "impact problem"; and second, to see if we could develop a better way – by building a technology platform called NewsLynx – to help those newsrooms fill the proverbial envelope.
Definitions
Before going any further, we should clarify what we mean by impact. To start, our base assumption is that journalism can and does have an effect on the world, and that it does so without necessarily becoming advocacy.i Unlike other work on this topic, we don't offer a strict definition of impact. Based on our research, successful impact measurement can only happen if an organization has identified its institutional goals. From there, it can begin to measure elements that bring it closer to those goals (e.g., encouraging subject-matter influencers to discuss one's work if the goal is to improve the credibility of one's reporting). As a result, we created a loose impact framework rather than a strict definition.
This approach offered us two advantages. First, it exposed newsrooms' current thinking about what they consider to be important events, allowing us to see the existing differences across organizations. Second, it served as scaffolding for newsrooms that do not yet have an articulated understanding of their own goals. We found this was most often the case with national and international publications and less so with local, regional, or topic-based ones.
Maintaining standard terminology and vocabulary is a worthwhile goal, however. It goes without saying that the more newsrooms use common terms and tools, the more opportunity exists for contextualizing and understanding a project's success. In Chapter 5 of this report, Recommendations and Open Questions, we discuss the factors that could make standardization possible in the future.
Our view of impact necessarily incorporates both qualitative and quantitative information. Many newsrooms interested in measuring qualitative events expressed frustration at the limits of page-view-driven decisions. They worried that only high-traffic stories get lauded and, consequently, shape the editorial agenda. In fact, when we started this project one of our main goals was to build a tool to better highlight qualitative events. If traffic is the only measure of success, how can you show the value of a niche story on an important topic?
Through the course of our research, we also heard examples emphasizing the value of tracking existing and new quantitative measurements. One story that stuck with us came from a mid-size nonprofit newsroom producing a mix of investigative, political, and culture reporting. Its staff explained that routine traffic to its stories is orders of magnitude higher today than two years ago. The newsroom uses this information to show readers and funders that the organization is trending upward. By taking this data in the aggregate, and not letting any one number guide strategy, it can construct a narrative about its editorial reach backed by numbers rather than by discrete and varying qualitative events.
Jonathan Stray wrote about the difficulty of qualitative measurement: "Some events are just too rare to provide reliable comparisons – how many times last month did your newsroom get a corrupt official fired?"5 Numbers fill a valuable role in understanding organizational health, and we think removing them from the equation eliminates a potentially valuable lens through which to gauge success.
When we say "the impact of journalism" or "the impact of a newsroom's work," we should also clarify the limits of what we can reliably study – that is, where we chose to start our research for this project. From cultural commentary to the court reporter on a beat, journalism exists in so many varieties that the question of journalism's impact can seem too large to tackle.
To narrow the scope of our research, we made our initial target the small, nonprofit investigative newsroom.
Two elements informed this choice. First, we were most interested in journalism that seeks to address something about the world (often investigative work), which allows newsrooms to more easily state tangible goals for their projects. For instance: Did this illegal practice end? Did the government increase oversight? Are companies now following the law?
The second reason was organizational. Such newsrooms often look to grants or benefactors for funding, and these outside groups often require reports outlining how the organizations have used their money – hopefully guaranteeing that it was well spent. As a result, impact measurement is not a foreign concept to these benefactors, albeit still not an easy one.
We want to stress that these are neither the only nor necessarily the "best" examples of journalism's impact. Looking at how media coverage can shape discourse, for example, is another fascinating and worthwhile area of study. We do, of course, remain cognizant of other forms of analysis, such as pre- and post-intervention surveys, which might fold into the NewsLynx platform in the future as more newsrooms adopt increasingly sophisticated techniques of impact measurement. Our focus here, however, is on the current needs and practices of investigative newsrooms.
Previous Research
Although our small, nonprofit variety of investigative newsroom is at the forefront of the journalism community in thinking about impact, the concept of impact assessment is by no means new. The international aid and development communities have been heavily involved in this kind of thinking for years under the name "Monitoring and Evaluation" (M&E).
At the core of this movement is a simple question: How can we know if our work is having an effect in the world if we can't measure it? This sentiment is perhaps best embodied in the Bill & Melinda Gates Foundation's 2013 annual letter, in which Bill Gates, summarizing a passage from William Rosen's book The Most Powerful Idea in the World, wrote, "without feedback from precise measurement … invention is 'doomed to be rare and erratic.' With it, invention becomes 'commonplace.'"6
While we found no definitive history of the rise of M&E within international and non-governmental organizations, as early as 1999 the United Nations Development Programme (UNDP) began a major overhaul of how it conducted aid and development interventions, shifting toward an organization-wide emphasis on a "culture of performance."7 With this shift, UNDP began mandating that all operations adopt the methodology of "results-based management," in which the effectiveness of programs is assessed by establishing baselines before an intervention and then periodically collecting data to determine whether the program is working.
In 2000, with the unanimous adoption of the United Nations Millennium Declaration, a major governing document, and the corresponding Millennium Development Goals (MDGs), M&E moved into the mainstream. At the heart of the MDGs were eight objectives to address the world's most intractable problems, including poverty, access to education, gender inequality, disease, and environmental degradation. Each of these eight objectives was associated with clear, measurable outcomes. For instance, in pursuit of the goal of eradicating extreme poverty, the MDGs pledged to "halve, between 1990 and 2015, the proportion of people living on less than $1.25 a day."8 While the design of the MDGs came under harsh criticism within aid circles for (among other reasons) its inability to capture relative versus absolute progress,9 the underlying framework of explicitly stating goals and preselecting indicators to judge movement toward those goals soon became the norm.
It was not long before the world of philanthropy followed suit. Over the course of the following decade, organizations of grantmakers focused on realms as diverse as African aid,10 human rights,11 and the environment12 began discussing methods for monitoring and communicating the impact of their work. Reams of toolkits, best practices, and case studies were published on the issue.13
From Monitoring and Evaluation to Media Impact
While philanthropists began adopting the mantle of measurement, they also became increasingly interested in the importance of media for communicating and amplifying the message of their missions. Here we begin to see how the M&E framework that aid and development communities established connects directly with the present topic of measuring media impact.
With the release of high-profile, social-issue documentaries such as Bowling for Columbine (2002) and An Inconvenient Truth (2006), the power of mass media to steer public debate around a topic became readily apparent. In the years following, prominent funders like the Bill & Melinda Gates Foundation, the Ford Foundation, and Open Society Foundations latched onto documentary films as potential means of raising awareness and prompting action on widespread societal problems. From educational reform (Waiting For Superman) to schoolyard bullying (Bully) to fracking (Gasland), many of the most resonant documentary films of the past decade have received foundation support for their production, distribution, and/or associated outreach programs. Whether for purposes altruistic, financial, or both, it is now standard practice for documentary filmmakers to attach social-issue campaigns to their creative works.
In turn, the foundations that supported these films – influenced by their concurrent involvement in aid and development interventions – began requiring filmmakers to provide detailed reporting on the impact of their work. In practice, these reports initially relayed traditional metrics like viewership, ratings, and box-office returns. Yet over time they increasingly adopted more sophisticated social science methodologies, employing pre- and post-surveys, frame analysis, and monitoring of mass and social media mentions. BritDoc,14 a foundation established in 2005 exclusively to support social-issue documentaries, lists over 30 of these impact reports published since 2008 in its Impact Field Guide & Toolkit.15
Yet a fundamental difference exists between assessing the impact of an aid intervention and that of a documentary film. If your goal is to eradicate polio – as is one mission of the Gates Foundation – it is (relatively) easy to measure the effectiveness of your intervention; simply counting the number of polio cases over time provides a reliable metric of success. If you're concerned about the influence of confounding factors, like simultaneous development initiatives in the same region, you might design a randomized controlled trial to test the varying effectiveness of different vaccines, treatments, or educational campaigns. Documentary filmmakers, however, pursue more abstract goals like raising awareness, shifting societal norms, or advancing the art form. While academics have attempted to design randomized studies to isolate the effect mass media has in driving such outcomes, these approaches are limited to highly specific interventions and do not address the need to make comparisons across a variety of contexts.16 Journalism faces many of these same challenges as it increasingly moves toward business models driven by institutional and philanthropic support.
The Rise of Nonprofit Journalism
The last ten years have seen rapid growth in journalistic organizations built on these support sources. A 2013 study by the Pew Research Center identified 172 such nonprofit outlets in the United States.17 Of these, over 70 percent were founded after 2008. While mostly nascent, nonprofit news organizations have achieved considerable impact in this short time. In 2010, ProPublica – founded only three years earlier – became the first nonprofit, and the first exclusively digital, news organization to win a Pulitzer Prize for investigative reporting. Since then, the Center for Public Integrity and InsideClimate News have also received the prestigious honor. The Philip Meyer Award, an annual prize for computer-assisted reporting, has given its last three top prizes to nonprofit outlets.
Whether because they are unbound from bureaucratic legacies or banner ads, nonprofit news organizations have become beacons of innovation in the industry, regularly exploring and experimenting with new revenue models, distribution channels, mediums, and methods of reporting. This innovative spirit has captured the attention of serious funders such as the Knight Foundation, which in 2013 awarded at least 20 grants to 13 such institutions, totaling nearly four million dollars (authors' calculations from Knight's 990s),18 not to mention numerous contributions to individuals and organizations that support the broader journalistic community (one of which has been the Tow Center itself). Some for-profit media outlets are also experimenting with foundation support. Since establishing its Strategic Media Partnerships program in 2011, the Gates Foundation has supported initiatives at The Seattle Times,19 The Guardian,72 and Univision.20
As foundations have entered the fray of journalism, they have brought with them the M&E philosophy inherited from their work with NGOs and the international development field. In turn, the livelihoods of nonprofit newsrooms have become increasingly linked to their ability to collect and report meaningful metrics of impact. Unsurprisingly, the Gates and Knight Foundations remain at the forefront of this movement. In 2011, Dan Green, the head of Gates's aforementioned Strategic Media Partnerships program, convened journalists, editors, social scientists, and media grantees to share and strategize tools and methodologies for measuring impact. These sessions resulted in the publication of "Deepening Engagement for Lasting Impact: A Framework for Measuring Media Performance & Results."21 The report offers a comprehensive guide for media makers facing the onus of impact measurement, breaking the process of assessing it into four parts.
Closely following the framework of results-based management, the report instructs media grantees to set goals, define a target community, measure engagement, and, ultimately, demonstrate impact. Yet while this four-step process may appear simple enough, difficulty arises in its implementation. The report suggests the use of custom surveys, interviews with stakeholders, and analysis of data from disparate sources – tools that even the largest media organizations struggle to use correctly, let alone small nonprofit newsrooms. Many of the report's proposed methodologies – like using Klout to identify influential audience members – now appear outdated, even three years after publication. In sum, while comprehensive, the report ultimately did more to confuse and overwhelm its audience than to crystallize a direction forward.
Beyond these issues, the "Deepening Engagement for Lasting Impact" report had no answer to the problem of scale: the challenge of creating tools and methodologies for measuring impact that can be applied to more than a single project. Many of the organizations we interviewed for this study struggled with the time and energy required to properly measure their impact, an effort made all the more frustrating when different foundations asked for different metrics or for reports in different formats. To address this issue, the Gates and Knight Foundations made a 3.25-million-dollar grant in 201322 to the USC Annenberg School of Communication to found the Media Impact Project (MIP).23 At the core of its mission is the promise of "developing processes and tools needed to implement media impact measurement frameworks." This promise is manifested primarily in the Media Impact Project Measurement System, which has goals similar to NewsLynx's.24 It seeks to weave together content, web and social media analytics, and qualitative data into a unified framework for application in a multitude of contexts (the authors of this study have consulted MIP on its work in this domain). While the system has yet to be released, if successful it could be a significant step forward in scaling media impact measurement. The Media Impact Project differs from NewsLynx in that it anticipates outsiders devoting resources to studying a news organization's operations.
The Quantification of Content
While foundations have played a large role in driving the movement toward measuring media impact, it would be inaccurate to describe this movement as strictly top-down. Many journalists and editors are skeptical of seeing the practice of journalism through an increasingly quantitative lens – the page view being the largest example of this (the metric simply counts the number of times an article has been opened). Page views have risen to prominence because they are relatively easy to capture and compare across contexts: a news organization can quickly ascertain which stories are driving the most traffic by comparing page-view counts.
As with any metric, once success is measured in its terms, sites optimize for it. Slide shows, which are designed to generate a page view for each image, are one outgrowth of metrics dictating content and user experience. Some media outlets, such as Gawker, even incentivized their writers by paying them based on the number of page views or monthly unique visitors their articles generated25 (monthly unique visitors is a derivative of the page view that accounts for multiple visits by the same reader). Others saw this shift toward metric-driven decision-making as at odds with quality journalism and summarized it as "clickbait."
The pendulum began to swing the other direction around 2012, when newsroom figures like Greg Linch, an editor at The Washington Post; Aron Pilhofer, then at The New York Times; and Jonathan Stray, formerly head of Interactive News at the Associated Press, began writing about26 and further discussing27 alternative metrics for the newsroom.
That year, Pilhofer arranged for a Knight-Mozilla OpenNews fellow to spend a year working on this question,28 during which a co-author of this report, Brian Abelson, looked at ways to tackle alternatives.29
Many analytics companies have also joined this conversation, oftentimes declaring the "death of the page view" in so doing.30 Responding to skepticism and disdain for the click-driven web, companies like Chartbeat have begun developing metrics based on the time readers spend with an article rather than the number of times the article was viewed.31 While "time on page" has long existed within most analytics platforms, it is difficult to interpret, since it can be inflated by a reader leaving the page open in another tab. "Attention minutes" seek to address these problems by using more sophisticated methodologies to track when a reader is actually engaging with content.32 Many large media organizations, like ESPN, Upworthy, and Medium, have openly stated that they now prefer attention minutes over page views when measuring and reporting the success of their content.
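The basic idea behind such engagement metrics can be illustrated with a small calculation. The sketch below is our own illustration, not Chartbeat's or any vendor's actual methodology: it treats a reader as engaged only while periodic activity pings (fired client-side by scrolling, mouse movement, or key presses) keep arriving, and stops counting once they lapse – which is exactly what plain "time on page" fails to do:

```python
def attention_seconds(ping_times, timeout=5.0):
    """Sum the time a reader was actively engaged, given a sorted list of
    activity-ping timestamps (in seconds since page load). Gaps longer
    than `timeout` (e.g., the tab left open in the background) are not
    counted toward engaged time."""
    total = 0.0
    for prev, curr in zip(ping_times, ping_times[1:]):
        gap = curr - prev
        if gap <= timeout:
            total += gap
    return total

# A reader active for ~10 seconds, idle for two minutes, then active again.
pings = [0, 2, 4, 6, 8, 10, 130, 132, 134]
print(attention_seconds(pings))  # 14.0 – the 120-second idle gap is excluded
```

Under this scheme a page left open overnight accumulates no engaged time, whereas naive "time on page" would report hours.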
However, despite the promise of attention minutes for better aligning the interests of publishers and advertisers, the metric offers little help in truly measuring impact. In an online forum MIP hosted to discuss the metric's relative merits, Jonathan Stray pointedly asked, "journalism is very much a multi-stakeholder endeavor, so why should we imagine that a single number can capture all aspects of the activity?" In other words, the challenge of measuring impact will not be properly addressed by a single metric. We might even argue that negative externalities similar to those generated by page views would simply take on new forms in a media landscape dominated by attention minutes. Ultimately, the problem is not the shortcomings of particular metrics – in many ways metrics have greatly improved in recent years. The problem lies in optimizing a newsroom's activities around a single figure above all others. Any metric given absolute primacy has the power to overemphasize certain areas and deemphasize others. One goal of this research is to add comparison points and context wherever possible, to give the most holistic view of the metrics currently monitored – whether quantitative or qualitative indicators.
Current Efforts
Our project is certainly not the first or only effort to understand impact. Interesting initiatives are taking place on both the qualitative and quantitative sides of the equation. Because a comprehensive review is outside the scope of this paper, we've chosen to discuss only a selection of projects, with an eye toward those that most resemble NewsLynx. For a more comprehensive list, see the "Impact Reading List" in Appendix B.
Qualitative Projects
Two projects aimed at the qualitative aspects of impact are the Center for Investigative Reporting's (CIR) Impact Tracker and Chalkbeat's tool MORI (Measures of Our Reporting's Influence).34
Center for Investigative Reporting
Lindsay Green-Barber, a postdoctoral ACLS Public Fellow brought on to serve as the organization's first media impact analyst, designed CIR's Impact Tracker as a simple online form that journalists and editors fill out when they believe an investigation has led to a real-world impact. The form prompts users to describe what happened and when, to attach optional links or documents associated with the event, and to note which CIR story it relates to. Users then assign the event to one of 17 carefully curated categories, which represent, in Green-Barber's experience, the full range of potential outcomes of CIR's work:
- Law change
- Government investigation
- Reader/viewer/listener contact
- Award
- Advocacy organization uses report
- Screenshot of CIR story in media outlet
- Public official refers to report
- Institutional action (firing, reorganization, etc.)
- Change of policy or regulation
- CIR staffer does a public appearance or interview
- Localization of story using CIR data
- Lawsuit filed
- Editorial
- Screening
- Professional organization cites reporting
- Social network share
- Other
The process also classifies impact across three levels of effect:
- Macro: Stories that have a concrete effect on things like legislation, staffing changes among those in power, or the allocation of resources to a subject. An example is the prototypical impact event: the passage of a new law addressing an investigation's findings.
- Meso: Stories that influence the general discourse and awareness around a subject. Examples include increased coverage of a topic at other media outlets or the public organization of a protest.
- Micro: Stories that lead to changes in individuals' behavior or actions. Examples include a reader who writes a letter to a member of Congress or stops buying products revealed to be harmful.
Green-Barber has used the data generated to create analyses35 and reports36 on CIR's impact, and other organizations, like The Seattle Times, have already started using the Impact Tracker.
Chalkbeat MORI
Chalkbeat, an education-focused publication, centers its impact collection on an open source WordPress plugin called MORI that combines article tagging, event tracking, and goal measurement.
Before an article is published, staff must categorize the story by type (Analysis, Curation, Enterprise, Quick Hit, etc.) as well as identify the post's audience: Education Participants, Educational Professionals, General Public, or Influencers and Decision Makers.
If a story is related to a meaningful offline event, staff can go to that article's page in the CMS and add a narrative description along with an impact tag of either "informed action" (the actions readers take based on our reporting) or "civic deliberation" (the conversations readers have based on our reporting).
Rather than simply reporting raw metrics, MORI first requires editors to predefine goals. In turn, all numbers are displayed in the context of progress rather than performance. This was a conscious decision on the part of Chalkbeat's creators, who were wary of placing decontextualized metrics in front of journalists or requiring them to track the impact of their stories without being clear why they were doing it in the first place.
MORI users can set goals in any of these categories as well: Content Production (e.g., the number of stories written across certain focus areas, such as teacher evaluations or the Common Core), Content Consumption (e.g., unique visitors, newsletter subscribers), and Engagement (e.g., Facebook fans, offline events hosted by the organization).
While Chalkbeat had initial concerns about whether its journalists would adopt MORI, its founders were pleasantly surprised by its reception:
Within a week, we were watching conversations unfold in our newsrooms about whether this or that thing constituted an impact. People were eager to tally up the results of our stories. Indeed, reporters and editors quickly began asking how they could sort the data by the stories they had individually produced, a feature we had planned to roll out more slowly.
For more information, their video walkthrough and white paper on the topic are very much worth review.37
Quantitative Projects
The area of quantitative measurement is also seeing a number of new initiatives. The largest trend is what Andrew Montalenti from the analytics company Parse.ly referred to as the "democratization of the data pipeline": open source tools are maturing to the point that running your own analytics collection is becoming much easier. This development is notable because it opens the door to direct ownership of analytics data and lowers the barrier to entry for custom solutions. That is to say, if a company isn't happy with the speed, interface, or flexibility of Google Analytics, it could more easily build its own in-house platform. This is no small undertaking, to be sure, but new advances bring it within the realm of possibility. Two projects in this space, Snowplow and Piwik, are worth mentioning.
Snowplow
Fairly new, Snowplow is an open source project that allows users to record user events and store the data on their own infrastructure.38 It's the best example of the open source "data pipeline," giving users the real-time speed of something like Chartbeat with the quantity of time-series data that Google Analytics provides. (For most of what Google Analytics records, users must wait roughly 24 hours for the data to become available.)
The Guardian started using Snowplow in early 2015 for the analytics on its Soulmates and membership pages. As opposed to Google Analytics, which tends to treat the page view as the atomic unit of consumption, Snowplow's event-based system makes it easier to track user behavior and attach metadata to each action, said Dominic Kendrick, a software engineer at The Guardian. He also appreciates that it provides this data within five minutes of any user action. "The speed and control you have over what is recorded is the biggest thing, because innovation is limited by the speed of the software you implement. If you use a third party, you're limited to that schedule," Kendrick said. "Three years ago no one was doing this, but now you have options."
Importantly, Snowplow concerns itself with efficiently recording and storing event-level interactions with a high degree of customization; it does not come with a visual dashboard out of the box. Advanced users will see this as a benefit, since it means they can create custom visualizations that answer the specific questions their newsrooms have. For others, however, it might feel like they're getting a mere bicycle frame (albeit a robust, free, and versatile one) when what they had in mind was something they could ride out of the store.
Which system makes practical sense will differ based on the resources an organization devotes to analytics, but growth and greater adoption in this area bode well for future iterations of NewsLynx-like systems. Snowplow's website keeps an updated list of companies currently using the system.39
Piwik
Similar to Snowplow, Piwik is another open source analytics suite.40 In addition to storing raw data, it also provides multiple dashboard interfaces for viewing analytics results. The largest implementation of Piwik we are aware of is for use at OpenStreetMap (OSM), a kind of Wikipedia for mapping the world that relies on open source, community-created mapping data.41
Eric Brelsford, a developer at the nonprofit 596 Acres42 and adjunct lecturer at the Pratt Institute, uses Piwik regularly. "We wanted just what Google Analytics does but in an open source way," Brelsford said. "It also did a great job of importing our raw traffic data from our server logs so we could see our traffic from even before we had Piwik installed."
While a number of WordPress plugins exist for CMS integration,43 the newsrooms we spoke to had close to no awareness of Piwik's existence. Vendor solutions still dominate the field of analytics, but, as mentioned above, the recent maturation and further testing of these open source tools at scale could change that dynamic in the future.
Internal Newsroom Tools
A number of news organizations have built their own analytics dashboards. While we won't be able to go over each of them in depth, the links below (and end citations) provide further detail.
NPR Visuals Team's "Carebot"
Although our primary focus in this report is on investigative newsrooms, other organizations with different goals are also experimenting in this space. Carebot is the NPR Visuals team's project to capture how their projects, often human-interest and photography-based, affect their audience. "What does impact for a team like ours look like?" asked Brian Boyer, editor of the Visuals team. "We came to the realization that what we create is empathy; we try and make people care about someone else. Carebot is finding ways to measure if we made people care or not."
An open source project, Carebot focuses on user actions44 (e.g., how many people shared a story or how many liked it on Facebook) but with an added twist: it divides that number by the total unique visitors for a given story. This metric allows the team to say things like, "thirty percent of all people who read this shared it in some way." Such statements let them more easily compare articles while controlling for variations in page views or total traffic.
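The normalization Carebot applies can be sketched in a few lines. The numbers below are hypothetical, but the arithmetic (shares divided by unique visitors) is the idea described above.

```python
# Hypothetical numbers: the point is the normalization, not the data.
stories = {
    "story-a": {"shares": 300, "unique_visitors": 1000},
    "story-b": {"shares": 300, "unique_visitors": 10000},
}

# Dividing shares by unique visitors yields a rate that is comparable
# across stories with very different raw traffic.
share_rate = {
    name: s["shares"] / s["unique_visitors"] for name, s in stories.items()
}
# The two stories have identical raw share counts, but story-a's rate
# (0.30) reflects a far more engaged audience than story-b's (0.03).
```

The design choice here is the same one Boyer describes: a rate controls for a story's reach, so a modest-traffic piece that readers share heavily can outrank a viral one.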
The other part of the project involves adding questions at the end of a story, such as, "did you love this story?" or "did you learn something from this story?" If users answer yes, they are prompted to like the story on Facebook or donate to the station. These questions aim to bring people into the public radio family and are the result of thinking about user experience and user flow as a crucial part of the impact question. (For example: after a reader finishes a story, what should he or she do? And if we prefer some actions over others, how can we optimize for and measure them?)
"There are stories that are going to have great raw numbers because they are about a celebrity comedian that's going to host The Daily Show and the controversy about his tweets: that's just going to succeed," Boyer said. "So how do you take work that is more central to the mission of the organization and hold it up to say, 'this thing might not have the page views, but it's doing the mission.' Carebot is about 'how do we prove we're doing the mission?'" Indeed, mission-driven metrics is a good label for this type of thinking and, as we'll discuss later, highlights the crucial intersection between successful impact measurement and stated organizational goals.
NPR's Analytics Dashboard
Also working at NPR, Melody Kramer and Wright Bryan designed and built an internal dashboard based specifically on the questions their editors and reporters had in the course of a news day.45 As we'll discuss further, their user-centered design led them to frame visualizations in a friendly and inviting way, and it served as a great source of inspiration for parts of NewsLynx. Their platform took shape after hours of interviews with staff, focusing specifically on their daily decisions and how technology could help them arrive at smarter decisions faster.
ProPublica
ProPublica is another outlet taking significant strides to measure its impact. In a 2013 report, Richard Tofel, ProPublica's president, outlined the organization's approach to tracking impact:
ProPublica makes use of multiple internal and external reports in charting possible impact. The most significant of these is an internal document called the Tracking Report, which is updated daily ... The report records each story published ... and any prominent reprints or pieces following the work by others (with most of this data derived from Google Alerts and clipping services). Beyond this, the Tracking Report also includes each instance of official actions influenced by the story (such as statements by public officials or agencies or the announcement of some sort of non-public policy review), opportunities for change (such as legislative hearings, an administrative study, or the appointment of a commission) and, ultimately, change that has resulted. These last entries are the crux of the effort. They are recorded only when ProPublica management believes, usually from the public record, that reasonable people would be satisfied that a clear causal link exists between ProPublica's reporting and the opportunity for change or impact itself.46
Tofel goes on to explain that ProPublica tracks these outcomes for years after an article's publication: "possible prosecutions and fines continue to result from this work long after the reporters involved have moved on to other work, and ProPublica notes these as they emerge." In tracking the impact of its work, ProPublica has also developed sophisticated tools like PixelPing,47 which allows for measuring traffic to its articles that have been republished on other sites.
The Guardian's Ophan System
The Guardian uses a custom system called Ophan that is tuned to the needs of editors so that they can quickly see what is being read on the site and start to explain the why behind those trends.48 The system is quite detailed and ever-changing. The linked walkthrough, however, is a good starting point.
The New York Times's Package Mapper
One typical newsroom-specific problem is how to understand the way readers engage with a package of stories. Editors have ideas about how readers should navigate between these pages, but few teams are tracking how they actually do so. To better understand these flows, James Robinson built Package Mapper to track the multi-story experience.49 Similar to Carebot and miles away from the simplistic page view, this project looks at user experience and user flow as a key part of understanding performance.
Where NewsLynx Fits: Incorporating Qualitative and Quantitative
While there are many current efforts to address the challenges of capturing the quantitative and qualitative impacts of media, there remains a clear need for a comprehensive platform that incorporates these disparate data streams while remaining relatively agnostic about which metrics are viewed. Chalkbeat's MORI is an attempt to do just this, but its designers have an unusually deep understanding of their organization's content and goals, one that many newsrooms do not share. MORI does exist as a WordPress plugin, likely the best choice if one wants to make a plugin that the maximum number of newsrooms could use, but requiring CMS integration can hinder adoption in many organizations. MIP's Measurement System promises a similar platform, but it's still in active development. We designed and built NewsLynx as an attempt to address these needs in the present. By framing it as a research project conducted in the open, we hope to share our successes and failures so as to help the media impact community move forward.
Research Findings
Preparatory Research
Our background research unfolded in two parts by way of:
- A survey we circulated online. It was announced via our launch blog post and emailed to newsrooms that we assessed as fitting our target demographic.50
- Focused interviews with newsrooms that fit our target demographic (and some that didn't).
Our survey (Appendix A) looked at six categories and adapted some questions from a similar survey circulated by Joanna Raczkiewicz at the Harmony Institute in 2013. Newsrooms agreed to participate anonymously and only be identified by their size and general characteristics. The survey focused on the following main areas:
- Organization profile: what size/type of newsroom?
- Content streams: what types of stories (cultural, aggregation, investigative, etc.) and what publishing schedule?
- Current quantitative analytics practices
- Current qualitative analytics practices
- Institutional challenges and goals
- Actions: what is this information used for, and who are the stakeholders?
The 26 organizations that responded to our survey varied greatly in size and in their prior experience measuring impact. Some employed a small team that generated daily reports circulated to staff, while others had no single person officially charged with the task.
We conducted follow-up interviews, usually an hour to an hour-and-a-half long, from March to July 2014 with newsrooms that had completed the survey and had indicated interest in using the beta NewsLynx platform. It was important that they had websites with which the software could interface, namely an RSS feed and Google Analytics.
Below is a summary of our survey results and more detailed interviews. Going forward, we'll refer to the target user of NewsLynx as an Impact Editor, or IE, as shorthand to describe the position tasked with data management.ii
What Do Newsrooms Measure and Why?
One of the most surprising sentiments we heard echoed throughout our research was the importance still placed on quantitative measurements such as page views. The reasons for this generally fell into two categories: journalists collected it because donors asked for it, or they measured it because they did see some utility in growth trends, as explained in the introduction.
For organizations that rely heavily on syndication (newsrooms that allow others to copy their articles whole-cloth), "reach" was also a big sticking point. While techniques for measuring reach differ greatly, it is generally calculated by multiplying the organization's circulation or home page traffic by a varying, unscientifically derived percentage. This practice might seem blasphemous until one considers that Google Analytics, one of the biggest and most popular platforms, developed and maintained by arguably the largest and most powerful technology company in the world, only returns estimates of any given metric. In our experience, Google Analytics returns metric values only in multiples of 12, for example. Google Analytics can return more precise values with its enterprise product, but that cost is outside the budget of all but a small number of news organizations, leaving imprecision as the norm.
Even when organizations acknowledged that both quantitative and qualitative measures were valuable, quantitative measures were still more closely tied to their business models. "We are working to diversify revenue sources and need strong metrics to buttress our qualitative measures," wrote one growing investigative organization. One broadcast organization summarized this conundrum of revenue versus mission:
In some ways, audio-listens are the single most important thing we can track because that drives underwriting and donations. But we are also mission-driven, so journalism that affects laws and people's lives and sense of themselves and their relations to others can be equally important.
Although many organizations use quantitative measures, the lack of insight they provide is frustrating. Organizations expressed a desire for a new metric that could satisfy the hunger for quantitative simplicity while offering useful insight, usually in the form of more information about the audience's relationship to their articles.
In response to the question, "how could measurement help your business or content strategy?" one organization wrote: "A qualitative metric we could present to shareholders showing the ROI of our investment in social media outreach, our marketing efforts, and our dedication to usability."
To construct such a metric, you would need to agree on some proxy for popularity or discussion level on social platforms (likes, shares, mentions, retweets all come with their own caveats) while taking into account promotional efforts on individual articles. You would then need to segment these results across devices and, if traffic or behavior patterns differ, be able to attribute those differences to either your internal efforts or external factors. This analysis would more realistically be shown in multiple metrics, but the desire for it reflects the need to understand how audiences are reached, along with the pressure to explain what, if anything, is having a demonstrable effect.
The desire to know more about how the audience engaged was echoed elsewhere as well:
From a content perspective, [impact measurement] could help us figure out where to focus our energies, in theory. Given that we are a cash-strapped, resource-strapped nonprofit, should we be spending so much time making a piece of radio and then also adding stunning visuals and writing a compelling text story, or are those just bells and whistles that will get us minimal ROI? What's the difference between the users who get our stuff on the radio, on the web, and via their phones (on our app or other apps), and are they significantly interested in different kinds of content and do they have different time constraints? What, if anything, might drive people who encounter us out of nowhere on social media to explore our other content, like it, and maybe one day become not just a return visitor but a member? What messaging and coverage encourages participation in user-generated stories (and are those things which can actually help us AND serve the public good?) as well as become part of the public radio family? This is just a start.
This sentiment was echoed at another organization:
We deeply distrust the page view stat and we see other organizations with more tech resources develop their own fancy metrics such as Medium's Total Time Reading or Upworthy's Attention Minutes and can't help but feel we're missing out on essential things about our audience. Google Analytics feels both too complicated and not powerful enough for the questions we want to answer about readers. It doesn't help that Google, Facebook, Twitter, Quantcast, Comscore, and anything else we've used never agree on anything. And of course quantifying impact is tough and while we try very hard, some recognized external standards, if wise, could be useful.
We repeatedly encountered the sentiment that existing analytics platforms are "both too complicated and not powerful enough" at other organizations as well. By "not powerful enough," users mean that the platforms don't help answer sophisticated questions that could bolster arguments around, for example, content strategy. Should a radio station continue putting resources into text versions of its stories for the web? Are people not scrolling all the way down the page because the headline and first three grafs were succinctly written and the reader "got it," or because the story wasn't interesting? Or is the website's design, not the journalism, contributing to a high bounce rate? Many organizations would like answers to complex questions like these, but, for the moment, data in simpler forms is what they are being asked to report, and technology platforms can't answer these questions out of the box.
It's important to point out that some newsrooms completely disregard quantitative metrics or see them as only potentially valuable. As one small investigative organization wrote: "Our mission is to have impact and improve the public interest. For a while we chased traffic and found it negatively impacted our work, and brought no results."
The pressure to provide quantitative metrics can also be a bit of a moving target, driven by the shifting tastes of funders or a changing understanding of what constitutes meaningful measurement. In fact, the Media Impact Project is currently developing a two-sided booklet addressing this very dynamic: what newsrooms are currently measuring on one side, and what information funders are requesting (or should be requesting) on the other.iii
Nevertheless, many of these responses influenced our decision to keep a number of quantitative metrics in our system and augment their usefulness through comparison points and context.
What Do Impact Reports Look Like and How Are They Used?
Responses were incredibly mixed on both of these points. While most impact reports are strictly internal, some organizations, such as the Wisconsin Center for Investigative Journalism and ProPublica, publish examples of their impact on their websites.51 The former also has an ongoing project called Investigative Reporting + Art, for which it has commissioned artists to create sculptures inspired by the center's reporting.52 The pieces then travel the state to schools and other public institutions.
Some organizations only produced reports for foundations that fund them, whereas others produced regular newsletters circulated among the editorial staff, sometimes including one-on-one emails notifying reporters of significant events. Here is an example from a mid-sized investigative and culture publication:
In addition to the qualitative parts of the Board report and grant reports referenced above, we produce a biweekly internal memo that catalogues the qualitative successes of the prior two weeks. We break these up into the following categories:
- Impact: Politicians citing our reporting, law changes, corporate actions, etc.
- Events: Either that we've organized or at which our people have appeared.
- In the News: A small selection of the highest profile and most interesting links and citations from other outlets.
- Social Love: A small selection of the highest profile and most interesting social media mentions of our reporting.
- Awards: All the awards we've won and been nominated for.
On the more quantitative end is one large-circulation, daily organization based in South America:
I usually relate different metrics of story performance (like time spent versus characters) and section performance (which sections get more or less visits than would be predicted by the amount of stories they publish). Then I go on to analyze what characteristics underperforming and outperforming stories have.
Organizations that keep the editorial team regularly informed of impact events through such reports said that it helps improve morale and keep the newsroom focused on its mission. As mentioned before, these efforts are more successful when the organization has articulated its goals and, consequently, what constitutes important measures of impact for it.
Challenges
Despite advances in understanding what successful impact measurement could look like, the fact remains that cataloging information will always take time and expertise, even with custom-built tools. Technology can solve some of the efficiency problems (many of which we tried to tackle with NewsLynx), but Impact Editors will still be required to make sense of the information. Goal-tracking remains an organizational and cultural challenge, not a technology problem.
As one organization succinctly put it, impact reporting is "time-consuming and measurements of engagement are still elusive."
Platform Description
We designed NewsLynx around two sets of tasks that staff found difficult:
- Managing an event-tracking workflow while juggling other responsibilities.
- Understanding what it means for a story to "do well" and what happened to cause that.
NewsLynx has two main interfaces: a workflow-management tool for collecting and organizing indications of impact (mostly done in a section called the Approval River) and a section for analyzing stories' impact, where comparisons and related metrics can be seen.
Below is a diagram of the site layout. The next sections will go into detail about our impact framework and each of the platformâs interfaces. For a more technical walkthrough and code repositories, please see our GitHub: https://github.com/newslynx.
- Recipes: Bots that automatically flag impact indicators from external services.
- Approval River: Users manage content coming into the system from recipes; meaningful impact indicators are then attached to stories as "impact events."
- Subject Tags: Free-text labels to describe the content of stories.
- Impact Tags: Free-text labels to describe impact within an impact framework, allowing for grouping and comparisons.
The Model
This diagram shows the concepts that are implemented in the NewsLynx application. Each of the significant concepts is explained below.
- Stories: Published output of the news organization.
- Recipes: The output of recipes automatically populates the impact events.
The Impact Framework, as Implemented
A major goal of our initial research was to investigate the feasibility of an impact taxonomy: shared terms that could make sense for multiple organizations.
Taxonomies: On Defining the World
Taxonomies are notoriously difficult to create because real-world data does not necessarily fall into discrete buckets. Take, for instance, Jorge Luis Borges's Celestial Emporium of Benevolent Knowledge, which divided animals into the following categories:53
- Belonging to the emperor
- Embalmed
- Tame
- Suckling pigs
- Sirens
- Fabulous ones
- Stray dogs
- Included in the present classification
- Frenzied
- Innumerable
- Drawn with a very fine camelhair brush
- Et cetera
- Having just broken the water pitcher
- That from a long way off look like flies
Though Borges's list is humorous, any real-world taxonomy inherits some of the same absurdity and futility. In the news context, we face constantly shifting content types as well as desired outcomes that differ on a per-project basis. In developing our framework, instead of implementing a strict taxonomy, we intentionally left the question of what constitutes an impactful event up to the discretion of the newsroom.
One strategy that gives more comparative power to qualitative taxonomies involves assigning values to each category, making it more like a quantitative measure. Since it's one of the simplest examples of impact to visualize, let's see how this type of impact classification would play out around legislative activity.
For example, any article that led to the creation of a law might earn a rating of 10. An article that gets cited by an influencer (however defined) would earn a 6, and so on. This strategy gets tricky, however, since one must quantify just about everything. How many points separate a bill passing, a bill being proposed, a watered-down bill that passed but didn't fully solve the problem, and a bill that didn't pass yet spurred vigorous public debate and changed the narrative around an issue? How would one assign points around different assumptions of causality? If Congress is moved to investigate an industry, how much of that can you attribute to any one piece of reporting from any one news organization? Would the value scale seek to capture the strength of that causal link?
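To make the value-assignment strategy concrete, a sketch might look like the following. Every point value here is hypothetical, and each one is exactly the kind of contestable judgment call the questions above point to.

```python
# All point values are hypothetical judgment calls, not a recommendation.
IMPACT_POINTS = {
    "law_passed": 10,
    "influencer_citation": 6,
    "bill_proposed": 4,   # assumption: worth less than passage
    "public_debate": 2,   # assumption: worth less than a proposal
}

def story_score(events):
    """Sum the point values of a story's impact events (unknown events score 0)."""
    return sum(IMPACT_POINTS.get(event, 0) for event in events)

# A story that helped pass a law and drew an influencer citation scores 16,
# but any ranking built on these scores is only as sound as the table above.
```

The rigidity is the point: once numbers like these are fixed, every comparison silently depends on them, which is the over-fitting problem discussed next.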
To borrow a term from statistics, we think this type of thinking "over-fits" the model to the data. It might perfectly describe one scenario, but it loses all generalizability to new events or to events at other newsrooms. To borrow another image from Borges, "On Exactitude in Science" describes the uselessness of a one-to-one mapping between a subject and the frame used to understand it:
... In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.54
Model Concept: Impact Tags
For the reasons explored above in the discussion of taxonomies, and for the simple fact that impact measurement must be closely wed to an organization's goals, we chose broad language to define our impact framework. In our system, each organization is free to make an "impact tag" for any type of event it finds important. For structure, each tag must have both a category and a level, ideas for which we took inspiration from Chalkbeat and the Center for Investigative Reporting (CIR), respectively.
Model Concept: Categories
Chalkbeat's MORI system has only two categories of impact. An event is evidence of "civic deliberation" (did someone talk about it or discuss the issue in some way?) or an "informed action" (did it, at least in part, bring something about?). This structure was useful not only in avoiding the impact rabbit hole described above, but also in disambiguating the term "impact." For example, some newsrooms call reader reactions or references to their work in legal proceedings "impact." And while you could argue that the article "brought about" that citation, the state of affairs didn't change. We felt this distinction between talk and action was an important one to standardize.
In addition to these categories, which we renamed "Citation" and "Change," we added two more: "Achievement," which covers articles that win an award, see record traffic, or are cited more than any other story (in effect a meta category); and "Other," to maintain the spirit that NewsLynx is an open research platform whose framework is open to evolution. If trends develop within the "Other" category, the framework can and should adapt.
Model Concept: Levels
From CIR we borrowed the idea that an impactful event can happen at different scales. Its tracker uses the terms Micro, Meso, and Macro, as previously discussed. To make these more understandable to the average newsroom and to expand on the idea, we ended up with five levels:
- Institutional
- Community
- Individual
- Media
- Internal
The most novel of these levels is Internal, which recognizes that projects can shift organizational priorities or become models for future work.
The Combination of Tags, Categories, and Levels
Putting these three concepts together, a sample configuration could look like the following:
Tag name | Category | Level |
---|---|---|
Reprint/Pickup | Citation | Media |
Localization | Citation | Media |
Influencer mention | Citation | Individual |
Editorial | Citation | Media |
Award | Achievement | Institutional |
Increased awareness | Change | Community |
Staff interview/appearance | Citation | Media |
Government investigation | Change | Institutional |
Internal discussion | Citation | Internal |
Policy/regulation change | Change | Institutional |
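The sample configuration above maps naturally onto a small data structure. The sketch below is our own illustration, not NewsLynx's actual schema: tag names are free text, while categories and levels are constrained to the framework's fixed vocabularies.

```python
from dataclasses import dataclass

# The framework's fixed vocabularies (four categories, five levels).
CATEGORIES = {"Citation", "Change", "Achievement", "Other"}
LEVELS = {"Institutional", "Community", "Individual", "Media", "Internal"}

@dataclass(frozen=True)
class ImpactTag:
    name: str       # free text, defined by the newsroom
    category: str   # must be one of CATEGORIES
    level: str      # must be one of LEVELS

    def __post_init__(self):
        assert self.category in CATEGORIES, self.category
        assert self.level in LEVELS, self.level

# A few rows from the sample configuration above.
tags = [
    ImpactTag("Reprint/Pickup", "Citation", "Media"),
    ImpactTag("Award", "Achievement", "Institutional"),
    ImpactTag("Policy/regulation change", "Change", "Institutional"),
]
```

Because the metadata vocabulary is fixed while names stay free, tags from different newsrooms remain comparable even when the event types they track differ.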
Other Concepts for Possible Inclusion
As the process of research is always ongoing, since NewsLynxâs launch we have had discussions about areas of impact that might be worth including as a part of the default configuration. Kramon, of The New York Times, uses a similar âChangeâ category as the most important measure, calling it âthe gold standard for journalism.â He also stressed the importance of celebrities and humorists commenting on the Timesâs work as a sign that its journalism broke out into larger popular culture:
You want everyone from Oprah to Taylor Swift to speak out on a subject and ideally praise the work of The New York Times. I remember once we did a story that Rush Limbaugh praised, and we were able to say, "everyone from Rush Limbaugh to Paul Krugman praised this work."
Kramon added:
Humor really works, also. If people pick up on it and try and make it funny, that should be a compliment to the journalistic organization, even if it's just a local cartoonist. It doesn't need to be somebody that's nationally known.
You could most easily incorporate these ideas with tags in the "Citation" category at the level of "Media" or "Individual." For example:
Tag name | Category | Level |
---|---|---|
Celebrity commentary | Citation | Individual |
Humorist/spoof | Citation | Media |
Pop culture appearance | Citation | Media |
Model Concept: Subject Tags
The other type of classification we employed is the subject tag, which is an open tagging system to assign stories to different editorial verticals. This could also be used to group stories that appear in a series or package. As we'll discuss in the next section, NewsLynx runs statistics across subject-tag groups so newsrooms can see how different articles or packages perform against other groups.
NewsLynx Interface
The Approval River
The NewsLynx Approval River is a section where users manage impact indicators for stories their newsroom publishes. It allows users to create "recipes" that let them connect to existing clip-search-type services or perform novel searches on social media platforms. The results of these recipe searches go into a queue where they can be approved or rejected.
The tool is designed to streamline (and perhaps replace) a common existing workflow for measuring impact, in which IEs monitor one or more news-clipping services for mentions, local versions, or republication of their work (if the organization allows that). Many of the IEs we interviewed expressed difficulty in managing the diversity of clipping services they used, as well as in storing the meaningful hits in one place. In addition, the process's complicated nature, often requiring different login credentials for each service, took up an inordinate amount of time and raised the barrier to entry for training someone new on the system.
Out of the box, NewsLynx supports the following recipes:
- Google Alert
- Twitter List
- Twitter User
- Twitter Search
- Facebook Page
- Reddit Search
The Approval River provides easy methods for gathering information from social platforms. For example, one recipe service NewsLynx includes is the ability to search a Twitter List for keywords. Let's say, as a part of an investigation, your organization has identified 25 key influencers or decision-makers and has added their handles to a Twitter List. A recipe could watch that list and notify you of discussion on the topic, or when anyone on it shares a URL from your site. This alert would show up in the Approval River and, if approved, would be assigned to the relevant article with any other information the IE wishes to add.
A simple way to think of this page is, "if this, then impact."
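The "if this, then impact" idea can be sketched in a few lines: a recipe watches a stream of items from some source and queues any item matching the newsroom's domain or tracked keywords for human approval. The function and field names here are illustrative assumptions, not the NewsLynx API:

```python
# Minimal recipe sketch: scan a stream of items (e.g., tweets from a
# watched Twitter List) and queue matches for approval. Approved items
# would then become impact events attached to an article.
def run_recipe(items, domain, keywords=()):
    queue = []
    for item in items:
        text = item.get("text", "").lower()
        if domain in text or any(k.lower() in text for k in keywords):
            queue.append({"item": item, "status": "pending"})
    return queue

# Hypothetical stream from a list of 25 key decision-makers:
tweets = [
    {"user": "@senator_example",
     "text": "Important report: http://example-news.org/story"},
    {"user": "@random_account", "text": "unrelated chatter"},
]
pending = run_recipe(tweets, domain="example-news.org")
# each hit waits in the queue until an IE approves or rejects it
```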
With some programming knowledge, anyone can add new recipes; NewsLynx can also be set up to receive emails from different clipping services and process those streams as recipe feeds.
Analytics Tools
The analytics section is where we hope users can gain insight into metrics that interest them and view any information about an article all in one place.
We designed this section of the platform with two guiding principles in mind: Make it navigable for the average newsroom user and give context to numbers and events wherever possible.
On the first point, we organized our data and presentation at the article level, which is often not the case in platforms such as Google Analytics. We also labeled our graphs and data visualizations with sentences and questions, such as "who's sending readers here?" instead of more ascetic labels like "traffic-referrers." On this point we were greatly inspired by NPR's previously mentioned internal metrics dashboard, which proposes these semantic headers as a way to make dashboards more approachable for average newsroom users.55
To understand "how well a story did," metrics need context. As a result, our other principle was never to show a number in isolation: any value should always be contextualized with respect to some baseline. In our two analytics views (the multi-article comparison view and the single-article detail view) we provide this by always comparing a given metric to a baseline value, such as "average of all articles along this metric." Users can easily change this baseline to the average of all articles in a given subject-tag grouping. In other words, "show me how these articles performed as compared to all politics articles."
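The baseline principle can be sketched as a small function: pair an article's metric with the mean or median of a comparison group, such as all articles sharing a subject tag. The names and numbers are illustrative, not the NewsLynx API:

```python
import statistics

# Sketch of "never show a number in isolation": every metric is paired
# with a baseline drawn from a group of articles.
def compare_to_baseline(value, group_values, method="mean"):
    agg = statistics.mean if method == "mean" else statistics.median
    baseline = agg(group_values)
    return {
        "value": value,
        "baseline": baseline,
        "ratio": value / baseline if baseline else None,  # >1 = over-performing
    }

# Hypothetical page views for articles tagged "politics":
politics_pageviews = [1200, 800, 950, 4000, 1050]
result = compare_to_baseline(2400, politics_pageviews, method="median")
# "show me how this article performed compared to all politics articles"
```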
We also do our own novel data collection to view article performance in the context of newsroomsâ promotional activities. For instance, we collect when a given article appeared on a siteâs home page, when any main or staff Twitter accounts tweeted it out, and when it was published to the organizationâs Facebook page or pages.
Weâll walk through each section to see how we implemented these comparisons and context views in the platform.
Article Comparison View
When users open the Articles screen, they see a list of their top 20 articles with bullet charts across common Google Analytics and social metrics.iv Bullet charts are so named because they include a small bar, the bullet, that can show whether a given metric is above or below a certain reference point. In this view, users can see at a glance which articles are over-performing or under-performing, based on whether the blue bar extends beyond the bullet or falls short, respectively. Users can change the comparison point from "all articles" to any group of articles sharing a subject tag.
To guard against a few high-performing articles skewing the results, the bullet charts use the 97.5th percentile as the maximum value. In addition, users can select the median value as the comparison point instead of the average, since the median is more resilient to outliers. Users change the comparison point using the dropdown menus at the top of the middle column.
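The outlier-resistant scaling described above can be sketched as follows: the chart maximum is the group's 97.5th percentile, so a handful of viral stories do not flatten every other bar, and the comparison "bullet" is the group mean or median. Names and details are illustrative assumptions, not NewsLynx's actual code:

```python
import statistics

def percentile(values, pct):
    """Linear-interpolation percentile over a list of numbers."""
    ordered = sorted(values)
    k = (len(ordered) - 1) * pct / 100.0
    lo, hi = int(k), min(int(k) + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (k - lo)

def bullet_chart(value, group_values, baseline="mean"):
    agg = statistics.mean if baseline == "mean" else statistics.median
    marker = agg(group_values)
    chart_max = percentile(group_values, 97.5)  # cap against viral outliers
    return {
        "bar": min(value, chart_max) / chart_max,      # filled bar, 0..1
        "marker": min(marker, chart_max) / chart_max,  # comparison bullet
        "over_performing": value > marker,
    }

pageviews = list(range(1, 101))  # stand-in metric for 100 articles
chart = bullet_chart(120, pageviews, baseline="median")
# an outlier article pins to the chart maximum instead of stretching it
```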
The design of this section is meant for small-batch comparisons between groups of articles. For instance, IEs can compare the seven articles in one investigation against those from another package, or they could compare recent articles against historic performance.
Article Detail View
NewsLynx also lets users easily drill down to the individual article level to see a timeline of qualitative and quantitative performance, as well as contextualized, detailed metrics about traffic sources and reader behavior.
This view also shows top-level tag information, allows users to manually create an impact event, as well as download an articleâs data.
The information on this page is divided into three sections:
- This story's life: a time series of page views, Twitter mentions, Facebook shares, time on home page, internal promotion, and online or offline events created manually or assigned through the Approval River.
- How people are reading and finding it: a selection of Google Analytics metrics covering platform breakdown, internal versus external traffic, and top referrers. As in the comparison view, each metric includes a customizable comparison metric.
- Who tweeted it?: a list, as comprehensive as possible, of accounts that have tweeted the article, sorted by each account's number of followers.v
In the interface, these three sections are prefaced by the text "Tell me about ..." in an effort to make their use and functionality readily apparent to the user.
This Storyâs Life
This visualization aims to combine the relevant information for better understanding why a story performed the way it did. We chart page views over time within the context of a newsroom's internal promotion efforts. The orange blocks are time on home page, the light blue dots are when the main or staff Twitter accounts tweeted the story, and the darker blue dots are when the story appeared on the main Facebook page or pages. Similarly, any events added through the Approval River or created manually appear on the time series grouped by category. Events that exist across multiple categories are shown once in each relevant category row.
Any impact events that have been attached to this article appear below in a list filterable by impact tag, category, level, and date.
How People Are Reading and Finding It
We consider this one of the most useful pages in the platform. It displays the metrics in Google Analytics that newsrooms expressed most interest in or need to report:
- What device people were reading on
- Whether traffic came from external or internal sources
- Who referred traffic
The functionality and design mirror the bullet charts in the comparison view, allowing users to see these numbers in relationship to all articles or a specific group of articles. Hovering over the bullet chart's marker will display the comparison value in a tooltip. The referrer information is particularly useful when IEs are asked to figure out the source of traffic for a popular story.
Who Tweeted It?
This section shows a list of everyone who tweeted a link to the article, obtained by querying the Twitter Search API on a regular basis. The list is sorted by the number of followers each account has, in descending order. If a Twitter Search recipe wasn't set up in the Approval River, for example, or someone previously not on the newsroom's radar tweeted a link, IEs can look here and create a new event using the button at the top of the page.
Users can also export all of this data using the button at the top of the page. Importantly, the back-end data-collection code is separate from the front-end interface. As a result, the data collection is completely agnostic about how it is displayed, allowing a newsroom to design its own custom views or visualizations of data. What we've produced here, directed by our research, is a first attempt at giving newsrooms the broad view of their stories and packages as they relate to one another, as well as easy-to-use, drill-down capabilities for when they need to explain the narrative behind how a particular story performed.
Newsroom Use
After four months of development, we launched NewsLynx in October 2014 with roughly six newsrooms. They varied in size from half a dozen people to a microsite within a large metro daily. Most, but not all, already had impact workflows that they used to generate reports for either grants or on an internal reporting schedule. The newsrooms with existing workflows also tended to be nonprofits. Just as with the newsrooms that responded to our survey, these organizations agreed to participate anonymously and only be identified by their size and general characteristics.
In this section, we detail how the participating organizations used NewsLynx and what that can teach us about impact measurement best practices and future newsroom adoption.
Approval River Usage
Newsrooms mostly used the Approval River as a way to track mentions of their work by influencers on social media or by other publications. Some newsrooms leveraged Twitter Lists to a large degree. They set up recipes with their domain name on "Presidents-HeadsOfGovt,"56 U.S. government officials,57 or the Justice Department's list of U.S. attorneys.58 They would typically go through the Approval River once a week and categorize possible hits.
IEs also used Google Alerts recipes to look for pickups of their stories by other organizations or mentions of their founders or board of directors. We weren't able to completely replace existing workflows, however. One challenge we faced was that some clipping services changed during the development of the platform, and we didn't have the time to fully implement new service recipes. For example, according to some IEs, the quality of Google Alerts has decreased in recent months, and their organizations now use other clipping services to track mentions. We discuss this problem of shifting technologies further in the chapter Future Paths for NewsLynx.
In terms of efficiency, participating newsrooms reported that the Approval River streamlined their clip-search tasks and, in the case of Twitter List searches, surfaced items they wouldn't have otherwise seen.
Article Section Usage
Participants reported that the most useful area of this section was the "How people are reading and finding it" view on the detailed article page. They said that it helped them explain to the newsroom "how well an article did," especially in relationship to a meaningful baseline. This page also provided links that helped IEs record which sites or accounts had picked up or were linking to the original article. Similarly, the full tweet list helped participants create events that they would have missed without a recipe set up to catch them in the Approval River.
Understanding traffic sources was also a main takeaway from the Facebook and Twitter time-series charts. As one IE put it:
When there's a spike in traffic for any story, it's super handy to be able to quickly see a list of where traffic is coming from. For example, I noticed we had a story that was getting crazy traffic and NewsLynx said it was coming from Facebook. I could then find the origin post quickly.
This finding was encouraging, since Facebook is particularly opaque when it comes to seeing specific activity. Knowing that our novel metric of Facebook shares over time can provide useful insight is an important takeaway.
One key feature was data export. IEs reported that it saved them a great deal of time in preparing their grant reports, as they no longer had to navigate through their analytics platform to copy and paste various numbers. The organizations' existing data-collection workflows also lacked comparison metrics, which made NewsLynx data more understandable for higher-level stakeholders.
Newsroom-created Taxonomies
Taxonomies varied from very general tags to more specific lists. Each newsroom chose its tag names and the category and level each belonged to. Categories and levels were chosen from the predefined list described in Chapter 2. Below is a sampling of the taxonomies that were developed. They range from generic to highly detailed.
Example 1: Moderately Customized and Detailed
This middle-of-the-road taxonomy classified the typical citation sources (e.g., pickup, editorial written) and wider measures of change (e.g., change in discourse, policy action) into separate groups.
Tag name | Category | Level |
---|---|---|
Pickup | Citation | Media |
Influencer mention/promotion | Citation | Individual |
Editorial | Citation | Media |
Staff interview/appearance | Citation | Media |
Internal discussion | Citation | Internal |
Increased awareness | Change | Community |
Policy/regulation change | Change | Institution |
Award | Achievement | Institution |
Reader reaction | Other | Individual |
Example 2: Detailed
The more detailed taxonomies were compiled by internal surveys and further refined through follow-up interviews with staff. Note that in this example the newsroom has set up tags for events that are particularly pertinent to its work.
Tag name | Category | Level |
---|---|---|
Social network share | Citation | Institution |
Story mention | Citation | Media |
Reprint | Citation | Media |
Localization | Citation | Media |
Editorial | Citation | Media |
Change of policy/regulation | Change | Institution |
Institutional action | Change | Institution |
Government investigation | Change | Institution |
Law passed | Change | Institution |
Law proposed | Change | Institution |
Government hearing | Change | Institution |
Lawsuit filed | Change | Institution |
Award | Achievement | Community
Screening | Other | Community |
Advocacy organization uses report | Other | Community |
Professional organization cites reporting | Other | Community |
Public official refers to report | Other | Institution |
Staff interview/appearance | Other | Media |
Example 3: A Generic Approach
Other newsrooms started with a simple, broad approach and reported that they would define more categories once they started seeing what kinds of events happened around their work. One organization that took this approach said it was primarily concerned with reporting its reach to funders, including important mentions by the community or influencers.
Tag name | Category | Level |
---|---|---|
Media pickup | Citation | Institution |
Media social share | Citation | Media |
Inst. social share | Citation | Institution |
Comm. social share | Citation | Community |
Indv. social share | Citation | Individual
Barriers to Entry
We weren't able to launch NewsLynx at every organization interested in participating. Beyond a limited research staff that kept us from onboarding more organizations, some organizations couldn't adopt the platform because they didn't themselves know all the pieces of their impact puzzle. This problem took two forms: first, immature analytics offerings and standards for their publishing platforms; second, a lack of internal consensus on what should be measured and how.
The first issue is particularly visible at broadcast or combined digital-and-broadcast organizations, where currently available analytics metrics are fraught with unknowns. Syncing terrestrial radio listeners with those who tune in via webstream, podcast, or a web version of the story presents a problem whose solution is still unfolding. Although web analytics is no hallmark of clarity, its comparative simplicity gives purely digital operations an advantage in quantifying their audiences.
The other kind of insufficient clarity is a lack of internal articulation of what impact is, how often impact reports should be formally produced, if at all, and who is responsible for producing them. Adopting a new workflow is extremely difficult when it shows no immediate short-term benefit and its long-term benefit isn't clearly communicated. Even if a newsroom is interested in starting to measure impact, giving employees a list of new tasks to monitor can be a hurdle that is hard to overcome.
This problem is addressed more in Chapter 5, Recommendations and Open Questions.
Recommendations and Open Questions
Impact Work Practices: Recommendations
Articulate Organizational Goals
In the M&E world, an understanding of one's goals is tied to one's "theory of change"59: if journalism does have an effect on the world, how does that come about? What are the preconditions for that impact to have the most reach? Is it simply putting out carefully vetted facts, as the Census Bureau or the Bureau of Labor Statistics does, and letting policy makers and other parts of civil society analyze possible responses? Or is it about covering an issue from sharp, newsworthy angles that shift the media narrative and the news cycle? Are opinions shaped best through cultural commentary or by surfacing new information, such as Mother Jones's 47 percent story, one of the few news stories that seemed to show an effect on the polls during the 2012 election?60 And what is your newsroom most skilled at?
No single right answer exists to these questions, but strategies are certainly needed if an organization wants to understand its progress and whether it's moving toward or away from its mission.
Commit Resources
If a newsroom wants to take impact measurement seriously, it must commit resources to it. In addition to full-time Impact Editor roles, newsrooms must take care to catalog their own content in a system of subject matter tags that make sense. An analyst can only compare, for example, the success of the latest energy multimedia package against past feature projects if those articles are properly tagged in the system.
Because newsrooms often publish too much content to do this cataloging and tag standardization after the fact, editors must employ discipline in tagging stories at the time of publishing. Without such standards, analysts waste a vast amount of time and the newsroom loses vital internal information about its own operation. Apart from the workflow aspect, a newsroom that doesn't view tagging as a key part of its daily operation inevitably lacks insight into the long-term shape of its own coverage.
One feature we weren't able to include in this version of NewsLynx was a "Train Your Lynx" section that would let users train a classifier to potentially auto-label articles for them. While better technology like this might alleviate some of the tagging problem, it can't solve the whole puzzle. In other words, even if a computer could find good groupings among published articles, institutionalizing those computer-determined buckets is the tail wagging the dog. As with setting goals, understanding content buckets should be a human-made decision, not one outsourced to a black box.
Integrate with Editorial
Impact measurement isn't just for grant reports. Some organizations reported that they use impact measurement to improve newsroom culture and generate editorial ideas. It fuels staff morale: knowing that work, even a small blog post, wasn't published into the void but saw audience interaction. In short, impact is not just a job for an analyst who sits apart from reporters and editors; it requires outreach, organizational knowledge, and organizational backing. In the end, impact measurement is about engaging with the audience and understanding how work is received.
Start Small
One of the participating newsrooms only applied NewsLynx to one of its microsites focused on education. This smaller scope helped focus the impact goals, as well as the work required to monitor the platform. Starting with one content vertical or one package can be a good proof of concept and allow for the creation of a manageable workflow, especially if your newsroom is large or has multiple layers of bureaucracy.
Publish Both for Humans and Machines By Using Standards
There are many benefits to implementing standards for metadata: interoperability, efficiency, and transparency among them. A fair amount of NewsLynx's code is dedicated to figuring out information that is already kept in a structured format in the CMS but is not machine-readable on the published page. NewsLynx scrapes the headline, tries to discern an accurate publish date (which is surprisingly difficult), and extracts information such as an article's one or many authors. As previously discussed, tagging articles is a difficult task, and one for which NewsLynx currently requires manual input, even though many CMSs already require articles to be tagged at publication time, albeit often not systematically.
If article pages included this information in a structured data format, third-party tools like NewsLynx could more easily leverage newsroom content. The analytics platform Parse.ly has started requiring its paying clients to implement the JSON for Linking Data standard (JSON-LD),61 which provides the metadata described above in a common format. This format is promising: for a low investment on the part of the newsroom, its inclusion could solve a range of measurement and data-standardization problems and lower the bar for building new tools to gain insights from an organization's publishing habits.
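To make the contrast concrete, here is a sketch of how a NewsLynx-like tool could read article metadata if the page embedded it as JSON-LD (a schema.org NewsArticle), instead of scraping the headline, publish date, and authors from rendered HTML. The page content and field values are hypothetical:

```python
import json
from html.parser import HTMLParser

# Pull <script type="application/ld+json"> blocks out of a page and
# parse them, recovering structured article metadata without scraping.
class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.records = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.records.append(json.loads(data))

page = """<html><head><script type="application/ld+json">
{"@context": "http://schema.org", "@type": "NewsArticle",
 "headline": "Example Story", "datePublished": "2014-10-01",
 "author": {"@type": "Person", "name": "A. Reporter"}}
</script></head><body>...</body></html>"""

parser = JSONLDExtractor()
parser.feed(page)
article = parser.records[0]  # headline, date, and author, no guesswork
```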
Impact Work Practices: Open Questions
Is an Impact Standard Possible?
Given the high cost of adopting new workflow practices in news organizations, any drastic change will need to be attached to an institutional benefit. Multiple newsrooms expressed interest in comparing their metrics to their competitive set. Parse.ly currently offers a similar paid service for aggregate traffic figures among participating newsrooms. If a similar service could be developed for qualitative metrics in a NewsLynx-like system, we believe there would be enough incentive for newsrooms to converge on agreed-upon impact buckets.
The main idea here is that if organizations could see a benefit more clearly and quickly, standards and wider adoption of impact measurement would follow more easily. Being able to see how organizations in your competitive set are doing compared to your own figures would certainly be an attractive offer.
Whether standards could work, however, is a different question. On this point, too, we believe they can. Participating newsrooms said the framework of categories and levels worked well for classifying their impact events. At the very least, this type of aggregate information would already work across organizations: "what kinds of change events did the top three packages get?" and so on. Going a step further, the specific impact tags people created did not differ significantly, giving hope that general equivalencies could eventually be found.
Impact Culture: Start Top-down or Bottom-up?
We are advocates of building cultures of impact within newsrooms; however, we've heard conflicting views on whether impact culture is best built from a top-down directive or a bottom-up desire among reporters and editors. Critics of the top-down approach say that a push from leadership about the importance of impact will likely fall flat, since it takes significant time and strategy or staff to implement, and the people tasked with this job are usually already time-taxed reporters and editors.
Critics of the bottom-up approach say that without standardization any uncoordinated effort will mostly result in inconsistent noise: the problem of inter-rater reliability, to borrow a social science term.
Having heard this discussion, we think any new impact effort needs buy-in from both leadership and staff, which comes when there are natural incentives for newsrooms to achieve impact. Both groups need to understand how including impact in strategy can help the organization and the editorial product. Business interests may include increased cachet with audiences attractive to advertisers or the ability to attract funding from philanthropic sources. Editorially, staff will be more engaged with how the audience reacts to their work and how readers could be enlisted as future sources, helping reporters and editors do their jobs of telling compelling news stories.
One of the bridges between these two sides, as previously mentioned, is the bi-weekly report that some newsrooms send to the staff. These reports both clarify what impact means and offer encouragement that readers are interacting and engaging with what the newsroom puts out.
Building Impact Tools: Recommendations
User Experience, User Experience, User Experience
While not a novel lesson, we saw firsthand how important good user experience is in gaining adoption of a platform. While we met our design goals, a few technically minor bugs translated into larger usability concerns. For example, some Approval River alert items would come back even after an IE approved or rejected them. Although the eventual fix was minor, the initial frustration was not. As a recommendation, solicit feedback from your users as often as possible and take user experience into account when prioritizing fixes.
Separate Data Collection from Interface
We built NewsLynx as two entirely separate code bases. A data-collection code base, written in Python, handles all ingestion, standardization, and database population. It serves JSON data via an API.62 The user-facing website is a NodeJS application that queries this API and returns data, which a front-end Backbone-powered JavaScript client turns into an interface and visualizations. This separation gives us great flexibility to update the user-facing and data-collection parts of the platform independently. If someone doesn't like our interface, they have the choice of building their own and not being locked into one system.
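The shape of that separation can be sketched as a tiny JSON-over-HTTP endpoint: the data-collection side exposes plain JSON, and any client (a Backbone interface or a newsroom's own front end) consumes it. The endpoint path and payload here are hypothetical, not the NewsLynx API:

```python
import json

# Minimal WSGI sketch of a metrics API. The interface layer never touches
# the database; it only consumes JSON like this.
ARTICLES = {"1": {"title": "Example Story", "pageviews": 2400, "tweets": 87}}

def metrics_app(environ, start_response):
    """Serve GET /articles/<id> as a JSON document."""
    parts = environ.get("PATH_INFO", "").strip("/").split("/")
    if len(parts) == 2 and parts[0] == "articles" and parts[1] in ARTICLES:
        body = json.dumps(ARTICLES[parts[1]]).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b'{"error": "not found"}'
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

# could be served with wsgiref.simple_server.make_server("", 8000, metrics_app)
```

Because the contract is just JSON over HTTP, swapping out the front end requires no changes to the collection code.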
Building Impact Tools: Open Questions
To Integrate with the CMS?
Whether this version of NewsLynx was best constructed inside or outside the CMS was never a real question. In order to work with a variety of news organizations that used different CMS technologies, we always knew it would have to be its own platform. That leaves open the question of whether an organization should build its own internal measurement tool within its CMSâif indeed it has the engineering capacity.
This question is different for every organization, but we can discuss the pros and cons. The biggest advantage is that staff members do not need to sign in to a system other than the one they are used to. This consolidation reduces friction and raises a user's comfort level when adopting a new technology, which should be a high priority for an organization looking to start seriously measuring impact.
Another large advantage to living in the CMS is that you avoid having to follow one or many RSS feeds to ingest every article. You also get information like authors, publish date, and tags for free. Chalkbeat's MORI tool also requires that every article have an impact goal assigned to it before publication. That design decision might not work for every newsroom or every type of article (breaking news needs to go out as quickly as possible), but it does force staff to be more conscious of what to expect from each article and keeps monitoring and evaluation fresh in everyone's minds.
Arguments against integration are also persuasive. Most of the metadata can be acquired if the previously mentioned JSON-LD standard is adopted and some effort is put into creating a standard RSS feed. Moreover, customizing a CMS is a task whose monetary cost, time cost, and required level of expertise should not be underestimated. As a result, it's no small undertaking to design and implement any change to the system. This reason alone, rightly or wrongly and also depending on the organization, is most likely enough to outweigh any efficiencies gained by integrating with the CMS and tip the scale toward building something outside of it.
As a compromise, we recommend designing a modular system, similar to NewsLynx, as the best balance. We built an API-driven platform that handles all data collection and standardization completely separately from the interface and user-facing code base. In this way, a newsroom could integrate a portion of a NewsLynx-like system into its CMS if it so wished while still keeping the impact-tracking infrastructure in a separate code base. If a newsroom decides to upgrade or change the CMS, it would have to rebuild the visualization and interface (also a task not to be underestimated), but it wouldn't lose the core impact-tracking mechanism.
Future Paths for NewsLynx
Platform Improvements
We developed NewsLynx within a short time frame and for specific research goals. If one were to continue building on its code base or design a similar system, these are new features and improvements we would recommend.
Approval River Improvements
One difficulty we encountered was supporting the variety of external services that newsrooms use. Some of them fell out of favor or came into vogue over the course of our research. Newsrooms have largely abandoned Google Alerts, for instance, first in favor of mention.net, which in turn was replaced by clip services such as Vocus and Meltwater. Meltwater is a paid service that returns higher-quality clip results and maintains a database of the circulations of those outlets, which is useful for determining the reach an article achieved when placed on a partner's site.
Many of the current recipe sources are most useful for tracking influencers or specific groups of audience members. More external services would broaden the Approval River's capabilities. For instance, one newsroom expressed an interest in monitoring Facebook groups it might create and tracking the "quality of discussion." Phrases such as "let's take this offline" or "do you have a suggestion?" could let an organization gauge whether its actions led to a self-sustaining, helpful community around an issue its reporting highlighted. Integrating more offline-monitoring services would be useful, too, such as the Sunlight Foundation's Capitol Words API, which provides a queryable database of what is discussed on the floor of Congress.63
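A crude sketch of that discussion-quality idea: scan community posts for phrases that suggest constructive, self-sustaining conversation. The phrase list and the simple ratio score are illustrative guesses, not a tested model:

```python
# Hypothetical markers of a helpful, self-sustaining community discussion.
HELPFUL_PHRASES = (
    "let's take this offline",
    "do you have a suggestion",
    "here's what worked for us",
)

def discussion_quality(posts):
    """Fraction of posts containing at least one 'helpful' phrase."""
    if not posts:
        return 0.0
    hits = sum(1 for post in posts
               if any(phrase in post.lower() for phrase in HELPFUL_PHRASES))
    return hits / len(posts)

posts = [
    "Do you have a suggestion for the school board meeting?",
    "lol",
    "Here's what worked for us in our district.",
]
score = discussion_quality(posts)  # 2 of 3 posts look constructive
```

A real recipe would need far more nuance (sarcasm, thread structure, languages), but even a simple signal like this could feed the Approval River queue.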
Participating newsrooms said that integrating more metadata around these citations would be helpful (e.g., aggregating Meltwater's circulation figures for all pickups of a story). Many IEs currently export the data from Meltwater and do this aggregation in Excel.
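That spreadsheet aggregation is simple to automate. The sketch below assumes a hypothetical clip export with illustrative column names ("story", "outlet", "circulation"); a real Meltwater export will differ, but the groupby pattern is the same.

```python
import pandas as pd

# Hypothetical rows mimicking a clip-service export: one row per pickup.
# Column names are illustrative, not Meltwater's actual schema.
pickups = pd.DataFrame([
    {"story": "Levee Failures", "outlet": "Daily Sun",  "circulation": 42000},
    {"story": "Levee Failures", "outlet": "The Ledger", "circulation": 18000},
    {"story": "Water Rights",   "outlet": "Daily Sun",  "circulation": 42000},
])

# Total potential reach and pickup count per story -- the aggregation
# many IEs currently do by hand in Excel.
reach = (
    pickups.groupby("story")["circulation"]
    .agg(total_reach="sum", pickups="count")
    .reset_index()
)
print(reach)
```

A platform could run this on every export and attach the totals directly to the article's record instead of leaving them in a one-off spreadsheet.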
Because new recipe sources will come in and out of fashion, an easy way to add and remove language-agnostic recipe modules would be ideal.
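One way to structure such pluggability is a registry that recipe modules add themselves to. The sketch below is a minimal illustration, not NewsLynx's actual API; the names (`recipe_source`, `REGISTRY`, `run_all`) and the stub source are hypothetical.

```python
from typing import Callable, Dict, Iterable, List

# Each "recipe source" is just a function that yields candidate impact
# events for a query. Dropping a service is deleting its registry entry.
Event = dict
REGISTRY: Dict[str, Callable[[str], Iterable[Event]]] = {}

def recipe_source(name: str):
    """Decorator that registers a fetch function under a service name."""
    def wrap(fn: Callable[[str], Iterable[Event]]):
        REGISTRY[name] = fn
        return fn
    return wrap

@recipe_source("example-clips")
def fetch_clips(query: str) -> Iterable[Event]:
    # A real module would call the external service's API here.
    yield {"source": "example-clips", "query": query, "title": "Sample pickup"}

def run_all(query: str) -> List[Event]:
    """Run every registered source and pool the candidate events."""
    return [event for fetch in REGISTRY.values() for event in fetch(query)]

print(run_all("levee"))
```

A fully language-agnostic version could apply the same registry idea to subprocesses that emit JSON, so modules need not be written in the platform's language.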
Individual Form Page
One requested feature we were not able to implement in time was an impact event form submission page that staff could access without logging into the system. For example, a reporter who hears of a noteworthy event could fill out a form, which the Impact Editor could then approve or reject. CIR already uses a similar workflow with a form powered by Podio.64
More Feedback from the Platform to Users (Meet Users Where They Are)
Similarly, the more the platform can exist as part of an ecosystem rather than a portal one must log into, the better. This could take the form of automated emails, a mobile app for submitting events, easily forwardable notes that create events, or push alerts to staff when a story surpasses some threshold and is "doing well."
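A "doing well" alert can be as simple as comparing an article against a multiple of the newsroom average. The rule, the field names, and the numbers below are all illustrative assumptions, not a NewsLynx feature.

```python
from typing import Optional

def doing_well(views: int, avg_views: float, multiple: float = 2.0) -> bool:
    """Flag an article whose pageviews exceed a multiple of the average."""
    return avg_views > 0 and views >= multiple * avg_views

def alert_message(slug: str, views: int, avg_views: float) -> Optional[str]:
    """Return an alert string for over-threshold stories, else None."""
    if doing_well(views, avg_views):
        return f"'{slug}' is doing well: {views} views vs. an average of {avg_views:.0f}"
    return None

print(alert_message("levee-failures", 9200, 3100))  # over threshold
print(alert_message("quiet-story", 1200, 3100))     # None, no alert
```

The same check could feed an email digest or a push notification rather than a dashboard.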
More Article Types
Currently, NewsLynx only supports articles published at a unique URL. It doesn't distinguish between pages that contain text, video, or audio, for example. Being able to detect page types and assign specific metric collections to them would be a very interesting feature. Embed.ly currently provides analytics on video viewership behavior (e.g., how far into the video most people watched).65 Integrating those types of viewership details into our existing comparative approach would be a useful addition. As analytics standards emerge in radio or for digital-broadcast hybrid organizations, these relationships will need to be formalized and operationalized.
More Article Relationships
NewsLynx users can currently group articles together with subject tags, but more complicated relationships exist in practice. For instance, subject tags don't distinguish between topically associated articles and a specific package of articles that ran in a series. At broadcast organizations, you might have a digital story that ran as a companion to a TV or radio piece. Relating these different versions to each other, and adding aggregate functions that combine their metrics for reporting purposes, would be a very useful feature.
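As a rough illustration of such an aggregate function, the sketch below sums metrics across versions that share a story identifier. The records, field names, and the simple additive rule are assumptions; real metrics (e.g., broadcast ratings vs. pageviews) would need medium-specific handling.

```python
from collections import defaultdict
from typing import Dict, List

# Two versions of the same story: a digital piece and its TV companion.
versions: List[Dict] = [
    {"story": "water-rights", "medium": "digital", "pageviews": 5400, "shares": 210},
    {"story": "water-rights", "medium": "tv",      "pageviews": 0,    "shares": 85},
]

def combine(versions: List[Dict]) -> Dict[str, Dict[str, int]]:
    """Sum metrics across all versions that share a story identifier."""
    totals: Dict[str, Dict[str, int]] = defaultdict(
        lambda: {"pageviews": 0, "shares": 0}
    )
    for v in versions:
        totals[v["story"]]["pageviews"] += v["pageviews"]
        totals[v["story"]]["shares"] += v["shares"]
    return dict(totals)

print(combine(versions))
```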
Replace Google Analytics
Working with Google Analytics proved extremely frustrating. Each news organization often had a custom-defined property or view that represented its content, and often multiple properties reflecting different subdomains that needed merging. Based on the advancements in open source data pipelines discussed earlier, implementing something like Snowplow for quantitative analysis would make the most sense going forward.
Platform Challenges
If you want to measure something happening on the web, you're necessarily wrangling a moving target. New social networks might appear, new metrics might come into vogue, and users will want to see those reflected in their system. The best one can do when faced with dynamic technology is to build a system that allows new modules to be plugged in and out. Future modules, however, will still have to be designed and coded for the latest inputs.
Another challenge is that many interesting questions are technologically unknowable in the current state of affairs. For instance, one popular request was to know why an article was doing well on Facebook. Due to the nature of that network, however, which is much less open than, say, Twitter, we simply can't know the cascade of shares that leads to virality.vi Understanding the limits of analytics is an important starting point when framing the questions we want to ask about our content.
Paths Forward
As we have been developing this platform, we have also thought about how this project, and projects like it, become sustainable. NewsLynx is certainly not unique in wrestling with this question, since foundation- and grant-funded journalism tools are just as ascendant as foundation-funded newsrooms. A number of former and current project leaders spoke to us about what they thought they did well and what they would have done differently. What follows are the main questions projects like these should be asking themselves, along with some consensus on how viable the various alternatives may be.
Should You Charge for the Service?
Most people respond to this question by saying either "you'll never make money off of newsrooms," or they take the Bill Cunningham approach: "You see, if you don't take money, they can't tell you what to do, kid."66 Neither is completely true.
For the former, whole industries (analytics being one, commenting platforms another) do make a lot of money off of newsrooms. As for the latter, that the lack of a price tag grants creative freedom, Miranda Mulligan, digital creative director at National Geographic and former executive director of Northwestern University's Knight Lab, put it succinctly: "Users have expectations whether they give you money or not." While one could argue about just how much users expect from a free tool versus a paid one, the answer comes down to value and utility. If the platform is not useful, people will neither use it nor pay for it. Either way, getting people to use a platform, even one with the highest utility of any tool out there, is still a matter of overcoming existing workflows and getting organizational buy-in from multiple levels of management. Shane Shifflett, one of the developers of FOIA Machine, a platform for newsrooms to easily send and track large quantities of Freedom of Information requests, echoed this sentiment. "Once you have a [platform] set up in the newsroom," he said, "you still have to constantly remind people of the benefit, especially if it's a shared, newsroom-wide benefit."
Sometimes open source projects don't cost enough for a newsroom to easily buy them. In other words, they don't charge what newsroom finance departments are built to pay. As Brian Boyer said about his experience with PANDA, a 2011 Knight News Challenge winner that enables newsrooms to store and query shared data resources, staff can sometimes hit a speed bump just getting a credit card authorized for the minimal computing costs required to maintain a PANDA server. "The kind of money [news organizations] are used to spending is 50,000 dollars, not 300 dollars. They're used to getting a purchase order, not using their credit card." Inflating prices and charging an arbitrary multiple of 10,000 dollars can be a viable solution for some, but in the Venn diagram of people who are comfortable doing that and people who pitch open source, civic-minded technology platforms for journalism, the intersection is small.
On a simpler level, many newsrooms simply don't have the budget for a new product. Many project leaders we interviewed said they would want to charge the full value of their system or simply make it free. Undercharging, they felt, would alienate too many potential newsrooms while not providing material benefit to the developers. Shifflett said this was true for him, emphasizing as well that the team already had full-time jobs and didn't see the project as a business it wanted to start running.
Others felt that they could stay truer to their own priorities without pressure from paying clients, which can hold if the project rests on a solid enough foundation that its utility is not in question and it is relatively stable.
What Is the Value and Who Sees the Benefit? Making It "Kid-tested, Mother-approved."
One of the things Boyer would have changed about PANDA was broadening the idea of who its user was. "We did user-centered design, but I would have thought more about the managing editor as a user, the person with the checkbook, along with the reporter as a typical use case," he said. He continued:
What we struggled with is there are only a handful of news organizations that have decided that this is a priority. PANDA has some pretty amazing tools to create efficiencies and make people work smarter, not harder, and [things] like that. Managing editors as a class, however, aren't necessarily thinking about it in these terms yet.
Understanding the attractiveness of a product to different stakeholders is the crucial takeaway. NewsLynx, for example, could appeal to management because it could help keep the organization afloat financially. As we discussed in the section on how impact standards could eventually be shared across newsrooms, a multi-speed approach seems absolutely necessary: "reporter-tested, editor-approved," so to speak.
The Community Question
Simply making a project open source is far from a silver bullet for sustainability. We would love to see a community of developers building its own NewsLynx modules, and we have done our best to enable that kind of system in the future, but it's important to remember that community-building is a skill in itself and takes concerted effort. Boyer and Shifflett both pointed out that, if they had it to do over again, bringing on more people to cultivate and manage relationships with user groups would be key.
And while the word community is so overused in the tech sphere that it's become the subject of satire,67 what underlies this issue is the simple economics that, with some exceptions, one to three people cannot develop, maintain, and steer a technology project of significant size in the long term. All the project leaders we spoke with echoed the sentiment that, even with the newest tools lowering the barrier to entry, technology is hard, and user-facing technology is even harder.
What we are really discussing with the community question is how you get people invested in your project. More often than not, simple utility is not the deciding factor since, as we've discussed, "useful for whom?" is its own political question. The solutions to this bind range from the mundane (go speak at conferences), to the novel (Mulligan pointed to a group of South American developers who created a game to help crowdsource data for their urban cycling company),68 to the small-scale (we gave NewsLynx a magical lynx mascot,vii Merlynne Jones, whose image and GIFs populate our site with some personality). While difficult, the community question expresses an underlying need not just for utility but for ways to fuel buy-in and enable a network effect (the idea that enough people using a tool makes it the default choice), which can be aided by anything from traditional promotion efforts to usability, or even emotional preference for the interface.
Another phrase for community is "highly interested newsrooms." FOIA Machine is pursuing a strategy of working with a few key organizations as a way to fine-tune the platform and offer a jumping-off point for wider adoption. If we can keep working with motivated organizations and refine the tool to their needs, a community could develop to share tips and ideas for NewsLynx, and eventually contribute code as well.
Conclusion
After more than two years of thinking about and, in part, building impact tools, we're happy to see a markedly different landscape from when we started. Ideas that were then hypothetical are now being put into practice. In reviewing some of the older literature while preparing this report, we came across a 2012 Nieman Lab article by Jonathan Stray that concluded with a picture of the kind of technology that could help guide the way through the messy world of impact: "Ideally, a newsroom would have an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was published, plus automatically collected analytics ..."69 Rereading this article, we were happy to see that NewsLynx makes concrete what was previously hypothetical, and to see how that idea played out in practice, what needed improvement, and how we can move forward.
With the platform, newsrooms were able to streamline their workflows and surface insightful elements of impact that would otherwise have been missed. They could tell stakeholders the story of their journalism's audience exposure much more quickly, with reliable data to back up those assessments.
In the larger field of media impact measurement, the amount of experimentation taking place, and the fact that the conversation has moved past the less interesting problems (the search for the holy grail of a universal taxonomy being one of them), make it an extremely exciting time. It has never been easier for a newsroom to design its own analytics geared toward the questions it wants answered. And here lies the next challenge, which was really the challenge all along.
These technological advancements and the democratization of the data pipeline are most helpful, paradoxically, in that they drive us back to base assumptions and away from technology. "Tool-wishing," phrases that start with "if only we just had a platform to do X," can be a blinder to the real hurdles at play. No tool, no matter how well designed or implemented, can tell a news organization what impact is or should be. As Stray continued in his piece, "but nothing so elaborate [as this proposed platform] is necessary to get started. Every newsroom has some sort of content analytics, and qualitative effects can be tracked with nothing more than notes in a spreadsheet."70
Indeed, the newsrooms that got the most out of NewsLynx were those that had already started with "notes in a spreadsheet" and had previously worked through the harder problems of deciding what they care about measuring. In the end, computers are better, faster, and (sometimes) more reliable notebooks; but, just as in the physical world, fancy pens can't make a writer tell a good story.
Going forward, we see a few trends, or if not yet trends, then helpful directions:
- Automate more. We made the Approval River because newsroom staff have better things to do than search through multiple clipping services and other lists for hours each week. We still imagine a "human in the loop" system, but the more of these services that can be automated to put ready-to-input, structured information into an article's timeline, the better.
- More context in metrics. By showing numbers in relation to newsroom or topic averages, NewsLynx users were able to quickly get a sense of where each article stood. Efforts like NPR's Carebot to contextualize metrics in terms of "what percentage of people shared this story" are a great way forward in this vein of experimentation.
- Defining expectations. Similarly, we have known for years that not all articles are created equal, nor are they all expected to perform equally. Operationalizing this idea has been slow going, however, because it's hard to admit that not every article will be a star. Developing mission-driven metrics will be crucial to selling this kind of measurement to management.
- Quantitative metrics aren't going anywhere. Numbers will continue to be useful because they provide value for many organizations, and their emotional utility is not to be underestimated. As Caitlin Petre recently examined in her Tow report's chapter on the design and use of Chartbeat, even if you're not a traffic-driven site, it feels great when you hit record figures.71
- Impact measurement needs to know how to market itself to news organizations. This concern is smaller at organizations where impact is part of the business model. But at larger organizations interested in this field, how do you convince management to commit resources to something with generally only mid- to long-term benefits? Folded into this question is how to approach a wary audience of journalists who view impact measurement as at odds with impartiality. Again, this idea ties back to an organization's goals: What are we here to do, and how can we measure that? Impact measurement with no objective can come across as purely self-congratulatory, with no organizational benefit.
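The "more context in metrics" idea above can be sketched concretely: express an article's number as a percentage of a newsroom or topic average. The averages and view counts below are illustrative, not real NewsLynx data.

```python
import math

def relative_to_average(value: float, average: float) -> float:
    """Return `value` as a percentage of `average` (NaN if average is zero)."""
    return 100.0 * value / average if average else math.nan

# Hypothetical figures: one article compared against two baselines.
article_views = 9200
print(f"{relative_to_average(article_views, 3100):.0f}% of the newsroom average")
print(f"{relative_to_average(article_views, 4600):.0f}% of the immigration-topic average")
```

The same article can look like a runaway hit against one baseline and merely solid against another, which is exactly why the comparison set matters as much as the number.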
In the future, we think the practice of impact measurement will align with healthy processes for understanding how one's newsroom operates and, importantly, why it operates at all. Whatever that why turns out to be, whether it is purely to inform readers or to hold power to account, finding out what is required to get there should be an instrumental part of an organization's mission, and achieving that mission a strong part of newsroom culture. We hope that NewsLynx, or future NewsLynx-like systems, can help organizations year after year to keep filling those impact envelopes.
Acknowledgements
Over the course of this research project we had help from many people deserving of our gratitude. We would like to thank Emily Bell for not just giving the project a home, but also for creating a place, the Tow Center, where projects with magical cat mascots are not simply tolerated but encouraged. We are tremendously grateful to Fergus Pitt, who skillfully guided our research and provided valuable feedback on each successive version of our drafts. Thanks to Taylor Owen, who shepherded the project in its initial stages, and Claire Wardle, who joined us toward the end but whose key questions helped us move to completion. We also thank Lauren Mack, Elizabeth Boylan, and Abigail Ronck for all their help in dotting every i and crossing every t along this journey.
We are tremendously grateful to Dana Chinn, who gave valuable feedback and organized the first-ever NewsLynx Users' Summit, from which we were able to plan platform improvements and see what worked and what didn't. The project owes a great deal to Lauren Furhmann, whose involvement, feedback, and support were beyond instrumental to the life of NewsLynx (and yet, who is not to be trusted in a game of Werewolf). The same goes for Lindsay Green-Barber and Blair Hickman, with whom we had many conversations over the years about what they catalogue as impact and, perhaps more importantly, what impact could be.
The project would not be what it is without the artwork of Clarisa Diaz, who brought Merlynne Jones to life in myriad forms, including animated GIFs. We thank Alastair Dant for his feedback on the NewsLynx visual design and the acute observation that the initial comparison interface didn't make sense at all, leading to renewed research and a workable redesign.
We also thank the staff of Charlotte's Patisserie, who tolerated our laptops from opening to closing almost every single weekend between July and October, and those employees at numerous other haunts where we passed shorter work stints. And, as always, we thank our friends and family for their support and understanding.
June 2015
Appendix A
Impact Survey
This survey* is part of a research project at the Tow Center for Digital Journalism at Columbia University. The responses will be used to inform the creation of a platform for tracking the quantitative and qualitative impact of journalism, with an emphasis on serving the needs of nonprofit, investigative-oriented outlets. Responses are strictly anonymous and all analysis will be presented in the aggregate, though unattributed quotes may be reproduced.
If you would like to test our platform starting Summer 2014, please check the box below and leave an email address.
If you have any questions or concerns, please reach out to contact@newslynx.org.
Some questions were adapted from a similar survey put together by Joanna Raczkiewicz at https://harmonyinstitute.org/.
Organizational Profile
What's the name of your organization?
What functions do you perform? (check all that apply)
[checklist] @ Editing @ Reporting @ Social Media @ Community Outreach @ Fundraising @ Programming (code) @ Business Development @ Board Member @ Other:
How many full-time employees work for your news organization?
How many freelancers, contributors, and/or interns?
What is your news organization's primary source of revenue?
[checklist] @ Foundations @ Advertisements @ Subscriptions / Membership @ Donations / Major Gifts @ Endowment @ Other:
Content Streams
Does your organization produce original content?
[checklist] @ Yes @ No
Which of the following types of content does your organization publish or aggregate? (check all that apply)
[checklist] @ Breaking news @ Analysis @ Explanatory pieces @ Long form investigations @ Cultural criticism (Film / Theater / Literature / Music / Television, etc.) @ Blogs @ Opinion @ Datasets @ Interactive graphics / Interactive databases @ Learning aids / resources @ Teaching aids / resources @ Reader comments / forums @ Professional development resources @ Syndicated content @ Video / Documentary @ Radio stories @ Podcasts @ Other:
Through which channels do you distribute content? (check all that apply)
[checklist] @ Online @ Print @ Radio @ Television @ Resyndication @ Other:
How often does your organization publish content?
[checklist] @ Hourly @ Daily @ Weekly @ Monthly @ Other:
Does your organization tag content with metadata so that it can be searched, sorted, or otherwise organized?
[checklist] @ Yes @ No @ I donât know
Are the tags you use aligned with any external metadata standards?
[checklist] @ Yes @ No @ I donât know
If you answered âyesâ to the previous question, please list the standard(s) used.
Analytic Practices: Quantitative
Are you required by foundations, private funders, management, or a Board of Directors to produce reports on organization analytics?
[checklist] @ Yes @ No @ I donât know
If so, what kind of metrics do they ask for and how often must they be reported?
What analytics platforms / tools do you currently use? (check all that apply)
[checklist] @ Google Analytics @ Parse.ly @ Mixpanel @ Chartbeat @ WebTrends @ Omniture @ KISSmetrics @ SparkWise @ Bit.ly @ Twitter Analytics @ Facebook Insights @ CrimsonHexagon @ Topsy @ SocialFlow @ Klout @ Vocus @ Mention.net @ Google Alerts @ Nielsen @ Arbitron / Nielsen Audio @ Internal (as in homemade) @ Other:
Analytic Practices: Qualitative
Do you do anything currently to measure the qualitative performance of stories?
[checklist] @ Yes @ No @ I donât know
If so, what?
Who is in charge of monitoring impact at your organization?
[checklist] @ A dedicated employee (i.e., "Impact Analyst," "Engagement Manager," etc.) @ A primary person who holds other responsibilities as well @ A small team of people @ Everyone spends some time doing this @ We don't have anyone currently doing this @ I don't know if someone does this
How many hours per week does this person / these people spend on impact measurement?
Can you provide links to 2–3 stories / projects that you thought were especially "impactful" and a very brief explanation as to why?
Institutional Challenges / Goals
Are quantitative (pageviews, mentions, likes) or qualitative measurements (change in law, an interesting citation) more important?
[checklist] @ Quantitative @ Qualitative
Why is that measurement more important?
What is the most challenging aspect of measurement for your organization?
Actions
In what ways could measurement (quantitative and/or qualitative) positively influence your organization's content or business strategy?
Who is most interested in the outcome of articles?
[checklist] @ Reporters @ Editors @ Donors @ Board of directors / CEO @ Everyone equally @ Other:
Further Information
Would you be interested in beta testing our impact measurement platform at your news organization?
[checklist] @ Yes @ No
If so, please provide your email address so we can contact you with more information.
Â
Appendix B
Impact Reading List
B. Abelson, "HI Score: Towards a New Metric of Influence," Harmony Institute, 26 June 2012, https://harmony-institute.org/therippleeffect/2012/06/27/hi-score-towards-a-new-metric-of-influence/.
B. Abrash and J. Clark, "Social Justice Documentary: Designing for Impact," Center for Social Media, September 2011, https://www.centerforsocialmedia.org/sites/default/files/documents/pages/designing_for_impact.pdf.
Ad Council, "Overview of Ad Council Research and Evaluation Procedures," date unknown, https://www.adcouncil.org/Impact/Research/Overview-of-Ad-Council-Research-Evaluation-Procedures.
C.W. Anderson et al., "Post-industrial Journalism: Adapting to the Present," Tow Center for Digital Journalism, Fall 2012, https://www.cjrarchive.org/img/posts/tow-content/uploads/2012/11/TOWCenter-Post_Industrial_Journalism.pdf.
D. Barrett and S. Leddy, "Assessing Creative Media's Social Impact," Fledgling Fund, December 2008, https://www.thefledglingfund.org/wp-content/uploads/2012/11/Impact-Paper.pdf.
J. Blakely, "Research Study Finds That a Film Can Have a Measurable Impact on Audience Behavior," Norman Lear Center, 12 February 2012, https://www.learcenter.org/pdf/FoodInc.pdf.
J. Blakely, "TedX Phoenix: Movies for a Change," YouTube, 12 February 2012, https://youtu.be/Pb0FZPzzWuk.
D. Bornstein, âWhy We Need Solutions Journalism,â Skoll World Forum, 2012, https://skollworldforum.org/debate-post/why-we-need-solutions-journalism/.
R. Breeze, "Measuring Community Engagement: A Case Study from Chicago Public Media," Reynolds Journalism Institute, 1 December 2011, https://www.rjionline.org/blog/measuring-community-engagement-case-study-chicago-public-media.
A. Brock et al., "Room for Improvement: Foundations' Support of Nonprofit Media," MediaShift, 11 May 2010, https://www.pbs.org/mediashift/2010/05/5-needs-and-5-tools-for-measuring-media-impact131.html.
J. Clark and T. Van Slyke, "Investing in Impact," Center for Social Media, 12 May 2010, https://www.centerforsocialmedia.org/sites/default/files/documents/pages/Investing_in_Impact.pdf.
Community Wealth Ventures, "How Nonprofit News Ventures Seek Sustainability," Knight Foundation, October 2011, https://www.knightfoundation.org/media/uploads/publication_pdfs/13664_KF_NPNews_Overview_10-17-2.pdf.
Channel 4 BritDoc Foundation Evaluation, https://britdoc.org/real_good/evaluation.
S. Duros, "How Impact Counts for Hyperlocal News, but How to Count It?" Block By Block, 6 August 2012, https://www.blockbyblock.us/2012/08/06/impact-and-what-it-is-for-community-and-hyperlocal-news/.
S. Fisch and R. Truglio eds., G Is for Growing: Thirty Years of Research on Children and Sesame Street (Mahwah: Lawrence Erlbaum Associates, 2001).
S. Fox, "Why Are We Spending So Much Time 'Measuring the Impact of Journalism?'" UMass Journalism Profs, 30 March 2012, https://umassjournalismprofs.wordpress.com/2012/03/30/why-are-we-spending-so-much-time-measuring-the-impact-of-journalism/.
FSG Social Impact Advisors/John S. and James L. Knight Foundation, "Measuring the Online Impact of Your Information Project: A Primer for Practitioners and Funders," FSG, 2010, https://www.fsg.org/tabid/191/ArticleId/428/Default.aspx?srpush=true.
FSG Social Impact Advisors and John S. and James L. Knight Foundation, "IMPACT: A Practical Guide to Evaluating Community Information Projects," Knight Foundation, February 2011, https://www.knightfoundation.org/media/uploads/publication_pdfs/Impact-a-guide-to-Evaluating_Community_Info_Projects.pdf.
B. Gates, "My Plan to Fix the World's Biggest Problems," Wall Street Journal, 25 January 2013, https://online.wsj.com/article/SB10001424127887323539804578261780648285770.html.
Bill & Melinda Gates Foundation, "A Guide to Actionable Measurement," 2010, https://www.gatesfoundation.org/learning/Documents/guide-to-actionable-measurement.pdf.
S. Gigli, "What Is 'Disruptive Metrics'," InterMedia, 20 March 2013, http://www.intermedia.org/2013/03/20/what-is-disruptive-metrics/.
K. E. Gill, "Carnival of Journalism: How to Measure What Matters?" WiredPen, 4 April 2012, https://wiredpen.com/2012/04/04/carnival-of-journalism-how-to-measure-what-matters/.
J. Gordon, "See, Say, Feel, Do: Social Media Metrics That Matter," Fenton, date unknown, https://www.fenton.com/resources/see-say-feel-do/.
L. Graves, "Traffic Jam: We'll Never Agree About Online Audience Size," Columbia Journalism Review, 7 September 2010, https://www.cjr.org/reports/traffic_jam.php?page=all.
L. Graves et al., "Confusion Online: Faulty Metrics and the Future of Digital Journalism," Tow Center for Digital Journalism, September 2010, https://www.journalism.columbia.edu/system/documents/345/original/online_metrics_report.pdf.
D. Green, "Eyeballs and Impact: Are We Measuring the Right Things If We Care About Social Progress?" Skoll World Forum, 2012, https://skollworldforum.org/debate-post/eyeballs-and-impact-are-we-measuring-the-right-things-if-we-care-about-social-progress/.
L. Green-Barber, "3 Investigations, 3 New Laws: See How CIR's Stories Gain Macro Impact," Center for Investigative Reporting, 2014, https://www.revealnews.org/article-legacy/3-investigations-3-new-laws-see-how-cirs-stories-gain-macro-impact/.
L. Green-Barber, "Creating an Impact Community," Center for Investigative Reporting, 2014, https://cironline.org/blog/post/creating-impact-community-6301.
L. Green-Barber, "How Can Journalists Measure the Impact of Their Work? Notes Toward a Model of Measurement," Nieman Journalism Lab, 2014, http://www.niemanlab.org/2014/03/how-can-journalists-measure-the-impact-of-their-work-notes-toward-a-model-of-measurement/.
L. Green-Barber, "The Language of Impact: Introducing a Draft Glossary," Center for Investigative Reporting, 2014, https://www.revealnews.org/article-legacy/the-language-of-impact-introducing-a-draft-glossary.
L. Green-Barber, "Measuring Media Impact: 5 Steps to Put You on Track," Knight Foundation blog, 2014, http://www.knightfoundation.org/blogs/knightblog/2015/4/27/measuring-media-impact-5-steps-put-you/.
Harmony Institute, "Waiting for Superman: Entertainment Evaluation Highlights," May 2011, https://harmony-institute.org/wp-content/uploads/2011/07/WFS_Highlights_20110701.pdf.
L. Heedy and S. Keen, "SROI for Funders," New Philanthropy Capital, September 2010, https://www.thinknpc.org/?attachment_id=815&post-parent=4924df.
B. Hickman et al., "Best of MuckReads 2012," ProPublica, 31 December 2012, https://www.propublica.org/article/best-of-muckreads-2012.
International Center for Journalists, "An Evaluation of the Knight International Journalism Fellowships," date unknown, https://www.knightfoundation.org/media/uploads/publication_pdfs/Evaluation_of_Knight_ICFJ_Fellowships_final.pdf.
International Center for Journalists, "Evaluation Field Manual and Tools for the Knight International Journalism Fellowships," January 2011, https://issuu.com/kijf/docs/icfj_knight_international_evaluation_manual.
J. Johnson, "A New Approach to Making Films That Matter," GOOD, 11 January 2013, https://www.good.is/posts/a-new-approach-to-making-films-that-matter.
D. Karlan et al., "More Than Good Intentions: How a New Economics Is Helping to Solve Global Poverty," Dutton, 2011, https://www.amazon.com/More-Than-Good-Intentions-Economics/dp/052595189X.
KETC 9, "Facing the Mortgage Crisis: People, Connections, Resources," Corporation for Public Broadcasting, Spring 2008, https://www.stlmortgagecrisis.org/.
R. Kohavi et al., "Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained," Microsoft, 2012, https://www.exp-platform.com/Documents/puzzlingOutcomesInControlledExperiments.pdf.
M. Kramer and J. Kania, "Collective Impact," Stanford Social Innovation Review, Winter 2011, https://www.fsg.org/tabid/191/ArticleId/211/Default.aspx?srpush=true.
N.D. Kristof, "Getting Smart on Aid," The New York Times, 18 May 2011, https://www.nytimes.com/2011/05/19/opinion/19kristof.html?_r=1&partner=rssnyt&emc=rss.
G. King et al., "Matching As Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference," Political Analysis, no. 15 (2007): 199–236, https://gking.harvard.edu/files/gking/files/matchp.pdf.
G. Linch, "Quantifying Impact: A Better Metric for Measuring Journalism," greglinch.com, 14 January 2012, https://www.greglinch.com/blog/2012/01/14/quantifying-impact-a-better-metric-for-measuring-journalism/.
M. Lewis and H. Niles, âMeasuring Impact: The Art, Science and Mystery of Nonprofit News,â Investigative Reporting Workshop, 2013, https://irw.s3. amazonaws.com/uploads/measuring-impact-final-pdf.pdf.
LFA Group: Learning for Action/Bill & Melinda Gates Foundation/John S. and James L. Knight Foundation, âDeepening Engagement for Lasting Impact: Measuring Media Performance Results,â Learning for Action, February 2013, https://dmeforpeace.org/learn/deepening-engagement-lasting-impact-framework- measuring-media-performance-results.
Data Desk, âComplete Guide to the LAFD Data Controversy,â Los Ange-les Times, 12 April 2012 (ongoing), https://timelines.latimes.com/lafd- data- controversy/.
J. Mayer and R. Stern, "A Resource for Newsrooms: Identifying and Measuring Audience Engagement Efforts," Reynolds Journalism Institute, 3 June 2011, https://www.rjionline.org/sites/default/files/theengagementmetric-fullreport-spring2011.pdf.
McKinsey & Company, Social Impact Assessment Portal, https://lsi.mckinsey.com/.
National Center for Media Engagement (NCME), "Measuring Public Media's Impact: Challenges and Opportunities," March 2013, https://mediaengage.org/CommunicateImpact/measure3.cfm.
E. Ní Ógáin et al., "Making an Impact: Impact Measurement Among Charities and Social Enterprises in the UK," New Philanthropy Capital, October 2012, https://www.thinknpc.org/publications/making-an-impact/making-an-impact/.
J. Peters, "Some Newspapers, Tracking Readers Online, Shift Coverage," The New York Times, 5 September 2010, https://www.nytimes.com/2010/09/06/business/media/06track.html.
J. Peters, "A Newspaper, and a Legacy, Reordered," The New York Times, 11 February 2012, https://www.nytimes.com/2012/02/12/business/media/the-washington-post-recast-for-a-digital-future.html?pagewanted=all&_r=0.
A. Pilhofer, "Finding the Right Metric for News," aronpilhofer.com, 25 July 2012, https://aronpilhofer.com/post/27993980039/the-right-metric-for-news.
J. Priem et al., "The Altmetrics Collection," Public Library of Science, 2012, https://www.ploscollections.org/article/info:doi/10.1371/journal.pone.0048753.
C. Ramsay et al., "Misinformation and the 2010 Election: A Study of the US Electorate," World Public Opinion, 10 December 2010, https://www.worldpublicopinion.org/pipa/pdf/dec10/Misinformation_Dec10_rpt.pdf.
J. Reisman et al., "A Handbook of Data Collection Tools: Companion to 'A Guide to Measuring Advocacy and Policy,'" Organizational Research Services, 2007, https://www.organizationalresearch.com/publicationsandresources/a_handbook_of_data_collection_tools.pdf.
M. Rosenblum, "How to Quantify the Impact of Journalism," New York Video School, 30 March 2012, https://www.nyvs.com/blog/user/michael/How-To-Quantify-The-Impact-of-Journalism.
M. Salganik, "Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market," Science, no. 311 (2006): 854, https://www.sciencemag.org/content/311/5762/854.short.
J. Search, Beyond the Box Office: New Documentary Valuations (May 2011), https://www.documentary.org/images/news/2011/AnInconvenientTruth_BeyondTheBoxOffice.pdf.
Sparkwise, https://sparkwi.se/.
A. Spittle, "Defining New Metrics for Journalism," andrewspittle.net, 28 April 2012, https://andrewspittle.net/2012/04/28/new-metrics/.
J. Stray, "By the Numbers, American Journalism Failed to Inform Voters," jonathanstray.com, 29 December 2010, https://jonathanstray.com/american-journalism-failed-to-inform-voters.
J. Stray, "Designing Journalism to Be Used," jonathanstray.com, 26 September 2010, https://jonathanstray.com/designing-journalism-to-be-used.
J. Stray, "Does Journalism Work?" jonathanstray.com, 14 December 2010, https://jonathanstray.com/does-journalism-work.
J. Stray, "Metrics, Metrics Everywhere: How Do We Measure the Impact of Journalism?" Nieman Journalism Lab, 17 August 2012, https://www.niemanlab.org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-of-journalism/.
E. A. Stuart, "Matching Methods for Causal Inference: A Review and a Look Forward," Statistical Science 25, no. 1 (2010): 1–21, https://biostat.jhsph.edu/~estuart/Stuart10.StatSci.pdf.
R. J. Tofel, "Non-profit Journalism – Issues Around Impact: A White Paper from ProPublica," ProPublica, February 2013, https://s3.amazonaws.com/propublica/assets/about/LFA_ProPublica-white-paper_2.1.pdf.
TRASI: Tools and Resources for Assessing Social Impact, https://trasicommunity.ning.com/.
N. Ward/Infomart, "Un-juking the Stats: Measuring Journalism's Impact on Society," Infomart, 17 October 2012, https://www.infomart.com/un-juking-the-stats-measuring-journalisms-impact-on-society/.
N. Ward/Infomart, "We Become What We Measure: Developing Impact Metrics for Journalism," Infomart, 3 October 2012, https://www.infomart.com/2012/10/03/we-become-what-we-measure-developing-impact-metrics-for-journalism/.
L. Williams, "How Can News Organizations Assess Impact and Engagement?" Investigative News Network, 2013, https://newstraining.org/2013/09/25/how-can-news-organizations-assess-impact-and-engagement/.
J. M. White, "Bandit Algorithms for Website Optimization: Developing, Deploying, and Debugging," O'Reilly Media, 2012, https://oreilly.com/shop/product/0636920027393.html?bB=g.
"WITNESS Performance Dashboard," December 2009, https://www3.witness.org/sites/default/files/downloads/witness-dashboard-evaluation-2010.pdf.
E. Zuckerman, "Metrics for Civic Impacts of Journalism," ethanzuckerman.com, 30 June 2011, https://www.ethanzuckerman.com/blog/2011/06/30/metrics-for-civic-impacts-of-journalism/.
Footnotes
- It's important to point out that media does not always positively influence the world, nor do its effects necessarily relate to any intention [see (L. Bennett, "Toward a Theory of Press-State Relations in the United States," Journal of Communication 40, no. 2 [June 1990]: 103–127, https://onlinelibrary.wiley.com/doi/10.1111/j.1460-2466.1990.tb02265.x/abstract); (H. Gans, Deciding What's News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time [Chicago: Northwestern University Press, 1979]); (E. Herman and N. Chomsky, Manufacturing Consent: The Political Economy of the Mass Media [New York: Pantheon Books, 1988]); and (J. Mermin, Debating War and Peace: Media Coverage of U.S. Intervention in the Post-Vietnam Era [Princeton: Princeton University Press, 1999])]. Research has shown, for instance, that increased exposure to certain media outlets was associated with fundamental misperceptions about the Iraq War [(R. Lewis et al., "Misperceptions, the Media, and the Iraq War," Political Science Quarterly 118, no. 4 [Winter 2003]: 569–598, https://onlinelibrary.wiley.com/doi/10.1002/j.1538-165X.2003.tb00406.x/abstract)]. Despite attempts by some papers to correct the errors in their reporting [see, for example, (NYT editors, "The Times and Iraq," The New York Times, 26 May 2004, http://www.nytimes.com/2004/05/26/international/middleeast/26FTE_NOTE.html)], a Harris Interactive Poll in 2008 found that a shockingly high 37 percent of Americans still believed that Saddam Hussein was manufacturing weapons of mass destruction in the lead-up to the U.S.-led invasion [(Harris Interactive, "Significant Minority Still Believe that Iraq Had Weapons of Mass Destruction When U.S. Invaded," 10 November 2008, http://www.harrisinteractive.com/vault/Harris-Interactive-Poll-Research-Iraq-2008-11.pdf)].
Psychological experiments back these empirical and theoretical findings, reliably demonstrating how media frames, the lenses through which issues are defined and/or explicated, influence readers' perceptions [(T. Nelson et al., "Media Framing of a Civil Liberties Conflict and Its Effect on Tolerance," The American Political Science Review 91, no. 3 [September 1997]: 567–583, https://www.uvm.edu/~dguber/POLS234/articles/nelson.pdf)].
- This isn't an official title; we use it to refer to what is sometimes just one person or, alternatively, a small team. In our research this role varies widely, from dedicated full-time positions to a single person who juggles impact and analytics reports alongside numerous other duties.
- Jason Alcorn and Lauren Fuhrman of InvestigateWest and the Wisconsin Center for Investigative Journalism, respectively, will look at best practices for impact reporting. Jessica Clark of Media Impact Funders will address the issue of how foundations could best interact with newsrooms.
- In this version, we chose to sort these articles by page views. A future version could make the entire list of metrics customizable. We sort by page views instead of publish date (the other logical choice) because Google Analytics takes at least a day to populate its data; sorting by date would mean the dashboard always shows incomplete numbers for organizations that publish daily. As the system grows to support other metrics, this default sort could become customizable as well.
- Unfortunately, Twitter doesn't guarantee that its search results include every tweet. Only access to the Twitter firehose would make a comprehensive list possible.
- One company, CrowdTangle, promises to do this by monitoring a large number of Facebook pages for an organization's content. It is a paid service. An open source, community-maintained repository would be one interesting alternative for democratizing this kind of insight.
- Taxonomically she is an "Impcat," a rare breed of lynx adept at measuring impact.
Citations
- K. Bradsher, "License to Pollute: A Special Report," The New York Times, 30 November 1997, https://www.nytimes.com/1997/11/30/business/license-pollute-special-report-light-trucks-increase-profits-but-foul-air-more.html.
- C. Duhigg et al., "The iEconomy Series," The New York Times, 2012, http://www.nytimes.com/interactive/business/ieconomy.html?_r=1&.
- A. Nazir Afiq, "Saturday Night Live Pokes Fun at iPhone 5 Tech Pundits," Vimeo, 2013, https://vimeo.com/51392953.
- J. Stray, "Metrics, Metrics Everywhere: How Do We Measure the Impact of Journalism?" Nieman Journalism Lab, 17 August 2012, https://www.niemanlab.org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-of-journalism/.
- B. Gates, "Why Measurement Matters: 2013 Annual Letter from Bill Gates," Bill & Melinda Gates Foundation, 2013, https://www.gatesfoundation.org/who-we-are/resources-and-media/annual-letters-list/annual-letter-2013.
- United Nations Development Programme Evaluation Office, Handbook on Monitoring and Evaluating Results (New York: United Nations Development Programme, 2002).
- United Nations Development Programme, What Will It Take to Achieve the Millennium Development Goals? An International Assessment (New York: United Nations Development Programme, 2010).
- W. Easterly, "How the Millennium Development Goals are Unfair to Africa," World Development 27, no. 1 (2009): 26–35, https://dri.fas.nyu.edu/docs/IO/13016/UnfairtoAfrica.pdf.
- "Funding Impact: Partnerships, Networks & Collaborations: A Learning Opportunity," AFRICA Grantmakers' Affinity Group, 6 August 2014, https://africagrantmakers.org/resource/funding-impact-partnership-networks-collaborations-learning-opportunity-2009-conference/.
- Resource Archive, International Human Rights Funders Group, https://ihrfg.org/resource-archive/entry/debate-impact-single-issue-vs-multi-issue-organization.
- "Measuring the Impact of Environmental Communications," Environmental Grantmakers Association, 12 March 2015, https://ega.org/events/2015/measuring-impact-environmental-communications.
- "Tools and Resources for Assessing Social Impact," Foundation Center, http://trasi.foundationcenter.org/browse_toolkit.php.
- The BRITDOC Foundation, https://britdoc.org/.
- BritDoc, The Impact Field Guide & Toolkit (Miami: Ford Foundation, Knight Foundation et al., Date unknown).
- D. Green and E. Paluck, "Deference, Dissent, and Dispute Resolution: An Experimental Intervention Using Mass Media to Change Norms and Behavior in Rwanda," American Political Science Review 103, no. 4 (November 2009): 622–644.
- A. Mitchell et al., "Nonprofit Journalism: A Growing but Fragile Part of the U.S. News System," Pew Research Center, 10 June 2013, https://www.journalism.org/2013/06/10/nonprofit-journalism/.
- John S. and James L. Knight Foundation, 2013 990 Return of Private Foundation, https://www.knightfoundation.org/media/uploads/media_pdfs/KNIGHT_FOUNDATION_990PF_FINAL_2013.pdf.
- How We Work Grant: New Venture Fund, July 2013, Bill & Melinda Gates Foundation, https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2013/07/OPP1092058.
- How We Work Grant: Guardian News & Media Ltd, August 2011, Bill & Melinda Gates Foundation, https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2011/08/OPP1034962.
- How We Work Grant: Univision Communications Inc., June 2014, Bill & Melinda Gates Foundation, https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2014/06/OPP1109707.
- "Deepening Engagement for Lasting Impact: A Framework for Measuring Media Performance & Results," Learning for Action, October 2013, https://www.learningforaction.com/wp/wp-content/uploads/2014/08/Media-Measurement-Framework_Final_08_01_14.pdf.
- "New Program Funded to Measure Media Impact and Audience Engagement," Knight Foundation, 29 April 2013, https://www.knightfoundation.org/press-room/press-release/new-program-funded-measure-media-impact-and-audien/.
- "Media Impact Project," USC Annenberg Norman Lear Center, https://www.mediaimpactproject.org/.
- Introducing the Media Impact Project Measurement System, Media Impact Project, USC Annenberg Norman Lear Center, 2014, https://www.mediaimpactproject.org/measurement.html.
- A. Phelps, "I Can't Stop Reading This Analysis of Gawker's Editorial Strategy," Nieman Journalism Lab, 21 March 2012, https://www.niemanlab.org/2012/03/i-cant-stop-reading-this-analysis-of-gawkers-editorial-strategy/.
- G. Linch, "Quantifying Impact: A Better Metric for Measuring Journalism," greglinch.com, 14 January 2012, https://www.greglinch.com/2012/01/quantifying-impact-a-better-metric-for-measuring-journalism.html.
- J. Stray, "Metrics, Metrics Everywhere: How Do We Measure the Impact of Journalism?" Nieman Journalism Lab, 17 August 2012, https://www.niemanlab.org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-of-journalism/.
- A. Pilhofer, "Finding the Right Metric for News," aronpilhofer.com, 25 July 2012, https://aronpilhofer.com/post/27993980039/the-right-metric-for-news.
- B. Abelson, "The Relationship Between Promotion and Performance: Pageviews Above Replacement," abelson.nyc, 14 November 2013, https://abelson.nyc/open-news/2013/11/14/Pageviews-above-replacement.html.
- B. Abelson, "Whither the Pageview Apocalypse?" abelson.nyc, 9 October 2013, https://abelson.nyc/open-news/2013/10/09/Whither-the-pageview_apocalypse.html.
- "What is the Attention Web?" Chartbeat, https://chartbeat.com/attention-web#.VV3haWTBzGc.
- B. Abelson, "The Relationship Between Promotion and Performance: Pageviews Above Replacement," abelson.nyc, 14 November 2013, https://abelson.nyc/open-news/2013/11/14/Pageviews-above-replacement.html.
- A. Anand, "Introducing MORI, Our Impact Tracker Tool," Chalkbeat, 2 June 2014, https://chalkbeat.org/2014/06/02/introducing-mori-our-impact-tracker-tool/.
- Lindsay Green-Barber Author Page, Center for Investigative Reporting, http://cironline.org/person/lindsay-green-barber.
- L. Green-Barber, "3 Investigations, 3 New Laws: See How CIR's Stories Gain Macro Impact," Center for Investigative Reporting, 2 October 2014, https://www.revealnews.org/article-legacy/3-investigations-3-new-laws-see-how-cirs-stories-gain-macro-impact/.
- "MORI: Chalkbeat's Impact Tracker Tool," Chalkbeat, https://chalkbeat.org/mori/.
- Snowplow: The Event Analytics Platform, https://snowplowanalytics.com/.
- Who Uses Snowplow, https://snowplowanalytics.com/product/who-uses-snowplow.html.
- Piwik, https://piwik.org/.
- OpenStreetMap, https://www.openstreetmap.org/.
- 596 Acres, https://www.596acres.org/.
- WordPress.org, Plugin Directory, https://wordpress.org/plugins/search.php?q=piwik.
- NPR's Visual Carebot, Github, https://github.com/nprapps/carebot.
- J. Lichterman, "Building an Analytics Culture in a Newsroom: How NPR is Trying to Expand Its Digital Thinking," Nieman Journalism Lab, 30 April 2014, https://www.niemanlab.org/2014/04/building-an-analytics-culture-in-a-newsroom-how-npr-is-trying-to-expand-its-digital-thinking/.
- R. J. Tofel, "Non-profit Journalism – Issues Around Impact: A White Paper from ProPublica," ProPublica, February 2013, https://s3.amazonaws.com/propublica/assets/about/LFA_ProPublica-white-paper_2.1.pdf.
- Pixel Ping on Github, https://documentcloud.github.io/pixel-ping/.
- J. Lichterman, "Constantly Tweaking: How The Guardian Continues to Develop Its In-house Analytics System," Nieman Journalism Lab, 29 January 2015, https://www.niemanlab.org/2015/01/constantly-tweaking-how-the-guardian-continues-to-develop-its-in-house-analytics-system/.
- J. Robinson, "Watching the Audience Move: A New York Times Tool Is Helping Direct Traffic from Story to Story," Nieman Journalism Lab, 28 May 2014, https://www.niemanlab.org/2014/05/watching-the-audience-move-a-new-york-times-tool-is-helping-direct-traffic-from-story-to-story/.
- B. Abelson and M. Keller, "Tow Fellows Brian Abelson, Stijn Debrouwere, and Michael Keller to Study the Impact of Journalism," Tow Center for Digital Journalism, 29 April 2014, https://towcenter.org/tow-fellows-brian-abelson-and-michael-keller-to-study-the-impact-of-journalism/.
- Impact measures from ProPublica and WisconsinWatch.org, respectively, http://www.propublica.org/about/impact/ and https://wisconsinwatch.org/about/impact/.
- K. Golden, "Center Awarded $35,000 from Knight-supported INNovation Fund to Translate Investigative Reporting into Art, Explore New Audiences – and Profit," WisconsinWatch.org, 23 October 2014, https://wisconsinwatch.org/2014/10/center-awarded-35000-from-knight-supported-innovation-fund-to-translate-investigative-reporting-into-art-explore-new-audiences-and-profit/.
- Celestial Emporium of Benevolent Knowledge, Wikipedia, https://en.wikipedia.org/wiki/Celestial_Emporium_of_Benevolent_Knowledge.
- Jorge Luis Borges, "Del Rigor en la Ciencia (On Exactitude in Science)," Los Anales de Buenos Aires 3, no. 3 (March 1946), https://www.sccs.swarthmore.edu/users/08/bblonder/phys120/docs/borges.pdf.
- J. Lichterman, "Building an Analytics Culture in a Newsroom: How NPR is Trying to Expand Its Digital Thinking," Nieman Journalism Lab, 30 April 2014, https://www.niemanlab.org/2014/04/building-an-analytics-culture-in-a-newsroom-how-npr-is-trying-to-expand-its-digital-thinking/.
- Presidents-HeadsOfGovt, @VITweeple, Twitter, https://twitter.com/VITweeple/lists/presidents-headsofgovt.
- Presidents-HeadsOfGovt, @VITweeple, Twitter, https://twitter.com/VITweeple/lists/presidents-headsofgovt.
- U.S. Attorneys on Twitter, @TheJusticeDept, Twitter, https://twitter.com/TheJusticeDept/lists/u-s-attorneys-on-twitter.
- Theory of Change, Wikipedia, https://en.wikipedia.org/wiki/Theory_of_change.
- N. Silver, "Sept. 27: The Impact of the 47 Percent," FiveThirtyEight blog, The New York Times, 28 September 2012, https://fivethirtyeight.blogs.nytimes.com/2012/09/28/sept-27-the-impact-of-the-47-percent/?_r=0.
- JSON for Linking Data, https://json-ld.org/.
- NewsLynx: Own Your Analytics, https://newslynx.readthedocs.org/en/latest/.
- Capitolwords: A Project of the Sunlight Foundation, https://capitolwords.org/about/.
- Podio, https://podio.com/.
- embed.ly, https://embed.ly/analytics.
- M. Dowd, "Hunting Birds of Paradise," The New York Times, 5 April 2011, https://www.nytimes.com/2011/04/06/opinion/06dowd.html?_r=0.
- XOXO Festival, "Darius Kazemi, Tiny Subversions – XOXO Festival (2014)," YouTube, 21 October 2014, https://www.youtube.com/watch?v=l_F9jxsfGCw.
- Bikestorming, https://www.bikestorming.com/.
- J. Stray, "Metrics, Metrics Everywhere: How Do We Measure the Impact of Journalism?" Nieman Journalism Lab, 17 August 2012, https://www.niemanlab.org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-of-journalism/.
- J. Stray, "Metrics, Metrics Everywhere: How Do We Measure the Impact of Journalism?" Nieman Journalism Lab, 17 August 2012, https://www.niemanlab.org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-of-journalism/.
- C. Petre, "The Traffic Factories: Metrics at Chartbeat, Gawker Media, and The New York Times," Tow Center for Digital Journalism, 7 May 2015, https://www.niemanlab.org/2012/08/metrics-metrics-everywhere-how-do-we-measure-the-impact-of-journalism/.