Summary: Behavioral study of expectation in media

June 27, 2016

Aims and Hypotheses 

We designed a study to determine whether participants’ trust in the information contained within an online news article is affected by the brand of the article, by demographic factors, and by readership habits such as regularly accessing news online or in print. Given prior findings in analogous disciplines concerning expectation and trust in media, we hypothesized that participants would show more trust in older, established print publications known for investigative journalism, like The New Yorker, than in more recently established online publications, like BuzzFeed. Further, we expected that people who reported more journalism readership would be more aware of brands and, consequently, more willing to trust information from known sources.

Methods

Participants were recruited from Amazon Mechanical Turk. Eligibility was limited to those who were at least 18 years of age and lived in the United States. The experiment lasted between 40 and 90 minutes, and each participant was paid $6 for completing the task. Participants who attempted to repeat the experiment and those who did not obtain a sufficient article attention score (described below) were excluded from analysis. After providing informed consent, participants first reported demographic information, including age, education, and socioeconomic status. Next, they answered questions about their magazine readership, including how much, how often, and which publications they read, as well as how they accessed journalism (online, print, TV, radio, etc.). Once they had responded to these questions, all participants read the same article, presented in one of three media “shells” designed to look like The New Yorker, BuzzFeed, or a “neutral” publication we created called The Review. Subjects were instructed to read the article and respond to questions once they had finished. The “back” button was disabled so that participants could not change their answers about prior readership after they had read the story.

After finishing the article, participants answered eight questions about its basic factual content, to ensure that they had paid attention and read the piece in full; subjects who answered fewer than five questions correctly were excluded from analysis. Next, participants were asked several questions about their reaction to the piece. These included, first, questions about how closely their viewpoint aligned with the protagonist’s, which we also took to be the prevailing opinion of the writer (we called this positive trust); and second, questions about how closely their viewpoint aligned with the government’s, which we took to run against the prevailing opinion of the writer (we called this negative trust). We then examined a similar summary score for both positive and negative trust that included questions about whether participants trusted or did not trust the information contained in the piece, regardless of the opinion of the author. We also asked readers for their opinion of the writer’s qualifications, reliability, and credibility; how accurately they thought the two sides were portrayed; and how clearly they believed the facts were presented. Finally, we asked specifically whether they had taken note of the publication, and whether this affected their opinion of the piece. At the end of the experiment, all participants received a short debriefing form that explained our purpose and provided a link to the original article they had just read.
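
The exclusion rule amounts to a simple threshold. As a minimal sketch (the answer key and response format below are hypothetical, not the study’s materials):

```python
# A minimal sketch of the attention-check rule: participants answering
# fewer than 5 of the 8 factual questions correctly were excluded.
# The answer key and response format here are hypothetical.
ANSWER_KEY = ["a", "c", "b", "d", "a", "b", "c", "d"]

def passes_attention_check(responses, key=ANSWER_KEY, min_correct=5):
    """Return True if at least min_correct answers match the key."""
    correct = sum(r == k for r, k in zip(responses, key))
    return correct >= min_correct

# Example: 5 of 8 correct -> included; 4 of 8 -> excluded.
print(passes_attention_check(["a", "c", "b", "d", "a", "x", "x", "x"]))  # True
print(passes_attention_check(["a", "c", "b", "d", "x", "x", "x", "x"]))  # False
```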

Analysis 

Analyses were performed using Microsoft Excel and the MATLAB Statistics Toolbox. Differences in demographics across group assignments were analyzed using one-way ANOVA and t-tests. Differences in trust metrics between the two groups of interest were first analyzed, based on our specific hypotheses, using one-tailed, two-group t-tests and regression models. Exploratory analyses were performed using Pearson correlation and analysis of covariance.
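
The original analyses were run in Excel and MATLAB; purely as an illustration of the same group-balance check, here is a minimal Python/SciPy sketch. The variable names and data are invented placeholders, not the study’s:

```python
# Illustrative Python/SciPy version of the group-balance check described
# above; the study itself used Excel and MATLAB. Data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical ages for the three shell groups (n = 89, 93, 85)
age_buzzfeed = rng.normal(34.5, 10.8, 89)
age_newyorker = rng.normal(34.5, 10.8, 93)
age_neutral = rng.normal(34.5, 10.8, 85)

# One-way ANOVA: is a demographic factor balanced across groups?
f_stat, p_val = stats.f_oneway(age_buzzfeed, age_newyorker, age_neutral)
print(f"Age across groups: F={f_stat:.2f}, p={p_val:.2f}")
```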

Participant demographics. After eliminating participants for poor performance on the factual story questions, we included 89 participants in the BuzzFeed group, 93 in the New Yorker group, and 85 in the neutral group. We examined subject age, ethnicity, and gender, as well as income and education levels, across groups. We found no differences among the three group assignments on any demographic category (Age: F=0.39, p=0.67; Education: F=1.72, p=0.18; Gender: F=0.37, p=0.69; Income: F=1.37, p=0.25; Ethnicity: F=0.74, p=0.47). Age was related to income (r=0.17, p<0.01) and gender (women were younger than the men in this study; t=-2.2, p=0.028); as such, no two correlated factors were included together in any one predictive analysis, to avoid issues with collinearity. Of the group tested, approximately 60 percent were female, and the average age was 34.5 years (SD=10.8). The average income level was between $20,000 and $40,000 annually. Participants self-identified as white/Caucasian (77 percent), black/African-American (8 percent), Hispanic or Latino (7 percent), Asian or Pacific Islander (5 percent), or other (2 percent); 1 percent preferred not to say.

Primary analyses. Our primary question in this study was whether the branding of the article one reads affects the way one trusts the information contained within it. To address this, we created a positive and a negative trust summary score for each individual. The positive trust score was a metric composed of the questions participants answered about trusting the writer’s factual portrayal of the protagonist. The negative trust summary score reflected the questions pertaining to not trusting the writer’s portrayal of the protagonist, as well as responses indicating trust in the pro-government side of the story. We found that while, on average, participants were more likely to trust than to distrust the publication (positive trust versus negative trust: t=15.13, p<0.01), in line with our hypotheses, participants who read the article in the BuzzFeed shell reported lower positive trust (t=-1.79, p=0.037) and higher negative trust (t=1.64, p=0.05) relative to those who read the article in the New Yorker shell. We found a similar pattern, trending in the same direction, when we examined the data in terms of informational content (positive: t=-1.73, p=0.04; negative: t=1.45, p=0.07). There was no difference in reported trust from those who read the article in the New Yorker or BuzzFeed shells relative to the shell for the neutral publication, The Review, whose trust responses fell between those of the other two groups. Similarly, we did not find this relationship in bias reports for the protagonist (t=1.12, p=0.13) or government (t=0.9, p=0.19), or in writer credibility (t=-1.39, p=0.92).
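
Because the hypothesis was directional (lower trust in the BuzzFeed shell), the comparison calls for a one-tailed test. A rough sketch of how such a test can be run in SciPy, with placeholder scores rather than the study’s data:

```python
# Sketch of the one-tailed, two-group t-test between shells. The
# positive-trust arrays are random placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pos_trust_buzzfeed = rng.normal(3.4, 0.8, 89)
pos_trust_newyorker = rng.normal(3.6, 0.8, 93)

# Hypothesis: BuzzFeed-shell readers report LOWER positive trust,
# so the one-sided alternative is "less" (requires SciPy >= 1.6).
t_stat, p_val = stats.ttest_ind(pos_trust_buzzfeed, pos_trust_newyorker,
                                alternative="less")
print(f"t={t_stat:.2f}, one-tailed p={p_val:.3f}")
```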

Readership and trust behavior. We examined relationships between trust metrics and age, education, and a readership summary score, calculated as an additive score reflecting participants’ reported regular access to journalism across print, online, and other media. (Note: only a limited number of demographic factors were examined, due to collinearity between several factors.) None of the relationships between trust and these factors were significant. We also examined the relationship between demographics (age, education) and readership as it pertained to how much attention readers paid to branding. We found that people who said they noted the publication brand were significantly older (t=4.44, p<0.01) and had higher readership scores (t=2.75, p<0.01). However, there was no difference in education between those who did and did not note the brand (t=0.49, p=0.31). Because we were also interested in how writer credibility relates to participants’ tendency to believe information presented within a story, we examined the relationship between reported distrust of the publication and reports of writer credibility. After excluding participants who reported that noting the brand had no impact on their view (121 total), we found that reported distrust was strongly correlated with reports of low writer credibility (r=-0.55, p<0.01). We did not find any relationship with age, education, participant readership, or whether subjects accessed news online or in print.
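
This analysis is a filter followed by a Pearson correlation. A minimal sketch, assuming a tabular dataset with hypothetical column names (not the study’s):

```python
# Sketch of the distrust/credibility correlation after excluding
# participants for whom the brand had no impact. Column names and
# values are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "distrust":        [4, 2, 5, 1, 3, 4, 2, 5],
    "credibility":     [2, 4, 1, 5, 3, 2, 4, 1],
    "brand_no_impact": [False, True, False, False,
                        True, False, False, False],
})

# Keep only participants who said the brand DID affect their view,
# then correlate distrust with credibility ratings.
subset = df[~df["brand_no_impact"]]
r, p = stats.pearsonr(subset["distrust"], subset["credibility"])
print(f"r={r:.2f}, p={p:.3f}")
```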

Exploratory analyses. We wanted to see whether trust scores differed for those who accessed news online versus through other sources. We first explored differences in trust metrics between people whose primary source of news was online and those whose primary source was not. We found that negative trust was significantly greater for those whose primary source of news was online (t=2.39, p=0.01). This was not so for positive trust (t=0.14, p=0.88), writer credibility (t=-1.36, p=0.17), or bias for the protagonist (t=-0.67, p=0.50) or the government (t=0.25, p=0.72). Finally, we wanted to determine whether being an online news consumer was related to reported trust, and whether this differed by group. We found a trend-level interaction between group and online news use (F=2.88, p=0.057), such that those in the BuzzFeed group who were primarily online news consumers showed greater trust scores than those who were not. This relationship was not observed in the other two groups.
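
A group-by-online-use interaction of this kind can be tested with a two-factor model. Here is an illustrative statsmodels sketch; all data and column names are invented placeholders, not the study’s dataset:

```python
# Sketch of the group-by-online-use test as a two-way ANOVA in
# statsmodels. All data and column names are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 267  # 89 + 93 + 85 participants
df = pd.DataFrame({
    "trust":  rng.normal(3.5, 0.8, n),
    "group":  rng.choice(["buzzfeed", "newyorker", "neutral"], n),
    "online": rng.choice(["yes", "no"], n),  # primary news source online?
})

# The C(group):C(online) interaction term asks whether online news use
# predicts trust differently across the three shell groups.
model = smf.ols("trust ~ C(group) * C(online)", data=df).fit()
print(anova_lm(model, typ=2))
```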

Jenna Reinen is a postdoctoral psychologist and cognitive neuroscientist at Yale. She studies how our experiences shape the way we learn and decide. Follow her @jennareinen, or visit her website.