When the Arizona Daily Star began experimenting with chatbots in 2016, readers seemed excited…and a little confused. They were fascinated by the new technology, but often responded to the bot in ways that hinted they were unsure what, or who, was on the other end.
“A lot of users feel like they’re talking to a person,” says Daily Star Product Manager Becky Pallack, who helped test one bot targeted at local parents and another for super shoppers. “They’ll say thank you and send emojis.”
Bots are everywhere now, helping people hail Lyfts, order pizza, and choose lipstick—and the experience can range from simple and easy to befuddling and unpleasant. The stakes are higher, though, when those bots speak for organizations that bill themselves as trustworthy sources of information. Misleading audiences, even accidentally or for only a moment, can damage a newsroom’s credibility. That doesn’t mean journalists shouldn’t use bots, but newsrooms must pay close attention to how those bots are presented to readers.
That was one factor Jennifer Hefty considered when she decided to launch her newsroom’s first bot a few weeks before the 2016 elections. Hefty, the content strategist at the Fort Collins Coloradoan, had attended a session on conversational journalism at the Online News Association conference earlier that year and wanted to give it a try, but she was also aware of the technology’s potential to deceive. To keep readers informed about the experiment, she created an instructional video and written tutorial to introduce “Elexi,” a Facebook Messenger bot programmed to answer questions about candidates and deliver results on election night. The guide includes step-by-step instructions for users, contact information for Hefty, and, on the last frame in the video, an important disclaimer from Elexi: “Remember, I’m a robot, not a human.”
Once Elexi was live, Hefty monitored it closely and made adjustments to keep users from getting frustrated or confused.
“We’ve talked a lot about being transparent, about how we do things and why,” she says. “I wanted people to give us a little grace if the bot got something wrong, and to know that they can contact a real person.”
This kind of human oversight is crucial to preventing problems and preserving trust, says Julia Haslanger, an engagement consultant at Hearken, a Chicago-based company that helps news organizations interact with their audiences more effectively.
“If you’re just using the chatbot for distribution, you don’t need as much human involvement once it’s launched,” she says. “But if you’re going to use it for engagement, you need to continue to engage with the people; otherwise, you’re not being genuine.”
Transparency was top of mind for Pallack at the Daily Star. “For a lot of people, their first experience with a chatbot is one of our products,” Pallack says. “We have to make it very simple and easy to use.”
Pallack and her colleagues started small, testing their first bots with niche audiences to learn more about the technology. One project was designed to help parents find the summer camp that best fit their children’s interests. Another helped people who cross the US-Mexico border for short but intense shopping trips by providing information about wait times at the border, exchange rates, and deals from online circulars.
Before becoming product manager a year ago, Pallack worked as a reporter for more than a decade, and she put those skills to use by seeking out test users at shopping malls and other community hubs. The shopping and camp bots were live for just a few months each, but Pallack says it was plenty of time to introduce the technology and learn from users’ reactions. “We spent a lot of time literally watching them use it on their phone,” she says. “When does their face light up because they love it, or their eyebrows scrunch up because they’re stuck?”
The tests also helped the newsroom realize when a bot wasn’t the right tool for the job, as was the case with the summer camp project. After watching parents interact with the bot, Pallack and her colleagues realized a filtered search interface would be a better way to present the camp database.
Pallack applied those lessons to her team’s third bot, which has been live since September. The bot is integrated into the Facebook page of the paper’s lifestyle vertical and designed to share updates about local events, new restaurants, and other content appealing to local families. The team has also become more adept at explaining the strengths and limitations of chatbots almost as soon as potential users sign on.
“The onboarding is really when the transparency should happen,” she says.
At this point, newsrooms are mostly using chatbots to push out headlines or solicit specific types of information from audience members. But the advent of platforms like Chatfuel makes it possible for even small newsrooms to launch bots without writing code. As their uses become more diverse, so will the factors newsrooms must consider. Haslanger suggests carefully considering—and communicating—how to use information audience members provide to the bot. Could it appear in a story? Be shared with reporters? With sponsors or advertisers?
The Daily Star is experimenting with integrating ads into the chatbot environment, creating the need for labeling standards. Hefty, meanwhile, plans to build a bot that interacts more smoothly with users. It seems counterintuitive to use a machine to build audience relationships, but Hefty says it’s possible.
“How do we make it feel like someone is messaging a friend?” she asks. “As long as you’re forthright about what you’re doing and who they’re talking to…you can still create that authentic relationship through something that’s artificial.”