Q and A

The Poetics of Posting

Haley Mlotek speaks with Mia Sato about abortion and social media

December 6, 2022
Photo of Sato by Amelia Holowaty Krales. Photo illustration by Darrel Frost.

At the end of June, a certain kind of post blew up on TikTok, Twitter, and Instagram, at the rapid rate of virality that people have come to expect in times of crisis: the promise of a bed to sleep in and a home to stay in for anyone traveling outside their home state for abortion care, “no questions asked.”

But Mia Sato, a writer for The Verge who has been reporting on social media in the post-Roe era, did have questions. “My coverage is about how platforms respond to user needs, or how they don’t,” she told me, “and what people are doing on the platforms provided to them.” Sato, who lives in Brooklyn, is twenty-seven, a former poetry student and audience engagement editor now covering platforms, technologies, and the behaviors inspired by both. “I just really like to talk to people and make narrative sense of life,” she said. She understands the nature of attention and the power of brevity. “The poetic line break is such a great way to learn about pacing, the art of building narrative tension, the voice, building the fall—the sarcasm or the knowing wink. Writing poetry is so fun and, surprisingly, a good skill for a journalist to have.” She has developed an appreciation for the poetics of posting: “I think every reporter should run a social media account for a newsroom at one point in their life. When you’re forced to read the comments and interact with people who consume your work, you ask yourself different questions in the process of reporting. You read stories differently, understand what things look like on different platforms, like where a headline cuts off. That can change the meaning of the story before someone even clicks on it.”

Over the summer, Sato spoke to Janie Harvey Garner, a nurse who created a Facebook group called Volunteer Aunties. Open to anyone, it was an attempt to organize the flood of offers to host those in need. Sato’s story—“On a post–Roe v. Wade internet, unvetted abortion support is going viral”—noted that abortion networks have always provided practical and logistical care and that the legality of abortion has never ensured equitable access. She also interviewed Marisa Falcon, the executive director of Apiary for Practical Support, about the dangers inherent in taking help from individuals. (“We need to be talking about what people need, not what people want to give,” Falcon said.)

For years—at The Verge and, before that, MIT Technology Review and other outlets—Sato has written about Facebook, now known as Meta. “Young people are not flocking to Facebook,” she told me. “It’s looking to competitors like TikTok, which are not really social in the way we understood social media to be; it’s not a place to chat with friends. It’s a place to offer up content that artificial intelligence thinks you will enjoy.” She added, “The company’s future depends on it not being what you think it is.” In recent months, Sato and her colleagues at The Verge have followed how Meta has restricted access to information about mail-order abortion pills and actively participated in the criminalization of abortion care. 

In our conversation, which has been condensed and edited for clarity, Sato and I talked about the internet as it seems, as opposed to the internet as it’s used, and the limits platforms place on what people need most. We spoke about how change, which never happens overnight, can still come as a surprise—and yet predicting what could happen next is often simpler than it might appear.

So much of what I associate Facebook with is that it propagates mis- and disinformation, often seemingly without consequence for the people pushing it, and in fact often to their enormous profit. What does Facebook want to be, and how does it reconcile those aims with this paradox?

Facebook needs young people. And the prevailing narrative is that young people want short-form video; they want Instagram Reels. Well, they don’t want Reels, they want TikToks, and those are two different things. What they want is content from sources they aren’t already following. With regard to my reporting on abortion, I think a big part of the tension is between people who want to use these platforms as they once were and what the company is actually doing with them. For users, what you think a platform is will usually be how you use it, and that is, of course, not always helpful—or safe.

When you started reporting on your story about Facebook users offering unvetted support to people seeking abortions away from their home states, did you find anything that perhaps surprised you, or confirmed your existing understanding of what Facebook is today?

After Roe fell, I was watching how people responded online, because there are always so many different conflicting interests in what people post. I use the term “meme” pretty loosely, but one thing that I saw become a meme was something that usually went, “If you are in Seattle and need a place to stay for your abortion, my couch is open.” And it’s like a mic drop, got ’em kind of thing. 

Obviously, some of it might be virtue-signaling; some of it might have been typed up in frustration, in the chaos after the draft leaked and then when Roe v. Wade was finally overturned. But what struck me was that it was a profoundly individualistic way to try to meet the moment of this need. When something like a fundamental right to healthcare is at stake, the urgency is so much more apparent, and the response is Well, I have a room here. A porch light is on. That, and the fact that it was going viral on every platform, everywhere, was very interesting to me. I saw a lot of people that I thought would have been more critical of this approach also posting it. No judgment at all—it’s just fascinating that people who spent the last three years talking about mutual aid would also share this approach.

The posts were spreading rapidly, and I wanted to do a story about it as a phenomenon. There were micro Facebook groups and WhatsApp groups popping up, talking about building a network of apartments in certain neighborhoods, certain cities. My approach to this story—and to a lot of stories I do—was asking: What is already out there? We talk about this being a curtain-opener for the era of abortion being criminalized, but the reality is that that’s not a new thing. People have been criminalized for seeking abortion care for a very long time, no matter what the laws are, frankly. I knew that there were abortion networks already out there, organizations that have really honed the skill and art of practical abortion support, which is the catchall term for things like lodging, transportation, anything apart from the physical and medical element. I really wanted to know what those organizations thought about these viral posts, because none of the posts were coming from those networks, and it turned out that what the posters were offering was so out of step with what the networks knew patients needed. I was talking to this organization called Apiary for Practical Support, and the executive director, Marisa Falcon, said something that I remember challenged my own assumptions. She said, “Honestly, people don’t want your guest room. If they’re traveling for an abortion, they want a hotel room.”

For me, all the pieces of the story fell into place then. I saw the gulf between the people who are providing material support and the people who mean well but don’t know how. Immediately the frustration came into focus for me: these groups had been preparing for this, and then they watched the public reaction and thought, Wait, wait, wait. It was rolling out of their control.

I do spend a lot of time thinking about the value of symbolic actions, and I think—especially when we’re talking about social media—almost all the action posted there is symbolic. It’s a way of saying, Here I am, here’s the side I’m on, here’s a record and a way for people to find me. And I have a lot of respect for the symbolic value in organizing, because sometimes it can do a lot. When the judgment came down, that, too, required a real understanding of what we were grieving: the loss of an era when abortion had been symbolically available by virtue of the law, but by no means practically—because the accessibility of abortion has never been equal, fair, or, for so many people, decriminalized. It sounds like this became a story about a group of people who were coming up against the limits of that symbolism.

Yeah, and I also do certainly sympathize with the people who feel, in moments of emergency, that all they can think to do is say I’m here, I’m available. I don’t fault the feeling, but I think those limits of symbolism carry a new risk now that we’ve turned a corner. The potential for criminalization, whether you are the one getting the abortion or not, is much bigger than before. This comes up in my story: it’s now incredibly risky, legally, to seek out abortion, and that adds new levels to this. There are new dangers to being someone vocal about abortion online. And every post creates more data that can be collected.

I think maybe the question comes down to a cost-benefit analysis in a moment like this, when the groups already trying to do this work are in need of funding, of support, of people to do all the different tasks that might not be as tangible as hosting someone. They need someone to do the Excel work!

That came up in my interviews, and it’s a tough question. When you are able to give, what do you want to give, and what is needed? What is being asked for? It’s an interesting question of charity, and perhaps an uncomfortable one, as people reflect on their own feelings.

I’m nodding emphatically at the Excel spreadsheet point, which is always the struggle with organizing work. It’s making me think, too, about how social media preys on the impulse to make every person the hero of their own story. I did want to ask where the idea for this reporting came from. It sounds like it wasn’t an assignment from an editor but one that you wanted to pursue.

Yes. I was seeing the posts grow at incredible speed. I think that platforms have not done a great job, frankly, of putting proactive measures in place to control the quality of information as things shift in real time.

There’s also the element of platforms being primed for this stuff to get big. It’s how they operate. A viral TikTok that goes, “Hey, I’m in Illinois, and I need to say I’m here. Comment if you are also there for other people,” is often what gains traction; that’s the way it feeds itself, through a hero narrative or the personal attachment to the news of the day. I was really fascinated by how this mostly well-meaning content creation could muddy the work that was already being done. Indeed, the influx of people offering support and resources was something that abortion care networks were scared of. They told me they were nervous about how they would deal with people who wanted to help but maybe had a skewed idea of what that really looks like, because of what they saw on social media.

I want to talk specifically about finding sources for your stories. You’re on social media and you’re seeing something like this trend blow up—how do you go about reaching out and negotiating access with people who might be good interviews?

In this case, I was watching to see what level of engagement people were getting after they posted the offers. Some of them would post the tweet and then wouldn’t really respond to the comments, so I was like, Okay, there’s not a great chance that they will have much more to say. The person I ended up speaking to had started a Facebook group. And, as an adjacent thought, it’s very interesting that when people think about how to organize around a moment like this, they turn specifically to tools like Facebook groups.

My source had started a new Facebook group in the wake of the fall of Roe v. Wade, and it was modeled after a community that already exists on Reddit called the Auntie Network. The Facebook one was called Volunteer Aunties. I knew she would want to talk because she was very active in interacting with the group. I later learned that she is a moderator for several other enormous Facebook groups with tons and tons of members, so she’s really versed in how to use these groups to get something done, or at the least, to collect people. She was very generous with her time, and I’m thankful she spoke to me. 

Early on, days after the group was created, people suspected that there were antiabortion individuals joining. Her response was, “Yeah, I’m sure there are. We’ve added thousands of people and can’t know everyone’s hearts.” That was really who I wanted to talk to. Someone who was passionate and trying to create something out of this need, but who was perhaps not quite tapping into the networks that already existed. 

Absolutely, and I wanted to talk about that element: the implicit but very real danger that it would attract exactly the opposite of who the groups were trying to serve. Did you find that people had questions for you, or for themselves, about how to use Facebook? 

Part of the reason there should be more thinking about why we gravitate toward the tools we do for this kind of organizing is that the existing networks themselves have very specific ways of vetting people. In the groups, there wasn’t much concern about digital safety, whereas the existing networks have an intake process that is face-to-face, offline. Can you trust people on the internet to understand what they’re undertaking, to know what end-to-end-encrypted messages are, to know that Facebook can see those messages if end-to-end encryption isn’t enabled?

Earlier, too, you mentioned “memeification,” which is a flawed but accurate term for what you’re reporting on. I’m curious to know if you saw any overlap between this and another meme that appeared around the same time: alongside the posts offering a couch, there were posts about the dangers of abortion surveillance, encouraging people to delete their period-tracking apps and take other digital-safety measures. It seems so telling that there were these two distinct camps, one going deeper into social media and the other saying it was past time to abandon it.

Yes, it’s a whole other can of worms. I’m not a cybersecurity reporter, but from what I’ve read, it seems like period-tracking apps could be a cause for concern, but it’s much more likely that something like chat logs would be used against you, or even information you give to your doctors and the people around you. It was interesting to me that the period-tracking apps were something people turned on quickly, and that could be for a variety of reasons. The app feels like a diary, a very private place; even though it’s on your phone, you’re not posting it publicly online. There are these waves of reaction.

I do think, at the very least, the period-tracking-app question made people rethink where they were putting this information, even if they might still communicate via Facebook Messenger or similar. 

I recently saw somebody post on TikTok a conspiracy theory that that new conservative dating app is actually an FBI initiative—which, I mean, I don’t know if the FBI has that sense of humor, but I guess you can’t put it past them. Anyway, it led to someone explaining the concept of a honeypot and honey-trapping, which made me think about the literacy around using something like Facebook, which has a history of readily collaborating with the police, and has already been used to penalize and criminalize people seeking abortion access. When you were reporting on platforms and the Dobbs decision, did you rely on anybody specific to report on the risk involved? Did you reach out to Facebook?

I didn’t reach out to Facebook for this story, because it was about people willingly creating these groups. There were a couple of things I picked up on in my reporting. One was this idea that if people used euphemisms or code words, that would somehow bypass law enforcement, which I think really shows a fundamental misunderstanding of what is going on in a way that’s like, Should we be trusting you with handling patient data? 

And in a slightly different but related way, there was also the fact of Facebook taking down posts about access to abortion pills. This was especially important to note because of Facebook’s own content moderation policies around COVID-19 and health mis- and disinformation. Abortion is healthcare, and suddenly Facebook decides these abortion pills are against their own healthcare policies. 

Yes, I reported on that, and published my own story a few days before the one about the Facebook groups, about how Meta took down posts offering to mail people abortion pills. Based on what I was seeing, I think there was a lot of shock that they would have this policy. The pharmaceutical restriction was part of an existing policy that they had, and the story says so. I was immediately able to re-create the same effect that other outlets noted: if you post “I’ll mail you abortion pills,” it gets taken down right away, but if you post something like “I’ll mail you weed,” it doesn’t.

That kind of knowledge about how Facebook already responds to the memeification of support tells you a lot about how it might respond in the future. The exchange of pharmaceuticals isn’t allowed, but you can post information about accessing those resources. But then the company responded to these stories by saying, Oh, it was a mistake and it’s been restored. You can tell me if that’s an adequate response. I don’t know.

Alex Heath’s work at The Verge looks at the internal dynamics of employees and employers at platforms like Meta, while yours tends to focus on the external dynamics between the platforms and their people. Of course, those things have overlapped when it comes to the way the company contradicts itself on abortion access. How has that come up in your coverage?

The delineation between external and internal reporting for us at The Verge isn’t set in stone—he just happens to cover the internal side because that’s where his sourcing is very strong, and for a lot of other reasons—but certainly how companies talk about things internally comes up regularly in my reporting. In the abortion pieces, the clearest example of that is in the story I did about mailing abortion pills. I tried to contrast how the company talks about things with what actually happens. The company response was that they didn’t allow any controlled substances to be offered or mailed, including any kind of medication, but that you could post information about acquiring abortion pills. There was also a post by Planned Parenthood that was allowed under Meta’s rules, a post that gave information about abortion medication, and it was removed. Facebook later said it was an error and that it had been restored. With that story, I wanted to show that this was a policy the company had already raised in its defense when people criticized how they handle posts that deal with abortion, and here was an example of a post that didn’t violate those rules but was still removed. The explanation the company pointed to was that it was an error, whatever that means.

In the stories that I write day to day that are about product updates or policy changes that platforms raise and introduce, I do try to share with readers what we know about the inner workings of companies that are often very opaque and guarded. That’s where I rely quite a bit on Alex’s reporting, because he’s able to pull in documents and meeting transcripts and show exactly how the people running these companies are talking to their employees about the changes they’re seeing. That’s a perspective that the average user of platforms isn’t privy to very often. The only way they will ever know that is if someone internally decides to leak it to a journalist, or the higher-ups decide that this is what they’re running with. Those two things—what the public sees and what employees hear—aren’t always unified. They can talk about it to employees and then have a different spin for the public or the press. 

You mentioned how your work allows you to observe the incremental shifts that add up to what can seem like a huge change, tracking how a platform reveals what might be its new goal. Do you think these internal contradictions in what employees experience at Meta can reveal anything about what people might anticipate experiencing on these platforms as users?

There are documents leaked all the time. Not everything is on the scale of the whistleblower report; there are little documents and emails that are shared with reporters consistently. Hearing how companies talk to their employees, and how they talk about things to their employees, is a really good way for users to learn about their platforms. You can learn about company priorities in a way that you probably otherwise wouldn’t be able to. A big part of it that people maybe don’t think about much is how companies make money. Like, it’s so simple, but does the way you interact with the platform change based on what you know about its finances? You can start questioning why different features are being introduced, or when a company pivots in search of a new revenue stream. The average social media user—because the platforms are, for the most part, free—doesn’t think too hard about it, but it is instructive. It’s good to remember that Meta is telling their employees to shut up about abortion on their internal forums, but at the same time, they have also made a commitment to helping their employees get access to this care. You can hold that alongside what we know about how they treat content on the platform. There are rules that they say are very clear, but it just so happens that an advocacy group like Planned Parenthood, whose job is to provide information, will have posts removed in what is explained away as an “accident” or an “error.”

Since Dobbs, I’ve seen people post on TikTok about documentaries like The Janes with perhaps a sense of wonder that they were able to run such an efficient network without the internet. In fact, it worked exactly because the only paper trail was a literal trail of index cards. It wasn’t something like Google Docs that could be accessed almost anywhere. In your reporting, how have you seen people’s responses evolve across different platforms? Have you seen anything that reminds you of how people responded before the advent of Facebook or Twitter?

Collective memory is very short, and when a platform like TikTok skews so young, there are legitimate and often hilarious misunderstandings about what was going on prior to twenty years ago or so. I haven’t seen the TikTok with people talking about The Janes, but that’s funny, and I think it’s understandable. I mean, I don’t know how to use a fax machine. 

But I do think that in the moment, the misunderstanding of what is happening on digital platforms is not entirely users’ fault. They’re confused about Facebook’s responses. If you are surprised that Facebook is going to take down posts offering abortion pills by mail, I don’t really blame you. A lot of these policies are ones you only care about when it becomes necessary to care about them. People aren’t expected to know or read the privacy policies, or to use the platforms with those terms in the back of their minds, understanding the framing. That is an incredible burden placed on consumers. People might not fully grasp what they’re doing or what they’re creating as they post on these platforms. I saw that that was a big narrative thread in a lot of covid-19 reporting: people who should have known better.

It’s true, the internet has the effect of wiping memory—or maybe wiping common sense. Was there any wider research you did for these stories beyond the interviews? Are there any books you’ve turned to, or writers you really trust, on platforms or technologies?

At the time, I was reading Uncanny Valley by Anna Wiener, and I don’t know that there was a direct tie-in with this specific story, but I do think that it really helps to think of these platforms as companies. It’s such a simple way to think about my work; The Verge has helped make that divide feel smaller. When companies do things that make users feel like, Why would you do that? Nobody wants that, it’s not that weird to me anymore. The explanation almost always comes down to profit, and to the guarantee that this company will be around in ten years.

That is a very helpful framework. It’s a given if you’re a reporter, but I think as consumers, it’s a little bit harder to see that. I also think there’s a tension—or not even a tension, an inclination—toward the platforms. The fact that we turn toward them for this kind of work says a lot. I think about the other resources we have, and what we don’t. The fact that in an emergency the best thing someone can think to do is send a tweet is not just your fault—it signals more about the crumbling of institutions and, especially on the heels of covid, an incredible isolation from the community around you.

Earlier, you spoke about the risk involved in people using Facebook to secure abortion access. What are the dangers specific to journalists, either professionally or personally? Is there anything you do to protect yourself?

For me, as long as I’m on this beat, there will never be a window in which I can disengage or break from all of these platforms. I will be online forever if I’m doing this job. I need to be tuned in to what’s going on, or at least have some fluency with how the tools work and how the products are designed. As a reporter covering the internet, there are a lot of things where you just have to stare into the sun, for lack of a better phrase. There’s an old YouTube video, titled “blind myself with a lamp for no reason,” of a little boy staring directly into a light. That’s what it feels like.

Is it dangerous? Maybe, but it’s part of the mandate of the job to understand and to have an ear to the ground on the platforms, in the communities. It’s part of what I’ve accepted, doing the work. 

What sort of changes to platforms like Meta should consumers and reporters alike be paying attention to now? 

One thing is the messaging element. Meta has introduced new tools for DMs on Instagram, and they’ve said they’re working on making end-to-end encryption the default in 2023. The question of why that hasn’t already been the default became especially urgent post–Roe v. Wade. I’m not sure if it was prompted directly by abortion access, but I think watching for those security updates will be important. Other apps do it and have for a while. Why can’t Facebook?

Haley Mlotek is a writer, editor, and organizer with the National Writers Union. Her first book, No Fault: A Memoir of Romance and Divorce, is forthcoming from Viking in February 2025.