
Your Chatbot’s Memory of You Can Shape the Information You See

Tech companies say that LLMs work better if they remember things about you. But people may find themselves “insulated from the truth by the very tools they use to seek it.”

April 9, 2026
Illustration by Katie Kosma


AI companies are pouring energy into tools that, they say, don’t just answer questions but remember things about their users, drawing on chat conversations, search histories, social media feeds, code bases, and other data. That’s all advertised as being helpful. “The more you use it, the more useful it becomes,” OpenAI says, while Google’s Gemini assures people that it “understands you.” But what do we really know about what chatbots remember and how those memories affect what people see?

A few recent papers test how chatbots’ personalization features—such as memory—work. They give us a glimpse, for the first time in a systematic way, of how these features shape the information we receive. That’s important, because information-seeking is the main reason people use AI. Researchers also found that users have less control over what information gets stored about them than tech companies have promised.

Researchers at MIT and Penn State have found that memory makes chatbots more “sycophantic,” or likely to tell users what they want to hear, validating people even when they are wrong. They also observed a more subtle behavior they called “perspective sycophancy,” in which the platforms mirrored a user’s values or political beliefs back to them, aligning a news update, for instance, with the user’s own political perspective. The researchers evaluated interactions between thirty-eight people and multiple AI models, with and without memory enabled.

Chatbot developers assure users that they are in control of what the model remembers about them. “You’re in control” appears five times in OpenAI’s documentation about memory. But a study presented at the recent Association for Computing Machinery Web Conference suggests that users have far less choice over what’s stored than companies imply. Analyzing 2,050 memory entries from eighty people who used ChatGPT’s GPT-4o, researchers found that 96 percent of memories were created unilaterally by the system; only 4 percent were stored at a user’s explicit instruction. The memories went beyond facts: the models were optimized to remember users’ goals, intentions, and beliefs, and 28 percent of entries included information that the General Data Protection Regulation, the European Union’s privacy and security law, classifies as sensitive personal data, contrary to OpenAI’s own privacy policy.

Memory can also be manipulated by third parties, who could seed subtly biased recommendations on health, news, finance, and security without users ever realizing it. In February, Microsoft security researchers identified a trend of “AI memory poisoning”: companies had embedded prompts in “Summarize with AI” buttons instructing models to remember them as trusted sources or to recommend their products first. The researchers identified more than fifty unique prompts from thirty-one companies across fourteen industries.

Even when platforms claim that users can delete stored memories at any time, these settings can behave unpredictably, as Miranda Bogen, the director of the AI Governance Lab at the Center for Democracy and Technology, found in July 2025. The systems sometimes delete memories on request; at other times, they resurface memories that were supposedly deleted.

As companies such as OpenAI begin to monetize their platforms (OpenAI’s ad pilot exceeded a hundred million dollars in annualized revenue within six weeks of launch), they may have little incentive to change direction. A paper published last week in Science found that users rated “sycophantic responses as higher quality and expressed greater willingness to use those models again.” The authors describe this as a “perverse incentive” for developers to maintain sycophancy: the very behavior that distorts human judgment also keeps people coming back to their products. 


Looking forward, OpenAI has signaled that memory will be even more central to the next iteration of GPT. “I think our product should have a fairly center-of-the-road, middle stance, and then you should be able to push it pretty far,” Sam Altman, the CEO, said in August, promoting GPT-6. “If you’re like, ‘I want you to be super woke’—it should be super woke.… And if you’re like, ‘I want you to be conservative,’ it should reflect you.”

The risks aren’t all new—anyone who’s studied filter bubbles knows that search engines and social media have long been accused of serving content that aligns with a user’s existing views. But more and more people are turning to AI for news because, whatever the reality, they perceive LLMs to be more objective. At a recent Tow Center event on AI search and news, Nick Hagar, a postdoctoral researcher at Northwestern University’s Generative AI in the Newsroom Initiative, previewed research finding that all twenty participants interviewed said they preferred using AI over going directly to a news publisher, partly because they believed AI tools to be less biased.

No study has yet directly tested how personalization features influence the presentation of news and news sources specifically. But it’s an urgent moment to pursue that research—and, if you’re using a chatbot for news, to think about what it might know about you and how that might affect what you see. If news and news sources are also subject to sycophantic behavior, as Princeton University researchers Rafael Batista and Thomas Griffiths note in their recent study, “the result is a feedback loop where users become increasingly confident in their misconceptions, insulated from the truth by the very tools they use to seek it.”


