Surprise Finding in OpenAI’s Big Study of Human–Chatbot Interactions

For months, a popular storyline in headlines and on social media has rested on one assumption: millions of people are secretly telling their deepest secrets to AI chatbots and treating them like human confidants. But the biggest-ever analysis of real ChatGPT conversations paints a very different picture.
OpenAI published a massive privacy-preserving study of how millions of users actually interact with its chatbot. Rather than a global tidal wave of after-hours confessionalism, the data show that most conversations are functional, informational, or work-related. Only a small share are emotional and personal.
How the Study Was Conducted
To preserve privacy, the researchers never examined individual conversations. Instead, they used automated language-model classifiers to categorize more than one million sampled chats by type and intent.
- These classifiers worked to sort messages into categories like “asking,” “doing,” and “expressing,” as well as topic areas such as writing help, tutoring, and coding support.
This method enabled OpenAI to paint a rich statistical picture without revealing any private user data. The result is the most direct look yet into how people are using ChatGPT.
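The classification step described above can be illustrated with a minimal sketch. OpenAI used learned language-model classifiers; the rule-based stand-in below (all rules invented for illustration) only shows the same input/output shape, mapping raw messages to the study's intent labels.

```python
def classify_intent(message: str) -> str:
    """Bucket a chat message into 'asking', 'doing', or 'expressing'.

    Hypothetical keyword heuristics stand in for the learned
    classifiers the study actually used.
    """
    text = message.lower().strip()
    if text.endswith("?") or text.startswith(("what", "how", "why", "when")):
        return "asking"       # information- or advice-seeking
    if text.startswith(("write", "summarize", "draft", "translate", "fix")):
        return "doing"        # a task for the model to perform
    return "expressing"       # sharing thoughts or feelings

messages = [
    "How do I plan a week in Lisbon?",
    "Write an email declining a meeting.",
    "I've been feeling overwhelmed lately.",
]
labels = [classify_intent(m) for m in messages]
# labels → ['asking', 'doing', 'expressing']
```

Because only category labels, never message contents, leave the classifier, aggregate statistics can be published without exposing any individual chat.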
Practical Uses Dominate
The headline finding is striking:
- Roughly half of all messages fall into "asking," where users seek information, explanations, or advice.
- About 40% fall under "doing," where people ask the AI to perform tasks like writing emails, coding, or summarizing documents.
- Only about 11% are “expressing,” in which someone is primarily sharing thoughts, feelings, or reflections without a direct request for information or action.
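Once every sampled message carries a label, the intent breakdown above is a simple tally. A minimal sketch, using an invented toy sample that mirrors the reported split:

```python
from collections import Counter

def intent_shares(labels: list[str]) -> dict[str, float]:
    """Return each intent's share of messages as a percentage."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {intent: round(100 * n / total, 1) for intent, n in counts.items()}

# Toy data: 49 asking, 40 doing, 11 expressing (invented for illustration).
labels = ["asking"] * 49 + ["doing"] * 40 + ["expressing"] * 11
shares = intent_shares(labels)
# shares → {'asking': 49.0, 'doing': 40.0, 'expressing': 11.0}
```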
The researchers also found a similar pattern when focusing on content:
- Around 80% of usage is centered on pragmatic advice, information-seeking, and writing.
- Coding, often assumed to be a dominant use, makes up only about 4% of conversations.
- Emotional connections, including personal relationships or companionship, account for less than 2% of the total.
Put another way, most people treat ChatGPT as a general-purpose tool, not a therapist or best friend.
The Myth of the Digital Confessional
Such data directly refutes the cultural narrative of AI chatbots as humanity’s confessional booth. Stories of individuals forming deep emotional connections with bots do exist, and for those people, the interactions can indeed be significant. But they are exceptions, not the rule.
It’s worth remembering that “small” is not the same as “none.” A single-digit percentage of billions of messages still represents a large number of personal conversations worldwide. The stakes during those chats can be high, particularly when they involve mental-health concerns or loneliness.
OpenAI therefore emphasizes the need for robust safety systems, including automated checks to detect signs of distress or harmful content and to direct users toward supportive resources.
Who Uses ChatGPT—and How
The research also uncovers intriguing patterns regarding who turns to the chatbot:
- Gender shift: Women now constitute a slight majority of users in some datasets.
- Age trends: Young people remain the most active user base, but adoption is spreading across age groups and regions.
Another important insight is that non-work use is taking off.
- By the middle of 2025, personal conversations outnumbered work ones more than three to one.
- People now approach ChatGPT for learning, tutoring, and general problem-solving—whether for travel planning, homework help, or everyday tasks.
These trends reflect the changing presence of AI in daily life. As ChatGPT is used more for education and planning, accuracy and precision become just as crucial as empathy.
Context: AI and Human Sensitivity
Outside OpenAI, researchers studying people’s relationships with AI caution that the link between loneliness and chatbot use is complex.
- Some lonely individuals may spend more time with AI companions, and for them these interactions can provide comfort.
- Yet large-scale evidence indicates that confessional or companionship use is far from the prevailing form.
This nuance matters. While the data bust the myth of universal AI confessions, they also underscore the importance of creating chatbots that can navigate sensitive moments with care. Even a tiny fraction of emotional connections could represent millions of people worldwide.
Implications for Safety and Design
The results have important implications for technology developers and policymakers:
- Narratives vs. Reality: Public narratives can be radically different from actual behavior. Viral anecdotes of deep emotional connections grab attention, but they don’t represent the average experience.
- Focused Protection: Because some users convey emotions or mental-health concerns, chatbots must respond appropriately—detecting distress, providing accurate information, and avoiding harmful advice.
- Guiding Innovation: Since most interactions center on writing, learning, and planning, developers can focus on tools that enhance accuracy, creativity, and usability for these activities.
A Snapshot, Not the Last Word
OpenAI acknowledges that this research is merely a snapshot. As technology and features like voice activation or personalized memory continue to develop, usage patterns may evolve. People’s relationships with AI are still changing, and future studies will be needed to track those shifts.
But for now, the signal is clear: ChatGPT is a multi-purpose assistant, not an all-purpose digital confessional. The majority of users seek actionable help—writing a document, solving an equation, learning a skill, or getting instant advice. Emotional conversations occur and matter, but they do not drive the platform’s growth.
The Bottom Line
People are not spilling their hearts out to AI in droves, according to the largest study of its kind, based on more than a million sampled conversations. Instead, they use ChatGPT to accomplish tasks—planning trips, writing code, learning concepts, and finding information.



