by Muskan Kansay
OpenAI CEO Sam Altman has issued a direct warning about privacy risks for anyone turning to ChatGPT for mental health support: your conversations with the AI are not legally confidential. Unlike discussions with licensed therapists, doctors, or lawyers, which are protected by legal privilege, nothing you share with ChatGPT carries that kind of protection.
Speaking on Theo Von’s podcast, Altman was blunt about the reality: “It’s very screwed up… We should have the same concept of privacy for your conversations with AI that we do with a therapist… but we haven’t figured that out yet.” He also noted how this problem has grown almost overnight: “No one had to think about this even a year ago, and now it’s a massive issue.”
ChatGPT’s user base is expected to top one billion in 2025, and many people now ask the chatbot to “act as a therapist.” That means countless users are sharing intimate details about their mental health without realizing just how vulnerable that data is.
OpenAI says that deleted conversations are usually removed from its servers within 30 days, unless legal reasons, such as lawsuits, require it to retain the data longer. Still, users have very little control over what happens after they hit send. Unlike apps that use end-to-end encryption, ChatGPT chats are not fully private: OpenAI staff may review your conversations for moderation or model training.
Altman shared that OpenAI is pushing for new laws to give users more privacy protection, and policymakers are finally paying attention. For now, though, he warns against sharing anything with ChatGPT that you wouldn’t feel comfortable seeing in public.
In short, while AI is making mental health resources more accessible, there’s a big gap in privacy protection. Until laws catch up, it’s safest to save sensitive conversations for licensed professionals.