A Person Is Having a Conversation with an AI

OpenAI CEO Sam Altman has publicly advocated for using advanced AI models like GPT-5 in healthcare settings, envisioning a future where AI provides accessible mental health support to millions who cannot afford or access traditional therapy.[1] The vision is compelling: democratized mental healthcare, available 24/7, at minimal cost. However, this technological optimism collides with a legal reality that threatens to turn our most private therapeutic moments into discoverable evidence.

The collision between innovation and privacy became starkly visible in the ongoing litigation between The New York Times and OpenAI. A court ordered OpenAI to preserve all output log data – essentially, records of user interactions with ChatGPT – for litigation purposes.[2] This preservation order directly contradicts OpenAI’s public promise to delete user data after 30 days. The implications extend far beyond one copyright lawsuit: when courts can compel AI companies to preserve user interaction data, every conversation with ChatGPT becomes potential evidence.

The Confidentiality Crisis

To his credit, Sam Altman himself has warned users that there is no legal confidentiality when using ChatGPT as a therapist.[3] But this warning, buried in public statements and terms of service that few people read carefully, offers insufficient protection for vulnerable users seeking mental health support.

Traditional therapy with a human mental health specialist operates within a robust framework of legal protections. The therapist-patient privilege, recognized across U.S. jurisdictions, protects therapeutic communications from disclosure in legal proceedings. This privilege exists because society recognizes that effective mental health treatment requires absolute candor, and candor requires confidentiality.

ChatGPT offers no such protection. When you talk to an AI, you are not engaging in a privileged communication. You are creating a corporate data record that can be subpoenaed in civil litigation, requested through government surveillance authorities, exposed in data breaches, accessed by employees of the AI company with sufficient clearance, or turned over pursuant to court orders in cases having nothing to do with you.

The Third-Party Doctrine Problem

This vulnerability stems from a fundamental principle in Fourth Amendment law: the third-party doctrine. Under this doctrine, established in Smith v. Maryland, there is no reasonable expectation of privacy in information voluntarily shared with third parties.[4] When you share information with your bank, your phone company, or your email provider, the law presumes you’ve assumed the risk that this information might be disclosed to the government.

Applied to AI healthcare, the doctrine creates an impossible bind. Every conversation with ChatGPT is information “voluntarily” shared with OpenAI, a third party. Under traditional third-party doctrine analysis, users have no Fourth Amendment protection for this data. The government could potentially access these therapeutic conversations without a warrant, merely by requesting them from OpenAI or compelling production through legal processes.

But as Justice Sotomayor observed in United States v. Jones, the third-party doctrine is “ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.”[5] The Supreme Court’s decision in Carpenter v. United States reinforced this concern, declining to extend the third-party doctrine to cell-site location information because carrying a cell phone is “indispensable to participation in modern society” and thus not truly voluntary.[6]

The same logic should apply to AI health tools. If ChatGPT and similar AI systems become essential infrastructure for accessing mental healthcare, especially for underserved populations, using them is no more “voluntary” than carrying a cell phone. The assumption-of-risk rationale crumbles when the alternative is no mental health support at all. However, it is unclear whether the Supreme Court would accept this argument. If not, current law offers little protection: the third-party doctrine, combined with discovery rules governing civil litigation, means your AI therapy sessions exist in a legal twilight zone – not protected by therapist-patient privilege, not protected by the Fourth Amendment, and subject to broad discovery requests in any litigation where your mental state might be relevant.

Discovery in Practice: When Preservation Orders Become Routine

The NYT litigation is not an isolated incident. Discovery in civil litigation is broad, and courts routinely order the production of relevant communications. If you’ve discussed your mental health with ChatGPT and later become involved in litigation where your mental state is relevant – disability claims, custody disputes, personal injury cases, employment discrimination – those conversations could be fair game.

What makes this particularly troubling is the precedent being set. If preservation orders become routine practice in litigation involving AI companies, the promise of data deletion becomes meaningless. Every user interaction becomes a permanent record, sitting in corporate servers, waiting for the next subpoena. This risk creates a chilling effect that defeats the purpose of accessible mental health support.

The Closed-System Solution

We need not abandon AI’s promise in healthcare, but we must fundamentally rethink how such systems operate. The solution lies in closed-system architectures that keep sensitive health data isolated from the legal vulnerabilities of cloud-based, centrally stored systems.

A closed-system approach would operate on these principles:

Local Data Storage: Healthcare AI systems should process and store data locally – on the user’s device or within HIPAA-compliant, access-restricted health system servers. Data should never transit to general-purpose corporate servers accessible to litigation discovery. (A brief technical sketch of this local-only design follows these principles.)

Institutional Deployment: Rather than individual consumer use of general-purpose chatbots, healthcare AI should be deployed through medical institutions bound by existing confidentiality frameworks. A hospital or therapy practice could use AI tools while remaining subject to HIPAA, therapist-patient privilege, and professional ethical obligations.

Technical Isolation: Healthcare AI systems should be technically separated from other AI services. The model serving your therapy needs should not share infrastructure, logs, or databases with models used for general consumer purposes.

Legal Clarity: Healthcare AI must operate under clear legal frameworks that extend existing confidentiality protections to AI-mediated care. This requires legislative action to establish that certain AI health communications carry the same privileges as traditional therapeutic relationships.
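To make the Local Data Storage and Technical Isolation principles concrete, the following is a minimal sketch, in Python, of what local-only processing could look like. It is an illustration under stated assumptions, not any vendor’s actual architecture: the local_model_reply function is a hypothetical stand-in for an on-device model, the file names and locations are arbitrary, and encryption at rest is shown with the third-party cryptography library.

```python
# Minimal sketch: an AI therapy session whose transcript is encrypted and stored
# only on the user's own device. Assumptions: `local_model_reply` is a hypothetical
# placeholder for an on-device model; file names and locations are illustrative.
from pathlib import Path

from cryptography.fernet import Fernet  # third-party: pip install cryptography

SESSION_DIR = Path.home() / ".local_therapy_sessions"  # stays on the device
SESSION_DIR.mkdir(exist_ok=True)


def load_or_create_key() -> bytes:
    """Keep the encryption key on-device; a real system would use the OS keystore."""
    key_file = SESSION_DIR / "key.bin"
    if key_file.exists():
        return key_file.read_bytes()
    key = Fernet.generate_key()
    key_file.write_bytes(key)
    return key


def local_model_reply(prompt: str) -> str:
    """Hypothetical stand-in for a locally hosted model; no network call is made."""
    return "placeholder response generated entirely on this device"


def run_turn(user_message: str, transcript_file: Path, fernet: Fernet) -> str:
    """Generate a reply and append the encrypted exchange to a local transcript."""
    reply = local_model_reply(user_message)
    record = f"user: {user_message}\nassistant: {reply}\n"
    with transcript_file.open("ab") as handle:
        handle.write(fernet.encrypt(record.encode("utf-8")) + b"\n")
    return reply


if __name__ == "__main__":
    fernet = Fernet(load_or_create_key())
    transcript = SESSION_DIR / "session-001.enc"
    print(run_turn("I have been feeling anxious this week.", transcript, fernet))
```

The design choice the sketch is meant to highlight is simple: if the transcript and the encryption key never leave the device, there is no general-purpose corporate server holding a record for a subpoena or preservation order to reach. An institutional deployment would add access controls, audit logging, and key management consistent with HIPAA.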

This approach is not unprecedented. The healthcare industry has successfully integrated technology while maintaining privacy through frameworks like HIPAA. Telemedicine platforms, electronic health records, and hospital communication systems all operate under strict confidentiality rules. Healthcare AI can and should follow the same path.

Balancing Innovation and Privacy

The objection from AI companies is predictable: closed systems limit innovation, increase costs, and reduce the accessibility that makes AI healthcare attractive in the first place. There is validity in these concerns. Centralized, cloud-based systems enable rapid improvement through data aggregation and allow companies to offer free or low-cost services.

But this argument privileges corporate convenience over patient welfare. We do not allow pharmaceutical companies to skip clinical trials because they’re expensive. We do not allow hospitals to ignore infection control protocols because they’re burdensome. We should not allow AI companies to offer healthcare services without basic confidentiality protections simply because those protections are costly.

Moreover, the costs of our current trajectory may be higher than we realize. If people cannot trust AI healthcare tools with sensitive information, they will either avoid these tools entirely or self-censor in ways that undermine therapeutic effectiveness. There is also a fundamental fairness issue: the people most likely to need accessible AI mental health services – those who cannot afford traditional therapy – are also the people most vulnerable to adverse legal consequences from having their mental health history exposed in litigation.

A Path Forward

The path forward requires action from multiple stakeholders:

Regulators must extend healthcare privacy frameworks to cover AI-mediated health services. This means amending HIPAA to explicitly address AI tools, clarifying when AI communications receive privilege protection, and establishing technical standards for healthcare AI systems.

Legislators should consider creating a new category of protected communication for AI health services that meet certain criteria: local data processing, institutional oversight, and technical isolation from discovery-vulnerable systems.

AI Companies must stop marketing general-purpose chatbots for healthcare use while disclaiming responsibility for confidentiality. If companies want to serve the healthcare market, they must build systems that comply with healthcare privacy standards – even if this means sacrificing some of the data aggregation advantages that make their current business models attractive.

Healthcare Institutions should take the lead in deploying AI tools within their existing confidentiality frameworks rather than directing patients to use consumer AI products.

Courts must recognize the unique privacy implications of AI health data when considering discovery requests and preservation orders. Blanket orders to preserve all user interaction data, without carve-outs for sensitive health information, fail to balance litigation needs against privacy rights.

Conclusion

The NYT litigation has given us an early warning: the current model is unsustainable. OpenAI cannot promise data deletion while courts order data preservation. Companies cannot encourage healthcare use while disclaiming confidentiality protections. Users cannot make informed choices when the legal risks of AI therapy remain largely invisible.

Closed-system architectures offer a solution that preserves both innovation and privacy. By processing sensitive health data locally, deploying AI through regulated healthcare institutions, and establishing clear legal protections, we can harness AI’s benefits without sacrificing the confidentiality that healthcare relationships require.

Before we fully embrace AI therapists, we must answer a fundamental question: What good is accessible mental health support if seeking that support creates a permanent, discoverable record that can be used against you for the rest of your life? Until we have a satisfactory answer, we should pause the push toward AI healthcare and build the privacy infrastructure that makes such innovation both possible and ethical.

[1] Heather Landi, OpenAI CEO Sam Altman Says GPT-5 Should Be Used in Health, MobiHealthNews (last updated Jan. 24, 2025), https://www.mobihealthnews.com/news/openai-ceo-sam-altman-says-gpt-5-should-be-used-health.

[2] Bruce D. Celebrezze et al., Court Orders OpenAI to Retain All Output Log Data–Considerations for ChatGPT Users, Loeb & Loeb Quick Takes (Sept. 4, 2024), https://quicktakes.loeb.com/post/102kd8y/court-orders-openai-to-retain-all-output-log-data-considerations-for-chatgpt-use.

[3] Kyle Wiggers, Sam Altman Warns There’s No Legal Confidentiality When Using ChatGPT as a Therapist, TechCrunch (July 25, 2025), https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/.

[4] Smith v. Maryland, 442 U.S. 735, 743–44 (1979).

[5] United States v. Jones, 565 U.S. 400, 417 (2012) (Sotomayor, J., concurring).

[6] Carpenter v. United States, 585 U.S. 296, 315 (2018).