OpenAI’s ChatGPT Health Push Raises Questions About Data Security
OpenAI said Wednesday that it is rolling out a new feature that allows users to connect their medical records and wellness data to ChatGPT, a move that could expand the chatbot's role in personal health.
The announcement also triggered immediate concerns from some privacy experts and advocacy groups about how highly sensitive personal data could be handled once it is shared with an AI system.
Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, warned that users who share their electronic medical records with the new ChatGPT Health feature could be taking those records outside of typical healthcare privacy protections. “Individuals sharing their electronic medical records with ChatGPT Health would remove the HIPAA protection from those records, which is dangerous,” Geoghegan said.
The development underscores a broader tension in consumer AI: the same tools that make personal information easier to access and analyze can also create new risks when that information is uploaded, stored, or used in ways users do not fully understand.
- What happened: OpenAI announced a feature enabling users to connect medical records and wellness data in ChatGPT.
- Why it matters: Privacy advocates say sharing electronic medical records with the feature could remove HIPAA protections and increase exposure of sensitive health information.
- Broader context: As AI tools move deeper into personal and health-related services, questions about data security and governance are becoming more central.
