❓WHAT HAPPENED: OpenAI has introduced a new feature called ChatGPT Health, which uses medical records to generate personalized responses, despite warnings from experts about the risks of relying on artificial intelligence (AI) for health advice.
👤WHO WAS INVOLVED: OpenAI, medical professionals, legal experts, and organizations including the Center for Democracy & Technology and the Electronic Privacy Information Center.
📍WHEN & WHERE: The feature was recently launched by OpenAI.
💬KEY QUOTE: “New AI health tools offer the promise of empowering patients and promoting better health outcomes, but health data is some of the most sensitive information people can share, and it must be protected.” – Andrew Crawford, Center for Democracy & Technology.
🎯IMPACT: The launch of ChatGPT Health raises concerns over data privacy, potential misuse of health advice, and the lack of regulatory oversight for AI-driven health tools.
OpenAI has launched a new feature called ChatGPT Health, which aims to use medical records to provide more personalized responses for users. The company claims the tool is “designed in close collaboration with physicians” and built with “strong privacy, security, and data controls.” However, it also emphasizes that the feature is “not intended for diagnosis or treatment,” raising questions about how users are meant to act on its personalized guidance.
The move has sparked criticism from experts who warn that artificial intelligence (AI) health advice can be inaccurate and potentially dangerous. A recent investigation revealed that Google’s AI Overviews frequently provided incorrect health information, posing serious risks to users who act on it. People are already using the standard ChatGPT AI assistant to draft legal and medical claims, often based on nonexistent case law or hallucinated data. Last December, The National Pulse reported that a U.S. Department of Justice (DOJ) indictment alleges the OpenAI-developed chatbot encouraged a man, accused of harassing over a dozen women across five states, to persist in his stalking behavior.
Privacy concerns are at the forefront of the debate, with critics pointing out that consumer AI products fall outside federal health privacy rules such as HIPAA. Andrew Crawford of the Center for Democracy & Technology recently noted, “Health data is some of the most sensitive information people can share, and it must be protected.” He warned that inadequate data protections could put users at risk, particularly as companies like OpenAI explore advertising as a business model.
Concerns also extend to law enforcement’s access to sensitive data. “How does OpenAI handle [law enforcement] requests?” Crawford asked, adding, “Do they just turn over the information? Is the user in any way informed?” Sara Geoghegan of the Electronic Privacy Information Center echoed these concerns, noting that AI companies can change their terms of service at any time without regulation.
Despite these warnings, OpenAI says it is pushing forward with ChatGPT Health, which the company contends will help users “take a more active role in understanding and managing their health and wellness.”
