
On the heels of OpenAI’s ChatGPT Health reveal, Anthropic announced on Sunday that it’s introducing Claude for Healthcare, a set of tools for providers, payers, and patients.
Like ChatGPT Health, Claude for Healthcare will let users sync health data from their phones, smartwatches, and other platforms (both OpenAI and Anthropic have said their models won't train on this data). But Anthropic's product promises more sophistication than ChatGPT Health, which appears geared toward a patient-facing chat experience as it rolls out gradually.
Though some industry professionals are concerned about hallucination-prone LLMs dispensing medical advice to patients, Anthropic's "agent skills" seem promising.
Anthropic has added what it calls "connectors," which give Claude access to platforms and databases that can speed up research and report generation for payers and providers, including the Centers for Medicare and Medicaid Services (CMS) Coverage Database; the International Classification of Diseases, 10th Revision (ICD-10); the National Provider Identifier Standard; and PubMed.
Anthropic explained in a blog post that Claude for Healthcare could use its connectors to speed up prior authorization review, the process in which a doctor must submit additional information to an insurance provider to determine whether it will cover a medication or treatment.
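Anthropic hasn't published the plumbing behind these healthcare connectors, but its developer API already supports attaching external data sources through the Model Context Protocol. A minimal sketch of what a coverage lookup might look like with the Anthropic Python SDK follows; the server URL and name are hypothetical placeholders, not real Anthropic endpoints.

# Minimal sketch: asking Claude to check coverage via an MCP "connector."
# The MCP server URL and name below are hypothetical; Anthropic has not
# published endpoints for its healthcare connectors.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    mcp_servers=[{
        "type": "url",
        "url": "https://example.com/cms-coverage/mcp",  # hypothetical endpoint
        "name": "cms-coverage",
    }],
    messages=[{
        "role": "user",
        "content": "Does Medicare cover continuous glucose monitors for type 2 diabetes?",
    }],
    betas=["mcp-client-2025-04-04"],  # MCP connector beta header
)

print(response.content)

In a setup like this, the model decides when to call the connector's tools (say, a coverage-database search) and folds the results into its answer, which is the pattern Anthropic describes for prior authorization review.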
“Clinicians often report spending more time on documentation and paperwork than actually seeing patients,” Anthropic CPO Mike Krieger said in a presentation about the product.
For doctors, submitting prior authorization documents is more of an administrative task than something that requires their specialized training and expertise. It makes more sense to automate than actually dispensing medical advice … though Claude will do that as well.
People are already relying on LLMs for medical advice. OpenAI said that 230 million people talk about their health with ChatGPT each week, and there’s no doubt that Anthropic is observing that use case as well.
Of course, both Anthropic and OpenAI warn consumers that they should see healthcare professionals for more reliable, tailored guidance.
