A new artificial intelligence tool being developed by the U.S. Food and Drug Administration to help speed up the review of medical devices is reportedly struggling with basic tasks, raising concerns among staff and experts.
Known internally as CDRH-GPT, the tool is still in beta testing and is intended to support reviewers at the FDA’s Center for Devices and Radiological Health — the division responsible for approving critical devices like pacemakers, insulin pumps, and imaging equipment.
According to two people familiar with the system, the AI currently cannot connect to the FDA’s internal network, has trouble uploading documents, and doesn’t allow users to submit questions properly. It also lacks internet access, meaning it cannot view recent research or subscription-based material.
The push to incorporate AI comes after this year’s widespread layoffs at the Department of Health and Human Services (HHS), which cut much of the support staff that previously helped device reviewers meet deadlines. While many frontline reviewers kept their jobs, they are now being asked to do more with less — a gap the agency hopes AI can fill.
Reviewers typically comb through vast amounts of data from animal studies and clinical trials, a process that can take months or even more than a year. In theory, AI could help reduce that time significantly. But staff who have tested the system say it’s not ready yet.
“There’s a real risk in pushing AI before it’s fully capable,” said Arthur Caplan, a medical ethics expert at NYU Langone Medical Center. “These decisions affect people’s lives. The tools still need human oversight — AI just isn’t smart enough yet to engage or challenge an applicant properly.”
FDA Commissioner Dr. Marty Makary, who took office on April 1, has prioritized AI integration across the agency. Last month, he set a June 30 deadline for an initial rollout. On Monday, he claimed the agency was ahead of schedule.
Still, internal feedback suggests otherwise. The two sources familiar with CDRH-GPT say many FDA staffers are concerned the agency is moving too fast. They worry the tool will be pushed into full use before it can reliably support regulatory decisions.
“I think they’re rushing AI development out of desperation,” one source said.
The FDA declined to comment directly and referred all inquiries to HHS, which did not respond to requests for comment.
Alongside CDRH-GPT, another tool called Elsa has already been launched across the FDA. It’s designed for simpler tasks, such as summarizing adverse event reports. Makary praised the early results. “One reviewer said the AI did in six minutes what would normally take two to three days,” he said.
But insiders say Elsa, too, has problems. When staff tested it by asking questions about FDA-approved products, the tool sometimes returned incorrect or incomplete answers. And while Elsa is functional, the sources say it is far from ready to handle the FDA's more complex work.
It’s still unclear whether CDRH-GPT will eventually be merged with Elsa or continue as a separate system.
Outside experts are also raising ethical concerns. Richard Painter, a law professor and former ethics lawyer in the George W. Bush administration, warned that financial conflicts of interest could taint the process.
“We need clear rules to prevent reviewers using AI tools from having financial ties to the companies supplying that technology,” Painter said. “Otherwise, you risk damaging the agency’s credibility.”
Some FDA staff fear the new tools signal more than just support — they see a future where AI might replace them entirely.
“The agency is already stretched thin,” one staffer said. “We’re under a hiring freeze, people are leaving, and there’s no capacity to replace them. AI might be the future, but we’re not there yet.”