
Physicians Struggle to Implement AI in Patient Care, Citing Insufficient Regulation

In the field of medicine, cautionary tales about the unintended consequences of artificial intelligence (AI) are well-known. For example, there have been instances where AI programs designed to predict sepsis or improve follow-up care for patients have triggered false alarms and exacerbated health disparities. As a result, doctors have been hesitant to fully integrate AI into their workflow, instead using it as a supporting tool or back-office organizer. However, the field of AI in medicine has gained momentum and investment.

The role of the Food and Drug Administration (FDA), the key authority for approving new medical products, has become a central question in the debate over AI in medicine. The agency sees AI as a means to discover new drugs, identify unexpected side effects, and assist overwhelmed staff with repetitive tasks. Nevertheless, the FDA has faced criticism regarding how thoroughly it evaluates and describes the AI programs it approves for detecting tumors, blood clots, and collapsed lungs.

Physicians and lawmakers are calling for increased scrutiny of AI in healthcare. However, there is no single agency responsible for governing the entire AI landscape in medicine. Senator Chuck Schumer has already convened tech executives to discuss ways to foster the field and address potential pitfalls. Google, for instance, faced scrutiny from Congress due to concerns surrounding patient privacy and informed consent with its Med-PaLM 2 chatbot for health workers.

The FDA has lagged in its oversight of “large language models” that emulate expert advisers. The agency has only begun discussing how to review technology that continues to learn as it processes diagnostic scans, and it lacks rules like those in Europe that cover a broad range of medical applications. Additionally, its authority extends only to products approved for sale; it has no jurisdiction over AI tools developed internally by large health systems, such as Stanford, Mayo Clinic, and Duke, or by health insurers, which can affect the care and coverage of thousands of patients without government oversight.

Doctors are seeking more information about the AI programs the FDA has cleared for detecting clots, tumors, and lung conditions. They want to know how the programs were built, how many people they were tested on, and their ability to detect conditions that a typical doctor might miss. The lack of publicly available information is causing doctors to be cautious, as they fear that patients may be subjected to unnecessary procedures, increased costs, and potentially harmful medications without significant improvements in care.

Dr. Eric Topol, an expert on AI in medicine, believes the FDA has been too lenient in allowing AI developers to keep their algorithms secret and failing to require rigorous studies to assess their benefits. Large studies are starting to shed light on the risks and benefits of using AI to detect diseases like breast cancer or skin cancer.

Dr. Jeffrey Shuren, chief of the FDA’s medical device division, acknowledges the need for ongoing efforts to ensure that AI programs deliver on their promises even after approval. Currently, AI software programs are not typically tested on patients before approval, unlike drugs and some devices. One approach Dr. Shuren suggests is building labs where developers could access extensive data to build and test AI programs, though that would require changes to federal law, since the current framework for regulating these technologies is nearly fifty years old.

Adapting machine learning for major hospital networks and health systems faces additional challenges. Lack of interoperability between different software systems and debates over who should bear the cost of implementation complicate the adoption of AI. While about 30% of radiologists are already using AI technology, selecting higher-risk tools like those prioritizing brain scans requires careful consideration, especially regarding the age-specific performance of such tools.

Amidst these challenges, Dr. Nina Kottler is leading an effort to vet AI programs at Radiology Partners. She evaluates approved AI programs by questioning developers and testing them against her radiologists’ interpretations. Programs that miss obvious problems or identify subtle ones are rejected. However, one program that scanned images of the head for aneurysms proved impressive, detecting 24% more cases than radiologists had identified.

In real-life scenarios, AI has shown promise. For instance, an AI program in a stroke-triage system detected a brain clot in a patient, prompting immediate intervention. The success of such systems can be life-changing for patients. However, not all instances have been as inspiring. Researchers at the University of Michigan found that a widely used AI tool for predicting sepsis fired off unnecessary alerts, leading to potential overdiagnosis and increased costs.

In conclusion, while AI holds great potential in medicine, there is a need for increased scrutiny, evaluation, and transparency when it comes to AI programs approved for medical use. The FDA’s oversight in this area is still developing, and changes may be required to ensure that AI programs deliver meaningful benefits and meet high standards of care.
