AI and Health Care: Promise and Pitfalls

John F Kalafut, PhD, Chief Strategy Officer, Asher Informatics PBC

In response to the Senate Finance Committee Hearing on AI and Healthcare and the discussion around the AI Accountability Act. February 8, 2024

Key thought leaders in healthcare AI testify
US Senate Committee on Finance testimony

Asher Informatics PBC is heartened and encouraged by the hearing on “Artificial Intelligence and Healthcare: Promise & Pitfalls” convened by Chairman Ron Wyden (D-Oregon) and the Senate Finance Committee. Their staff assembled a stellar ensemble of witnesses who, we think, brought credible and practical information to their testimony. The witnesses represent the varied and important constituencies required to develop and maintain credible, equitable policies that responsibly catalyze the deployment and use of data-driven, algorithm-based interventions in clinical care. While computer-aided decision tools and “AI” are not new to some areas of medicine (e.g., radiology, anesthesia, neurology, audiology), the recent convergence of computational power, curated datasets, and breakthroughs in algorithm design allows us to contemplate the use of algorithm-based health services (per Peter Shen of Siemens Healthineers) across the spectrum of healthcare delivery, from the back office to the bedside. This is a nuanced and complex domain, however. Creating meaningful health outcomes with the new generation of AI tools is not as simple as allowing ChatGPT to be used across a hospital. “Generative AI,” and its most recognizable interaction model, the chatbot, is neither the only type of AI nor usually the most appropriate one for high-risk, mission-critical scenarios such as clinical care decisions and treatment planning. Understanding the landscape of algorithm-based offerings for healthcare requires expertise and insight that already overburdened health systems typically don’t have.


The “FOMO” caused by over-exuberant promotion of generative AI by technology evangelists can lead health systems to make ill-informed decisions about AI adoption, especially if those decisions are shaped by large technology firms incentivized to sell large amounts of storage and compute. As Dr. Mark Sendak points out in his testimony, if large, well-resourced health systems are the ones most likely to have the requisite expertise and infrastructure to deploy and use clinical AI effectively, we again risk widening health inequities. CMS (and private payers) can help level the playing field, though, as Peter Shen of Siemens Healthineers suggested, through more uniform payment “boosts” or add-on payments that accelerate the adoption and use of AI applications. This is particularly relevant for the broad class of AI applications on the market that have received pre-market clearance from the FDA (granted, Siemens Healthineers sells many of these types of products).


Doing so would not be without precedent, either. In the late 1990s and early 2000s, CMS established reimbursement mechanisms that enabled providers to receive small add-on payments when computer-aided detection (CAD) software was used in the interpretation and reporting of screening mammography. Later in the decade, procedural codes and reimbursement formulae were adjusted so that breast imagers could receive small reimbursement boosts for using computer vision/processing CAD tools to interpret breast MRI. Professional reimbursement boosts were also developed for radiologists who used advanced visualization software when interpreting certain studies. Most of these reimbursements have since ended, been dramatically reduced, or are authorized by private payers only in very specific encounters. The first generation of breast and chest CAD did ultimately disappoint when larger effectiveness studies were conducted and published.


There are multiple reasons for the sub-optimal results and the eventual dissatisfaction of breast imagers with CAD: technical, usability/human-factors, data-policy, and interoperability flaws. But reimbursement drove the diffusion of breast CAD into the clinic, which in turn allowed for broader assessment and measurement of the technology’s utility. The first CAD product for mammography was approved by the FDA in 1998, and CMS issued breast CAD reimbursement codes (with RVUs) in 2002. By 2008, 74% of all breast mammograms read in the US were assisted by CAD software, increasing to 92% by 2016 [Gao et al reference AJR new frontiers in breast AI/CAD].


One could mistakenly view the experience of first-generation “AI” in medicine as a lesson in wastefulness driven by non-usable technology, and conclude that any computer-aided intervention or algorithmically assisted technology should be rejected outright or saddled with evidence burdens that stifle innovation, since promising methods would then require clinical evidence and testing that far outstrip the resources of small and medium-sized enterprises.


The landscape today is also quite different from that of the first generation of AI tools in diagnostic medicine: the computational methods are far superior; it is easier (though still hard) to aggregate large medical datasets; there are more modern paradigms for supporting computational applications in health systems; and clinical data are digital and somewhat exchangeable across venues for research and for measuring clinical utility. We think the example of first-generation AI is an example of the system working! It is not as if the CAD technologies had no positive effects or lacked the evidence necessary to allow their diffusion. Most healthcare technologies need wider study and assessment to arrive at a determination of effectiveness.


To make meaningful improvements to healthcare via technology, we can’t expect every developer to have the budget or resources of large pharma. Similarly, not every health innovation requires a randomized controlled trial (RCT) to demonstrate utility. There is an old joke in the biostatistics community along the lines of: “the RCT for the new parachute system was stopped prematurely after the first control-arm subject was tested.” In certain quarters of medicine there tends to be a zealot-like assertion that we shouldn’t adopt any new method unless it has been run through an RCT. Yes, we need to strive toward strong and convincing evidence for all medical interventions, but sometimes common sense, realistic constraints, or “good enough is good enough” need to be considered. Did we really need RCTs to demonstrate that CT scanning would cause a seismic shift in patient care? Should there have been thousands of sham or real surgeries to open up patients and compare to treatment costs for patients with just images?
