Healthcare’s comfort level with artificial intelligence and machine learning models – and skill at deploying them across myriad clinical, financial and operational use cases – continued to increase in 2022.
More and more evidence shows that training AI algorithms on a variety of datasets can improve decision support, boost population health management, streamline administrative tasks, enable cost efficiencies and even improve outcomes.
But there’s still a lot of work to be done to ensure accurate, reliable, understandable and evidence-based results that safeguard patient safety and account for health equity.
There’s no doubt that AI’s application in healthcare has progressed, moving beyond “real” in 2019 to significant investment by providers and payers last year. This year, we’ve reported on deeper industry discussions focused on trust and best practices. We’ve featured industry perspectives on the value of deep learning and neural networks and how to clear data hurdles, along with announcements of successful studies and, of course, new healthcare AI technology partnerships. Here are Healthcare IT News’ most-read AI stories of 2022.
How AI bias happens – and how to eliminate it. Though posted about 30 days before the close of 2021, readers flocked to read the advice of Stanford cardiologist Dr. Sanjiv M. Narayan, co-director of the Stanford Arrhythmia Center, director of its Atrial Fibrillation Program and professor of medicine at Stanford University School of Medicine. Narayan discussed multiple approaches to eliminate bias in AI, including training multiple versions of algorithms, adding multiple datasets to AI and updating a machine’s training datasets over time. He cautioned that algorithmic hygiene strategies are not foolproof, and that bias is more likely to compound when integrating complex systems.
Developing trust in healthcare AI, step by step. While usage of AI in healthcare has increased, providers have been concerned about how much they should trust machine learning in clinical settings. A Chilmark Research report by analyst Dr. Jody Ranck indicated that, based on a review of hundreds of first-year COVID-19 pandemic algorithms, numerous instances of AI could not be validated. Ranck proposed strategies to increase evidence-based AI development.
Sentient AI? Convincing you it’s human is just part of LaMDA’s job. In this guest post, published after a mainstream media feeding frenzy about an ostensibly “sentient” machine learning application, Dr. Chirag Shah, associate professor at the Information School at the University of Washington, explains how Google’s LaMDA chatbot, which easily passed the Turing Test, does not prove the presence of self-aware consciousness. LaMDA proves only that it can create the illusion of possessing self-awareness – which is exactly what it was designed to do.
Duke, Mayo Clinic, others launch innovative AI collaboration. Artificial intelligence researchers and technology leaders from Duke, Mayo Clinic, University of California Berkeley and others unveiled a new Health AI Partnership at a virtual HIMSS learning event just before the close of 2021. By developing an online curriculum to help educate IT leaders and working with stakeholders, the collaborators are aiming to develop a standardized, evidence-driven process for AI deployments in healthcare.
The intersection of remote patient monitoring and AI. Robin Farmanfarmaian, author of “How AI Can Democratize Healthcare: The Rise in Digital Care” and four other books, discussed how AI is impacting remote patient monitoring (RPM) today and how it can democratize healthcare. “RPM has the ability to collect clinical-grade data when people are in all stages of health and at all ages,” she said. “When collected continuously in machine-readable databases, once RPM is more fully adopted, those databases have the ability to dwarf EHR data from a hospital or health system.”
Mayo launches AI startup program, with assists from Epic and Google. In March, the Mayo Clinic launched a 20-week startup program to give early-stage health tech AI companies a boost. The clinic’s technology, medical and business experts, along with thought leaders from Google and Epic, were to provide the cohort with expertise to help the startups delineate AI model requirements.
AI study finds 50% of patient notes duplicated. Researchers at the University of Pennsylvania Perelman School of Medicine in Philadelphia used natural language processing to measure the rate of note duplication, as well as how that rate changed year over year, across the records of 1.96 million unique patients from 2015 to 2020. “Duplicate text casts doubt on the veracity of all information in the medical record, making it difficult to find and verify information in day-to-day clinical work,” according to their JAMA report published in September.
AWS, GE leaders talk hurdles to data sharing, AI implementation. In a fireside chat at HIMSS22, Amazon Web Services’ Dr. Taha Kass-Hout and GE Healthcare’s Vignesh Shetty discussed the challenges of AI and the opportunities for making better-connected decisions.
How AI and machine learning can predict illness and boost health equity. In a recent Q&A, Brett Furst, president of HHS Tech Group, discussed how leveraging the COVID-19 Research Database – one of the world’s most comprehensive cross-linked datasets – can establish cause-effect relationships between multiple variables. When machine learning determines how multiple variables interact, it can reliably predict health outcomes.
CommonSpirit Health gains huge efficiencies with AI-infused OR scheduling tool. This case study, featuring Brian Dawson, system vice president of perioperative services at CommonSpirit, showed how the health system implemented an AI utilization tool that would improve operating room efficiencies across its 350 hospitals. “Healthcare providers across the globe have had to do more with less, and it has led to increased burnout, staff shortages, patient dissatisfaction and scarce resources,” said Dawson.
Andrea Fox is senior editor of Healthcare IT News.
Healthcare IT News is a HIMSS publication.