Senator wants Google to answer for accuracy, ethics of generative AI tool

Sen. Mark Warner, D-Virginia, wrote a letter on Aug. 8 to Sundar Pichai, CEO of Google parent company Alphabet, seeking clarity about Med-PaLM 2, the company's artificial intelligence chatbot, and how it's being trained and deployed in healthcare settings.

WHY IT MATTERS

In the letter, Warner cites news reports highlighting inaccuracies in the technology and asks Pichai to answer a series of questions about Med-PaLM 2 (and other AI tools like it), focusing on its algorithmic transparency, its ability to protect patient privacy and other concerns.

Warner questions whether Google is “prioritizing the race to establish market share over patient well-being,” and whether the company is “skirting health privacy as it trained diagnostic models on sensitive health data without patients’ knowledge or consent.”

The senator asks Pichai for clarity about how the Med-PaLM 2 technology is being rolled out and tested in various healthcare settings – including at the Mayo Clinic, whose Care Network includes Arlington, Virginia-based VHC Health in Warner’s home state – what data sources it’s learning from and “how much information and agency patients have over how AI is involved in their care.”

Among the questions (quoted from the letter) Warner asked the Google CEO:

  • Researchers have found large language models to display a phenomenon described as “sycophancy,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?

  • Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?

  • What documentation did Google provide hospitals, such as Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data-statements, and/or test and evaluation results?

  • Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” What is the frequency with which Google fully or partially re-trains Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?

  • Google has not publicly provided documentation on Med-PaLM 2, including refraining from disclosing the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?

  • Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by Google, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure or more clearly presented?

  • Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?

  • Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.

  • What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context? 

  • How many hospitals is Med-PaLM 2 currently being used at? Please provide a list of all hospitals and health care systems Google has licensed or otherwise shared Med-PaLM 2 with.

  • Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or finetune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?

  • In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 as well as when it particularly should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?

THE LARGER TREND

Warner, who has business experience in the technology industry, has taken a keen interest in healthcare digital transformation initiatives such as telehealth and virtual care, cybersecurity, and AI ethics and safety.

This is not the first time he’s written directly to a Big Tech CEO. This past October, Warner wrote to Meta CEO Mark Zuckerberg seeking clarity on the company’s pixel technology and data tracking practices in healthcare.

He has shared similar concerns about the potential risks of artificial intelligence and has asked the White House to work more closely with the tech sector to help foster safer deployments of AI in healthcare and elsewhere.

This past April, Google began testing Med-PaLM 2 – which can answer medical questions, summarize documents and perform other data-intensive tasks – with healthcare customers such as the Mayo Clinic, with which it has been working closely since 2019.

At the Mayo Clinic, meanwhile, innovative work continues on generative AI across a variety of clinical and operational use cases. In June, Google and Mayo offered an update on some of the automation projects they're pursuing.

Mayo Clinic Platform President Dr. John Halamka spoke with Healthcare IT News Managing Editor Bill Siwicki recently about the promise – and limitations – of generative AI, large language models and other machine learning applications for clinical care delivery.

ON THE RECORD

“While artificial intelligence undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes and an increased risk of diagnostic and care-delivery errors,” said Warner.

“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” he added.

Mike Miliard is executive editor of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a HIMSS publication.
