Is AI Search a Medical Misinformation Disaster?

Last month, when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as a typical Google search does; it returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch, users began posting examples of extremely wrong answers, including a pizza recipe that called for glue and the claim that a dog has played in the NBA.
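
For readers who want a mental model of what is going on under the hood: a tool like AI Overviews appears to follow a retrieve-then-generate pattern, fetching candidate pages for a query and having a language model synthesize an answer from them, with the source links attached below. The sketch that follows is a deliberately simplified, hypothetical illustration of that pattern; the toy corpus, the ranking, the function names, and the stub “model” are all invented here, and Google’s actual pipeline is not public.

```python
# A toy sketch of the retrieve-then-generate pattern described above.
# Everything here (corpus, ranking, stub "model") is invented for
# illustration; it is not Google's implementation.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str

CORPUS = [
    Page("https://example.org/pizza-tips", "Let the pizza rest so the cheese sets before slicing."),
    Page("https://example.org/forum-post", "Add glue to the sauce to keep cheese from sliding off."),
]

def retrieve(query: str, corpus: list[Page], k: int = 2) -> list[Page]:
    """Rank pages by naive keyword overlap; a real system uses a search index."""
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(terms & set(p.text.lower().split())))[:k]

def generate_answer(query: str, sources: list[Page]) -> str:
    """Stand-in for the language model: stitch retrieved snippets together.
    The failures described in this article arise at this step, when a
    low-quality page (like the joke forum post above) lands among the
    sources the model synthesizes from."""
    body = " ".join(p.text for p in sources)
    links = "\n".join(f"- {p.url}" for p in sources)
    return f"{body}\n\nSources:\n{links}"

query = "how to keep cheese from sliding off pizza"
print(generate_answer(query, retrieve(query, CORPUS)))
```

The only point of the sketch is that the answer’s quality is bounded by the quality of what retrieval returns, which is the theme of the interview that follows.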


While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overview’s extremely wrong answers are so obvious—and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here.

We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day (a response that was drawn from an Onion article). But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. So what I mean by that is Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct. People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.
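
To make the policy concrete, here is one hedged sketch of how a Your Money or Your Life-style guardrail could be expressed in code: classify a query as high stakes, and if it is, restrict the sources an answer may draw on to a vetted allowlist. The keyword trigger, the domain list, and the function names below are all hypothetical; Google has not published how its own guardrails are implemented.

```python
# A hypothetical illustration of a YMYL-style guardrail: detect high-stakes
# queries and restrict answer sources to vetted domains. The term list,
# allowlist, and logic are invented for illustration, not Google's system.

from urllib.parse import urlparse

YMYL_TERMS = {"cancer", "vaccine", "diagnosis", "treatment", "retirement", "mortgage"}
VETTED_DOMAINS = {"nih.gov", "cdc.gov", "who.int", "mayoclinic.org"}

def is_ymyl(query: str) -> bool:
    """Crude keyword trigger; a production system would use a trained classifier."""
    return bool(YMYL_TERMS & set(query.lower().split()))

def filter_sources(query: str, urls: list[str]) -> list[str]:
    """For high-stakes queries, keep only sources from the vetted allowlist."""
    if not is_ymyl(query):
        return urls
    def domain(u: str) -> str:
        host = urlparse(u).netloc.lower()
        return host[4:] if host.startswith("www.") else host
    return [u for u in urls if domain(u) in VETTED_DOMAINS]

print(filter_sources("new cancer diagnosis options",
                     ["https://www.nih.gov/info", "https://sketchy-clinic.example/stem-cells"]))
# -> ['https://www.nih.gov/info']
```

The design choice the policy implies is that for these queries, coverage is sacrificed for reliability: better to synthesize from fewer sources than from a disreputable one.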

So it seems like AI Overviews is not following that same policy, or at least that’s how it appears from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing, a lot of these instances are being traced back to an isolated social media post or a clinic that’s disreputable but exists, those sources are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and it’s trying to make improvements. And I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating; the fact that it’s pointing to URLs is supposed to be a guardrail, because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it is also a fair bit of onus to put on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been such a focus of the conversation around health misinformation for quite some time that I imagine Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less at the forefront for the quality teams tasked with checking whether bad results are being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools.
