• AI for Surrogate Decision Making?!? Dave Wendler, Jenny Blumenthal-Barby, Teva Brender

  • 2024/12/12
  • Duration: 48 min
  • Podcast


  • Summary

  • Surrogate decision making has some issues. Surrogates often either don’t know what patients would want, or think they know but are wrong, or make choices that align with their own preferences rather than the patients’. After making decisions, many surrogates experience regret, PTSD, and depressive symptoms. Can we do better?

    Or, to phrase the question for 2024, “Can AI do better?” Follow that path and you arrive at a potentially terrifying scenario: using AI for surrogate decision making. What?!? When Teva Brender and Brian Block first approached me about writing a thought piece about this idea, my initial response was, “Hell no.” You may be thinking the same. But…stay with us here…might AI help to address some of the major issues present in surrogate decision making? Or does it raise more issues than it solves?

    Today we talk with Teva, Dave Wendler, and Jenny Blumenthal-Barby about:

    • Current clinical and ethical issues with surrogate decision making

    • The Patient Preferences Predictor (developed by Dave Wendler) or Personalized Patient Preferences Predictor (updated idea by Brian Earp) and commentary by Jenny

    • Using AI to comb through prior recorded clinical conversations with patients to play back pertinent discussions; to predict functional outcomes; and to predict patient preferences based on prior spending patterns, emails, and social media posts (Teva’s thought piece)

    • A whole host of ethical issues raised by these ideas, including the black-box nature of the algorithms, the motivations of private AI algorithms run by for-profit healthcare systems, turning an “is” into an “ought”, defaults and nudges, and privacy.

    I’ll end this intro with a quote from Deb Grady in an editor’s commentary to our thought piece in JAMA Internal Medicine about this topic: “Voice technology that creates a searchable database of patients’ every encounter with a health care professional? Using data from wearable devices, internet searches, and purchasing history? Algorithms using millions of direct observations of a person’s behavior to provide an authentic portrait of the way a person lived? Yikes! The authors discuss the practical, ethical, and accuracy issues related to this scenario. We published this Viewpoint because it is very interesting, somewhat scary, and probably inevitable.”

    -@alexsmithmd.bsky.social


