• Misinformation Machines with Gordon Pennycook – Part 2

  • 2024/11/06
  • Running time: 1 hr 3 min
  • Podcast


  • Summary

  • Debunkbot and Other Tools Against Misinformation

In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation. Building on their previous conversation about misinformation's impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to combat misinformation effectively. Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques that empower individuals to better evaluate the information they encounter. The conversation highlights recent advances in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression. This episode offers valuable insights for anyone interested in countering misinformation through behavioral science and AI.

LINKS:

Gordon Pennycook:
  • Google Scholar Profile
  • Twitter
  • Personal Website
  • Cornell University Faculty Page

Further Reading on Misinformation:
  • Debunkbot - The AI That Reduces Belief in Conspiracy Theories
  • Interventions Toolbox - Strategies to Combat Misinformation

TIMESTAMPS:
  • 01:27 Intro and Early Voting
  • 06:45 Welcome back, Gordon!
  • 07:52 Strategies to Combat Misinformation
  • 11:10 Nudges and Behavioral Interventions
  • 14:21 Comparing Intervention Strategies
  • 19:08 Psychological Inoculation and Prebunking
  • 32:21 Echo Chambers and Online Misinformation
  • 34:13 Individual vs. Policy Interventions
  • 36:21 If You Owned a Social Media Company
  • 37:49 Algorithm Changes and Platform Quality
  • 38:42 Community Notes and Fact-Checking
  • 39:30 Reddit's Moderation System
  • 42:07 Generative AI and Fact-Checking
  • 43:16 AI Debunking Conspiracy Theories
  • 45:26 Effectiveness of AI in Changing Beliefs
  • 51:32 Potential Misuse of AI
  • 55:13 Final Thoughts and Reflections

--

Interested in collaborating with Nuance? If you'd like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

Get in touch via podcast@habitweekly.com

The song used is Murgatroyd by David Pizarro

