The Behavioral Design Podcast

Author: Samuel Salzer and Aline Holzwarth
  • Summary

  • How can we change behavior in practice? What role does AI have to play in behavioral design? Listen in as hosts Samuel Salzer and Aline Holzwarth speak with leading experts on all things behavioral science, AI, design, and beyond. The Behavioral Design Podcast from Habit Weekly and Nuance Behavior provides a fun and engaging way to learn about applied behavioral science and how to design for behavior change in practice. The latest season explores the fascinating intersection of Behavioral Design and AI. Subscribe and follow! For questions or to get in touch, email podcast@habitweekly.com.
    Samuel Salzer and Aline Holzwarth

Episodes
  • Misinformation Machines with Gordon Pennycook – Part 2
    2024/11/06
    Debunkbot and Other Tools Against Misinformation

    In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation. Building on their previous conversation about misinformation’s impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to combat misinformation effectively.

    Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques, that empower individuals to better evaluate the information they encounter. The conversation highlights recent advancements in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression.

    This episode provides valuable insights for anyone interested in understanding how to counter misinformation through behavioral science and AI.

    LINKS:

    Gordon Pennycook:

    • Google Scholar Profile
    • Twitter
    • Personal Website
    • Cornell University Faculty Page

    Further Reading on Misinformation:

    • Debunkbot - The AI That Reduces Belief in Conspiracy Theories
    • Interventions Toolbox - Strategies to Combat Misinformation

    TIMESTAMPS:

    01:27 Intro and Early Voting
    06:45 Welcome back, Gordon!
    07:52 Strategies to Combat Misinformation
    11:10 Nudges and Behavioral Interventions
    14:21 Comparing Intervention Strategies
    19:08 Psychological Inoculation and Prebunking
    32:21 Echo Chambers and Online Misinformation
    34:13 Individual vs. Policy Interventions
    36:21 If You Owned a Social Media Company
    37:49 Algorithm Changes and Platform Quality
    38:42 Community Notes and Fact-Checking
    39:30 Reddit’s Moderation System
    42:07 Generative AI and Fact-Checking
    43:16 AI Debunking Conspiracy Theories
    45:26 Effectiveness of AI in Changing Beliefs
    51:32 Potential Misuse of AI
    55:13 Final Thoughts and Reflections

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    1 hr 3 min
  • Misinformation Machines with Gordon Pennycook – Part 1
    2024/11/04

    The Role of Misinformation and AI in the US Election with Gordon Pennycook

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel explore the complex world of misinformation in the context of the U.S. elections with special guest Gordon Pennycook, a psychology professor at Cornell University.

    The episode covers the effects of misinformation on democratic participation, and how behavioral science sheds light on reasoning errors that drive belief in falsehoods. Gordon shares insights from his groundbreaking research on misinformation, exploring how falsehoods gain traction and the role AI can play in both spreading and mitigating misinformation.

    The conversation also tackles the evolution of misinformation, including the impact of social media and disinformation campaigns that blur the line between truth and fiction.

    Tune in to hear why certain falsehoods spread faster than truths, the psychological appeal of conspiracy theories, and how humor can amplify the reach of misinformation in surprising ways.


    LINKS:

    Gordon Pennycook:

    • Google Scholar Profile
    • Twitter
    • Personal Website
    • Cornell University Faculty Page

    Further Reading on Misinformation:

    • Brandolini’s Law and the Spread of Falsehoods
    • Role of AI in Misinformation
    • The Psychology of Conspiracy Theories


    TIMESTAMPS:

    00:00 Introduction

    03:14 Behavioral Science and Misinformation

    05:28 Introducing Gordon Pennycook

    10:02 The Evolution of Misinformation

    12:46 AI’s Role in Misinformation

    14:51 Impact of Misinformation on Elections

    21:43 COVID-19 and Vaccine Misinformation

    26:32 Technological Advancements in Misinformation

    33:50 Conspiracy Theories

    35:39 Misinformation and Social Media

    42:35 The Role of Humor in Misinformation

    48:08 Quickfire Round: To AI or Not to AI

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    53 min
  • The Dark Side of AI – Halloween Special
    2024/10/30

    In this spine-chilling Halloween special of the Behavioral Design Podcast, co-hosts Aline Holzwarth and Samuel Salzer take listeners on a journey into the eerie intersection of AI and behavioral science. They explore the potential ethical and social consequences of AI, from our urge to anthropomorphize machines to the creeping influence of human biases in AI engineering.

    The episode kicks off with the hosts sharing their favorite Halloween costumes and family traditions before delving into the broader theme of Frankenstein as an apt metaphor for AI. They discuss the human inclination to attribute human qualities to non-human entities and the ethical implications of creating machines that mirror humanity. The conversation deepens with reflections on biases in AI development, risks of ‘playing God,’ and the tension between technological progress and human oversight.

    In a thrilling twist, the hosts read a co-authored sci-fi story written with ChatGPT, illustrating the potential dark consequences of unchecked AI advancement. The episode wraps up with Halloween-themed wishes, encouraging listeners to ponder the boundaries between human and machine as they celebrate the holiday.


    Timestamps:

    03:38 – Frankenstein: Revisiting the original story

    09:09 – Frankenstein’s Modern AI Metaphor: Parallels to today’s technology

    18:06 – Reflections on AI and Anthropomorphism: The urge to humanize machines

    36:31 – Exploring Human Biases in AI Development: How biases shape AI

    42:06 – Trust in AI: Human vs. algorithmic decision-making

    46:45 – The Personalization of AI Systems: Pros and cons of tailored experiences

    49:10 – The Ethics of Playing God with AI: Examining the risks

    55:56 – Concluding Thoughts and Halloween Wishes: Reflecting on AI’s duality

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr
