Episodes

  • Misinformation Machines with Gordon Pennycook – Part 2
    2024/11/06
    Debunkbot and Other Tools Against Misinformation

    In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation.

    Building on their previous conversation around misinformation’s impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to counter misinformation effectively. Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques, that empower individuals to better evaluate the information they encounter.

    The conversation highlights recent advances in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression.

    This episode provides valuable insights for anyone interested in understanding how to counter misinformation through behavioral science and AI.


    LINKS:

    Gordon Pennycook:

    • Google Scholar Profile
    • Twitter
    • Personal Website
    • Cornell University Faculty Page

    Further Reading on Misinformation:

    • Debunkbot - The AI That Reduces Belief in Conspiracy Theories
    • Interventions Toolbox - Strategies to Combat Misinformation


    TIMESTAMPS:

    01:27 Intro and Early Voting
    06:45 Welcome back, Gordon!
    07:52 Strategies to Combat Misinformation
    11:10 Nudges and Behavioral Interventions
    14:21 Comparing Intervention Strategies
    19:08 Psychological Inoculation and Prebunking
    32:21 Echo Chambers and Online Misinformation
    34:13 Individual vs. Policy Interventions
    36:21 If You Owned a Social Media Company
    37:49 Algorithm Changes and Platform Quality
    38:42 Community Notes and Fact-Checking
    39:30 Reddit’s Moderation System
    42:07 Generative AI and Fact-Checking
    43:16 AI Debunking Conspiracy Theories
    45:26 Effectiveness of AI in Changing Beliefs
    51:32 Potential Misuse of AI
    55:13 Final Thoughts and Reflections

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    1 hr 3 min
  • Misinformation Machines with Gordon Pennycook – Part 1
    2024/11/04

    The Role of Misinformation and AI in the US Election with Gordon Pennycook

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel explore the complex world of misinformation in the context of the U.S. elections with special guest Gordon Pennycook, a psychology professor at Cornell University.

    The episode covers the effects of misinformation on democratic participation, and how behavioral science sheds light on reasoning errors that drive belief in falsehoods. Gordon shares insights from his groundbreaking research on misinformation, exploring how falsehoods gain traction and the role AI can play in both spreading and mitigating misinformation.

    The conversation also tackles the evolution of misinformation, including the impact of social media and disinformation campaigns that blur the line between truth and fiction.

    Tune in to hear why certain falsehoods spread faster than truths, the psychological appeal of conspiracy theories, and how humor can amplify the reach of misinformation in surprising ways.


    LINKS:

    Gordon Pennycook:

    • Google Scholar Profile
    • Twitter
    • Personal Website
    • Cornell University Faculty Page

    Further Reading on Misinformation:

    • Brandolini’s Law and the Spread of Falsehoods
    • Role of AI in Misinformation
    • The Psychology of Conspiracy Theories


    TIMESTAMPS:

    00:00 Introduction

    03:14 Behavioral Science and Misinformation

    05:28 Introducing Gordon Pennycook

    10:02 The Evolution of Misinformation

    12:46 AI’s Role in Misinformation

    14:51 Impact of Misinformation on Elections

    21:43 COVID-19 and Vaccine Misinformation

    26:32 Technological Advancements in Misinformation

    33:50 Conspiracy Theories

    35:39 Misinformation and Social Media

    42:35 The Role of Humor in Misinformation

    48:08 Quickfire Round: To AI or Not to AI

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    53 min
  • The Dark Side of AI – Halloween Special
    2024/10/30

    In this spine-chilling Halloween special of the Behavioral Design Podcast, co-hosts Aline Holzwarth and Samuel Salzer take listeners on a journey into the eerie intersection of AI and behavioral science. They explore the potential ethical and social consequences of AI, from our urge to anthropomorphize machines to the creeping influence of human biases in AI engineering.

    The episode kicks off with the hosts sharing their favorite Halloween costumes and family traditions before delving into the broader theme of Frankenstein as an apt metaphor for AI. They discuss the human inclination to attribute human qualities to non-human entities and the ethical implications of creating machines that mirror humanity. The conversation deepens with reflections on biases in AI development, risks of ‘playing God,’ and the tension between technological progress and human oversight.

    In a thrilling twist, the hosts read a co-authored sci-fi story written with ChatGPT, illustrating the potential dark consequences of unchecked AI advancement. The episode wraps up with Halloween-themed wishes, encouraging listeners to ponder the boundaries between human and machine as they celebrate the holiday.


    TIMESTAMPS:

    03:38 – Frankenstein: Revisiting the original story

    09:09 – Frankenstein’s Modern AI Metaphor: Parallels to today’s technology

    18:06 – Reflections on AI and Anthropomorphism: The urge to humanize machines

    36:31 – Exploring Human Biases in AI Development: How biases shape AI

    42:06 – Trust in AI: Human vs. algorithmic decision-making

    46:45 – The Personalization of AI Systems: Pros and cons of tailored experiences

    49:10 – The Ethics of Playing God with AI: Examining the risks

    55:56 – Concluding Thoughts and Halloween Wishes: Reflecting on AI’s duality

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr
  • Recommender Systems with Carey Morewedge
    2024/10/23
    In this episode of the Behavioral Design Podcast, we delve into the world of AI recommender systems with special guest Carey Morewedge, a leading expert in behavioral science and AI.

    The discussion covers the fundamental mechanics behind AI recommendation systems, including content-based filtering, collaborative filtering, and hybrid models. Carey explains how platforms like Netflix, Twitter, and TikTok use implicit data to make predictions about user preferences, and how these systems often prioritize short-term engagement over long-term satisfaction.

    The episode also touches on ethical concerns, such as the gap between revealed and normative preferences, and the risks of relying too heavily on algorithms without considering the full context of human behavior.

    Join co-hosts Aline Holzwarth and Samuel Salzer as they, together with Carey, explore the delicate balance between human preferences and algorithmic influence. This episode is a must-listen for anyone interested in understanding the complexities of AI-driven recommendations!

    --

    LINKS:

    Carey Morewedge:

    • Google Scholar Profile
    • Carey Morewedge - LinkedIn
    • Boston University Faculty Page
    • Personal Website

    Understanding AI Recommender Systems:

    • How Netflix’s Recommendation System Works
    • Implicit Feedback for Recommender Systems (Research Paper)
    • Why People Don’t Trust Algorithms (Harvard Business Review)
    • Nuance Behavior Website

    --

    TIMESTAMPS:

    00:00 The 'Do But Not Recommend' Game
    07:53 The Complexity of Recommender Systems
    08:58 Types of Recommender Systems
    12:08 Introducing Carey Morewedge
    14:13 Understanding Decision Making in AI
    17:00 Challenges in AI Recommendations
    32:13 Long-Term Impact on User Behavior
    33:00 Understanding User Preferences
    35:03 Challenges with A/B Testing
    40:06 Algorithm Aversion
    46:51 Quickfire Round: To AI or Not to AI
    52:55 The Future of AI and Human Relationships

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    59 min
  • AI and Behavioral Science – What You Need to Know
    2024/10/16
    In the latest episode of the Behavioral Design Podcast, we are excited to launch Season 4 with an in-depth exploration of how behavioral science and AI converge, setting the stage for an engaging and thought-provoking season. This episode tackles big questions around AI’s growing influence, offering insights into both its promise and its challenges, especially as they relate to human behavior and decision-making.

    Join co-hosts Aline Holzwarth and Samuel Salzer as they introduce key themes for the season, including the profound implications of AI for behavioral science and society at large. The episode opens with breaking news from the AI world and the significance of neural networks, which serve as the foundation of modern AI systems. The hosts explain how neural networks work and contrast them with the extraordinary complexity of the human brain.

    The episode covers essential concepts for behavioral scientists, including large language models (LLMs), the backbone of generative AI, as well as prompt engineering and AI agents. These tools are transforming fields from healthcare to customer service, and the hosts break down their real-world applications, highlighting how they are used to enhance decision-making, automate workflows, and drive personalized interventions.

    Samuel and Aline debunk several common myths about AI, such as whether generative AI truly enhances creativity or whether more complex models are always better. They also explore algorithmic bias versus human bias, discussing how AI can both amplify and address societal inequities depending on how it is designed and implemented.

    In “To AI or Not to AI”, this season’s quickfire round, the hosts weigh in on whether they’d trust AI for tasks like driving their kids to daycare or offering relationship advice, sparking a thought-provoking discussion on AI’s role in everyday life.

    This episode is a must-listen for anyone curious about the evolving relationship between behavioral science and AI, offering both high-level insights and detailed explorations of the real-world implications of these technologies.

    --

    TIMESTAMPS:

    00:00 Introduction to the Behavioral Design Podcast
    02:36 Breaking News
    04:30 Understanding Neural Networks
    09:38 The Beauty and Complexity of the Human Brain
    17:37 Season Preview
    21:53 Meet Your Hosts
    29:00 Nuanced Behavior
    30:43 AI 101 for Behavioral Scientists
    44:14 Debunking AI Myths
    01:02:15 To AI or Not to AI: Quickfire Round
    01:14:45 Final Thoughts

    LINKS:

    • Geoffrey Hinton’s Talk on AI and John Hopfield’s Contributions to Neural Networks
    • Sherry Turkle’s Memoir “The Empathy Diaries”
    • Marvin Minsky and the Concept of the Brain as a Machine
    • Cassie Kozyrkov’s Blog on Machine Learning
    • Sendhil Mullainathan’s Paper on Algorithmic Fairness
    • Generative AI enhances individual creativity but reduces the collective diversity of novel content
    • Superintelligence: Paths, Dangers, Strategies
    • Biased Algorithms Are Easier to Fix Than Biased People
    • Nuance Behavior Website

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    1 hr 18 min
  • 2023 in Review – Season 3 Finale 🌟
    2023/12/14
    We've reached the end of Season 3! 🎉 In this finale, we give you the inside scoop on topics in behavioral design from 2023. From our favorite resources to AI to films, we explore all things behavioral design, so you too are in on the inside scoop! All resources are linked below. Enjoy!

    From the bottom of our hearts, thank you for supporting us throughout the year! We appreciate you! 🙏 🙌

    Gratitude:

    • A systematic review of the strength of evidence for the most commonly recommended happiness strategies in mainstream media | Nature Human Behaviour (Dunigan Folk & Elizabeth Dunn)
    • No Sweat book - Michelle Segar
    • Preregistering, transparency, and large samples boost psychology studies’ replication rate to nearly 90% | Science
    • High replicability of newly discovered social-behavioural findings is achievable | Nature Human Behaviour

    Favorite Resources:

    • BehaviorBytes
    • Women in Behavioral Science and the Women in Behavioral Science LinkedIn group – Darcie Piechowski
    • Lesson on Fraud and Whistleblowing – Zoe Ziani
    • Choice Overload: It’s not about the number – Hassan & Roos
    • 7 Routes to Applied Behavioural Science Experimentation and Observation – Affective + OECD
    • Mapping Behavioural Journeys – Common Thread
    • A Manifesto for Applying Behavioral Science – The Behavioural Insights Team
    • Behavioral Science as a Specialization – Connor Joyce
    • The Science of Context – Jared Peterson

    Top 10 films:

    • Fallen Leaves
    • Close
    • Passages
    • Luxembourg, Luxembourg
    • Past Lives
    • Beau Is Afraid
    • One Fine Morning
    • Barbie
    • Oppenheimer
    • Infinity Pool

    --

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    52 min
  • Product Deep Dive: Pill Bottles 💊
    2023/12/06

    Welcome to the latest Product Deep Dive! 💊

    In this bonus series of the Behavioral Design Podcast, we take a closer look at the seemingly simple, yet tremendously important, pill bottle.

    Previous guest Aarthi Rao took a stab at designing the best pill bottle, so we decided to do a deep dive into all things behavioral design in the pill bottle world ourselves! Easy, attractive, social, personalized... tune in to learn more. This one was a lot of fun!

    Thank you to all of our listeners for supporting our podcast. Tune in next week for our Season 3 finale!
    --

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    53 min
  • Human-Centered Behavioral Design with Aarthi Rao
    2023/11/29

    Aarthi Rao leads behavioral insights at Cityblock Health as its Vice President of Behavioral Insights and Strategic Engagement Innovation. Aarthi also founded the Design and Innovation Lab at CVS Health.

    Aarthi has successfully merged human-centered practices, such as design thinking, with behavioral science at Cityblock. She is a strong advocate for combining qualitative and quantitative methods to better design patient experiences. Today we spoke to Aarthi about reaching hard-to-reach communities, designing the perfect pill bottle that fits into a patient’s healthcare ecosystem, and so much more. Enjoy!

    --

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    57 min