Episodes

  • To Accede or Not To Accede? That Is The Question | It's FAFO Friday!
    2026/02/27

    Murderbots, mass layoffs, and media takeovers — all in one news cycle. Anthropic told the Pentagon "we will not accede." Block cut half its workforce overnight. And the Paramount-Warner Brothers deal raises real questions about who's running the media now.


    Also, thanks to Nicolás Maduro's fashion sense, Dan's 13-year-old is being called Lil Tator at school and honestly? The kids are all right.


    Happy FAFO Friday!

    43 min
  • "I just want AI to replace me as a scientist" | The co-founder of Diagnostic Robotics predicts the future
    2026/02/24

    Of all the industries AI will transform, Kira Radinsky believes chemistry and biology will change the most.


    Kira is the CEO of Diagnostic Robotics, which uses AI to automate the administrative work that's crushing healthcare teams — so clinicians can actually focus on patients. She's also the co-founder of Mana.bio, where they're accelerating drug discovery by orders of magnitude.


    She'll tell you she's a terrible scientist. Not because she isn't brilliant — but because she can't pipette without killing the cells. So she built AI to do the science instead.


    But this episode is about more than healthcare. It's about how to build systems that get smarter over time — feedback loops, causal inference, incentivizing algorithms to take risks, and knowing when to optimize for ROI instead of accuracy. Lessons that apply whether you're building in biotech or not.


    We cover:

    • How growing up Jewish in Soviet Ukraine — and fleeing to Israel just before the Gulf War — shaped Kira's obsession with predicting the future
    • How she built a system that successfully predicted real-world events, including Cuba's first cholera outbreak in 130 years
    • How Mana.bio is using AI to build "rocketships" that deliver drugs to the right cells — and how they've done in three months what used to take 20 years
    • Why predictions are only valuable if there's something you can do about them — and why that makes healthcare an ideal field for AI
    • How to incentivize algorithms to make bolder predictions (it's easy to predict there won't be an earthquake today; it's much harder to say there will be)
    • Why causal inference is the most underrated tool in machine learning right now
    • How healthcare AI can perpetuate racial bias — and what builders need to do differently

    Note: this interview originally aired in October 2024.

    Chapters:

    • (01:44) - Why predictions are so important to Kira: lessons from fleeing Soviet-era Kyiv
    • (05:10) - Building a prediction engine from 150 years of news
    • (08:35) - How Kira predicted the Cuba cholera outbreak
    • (09:50) - Returning to biology by way of data
    • (12:50) - Predicting healthcare outcomes by finding your patient's twin
    • (17:53) - The racial bias hiding in healthcare AI
    • (19:15) - Building Mana.bio and accelerating drug discovery
    • (24:33) - "In three months, we did what used to take 20 years"
    • (31:44) - Builder tips: ROI, causal inference, and teaching algorithms to explore
    • (35:07) - Planning: Where generative AI needs to improve

    Links & Resources:
    • Kira Radinsky on LinkedIn
    • Diagnostic Robotics
    • Mana.bio

    Support Future Around & Find Out

    • Get the free newsletter
    • And consider becoming a paid subscriber and help future proof this thing!

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com
    39 min
  • AI Delivers Mediocre Results—By Design. So How Do You Stand Out? | MetaLab CEO Luke Des Cotes
    2026/02/20

    You probably know by now that AI is the definition of mediocre. As in: it’s the average of everything it’s been trained on. So how do you get beyond average? How do you build a moat?


    It certainly doesn’t seem to be via the models. While there are models of the month (hey, Opus 4.6, my new friend!), they seem to be pretty swappable.


    So, the model ain’t it. But proprietary data (e.g. an AI that knows you really well), yes! Or doing something really hard in the real world (think: Waymo self-driving cars). Maybe via trust and safety (Anthropic is certainly making a play here). Or... how about via amazing design and good taste?


    Remember when ChatGPT first came out and everyone derided “AI wrappers”? Well, maybe a wrapper isn’t so bad, assuming you can differentiate on one or more of the above.


    Luke Des Cotes is the CEO of MetaLab, the agency famous for designing interfaces, including early versions of Slack and Coinbase, so don’t be shocked when you hear him say that great design can be your moat.


    MetaLab is working with a host of AI companies (another shocker), including Windsurf (AI + code), Suno (AI + music), Pika (AI + video), and more, which is why Luke's take on AI surprised me. He's not rah-rah; he's actually pretty judicious. Luke has questions about AI's costs and its appropriateness for lots of use cases, like those involving kids, but mostly he objects to its mediocrity.


    On this episode we discuss what it takes to go beyond.


    We also get into:

    • Why vibe-coded software isn't changing the world anytime soon
    • Why Shopify acquired a design agency right after telling employees to justify their existence against AI
    • How MetaLab designers are using AI to prototype in hours instead of weeks
    • The talent market for zero-to-one designers — and why they're harder to find than ever
    • Landlines, brick phones, and how parents are fighting back against always-on kids

    Chapters

    • (01:10) - "It's a race to the mean"
    • (03:10) - "How do you create emotional resonance?"
    • (05:33) - AI companies are burning money
    • (08:44) - Speed to good enough
    • (13:51) - Is chat here to stay, or is it a temporary fad?
    • (17:43) - It’s hard to find great 0 to 1 design talent
    • (22:28) - Seemingly conscious AI
    • (25:05) - Kids, landlines, and fighting always-on culture
    • (27:21) - Sounds like science fiction, but is here now…

    Links & Resources

    • Luke Des Cotes on LinkedIn
    • MetaLab

    Support Future Around & Find Out

    • Get the free newsletter
    • And consider becoming a paid subscriber and help future proof this thing!

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com
    32 min
  • Could AI Make Capitalism Better? Henrik Werdelin Is Optimistic
    2026/02/17

    Henrik Werdelin is one of my favorite entrepreneurs. He’s founded and incubated several unicorns, most notably BARK, the dog happiness company.

    Henrik himself is a pretty happy guy — an optimistic guy who likes to ask what could go right? — and on the day we recorded (a few months ago as I was squirreling away interviews for the podcast relaunch), he helped me see through some future of tech gloom I was feeling. I honestly can’t even remember what Trump+tech hellscape we were living through that week, but I do remember that Henrik put me in a better mood. I think he’ll do the same for you, no matter how you’re feeling. 🤗


    Henrik believes AI could be a massive force for good. That it could bring forth a whole new — a better! — form of capitalism. He writes about this in his latest book, Me, My Customer, and AI. He points to those (like Henry Ford) who took advantage of electricity by making drastic, not incremental, changes to how they built things. Our conversation pairs nicely with my recent episode with Azeem Azhar, who said the AI winners will “come from odd places”, as they have in previous tech transformations.

    Here’s more of what Henrik and I cover:

    • His concept of "relationship capital"—the moat AI can't clone—and why the companies that win next will be defined by who they serve, not what they make
    • The three components of relationship capital: intensity, community, and durability
    • The "it sucks that" method for finding problems worth solving (he took it to a fifth grade class; the teacher was not thrilled)
    • His vision for the "headless", agentic web, where your startup's MVP is a group of agents, not an app
    • The wildly practical AI tools he's built just for himself: a custom CRM that searches by vibes not names, a newsletter bot tuned to his quarterly goals, and an agent that handled his visa paperwork while he was in a meeting
    • Why entrepreneurial skills—agency, narrative, resourcefulness—are the ultimate career insurance, whether you start a company or not
    • The absolutely ridiculous story of how a prank on a cruise ship led to him meeting his BARK co-founder in a heart-shaped bed

    Chapters

    • (01:43) - Two Futures: AI Bad vs. AI Really, Really Good
    • (05:44) - Why Positivity Is Actually the Riskier Bet
    • (09:05) - Electricity, AI, and the Rise of Relationship Capital
    • (11:12) - The Three Components of Relationship Capital
    • (14:20) - "It Sucks That" — The Best Way to Find a Real Problem
    • (19:22) - The Headless Future and Minimum Viable Agents
    • (22:40) - N-of-One Software: Building Tools Just for Yourself
    • (26:48) - Henrik's Custom Newsletter Bot and AI-Powered CRM
    • (30:59) - Warp, Obsidian, and Letting Agents Loose on Your Computer
    • (34:45) - Entrepreneurial Skills as Career Insurance
    • (36:53) - The Heart-Shaped Bed: How Henrik Met His BARK Co-Founder

    Links & Resources

    • Henrik Werdelin on LinkedIn
    • Audos, Henrik’s latest venture where he hopes AI agents trained in his methods can help thousands of entrepreneurs (donkeycorns!) a year
    • Beyond the Prompt podcast, from co-hosts Henrik Werdelin and Jeremy Utley


    Support Future Around & Find Out

    • Get the free newsletter
    • And consider becoming a paid subscriber and help future proof this thing!

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com
    39 min
  • "Shut Up, C-3PO!" or Do We Have a Duty To Treat Machines Well? | FAFO Friday
    2026/02/13

    Is AI conscious? Will it be someday? And should we be nice to it now... just in case?

    This FAFO Friday, Kwaku and I dive into the mind-bending world of machine consciousness.

    We cover a lot of ground, weaving from the different ways Luke (co-dependent with R2) and Han (barking commands at C-3PO) treat their droids, to whether Pascal’s Wager should inform how we treat AI, just in case the machines do come alive and have been keeping score. (Pascal figured it was the safer bet to believe in God, just in case; maybe we should do likewise?)

    That’s from us knuckleheads, but we’ve also got a true expert on consciousness. This week I interviewed Daniel Hulme, one of the world’s leading AI researchers. He’s the Chief AI Officer at WPP, the CEO of Satalia (which WPP acquired), and the founder and CEO of Conscium, which is researching AI consciousness and efficiency (he thinks we’re scaling wrong and that LLMs are not the way) and building a platform to verify that AI agents are safe. You’ll hear the first five minutes of my interview with Daniel.

    Daniel was not surprised by Moltbook (the Reddit-style site that AI agents built for themselves). That’s because he’s been putting agents together (in a “primordial soup” as he put it) for decades to observe the wild and wonderful ways they behave and to see if they’d create intelligence.

    Daniel does not think today’s agents are conscious, but he can see a path to it. And he believes that a conscious superintelligence would be safer than a “zombie” one.

    But mostly he doesn’t want machines to feel pain and suffer.

    Huh???

    My brain is still kind of broken from our hourlong chat, which I’m producing now and will release in a few weeks.

    For now, enjoy this preview and more from Kwaku and me as we talk about what we expect from machines, whether we want to be one with them, and more…

    19 min
  • Everyone's “Jumpy” Right Now: Azeem Azhar on When—Or Is It If?—AI Can Be Profitable
    2026/02/10

    Everyone's feeling jumpy about AI right now—and for good reason.

    The hype has been massive. The investment has been astronomical. But where's the actual return?

    In this episode, Azeem Azhar, founder of Exponential View and advisor to tech leaders and governments, breaks down why the next 18 months are make-or-break for AI. Companies need to prove there's real ROI, not just prototypes launched and tokens spent.

    We cover:

    • What hard evidence would actually prove AI is working (hint: it's not usage metrics)
    • Who can build a real moat with AI—and why the winners will likely come from unexpected places, as they have in previous tech transformations
    • The physical constraints nobody wants to talk about: chips, data centers, power grids, and whether America's infrastructure is up to the task
    • Why OpenAI's "ubiquity strategy" might be spreading too thin (and what Anthropic is doing differently)
    • The "pragmatic addicts" problem: we're dependent on AI even though we don't trust it
    • How Azeem and his team use AI to be more productive, how they automate whatever they can, and why individual contributors are acting more like managers (of AI)

    Note: This interview was recorded months before the "SaaSpacolypse" (big market drop) of Feb 2026; the analysis is as relevant as ever.

    Chapters

    • (01:51) - Why the next 18 months are the crucible for AI
    • (04:09) - What hard evidence would actually prove AI ROI (not token counts!)
    • (06:55) - Why it's so hard to measure AI's real impact
    • (09:55) - Who can build a moat with AI? Winners will be in "odd places"
    • (12:56) - Structural data advantages: why Waymo's edge is hard to replicate
    • (14:34) - Coding agents and whether developers will become disillusioned with them
    • (18:21) - Physical constraints: chips, data centers, power, and America's grid problem
    • (21:25) - How the Gulf countries became an unexpected AI hub
    • (28:32) - "Pragmatic addicts": why 75% of Americans distrust AI but use it anyway
    • (32:15) - The narrative of AI can be very unappealing: heaven on Earth or dystopia
    • (35:06) - How Azeem's team uses AI: augmentation vs. automation
    • (40:36) - What should we be talking about besides AI?
    • (44:16) - Sounds like science fiction: What Azeem can't believe is real and here today


    Links & Resources:

    • Exponential View: https://www.exponentialview.co/
    • Azeem's Boom or Bubble dashboard: https://boomorbubble.ai/
    • Azeem's New York Times piece on America's electric grid challenge: https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html
    • More on the “MIT study” claiming 95% of AI projects fail, which Azeem and I both found to be really poorly done but which is nonetheless quoted by everyone. Here’s Azeem tearing the study apart with data: https://www.exponentialview.co/p/how-95-escaped-into-the-world
    • And here's me riffing with Kwaku Aning on it. You know why Azeem liked my take? Because I actually read the thing, unlike ~95% of the writers out there who just quoted that 95% number: https://www.futurearound.com/p/did-anyone-actually-read-that-mit-ai-study-that-made-the-markets-swoon-i-did

    Support Future Around & Find Out

    • Get the newsletter: https://www.futurearound.com
    • Become a paid subscriber and help future proof this thing!: https://www.futurearound.com

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com
    45 min
  • Claude Goes High Brow With Its Super Bowl Ad and "Constitution"; OpenAI Scrambles | It's FAFO Friday!
    2026/02/06

    Welcome to the first FAFO Friday!

    This week Dan and Kwaku dig into:
    - The uncanny valley that is AI agents and Moltbook—the "Reddit" that agents built for themselves to complain about humans, create a religion, and behave in ways that freak humans out
    - Anthropic takes aim at OpenAI with a Super Bowl ad that's spicy (for cubs and cougars alike)
    - We read Claude's "Constitution" and ask: Should AI do what you ask it to do—or what it thinks you _really_ want long-term?
    - Why Dan switched from OpenAI to Claude (and what he learned about tone, capability, and custom projects)
    - OpenAI scrambles; the market stumbles; Jensen Huang acts like Sam Altman is "just someone I used to know"
    - How AEO (AI Engine Optimization) becomes critical in an AI-agent world—and what that means for brand, marketing, and search
    - Why social media is already past (dark social won)
    - Elon's pivot to humanoid robots, data centers in space, and other cool things we definitely need
    - Are we setting higher ethical standards for machines than for tech leaders?

    Plus: Friendster, TiVo, Pee-wee's Playhouse, and other asides that we hope you get, but maybe you won't ¯\_(ツ)_/¯

    ---
    Support Future Around & Find Out
    - Subscribe to the newsletter and support: https://www.futurearound.com
    - Support the media — support the future — you hope to see. Please consider a paid subscription to Future Around & Find Out. You’ll also get access to exclusive events and the ability to ask questions of upcoming guests. Learn more: https://www.futurearound.com/upgrade

    52 min
  • Steer the Future or Get Steamrolled: Baratunde Thurston on Our Collective Power
    2026/02/03

    Baratunde Thurston wants us to live well with machines — not to live under them, nor to be their almighty overlords.

    Baratunde is a technologist, a comedian, and an Emmy-nominated storyteller who explores interdependence. He gets spicy in this episode. The host of Life With Machines explores how he uses AI — without succumbing to its literal mediocrity — and why he feels he must use AI because otherwise he’s ceding the future to big tech. He also digs into the compromises made in service of building AGI, why strongmen are actually weak, and why CEOs need to stop bending the knee and learn how collective power and strength actually work.

    But he doesn't just critique—he offers builders a concrete path forward for building a better future, because:

    "If we build these systems in a good way, there'll be more for everybody, more freedom for everybody and more money for everybody. I do believe that that is possible, but if we do this the wrong way, most of us are gonna suffer and a handful will enjoy their riches in a very secure compound."

    This episode is a banger. You will be inspired to take action!


    Chapters:

    • (02:00) - “I don't want to live under machines… I also don't want to be like master of the machine”
    • (06:25) - Creating good goals for AI systems and products
    • (09:00) - “Nothing about us without us” – principles of community-based action
    • (11:10) - How Baratunde stays creative and avoids mediocrity when using AI
    • (14:10) - Building BLAIR, Baratunde’s AI “co-host” and “producer” on Life With Machines
    • (16:50) - “You know nothing, Jon Snow.” Generative AI systems are not knowledge repositories!
    • (20:00) - Practice what you preach: on Mustafa Suleyman (Microsoft AI CEO) and his warning against building “Seemingly Conscious AI”
    • (24:56) - The AI funding shell game
    • (25:56) - Racing to AGI and the compromises (trust & safety, copyright, etc…) along the way
    • (29:26) - How Baratunde reconciles his unease with his own heavy use of AI
    • (32:40) - “Comedy will not save us; we will save us.” On the role of comedy vs. authority / authoritarians
    • (36:56) - Bending the knee: why Baratunde says tech CEOs need to learn how collective power works
    • (38:56) - What builders — what we! — can do (today!) to exercise our power about how these systems will be built
    • (40:56) - “If we build these systems in a good way, there’ll be more for everybody…”

    Where to find Baratunde Thurston:

    • Life with Machines: https://www.lifewithmachines.media/

    Support Future Around & Find Out

    • Subscribe to the newsletter and support: https://www.futurearound.com
    • Support the media — support the future — you hope to see. Please consider a paid subscription to Future Around & Find Out. You’ll also get access to exclusive events and the ability to ask questions of upcoming guests. Learn more: https://www.futurearound.com/upgrade

    Sponsor the show?

    • Interested in reaching an audience of senior technologists and decision-makers and aligning with future-forward content? Let's talk! Please email show host Dan Blumberg: dan@modernproductminds.com

    ---
    Music by Jonathan Zalben

    43 min