Episodes

  • The Goblin in the Machine | FAFO Friday
    2026/05/02

    I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant."

    Apparently goblins and mythical creatures crept in when OpenAI released its "nerdy" personality a few models back, and they've proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking?

    That's the jumping-off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off of Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies; how AI thinks you should shoot a three-pointer; what gets lost when humans no longer write the code; and why we need (?) whimsical garbage cans.

    Plus, we tie a few stories together: why a reckoning is coming for the all-you-can-eat AI token buffet as the "millennial lifestyle subsidy" for AI ends, tokenmaxxing, the growing (and bipartisan!) data center backlash, and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space.

    Links:

    • Where the goblins came from (OpenAI blog post)
    • My interview with responsible AI expert Dr. Rumman Chowdhury (Future Around & Find Out)
    • GitHub Copilot is moving to usage-based billing (GitHub announcement)
    • ‘The Most Bipartisan Issue Since Beer’: Opposition to Data Centers (NYTimes, gift link)
    • Meta inks deal for solar power at night, beamed from space (TechCrunch)

    Support Future Around & Find Out

    • Follow Dan on LinkedIn
    • Get the free newsletter
    • Become a paid subscriber and help future proof FAFO!
    34 min
  • AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"
    2026/04/28

    Rumman Chowdhury wants to remind you that “AI isn't doing anything.” We do things. AI is not to blame if you get laid off or denied medical coverage. People are.

    Eight years ago, Rumman coined the term “moral outsourcing” to describe the excuse of blaming tech for decisions that people make. Why do the semantics matter? Because, Rumman says:

    In world one, where “AI did X,” it's very scary. It's like, “oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job.” [But if we center on people, then we have agency and accountability and we can say] “no, you built a thing that was broken and flawed.”

    Rumman is the founder and CEO of Human Intelligence PBC, which is building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.

    In this conversation:

    • Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made
    • How to avoid — or at least how to mitigate — creating AI that’s biased
    • Red teaming AI and creating bias bounties
    • The "grandma hack" and other ways regular people accidentally jailbreak AI models
    • How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong
    • Why the benchmarks you see when a new model drops are "basically spelling tests"
    • AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive
    • What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before

    Chapters:

    • (00:00) - "The thing I believe in the most is human agency"
    • (02:14) - Why builders have more agency than they realize
    • (04:00) - What is a bias bounty?
    • (06:41) - What 2,000 hackers at DEF CON found
    • (09:40) - The grandma hack
    • (11:30) - Why guardrails fall apart
    • (14:54) - Anthropic's new bug-finding model and the cat-and-mouse game
    • (19:10) - Why most evals are "basically spelling tests"
    • (21:30) - How to actually evaluate an AI agent
    • (26:20) - "Moral outsourcing" and the AI layoff lie
    • (28:45) - Inside Rumman's tenure as U.S. AI Science Envoy
    • (32:10) - The legal loophole AI companies use to dodge liability
    • (35:35) - AI psychosis and the cold emails Rumman gets
    • (38:40) - Why Google's AI overview is quietly dangerous
    • (44:35) - The problem with "AI literacy"
    • (48:05) - Can we trust anything we see anymore?
    • (50:15) - What builders can do right now to take back agency

    Support Future Around & Find Out
    • Follow Dan on LinkedIn
    • Get the free newsletter
    • Become a paid subscriber and help future proof FAFO!
    54 min
  • We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway?
    2026/04/25

    We won the Webby Award for best tech podcast of 2026!!!

    I’m stunned! But Kwaku doesn’t like it when I say stuff like that, because as he reminds me in this “FAFO Friday” edition, “sometimes good things happen to good people.” OK, I'll take it. We won! And now I need to prepare a five-word speech to give. "FAFO Fridays Are My Favorite" comes to mind...

    But really, who could’ve predicted this? And also, are all predictions bunk? Kwaku just returned from a week at “Big TED” and he reports back that the talk everyone is talking about is “Beware the power of prediction” from philosopher and AI ethicist Carissa Véliz.

    What do the story of Oedipus and your insurance premiums have in common? They are both driven by self-fulfilling prophecies, according to Véliz, who warns us, on stage and in her new book, that we should be wary of false prophets — and of relying on AI-driven predictions. Some predictions are useful, she says: weather forecasts are great because the weather doesn’t care what you predict. But others become self-fulfilling prophecies: if an AI says someone is uninsurable and you then deny them insurance, then yes, they are uninsurable — but were they before you (or your algorithm) said so?


    It all speaks to a powerlessness many of us feel. Speaking of which… Meta just rolled out employee surveillance that tracks keystrokes and mouse clicks and takes periodic screenshots — to train AI on its employees' own jobs… Someone threw a Molotov cocktail at Sam Altman's house… The anti-data-center backlash is getting physical. And (sorry) here’s a prediction: if people don’t start feeling like they have some agency, we’re going to see more of this (especially in an election year). But as Kwaku puts it, we are the fuel. AI does nothing without us, so let’s reclaim our agency, because…


    The Future Needs a Word.


    That’s one of the five-word speech options we consider. I’m drawn to it, but not sold on it, so please share your own suggestions…

    ---
    FutureAround.com is the home for Future Around & Find Out. Go there to subscribe to the newsletter and to contribute to the show. And, as always, please tell a friend about the show. That's how podcasts grow.

    39 min
  • "I Can't Believe It's Not Software!" Paul Ford on AI and the Asterisk*
    2026/04/21

    So what even is “real” software anyway?

    Someone builds an app over the weekend. It works. It looks good. And then the search begins — for the asterisk. Security? Design quality? Can it go to production? Paul Ford says we’re in a new era: "I can't believe it's not software!"

    Paul is the co-founder of Aboard, where he helps organizations build custom software quickly, using AI tools. He's also one of my favorite tech writers. You may know him from "What Is Code," the opus he wrote for Bloomberg Businessweek a decade ago or from his writing in the New York Times, including his recent opinion piece, The A.I. Disruption We’ve Been Waiting for Has Arrived. Or perhaps you’re hip to Ftrain, where he’s been writing for longer than we’ve had the word “blog.”

    In this conversation, recorded at Aboard’s podcast studio (Paul and his cofounder also host a great show), we dig into the strange new world where roles are colliding, software* gets built quickly, and no one is quite sure what to teach their kids.

    We get into:

    • What Paul calls "the great search for the asterisk" — the moment someone demos an app and everyone scrambles to find the catch
    • How the power dynamic between engineers and everyone else is fundamentally shifting — and why that's both liberating and destabilizing
    • Why vibe-coded prototypes are changing how agencies pitch and price their work — and why pricing is "very unresolved"
    • The skills that actually matter now: client communication, systems thinking, and depth over velocity
    • Why "the environmental costs [of AI] have become essentially a truthful folk narrative to talk about how difficult and scary and painful it is to see your life get continually smashed into bits."
    • What he's teaching his kids (hint: it's not to code)

    Chapters:

    • (01:40) - “We’re in a funny moment now” – catching up on the ten years since “What Is Code?”
    • (05:30) - “You gotta stop fighting” – AI code is genuinely useful, caveats and all
    • (08:44) - AI enables people who could never afford custom software to have it
    • (09:50) - Why he knew he’d get yelled at for his recent piece in the NYTimes
    • (13:00) - “AI washing” and job cuts
    • (14:50) - Paul’s theory for why the market oscillates so wildly on AI news + are we going to vibe code our own DoorDash?
    • (17:00) - What’s the hardest thing about building with AI right now?
    • (19:36) - Hiring, the most in-demand skills, and “forward-deployed engineers”
    • (27:50) - “Product is still hard” – in response to: “What is something that AI will never be great at?”
    • (31:36) - “What is something that sounds like science fiction, but that will soon be real — and commonplace?”
    • (32:46) - Why Paul is excited about world models (and thinks LLMs are topping out)
    • (36:06) - Why environmental concerns have become a “truthful folk narrative about how difficult and scary” AI is
    • (39:26) - There is no magic solution for climate (but one positive thing AI can do is help digest climate data)
    • (41:26) - Why kids should learn systems thinking

    Support Future Around & Find Out
    • Get the free newsletter
    • Become a paid subscriber and help future proof this thing!

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com



    45 min
  • We're a Webby nominee for Best Tech Podcast! Please vote! And here are the FAFO highlights the Webbys loved so much
    2026/04/16

    Hey everyone... so, in case you haven't heard... this show, Future Around & Find Out, has been nominated for a Webby for best tech podcast!


    *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology ***

    I was kind of being chill about this. I am, admittedly, not my own best hype man, but then I got riled up when I heard the hosts of The Vergecast, one of the other nominees and last year's winner, complain that they weren't winning by enough votes and that they wanted to win by such a large margin that it -- quote -- hurts everyone's feelings.

    Well, those are my feelings Nilay Patel was talking about!


    Look, I like the Verge -- and I definitely didn't have them on my list of people I might feud with this year -- but f* those guys! Let's win this thing!


    So could you please vote? Today, April 16th, is the last day to do so, and we're currently just behind, in second place. The link to vote is in the show notes. You can also find it on the show's website at Future Around dot com.


    And what is it you're voting for? Well, if you've been listening then you already know what this show is all about, but I also thought for newbies and even for long time listeners, it might be fun for you to hear exactly what the Webby judges listened to when they voted for FAFO to be a best tech podcast nominee. They ask for ten minutes of audio, so I made a highlight reel — and here it is.

    *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology ***

    11 min
  • We Need Inventors. And Inventors Need Us. Pablos Holman on Finding and Backing Zero to One Builders
    2026/04/14

    We live in a world where every crisis lands in your pocket the moment it happens. The result? We're more informed than ever — and somehow less capable of doing anything about it.

    Inventor and investor Pablos Holman has a diagnosis: we're spreading ourselves across every problem, which means we're solving none of them. His prescription is uncomfortable — pick one thing, go all in, and cut the noise.

    ***
    QUICK PLUG: Future Around & Find Out is nominated for a Webby for best tech podcast! Voting is open now for the People's Choice Award. Please vote before April 16th! https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology
    ***

    Pablos is the co-founder of Deep Futures, where he hunts for inventors tackling world-scale problems: energy, water, food, waste, transportation. Not apps. Atoms. And thanks to advances in AI and software, these "impossible" problems are more solvable than ever — if the right people show up to back them.

    In this conversation, recorded at the fabulous PopTech conference, he makes the case that inventors are the most important creative class on earth — and the most invisible. They're undersupported, uncelebrated, and working alone in garages. Some of them are probably going to blow themselves up. Those are exactly the people he's looking for.

    We get into:

    • Why doomscrolling is literally eroding your ability to make a difference
    • The difference between craft (optimization) and creation (zero-to-one) — and why AI is great at one and struggling with the other
    • Why you can name 100 musicians but fewer than two living inventors
    • How solving energy unlocks clean water, sanitation, and climate — essentially for free
    • Why software people are uniquely positioned to work on the hardest problems in the world right now

    Chapters:

    • (01:15) - Why the world isn't as broken as your newsfeed makes it seem
    • (03:00) - The sticky note exercise: how to pick the one problem worth your time
    • (04:30) - Inventors are the most important creative class nobody talks about
    • (07:00) - Living inventors you should actually know
    • (09:00) - What AI is good at — and what it still can't do
    • (12:30) - Why software people are the right ones to tackle deep tech problems
    • (22:56) - Energy is the root problem — solve it and you solve a lot else
    • (25:56) - Climate change needs a thousand solutions, not one big fix
    • (28:26) - The fashion industry's dirty secret and what robots can do about it

    Links & Resources
    • Pablos Holman on LinkedIn
    • Deep Future: VC firm, book, and podcast

    Support Future Around & Find Out

    • FAFO is nominated for a Webby for best tech podcast! Vote now!
    • Get the free newsletter
    • And consider becoming a paid subscriber and help future proof this thing!

    Sponsor the show?

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com

    ---

    Pablos's first appearance on the show covers his work at Blue Origin and Intellectual Ventures. Scroll in your podcast app to July 2025 to find that fun conversation. (You can listen before or after this one; it's not a prerequisite.)


    32 min
  • The Moon, the Mythos, the Mayhem | FAFO Friday
    2026/04/11

    Hey, great news! We’ve been nominated by the Webby Awards for best tech podcast! Voting is open now and we’re in second place for the People's Choice prize, just behind The Verge. They really don’t need this win, but it would help this show grow. Would you please (ask a friend to) vote for Future Around & Find Out?

    *** VOTE FOR FAFO ***


    OK, here’s this week’s FAFO Friday… (we record on Fridays and the show has Friday/weekend vibes, so just go with it no matter what day of the week it is :)


    This week, Kwaku and I…

    • Gape at the moon in wonder
    • Ask why we sent humans on this mission when space robots could’ve done the job (related: why climb Mount Everest?)
    • Marvel at Anthropic’s new Mythos model, which they say is remarkably good at finding flaws in the world’s critical software — or is this just another example of their marketing savvy? — or both!?
    • Dig into AI world models and Jeff Bezos’s (modestly named) Project Prometheus
    • Ask whether we want robots in our houses (yes, but only if they’re dumb)
    • Keep FAFO weird (because in the age of AI that’s how you prove you’re human)

    *** VOTE FOR FAFO ***

    34 min
  • Trust Is All That's Left: How AI Scrambles the Creator Economy | Jim Louderback Live from SXSW
    2026/04/07

    Future Around & Find Out is a best technology podcast nominee! And with your help it could be a winner. The Webby Awards voting is open now. Please vote for FAFO!

    Thanks to AI, “content is about to become infinite.” And just like the Internet disrupted distribution, AI is disrupting creation. And so when anyone, anywhere can create content, what’s left? What’s defensible? That would be trust and humanity.

    Live from Podcast Movement Evolutions at SXSW, I sit down with Jim Louderback — former VidCon CEO, Inside the Creator Economy newsletter writer, and media veteran — to unpack what's actually changing and what builders and creators should do about it.

    We get into why the "age of perfection" is over, why founders need a meme instead of an elevator pitch, and why putting a creator on your cap table might be the smartest move a startup can make. Jim makes the case for a trust economy where views and likes are meaningless — and where the real question is how far your trust graph extends. We also talk digital twins (and what happens when yours goes rogue), why events are still the best way to prove you're human, the state of journalism and public media, and why 2004’s “Subservient Chicken” was so ahead of its time.

    Chapters:

    • (01:30) - How AI disrupts creation
    • (03:50) - The number of creators is about to double to 500 million
    • (06:45) - We’ll have “certified human” labels, just like “organic” and why the Subservient Chicken was so far ahead of its time
    • (08:40) - The age of perfection is over
    • (10:00) - The only thing that matters is trust
    • (12:00) - Events, FTW!
    • (13:45) - The elements of a great event are timeless
    • (18:11) - Favorite moments from SXSW
    • (21:56) - What’s your meme? > What’s your elevator pitch?
    • (23:28) - Put a creator on the cap table
    • (27:21) - Creator-community fit
    • (29:38) - The challenges of being a journalist today
    • (32:26) - Create your own digital twin
    • (36:26) - Why John Green’s jaw dropped when he learned of Dan’s grandma

    ---
    Future Around & Find Out
    • Vote for FAFO to be a Webby Awards winner!
    • Get the newsletter
    • Sponsor the show? Want to share your message with senior technologists? Email Dan: dan@futurearound.com
    39 min