Episodes

  • Did AI Kill Programming? | EP. 50
    2026/02/19
    Are AI coding tools actually replacing programmers, or just changing how software gets built? In this episode of Hidden Layers, Ron Green sits down with Dr. ZZ Si and Michael Wharton to unpack what has shifted with modern coding agents, what has not, and where the hype breaks down. They share concrete examples from their own workflows, including how coding tools have moved from autocomplete to handling larger chunks of work, and why the real bottleneck is no longer writing syntax, but defining intent, architecture, and product direction. The conversation also explores how these tools are reshaping team velocity, why senior engineers tend to get more leverage from AI than junior developers, and the risks of weakening the talent pipeline if companies stop investing in early-career engineers. The episode closes with a candid look at what skills will matter most in an AI-assisted world, how abstraction layers are changing the role of programmers, and whether we may already be near peak computer science graduates.
    Chapters
    00:00 – The rise of AI coding tools
    03:07 – How workflows are changing
    06:27 – Team velocity and delivery speed
    08:19 – Product thinking vs. engineering execution
    09:46 – Is programming actually dying?
    11:41 – What “programming” means now
    15:23 – Senior vs. junior developer leverage
    16:33 – The developer talent pipeline
    18:21 – Ego, identity, and automation
    19:08 – Before vs. after: building with AI
    22:30 – Debugging and fixing issues with AI
    24:42 – Spec-writing and product shaping with AI
    26:49 – The future of computer science grads
    29:20 – Closing reflections
    30 min
  • Your AI Is Too Big, Too Expensive, and Probably Wrong | EP. 49
    2026/01/22
    What if the most powerful AI in your organization isn’t the biggest model you can buy, but the one trained on data only you own? In this episode of Hidden Layers, Ron Green is joined by Dr. ZZ Si and Michael Wharton to break down why domain-specific AI models consistently outperform general-purpose systems in real enterprise environments. They explore how narrowly scoped models deliver higher accuracy, lower costs, better reliability, and stronger governance, especially when built on proprietary data. Through real-world examples spanning finance, industrial systems, healthcare, and document understanding, the conversation tackles when to build custom models, when to rely on APIs, and how to identify AI initiatives that actually make it into production. The takeaway is clear: focus beats scale, and specificity is often the fastest path to durable competitive advantage.
    Chapters
    00:00:00 What Is Domain-Specific AI
    00:01:15 General Models vs. Focused Systems
    00:02:48 Performance, Cost, and Model Size
    00:04:13 Proprietary Data as Advantage
    00:07:58 Why AI Fails in Production
    00:08:42 Real-World Domain-Specific Examples
    00:10:54 How to Decide What to Build
    00:14:53 Scale, Accuracy, and Uncertainty
    00:18:49 The Spectrum of Domain-Specific AI
    00:27:01 What We’d Build Differently Today
    30 min
  • AI Year in Review – Key Moments, Hot Takes, and 2026 Predictions | EP. 48
    2025/12/17
    2025 was another defining year for artificial intelligence. In this special AI Year in Review episode of Hidden Layers, Ron Green is joined by Emma Pirchalski, Michael Wharton, and Dr. ZZ Si to break down what actually mattered in AI this year. The team recaps the biggest developments from 2025, revisits their predictions from 2024 to see what held up (and what didn’t), and shares honest, experience-driven predictions for 2026. Topics include multimodal models, agents, enterprise adoption, governance gaps, workforce impact, ROI pressure, and where AI is truly headed next. This episode cuts past hype to focus on what leaders, builders, and decision-makers should actually be watching as AI moves from experimentation to execution.
    Chapters
    00:00:00 Welcome and Introduction to 2025 AI Year in Review
    00:00:56 Emma's Working Models Podcast Announcement
    00:01:48 Top AI Developments of 2025
    00:16:29 Reviewing 2025 Predictions
    00:25:08 2026 Predictions
    00:36:49 Closing Thoughts
    41 min
  • Why Agentic AI Isn’t Ready for Prime Time—Yet | EP. 47
    2025/11/13
    Artificial intelligence is shifting from prediction to autonomy, and “agentic AI” is leading the charge. In this episode of Hidden Layers, KUNGFU.AI’s Ron Green, Dr. ZZ Si, and Michael Wharton unpack what it really means for machines to act on their own, what’s hype versus real progress, and how far we are from true artificial general intelligence (AGI). They discuss how coding agents are transforming development workflows, why agentic AI is both overhyped and underutilized, the challenges of scaling reliable autonomy, the connection between AGI, biology, and lifelong learning, and whether new architectures or cognitive inspiration will take us the rest of the way.
    Chapters
    00:00 – Intro: From prediction to autonomy
    01:30 – What is agentic AI?
    05:00 – Coding agents and creative workflows
    08:00 – Reliability, risk, and real-world use
    12:30 – The agentic hype cycle
    16:00 – Why businesses underuse (and overuse) AI
    19:00 – Narrow AI and domain-specific intelligence
    22:00 – The AGI timeline debate
    26:00 – Learning from biology and cognition
    33:00 – Lifelong learning and what’s missing today
    37 min
  • Why AI Hallucinates (and Why It Might Never Stop) | EP. 46
    2025/09/25

    In this episode of Hidden Layers, Ron is joined by Michael Wharton and Dr. ZZ Si to explore one of the most pressing and puzzling issues in AI: hallucinations. Large language models can tackle advanced topics like medicine, coding, and physics, yet still generate false information with complete confidence.

    The discussion unpacks why hallucinations happen, whether they’re truly inevitable, and what cutting-edge research says about detecting and reducing them. From OpenAI’s latest paper on the mathematical inevitability of hallucinations to new techniques for real-time detection, the team explores what this means for AI’s reliability in real-world applications.

    31 min
  • GPT-5 Release Fallout, AGI Timeline, Google's Genie 3 and Meta's DINO V3 | EP. 45
    2025/09/03

    In this episode of Hidden Layers, we dive into the most important AI developments of the month. We cover OpenAI’s highly anticipated and controversial GPT-5 release, debate where we really are on the AGI timeline, explore groundbreaking new world models like Google’s Genie 3 and Tencent’s Hunyuan GameCraft, and unpack Meta’s DINOv3 image encoder breakthrough.

    25 min
  • Bridging Physics and AI for Smarter Climate Decisions | EP. 44
    2025/08/16

    In this episode of Hidden Layers, host Ron talks with Dr. Hannah Lu, assistant professor at the University of Texas at Austin and core faculty at the Oden Institute for Computational Engineering and Sciences. Dr. Lu is pioneering the use of AI-powered surrogate models to make complex scientific simulations, like CO₂ absorption in geological formations, faster, more accurate, and more useful for real-world decision-making.

    They discuss:

    • How surrogate models work and why they’re so powerful
    • The challenges of applying AI to physics-based systems
    • How digital twins and uncertainty quantification are shaping the future of environmental modeling
    • The intersection of generative AI, physics constraints, and climate science
    28 min
  • Apple AI Collapse, Diffusion Video Boom, Copyright Wars & More | EP. 42
    2025/07/16

    In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton unpack July’s biggest AI developments, from flawed reasoning tests to surprising training breakthroughs.

    Apple’s “Illusion of Thinking” paper draws sharp critiques from both humans and language models. Meta revives a forgotten 2019 attention mechanism to reshape scaling laws. Video generation tools from Black Forest Labs and others hit new levels of realism and interactivity. Federal courts weigh in on Anthropic and Meta’s use of copyrighted training data. A one-line tweak in training recurrent models dramatically boosts performance on long sequences. Cloudflare announces it will block AI scrapers by default, though it might be too late.

    From Transformer alternatives to copyright battles, this episode dives into the fast-moving intersection of AI research, engineering, and regulation.

    28 min