• Teaching AI to Think: Reasoning, Mistakes & Learning with Alex Dimakis - Episode 11: The Effortless Podcast

  • 2025/03/01
  • Duration: 1 hr 22 min
  • Podcast

  • Summary

  • Episode Title and Number: Teaching AI to Think: Reasoning, Mistakes & Learning with Alex Dimakis - Episode 11: The Effortless Podcast

    In this episode, Amit and Dheeraj dive deep into the world of AI reasoning models with Alex Dimakis, an AI researcher involved in OpenThinker and OpenThoughts. They explore two recent, groundbreaking papers, SkyT1 and S1 (Simple Test-Time Scaling), that showcase new insights into how large language models (LLMs) develop reasoning capabilities.

    From structured reasoning vs. content accuracy to fine-tuning efficiency and the role of active learning, this conversation highlights the shift from prompt engineering to structured supervised fine-tuning (SFT) and post-training techniques. The discussion also touches on open weights, open data, and open-source AI, revealing the evolving AI landscape and its impact on startups, research, and beyond.

    Key Topics & Chapter Markers
    • [00:00] Introduction – Why reasoning models matter & today's agenda
    • [05:15] Breaking Down SkyT1 – Structure vs. Content in reasoning
    • [15:45] Open weights, open data, and open-source AI
    • [22:30] Fine-tuning vs. RL – When do you need reinforcement learning?
    • [30:10] S1 and the power of test-time scaling
    • [40:25] Budget forcing – Making AI "think" more efficiently
    • [50:50] RAG vs. SFT – What should startups use?
    • [01:05:30] Active learning – AI asking the right questions
    • [01:15:00] Final thoughts – Where AI reasoning is heading next

    Resources & Links

    📄 Papers Discussed:

    • SkyT1: "LLMs Can Easily Learn to Reason from Demonstrations"
    • S1: "Simple Test-Time Scaling"

    Hosts:

    Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix. A tech visionary with a deep interest in AI and systems thinking.

    Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.

    Guest:

    Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs. He has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.

    Follow the Hosts and the Guest:

    Dheeraj Pandey:

    LinkedIn - https://www.linkedin.com/in/dpandey

    Twitter - https://x.com/dheeraj

    Amit Prakash:

    LinkedIn - https://www.linkedin.com/in/amit-prakash-50719a2/

    Twitter - https://x.com/amitp42

    Alex Dimakis:

    LinkedIn - https://www.linkedin.com/in/alex-dimakis-b1b20320

    Twitter - https://x.com/AlexGDimakis

    Share Your Thoughts:

    Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

    Don’t forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!
