Episodes

  • Teaching AI to Think: Reasoning, Mistakes & Learning with Alex Dimakis - Episode 11: The Effortless Podcast
    2025/03/01

    In this episode, Amit and Dheeraj dive deep into the world of AI reasoning models with Alex, an AI researcher involved in OpenThinker and OpenThoughts. They explore two recent groundbreaking papers, SkyT1 and S1 (Simple Test-Time Scaling), that showcase new insights into how large language models (LLMs) develop reasoning capabilities.

    From structured reasoning vs. content accuracy to fine-tuning efficiency and the role of active learning, this conversation highlights the shift from prompt engineering to structured supervised fine-tuning (SFT) and post-training techniques. The discussion also touches on open weights, open data, and open-source AI, revealing the evolving AI landscape and its impact on startups, research, and beyond.
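
    As a companion to the S1 (test-time scaling) discussion, below is a minimal, illustrative sketch of the budget-forcing idea: bound how long a model "thinks" by intervening on its end-of-thinking marker at decode time. The next_token callable and the THINK_END / WAIT_TOKEN markers are hypothetical placeholders, not the paper's actual implementation.

    ```python
    THINK_END = "</think>"   # hypothetical end-of-thinking marker
    WAIT_TOKEN = "Wait"      # appended to nudge the model into further reasoning

    def budget_forced_decode(next_token, prompt_tokens, min_think=256, max_think=2048):
        """Keep the reasoning phase between min_think and max_think tokens.

        next_token(context) -> str is assumed to return the model's next token
        given the tokens generated so far.
        """
        context = list(prompt_tokens)
        think_len = 0
        while think_len < max_think:
            tok = next_token(context)
            if tok == THINK_END and think_len < min_think:
                # Model tried to stop too early: suppress the marker and append
                # "Wait" so subsequent decoding continues the reasoning.
                context.append(WAIT_TOKEN)
                think_len += 1
                continue
            context.append(tok)
            think_len += 1
            if tok == THINK_END:
                return context
        # Token budget exhausted: force the end-of-thinking marker.
        context.append(THINK_END)
        return context
    ```

    In S1, this kind of intervention is what trades extra test-time compute for additional reasoning, the "budget forcing" covered in the chapter markers below.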

    Key Topics & Chapter Markers
    • [00:00] Introduction – Why reasoning models matter & today's agenda
    • [05:15] Breaking Down SkyT1 – Structure vs. Content in reasoning
    • [15:45] Open weights, open data, and open-source AI
    • [22:30] Fine-tuning vs. RL – When do you need reinforcement learning?
    • [30:10] S1 and the power of test-time scaling
    • [40:25] Budget forcing – Making AI "think" more efficiently
    • [50:50] RAG vs. SFT – What should startups use?
    • [01:05:30] Active learning – AI asking the right questions
    • [01:15:00] Final thoughts – Where AI reasoning is heading next

    Resources & Links

    📄 Papers Discussed:

    • SkyT1: "LLMs Can Easily Learn to Reason from Demonstrations"
    • S1: "Simple Test-Time Scaling"

    Hosts:

    Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix. A tech visionary with a deep interest in AI and systems thinking.

    Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.

    Guest:

    Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs, Alex has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.

    Follow the Hosts and the Guest:

    Dheeraj Pandey:

    LinkedIn - https://www.linkedin.com/in/dpandey

    Twitter - https://x.com/dheeraj

    Amit Prakash:

    LinkedIn - https://www.linkedin.com/in/amit-prakash-50719a2/

    Twitter - https://x.com/amitp42

    Alex Dimakis:

    LinkedIn - https://www.linkedin.com/in/alex-dimakis-b1b20320

    Twitter - https://x.com/AlexGDimakis

    Share Your Thoughts:

    Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

    Don’t forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!

    1 hr 22 min
  • Dissecting DeepSeek: Understanding Reasoning, Hardware, and Decentralized AI - Episode 10 Part 2: The Effortless Podcast
    2025/02/03
    In this second part of Episode 10 of the Effortless Podcast, hosts Dheeraj Pandey and Amit Prakash sit down with Alex Dimakis, a renowned AI researcher and professor, to discuss one of the biggest breakthroughs in open AI models: DeepSeek R1. They explore how DeepSeek’s innovations in reasoning, reinforcement learning, and efficiency optimizations are reshaping the AI landscape.

    The conversation covers the shift from large, proprietary AI models to open-source alternatives, the role of post-training fine-tuning, and how reinforcement learning (GRPO) enables reasoning capabilities in LLMs. They also dive into KV caching, mixture of experts, multi-token prediction, and what this means for NVIDIA, hardware players, and AI startups.

    Key Topics & Timestamps:

    • [00:00] - Introduction & Why DeepSeek Matters
    • [01:30] - DeepSeek R1: Open-Source AI Disrupting the Industry
    • [03:00] - Has China Become an AI Innovator?
    • [07:30] - Open Weights vs. Open Data: What Really Matters?
    • [10:00] - KV Caching, Mixture of Experts & Model Optimizations
    • [21:00] - How Reinforcement Learning (GRPO) Enables Reasoning
    • [32:00] - Why OpenAI is Keeping Its Reasoning Traces Hidden
    • [45:00] - The Impact of AI on NVIDIA & Hardware Demand
    • [1:02:00] - AGI: Language Models vs. Multimodal AI
    • [1:15:00] - The Future of AI: Fine-Tuning, Open-Source & Specialized Models

    Hosts:

    Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix. A tech visionary with a deep interest in AI and systems thinking.

    Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.

    Guest:

    Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs, Alex has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.

    Follow the Hosts and the Guest:

    Dheeraj Pandey:

    LinkedIn - https://www.linkedin.com/in/dpandey

    Twitter - https://x.com/dheeraj

    Amit Prakash:

    LinkedIn - https://www.linkedin.com/in/amit-prak...

    Twitter - https://x.com/amitp42

    Alex Dimakis:

    LinkedIn - https://www.linkedin.com/in/alex-dima...

    Twitter - https://x.com/AlexGDimakis

    Share Your Thoughts:

    Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

    Don’t forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!
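
    To make the GRPO discussion concrete, here is a minimal, illustrative sketch (plain NumPy, not DeepSeek's code) of the group-relative advantage at the heart of GRPO: rewards for a group of sampled answers to the same prompt are normalized against the group's own mean and standard deviation, so no separate value network is needed.

    ```python
    import numpy as np

    def group_relative_advantages(rewards, eps=1e-6):
        """GRPO-style advantage: normalize each sampled answer's reward against
        the mean and standard deviation of its own group (all samples drawn for
        the same prompt)."""
        r = np.asarray(rewards, dtype=np.float64)
        return (r - r.mean()) / (r.std() + eps)

    # Example: four sampled answers to one prompt, scored 1.0 if a rule-based
    # verifier (e.g. a math-answer checker) marks them correct, else 0.0.
    print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
    # -> approximately [ 1. -1. -1.  1.]
    ```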
    1 hr 21 min
  • Alex Dimakis on Post-Training AI: To deep seek or not, that’s the $1 trillion question! - Episode 10 Part 1: The Effortless Podcast
    2025/01/27

    In this episode of the Effortless Podcast, hosts Dheeraj Pandey and Amit Prakash sit down with Alex Dimakis, a renowned AI researcher and professor at UC Berkeley. With a background in deep learning, graphical models, and foundational AI frameworks, Alex provides unparalleled insights into the evolving landscape of AI.

    The discussion delves into the details of foundation models, modular AI architectures, fine-tuning, and the role of synthetic data in post-training. They also explore practical applications, challenges in creating reasoning frameworks, and the future of AI specialization and generalization.

    As Alex puts it, "To deep seek or not, that’s the $1 trillion question." Tune in to hear his take on how companies can bridge the gap between large generalist models and smaller specialized agents to achieve meaningful AI outcomes.

    Key Topics and Chapter Markers:

    • Introduction to Alex Dimakis & His Journey [0:00]
    • From Foundation Models to Modular AI Systems [6:00]
    • Fine-Tuning vs. Prompting: Understanding Post-Training [15:00]
    • Synthetic Data in AI Development: Challenges and Solutions [25:00]
    • The Role of Reasoning and Chain of Thought in AI [45:00]
    • AI's Future: Specialized Models vs. General Systems [1:05:00]
    • Alex’s Reflections on AI Research and Innovation [1:20:00]

    Hosts:

    • Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix. A tech visionary with a deep interest in AI and systems thinking.
    • Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.

    Guest:
    Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs, Alex has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.

    Follow the Hosts and the Guest:

    • Dheeraj Pandey:
      • LinkedIn: Dheeraj Pandey
      • Twitter: @dheeraj
    • Amit Prakash:
      • LinkedIn: Amit Prakash
      • Twitter: @amitp42
    • Alex Dimakis:
      • LinkedIn: Alex Dimakis
      • Twitter: @AlexGDimakis

    Share Your Thoughts:
    Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

    Don’t forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!

    1 hr 10 min
  • Rajat Monga on TensorFlow, Startups, and the Future of AI - Episode 09: The Effortless Podcast
    2024/12/18

    In this special guest episode of the Effortless Podcast, Amit Prakash sits down with Rajat Monga, a co-creator of TensorFlow and current Corporate Vice President of Engineering at Microsoft. With a career spanning Google Brain, founding Inference, and leading AI inferencing at Microsoft, Rajat offers a unique perspective on the evolution of AI. The conversation dives into TensorFlow’s revolutionary impact, the challenges of building startups, the rise of PyTorch, the future of inferencing, and how transformative tools like GPT-4 and Google’s Gemini are reshaping the AI landscape.

    Key Topics and Chapter Markers:

    • Introduction to Rajat Monga & TensorFlow Legacy [0:00]
    • The inflection points in AI: TensorFlow's role and challenges [6:00]
    • PyTorch vs. TensorFlow: A tale of shifting paradigms [16:00]
    • The startup journey: Building Inference and lessons learned [27:00]
    • Exploring O1 and advancements in reasoning frameworks [54:00]
    • AI inference: Cost optimizations and hardware innovations [57:00]
    • Agents, trust, and validation: AI in decision-making workflows [1:05:00]
    • Rajat’s personal journey: Tools for resilience and finding balance [1:20:00]

    Host:

    Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with a PhD in Computer Engineering. Amit has a strong track record in analytics, machine learning, and large-scale systems.

    Follow Amit on:

    LinkedIn - https://www.linkedin.com/in/amit-prakash-50719a2/

    X (Twitter) - https://x.com/amitp42

    Guest:

    Rajat Monga: A pioneer in the AI industry, best known as a co-creator of TensorFlow. He has held senior roles at Google Brain and Microsoft, shaping the foundational tools that power today’s AI systems. Rajat also co-founded Inference, a startup focused on anomaly detection in data analytics. At Microsoft, he leads AI software engineering, advancing inferencing infrastructure for the next generation of AI applications. He holds a B.Tech. degree from IIT Delhi.

    Follow Rajat on:

    LinkedIn - https://www.linkedin.com/in/rajatmonga/

    X (Twitter) - https://twitter.com/rajatmonga

    Share Your Thoughts: Have questions or comments? Drop us an email at EffortlessPodcastHQ@gmail.com

    1 hr 28 min
  • AI Agents, Workflows, and the Future of Enterprise Innovation - Episode 08: The Effortless Podcast
    2024/11/12

    In this episode, Amit and Dheeraj dive deep into the transformative world of AI agents and enterprise workflows. They explore the concept of “AgentOS”—a platform enabling modular agents, each equipped with specific skills to tackle distinct challenges within workflows. Drawing parallels between AI advancements and real-world enterprise needs, Amit and Dheeraj discuss the importance of balancing probabilistic AI-driven nodes with deterministic workflows to enhance efficiency without losing context or accuracy.

    Together, they examine industry needs, like reducing redundancy in incident management through vector databases, and predict the impact of AI agents on collaboration platforms, employee workflows, and customer support. This episode is a rich, thought-provoking journey through the latest in AI, where Amit and Dheeraj also offer their insights into enterprise AI adoption, future trends, and their forecasts for the next decade of AI-driven business transformation.
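
    As an illustrative aside (not material from the episode), the "reduce redundancy with a vector database" idea can be sketched as flagging new incidents whose embeddings are highly similar to ones already stored; the embedding source and similarity threshold below are assumptions for the sake of the example.

    ```python
    import numpy as np

    def is_duplicate(new_vec, existing_vecs, threshold=0.9):
        """Flag a new incident as redundant when its embedding has cosine
        similarity >= threshold to any previously stored incident. Embeddings
        are assumed to come from some text-embedding model; a real system
        would query a vector database rather than a NumPy array."""
        if len(existing_vecs) == 0:
            return False
        a = new_vec / np.linalg.norm(new_vec)
        B = existing_vecs / np.linalg.norm(existing_vecs, axis=1, keepdims=True)
        return bool((B @ a).max() >= threshold)

    # Toy example with random vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    existing = rng.normal(size=(5, 8))
    near_copy = existing[2] + 0.01 * rng.normal(size=8)
    print(is_duplicate(near_copy, existing))           # True: almost identical to incident 2
    print(is_duplicate(rng.normal(size=8), existing))  # almost certainly False: unrelated vector
    ```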

    Key Topics and Timestamps:

    • The Rise of AI Agents [0:14]
    • System 1 and System 2 Thinking in AI [1:50]
    • Workflows in Enterprise Automation [3:51]
    • Handling Context and Intent in Workflows [10:32]
    • Incident Management and Reducing Redundancy [43:57]
    • Collaboration Platforms’ Future [49:13]
    • Optimizing Enterprise Workflows with Agentic Systems [1:03:46]
    • The Enterprise AI Landscape and Predictions [1:13:18]

    Share Your Thoughts: Have questions or comments? Drop us an email at EffortlessPodcastHQ@gmail.com

    1 hr 27 min
  • Deep dive into Analytics - Episode 07: The Effortless Podcast
    2024/09/16

    In Episode 07 of The Effortless Podcast, Amit and Dheeraj embark on a deep dive into the world of analytics, unpacking its evolution, present state, and future in a tech landscape transformed by AI and cloud computing. The episode explores how analytics has grown from complex, on-premise systems to dynamic, cloud-powered solutions, accessible to anyone with a query. Amit and Dheeraj, with decades of combined experience in data architecture, share personal stories from their journeys at companies like Google, Oracle, and ThoughtSpot, alongside their perspectives on the transformations analytics has undergone over time. They highlight the shift from structured SQL-based data systems to the unstructured, AI-driven models of today.

    Key Topics and Chapter Markers:

    • The Foundations of Modern Analytics [00:03:00]
    • Cloud Computing’s Impact on Analytics [00:15:30]
    • Data Warehouse Wars: Snowflake vs. Databricks [00:26:15]
    • AI, LLMs, and the Future of Analytics [00:41:10]
    • Visualization as the Next Frontier in Analytics [00:45:00]
    • Reflections on the Path to AI [00:52:30]

    Share Your Thoughts: Have questions or comments? Drop us an email at EffortlessPodcastHQ@gmail.com

    53 min
  • History of AI - EP06 Part 2: The Effortless Podcast
    2024/07/29

    This episode continues the journey from Part 1, where hosts Amit Prakash and Dheeraj Pandey mapped out AI’s evolution from its inception in the 1950s to the transformative hardware advancements that laid the groundwork for modern machine learning. In Part 2, Amit and Dheeraj pick up the thread to explore the key concepts that have defined AI’s development in the past two decades. They delve into the significance of neural network embeddings, vector representations, and the role of innovative frameworks like TensorFlow and PyTorch, shedding light on how these tools have driven modern AI forward.

    Listeners will gain unique insights into the breakthroughs that made today’s language models possible—from the introduction of attention mechanisms to the 2017 release of the transformer model, which fundamentally reshaped natural language processing. Amit and Dheeraj also discuss the scaling of AI with powerful hardware, including GPUs, and the impact of transfer learning and reinforcement learning with human feedback (RLHF). This episode is an invaluable listen for anyone curious about the mechanics behind modern AI and the future of intelligent systems.
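
    For readers who want to see the core mechanic behind the attention discussion, here is a minimal NumPy sketch of scaled dot-product attention, the building block of the 2017 transformer; it is an illustrative aside, not material from the episode.

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                   # dot-product similarity of queries and keys
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
        return weights @ V                                # weighted sum of value vectors

    # Toy example: 3 tokens with 4-dimensional embeddings attending to each other.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(Q, K, V).shape)    # (3, 4)
    ```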

    Key Topics & Chapter Markers:

    • Recap from Part 1: The Early Years of AI [00:00:00]
    • AI Architecture & Oracle’s Innovation in Hash Joins [00:02:00]
    • Impact of Nature in Creative and Collaborative Work [00:05:00]
    • The Rise of Neural Networks: Language and Image Processing [00:10:00]
    • Sparse and Dense Vectors Explained [00:15:00]
    • Google Translate’s Early Approaches & Statistical Methods [00:20:00]
    • TensorFlow vs. PyTorch: Defining the Modern AI Framework [00:30:00]
    • Dot Products, Similarity, and the Concept of Attention [00:35:00]
    • Transformers & The Attention Mechanism Revolution [00:42:00]
    • BERT, GPT, and the Dawn of Transfer Learning [01:00:00]
    • The Road to ChatGPT and OpenAI’s Innovations [01:10:00]
    • The Future of AI and Computational Scaling [01:15:00]

    Share Your Thoughts: Have questions or comments? Drop us an email at EffortlessPodcastHQ@gmail.com

    1 hr 11 min
  • History of AI - EP06 Part 1: The Effortless Podcast
    2024/07/13

    In Part 1 of Episode 6, Amit Prakash and Dheeraj Pandey dive into the intriguing evolution of artificial intelligence, mapping its progress over the decades. This episode takes listeners through the origins of AI in the 1950s, where concepts initially stemmed from biological brain studies, to the major breakthroughs in computer science that have shaped AI development. Through analogies and real-world examples, they explore foundational ideas like neural networks, convergence, and the challenges of context and memory in recursive neural networks. They also delve into the impact of advancements in microprocessors, from Intel's complex instruction sets to NVIDIA's GPU innovations, explaining how these technologies enabled the computational leaps necessary for AI's growth.

    Join Amit and Dheeraj for this first segment on AI history as they lay down the complex, layered journey that has led to the AI advancements we see today.
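
    As a small illustrative aside (not from the episode itself), the "weighted sum plus gradient descent" picture the hosts describe can be sketched in a few lines of Python; the toy data and learning rate below are arbitrary choices for the example.

    ```python
    import numpy as np

    # A single artificial "neuron": a weighted sum of inputs, trained with
    # gradient descent on a mean-squared-error loss to recover known weights.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # 100 examples, 3 input features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w                           # targets produced by the "true" weighted sum

    w = np.zeros(3)                          # start from zero weights
    learning_rate = 0.1
    for _ in range(200):
        pred = X @ w                         # neuron output: weighted sum of inputs
        grad = 2 * X.T @ (pred - y) / len(X) # gradient of the mean squared error
        w -= learning_rate * grad            # one gradient descent step

    print(np.round(w, 3))                    # converges toward [ 2.  -1.   0.5]
    ```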

    Key Topics & Chapter Markers:

    • AI's Evolutionary Journey & Key Challenges [00:00:00]
    • Neural Networks: Inspiration from Biology [00:01:00]
    • Weighted Sum, Inputs & Mathematical Functions [00:05:00]
    • Gradient Descent & Optimization in Neural Nets [00:10:15]
    • Computing Architecture: CPUs vs. GPUs [00:39:56]
    • RNNs and Early Problems in Memory & Context [01:03:00]
    • The Emergence of Convolutional Neural Networks (CNNs) [01:10:00]
    • ImageNet, GPUs & Scaling Neural Networks [01:24:00]

    Share Your Thoughts: Have questions or comments? Drop us an email at EffortlessPodcastHQ@gmail.com

    1 hr 8 min