Episodes

  • Diving Prompt First: Comparing Managing LLMs with Managing Employees (With Prompt Examples)
    2024/11/11

    We dive deep into Comparing Managing LLMs with Managing Employees (With Prompt Examples).

    23 min
  • Diving Prompt First: Comparing Managing LLMs with Managing Employees
    2024/11/11

    We dive into Comparing Managing LLMs with Managing Employees.

    13 min
  • Diving Prompt First: Goal Oriented Frameworks
    2024/11/11

    We discover how Goal Oriented Frameworks are useful for successful chatbot interactions.

    20 min
  • Diving Prompt First: Comparing Chatbots and Rain Man
    2024/11/11

    We compare communicating with Chatbots to communicating with Rain Man.

    24 min
  • Diving Prompt First: Self Consistency
    2024/10/11

    We discuss a technique called self-consistency, which enhances the reasoning capabilities of large language models (LLMs). The technique prompts an LLM to generate multiple reasoning paths for a question and then selects the most consistent answer among those paths. This improves the accuracy and reliability of LLMs, particularly for tasks requiring complex reasoning, such as arithmetic and commonsense reasoning. A minimal sketch of this sampling-and-voting loop appears after the episode list.

    10 min
  • Diving Prompt First: Automatic Reasoning and Tool-use (ART)
    2024/10/10

    ART addresses the limitations of traditional Chain-of-Thought (CoT) prompting by enabling LLMs to decompose tasks into multiple steps and use external resources, ultimately improving their performance on tasks that require reasoning and complex problem-solving. A rough sketch of this decompose-then-execute pattern appears after the episode list.

    11 min
  • Diving Prompt First: ReAct (Reason + Act)
    2024/10/09

    ReAct (Reason + Act) is a prompting technique that enhances the capabilities of Large Language Models (LLMs) by enabling them to reason, plan, and interact with external tools and data sources. It aims to overcome the limitation that traditional LLMs are restricted to their training data, leading to more accurate, reliable, and sophisticated applications. A toy version of the Thought/Action/Observation loop appears after the episode list.

    12 min
  • Diving Prompt First: Retrieval Augmented Generation (RAG)
    2024/10/08

    Retrieval Augmented Generation (RAG) is a technique that enhances large language models (LLMs) by integrating them with external knowledge sources. A small retrieve-then-prompt sketch appears after the episode list.

    12 min
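
Code sketches

For the Self Consistency episode: a minimal sketch of the sampling-and-voting loop described above, assuming only that some `generate(prompt)` callable wraps whatever LLM API you use. The `toy_generate` stand-in and the prompt wording are illustrative so the example runs on its own; they are not prescribed by the episode.

    import random
    import re
    from collections import Counter

    def self_consistent_answer(question, generate, n_samples=5):
        # Ask for several independent chains of thought and majority-vote
        # the final answers they end with.
        answers = []
        for _ in range(n_samples):
            reasoning = generate(
                f"Q: {question}\nThink step by step, then end with 'Answer: <value>'."
            )
            match = re.search(r"Answer:\s*(\S+)", reasoning)
            if match:
                answers.append(match.group(1))
        best, count = Counter(answers).most_common(1)[0]
        return best, count / len(answers)

    # Toy stand-in for an LLM call: usually reasons correctly, sometimes slips,
    # which is exactly the noise self-consistency is meant to average out.
    def toy_generate(prompt):
        return random.choice(
            ["3 + 4 = 7, then 7 * 2 = 14. Answer: 14"] * 4
            + ["3 + 4 = 7, then 7 + 2 = 9. Answer: 9"]
        )

    answer, agreement = self_consistent_answer("What is (3 + 4) * 2?", toy_generate)
    print(answer, f"(agreement: {agreement:.0%})")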
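
For the Automatic Reasoning and Tool-use (ART) episode: a rough sketch of the decompose-then-execute idea under loose assumptions. The model is asked, in one pass, to break a task into steps and mark tool steps with a tag such as `[calc: ...]`; the host then runs those steps. The tag format, the `calc` tool, and the hard-coded `model_plan` are all illustrative, not the actual ART prompts or task library.

    import re

    def run_tool_steps(program):
        # Replace every [calc: expression] tag in the model's plan with the
        # evaluated result, leaving plain reasoning steps untouched.
        def evaluate(match):
            return str(eval(match.group(1), {"__builtins__": {}}))  # toy calculator only
        return re.sub(r"\[calc:\s*([^\]]+)\]", evaluate, program)

    # Pretend this multi-step decomposition came back from the LLM for the task
    # "How many minutes of audio are in the last four episodes?"
    model_plan = (
        "Step 1: List the four durations: 10, 11, 12, 12 minutes.\n"
        "Step 2: Add them with a tool: [calc: 10 + 11 + 12 + 12]\n"
        "Step 3: Report the total from Step 2 as the answer."
    )

    print(run_tool_steps(model_plan))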
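
For the ReAct (Reason + Act) episode: a toy version of the reason/act loop, assuming a Thought/Action/Observation text format and a small tool registry. The scripted `fake_llm` turns stand in for real completions so the loop runs without an API; none of these names come from the episode itself.

    TOOLS = {
        "lookup": lambda query: {"capital of France": "Paris"}.get(query, "not found"),
    }

    # Scripted completions standing in for a real model, so the loop is runnable.
    SCRIPTED_TURNS = iter([
        "Thought: I need the capital of France.\nAction: lookup[capital of France]",
        "Thought: I have what I need.\nFinal Answer: Paris",
    ])

    def fake_llm(transcript):
        return next(SCRIPTED_TURNS)

    def react_loop(question, llm, max_steps=5):
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            turn = llm(transcript)
            transcript += turn + "\n"
            if "Final Answer:" in turn:
                return turn.split("Final Answer:", 1)[1].strip()
            if "Action:" in turn:
                # Parse e.g. "lookup[capital of France]", run the named tool, and
                # feed the observation back for the next turn.
                name, arg = turn.split("Action:", 1)[1].strip().split("[", 1)
                observation = TOOLS[name](arg.rstrip("]"))
                transcript += f"Observation: {observation}\n"
        return "no answer within step limit"

    print(react_loop("What is the capital of France?", fake_llm))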
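
For the Retrieval Augmented Generation (RAG) episode: a small retrieve-then-prompt sketch. Word-overlap scoring stands in for a real embedding index, and the three-document `KNOWLEDGE_BASE` is made up for illustration; the built prompt would be passed to whatever LLM call you already have.

    KNOWLEDGE_BASE = [
        "Self-consistency samples several reasoning paths and keeps the majority answer.",
        "ReAct interleaves reasoning steps with tool calls and their observations.",
        "Retrieval Augmented Generation grounds model answers in retrieved documents.",
    ]

    def retrieve(question, documents, top_k=2):
        # Rank documents by how many question words they share (toy retriever).
        question_words = set(question.lower().split())
        scored = sorted(
            documents,
            key=lambda doc: len(question_words & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

    def build_rag_prompt(question, documents):
        # Stuff the retrieved passages into the prompt so the model answers
        # from external knowledge rather than only its training data.
        context = "\n".join(f"- {doc}" for doc in retrieve(question, documents))
        return (
            "Answer the question using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {question}\nAnswer:"
        )

    print(build_rag_prompt("How does ReAct use tool calls?", KNOWLEDGE_BASE))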