• AI accuracy: New models and progress on hallucinations

  • 2024/10/21
  • Duration: 14 min
  • Podcast

AI accuracy: New models and progress on hallucinations

  • Summary

  • What's the latest with artificial intelligence and models' accuracy? Three objectives:

  • Compare and contrast traditional autoregressive LLMs (Gemini/ChatGPT/Claude/LLaMA) vs non-autoregressive AI (NotebookLM) vs chain-of-thought reasoning models (Strawberry o1), including the benefits, detriments, and tradeoffs of each (see the decoding sketch below)
  • Error rates for NotebookLM vs traditional LLMs vs CoT reasoning models, focusing on the accuracy benefits of the smaller corpus of curated source materials that users feed NotebookLM when prompting (vs an LLM trained on, and inferencing from, the entire web)
  • Which model (or combination of models) will be the future standard?

  See also: 🧵 https://x.com/AnthPB/status/1848186962856865904

  This podcast is AI-generated with NotebookLM, using the following sources, research, and analysis:

  • An Investigation of Language Model Interpretability via Sentence Editing (OSU, Stevens, 2021.04)
  • Are Auto-Regressive Large Language Models Here to Stay? (Medium, Bettencourt, 2023.12.28)
  • Attention Is All You Need (Google Brain, Vaswani/Shazeer/Parmar/Uszkoreit/Jones/Gomez/Kaiser/Polosukhin, 2017.06.12)
  • BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (Facebook AI, Lewis/Liu/Goyal/Ghazvininejad/Mohamed/Levy/Stoyanov/Zettlemoyer, 2019.10.29)
  • Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs (Zhang/Du/Pang/Liu/Gao/Lin, 2024.06.13)
  • Contra LeCun on "Autoregressive LLMs are doomed" (LessWrong, rotatingpaguro, 2023.04.10)
  • Do large language models need sensory grounding for meaning and understanding? (LeCun, 2023.03.24)
  • Experimenting with Power Divergences for Language Modeling (Labeau/Cohen, 2019)
  • Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel/Shazeer/Roberts/Lee/Narang/Matena/Zhou/Li/Liu, 2023.09.19)
  • Improving Non-Autoregressive Translation Models Without Distillation (Huang/Perez/Volkovs, 2022.01.28)
  • Non-Autoregressive Neural Machine Translation (Gu/Bradbury/Xiong/Li/Socher, 2017.11.27)
  • On the Learning of Non-Autoregressive Transformers (Huang/Tao/Zhou/Li/Huang, 2022.06.13)
  • Towards Better Chain-of-Thought Prompting Strategies: A Survey (Yu/He/Wu/Dai/Chen, 2023.10.08)
  • XLNet: Generalized Autoregressive Pretraining for Language Understanding (Yang/Dai/Yang/Carbonell/Salakhutdinov/Le, 2020.01.22)

  Not investment advice; do your own due diligence!
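  For readers unfamiliar with the three model families compared above, here is a minimal, self-contained Python sketch (not from the episode) that only illustrates the dependency structure of each: autoregressive decoding emits one token at a time conditioned on the prefix, non-autoregressive decoding predicts all positions in a single parallel pass, and chain-of-thought prompting asks the model to emit intermediate reasoning tokens. The toy functions and canned outputs are hypothetical stand-ins, not any real model's API.

  ```python
  # Toy illustration of autoregressive vs non-autoregressive decoding and a CoT prompt.
  # All "model" logic here is hypothetical; it only shows the dependency structure.
  import random

  VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

  def toy_next_token(prefix):
      # Autoregressive step: the next token is conditioned on everything generated
      # so far, so one forward pass is needed per token and errors can compound.
      canned = {0: "the", 1: "cat", 2: "sat", 3: "on", 4: "the", 5: "mat"}
      return canned.get(len(prefix), "<eos>")

  def autoregressive_decode(max_len=10):
      out = []
      for _ in range(max_len):
          tok = toy_next_token(out)
          if tok == "<eos>":
              break
          out.append(tok)
      return out  # sequential: len(out) passes

  def non_autoregressive_decode(length=6):
      # Non-autoregressive step: every position is predicted in one shot,
      # conditionally independent of the other output positions given the input.
      return [random.choice(VOCAB[:-1]) for _ in range(length)]  # single parallel pass

  # Chain-of-thought: the prompt elicits intermediate reasoning before the answer.
  COT_PROMPT = (
      "Q: A ticket costs $12 and I buy 3. How much do I spend?\n"
      "A: Let's think step by step. Each ticket is $12; 3 x 12 = 36. Answer: $36."
  )

  if __name__ == "__main__":
      print("autoregressive:     ", " ".join(autoregressive_decode()))
      print("non-autoregressive: ", " ".join(non_autoregressive_decode()))
      print(COT_PROMPT)
  ```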

Listener reviews for AI accuracy: New models and progress on hallucinations
