Increments

Authors: Ben Chugg and Vaden Masrani
  • Summary

  • Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com.
    © 2024 Ben Chugg and Vaden Masrani

Episodes
  • #77 (Bonus) - AI Doom Debate (w/ Liron Shapira)
    2024/11/19
    Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating YouTube comments from the last episode?

    Follow Liron on Twitter (@liron) and check out the Doom Debates YouTube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).

    We discuss:
    • Definitions of "new knowledge"
    • The reliance of deep learning on induction
    • Can AIs be creative?
    • The limits of statistical prediction
    • Predictions of what deep learning cannot accomplish
    • Can ChatGPT write funny jokes?
    • Trends versus principles
    • The psychological consequences of doomerism

    Socials:
    • Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
    • Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
    • The world is going to end soon, might as well get exclusive bonus content by becoming a Patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
    • Click dem like buttons on YouTube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)

    Was Vaden's two-week anti-debate-bro reeducation camp successful? Tell us at incrementspodcast@gmail.com

    Special Guest: Liron Shapira.
    2 hr 21 min
  • #0 - Introduction
    2020/05/19

    Ben and Vaden attempt to justify why the world needs another podcast, and fail.

    8 min
  • #76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira)
    2024/11/08
    Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy YouTube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians.

    Follow Liron on Twitter (@liron) and check out the Doom Debates YouTube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).

    We discuss:
    • Whether we're concerned about AI doom
    • Bayesian reasoning versus Popperian reasoning
    • Whether it makes sense to put numbers on all your beliefs
    • Solomonoff induction
    • Objective vs subjective Bayesianism
    • Prediction markets and superforecasting

    References:
    • Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/thecredenceassumption/
    • Disproof of probabilistic induction (including Solomonoff induction): https://arxiv.org/abs/2107.00749
    • EA Forum post Vaden mentioned regarding predictions being uncalibrated more than one year out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
    • Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
    • Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
    • The Existential Risk Persuasion Tournament: https://www.astralcodexten.com/p/the-extinction-tournament
    • Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
    • Slides on content vs probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf

    Socials:
    • Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
    • Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
    • Trust in the reverend Bayes and get exclusive bonus content by becoming a Patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
    • Click dem like buttons on YouTube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)

    What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com

    Special Guest: Liron Shapira.
    2 hr 51 min
