Episodes

  • #80 – Dan Williams on How Persuasion Works
    2024/10/26

    Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.

    You can find links and a transcript at www.hearthisidea.com/episodes/williams.

    We discuss:

    • If reasoning is so useful, why are we so bad at it?
    • Do some bad ideas really work like ‘mind viruses’? Is the ‘luxury beliefs’ concept useful?
    • What's up with the idea of a ‘marketplace for ideas’? Are people shopping for new beliefs, or to rationalise their existing attitudes?
    • How dangerous is misinformation, really? Can we ‘vaccinate’ or ‘inoculate’ against it?
    • Will AI help us form more accurate beliefs, or will it persuade more people of unhinged ideas?
    • Does fact-checking work?
    • Under transformative AI, should we worry more about the suppression or the proliferation of counter-establishment ideas?

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!

    1 hr 49 min
  • #79 – Tamay Besiroglu on Explosive Growth from AI
    2024/09/14

    Tamay Besiroglu is a researcher working on the intersection of economics and AI. He is currently the Associate Director of Epoch AI, a research institute investigating key trends and questions that will shape the trajectory and governance of AI.

    You can find links and a transcript at www.hearthisidea.com/episodes/besiroglu

    In this episode we talked about the prospect of explosive economic growth from AI. We talk about:

    • The argument for explosive growth from ‘increasing returns to scale’
    • Does AI need to be able to automate R&D to cause rapid growth?
    • Which theories of growth best explain the Industrial Revolution; and what do they predict from AI?
    • What happens to human incomes under near-total job automation?
    • Are regulations likely to slow down frontier AI progress enough to prevent this? Might AI go the way of nuclear power?
    • Will AI hit on resource or power limits before explosive growth? Won't it run out of data first?
    • Why aren't academic economists more interested in the prospect of explosive growth, if indeed it is so plausible?

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!

    2 hr 9 min
  • #78 – Jacob Trefethen on Global Health R&D
    2024/09/08

    Jacob Trefethen oversees Open Philanthropy’s science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge.

    You can find links and a transcript at www.hearthisidea.com/episodes/trefethen

    In this episode we talked about research and development for global health. We talk about:

    • Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading
    • How R&D for neglected diseases works —
      • How much does the world spend on it?
      • How do drugs for neglected diseases go from design to distribution?
    • No-brainer policy ideas for speeding up global health R&D
    • Comparing health R&D to public health interventions (like bed nets)
    • Comparing the social returns to frontier R&D (‘Progress Studies’) with those to global health R&D
    • Why is there no GiveWell-equivalent for global health R&D?
    • Won't AI do all the R&D for us soon?

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    2 hr 30 min
  • #77 – Elizabeth Seger on Open Sourcing AI
    2024/07/25

    Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI.

    You can find links and a transcript at www.hearthisidea.com/episodes/seger

    In this episode we talked about the risks and benefits of open source AI models. We talk about:

    • What ‘open source’ really means
    • What is (and isn’t) open about ‘open source’ AI models
    • How open source weights and code are useful for AI safety research
    • How and when the costs of open sourcing frontier model weights might outweigh the benefits
    • Analogies to ‘open sourcing nuclear designs’ and the open science movement

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    Note that this episode was recorded before the release of Meta’s Llama 3.1 family of models. Note also that in the episode Elizabeth referenced an older version of the definition maintained by OSI (roughly version 0.0.3). The current OSI definition (0.0.8) now does a much better job of delineating between different model components.

    1 hr 21 min
  • #76 – Joe Carlsmith on Scheming AI
    2024/03/16

    Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford.

    You can find links and a transcript at www.hearthisidea.com/episodes/carlsmith

    In this episode we talked about a report Joe recently authored, titled ‘Scheming AIs: Will AIs fake alignment during training in order to get power?’. The report “examines whether advanced AIs that perform well in training will be doing so in order to gain power later”, a behaviour Carlsmith calls scheming.

    We talk about:

    • Distinguishing ways AI systems can be deceptive and misaligned
    • Why powerful AI systems might acquire goals that go beyond what they’re trained to do, and how those goals could lead to scheming
    • Why scheming goals might perform better (or worse) in training than less worrying goals
    • The ‘counting argument’ for scheming AI
    • Why goals that lead to scheming might be simpler than the goals we intend
    • Things Joe is still confused about, and research project ideas

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    1 hr 52 min
  • #75 – Eric Schwitzgebel on Digital Consciousness and the Weirdness of the World
    2024/02/04

    Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include connections between empirical psychology and philosophy of mind, and the nature of belief. His most recent book is The Weirdness of the World.

    We talk about:

    • The possibility of digital consciousness
      • Policy ideas for avoiding major moral mistakes around digital consciousness
      • Prospects for the science of consciousness, and why we likely won't have clear answers in time
    • Why introspection is much less reliable than most people think
      • How and why we invent false stories about our own choices without realising
      • What randomly sampling people's experiences reveals about what we're doing with most of our attention
    • The possibility of 'overlapping minds'
    • How and why our actions might have infinite effects, both good and bad
      • Whether it would be good news to learn that our actions have infinite effects, or that the universe is infinite in extent
    • The best science fiction on digital minds and AI

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    1 hr 59 min
  • #74 – Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons
    2023/12/19

    Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of its Biodefense Program.

    In this episode we talk about:

    • Where the belief that 'bioweapons are easy to make' came from, and why it has been difficult to change
    • Why transferring tacit knowledge is so difficult, and the particular challenges that rogue actors face
    • What Sonia makes of the AI-bio risk discourse, and what types of technological advances would cause her concern

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    1 hr 54 min
  • Bonus: 'How I Learned To Love Shrimp' & David Coman-Hidy
    2023/11/24

    In this bonus episode we are sharing an episode by another podcast: How I Learned To Love Shrimp. It is co-hosted by Amy Odene and James Ozden, who together are "showcasing innovative and impactful ways to help animals".

    In this interview they speak to David Coman-Hidy, the former President of The Humane League, one of the largest farm animal advocacy organisations in the world. He now works as a Partner at Sharpen Strategy, coaching animal advocacy organisations.

    1 hr 19 min