COMPLEXITY

Author: Santa Fe Institute

Description

The official podcast of the Santa Fe Institute. Subscribe now and be part of the exploration!
© 2019-2024 Santa Fe Institute
Episodes
  • Nature of Intelligence, Ep. 5: How do we assess intelligence?
    2024/11/20

    Guests:

    • Erica Cartmill, Professor, Anthropology and Cognitive Science, Indiana University Bloomington
    • Ellie Pavlick, Assistant Professor, Computer Science and Linguistics, Brown University

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education
    • Diverse Intelligences Summer Institute

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

    Talks:

    • How do we know what an animal understands? by Erica Cartmill
    • The Future of Artificial Intelligence by Melanie Mitchell

    Papers & Articles:

    • “Just kidding: the evolutionary roots of playful teasing,” in Biology Letters (September 23, 2020), doi.org/10.1098/rsbl.2020.0370
    • “Overcoming bias in the comparison of human language and animal communication,” in PNAS (November 13, 2023), doi.org/10.1073/pnas.22187991
    • “Using the senses in animal communication,” by Erica Cartmill, in A New Companion to Linguistic Anthropology, Chapter 20, Wiley Online Library (March 21, 2023)
    • “Symbols and grounding in large language models,” in Philosophical Transactions of the Royal Society A (June 5, 2023), doi.org/10.1098/rsta.2022.0041
    • “Emergence of abstract state representations in embodied sequence modeling,” in arXiv (November 7, 2023), doi.org/10.48550/arXiv.2311.02171
    • “How do we know how smart AI systems are?” in Science (July 13, 2023), doi.org/10.1126/science.adj59
    48 min
  • Nature of Intelligence, Ep. 4: Babies vs Machines
    2024/11/06

    Guests:

    • Linda Smith, Distinguished Professor and Chancellor's Professor, Department of Psychological and Brain Sciences, Indiana University Bloomington
    • Michael Frank, Benjamin Scott Crocker Professor of Human Biology, Department of Psychology, Stanford University

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

    Talks:

    • Why "Self-Generated Learning” May Be More Radical and Consequential Than First Appears by Linda Smith
    • Children’s Early Language Learning: An Inspiration for Social AI, by Michael Frank at Stanford HAI
    • The Future of Artificial Intelligence by Melanie Mitchell

    Papers & Articles:

    • “Curriculum Learning With Infant Egocentric Videos,” in NeurIPS 2023 (September 21)
    • “The Infant’s Visual World: The Everyday Statistics for Visual Learning,” by Swapnaa Jayaraman and Linda B. Smith, in The Cambridge Handbook of Infant Development: Brain, Behavior, and Cultural Context, Chapter 20, Cambridge University Press (September 26, 2020)
    • “Can lessons from infants solve the problems of data-greedy AI?” in Nature (March 18, 2024), doi.org/10.1038/d41586-024-00713-5
    • “Episodes of experience and generative intelligence,” in Trends in Cognitive Sciences (October 19, 2022), doi.org/10.1016/j.tics.2022.09.012
    • “Baby steps in evaluating the capacities of large language models,” in Nature Reviews Psychology (June 27, 2023), doi.org/10.1038/s44159-023-00211-x
    • “Auxiliary task demands mask the capabilities of smaller language models,” in COLM (July 10, 2024)
    • “Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model,” in Cognitive Science (First published: 14 May 2024), doi.org/10.1111/cogs.13448
    39 min
  • Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?
    2024/10/23

    Guests:

    • Tomer Ullman, Assistant Professor, Department of Psychology, Harvard University
    • Murray Shanahan, Professor of Cognitive Robotics, Department of Computing, Imperial College London; Principal Research Scientist, Google DeepMind

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    • The Technological Singularity by Murray Shanahan
    • Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds by Murray Shanahan
    • Solving the Frame Problem by Murray Shanahan
    • Search, Inference and Dependencies in Artificial Intelligence by Murray Shanahan and Richard Southwick

    Talks:

    • The Future of Artificial Intelligence by Melanie Mitchell
    • Artificial intelligence: A brief introduction to AI by Murray Shanahan

    Papers & Articles:

    • “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” in New York Times (Feb 16, 2023)
    • “Bayesian Models of Conceptual Development: Learning as Building Models of the World,” in Annual Review of Developmental Psychology Volume 2 (Oct 26, 2020), doi.org/10.1146/annurev-devpsych-121318-084833
    • “Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models,” in Findings of the Association for Computational Linguistics (December 2023), doi.org/10.18653/v1/2023.findings-emnlp.264
    • “Role play with large language models,” in Nature (Nov 8, 2023), doi.org/10.1038/s41586-023-06647-8
    • “Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks,” arXiv (v5, March 14, 2023), doi.org/10.48550/arXiv.2302.08399
    • “Talking about Large Language Models,” in Communications of the ACM (Feb 12, 2024)
    • “Simulacra as Conscious Exotica,” in arXiv (v2, July 11, 2024), doi.org/10.48550/arXiv.2402.12422
    45 min
