Synopsis & Commentary
The machines are coming. Scratch that—they're already here: AIs that propose new combinations of ideas; chatbots that help us summarize texts or write code; algorithms that tell us who to friend or follow, what to watch or read. For a while the reach of intelligent machines may have seemed somewhat limited. But not anymore—or, at least, not for much longer. The presence of AI is growing, accelerating, and, for better or worse, human culture may never be the same.

My guest today is Dr. Iyad Rahwan. Iyad directs the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin. Iyad is a bit hard to categorize. He's equal parts computer scientist and artist; one magazine profile described him as "the Anthropologist of AI." Labels aside, his work explores the emerging relationships between AI, human behavior, and society. In a recent paper, Iyad and colleagues introduced a framework for understanding what they call "machine culture." The framework offers a way of thinking about the different routes through which AI may transform—is transforming—human culture.

Here, Iyad and I talk about his work as a painter and how he brings AI into the artistic process. We discuss whether AIs can make art by themselves and whether they may eventually develop good taste. We talk about how AlphaGo Zero upended the world of Go and about how LLMs might be changing how we speak. We consider what AIs might do to cultural diversity. We discuss the field of cultural evolution and how it provides tools for thinking about this brave new age of machine culture. Finally, we discuss whether any spheres of human endeavor will remain untouched by AI influence.

Before we get to it, a humble request: If you're enjoying the show—and it seems that many of you are—we would be ever grateful if you could let the world know. You might do this by leaving a rating or review on Apple Podcasts, or maybe a comment on Spotify. You might do this by giving us a shout-out on the social media platform of your choice. Or, if you prefer less algorithmically mediated avenues, you might do this just by telling a friend about us face-to-face. We're hoping to grow the show, and the best way to do that is through listener endorsements and word-of-mouth. Thanks in advance, friends.

Alright, on to my conversation with Dr. Iyad Rahwan. Enjoy!

A transcript of this episode will be available soon.

Notes and links

3:00 – Images from Dr. Rahwan's ‘Faces of Machine’ portrait series. One of the portraits from the series serves as our tile art for this episode.

11:30 – The “stochastic parrots” term comes from an influential paper by Emily Bender and colleagues.

18:30 – A popular article about DALL-E and the “avocado armchair.”

21:30 – Ted Chiang’s essay, “Why A.I. Isn’t Going to Make Art.”

24:00 – An interview with Boris Eldagsen, who won the Sony World Photography Awards in March 2023 with an image that was later revealed to be AI-generated.

28:30 – A description of the concept of “science fiction science.”

29:00 – Though widely attributed to different sources, Isaac Asimov appears to have developed the idea that good science fiction predicts not the automobile, but the traffic jam.

30:00 – The academic paper describing the Moral Machine experiment. You can judge the scenarios for yourself (or design your own scenarios) here.

30:30 – An article about the Nightmare Machine project; an article about the Deep Empathy project.
37:30 – An article by Cesar Hidalgo and colleagues about the relationship between television/radio and global celebrity.

41:30 – An article by Melanie Mitchell (former guest!) on AI and analogy. A popular piece about that work.

42:00 – A popular article describing the study of whether AIs can generate original research ideas. The preprint is here.

46:30 – For more on AlphaGo (and its successors, AlphaGo Zero and AlphaZero), see here.

48:30 – The study finding that the novelty of human Go playing increased due to the influence of AlphaGo.

51:00 – A blogpost delving into the idea that ChatGPT overuses certain words, including “delve.” A recent preprint by Dr. Rahwan and colleagues presenting evidence that “delve” (and other words overused by ChatGPT) are now being used more in human spoken communication.

55:00 – A paper using simulations to show how LLMs can “collapse” when trained on data that they themselves generated.

1:01:30 – A review of the literature on filter bubbles, echo chambers, and polarization.

1:02:00 – An influential study by Dr. Chris Bail and colleagues suggesting that exposure to opposing views might actually increase polarization.

1:04:30 – A book by Geoffrey Hodgson and Thorbjørn Knudsen, who are often credited with developing the idea of “generalized Darwinism” in the social sciences.

1:12:00 – An article about Google’s NotebookLM podcast-like audio summaries...