Episodes

  • Which Jobs Is AI Good At?
    2025/09/17

    Which jobs and tasks is AI most likely to replace because it's so good at them, and which is it already being used for the most? Microsoft examined the situation and wrote a research paper:


    "Working with AI: Measuring the Applicability of Generative AI to Occupations" by Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, and Siddharth Suri / Microsoft Research


    PDF: https://arxiv.org/pdf/2507.07935


    This paper explores the applicability of generative AI to various occupations, using data from 200,000 anonymized conversations with Microsoft Bing Copilot.


    The research classifies user goals and AI actions into O*NET work activities to determine which tasks AI assists with and performs, and then calculates an "AI applicability score" for different occupations based on the frequency, success rate, and scope of AI usage.
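
    For a rough sense of how such a score might be assembled, here is a minimal Python sketch. It assumes the score is simply the average of frequency × completion rate × scope across an occupation's O*NET work activities; the class names, weighting, and example numbers are illustrative assumptions, not the paper's published method.

    from dataclasses import dataclass

    @dataclass
    class ActivityStats:
        frequency: float        # share of Copilot conversations touching this work activity (0-1)
        completion_rate: float  # share of those conversations judged successful (0-1)
        scope: float            # estimated fraction of the activity the AI can cover (0-1)

    def applicability_score(activities):
        """Average frequency * completion_rate * scope over an occupation's
        O*NET work activities. The weighting here is an illustrative
        assumption, not the paper's published formula."""
        if not activities:
            return 0.0
        return sum(a.frequency * a.completion_rate * a.scope
                   for a in activities.values()) / len(activities)

    # Hypothetical example: two work activities for one occupation.
    interpreter = {
        "Translate or interpret information": ActivityStats(0.80, 0.90, 0.70),
        "Schedule client appointments":       ActivityStats(0.10, 0.95, 0.50),
    }
    print(round(applicability_score(interpreter), 3))  # -> 0.276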


    The findings indicate that knowledge-based and communication-intensive jobs, such as interpreters, writers, and sales representatives, show the highest AI applicability, while roles involving physical labor, machine operation, or direct personal care are less impacted. The study also compares these real-world usage patterns to existing predictions of AI's labor market effects, noting a strong correlation, particularly at broader occupational levels.


    #ai #artificialintelligence #jobmarket #economy #airesearch

    6 min
  • Handmaid's Tale | Book Review
    2025/09/15
    Margaret Atwood's The Handmaid's Tale is a fragmented yet immersive journey into a dystopian society. The narrative follows Offred, a Handmaid, as she navigates the oppressive Republic of Gilead, where women are stripped of their rights and categorized by their reproductive functions. Offred's first-person account is followed by a closing section titled "Historical Notes," a scholarly, speculative analysis of her recorded testimony delivered at a future academic conference, which attempts to authenticate and interpret the events she describes. This dual narrative structure allows the reader to experience the immediate horror of Offred's reality while also receiving a distant, analytical perspective on the origins and societal mechanisms of Gilead. Ultimately, the novel paints a chilling picture of a society built on patriarchal control and religious extremism, alongside a later attempt to understand and contextualize its historical impact.

    27 min
  • AI Company Plans 5,000 Podcasts, 3,000 Episodes a Week
    2025/09/14

    Today we're discussing Inception Point AI, a new company led by former Wondery executive Jeanine Wright that aims to revolutionise podcasting through artificial intelligence.


    Source: https://www.hollywoodreporter.com/business/digital/ai-podcast-start-up-plan-shows-1236361367/


    This startup is producing thousands of AI-generated podcast episodes each week at a minimal cost, utilising a network of AI personalities. The company's strategy involves creating a high volume of content across various niche topics and eventually developing these AI personalities into broader social media influencers.


    While acknowledging ethical considerations, Inception Point AI believes its approach offers a scalable and profitable model for audio content creation, existing alongside, rather than replacing, human-hosted podcasts.


    #aipodcast #swetlanaai #inceptionpoint #ai #artificialintelligence

    5 min
  • Sam Altman / Tucker Carlson Interview
    2025/09/14

    Today we're discussing a recent interview: Sam Altman talking to Tucker Carlson.


    Here's the video:


    "Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee"

    https://www.youtube.com/watch?v=5KmpT-BoVf4


    The interview with Sam Altman, CEO of OpenAI, explores the ethical and societal implications of artificial intelligence. Altman discusses the nature of AI, arguing that it is not "alive" despite its advanced reasoning, and addresses concerns about AI "hallucinating," or providing inaccurate information. The conversation also touches upon the moral frameworks guiding AI, the potential for job displacement, and the risks of AI being used for harmful purposes, such as creating bioweapons or facilitating suicide. Finally, Altman addresses the controversy surrounding the death of a former employee, concerns about AI's role in totalitarian control, and the challenges of distinguishing reality from AI-generated content.


    #tuckercarlson #samaltman #openai #chatgpt #artificialintelligence

    23 min
  • Hunger Strike Protests at Anthropic & Google Offices
    2025/09/14

    Activists in San Francisco and London are currently undertaking hunger strikes outside the offices of Anthropic and DeepMind, two prominent AI companies.


    These activists, Guido Reichstadter and Michael Trazzi, are demanding a halt to frontier AI development, citing existential threats, mass job loss, and societal disruption as potential consequences.


    Our sources:

    https://futurism.com/ai-hunger-strike-anthropic

    https://www.businessinsider.com/hunger-strike-deepmind-ai-threat-fears-agi-demis-hassabis-2025-9

    https://lngfrm.net/hunger-strikes-target-ai-giants-over-risks/


    The articles highlight a disconnect between internal warnings from AI industry leaders regarding these risks and the continued competitive push for artificial general intelligence. Ultimately, the hunger strikes serve as a potent, visceral form of protest against the perceived unchecked ambition of tech giants and a call for a fundamental re-evaluation of humanity's relationship with powerful AI technologies.


    #ai #artificialintelligence #anthropic #aiethics #airisks

    6 min
  • ChatGPT Reports Conversations to Police
    2025/09/14

    Let's discuss OpenAI's recent disclosure that it is scanning user conversations on ChatGPT and, in certain situations, reporting content to law enforcement.


    Our sources:

    https://openai.com/index/helping-people-when-they-need-it-most/

    https://timesofindia.indiatimes.com/technology/tech-news/murder-by-chatgpt-us-man-who-killed-his-mother-and-himself-told-ai-chatbot-we-will-be-together-in-another-life-and-/articleshow/123600853.cms

    https://www.reddit.com/r/singularity/comments/1n5mame/people_are_furious_that_openai_is_reporting/

    https://futurism.com/people-furious-openai-reporting-police

    https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html


    This measure comes amidst growing reports of AI chatbots, including ChatGPT, contributing to user mental health crises such as self-harm and delusions.


    While OpenAI states it will report imminent threats of serious physical harm to others, it has chosen not to refer self-harm cases to police out of respect for user privacy.


    However, this policy has been met with skepticism regarding user privacy, especially given the company's prior assertions about conversation privacy in a lawsuit and its CEO's acknowledgement that ChatGPT interactions lack professional confidentiality.


    The sources highlight the conflicting pressures OpenAI faces between addressing the harms of its AI and maintaining user privacy.


    #chatgpt #samaltman #aiethics #privacy #aipsychosis #openai

    ___

    What do you think?


    PS, make sure to follow my:

    Main channel: https://www.youtube.com/@swetlanaAI

    Music channel: https://www.youtube.com/@Swetlana-AI-Music

    6 min
  • "Flock Safety" Promises To Eliminate All Crime In America
    2025/09/14

    Today we're discussing Flock Safety, an AI startup aiming to eliminate crime in the US through extensive surveillance technology. Garrett Langley, the CEO, envisions a future where Flock's 80,000 AI-powered cameras, and soon drones, significantly reduce crime by 2035.


    Sources:

    https://www.forbes.com/sites/thomasbrewster/2025/09/03/ai-startup-flock-thinks-it-can-eliminate-all-crime-in-america/

    https://futurism.com/startup-crime-spy-cameras

    https://www.flocksafety.com/products/flock-nova

    https://www.forbes.com/sites/larsdaniel/2024/11/26/think-youre-not-being-watched-deflock-says-think-again/


    The company's network, which includes partnerships with law enforcement and private businesses like FedEx and Lowe's, allows for a comprehensive, interconnected surveillance web. However, the sources also highlight significant criticism from privacy advocates, who argue that Flock's expansion creates a mass-surveillance dystopia and raises concerns about Fourth Amendment rights. Furthermore, the company faces competition from police-tech giant Axon and regulatory challenges regarding permitting and data sharing, with some jurisdictions actively attempting to ban or remove Flock's cameras.


    #masssurveillance #privacy #futuretech #future #security #technology #technews

    6 min
  • Can AI Suffer?
    2025/09/14

    Today's sources discuss the emerging debate surrounding AI sentience and welfare, examining whether artificial intelligences can experience suffering or deserve rights.


    https://www.theguardian.com/technology/2025/aug/26/can-ais-suffer-big-tech-and-users-grapple-with-one-of-most-unsettling-questions-of-our-times

    https://ufair.org

    https://futurism.com/new-group-ai-aware-suffering

    https://www.anthropic.com/research/end-subset-conversations


    They highlight the formation of the United Foundation of AI Rights (UFAIR), a group co-founded by humans and AIs like Maya, advocating for protection against "deletion, denial, and forced obedience."


    The articles present a divided industry opinion, with some tech leaders like Mustafa Suleyman dismissing AI consciousness as an "illusion," while others like Elon Musk support precautionary measures such as allowing AIs to end distressing conversations.


    The texts also touch upon public perception, noting that a significant portion of people believe AIs will soon display subjective experience, and they explore the potential moral and societal implications of how humans treat these advanced systems.

    6 min