• S3E11 - Intentional AI: AI video rewards planning, not your ideas
    2026/03/10
    AI video tools promise fast, cheap production. But what you get back depends entirely on how much thinking you did before you hit enter.

    In Episode 11 of Season 3's Intentional AI series, Virgil and Cole take on AI video generation, arguably the most complex and most hyped area of AI content creation. Video production has always been expensive, often running thousands of dollars per minute through traditional workflows. AI tools are pitched as the solution to that cost. The reality is more complicated.

    The core question Cole raises early: are you using AI for speed, or for creativity? With video, that matters even more than with text or images, because the ability to edit what AI generates is extremely limited. You are largely working with what comes back.

    Virgil tested three tools, Claude, Artlist, and Sora, using the same prompt and the same source article the series has been following. The results varied wildly. Some tools produced clean, factually grounded output that could serve as a foundation with additional editing. Others burned through resources quickly and delivered results that raised more questions than they answered (to put it lightly). Each tool had tradeoffs between creative quality, turnaround time, cost, and practical usability.

    The pattern held across the board: AI video does not reward vague ideas. It rewards storyboarding, defined objectives, and clear constraints. The most realistic use case is not generating entire videos from scratch, but using AI for individual pieces -- a specific animation, a graphic element, a rough draft to react to.

    AI video is getting better. But better does not mean ready.

    Previously in the Intentional AI series:

    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Maximizing AI for Research and Analysis
    Episode 3: Smarter Content Creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility
    Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    Episode 7: Why AI can make your content personalization worse
    Episode 8: The real value of AI wireframes is NOT the wireframes
    Episode 9: Just because AI can create images doesn't mean you should use them
    Episode 10: The Super Bowl didn't sell AI, it exposed it

    New episodes every other Tuesday.

    For more conversations about AI, design, and digital strategy, visit https://www.highmonkey.com/podcast and subscribe on your favorite podcast platform.

    (0:00) - Intro
    (1:04) - The good & bad of AI video generation
    (1:30) - Are you using AI for speed or creativity?
    (3:56) - Structure up front = your best friend
    (7:28) - How Coinbase used simplicity to stand out
    (8:42) - More Super Bowl AI narrative unpacking
    (10:20) - We tested 3 tools for AI video generation
    (11:47) - Testing Claude
    (14:24) - Testing Artlist
    (16:32) - Testing Sora
    (19:03) - Closing thoughts & takeaways
    (21:40) - Outro

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:
    https://www.linkedin.com/company/discussing-stupid
    https://www.instagram.com/discussingstupid/
    https://www.facebook.com/discussingstupid
    https://x.com/DiscussStupid
    23 min
  • S3E10 - Intentional AI: The Super Bowl didn't sell AI, it exposed it
    2026/02/24

    In Episode 10, we take a short detour in the Intentional AI series to talk about the Super Bowl. Not the game. The ads. A noticeable chunk of them leaned hard into AI. On the surface, it felt like a big moment for the industry. But when you look closer, it raises a different question: are we watching real progress, or just very expensive hype?

    We unpack what was actually being sold, what was implied, and what gets left out when AI is positioned as effortless.

    AI has value. We are not arguing that it does not. But it works best when it is used intentionally and within clear boundaries. When it is marketed as a replacement for thinking, planning, or strategy, that is where things fall apart.


    If you are trying to separate signal from noise, this one is for you.


    Previously in the Intentional AI series:

    1. Episode 1: Intentional AI and the Content Lifecycle
    2. Episode 2: Maximizing AI for Research and Analysis
    3. Episode 3: Smarter Content Creation with AI
    4. Episode 4: The role of AI in content management
    5. Episode 5: How much can you trust AI for accessibility
    6. Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    7. Episode 7: Why AI can make your content personalization worse
    8. Episode 8: The real value of AI wireframes is NOT the wireframes
    9. Episode 9: Just because AI can create images doesn't mean you should use them


    New episodes every other Tuesday.


    For more conversations about AI, design, and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.


    (0:00) - Intro

    (0:42) - We had to talk about the Super Bowl

    (2:05) - The numbers behind AI in the Super Bowl

    (3:55) - How AI is marketed vs reality of AI

    (7:30) - This is why we started Intentional AI

    (8:30) - Reflections on the current realities of AI

    (13:20) - Where does AI make the most sense?

    (15:30) - Our reaction to the AI generated ads

    (17:30) - Join us and learn to be responsible with AI

    (19:00) - Outro


    Disclaimer: there is a math error at 18:15. The correct calculation is closer to $100-150 million.


    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:

    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024

    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0

    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:

    https://www.linkedin.com/company/discussing-stupid

    https://www.instagram.com/discussingstupid/

    https://www.facebook.com/discussingstupid

    https://x.com/DiscussStupid

    20 min
  • S3E9 - Intentional AI: Just because AI can create images doesn't mean you should use them
    2026/02/10
    In Episode 9 of the Intentional AI series, Cole and Virgil take on one of the most common and misunderstood uses of AI today: image and graphic generation. From social media visuals to promotional graphics, AI images are fast, easy, and everywhere.

    The conversation focuses on why images became the public on-ramp to AI and why that familiarity creates risk. Visuals feel harmless, but the moment AI starts generating finished-looking images, teams inherit decisions around ownership, ethics, and trust that they are often unprepared to make.

    A central theme of the episode is responsibility escalation. As AI reduces the effort required to create images, the importance of human judgment increases. Treating AI generated visuals as final work can quickly introduce legal, ethical, and reputational problems.

    Virgil shares a practical experiment where he used a simple prompt to generate three social media promotional graphics from an existing article and tested the results across three tools: Canva, Claude, and Artlist.

    Canva produced the most generic and repetitive designs. Claude delivered cleaner structure and stronger messaging but struggled with fonts, formats, and variation. Artlist created the most visually interesting outputs, though it introduced workflow limitations and cost concerns.

    The episode reinforces a consistent conclusion across the series. AI can help jumpstart visual work, but it cannot replace judgment, intent, or responsibility.

    In this episode, they explore:

    Why AI images are so tempting to use
    Where AI generated graphics actually help
    Why most AI visuals fall flat
    Ethical and ownership risks teams overlook
    A comparison of Canva, Claude, and Artlist

    A downloadable Episode Companion Guide is available below with example outputs and tool takeaways.

    https://links.discussingstupid.com/s3e9companion

    Previously in the Intentional AI series:

    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Maximizing AI for Research and Analysis
    Episode 3: Smarter Content Creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility
    Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    Episode 7: Why AI can make your content personalization worse
    Episode 8: The real value of AI wireframes is NOT the wireframes

    New episodes every other Tuesday.

    For more conversations about AI, design, and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    (0:00) - Intro
    (1:40) - You can’t escape AI imagery
    (3:18) - Why AI images are risky
    (4:40) - The legal and ethical line
    (6:15) - Creativity vs time and cost
    (9:28) - Every tool has hopped on the AI bandwagon
    (13:20) - The slippery slope of AI visuals
    (15:35) - We tested 3 tools for AI visuals
    (17:30) - Testing Canva
    (20:40) - Testing Claude (Opus)
    (22:15) - Testing Artlist
    (24:15) - Tool testing takeaways
    (26:45) - Closing thoughts
    (28:00) - Outro

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:
    https://www.linkedin.com/company/discussing-stupid
    https://www.instagram.com/discussingstupid/
    https://www.facebook.com/discussingstupid
    https://x.com/DiscussStupid
    29 min
  • S3E8 - Intentional AI: The real value of AI wireframes is NOT the wireframes
    2026/01/28
    In Episode 8 of the Intentional AI series, Cole, Virgil, and Chad explore one of the most tempting uses of AI in digital work: wireframing and page layout. With AI now able to generate full wireframes in minutes or even seconds, the promise of speed is undeniable. But speed alone is not the point.

    The conversation focuses on where AI genuinely helps in the wireframing process and where it introduces new risks. Wireframes are meant to establish structure, hierarchy, and intent, not just visual output. While AI can quickly generate layouts, components, and patterns, it still requires strong human judgment to evaluate what is correct, what is missing, and what could cause problems downstream.

    A key theme of the episode is escalation of responsibility. As AI reduces the time required to create wireframes, the importance of human review, direction, and decision making increases. Treating AI generated wireframes as finished work can introduce serious risks, especially around accessibility, content fidelity, maintainability, and overall project direction.

    Virgil shares an experiment where he used AI to first generate a detailed prompt for wireframing, then tested that prompt across three tools: Claude, Google Gemini 3, and Figma Make. The results reveal clear differences in layout quality, accessibility handling, content retention, and how easily the outputs could be integrated into real workflows.

    Claude produced the strongest layout and structural patterns but failed badly on accessibility and removed large portions of content. Gemini generated simpler wireframes with clearer structure, but used even less content and still struggled with accessibility. Figma Make stood out for workflow integration, retaining all content and allowing direct editing inside Figma, though it also failed accessibility requirements and relied heavily on generic styling and placeholder imagery.

    Throughout the episode, the group returns to the same conclusion. AI is extremely effective at getting the first portion of wireframing done quickly. It is far less effective at making judgment calls, enforcing standards, or understanding context without guidance.

    In this episode, they explore:

    How wireframing fits into the content lifecycle
    Why speed changes the risk profile of design work
    Using AI to generate prompts instead of starting from scratch
    Where AI wireframes succeed and where they fail
    Accessibility and content risks in AI generated layouts
    A wireframing comparison of Claude, Gemini 3, and Figma Make

    A downloadable Episode Companion Guide is available below with tool comparisons and key takeaways.

    DS-S3-E8-CompanionDoc.pdf

    Previously in the Intentional AI series:

    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Maximizing AI for Research & Analysis
    Episode 3: Smarter Content Creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility?
    Episode 6: You’re asking AI to solve the wrong problems for SEO/GEO/AEO
    Episode 7: Why AI can make your content personalization worse

    New episodes every other Tuesday.

    For more conversations about AI, design, and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    (0:00) - Intro
    (1:12) - Why wireframing belongs in the content lifecycle
    (2:24) - Wireframing is hard / The appeal of AI here
    (4:08) - Using AI to create the prompt for wireframing
    (5:27) - Why prompt creation unlocks the real value
    (7:15) - AI wireframing = filling in blanks & reacting
    (10:34) - Risks for teams without wireframing expertise
    (12:21) - Using AI to ask better questions, not skip thinking
    (13:57) - Iterating prompts and adding constraints
    (15:24) - We tested 3 AI tools for wireframing
    (15:56) - Testing Claude
    (19:41) - Testing Gemini
    (21:05) - Testing Figma Make
    (24:56) - Practical takeaways and best use cases
    (26:50) - Outro

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:
    https://www.linkedin.com/company/discussing-stupid
    https://www.instagram.com/discussingstupid/
    https://www.facebook.com/discussingstupid
    https://x.com/DiscussStupid
    29 min
  • S3E7 - Intentional AI: Why AI can make your content personalization worse
    2026/01/13
    In Episode 7 of the Intentional AI series, Cole and Virgil focus on content personalization and why it is one of the most overpromised areas of AI. While personalization is often positioned as simple and automated, doing it well requires far more clarity and intent than most tools suggest.

    They break personalization into two main approaches. Role based personalization tailors messages for specific audiences or job functions, while behavioral personalization adapts experiences based on how people interact with content over time. The conversation also touches on predictive analysis and where AI may eventually help interpret patterns across analytics data.

    A central theme of the episode is trust. Using AI for personalization assumes the system understands audience priorities and pain points. Without clear direction, AI fills in the gaps with assumptions. Cole and Virgil explain why personalization has always been difficult to implement, why adoption remains low, and why AI does not remove the need for strategy, measurement, or human judgment.

    The episode also addresses the risks of personalization. Messages that are too generic get ignored, while messages that feel overly personal can cross into uncomfortable territory. Finding the right balance is still a human responsibility.

    In the second half of the episode, they continue their ongoing experiment using the same AI written accessibility article from earlier episodes. This time, they test three tools by asking them to generate role based promotional emails for a head of web marketing, a director of information technology, and a C level executive. The results highlight meaningful differences in tone, structure, and assumptions across tools.

    The takeaway is consistent with the Intentional AI series. AI can support personalization, but only when you define goals, outcomes, and boundaries first.

    In this episode, they explore:

    What content personalization actually means
    Role based versus behavioral personalization
    Why personalization adoption remains low
    The balance between relevance and creepiness
    How AI supports personalization without replacing strategy
    A role based email comparison of Perplexity, Copilot, and Claude

    A downloadable Episode Companion Guide is available below with tool comparisons and practical takeaways.

    DS-S3-E7-CompanionDoc.pdf

    Previously in the Intentional AI series:

    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Using AI for Research and Analysis
    Episode 3: AI and Content Creation
    Episode 4: Content Management and AI
    Episode 5: How much can you trust AI for accessibility?
    Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO

    New episodes every other Tuesday.

    For more conversations about AI and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    (0:00) - Intro
    (0:56) - Delivering tailored content with AI
    (1:30) - Different kinds of AI personalization
    (4:10) - Why personalization can be tricky
    (5:00) - The need for measurement and outcomes
    (7:45) - The Personalization Pendulum™
    (10:00) - The work doesn’t go away!
    (13:10) - We tested 3 AI tools for personalization
    (16:10) - Testing Perplexity
    (18:10) - Testing Copilot & Claude
    (19:20) - Explaining our prompting process
    (21:25) - The topic of AI replacing human labor
    (24:30) - Outro

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:
    https://www.linkedin.com/company/discussing-stupid
    https://www.instagram.com/discussingstupid/
    https://www.facebook.com/discussingstupid
    https://x.com/DiscussStupid
    26 min
  • S3E6 - Intentional AI: You’re asking AI to solve the wrong problems for SEO/GEO/AEO
    2025/12/16
    In Episode 6 of the Intentional AI series, Cole, Virgil, and Seth move into the visibility stage of the content lifecycle and tackle a common mistake they see everywhere. Teams keep treating SEO, GEO, and AEO as optimization problems, when in reality they are content quality, structure, and clarity problems.

    Search engines and generative models have both gotten smarter. Keyword tricks, shortcuts, and “secret sauce” tactics no longer work the way they once did. Instead, visibility now depends on clear intent, strong structure, accessible language, and content that actually helps people. The group looks at how SEO history is repeating itself, why organizations keep chasing hacks, and how that mindset actively works against long-term discoverability.

    They also dig into how SEO, GEO, and AEO overlap, where they differ, and why writing exclusively for AI can backfire by alienating human readers. The conversation covers content modeling, headless-style structures, and why these approaches help machines understand relationships without sacrificing usability.

    A major focus of the episode is schema. The team explains why schema is becoming increasingly important for generative engines, why it is difficult and error-prone to manage at scale, and where AI can help draft complex schema structures without fully understanding context. This leads to a broader point: AI can accelerate specific tasks, but it cannot replace judgment, prioritization, or review.

    In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. They test how three tools approach GEO-focused improvements. Each tool surfaces different insights, none of them are complete on their own, and all of them require human decision-making to be useful.

    The takeaway is consistent with the theme of the series. AI is powerful when you ask it to solve the right problems, and dangerous when you expect it to fix foundational issues for you.

    In this episode, they explore:

    Why SEO, GEO, and AEO fail when treated as optimization tricks
    How search has shifted from keywords to clarity, structure, and intent
    Where SEO and GEO overlap and where they meaningfully diverge
    The risk of writing for AI instead of for people
    Why content modeling supports both search engines and generative engines
    How AI can assist with schema creation and where humans must intervene
    Why repeating the same schema everywhere weakens its value
    A GEO-focused comparison of Writesonic, Grammarly, and Claude
    Why broad prompts underperform and targeted prompts lead to better outcomes

    A downloadable Episode Companion Guide is available below. It includes tool notes, schema examples, prompt guidance, and practical takeaways for applying AI to search without losing clarity or control.

    DS-S3-E6-CompanionDoc.pdf

    Previously in the Intentional AI series:

    Episode 1: Applying AI to the content lifecycle
    Episode 2: Maximizing AI for research and analysis
    Episode 3: Smarter content creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility?

    Upcoming episodes in the Intentional AI series:

    Jan 6, 2026 – Content Personalization
    Jan 20, 2026 – Wireframing and Layout
    Feb 3, 2026 – Design and Media
    Feb 17, 2026 – Back End Development
    Mar 3, 2026 – Conversational Search (with special guest)
    Mar 17, 2026 – Chatbots and Agentic AI
    Mar 31, 2026 – Series Finale and Tool Review

    Holiday break notice: Discussing Stupid will be taking a short break for the holidays. The next new episode will be released on January 6th.

    Whether you work on websites, structured content, or digital strategy, this episode is about recognizing when AI is being asked to solve the wrong problems. The goal is not more optimization. It is clearer intent, better structure, and content that actually deserves to be found.

    New episodes every other Tuesday.

    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters

    (0:00) - Intro
    (0:37) - Boosting your SEO, GEO & AEO with AI
    (1:10) - Virgil on how SEO history is repeating itself
    (4:08) - Defining SEO & GEO overlaps
    (7:04) - Is a headless CMS better for GEO?
    (8:27) - Schema generation is awesome with AI
    (13:54) - If you tag everything, you’ve tagged nothing
    (15:18) - We tested 3 AI tools for SEO/GEO/AEO
    (16:39) - Testing Writesonic
    (18:16) - Testing Grammarly
    (19:33) - Testing Claude
    (20:54) - Every AI tool has gaps & you’re the filler
    (23:49) - Next episode preview…
    (24:55) - Outro

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://...
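    For readers unfamiliar with the schema the episode discusses: this refers to schema.org structured data, commonly embedded in pages as JSON-LD, which helps search and generative engines understand what a page is about. As a minimal sketch only (the field values below are placeholders, not an example from the episode), drafting such a block programmatically might look like:

```python
import json

# Illustrative schema.org Article markup (JSON-LD). All values are
# placeholders for demonstration; the episode does not provide a specific example.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Why accessibility matters",
    "datePublished": "2025-12-16",
    "author": {"@type": "Person", "name": "Jane Doe"},
}

# Serialized, this is the text a page would embed in a JSON-LD script tag.
print(json.dumps(article_schema, indent=2))
```

    The point the episode makes still applies: a tool can draft the syntax, but deciding which pages deserve which schema types remains a human call.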
    26 min
  • S3E5 - Intentional AI: How much can you trust AI for accessibility?
    2025/12/02
    In Episode 5 of the Intentional AI series, Cole, Virgil, and Seth shift into another part of the content lifecycle. This time, they focus on accessibility and how AI fits into that work.

    Accessibility is more than code checks. It is making sure people can actually use and understand what you create. The team walks through what happened when they ran the High Monkey website through an AI accessibility review, where the tool gave helpful guidance, and where it completely misread the page.

    They also talk about the pieces of accessibility that AI handles surprisingly well, especially language, metaphors, and readability, and why these areas are often missed by standard scanners.

    In the second half of the episode, they continue the ongoing experiment from earlier episodes. Using the same AI written article from before, they test how three tools handle rewriting it to an adult eighth grade reading level, then compare the results with a readability checker. The differences across models show why simple writing, clear prompts, and human review are still necessary.

    In this episode, they explore:

    How AI evaluates accessibility on a real website
    Where AI tools give useful insights and where they misinterpret content
    Why conversational explanations can help non technical teams
    How to prompt AI to look for the issues you actually care about
    The importance of plain language and readable writing in accessibility
    A readability comparison using Copilot, Perplexity, and Grammarly
    Why simple content supports both accessibility and AI performance

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool notes, prompt examples, and practical advice for using AI in accessibility work.

    DS-S3-E5-CompanionDoc.pdf

    Upcoming episodes in the Intentional AI series:

    Dec 16, 2025 - SEO / AEO / GEO
    Jan 6, 2026 - Content Personalization
    Jan 20, 2026 - Front End Development and Wireframing
    Feb 3, 2026 - Design and Media
    Feb 17, 2026 - Back End Development
    Mar 3, 2026 - Conversational Search (with special guest)
    Mar 17, 2026 - Chatbots and Agentic AI
    Mar 31, 2026 - Series Finale and Tool Review

    Whether you work on websites, content workflows, or internal digital tools, this conversation is about using AI with care. The goal is to work smarter, keep content readable, and avoid handing all of your judgment over to automation.

    New episodes every other Tuesday.

    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters

    (0:00) - Intro
    (0:46) - Today’s focus: Accessibility with AI
    (1:20) - We let AI audit HighMonkey.com
    (4:00) - Finding the human value in AI feedback
    (6:25) - The power of strategic prompting
    (12:33) - We tested 3 AI tools for accessibility
    (14:49) - AI Tool findings
    (18:17) - Keep all your readers in mind
    (20:50) - Next episode preview

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:
    https://www.linkedin.com/company/discussing-stupid
    https://www.instagram.com/discussingstupid/
    https://www.facebook.com/discussingstupid
    https://x.com/DiscussStupid
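    The readability comparison described above can be reproduced with any readability checker. As a rough illustration only (the episode does not name a specific formula or tool), the widely used Flesch-Kincaid grade level can be computed in a few lines of Python, here with a crude vowel-group syllable heuristic:

```python
import re

def syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels;
    # every word counts as at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syl / len(words) - 15.59

print(fk_grade("The cat sat on the mat. It was warm."))
```

    Short sentences with short words score low (early grade levels); long sentences full of polysyllabic jargon score far higher, which is exactly the gap the rewriting experiment probes.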
    23 min
  • S3E4 - Intentional AI: The role of AI in content management
    2025/11/11
    In Episode 4 of the Intentional AI series, Cole and Virgil move further into the content lifecycle, and this time they are focusing on content management.

    Once your content’s written, the real work begins. Editing, organizing, translating, tagging, all the behind-the-scenes steps that keep content consistent and usable. In this episode, the team looks at how AI can help streamline those tasks and where it still creates new challenges.

    Joined by returning guest Chad, they break down where AI fits, where it fails, and what happens when you trust it to translate complex content on its own.

    In this episode, they explore:

    How AI supports the content management stage of the lifecycle
    Common use cases like translation, auto-summary fields, and accessibility checks
    Where automation makes sense and where it doesn’t
    The biggest risks of AI content management, from oversimplification to data privacy
    Why good input (clear, readable content) still determines good output
    How readable, accessible writing improves both human and AI understanding

    This episode also continues the real-world experiment from previous episodes. Using the accessibility article originally created with Writesonic, the team tests how well three AI tools (Google Translate, DeepL, and ChatGPT) handle translating the piece into Spanish. The results reveal major differences in accuracy, tone, and overall usability across each model.

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content management stage.

    DS-S3-E4-CompanionDoc.pdf

    🦃 Note: We’re taking a short Thanksgiving break; the next episode will drop on December 2, 2025.

    Upcoming episodes in the Intentional AI series:

    Dec 2, 2025 — Accessibility
    Dec 16, 2025 — SEO / AEO / GEO
    Jan 6, 2026 — Content Personalization
    Jan 20, 2026 — Front End Development & Wireframing
    Feb 3, 2026 — Design & Media
    Feb 17, 2026 — Back End Development
    Mar 3, 2026 — Conversational Search (with special guest!)
    Mar 17, 2026 — Chatbots & Agentic AI
    Mar 31, 2026 — Series Finale & Tool Review

    Whether you’re managing websites, content workflows, or entire digital ecosystems, this conversation is about using AI intentionally, to work smarter without losing the human judgment that keeps content trustworthy.

    New episodes every other Tuesday.

    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters

    (0:00) - Intro
    (0:50) - Today's focus: Content management with AI
    (1:58) - Content management opportunities with AI
    (6:18) - Recurring series theme: Trust
    (8:34) - Refine your process one step at a time
    (9:53) - Better content = better everything
    (10:22) - We tested 3 AI translation tools
    (12:02) - Cole's "elephant in the room" test
    (14:28) - Poor content = poor translations
    (16:58) - True translation happens between people
    (18:45) - Closing takeaways

    Subscribe for email updates on our website:
    https://www.discussingstupid.com/

    Watch us on YouTube:
    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://soundcloud.com/discussing-stupid

    Check Us Out on Socials:
    https://www.linkedin.com/company/discussing-stupid
    https://www.instagram.com/discussingstupid/
    https://www.facebook.com/discussingstupid
    https://x.com/DiscussStupid
    21 min