  • The Regulatory Landscape for AI in Insurance
    2024/09/02

    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!

    34 min
  • Where to Get Started with the EU AI Act: Part Two
    2024/08/12
    In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there. In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:
    Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.
    Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.
    Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act for businesses operating outside the EU.
    Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.
    Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.
    Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future.
    🔗 Key Topics Discussed:
    What documentation and transparency measures are required to demonstrate compliance? How can businesses effectively maintain and update these records?
    How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?
    What are the biggest challenges you foresee in complying with the EU AI Act?
    What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?
    How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?
    What are the penalties for non-compliance, and how will they be determined and applied?
    What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?
    What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?
    How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?
    👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
    46 min
  • Where to Get Started with the EU AI Act: Part One
    2024/08/12
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations. With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations. The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act. Don't miss this informative session to ensure your organization is ready for the changes ahead!
    🔗 Key Topics Discussed:
    What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? What impact will it have outside the EU?
    What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?
    Are there any particular high-risk AI systems that require special attention under the new regulations?
    How do you assess and manage the risks associated with AI systems?
    What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of?
    How can businesses ensure that their AI systems comply with GDPR and other relevant data protection regulations?
    How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act?
    📌 Stay tuned for Part Two, where we continue this discussion with more in-depth analysis and practical tips!
    👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. #AI #EUAIAct #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI
    21 min
  • Building Trust in AI
    2024/07/08
    Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of "Building Trust in AI."
    Episode Highlights:
    Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.
    AI's Role in Trust: Discussion of how AI adoption impacts trust and the bottom line for organizations.
    Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.
    Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.
    Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.
    Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders. If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!
    31 min
  • NYC AI Bias Law: One Year In and What to Consider
    2024/07/01
    Join us for an insightful episode of Lunchtime BABLing as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace.
    Episode Highlights:
    Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.
    Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.
    Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance, including the nuances of data sharing, audit requirements, and maintaining compliance.
    Data Types and Testing: A detailed explanation of historical data vs. test data and their roles in bias audits.
    Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.
    This episode is packed with valuable information for employers, HR professionals, and AI tool providers seeking to comply with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.
    🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.
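    The bias audits discussed here center on a simple calculation: under Local Law 144, an audit reports each demographic category's selection rate and its impact ratio relative to the most-selected category. A minimal sketch of that arithmetic follows; the category names and selection rates are hypothetical illustrations, not data from the episode.

    ```python
    # Illustrative impact-ratio calculation in the spirit of a Local Law 144
    # bias audit. Category names and rates below are hypothetical.

    def impact_ratios(selection_rates):
        """Divide each category's selection rate by the highest rate.

        The category with the highest selection rate serves as the
        reference, so its impact ratio is 1.0 by construction.
        """
        top = max(selection_rates.values())
        return {cat: rate / top for cat, rate in selection_rates.items()}

    # Hypothetical selection rates (candidates selected / candidates assessed)
    rates = {"group_a": 0.40, "group_b": 0.30, "group_c": 0.20}
    ratios = impact_ratios(rates)
    ```

    Whether the audit uses historical data or test data, the ratio itself is computed the same way; the episode's "Data Types and Testing" segment covers which data source is appropriate.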
    20 min
  • Understanding Colorado's New AI Consumer Protection Law
    2024/06/03
    In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado's pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers.
    Listeners, don't miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: enjoy 20% off all courses using the coupon code "BABLING20." Explore our courses here: https://courses.babl.ai/
    For a deeper dive into Colorado's AI law, check out our detailed blog post, "Colorado's Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law," and subscribe to our newsletter at the bottom of the page for the latest updates and insights. Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/
    Timestamps:
    00:21 - Welcome and Introductions
    00:43 - Overview of Colorado's AI Consumer Protection Law
    01:52 - State vs. Federal Initiatives in AI Regulation
    04:00 - Detailed Discussion on the Law's Provisions
    07:02 - Risk Management and Compliance Techniques
    09:51 - Importance of Proper Documentation
    12:21 - Developer and Deployer Obligations
    17:12 - Strategies for Public Disclosure and Risk Notification
    20:48 - Annual Impact Assessments
    22:44 - Transparency in AI Decision-Making
    24:05 - Consumer Rights in AI Decisions
    26:03 - Public Disclosure Requirements
    28:36 - Final Thoughts and Takeaways
    Remember to like, subscribe, and comment with your thoughts or questions. Your interaction helps us bring more valuable content to you!
    31 min
  • NIST AI Risk Management Framework & Generative AI Profile
    2024/05/06
    🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.
    🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations navigating the complex landscape of AI risk management.
    📑 Titled "NIST AI Risk Management Framework: Generative AI Profile," this episode digs into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.
    🧠 Shea offers valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously, including misinformation and disinformation campaigns fueled by generative AI technologies.
    🔒 Shea and Bryan also discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption, examining the role of certifications and standards in building trust and credibility in AI systems and emphasizing the importance of transparent and accountable AI governance practices.
    🌐 Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.
    🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!
    44 min
  • The EU AI Act: Prohibited and High-Risk Systems and why you should care
    2024/04/08
    In this episode of the Lunchtime BABLing Podcast, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into the intricacies of the EU AI Act. Titled "The EU AI Act: Prohibited and High-Risk Systems and why you should care," this conversation sheds light on the recent passage of the EU AI Act by the European Parliament and its implications for businesses and individuals alike.
    Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU. The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem.
    Key Topics Covered:
    Overview of the EU AI Act and its journey to enactment
    Differentiating prohibited and high-risk AI systems
    Understanding biases in AI algorithms and their implications
    Compliance challenges and the importance of early action
    How BABL AI supports organizations in achieving compliance and building trust
    Why You Should Tune In: Whether you're a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance.
    Don't Miss Out: Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology.
    25 min