• Episode 502 - Azure Open AI and Security

  • 2024/08/15
  • 再生時間: 1分未満
  • ポッドキャスト

Episode 502 - Azure Open AI and Security

  • サマリー

  • Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3 YouTube: https://youtu.be/64Achcz97PI Resources: AI Tooling: Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety. Groundedness detection to detect “hallucinations” in model outputs, coming soon. Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon.Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview. Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service. AI Defender for Cloud AI Security Posture Management AI security posture management (Preview) - Microsoft Defender for Cloud | Microsoft LearnAI Workloads Enable threat protection for AI workloads (preview) - Microsoft Defender for Cloud | Microsoft Learn AI Red Teaming Tool Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog AI Development Considerations: AI Assessment from Microsoft Conduct an AI assessment using Microsoft’s Responsible AI Impact Assessment TemplateResponsible AI Impact Assessment Guide for detailed instructions Microsoft Responsible AI Processes Follow Microsoft’s Responsible AI principles: fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountabilityUtilize tools like the Responsible AI Dashboard for continuous monitoring and improvement Define Use Case and Model Architecture Determine the specific use case for your LLMDesign the model architecture, focusing on the Transformer architecture Content Filtering System How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft LearnAzure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completionsThe filtering system covers four main categories: hate, sexual, violence, and self-harmEach category is assessed at four severity levels: safe, low, medium, and highAdditional classifiers are available for detecting jailbreak risks and known content for text and code. JailBreaking Content Filters Red Teaming the LLM Plan and conduct red teaming exercises to identify potential vulnerabilitiesUse diverse red teamers to simulate adversarial attacks and test the model’s robustnessMicrosoft AI Red Team building future of safer AI | Microsoft Security Blog Create a Threat Model with OWASP Top 10owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdfDevelop a threat model and implement mitigations based on identified risks Other updates: Los Angeles Azure Extended ZonesCarbon OptimizationApp Config Ref GAOS SKU In-Place Migration for AKSOperator CRD Support with Azure Monitor Managed ServiceAzure API Center Visual Studio Code Extension Pre-releaseAzure API Management WordPress PluginAnnouncing a New OpenAI Feature for Developers on Azure
    続きを読む 一部表示
activate_samplebutton_t1

あらすじ・解説

Azure Open AI is widely used in industry but there are number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks and tools to help developers use these models safely in their applications. Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3 YouTube: https://youtu.be/64Achcz97PI Resources: AI Tooling: Azure AI Tooling Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications | Microsoft Azure Blog Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety. Groundedness detection to detect “hallucinations” in model outputs, coming soon. Safety system messagesto steer your model’s behavior toward safe, responsible outputs, coming soon.Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview. Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, coming soon, and now available in preview in Azure OpenAI Service. AI Defender for Cloud AI Security Posture Management AI security posture management (Preview) - Microsoft Defender for Cloud | Microsoft LearnAI Workloads Enable threat protection for AI workloads (preview) - Microsoft Defender for Cloud | Microsoft Learn AI Red Teaming Tool Announcing Microsoft’s open automation framework to red team generative AI Systems | Microsoft Security Blog AI Development Considerations: AI Assessment from Microsoft Conduct an AI assessment using Microsoft’s Responsible AI Impact Assessment TemplateResponsible AI Impact Assessment Guide for detailed instructions Microsoft Responsible AI Processes Follow Microsoft’s Responsible AI principles: fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountabilityUtilize tools like the Responsible AI Dashboard for continuous monitoring and improvement Define Use Case and Model Architecture Determine the specific use case for your LLMDesign the model architecture, focusing on the Transformer architecture Content Filtering System How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft LearnAzure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completionsThe filtering system covers four main categories: hate, sexual, violence, and self-harmEach category is assessed at four severity levels: safe, low, medium, and highAdditional classifiers are available for detecting jailbreak risks and known content for text and code. JailBreaking Content Filters Red Teaming the LLM Plan and conduct red teaming exercises to identify potential vulnerabilitiesUse diverse red teamers to simulate adversarial attacks and test the model’s robustnessMicrosoft AI Red Team building future of safer AI | Microsoft Security Blog Create a Threat Model with OWASP Top 10owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-slides-v1_1.pdfDevelop a threat model and implement mitigations based on identified risks Other updates: Los Angeles Azure Extended ZonesCarbon OptimizationApp Config Ref GAOS SKU In-Place Migration for AKSOperator CRD Support with Azure Monitor Managed ServiceAzure API Center Visual Studio Code Extension Pre-releaseAzure API Management WordPress PluginAnnouncing a New OpenAI Feature for Developers on Azure

Episode 502 - Azure Open AI and Securityに寄せられたリスナーの声

カスタマーレビュー:以下のタブを選択することで、他のサイトのレビューをご覧になれます。