AI-102 Study Series Part 6: Managing Azure OpenAI Models in Azure AI Foundry

Managing Azure OpenAI models in Azure AI Foundry involves several key steps: deployment, customization, monitoring, and scaling. Here’s a detailed guide:

Step 1: Deploy Models
- Sign in to Azure AI Foundry: Go to the Azure AI Foundry portal and sign in.
- Select Your Project: Choose the project where you want to deploy the model.
- Model Catalog: Navigate to the Model Catalog and select the Azure OpenAI model you want to deploy.
- Deploy Model: Select “Deploy” and configure the deployment settings, such as resource allocation and endpoint configuration.

Step 2: Customize Models
- Open in Playground: After deployment, open the model in the Azure AI Foundry playground.
- Fine-Tuning: Customize the model with your own data using fine-tuning techniques to improve its accuracy and relevance for your specific use case.
- Embeddings and Indexes: Integrate additional components such as embeddings and indexes to enhance the model’s capabilities.

Step 3: Monitor and Scale
- Monitoring: Use Azure Monitor to track the performance and health of your deployed models, and set up alerts for anomalies or performance issues.
- Scaling: Adjust resource allocation based on demand. You can scale up or down using Azure AI Foundry’s flexible deployment options, such as serverless, managed compute, or reserved capacity.

Step 4: Manage Security and Compliance
- Security: Implement robust security controls to protect your models and data. Use built-in tools to manage harmful content and ensure compliance with regulations.
- Governance: Maintain governance over your AI models by tracking usage and access through Azure’s enterprise-grade security features.

Example Configuration
Here’s an example of deploying a model using the Azure CLI: ...
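The Azure CLI snippet in the post is truncated. As a hedged illustration of Step 1, here is a minimal Python sketch that assembles (but does not send) the control-plane request for a model deployment; the subscription, resource names, model name/version, and SKU values are placeholder assumptions, not taken from the original post:

```python
def build_deployment_request(subscription, resource_group, account, deployment,
                             model_name, model_version, capacity=1,
                             api_version="2024-10-01"):
    """Assemble the ARM PUT request for an Azure OpenAI model deployment.

    The URL and body follow the Cognitive Services "deployments" shape;
    all concrete values passed in are illustrative placeholders.
    """
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{account}"
        f"/deployments/{deployment}"
        f"?api-version={api_version}"
    )
    body = {
        "sku": {"name": "Standard", "capacity": capacity},
        "properties": {
            "model": {"format": "OpenAI", "name": model_name,
                      "version": model_version}
        },
    }
    return "PUT", url, body


method, url, body = build_deployment_request(
    "0000-sub", "my-rg", "my-openai", "gpt4o-chat", "gpt-4o", "2024-05-13")
```

In a real deployment the same values would be supplied to `az cognitiveservices account deployment create` or sent with an authenticated HTTP client.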

June 6, 2025 · 2 min · Taner

AI-102 Study Series Part 5: Overview of Azure OpenAI in Azure AI Foundry Models

Azure OpenAI in Azure AI Foundry Models provides access to a diverse set of AI models developed by OpenAI, integrated into the Azure ecosystem. These models are designed to handle various tasks, including natural language processing, image generation, and more. Here’s a detailed overview:

Key Features of Azure OpenAI in Azure AI Foundry Models
- Diverse Model Selection: Azure OpenAI offers a range of models, including the latest GPT-4 series, which can understand and generate natural language and code.
- Multimodal Capabilities: Some models, like GPT-4 Turbo, can process both text and images, enabling more complex and versatile applications.
- Reasoning Models: Advanced models such as the o-series are optimized for logical reasoning and problem-solving.
- Embeddings and Image Generation: Embedding models convert text into numerical vectors for text-similarity tasks, while image models generate original images from natural language descriptions.
- Audio Models: Models for speech-to-text, translation, and text-to-speech support conversational interactions and audio generation.

Benefits of Using Azure OpenAI in Foundry Models
- Enterprise-Grade Reliability: Models hosted and billed directly by Microsoft offer enterprise-grade SLAs and deep integration with Azure services.
- Flexible Deployment: Deploy models using options such as PayGo, Managed Compute, or Provisioned Throughput (PTU), allowing you to scale based on your needs.
- Advanced Tooling: Use tools like the Model Leaderboard, Model Router, and Image Playground to optimize and manage your AI models.
- Responsible AI Standards: Models adhere to responsible AI standards, ensuring ethical and secure use.

Example Use Cases
- Customer Support: Use GPT models to automate and enhance customer support interactions.
- Content Creation: Generate high-quality text and images for marketing and creative projects.
- Data Analysis: Employ reasoning models to analyze complex datasets and derive insights.
- Voice Assistants: Implement audio models to build sophisticated voice assistants.

For more detailed information, refer to the Azure documentation on OpenAI models. ...
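As a small illustration of the customer-support use case, the sketch below assembles a chat-completions request body of the kind these models consume. The endpoint and deployment name are placeholder assumptions, and no call is actually made:

```python
def build_chat_request(endpoint, deployment, user_question,
                       api_version="2024-06-01"):
    """Assemble (but do not send) an Azure OpenAI chat-completions request."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = {
        "messages": [
            # A system message steers the model toward the support persona.
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_question},
        ],
        "max_tokens": 256,
        "temperature": 0.2,  # low temperature for consistent support answers
    }
    return url, body


url, body = build_chat_request(
    "https://my-openai.openai.azure.com", "gpt-4o",
    "How do I reset my password?")
```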

June 5, 2025 · 2 min · Taner

AI-102 Study Series Part 4: Creating and Deploying Azure OpenAI in Azure AI Foundry

Creating and deploying Azure OpenAI in Azure AI Foundry involves several steps. Here’s a detailed guide to help you get started:

Step 1: Create an Azure OpenAI Resource
- Sign in to the Azure Portal: Go to the Azure portal and sign in with your Azure subscription.
- Create a Resource: Select “Create a resource” and search for “Azure OpenAI”. When you locate the service, select “Create”.
- Fill in the Details:
  - Subscription: Choose your Azure subscription.
  - Resource Group: Select an existing resource group or create a new one.
  - Region: Choose the region where you want to deploy the resource.
  - Name: Provide a descriptive name for your Azure OpenAI resource.
  - Pricing Tier: Select the pricing tier (currently, only the Standard tier is available).
- Configure Network Security: ...
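The portal form above maps onto a single ARM request body. A rough sketch of that body, with placeholder values (the `kind` and SKU names mirror the portal choices; everything else is illustrative):

```python
def build_openai_resource_body(location="eastus", sku="S0"):
    """Sketch of the request body behind the portal's create flow.

    "kind": "OpenAI" identifies the service type; the S0 SKU corresponds
    to the Standard pricing tier mentioned in the steps above.
    """
    return {
        "location": location,
        "kind": "OpenAI",
        "sku": {"name": sku},
        "properties": {},
    }


body = build_openai_resource_body()
```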

June 4, 2025 · 2 min · Taner

AI-102 Study Series Exercise 4: Retrieval Augmented Generation (RAG) Chat App

Overview
This exercise demonstrates how to build a Retrieval Augmented Generation (RAG) chat application that integrates custom data sources into prompts for a generative AI model. The app is developed using Azure AI Foundry and deployed with Azure OpenAI and Azure AI Search.

Steps & Configuration Details
1. Create an Azure AI Foundry Hub and Project
Open the Azure AI Foundry portal (https://ai.azure.com) and create a hub project.
Configuration items:
- Subscription: Your Azure subscription.
- Resource Group: Select or create a resource group.
- Hub Name: A valid name.
- Location: Example values → East US 2, Sweden Central (quota limits may require a different region).
2. Deploy Models
Two models are required: ...
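Once the models are deployed, the core RAG pattern is: retrieve matching chunks from the search index, then splice them into the prompt that goes to the chat model. A minimal, self-contained sketch of that grounding step (the keyword retriever here is a toy stand-in for the Azure AI Search client):

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever: rank documents by naive keyword overlap with the query.

    A real RAG app would query an Azure AI Search index instead.
    """
    words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]


def build_grounded_prompt(query, documents):
    """Splice the retrieved context into the prompt sent to the chat model."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the sources below.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )


docs = [
    "The hiking boots come in sizes 36-46.",
    "Tents ship within 5 business days.",
    "Returns are accepted within 30 days.",
]
prompt = build_grounded_prompt("What sizes do the hiking boots come in?", docs)
```

The "answer only from the sources" instruction is what makes the model's output grounded in the custom data rather than its training set.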

June 3, 2025 · 2 min · Taner

AI-102 Study Series Part 3: Key Uses and Benefits of Azure AI Containers

Azure AI containers are used to bring Azure AI services closer to your data, providing flexibility, control, and scalability. Here are some key uses and benefits:

Key Uses of Azure AI Containers
- Data Compliance and Security: Process sensitive data locally, ensuring compliance with regulations that restrict data transfer to the cloud.
- Edge Computing: Deploy AI services at the edge to reduce latency and improve performance for real-time applications.
- Offline Capabilities: Run AI services in environments with limited or no internet connectivity, ensuring continuous operation.
- High Throughput and Low Latency: Containers can handle high volumes of data with minimal latency, making them ideal for bulk processing tasks like OCR or data analysis.
- Consistent API Experience: Use the same APIs available in Azure, providing a seamless transition between cloud and on-premises deployments.

Benefits of Using Azure AI Containers
- Control Over Data: Choose where your data is processed, which is essential for compliance and security.
- Flexibility: Deploy AI services in various environments, including on-premises, cloud, and edge.
- Scalability: Scale services to meet high-throughput and low-latency requirements.
- Portability: Maintain consistent application behavior across different deployment environments.

Example Use Cases
- Healthcare: Process patient data locally to comply with health data regulations.
- Manufacturing: Deploy AI models at the edge to monitor equipment and predict maintenance needs.
- Retail: Analyze customer data in-store to provide personalized experiences without sending data to the cloud.

For more detailed information, refer to the Azure documentation on container support. ...

June 3, 2025 · 2 min · Taner

AI-102 Study Series Part 2: Azure AI Containers

The second post in the series covers Azure AI containers. Azure AI containers let you run Azure AI services on-premises or in any environment that supports Docker, giving you the flexibility to bring Azure AI capabilities closer to your data for compliance, security, or operational reasons. Here’s how you can use them:

What Are Azure AI Containers?
Azure AI containers package Azure AI services into Docker containers, enabling you to deploy and run these services locally or in your preferred environment. This approach is beneficial for scenarios where data cannot be sent to the cloud due to compliance or security requirements. ...
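One detail worth noting is that even local containers must report usage back to an Azure resource for billing, via the `Eula`, `Billing`, and `ApiKey` settings passed to `docker run`. The sketch below assembles such a command as an argument list; the image name and billing endpoint are illustrative placeholders:

```python
def build_docker_run(image, billing_endpoint, api_key, port=5000):
    """Assemble the docker run invocation for an Azure AI services container.

    Azure AI containers require Eula=accept plus Billing and ApiKey values
    so usage is metered against the cloud resource even when run locally.
    """
    return [
        "docker", "run", "--rm",
        "-p", f"{port}:5000",  # expose the container's service port
        image,
        "Eula=accept",
        f"Billing={billing_endpoint}",
        f"ApiKey={api_key}",
    ]


cmd = build_docker_run(
    "mcr.microsoft.com/azure-cognitive-services/textanalytics/language",
    "https://my-ai-service.cognitiveservices.azure.com/",
    "<api-key>")
```

Passing this list to `subprocess.run(cmd)` would launch the container; the list form avoids shell-quoting issues.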

June 2, 2025 · 2 min · Taner

AI-102 Study Series Part 1: Securing Azure AI Services Networking

I started studying for the AI-102 AI Engineer Associate certification, and I’m adding the topics I fall short on to this blog to improve my knowledge. First in the series is networking… :)

Securing and setting up the network for Azure AI services involves several key steps to ensure that your resources are protected and accessible only to authorized users. Here’s a comprehensive guide:

Step 1: Configure Virtual Networks
- Create a Virtual Network: In the Azure portal, create a virtual network (VNet) that will host your Azure AI services.
- Add Subnets: Define subnets within your VNet to segment your network and improve security.

Step 2: Set Up Private Endpoints
- Create Private Endpoints: Use private endpoints to connect your Azure AI services to your VNet securely. This keeps traffic between your VNet and Azure AI services on the Azure backbone network.
- Configure DNS: Update your DNS settings to resolve the private endpoint IP addresses.

Step 3: Configure Network Security Groups (NSGs)
- Create NSGs: Define rules that allow inbound and outbound traffic only from trusted sources.
- Apply NSGs: Attach the NSGs to your subnets and network interfaces.

Step 4: Enable Firewall Rules
- Deny All by Default: Configure your Azure AI services to deny all incoming traffic by default.
- Allow Specific Networks: Create rules to allow traffic from specific VNets, subnets, or IP address ranges.

Step 5: Use Service Tags and Application Security Groups
- Service Tags: Use Azure service tags to simplify the management of NSG rules. Service tags represent a group of IP address prefixes for specific Azure services.
- Application Security Groups: Group VMs and define security policies based on application tiers.

Step 6: Monitor and Audit
- Enable Monitoring: Use Azure Monitor to track the performance and health of your Azure AI services.
- Audit Logs: Enable and review audit logs to track access and changes to your resources.

Example Configuration
Here’s an example of how you might configure your network-security-group.yml for NSGs: ...
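The deny-by-default pattern from Steps 3 and 4 can be sketched as a tiny rule evaluator. The rule shapes below are simplified stand-ins for real NSG rules, which also match on port, protocol, and direction:

```python
def evaluate(rules, source_ip):
    """Return the action of the first matching rule, by ascending priority.

    Like an NSG, anything that matches no rule falls through to an
    implicit deny-all default.
    """
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if source_ip.startswith(rule["source_prefix"]):
            return rule["action"]
    return "Deny"  # deny all by default


rules = [
    {"priority": 100, "source_prefix": "10.0.1.", "action": "Allow"},  # trusted subnet
    {"priority": 200, "source_prefix": "10.0.",   "action": "Deny"},   # rest of VNet
]
```

Lower priority numbers win, which is why the trusted-subnet allow at 100 takes precedence over the broader deny at 200.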

June 1, 2025 · 3 min · Taner

AI-102 Study Series Part 7: Azure OpenAI REST API Reference

For the AI-102 exam, understanding the Azure OpenAI REST API reference is crucial. Here are the key areas you should focus on:

Control Plane API
- Resource Management: Learn how to create, update, and delete Azure OpenAI resources using the control plane API.
- Deployment: Understand how to deploy models and manage deployments through Azure Resource Manager, Bicep, Terraform, and the Azure CLI.

Data Plane - Authoring API
- Fine-Tuning: Familiarize yourself with the API endpoints for fine-tuning models, uploading files, and managing ingestion jobs.
- Batch Operations: Study how to perform batch operations and model-level queries.

Data Plane - Inference API
- Completions: Learn how to create completions for provided prompts using the inference API.
- Chat Completions: Understand the endpoints for chat completions, embeddings, and other inference capabilities.
- Authentication: Know the methods for authenticating API calls using API keys or Microsoft Entra ID.

REST API Versioning
- API Versions: Be aware of the versioning structure using the api-version query parameter, which follows the YYYY-MM-DD format.

Example API Call
Here’s an example of making a completion request: ...
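The versioning convention can be illustrated by assembling an inference URL and validating the YYYY-MM-DD version string. The endpoint and deployment names are placeholders; the optional `-preview` suffix is an assumption based on how preview API versions are commonly labeled:

```python
import re


def completions_url(endpoint, deployment, api_version):
    """Build a completions URL, enforcing the YYYY-MM-DD api-version format."""
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}(-preview)?", api_version):
        raise ValueError(f"api-version must be YYYY-MM-DD[-preview]: {api_version}")
    return (f"{endpoint}/openai/deployments/{deployment}"
            f"/completions?api-version={api_version}")


url = completions_url("https://my-openai.openai.azure.com",
                      "gpt-35-turbo-instruct", "2024-06-01")
```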

June 1, 2025 · 2 min · Taner

AI-102 Study Series Part 8: Azure OpenAI REST API Examples

Here are examples for each of the key areas of the Azure OpenAI REST API that are important for the AI-102 exam:

Control Plane API Example: Creating an Azure OpenAI Resource

PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.CognitiveServices/accounts/{account-name}?api-version=2024-10-01

Request Body:

{
  "location": "eastus",
  "sku": { "name": "S0" },
  "kind": "OpenAI",
  "properties": {
    "networkAcls": {
      "defaultAction": "Deny",
      "virtualNetworkRules": [
        {
          "id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
        }
      ]
    }
  }
}

This example shows how to create an Azure OpenAI resource using the control plane API. ...
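For the data-plane calls, the two authentication options (API key or Microsoft Entra ID) differ only in the request header. A hedged sketch of that choice:

```python
def auth_headers(api_key=None, entra_token=None):
    """Build headers for an Azure OpenAI data-plane request.

    Azure OpenAI accepts either an 'api-key' header or a Microsoft
    Entra ID bearer token; exactly one should be supplied.
    """
    if (api_key is None) == (entra_token is None):
        raise ValueError("supply exactly one of api_key or entra_token")
    if api_key is not None:
        return {"api-key": api_key, "Content-Type": "application/json"}
    return {"Authorization": f"Bearer {entra_token}",
            "Content-Type": "application/json"}
```

In practice the Entra token would come from a credential helper such as `azure.identity.DefaultAzureCredential`; here it is just an opaque string.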

June 1, 2025 · 2 min · Taner

AI-102 Study Series Exercise 3: Prompt Flow Chat App with Azure AI Foundry

Overview
This exercise demonstrates how to use prompt flow in the Azure AI Foundry portal to build a custom chat app. The app uses a generative AI model (a GPT-4 variant via Azure OpenAI) to manage the conversation, taking the user’s question and the chat history as inputs and generating an answer. The work is divided into several setup and deployment phases.

Step-by-Step Summary
Create an Azure AI Foundry Hub and Project ...
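The flow's contract, question plus chat history in and answer out, boils down to a message-assembly step before the model is called. A simplified stand-in for that step (the nested `inputs`/`outputs` history shape is an assumption modeled on prompt flow's chat history format):

```python
def assemble_messages(question, chat_history,
                      system_prompt="You are a helpful assistant."):
    """Flatten prompt-flow style inputs (question + chat_history) into the
    messages list a chat model expects."""
    messages = [{"role": "system", "content": system_prompt}]
    for turn in chat_history:
        # Each past turn contributes one user and one assistant message.
        messages.append({"role": "user",
                         "content": turn["inputs"]["question"]})
        messages.append({"role": "assistant",
                         "content": turn["outputs"]["answer"]})
    messages.append({"role": "user", "content": question})
    return messages


history = [{"inputs": {"question": "Hi"}, "outputs": {"answer": "Hello!"}}]
msgs = assemble_messages("What can you do?", history)
```

Keeping the history in the prompt is what lets the model answer follow-up questions in context.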

May 17, 2025 · 4 min · Taner