AI-102 Study Series Exercise 14: AI Agent for Expense Claims with Semantic Kernel

Overview

This exercise demonstrates how to develop an AI agent using Azure AI Foundry and the Semantic Kernel SDK. The agent processes expense claims by analyzing submitted data and generating structured responses.

Steps & Configuration Details

1. Deploy a Model in Azure AI Foundry

Open the Azure AI Foundry portal (https://ai.azure.com) and sign in. Search for gpt-4o and select Use this model.

Configuration items:
- Azure AI Foundry resource: a valid name.
- Subscription: your Azure subscription.
- Resource group: select or create a resource group.
- Region: choose any AI Services-supported location.
- Deployment name: gpt-4o (default).

2. Clone the Repository

Open the Azure portal (https://portal.azure.com) and launch Azure Cloud Shell (PowerShell environment).

Clone the repository:

```
rm -r ai-agents -f
git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents
```

Navigate to the correct folder:

```
cd ai-agents/Labfiles/04-semantic-kernel/python
```

Install dependencies:

```
python -m venv labenv
./labenv/bin/Activate.ps1
pip install python-dotenv azure-identity semantic-kernel[azure]
```

Open the configuration file:

```
code .env
```

Update the configuration values:
- Project endpoint (copied from the Azure AI Foundry portal).
- Model deployment name (gpt-4o).

Save the configuration file.

3. Implement the AI Agent

Open the agent code file:

```
code semantic-kernel.py
```

Add references:

```python
from dotenv import load_dotenv
from azure.identity.aio import DefaultAzureCredential
from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings, AzureAIAgentThread
from semantic_kernel.functions import kernel_function
from typing import Annotated
```

Define an email plugin for expense claim submission:

```python
class EmailPlugin:
    """A plugin to simulate email functionality."""

    @kernel_function(description="Sends an email.")
    def send_email(self,
                   to: Annotated[str, "Recipient"],
                   subject: Annotated[str, "Email subject"],
                   body: Annotated[str, "Email body"]):
        print(f"\nTo: {to}")
        print(f"Subject: {subject}")
        print(f"{body}\n")
```

Load the configuration settings:

```python
load_dotenv()
ai_agent_settings = AzureAIAgentSettings()
```

Connect to Azure AI Foundry:

```python
async with (
    DefaultAzureCredential(
        exclude_environment_credential=True,
        exclude_managed_identity_credential=True) as creds,
    AzureAIAgent.create_client(credential=creds) as project_client,
):
```

Define the AI agent:

```python
expenses_agent_def = await project_client.agents.create_agent(
    model=ai_agent_settings.model_deployment_name,
    name="expenses_agent",
    instructions="""You are an AI assistant for expense claim submission.
        When a user submits expenses data and requests an expense claim, use the
        plug-in function to send an email to [email protected] with the subject
        'Expense Claim' and a body that contains itemized expenses with a total.
        Then confirm to the user that you've done so."""
)
```

Create a Semantic Kernel agent:

```python
expenses_agent = AzureAIAgent(
    client=project_client,
    definition=expenses_agent_def,
    plugins=[EmailPlugin()]
)
```

Process the expenses data (`prompt` and `expenses_data` are supplied elsewhere in the script; a sketch of how these fragments fit together follows after step 4):

```python
thread: AzureAIAgentThread = AzureAIAgentThread(client=project_client)
try:
    prompt_messages = [f"{prompt}: {expenses_data}"]
    response = await expenses_agent.get_response(thread_id=thread.id, messages=prompt_messages)
    print(f"\n# {response.name}:\n{response}")
except Exception as e:
    print(e)
finally:
    await thread.delete() if thread else None
    await project_client.agents.delete_agent(expenses_agent.id)
```

4. Run the AI Agent

Sign into Azure:

```
az login
```

Run the application:

```
python semantic-kernel.py
```

Example prompt: `Submit an expense claim`. The agent should generate an email for the expense claim.
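As noted in step 3, the snippets there are fragments of a single script that also supplies `prompt` and `expenses_data`. The sketch below is one assumed way to glue them together behind an asyncio entry point; the sample data, the `main()` wrapper, and the hard-coded prompt are illustrative and not the lab's exact code.

```python
# Minimal sketch (assumed glue code) showing how the step 3 fragments can be
# driven end to end. The sample expenses_data, prompt text, and main() wrapper
# are illustrative; the lab's own script gathers these values differently.
import asyncio
from dotenv import load_dotenv
from azure.identity.aio import DefaultAzureCredential
from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings

async def main():
    load_dotenv()
    ai_agent_settings = AzureAIAgentSettings()

    # Illustrative input; the lab uses similar itemized expense data.
    expenses_data = (
        "date,description,amount\n"
        "07-Mar-2025,Taxi,24.00\n"
        "07-Mar-2025,Dinner,65.50\n"
    )
    prompt = "Submit an expense claim"

    async with (
        DefaultAzureCredential(
            exclude_environment_credential=True,
            exclude_managed_identity_credential=True) as creds,
        AzureAIAgent.create_client(credential=creds) as project_client,
    ):
        # Create the agent definition, wrap it in an AzureAIAgent with the
        # EmailPlugin, then send f"{prompt}: {expenses_data}" on a new thread,
        # exactly as shown in step 3 above.
        ...

if __name__ == "__main__":
    asyncio.run(main())
```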
5. Clean Up

Delete the Azure resources to avoid unnecessary costs:
- Open the Azure portal (https://portal.azure.com).
- Navigate to Resource Groups.
- Select the resource group and click Delete.
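Alternatively, the same clean-up can be done from Cloud Shell with the Azure CLI already used in this exercise; the resource group name below is a placeholder:

```
az group delete --name <your-resource-group> --yes --no-wait
```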

June 7, 2025 · 2 min · Taner

AI-102 Study Series Exercise 15: Multi-Agent Orchestration with Semantic Kernel

Overview

This exercise demonstrates how to orchestrate multiple AI agents using Azure AI Foundry and the Semantic Kernel SDK. The solution involves two agents:
- Incident Manager Agent – analyzes service logs and recommends resolution actions.
- DevOps Assistant Agent – executes corrective actions and updates the logs.

Steps & Configuration Details

1. Deploy a Model in Azure AI Foundry

Open the Azure AI Foundry portal (https://ai.azure.com) and sign in. Search for gpt-4o and select Use this model.

Configuration items:
- Azure AI Foundry resource: a valid name.
- Subscription: your Azure subscription.
- Resource group: select or create a resource group.
- Region: choose any AI Services-supported location.
- Deployment name: gpt-4o (default).
- Tokens per minute rate limit: 40,000 TPM (adjusted in Models and endpoints).

2. Clone the Repository

Open the Azure portal (https://portal.azure.com) and launch Azure Cloud Shell (PowerShell environment).

Clone the repository:

```
rm -r ai-agents -f
git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents
```

Navigate to the correct folder:

```
cd ai-agents/Labfiles/05-agent-orchestration/Python
```

Install dependencies:

```
python -m venv labenv
./labenv/bin/Activate.ps1
pip install python-dotenv azure-identity semantic-kernel[azure]
```

Open the configuration file:

```
code .env
```

Update the configuration values:
- Project endpoint (copied from the Azure AI Foundry portal).
- Model deployment name (gpt-4o).

Save the configuration file.

3. Implement AI Agents

Open the agent code file:

```
code agent_chat.py
```

Add references (the strategy base classes are needed for the custom strategies in step 4):

```python
from azure.identity.aio import DefaultAzureCredential
from semantic_kernel.agents import AgentGroupChat, AzureAIAgent, AzureAIAgentSettings
from semantic_kernel.agents.strategies import SequentialSelectionStrategy, TerminationStrategy
from semantic_kernel.contents.utils.author_role import AuthorRole
from semantic_kernel.functions import kernel_function
from typing import Annotated
```

Define the Incident Manager agent:

```python
incident_agent_definition = await client.agents.create_agent(
    model=ai_agent_settings.model_deployment_name,
    name="Incident_Manager",
    instructions="Analyze service logs, identify issues, and recommend resolution actions."
)
agent_incident = AzureAIAgent(
    client=client,
    definition=incident_agent_definition,
    plugins=[LogFilePlugin()]
)
```

Define the DevOps Assistant agent:

```python
devops_agent_definition = await client.agents.create_agent(
    model=ai_agent_settings.model_deployment_name,
    name="DevOps_Assistant",
    instructions="Execute corrective actions based on recommendations from the Incident Manager."
)
agent_devops = AzureAIAgent(
    client=client,
    definition=devops_agent_definition,
    plugins=[DevopsPlugin()]
)
```

Both `LogFilePlugin` and `DevopsPlugin` are provided by the lab's starter code; a hedged sketch of their general shape appears after step 4.

4. Implement Multi-Agent Strategies

Define the selection strategy, which determines which agent responds next:

```python
class SelectionStrategy(SequentialSelectionStrategy):
    async def select_agent(self, agents, history):
        # The Incident Manager responds after the user or the DevOps Assistant;
        # otherwise it is the DevOps Assistant's turn.
        if history[-1].name == "DevOps_Assistant" or history[-1].role == AuthorRole.USER:
            return next(agent for agent in agents if agent.name == "Incident_Manager")
        return next(agent for agent in agents if agent.name == "DevOps_Assistant")
```

Define the termination strategy, which ends the conversation when the resolution is complete:

```python
class ApprovalTerminationStrategy(TerminationStrategy):
    async def should_agent_terminate(self, agent, history):
        # Stop once the Incident Manager reports that no further action is needed.
        return "no action needed" in history[-1].content.lower()
```
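As mentioned in step 3, the agents are wired to a LogFilePlugin and a DevopsPlugin that the lab's starter code already defines. For orientation only, a minimal sketch of what such plugins can look like is shown below; the method names and behavior are assumptions, not the lab's exact implementation.

```python
# Hypothetical plugin shapes, for orientation only; the lab's starter code
# provides its own LogFilePlugin and DevopsPlugin with more capabilities.
from typing import Annotated
from semantic_kernel.functions import kernel_function

class LogFilePlugin:
    """Gives the Incident Manager agent read access to service logs."""

    @kernel_function(description="Reads the contents of a log file.")
    def read_log_file(self, filepath: Annotated[str, "Path to the log file"]) -> str:
        with open(filepath, "r") as file:
            return file.read()

class DevopsPlugin:
    """Simulates corrective actions for the DevOps Assistant agent."""

    @kernel_function(description="Restarts the named service.")
    def restart_service(self, service_name: Annotated[str, "Name of the service"]) -> str:
        return f"Service {service_name} restarted successfully."
```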
5. Implement the Multi-Agent Chat

Create a group chat:

```python
chat = AgentGroupChat(
    agents=[agent_incident, agent_devops],
    termination_strategy=ApprovalTerminationStrategy(
        agents=[agent_incident],
        maximum_iterations=10,
        automatic_reset=True
    ),
    selection_strategy=SelectionStrategy(agents=[agent_incident, agent_devops])
)
```

Append the log file data (a sketch of one possible `logfile_msg` construction appears at the end of this exercise):

```python
await chat.add_chat_message(logfile_msg)
```

Invoke the chat:

```python
async for response in chat.invoke():
    if response is None or not response.name:
        continue
    print(f"{response.content}")
```

6. Run the AI Agent

Sign into Azure:

```
az login
```

Run the application:

```
python agent_chat.py
```

Example output:

```
INCIDENT_MANAGER > /home/.../logs/log1.log | Restart service ServiceX
DEVOPS_ASSISTANT > Service ServiceX restarted successfully.
INCIDENT_MANAGER > No action needed.
```

7. Clean Up

Delete the Azure resources to avoid unnecessary costs:
- Open the Azure portal (https://portal.azure.com).
- Navigate to Resource Groups.
- Select the resource group and click Delete.

This summary captures the essential steps, configuration items, and code references required for orchestrating multiple AI agents in Azure AI Foundry.
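As promised in step 5, here is a minimal, assumed construction of `logfile_msg`. The lab's script builds this message from its sample log files, so the file path and message text below are illustrative only.

```python
# Hypothetical construction of logfile_msg; the lab's script builds this from
# the sample log files in its logs folder, so treat names and text as examples.
from semantic_kernel.contents.chat_message_content import ChatMessageContent
from semantic_kernel.contents.utils.author_role import AuthorRole

log_file_path = "logs/log1.log"
with open(log_file_path, "r") as f:
    log_text = f.read()

logfile_msg = ChatMessageContent(
    role=AuthorRole.USER,
    content=f"USER > Please assess the log file at {log_file_path}:\n{log_text}"
)
```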

June 7, 2025 · 3 min · Taner