Overview

This exercise demonstrates how to orchestrate multiple AI agents using Azure AI Foundry and the Semantic Kernel SDK. The solution involves two agents:

  1. Incident Manager Agent – Analyzes service logs and recommends resolution actions.
  2. DevOps Assistant Agent – Executes corrective actions and updates logs.

Steps & Configuration Details

1. Deploy a Model in Azure AI Foundry

  • Open Azure AI Foundry portal (https://ai.azure.com) and sign in.
  • Search for gpt-4o and select Use this model.
  • Configuration Items:
    • Azure AI Foundry Resource: A valid name.
    • Subscription: Your Azure subscription.
    • Resource Group: Select or create a resource group.
    • Region: Choose any AI Services-supported location.
    • Deployment Name: gpt-4o (default).
    • Tokens per Minute Rate Limit: 40,000 TPM (can be adjusted later under Models + endpoints).

2. Clone the Repository

  • Open Azure Portal (https://portal.azure.com).
  • Launch Azure Cloud Shell (PowerShell environment).
  • Clone the repository:
    rm -r ai-agents -f
    git clone https://github.com/MicrosoftLearning/mslearn-ai-agents ai-agents
    
  • Navigate to the correct folder:
    cd ai-agents/Labfiles/05-agent-orchestration/Python
    
  • Install dependencies:
    python -m venv labenv
    ./labenv/bin/Activate.ps1
    pip install python-dotenv azure-identity "semantic-kernel[azure]"
    
  • Open the configuration file:
    code .env
    
  • Update Configuration Values:
    • Project Endpoint (copied from Azure AI Foundry portal).
    • Model Deployment Name (gpt-4o)
  • Save the configuration file.
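After updating, a minimal `.env` might look like the following. The endpoint URL is a placeholder, and the variable names are assumed to match what `agent_chat.py` reads:

```
PROJECT_ENDPOINT="https://your-resource.services.ai.azure.com/api/projects/your-project"
MODEL_DEPLOYMENT_NAME="gpt-4o"
```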

3. Implement AI Agents

  • Open the agent code file:
    code agent_chat.py
    
  • Add references:
    from azure.identity import DefaultAzureCredential
    from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings, AgentGroupChat
    from semantic_kernel.agents.strategies import SequentialSelectionStrategy, TerminationStrategy
    from semantic_kernel.contents import AuthorRole, ChatMessageContent
    from semantic_kernel.functions import kernel_function
    from typing import Annotated
    
    
  • Define the Incident Manager Agent:
    incident_agent_definition = await client.agents.create_agent(
        model=ai_agent_settings.model_deployment_name,
        name="Incident_Manager",
        instructions="Analyze service logs, identify issues, and recommend resolution actions."
    )
    agent_incident = AzureAIAgent(
        client=client,
        definition=incident_agent_definition,
        plugins=[LogFilePlugin()]
    )
    
  • Define the DevOps Assistant Agent:
    devops_agent_definition = await client.agents.create_agent(
        model=ai_agent_settings.model_deployment_name,
        name="DevOps_Assistant",
        instructions="Execute corrective actions based on recommendations from the Incident Manager."
    )
    agent_devops = AzureAIAgent(
        client=client,
        definition=devops_agent_definition,
        plugins=[DevopsPlugin()]
    )
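
The `LogFilePlugin` and `DevopsPlugin` classes passed to the agents above expose file operations as tools. A simplified, dependency-free sketch of what they might contain (the method names are assumptions; in the lab each method carries a `@kernel_function` decorator from `semantic_kernel.functions` so the agent can invoke it):

```python
class LogFilePlugin:
    """Tools for the Incident Manager: read service logs."""
    # In the lab, each method is decorated with @kernel_function(description="...");
    # the decorators are omitted here so the sketch runs without the SDK installed.

    def read_log_file(self, filepath: str) -> str:
        """Return the raw contents of a service log file."""
        with open(filepath, "r", encoding="utf-8") as f:
            return f.read()


class DevopsPlugin:
    """Tools for the DevOps Assistant: record corrective actions."""

    def append_to_log_file(self, filepath: str, content: str) -> str:
        """Append a corrective-action entry to the log and confirm it."""
        with open(filepath, "a", encoding="utf-8") as f:
            f.write(content.strip() + "\n")
        return "Log updated."
```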
    

4. Implement Multi-Agent Strategies

  • Define Selection Strategy (determines which agent responds next):
    class SelectionStrategy(SequentialSelectionStrategy):
        """Determine which agent takes the next turn."""
        async def select_agent(self, agents, history):
            # The Incident Manager goes after the user or the DevOps Assistant
            if history[-1].name == "DevOps_Assistant" or history[-1].role == AuthorRole.USER:
                return next((agent for agent in agents if agent.name == "Incident_Manager"), None)
            # Otherwise it is the DevOps Assistant's turn
            return next((agent for agent in agents if agent.name == "DevOps_Assistant"), None)
    
  • Define Termination Strategy (ends conversation when resolution is complete):
    class ApprovalTerminationStrategy(TerminationStrategy):
        """End the conversation when the Incident Manager reports no further action."""
        async def should_agent_terminate(self, agent, history):
            return "no action needed" in history[-1].content.lower()
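
The turn-taking and termination rules can be exercised in isolation before wiring them into the SDK. The `Msg` and `Agent` stand-ins below are test scaffolding, not part of the lab; the strategy bodies mirror the rules above against plain objects:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Msg:
    name: str
    role: str
    content: str


@dataclass
class Agent:
    name: str


class SelectionStrategy:
    """Mirror of the selection rule, using plain string roles."""
    async def select_agent(self, agents, history):
        if history[-1].name == "DevOps_Assistant" or history[-1].role == "User":
            return next(a for a in agents if a.name == "Incident_Manager")
        return next(a for a in agents if a.name == "DevOps_Assistant")


class ApprovalTerminationStrategy:
    """Mirror of the termination rule."""
    async def should_agent_terminate(self, agent, history):
        return "no action needed" in history[-1].content.lower()


agents = [Agent("Incident_Manager"), Agent("DevOps_Assistant")]
history = [Msg(name="User", role="User", content="Here is the log file...")]

first = asyncio.run(SelectionStrategy().select_agent(agents, history))
print(first.name)  # Incident_Manager responds to the user first

history.append(Msg(name="Incident_Manager", role="Assistant",
                   content="No action needed."))
done = asyncio.run(ApprovalTerminationStrategy().should_agent_terminate(None, history))
print(done)  # True: the conversation ends
```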
    

5. Implement Multi-Agent Chat

  • Create a Group Chat:
    chat = AgentGroupChat(
        agents=[agent_incident, agent_devops],
        termination_strategy=ApprovalTerminationStrategy(agents=[agent_incident], maximum_iterations=10, automatic_reset=True),
        selection_strategy=SelectionStrategy(agents=[agent_incident, agent_devops])
    )
    
  • Append log file data:
    await chat.add_chat_message(logfile_msg)
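
The lab builds `logfile_msg` from sample log files on disk. One way to assemble such a prompt with only the standard library (the helper name and directory layout are assumptions for illustration):

```python
from pathlib import Path


def build_logfile_message(log_dir: str) -> str:
    """Concatenate each sample log, prefixed with its path, into one prompt."""
    parts = []
    for log in sorted(Path(log_dir).glob("*.log")):
        parts.append(f"Filepath: {log}\n{log.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```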
    
  • Invoke the chat:
    async for response in chat.invoke():
        if response is None or not response.name:
            continue
        print(f"{response.name.upper()} > {response.content}")
    

6. Run the AI Agent

  • Sign into Azure:
    az login
    
  • Run the application:
    python agent_chat.py
    
  • Example output:
    INCIDENT_MANAGER > /home/.../logs/log1.log | Restart service ServiceX
    DEVOPS_ASSISTANT > Service ServiceX restarted successfully.
    INCIDENT_MANAGER > No action needed.
    

7. Clean Up

  • To avoid unnecessary costs, delete the resources when you finish:
    • In the Azure portal, open the resource group you used for this exercise.
    • Select Delete resource group, confirm the resource group name, and delete it.

This summary captures the essential steps while highlighting all configuration items and code references required for orchestrating multiple AI agents in Azure AI Foundry.
