Important Points:
Deploying a Model in Azure AI Foundry:
- Sign in to the Azure AI Foundry portal.
- Search for and select the gpt-4o model.
- Create a project with customized settings (resource name, subscription, resource group, region).
- The project includes connections to Azure AI services and models.
Creating a Client Application:
- Use the Azure AI Foundry and Azure AI Model Inference SDKs to develop an application.
- Choose between Python or C# for development.
Application Configuration:
- Note the Azure AI Foundry project endpoint.
- Clone the GitHub repo containing the code files.
- Navigate to the folder containing the chat application code files.
Writing Code to Connect and Chat:
- Add references to necessary SDK namespaces.
- Initialize the project client and chat client.
- Create a loop to allow user input and retrieve completions from the model.
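The input/completion loop described above follows a simple conversation-history pattern: each turn appends the user's message, requests a completion, and appends the assistant's reply so the model sees the full conversation next turn. A minimal sketch of that pattern (using plain dicts and a stub `fake_complete` function in place of the SDK's real message types and `chat.complete` call; all names here are illustrative):

```python
# Sketch of the chat loop's message-history pattern.
# fake_complete stands in for the SDK's chat.complete call; the real app
# would use SystemMessage/UserMessage/AssistantMessage objects instead.

def fake_complete(messages):
    # Stub model: just echo the most recent user message.
    return "You said: " + messages[-1]["content"]

def chat_turn(history, user_text):
    # Append the user's input, get a completion, and record the
    # assistant's reply so the next turn sees the whole conversation.
    history.append({"role": "user", "content": user_text})
    completion = fake_complete(history)
    history.append({"role": "assistant", "content": completion})
    return completion

history = [{"role": "system", "content": "You are a helpful AI assistant."}]
print(chat_turn(history, "Hello"))  # -> You said: Hello
print(chat_turn(history, "Bye"))    # -> You said: Bye
```

The key design point is that `history` is mutated every turn; without re-sending the accumulated messages, the model would have no memory of earlier turns.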
Running the Application:
- Sign in to Azure and run the app.
- Interact with the generative AI model by asking questions.
Cleaning Up:
- Delete the resource group to avoid unnecessary costs.
Code References:
Cloning GitHub Repo:
rm -r mslearn-ai-foundry -f
git clone https://github.com/microsoftlearning/mslearn-ai-studio mslearn-ai-foundry
Navigating to Code Folder:
Python:
cd mslearn-ai-foundry/labfiles/chat-app/python

C#:
cd mslearn-ai-foundry/labfiles/chat-app/c-sharp
Installing Libraries:
Python:
python -m venv labenv
./labenv/bin/Activate.ps1
pip install python-dotenv azure-identity azure-ai-projects azure-ai-inference

C#:
dotnet add package Azure.Identity
dotnet add package Azure.AI.Projects --version 1.0.0-beta.9
dotnet add package Azure.AI.Inference --version 1.0.0-beta.5
Editing Configuration File:
Python:
code .env

C#:
code appsettings.json
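As a sketch, the .env file for the Python version might look like the following. The variable names and values shown are illustrative placeholders, not the lab's actual settings; the real endpoint and deployment name come from your own Foundry project:

```
PROJECT_ENDPOINT=<your-project-endpoint>
MODEL_DEPLOYMENT=gpt-4o
```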
Adding References:
Python:
from dotenv import load_dotenv
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.inference.models import SystemMessage, UserMessage, AssistantMessage

C#:
using Azure.Identity;
using Azure.AI.Projects;
using Azure.AI.Inference;
Initializing Project Client:
Python:
projectClient = AIProjectClient(
    credential=DefaultAzureCredential(
        exclude_environment_credential=True,
        exclude_managed_identity_credential=True),
    endpoint=project_connection,
)

C#:
DefaultAzureCredentialOptions options = new()
    { ExcludeEnvironmentCredential = true, ExcludeManagedIdentityCredential = true };
var projectClient = new AIProjectClient(
    new Uri(project_connection), new DefaultAzureCredential(options));
Creating Chat Client:
Python:
chat = projectClient.inference.get_chat_completions_client()

C#:
ChatCompletionsClient chat = projectClient.GetChatCompletionsClient();
Initializing Prompt:
Python:
prompt = [SystemMessage("You are a helpful AI assistant that answers questions.")]

C#:
var prompt = new List<ChatRequestMessage>()
{
    new ChatRequestSystemMessage("You are a helpful AI assistant that answers questions.")
};
Getting Chat Completion:
Python:
prompt.append(UserMessage(input_text))
response = chat.complete(model=model_deployment, messages=prompt)
completion = response.choices[0].message.content
print(completion)
prompt.append(AssistantMessage(completion))

C#:
prompt.Add(new ChatRequestUserMessage(input_text));
var requestOptions = new ChatCompletionsOptions()
{
    Model = model_deployment,
    Messages = prompt
};
Response<ChatCompletions> response = chat.Complete(requestOptions);
var completion = response.Value.Content;
Console.WriteLine(completion);
prompt.Add(new ChatRequestAssistantMessage(completion));
Running the Application:
az login

Python:
python chat-app.py

C#:
dotnet run