Summarize text using Azure AI chat app with .NET
Get started with Semantic Kernel or the .NET Azure OpenAI SDK by creating a simple .NET 8 console chat application. The application runs locally and uses the OpenAI gpt-35-turbo model deployed into an Azure OpenAI account. Follow these steps to provision Azure OpenAI and build the app with either library.
Prerequisites
- .NET 8.0 SDK - Install the .NET 8.0 SDK
- An Azure subscription - Create one for free
- Azure Developer CLI - Install or update the Azure Developer CLI
- Access to the Azure OpenAI service.
- On Windows, PowerShell v7+ is required. To validate your version, run `pwsh` in a terminal; it should return the current version. If it returns an error, run the following command: `dotnet tool update --global PowerShell`.
Deploy the Azure resources
Ensure that you complete the Prerequisites so you have access to the Azure OpenAI service and the Azure Developer CLI, and then follow this guide to get started with the sample application.
1. Clone the repository: dotnet/ai-samples
2. From a terminal or command prompt, navigate to the quickstarts directory.
3. Run the following command to provision the Azure OpenAI resources. It may take several minutes to create the Azure OpenAI service and deploy the model.

```console
azd up
```
Note
If you already have an Azure OpenAI service available, you can skip the deployment and use those values in Program.cs, preferably from an `IConfiguration`.
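If you reuse an existing service, one option is to layer environment variables over the user secrets. The following is a minimal sketch, assuming you export your existing service's values under the same configuration keys the sample uses:

```csharp
using Microsoft.Extensions.Configuration;

// Layer configuration sources: user secrets written by azd (if any),
// then environment variables for an existing Azure OpenAI service.
var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .AddEnvironmentVariables()
    .Build();

// Fail fast with a clear message if the endpoint was never configured.
string endpoint = config["AZURE_OPENAI_ENDPOINT"]
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
```

Sources registered later take precedence, so an environment variable overrides a user secret with the same key.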
Troubleshoot
On Windows, you might get the following error messages after running `azd up`:
```
postprovision.ps1 is not digitally signed. The script will not execute on the system
```
The script postprovision.ps1 sets the .NET user secrets used in the application. To avoid this error, run the following PowerShell command:

```powershell
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
```

Then re-run the `azd up` command.
Another possible error:

```
'pwsh' is not recognized as an internal or external command, operable program or batch file. WARNING: 'postprovision' hook failed with exit code: '1', Path: '.\infra\post-script\postprovision.ps1'. : exit code: 1 Execution will continue since ContinueOnError has been set to true.
```

The script postprovision.ps1 sets the .NET user secrets used in the application. To avoid this error, manually run the script using the following PowerShell command:

```powershell
.\infra\post-script\postprovision.ps1
```
The .NET AI apps now have the user secrets configured and can be tested.
Try the Hiking Benefits Summary sample

- For the Semantic Kernel version, from a terminal or command prompt, navigate to the `semantic-kernel\01-HikeBenefitsSummary` directory.
- For the Azure OpenAI SDK version, from a terminal or command prompt, navigate to the `azure-openai-sdk\01-HikeBenefitsSummary` directory.
It's now time to try the console application. Type in the following to run the app:
```dotnetcli
dotnet run
```
If you get an error message, the Azure OpenAI resources might not have finished deploying. Wait a couple of minutes and try again.
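Rather than rerunning by hand, the call to the model can be wrapped in a simple retry loop. This is a sketch, where `RunSummaryAsync` is a hypothetical stand-in for the app's request to the model:

```csharp
// Retry the model call a few times while the deployment finishes.
for (int attempt = 1; attempt <= 3; attempt++)
{
    try
    {
        await RunSummaryAsync(); // hypothetical: the app's call to the model
        break;                   // success, stop retrying
    }
    catch (Exception ex) when (attempt < 3)
    {
        Console.WriteLine($"Attempt {attempt} failed: {ex.Message}. Retrying in 30 seconds...");
        await Task.Delay(TimeSpan.FromSeconds(30));
    }
}
```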
Understanding the code
Our application uses the `Microsoft.SemanticKernel` package, which is available on NuGet, to send requests to and receive responses from an Azure OpenAI service deployed in Azure.

The entire application is contained within the Program.cs file. The first several lines of code load the secrets and configuration values that were set in `dotnet user-secrets` for you during the application provisioning.
```csharp
// == Retrieve the local secrets saved during the Azure deployment ==========
var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

string endpoint = config["AZURE_OPENAI_ENDPOINT"];
string deployment = config["AZURE_OPENAI_GPT_NAME"];
string key = config["AZURE_OPENAI_KEY"];

// Create a Kernel containing the Azure OpenAI Chat Completion Service
Kernel kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(deployment, endpoint, key)
    .Build();
```
The `Kernel` class facilitates the requests and responses, using the chat completion service registered by `AddAzureOpenAIChatCompletion`.
Once the `Kernel` is created, we read the contents of the file benefits.md and create a prompt to ask the model to summarize that text.
```csharp
// Create and print out the prompt
string prompt = $"""
Please summarize the following text in 20 words or less:
{File.ReadAllText("benefits.md")}
""";
Console.WriteLine($"user >>> {prompt}");
```
To have the model generate a response based on the prompt, use the `InvokePromptAsync` method.
```csharp
// Submit the prompt and print out the response
string response = await kernel.InvokePromptAsync<string>(prompt,
    new(new OpenAIPromptExecutionSettings() { MaxTokens = 400 }));
Console.WriteLine($"assistant >>> {response}");
```
Customize the text content of the file or the length of the summary to see the differences in the responses.
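For example, a longer, more deterministic summary can be requested by changing the prompt and the execution settings. This is a sketch assuming the `kernel` created above; `Temperature` and `MaxTokens` are standard `OpenAIPromptExecutionSettings` properties:

```csharp
// Ask for a longer summary with a lower temperature (more focused output).
var settings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 800,
    Temperature = 0.2
};

string longerPrompt = $"""
Please summarize the following text in 100 words or less:
{File.ReadAllText("benefits.md")}
""";

string longerResponse = await kernel.InvokePromptAsync<string>(longerPrompt, new(settings));
Console.WriteLine($"assistant >>> {longerResponse}");
```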
Understanding the code
Our application uses the `Azure.AI.OpenAI` client SDK, which is available on NuGet, to send requests to and receive responses from an Azure OpenAI service deployed in Azure.

The entire application is contained within the Program.cs file. The first several lines of code load the secrets and configuration values that were set in `dotnet user-secrets` for you during the application provisioning.
```csharp
// == Retrieve the local secrets saved during the Azure deployment ==========
var config = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

string openAIEndpoint = config["AZURE_OPENAI_ENDPOINT"];
string openAIDeploymentName = config["AZURE_OPENAI_GPT_NAME"];
string openAiKey = config["AZURE_OPENAI_KEY"];

// == Creating the AIClient ==========
var endpoint = new Uri(openAIEndpoint);
var credentials = new AzureKeyCredential(openAiKey);
```
The `OpenAIClient` class facilitates the requests and responses. `ChatCompletionsOptions` specifies parameters for how the model will respond.
```csharp
var openAIClient = new OpenAIClient(endpoint, credentials);

var completionOptions = new ChatCompletionsOptions
{
    MaxTokens = 400,
    Temperature = 1f,
    FrequencyPenalty = 0.0f,
    PresencePenalty = 0.0f,
    NucleusSamplingFactor = 0.95f, // Top P
    DeploymentName = openAIDeploymentName
};
```
Once the `OpenAIClient` is created, we read the content of the file benefits.md. Then, using the `ChatRequestUserMessage` class, we add the request to summarize that text to the conversation.
```csharp
// Read the text to summarize from the file
string markdown = File.ReadAllText("benefits.md");

string userRequest = """
Please summarize the following text in 20 words or less:
""" + markdown;
completionOptions.Messages.Add(new ChatRequestUserMessage(userRequest));
Console.WriteLine($"\n\nUser >>> {userRequest}");
```
To have the model generate a response based on the user request, use the `GetChatCompletionsAsync` method.
```csharp
ChatCompletions response = await openAIClient.GetChatCompletionsAsync(completionOptions);
ChatResponseMessage assistantResponse = response.Choices[0].Message;
Console.WriteLine($"\n\nAssistant >>> {assistantResponse.Content}");
```
Customize the text content of the file or the length of the summary to see the differences in the responses.
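For example, a longer, more deterministic summary can be requested by adjusting the options before submitting the request. This is a sketch reusing the `openAIClient`, `openAIDeploymentName`, and `markdown` values from above:

```csharp
// Ask for a longer summary with a lower temperature (more focused output).
var longerOptions = new ChatCompletionsOptions
{
    MaxTokens = 800,
    Temperature = 0.2f,
    DeploymentName = openAIDeploymentName
};
longerOptions.Messages.Add(new ChatRequestUserMessage(
    "Please summarize the following text in 100 words or less:\n" + markdown));

ChatCompletions longer = await openAIClient.GetChatCompletionsAsync(longerOptions);
Console.WriteLine(longer.Choices[0].Message.Content);
```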
Clean up resources
When you no longer need the sample application or resources, remove the corresponding deployment and all resources.
```console
azd down
```
Next steps