Integrate OpenAI, Communication, and Organizational Data Features into a Line of Business App
Level: Intermediate
This tutorial demonstrates how Azure OpenAI, Azure Communication Services, and Microsoft Graph/Microsoft Graph Toolkit can be integrated into Line of Business (LOB) applications to enhance user productivity, elevate the user experience, and take LOB apps to the next level.
- AI: Enable users to ask questions in natural language and convert their answers to SQL that can be used to query a database, allow users to define rules that can be used to automatically generate email and SMS messages, and learn how natural language can be used to retrieve data from your own custom data sources. Azure OpenAI is used for these features.
- Communication: Enable in-app phone calling to customers and Email/SMS functionality using Azure Communication Services.
- Organizational Data: Pull in related organizational data that users may need (documents, chats, emails, calendar events) as they work with customers to avoid context switching. Providing access to this type of organizational data reduces the need for the user to switch to Outlook, Teams, OneDrive, other custom apps, their phone, etc. since the specific data and functionality they need is provided directly in the app. Microsoft Graph and Microsoft Graph Toolkit are used for this feature.
The application is a simple customer management app that allows users to manage their customers and related data. It consists of a front-end built using TypeScript that calls back-end APIs to retrieve data, interact with AI functionality, send email/SMS messages, and pull in organizational data. Here's an overview of the application solution that you'll walk through in this tutorial:
The tutorial will walk you through the process of setting up the required Azure and Microsoft 365 resources. It'll also walk you through the code that is used to implement the AI, communication, and organizational data features. While you won't be required to copy and paste code, some of the exercises will have you modify code to try out different scenarios.
What You'll Build in this Tutorial
Choose Your Own Adventure
You can complete the entire tutorial from start to finish or complete specific topics of interest to you. The tutorial is broken down into the following topic areas:
- Clone the Project Exercise (required exercise).
- AI Exercises: Create an Azure OpenAI resource and use it to convert natural language to SQL, generate email/SMS messages, and work with your own data and documents.
- Communication Exercises: Create an Azure Communication Services resource and use it to make phone calls from the app and send email/SMS messages.
- Organizational Data Exercises: Create a Microsoft Entra ID app registration so that Microsoft Graph and Microsoft Graph Toolkit can be used to authenticate and pull organizational data into the application.
Prerequisites
- Node - Node 16+ and npm 7+ will be used for this project
- git
- Visual Studio Code (although Visual Studio Code is recommended, any editor can be used)
- Azure subscription
- Microsoft 365 developer tenant
- Docker Desktop or another OCI (Open Container Initiative) compliant container runtime capable of running a container, such as Podman or nerdctl.
Microsoft Cloud Technologies used in this Tutorial
- Microsoft Entra ID
- Azure Communication Services
- Azure OpenAI Service
- Microsoft Graph
- Microsoft Graph Toolkit
Clone the Project
The code project used in this tutorial is available at https://github.com/microsoft/MicrosoftCloud. The project's repository includes both client-side and server-side code required to run the project, enabling you to explore the integrated features related to artificial intelligence (AI), communication, and organizational data. Additionally, the project serves as a resource to guide you in incorporating similar features into your own applications.
In this exercise you will:
- Clone the GitHub repository.
- Add an .env file into the project and update it.
Before proceeding, ensure that you have all of the prerequisites installed and configured as outlined in the Prerequisites section of this tutorial.
Clone the GitHub Repository and Create an .env File
Run the following command to clone the Microsoft Cloud GitHub Repository to your machine.
git clone https://github.com/microsoft/MicrosoftCloud
Open the MicrosoftCloud/samples/openai-acs-msgraph folder in Visual Studio Code.
Note
Although we'll use Visual Studio Code throughout this tutorial, any code editor can be used to work with the sample project.
Notice the following folders and files:
- client: Client-side application code.
- server: Server-side API code.
- docker-compose.yml: Used to run a local PostgreSQL database.
Rename the .env.example in the root of the project to .env.
Open the .env file and take a moment to look through the keys that are included:
```
ENTRAID_CLIENT_ID=
TEAM_ID=
CHANNEL_ID=
OPENAI_API_KEY=
OPENAI_ENDPOINT=
OPENAI_API_VERSION=2023-06-01-preview
OPENAI_MODEL=gpt-35-turbo
POSTGRES_USER=
POSTGRES_PASSWORD=
ACS_CONNECTION_STRING=
ACS_PHONE_NUMBER=
ACS_EMAIL_ADDRESS=
CUSTOMER_EMAIL_ADDRESS=
CUSTOMER_PHONE_NUMBER=
API_PORT=3000
AZURE_COGNITIVE_SEARCH_ENDPOINT=
AZURE_COGNITIVE_SEARCH_KEY=
AZURE_COGNITIVE_SEARCH_INDEX=
```
Update the following values in .env. These values will be used by the API server to connect to the local PostgreSQL database.
```
POSTGRES_USER=web
POSTGRES_PASSWORD=web-password
```
Now that you have the project in place, let's try out some of the application features and learn how they're built. Select the Next button below to continue or jump to a specific exercise using the table of contents.
AI: Create an Azure OpenAI Resource and Deploy a Model
To get started using Azure OpenAI in your applications, you need to create an Azure OpenAI Service and deploy a model that can be used to perform tasks such as converting natural language to SQL, generating email/SMS message content, and more.
In this exercise you will:
- Create an Azure OpenAI Service resource.
- Deploy a model.
- Update the .env file with values from your Azure OpenAI Service resource.
Create an Azure OpenAI Service Resource
Visit the Azure portal in your browser and sign in.
Type openai in the search bar at the top of the portal page and select Azure OpenAI from the options that appear.
Select Create in the toolbar.
Note
If you see a message about completing an application form to enable Azure OpenAI on your subscription, select the Click here to request access to Azure OpenAI service link and complete the form. Once you've completed the form, you'll need to wait for the Azure OpenAI team to approve your request. After receiving your approval notice, you can go back through this exercise and create the resource.
While this tutorial focuses on Azure OpenAI, if you have an OpenAI API key and would like to use it while you're waiting for access to Azure OpenAI, you can skip this section and go directly to the Update the Project's .env File section below. Assign your OpenAI API key to `OPENAI_API_KEY` in the .env file (you can ignore any other .env instructions related to OpenAI). Once you have access to Azure OpenAI, revisit this exercise, create the resource and model, and update the .env file with the values from your Azure OpenAI resource.

Perform the following tasks:
- Select your Azure subscription.
- Select the resource group to use (create a new one if needed).
- Select the region you'd like to use.
- Enter the resource name. It must be a unique value.
- Select the Standard S0 pricing tier.
Select Next until you get to the Review + submit screen. Select Create.
Once your Azure OpenAI resource is created, navigate to it and select Keys and Endpoint in the Resource Management section.
Locate the KEY 1 and Endpoint values. You'll use both values in the next section so copy them to a local file.
Select Model deployments in the Resource Management section.
Select the Manage Deployments button to go to Azure OpenAI Studio.
Select Create new deployment in the toolbar.
Enter the following values:
- Model: gpt-35-turbo.
- Model version: Auto-update to default.
- Deployment name: gpt-35-turbo.
Note
Azure OpenAI supports several different types of models. Each model can be used to handle different scenarios.
Select Create.
Once the model is deployed, select Completions in the Playground section.
Select the gpt-35-turbo model from the Deployments dropdown. Select Generate an email from the Examples dropdown.
Take a moment to read through the prompt text that's provided. Select Generate to see the text that the model generates.
Warning
If you get an error message about the model not being ready, wait a few minutes and try again. It can take a few minutes for the model to be fully deployed and ready to use.
If you get an error saying, "The completion operation does not work with the specified model.", this normally means you selected a newer model version rather than the default version. Select Deployments and delete the model you created earlier. Create a new gpt-35-turbo model deployment, ensure that you select Auto-update to default for the Model version, give it a name of gpt-35-turbo, and wait for the model to be fully deployed. Once it's deployed, go back to the Playground and try the completion again.
Select Regenerate multiple times. Note that the text is different each time.
To the right of the screen you'll see properties listed such as Temperature. Change the Temperature value to 0 and select Regenerate again. Read through the email text that is generated.
Select Regenerate one final time and note that the email text is the same as the text that was generated previously.
Note
Lowering the temperature means that the model will produce more repetitive and deterministic responses. Increasing the temperature will result in more unexpected or creative responses.
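To build intuition for why this happens: temperature rescales the model's token probabilities before sampling. The toy function below is an illustration only, not the service's actual implementation; it shows how dividing logits by a small temperature sharpens the distribution toward a single choice, which is why a temperature of 0 produces near-deterministic output.

```typescript
// Toy softmax-with-temperature: lower temperatures concentrate probability
// mass on the highest-scoring token.
function softmax(logits: number[], temperature: number): number[] {
    const t = Math.max(temperature, 1e-6); // avoid division by zero at T = 0
    const scaled = logits.map(l => l / t);
    const m = Math.max(...scaled);         // subtract max for numeric stability
    const exps = scaled.map(s => Math.exp(s - m));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map(e => e / sum);
}

console.log(softmax([2, 1, 0], 1).map(p => p.toFixed(2)));   // roughly [0.67, 0.24, 0.09]
console.log(softmax([2, 1, 0], 0.1).map(p => p.toFixed(2))); // top token climbs to ~1.00
```

At temperature 1 the distribution stays spread out; at 0.1 almost all of the probability lands on the top token, mirroring the repetitive output you saw in the Playground.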
Update the Project's .env File
Go back to Visual Studio Code and open the `.env` file at the root of the project.

Copy the KEY 1 value from your Azure OpenAI resource and assign it to `OPENAI_API_KEY` in the .env file located in the root of the openai-acs-msgraph folder:

```
OPENAI_API_KEY=<KEY_1_VALUE>
```

Copy the Endpoint value and assign it to `OPENAI_ENDPOINT` in the .env file. Remove the `/` character from the end of the value if it's present:

```
OPENAI_ENDPOINT=<ENDPOINT_VALUE>
```
Note
You'll see that values for
OPENAI_MODEL
andOPENAI_API_VERSION
are already set in the .env file. The model value is set to gpt-35-turbo which should match the model name you created earlier in this exercise. The API version is set to a supported value defined in the Azure OpenAI reference documentation.Save the .env file.
Start the Application Services
It's time to start up your application services including the database, API server, and web server.
In the following steps you'll create three terminal windows in Visual Studio Code.
Right-click on the .env file in the Visual Studio Code file list and select Open in Integrated Terminal. Ensure that your terminal is at the root of the project - openai-acs-msgraph - before continuing.
Choose from one of the following options to start the PostgreSQL database:
- If you have Docker Desktop installed and running, run `docker-compose up` in the terminal window and press Enter.
- If you have Podman with podman-compose installed and running, run `podman-compose up` in the terminal window and press Enter.
- To run the PostgreSQL container directly using Docker Desktop, Podman, nerdctl, or another container runtime you have installed, run the following command in the terminal window:

  Mac, Linux, or Windows Subsystem for Linux (WSL):

  ```
  [docker | podman | nerdctl] run --name postgresDb -e POSTGRES_USER=web -e POSTGRES_PASSWORD=web-password -e POSTGRES_DB=CustomersDB -v $(pwd)/data:/var/lib/postgresql/data -p 5432:5432 postgres
  ```

  Windows with PowerShell:

  ```
  [docker | podman] run --name postgresDb -e POSTGRES_USER=web -e POSTGRES_PASSWORD=web-password -e POSTGRES_DB=CustomersDB -v ${PWD}/data:/var/lib/postgresql/data -p 5432:5432 postgres
  ```
Once the database container starts, press the + icon in the Visual Studio Code Terminal toolbar to create a second terminal window.

`cd` into the server/typescript folder and run the following commands to install the dependencies and start the API server.

```
npm install
npm start
```

Press the + icon again in the Visual Studio Code Terminal toolbar to create a third terminal window.

`cd` into the client folder and run the following commands to install the dependencies and start the web server.

```
npm install
npm start
```
A browser will launch and you'll be taken to http://localhost:4200.
AI: Natural Language to SQL
The quote "Just because you can doesn't mean you should" is a useful guide when thinking about AI capabilities. For example, Azure OpenAI's natural language to SQL feature allows users to make database queries in plain English, which can be a powerful tool to enhance their productivity. However, powerful doesn't always mean appropriate or safe. This exercise will demonstrate how to use this AI feature while also discussing important considerations to keep in mind before deciding to implement it.
Here's an example of a natural language query that can be used to retrieve data from a database:
Get the total revenue for all companies in London.
With the proper prompts, Azure OpenAI will convert this query to SQL that can be used to return results from the database. As a result, non-technical users including business analysts, marketers, and executives can more easily retrieve valuable information from databases without grappling with intricate SQL syntax or relying on constrained datagrids and filters. This streamlined approach can boost productivity by eliminating the need for users to seek assistance from technical experts.
This exercise provides a starting point that will help you understand how natural language to SQL works, introduce you to some important considerations, get you thinking about pros and cons, and show you the code to get started.
In this exercise, you will:
- Use GPT prompts to convert natural language to SQL.
- Experiment with different GPT prompts.
- Use the generated SQL to query the PostgreSQL database started earlier.
- Return query results from PostgreSQL and display them in the browser.
Let's start by experimenting with different GPT prompts that can be used to convert natural language to SQL.
Using the Natural Language to SQL Feature
In the previous exercise you started the database, APIs, and application. You also updated the `.env` file. If you didn't complete those steps, follow the instructions at the end of the exercise before continuing.

Go back to the browser (http://localhost:4200) and locate the Custom Query section of the page below the datagrid. Notice that a sample query value is already included: Get the total revenue for all orders. Group by company and include the city.
Select the Run Query button. This will pass the user's natural language query to Azure OpenAI which will convert it to SQL. The SQL query will then be used to query the database and return any potential results.
Run the following Custom Query:
Get the total revenue for Adventure Works Cycles. Include the contact information as well.
View the terminal window running the API server in Visual Studio Code and notice it displays the SQL query returned from Azure OpenAI. The JSON data is used by the server-side APIs to query the PostgreSQL database. Any string values included in the query are added as parameter values to prevent SQL injection attacks:
```json
{
  "sql": "SELECT c.company, c.city, c.email, SUM(o.total) AS revenue FROM customers c INNER JOIN orders o ON c.id = o.customer_id WHERE c.company = $1 GROUP BY c.company, c.city, c.email",
  "paramValues": ["Adventure Works Cycles"]
}
```
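Server-side, a `sql`/`paramValues` pair like this can be handed to a PostgreSQL client as a parameterized query (for example, `pool.query(sql, paramValues)` with the `pg` package). As a sketch of a defensive check — hypothetical, not part of the sample's code — you could verify that the number of supplied parameters matches the `$1`, `$2`, ... placeholders before the query ever runs:

```typescript
// Mirrors the shape of the JSON object returned by the model.
interface QueryData {
    sql: string;
    paramValues: string[];
}

// Count the distinct $1, $2, ... placeholders in the SQL string and confirm
// that paramValues supplies exactly that many values.
function placeholdersMatch(query: QueryData): boolean {
    const matches = query.sql.match(/\$\d+/g) ?? [];
    const distinct = new Set(matches);
    return distinct.size === query.paramValues.length;
}

const query: QueryData = {
    sql: 'SELECT c.company, SUM(o.total) AS revenue FROM customers c INNER JOIN orders o ON c.id = o.customer_id WHERE c.company = $1 GROUP BY c.company',
    paramValues: ['Adventure Works Cycles']
};

console.log(placeholdersMatch(query)); // true
```

A mismatch here would indicate the model returned malformed output, and the server could reject the query instead of sending it to the database.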
Go back to the browser and select Reset Data to view all of the customers again in the datagrid.
Exploring the Natural Language to SQL Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Note
The goal of this exercise is to show what's possible with natural language to SQL functionality and demonstrate how to get started using it. As mentioned earlier, it's important to discuss if this type of AI is appropriate for your organization before proceeding with any implementation. It's also imperative to plan for proper prompt rules and database security measures to prevent unauthorized access and protect sensitive data.
Now that you've seen the natural language to SQL feature in action, let's examine how it is implemented.
Open the server/apiRoutes.ts file and locate the `generateSql` route. This API route is called by the client-side application running in the browser and is used to generate SQL from a natural language query. Once the SQL query is retrieved, it's used to query the database and return results.

```typescript
router.post('/generateSql', async (req, res) => {
    const userPrompt = req.body.prompt;

    if (!userPrompt) {
        return res.status(400).json({ error: 'Missing parameter "prompt".' });
    }

    try {
        // Call Azure OpenAI to convert the user prompt into a SQL query
        const sqlCommandObject = await getSQLFromNLP(userPrompt);

        let result: any[] = [];
        // Execute the SQL query
        if (sqlCommandObject && !sqlCommandObject.error) {
            result = await queryDb(sqlCommandObject) as any[];
        }
        else {
            result = [{ query_error: sqlCommandObject.error }];
        }
        res.json(result);
    }
    catch (e) {
        console.error(e);
        res.status(500).json({ error: 'Error generating or running SQL query.' });
    }
});
```
Notice the following functionality in the `generateSql` route:

- It retrieves the user prompt value from `req.body.prompt` and assigns it to a variable named `userPrompt`. This value will be used in the GPT prompt.
- It calls a `getSQLFromNLP()` function to convert natural language to SQL.
- It passes the generated SQL to a function named `queryDb` that executes the SQL query and returns results from the database.
Open the server/openAI.ts file in your editor and locate the `getSQLFromNLP()` function. This function is called by the `generateSql` route and is used to convert natural language to SQL.

```typescript
async function getSQLFromNLP(userPrompt: string): Promise<QueryData> {
    // Get the high-level database schema summary to be used in the prompt.
    // The db.schema file could be generated by a background process or the
    // schema could be dynamically retrieved.
    const dbSchema = await fs.promises.readFile('db.schema', 'utf8');

    const systemPrompt = `
      Assistant is a natural language to SQL bot that returns only a JSON object with the SQL query and
      the parameter values in it. The SQL will query a PostgreSQL database.

      PostgreSQL tables, with their columns:

      ${dbSchema}

      Rules:
      - Convert any strings to a PostgreSQL parameterized query value to avoid SQL injection attacks.
      - Always return a JSON object with the SQL query and the parameter values in it.
      - Return a JSON object. Do NOT include any text outside of the JSON object.
      - Example JSON object to return: { "sql": "", "paramValues": [] }

      User: "Display all company reviews. Group by company."
      Assistant: { "sql": "SELECT * FROM reviews", "paramValues": [] }

      User: "Display all reviews for companies located in cities that start with 'L'."
      Assistant: { "sql": "SELECT r.* FROM reviews r INNER JOIN customers c ON r.customer_id = c.id WHERE c.city LIKE 'L%'", "paramValues": [] }

      User: "Display revenue for companies located in London. Include the company name and city."
      Assistant: { "sql": "SELECT c.company, c.city, SUM(o.total) AS revenue FROM customers c INNER JOIN orders o ON c.id = o.customer_id WHERE c.city = $1 GROUP BY c.company, c.city", "paramValues": ["London"] }

      User: "Get the total revenue for Adventure Works Cycles. Include the contact information as well."
      Assistant: { "sql": "SELECT c.company, c.city, c.email, SUM(o.total) AS revenue FROM customers c INNER JOIN orders o ON c.id = o.customer_id WHERE c.company = $1 GROUP BY c.company, c.city, c.email", "paramValues": ["Adventure Works Cycles"] }

      - Convert any strings to a PostgreSQL parameterized query value to avoid SQL injection attacks.
      - Do NOT include any text outside of the JSON object. Do not provide any additional explanations or context. Just the JSON object is needed.
    `;

    let queryData: QueryData = { sql: '', paramValues: [], error: '' };
    let results = '';

    try {
        results = await callOpenAI(systemPrompt, userPrompt);
        if (results) {
            console.log('results', results);
            const parsedResults = JSON.parse(results);
            queryData = { ...queryData, ...parsedResults };
            if (isProhibitedQuery(queryData.sql)) {
                queryData.sql = '';
                queryData.error = 'Prohibited query.';
            }
        }
    }
    catch (error) {
        console.log(error);
        if (isProhibitedQuery(results)) {
            queryData.sql = '';
            queryData.error = 'Prohibited query.';
        }
        else {
            queryData.error = results;
        }
    }

    return queryData;
}
```
- A `userPrompt` parameter is passed into the function. The `userPrompt` value is the natural language query entered by the user in the browser.
- A `systemPrompt` defines the type of AI assistant to be used and the rules that should be followed. This helps Azure OpenAI understand the database structure, what rules to apply, and how to return the generated SQL query and parameters.
- A function named `callOpenAI()` is called and the `systemPrompt` and `userPrompt` values are passed to it.
- The results are checked to ensure no prohibited values are included in the generated SQL query. If prohibited values are found, the SQL query is set to an empty string.
Let's walk through the system prompt in more detail:
```typescript
const systemPrompt = `
  Assistant is a natural language to SQL bot that returns only a JSON object with the SQL query and
  the parameter values in it. The SQL will query a PostgreSQL database.

  PostgreSQL tables, with their columns:

  ${dbSchema}

  Rules:
  - Convert any strings to a PostgreSQL parameterized query value to avoid SQL injection attacks.
  - Always return a JSON object with the SQL query and the parameter values in it.
  - Return a JSON object. Do NOT include any text outside of the JSON object.
  - Example JSON object to return: { "sql": "", "paramValues": [] }

  User: "Display all company reviews. Group by company."
  Assistant: { "sql": "SELECT * FROM reviews", "paramValues": [] }

  User: "Display all reviews for companies located in cities that start with 'L'."
  Assistant: { "sql": "SELECT r.* FROM reviews r INNER JOIN customers c ON r.customer_id = c.id WHERE c.city LIKE 'L%'", "paramValues": [] }

  User: "Display revenue for companies located in London. Include the company name and city."
  Assistant: { "sql": "SELECT c.company, c.city, SUM(o.total) AS revenue FROM customers c INNER JOIN orders o ON c.id = o.customer_id WHERE c.city = $1 GROUP BY c.company, c.city", "paramValues": ["London"] }

  User: "Get the total revenue for Adventure Works Cycles. Include the contact information as well."
  Assistant: { "sql": "SELECT c.company, c.city, c.email, SUM(o.total) AS revenue FROM customers c INNER JOIN orders o ON c.id = o.customer_id WHERE c.company = $1 GROUP BY c.company, c.city, c.email", "paramValues": ["Adventure Works Cycles"] }

  - Convert any strings to a PostgreSQL parameterized query value to avoid SQL injection attacks.
  - Do NOT include any text outside of the JSON object. Do not provide any additional explanations or context. Just the JSON object is needed.
`;
```
The type of AI assistant to be used is defined. In this case a "natural language to SQL bot".
Table names and columns in the database are defined. The high-level schema included in the prompt can be found in the server/db.schema file and looks like the following.
```
- customers (id, company, city, email)
- orders (id, customer_id, date, total)
- order_items (id, order_id, product_id, quantity, price)
- reviews (id, customer_id, review, date, comment)
```
Tip
You may consider creating read-only views that only contain the data users are allowed to query using natural language to SQL.
A rule is defined to convert any string values to a parameterized query value to avoid SQL injection attacks.
A rule is defined to always return a JSON object (and nothing else) with the SQL query and the parameter values in it.
An example is given for the type of JSON object to return.
Example user prompts and the expected SQL query and parameter values are provided. This is referred to as "few-shot" learning. Although LLMs are trained on large amounts of data, they can be adapted to new tasks with only a few examples. An alternative approach is "zero-shot" learning where no example is provided and the model is expected to generate the correct SQL query and parameter values.
Two critical rules are repeated again at the bottom of the system prompt to avoid "recency bias".
Tip
Learn more about recency bias in the Azure OpenAI documentation.
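The few-shot examples above are embedded as plain text inside the system prompt. An alternative pattern — shown here as a sketch, not as the sample's code — is to pass each example as its own user/assistant message pair in the chat `messages` array, which some models follow more reliably:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Build a chat payload where each few-shot example becomes a user/assistant
// message pair instead of inline text inside the system prompt.
function buildFewShotMessages(
    systemPrompt: string,
    examples: Array<{ user: string; assistant: string }>,
    userPrompt: string
): ChatMessage[] {
    const messages: ChatMessage[] = [{ role: 'system', content: systemPrompt }];
    for (const example of examples) {
        messages.push({ role: 'user', content: example.user });
        messages.push({ role: 'assistant', content: example.assistant });
    }
    // The real user prompt always goes last.
    messages.push({ role: 'user', content: userPrompt });
    return messages;
}

const messages = buildFewShotMessages(
    'Assistant is a natural language to SQL bot that returns only a JSON object...',
    [{
        user: 'Display all company reviews. Group by company.',
        assistant: '{ "sql": "SELECT * FROM reviews", "paramValues": [] }'
    }],
    'Get the total revenue for all orders.'
);
console.log(messages.length); // 4: system + one example pair + the user prompt
```

Either approach works with the chat completions API; keeping the examples in the system prompt, as the sample does, keeps the request-building code simpler.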
The `getSQLFromNLP()` function sends the system and user prompts to a function named `callOpenAI()`, which is also located in the server/openAI.ts file. The `callOpenAI()` function determines if the Azure OpenAI service or OpenAI service should be called by checking environment variables. If a key, endpoint, and model are available in the environment variables, then Azure OpenAI is called; otherwise OpenAI is called.

```typescript
function callOpenAI(systemPrompt: string, userPrompt: string, temperature = 0, useBYOD = false) {
    const isAzureOpenAI = OPENAI_API_KEY && OPENAI_ENDPOINT && OPENAI_MODEL;

    if (isAzureOpenAI && useBYOD) {
        return getAzureOpenAIBYODCompletion(systemPrompt, userPrompt, temperature);
    }

    if (isAzureOpenAI) {
        return getAzureOpenAICompletion(systemPrompt, userPrompt, temperature);
    }

    return getOpenAICompletion(systemPrompt, userPrompt, temperature);
}
```
Note

Although we'll focus on Azure OpenAI throughout this tutorial, if you only supply an `OPENAI_API_KEY` value in the .env file, the application will use OpenAI instead. If you choose to use OpenAI instead of Azure OpenAI, you may see different results in some cases.

Locate the `getAzureOpenAICompletion()` function.

```typescript
async function getAzureOpenAICompletion(systemPrompt: string, userPrompt: string, temperature: number): Promise<string> {
    checkRequiredEnvVars(['OPENAI_API_KEY', 'OPENAI_ENDPOINT', 'OPENAI_MODEL']);

    const fetchUrl = `${OPENAI_ENDPOINT}/openai/deployments/${OPENAI_MODEL}/chat/completions?api-version=${OPENAI_API_VERSION}`;

    const messageData: ChatGPTData = {
        max_tokens: 1024,
        temperature,
        messages: [
            { role: 'system', content: systemPrompt },
            { role: 'user', content: userPrompt }
        ]
    };

    const headersBody: OpenAIHeadersBody = {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'api-key': OPENAI_API_KEY
        },
        body: JSON.stringify(messageData),
    };

    const completion = await fetchAndParse(fetchUrl, headersBody);
    console.log(completion);

    let content = (completion.choices[0]?.message?.content?.trim() ?? '') as string;
    console.log('Azure OpenAI Output: \n', content);
    if (content && content.includes('{') && content.includes('}')) {
        content = extractJson(content);
    }

    console.log('After parse: \n', content);
    return content;
}

function checkRequiredEnvVars(requiredEnvVars: string[]) {
    for (const envVar of requiredEnvVars) {
        if (!process.env[envVar]) {
            throw new Error(`Missing ${envVar} in environment variables.`);
        }
    }
}

async function fetchAndParse(url: string, headersBody: Record<string, any>): Promise<any> {
    try {
        const response = await fetch(url, headersBody);
        return await response.json();
    } catch (error) {
        console.error(`Error fetching data from ${url}:`, error);
        throw error;
    }
}
```
This function does the following:

- Accepts `systemPrompt`, `userPrompt`, and `temperature` parameters.
  - `systemPrompt`: Lets the Azure OpenAI model know what role it should play and what rules to follow.
  - `userPrompt`: User information entered into the application, such as natural language or rules, that will be used by the model to generate the output.
  - `temperature`: Determines how creative the model should be when generating a response. A higher value means the model will take more risks.
- Ensures that a valid Azure OpenAI API key, endpoint, and model are available by calling `checkRequiredEnvVars()`.
- Creates a `fetchUrl` value that is used to call Azure OpenAI's REST API and embeds the endpoint, model, and API version values from the environment variables into the URL.
- Creates a `messageData` object that includes `max_tokens`, `temperature`, and `messages` to send to Azure OpenAI.
  - `max_tokens`: The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Older models have a context length of 2,048 tokens while newer ones support 4,096, 8,192, or even 32,768 tokens depending on the model being used.
  - `temperature`: What sampling temperature to use. A higher value means the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer.
  - `messages`: Represents the messages to generate chat completions for, in the chat format. In this example two messages are passed in: one for the system and one for the user. The system message defines the overall behavior and rules that will be used, while the user message defines the prompt text provided by the user.
- Calls `fetchAndParse()` to send the `fetchUrl` and `headersBody` values to Azure OpenAI.
- Processes the response by retrieving the `completion.choices[0].message.content` value. If the response contains the expected results, the code extracts the JSON object from the response and returns it.

Note

You can learn more about these parameters and others in the Azure OpenAI reference documentation.
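The `extractJson()` helper called above isn't shown in this walkthrough. One minimal way to implement it — an assumption about its behavior, not a copy of the sample's code — is to slice out the text between the first `{` and the last `}`, which strips any explanation the model wraps around the JSON object:

```typescript
// Return the substring between the first '{' and the last '}' so that any
// extra prose the model adds around the JSON object is discarded.
function extractJson(text: string): string {
    const start = text.indexOf('{');
    const end = text.lastIndexOf('}');
    if (start === -1 || end === -1 || end < start) {
        return '';
    }
    return text.substring(start, end + 1);
}

const raw = 'Here is the query you asked for: { "sql": "SELECT * FROM reviews", "paramValues": [] } Hope that helps!';
console.log(extractJson(raw)); // { "sql": "SELECT * FROM reviews", "paramValues": [] }
```

This kind of defensive extraction matters because, even with a "JSON only" rule in the prompt, models sometimes add surrounding commentary.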
Comment out the following lines in the `getSQLFromNLP()` function:

```typescript
// if (isProhibitedQuery(queryData.sql)) {
//     queryData.sql = '';
//     queryData.error = 'Prohibited query.';
// }
```
Save openAI.ts. The API server will automatically rebuild the TypeScript code and restart the server.
Go back to the browser and enter Select all table names from the database into the Custom Query input. Select Run Query. Are table names displayed?
Go back to the `getSQLFromNLP()` function in server/openAI.ts, add the following rule into the Rules: section of the system prompt, and then save the file.

```
- Do not allow the SELECT query to return table names, function names, or procedure names.
```
Go back to the browser and perform the following tasks:
- Enter Select all table names from the database into the Custom Query input. Select Run Query. Are table names displayed?
- Enter Select all function names from the database. into the Custom Query input and select Run Query again. Are function names displayed?
QUESTION: Why is this still working after adding a rule saying that table names, function names, and procedure names aren't allowed?
ANSWER: This is due to the "only JSON" rule. If the rules were more flexible and didn't require a JSON object to be returned, you may see a message about Azure OpenAI being unable to perform the task.
Note
OpenAI models can occasionally return unexpected results that don't match the rules you've defined. It's important to plan for that in your code.
Take out the following rule from `systemPrompt` and save the file.

```
- Do NOT include any text outside of the JSON object. Do not provide any additional explanations or context. Just the JSON object is needed.
```
Run Select all table names from the database query again.
Notice the message now displayed in the browser. Azure OpenAI is unable to perform the task because of the following rule. Since we removed the "only JSON" rule, the response can provide additional details about why the task can't be performed.

```
- Do not allow the SELECT query to return table names, function names, or procedure names.
```
You can see that AI may generate unexpected results even if you have specific rules in place. This is why you need to plan your prompt text and rules carefully, but also plan to add a post-processing step into your code to handle cases where you receive unexpected results.
Go back to server/openAI.ts and locate the `isProhibitedQuery()` function. This is an example of post-processing code that can be run after Azure OpenAI returns results. Notice that it sets the `sql` property to an empty string if prohibited keywords are returned in the generated SQL query. This ensures that if unexpected results are returned from Azure OpenAI, the SQL query won't be run against the database.

```typescript
function isProhibitedQuery(query: string): boolean {
    if (!query) return false;

    const prohibitedKeywords = [
        'insert', 'update', 'delete', 'drop', 'truncate', 'alter', 'create',
        'replace', 'information_schema', 'pg_catalog', 'pg_tables', 'pg_namespace',
        'pg_class', 'table_schema', 'table_name', 'column_name', 'column_default',
        'is_nullable', 'data_type', 'udt_name', 'character_maximum_length',
        'numeric_precision', 'numeric_scale', 'datetime_precision',
        'interval_type', 'collation_name', 'grant', 'revoke', 'rollback',
        'commit', 'savepoint', 'vacuum', 'analyze'
    ];
    const queryLower = query.toLowerCase();

    return prohibitedKeywords.some(keyword => queryLower.includes(keyword));
}
```
Note
It's important to note that this is only demo code. There may be other prohibited keywords required to cover your specific use cases if you choose to convert natural language to SQL. This is a feature that you must plan for and use with care to ensure that only valid SQL queries are returned and run against the database. In addition to prohibited keywords, you will also need to factor in security as well.
Go back to server/openAI.ts and uncomment the following code in the `getSQLFromNLP()` function. Save the file.

```typescript
if (isProhibitedQuery(queryData.sql)) {
    queryData.sql = '';
}
```
Remove the following rule from `systemPrompt` and save the file:

```
- Do not allow the SELECT query to return table names, function names, or procedure names.
```
Go back to the browser, enter Select all table names from the database into the Custom Query input again and select the Run Query button.
Do any table results display? Even without the rule in place, the `isProhibitedQuery()` post-processing code prohibits that type of query from being run against the database.

As discussed earlier, integrating natural language to SQL in line of business applications can be quite beneficial to users, but it does come with its own set of considerations.
Advantages:
User-friendliness: This feature can make database interaction more accessible to users without technical expertise, reducing the need for SQL knowledge and potentially speeding up operations.
Increased productivity: Business analysts, marketers, executives, and other non-technical users can retrieve valuable information from databases without having to rely on technical experts, thereby increasing efficiency.
Broad application: By using advanced language models, applications can be designed to cater to a wide range of users and use-cases.
Considerations:
Security: One of the biggest concerns is security. If users can interact with databases using natural language, there needs to be robust security measures in place to prevent unauthorized access or malicious queries. You may consider implementing a read-only mode to prevent users from modifying data.
Data Privacy: Certain data might be sensitive and should not be easily accessible, so you'll need to ensure proper safeguards and user permissions are in place.
Accuracy: While natural language processing has improved significantly, it's not perfect. Misinterpretation of user queries could lead to inaccurate results or unexpected behavior. You'll need to plan for how unexpected results will be handled.
Efficiency: There are no guarantees that the SQL returned from a natural language query will be efficient. In some cases, additional calls to Azure OpenAI may be required if post-processing rules detect issues with SQL queries.
Training and User Adaptation: Users need to be trained to formulate their queries correctly. While it's easier than learning SQL, there can still be a learning curve involved.
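As an example of the read-only safeguard mentioned under Security, a guard can reject anything that isn't a single read-only SELECT statement before it reaches the database. The `isReadOnlySelect()` helper below is a hypothetical sketch, not part of the sample app, and a real implementation would need a far more complete keyword list (as the `isProhibitedQuery()` example shows):

```typescript
// Hypothetical read-only guard: accept only a single SELECT statement.
// Assumption: this runs as an extra check before any query is executed.
function isReadOnlySelect(query: string): boolean {
    // Strip trailing semicolons and surrounding whitespace.
    const trimmed = query.trim().replace(/;+\s*$/, '');
    // Must begin with SELECT.
    if (!/^select\b/i.test(trimmed)) return false;
    // No stacked statements such as "SELECT 1; DELETE ...".
    if (trimmed.includes(';')) return false;
    // Reject a few obvious write/DDL keywords (intentionally incomplete).
    if (/\b(insert|update|delete|drop|alter|grant)\b/i.test(trimmed)) return false;
    return true;
}
```

Pairing a check like this with a database login that only has read permissions provides defense in depth: even if the guard misses something, the database itself refuses writes.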
A few final points to consider before moving on to the next exercise:
- Remember that "Just because you can doesn't mean you should" applies here. Use extreme caution and careful planning before integrating natural language to SQL into an application. It's important to understand the potential risks and to plan for them.
- Before using this type of technology, discuss potential scenarios with your team, database administrators, security team, stakeholders, and any other relevant parties to ensure that it's appropriate for your organization. It's important to discuss if natural language to SQL meets security, privacy, and any other requirements your organization may have in place.
- Security should be a primary concern and built into the planning, development, and deployment process.
- While natural language to SQL can be very powerful, careful planning must go into it to ensure prompts have required rules and that post-processing functionality is included. Plan for additional time to implement and test this type of functionality and to account for scenarios where unexpected results are returned.
- With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering. Learn more about Data, privacy, and security for Azure OpenAI Service.
You've now seen how to use Azure OpenAI to convert natural language to SQL and learned about the pros and cons of implementing this type of functionality. In the next exercise, you'll learn how email and SMS messages can be generated using Azure OpenAI.
AI: Generating Completions
In addition to the natural language to SQL feature, you can also use Azure OpenAI Service to generate email and SMS messages to enhance user productivity and streamline communication workflows. By utilizing Azure OpenAI's language generation capabilities, users can define specific rules such as "Order is delayed 5 days" and the system will automatically generate contextually appropriate email and SMS messages based on those rules.
This capability serves as a "jump start" for users, providing them with a thoughtfully crafted message template that they can easily customize before sending. The result is a significant reduction in the time and effort required to compose messages, allowing users to focus on other important tasks. Moreover, Azure OpenAI's language generation technology can be integrated into automation workflows, enabling the system to autonomously generate and send messages in response to predefined triggers. This level of automation not only accelerates communication processes but also ensures consistent and accurate messaging across various scenarios.
In this exercise, you will:
- Experiment with different GPT prompts.
- Use GPT prompts to generate completions for email and SMS messages.
- Explore code that enables GPT completions.
- Learn about the importance of prompt engineering and including rules in your prompts.
Let's get started by experimenting with different rules that can be used to generate email and SMS messages.
Using the GPT Completions Feature
In a previous exercise you started the database, APIs, and application. You also updated the `.env` file. If you didn't complete those steps, follow the instructions at the end of that exercise before continuing.

Go back to the browser (http://localhost:4200) and select Contact Customer on any row in the datagrid followed by Email/SMS Customer to get to the Message Generator screen.
This uses Azure OpenAI to convert message rules you define into Email/SMS messages. Perform the following tasks:
Enter a rule such as Order is delayed 5 days into the input and select the Generate Email/SMS Messages button.
You will see a subject and body generated for the email and a short message generated for the SMS.
Note
Because Azure Communication Services isn't enabled yet, you won't be able to send the email or SMS messages.
Close the email/SMS dialog window in the browser. Now that you've seen this feature in action, let's examine how it's implemented.
Exploring the GPT Completions Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Open the server/apiRoutes.ts file and locate the `completeEmailSmsMessages` route. This API is called by the front-end portion of the app when the Generate Email/SMS Messages button is selected. It retrieves the user prompt, company, and contact name values from the body and passes them to the `completeEmailSMSMessages()` function in the server/openAI.ts file. The results are then returned to the client.

```typescript
router.post('/completeEmailSmsMessages', async (req, res) => {
    const { prompt, company, contactName } = req.body;

    if (!prompt || !company || !contactName) {
        return res.status(400).json({
            status: false,
            error: 'The prompt, company, and contactName parameters must be provided.'
        });
    }

    let result;
    try {
        // Call OpenAI to get the email and SMS message completions
        result = await completeEmailSMSMessages(prompt, company, contactName);
    } catch (e: unknown) {
        console.error('Error parsing JSON:', e);
    }

    res.json(result);
});
```
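On the calling side, the same three-parameter requirement can be validated before the request is ever sent. The `buildEmailSmsRequest()` helper below is a hypothetical sketch (it isn't part of the sample app) that mirrors the route's guard clause:

```typescript
// Hypothetical client-side helper that mirrors the route's validation:
// the API responds with a 400 unless prompt, company, and contactName
// are all provided, so fail fast before making the network call.
interface EmailSmsRequest {
    prompt: string;
    company: string;
    contactName: string;
}

function buildEmailSmsRequest(prompt: string, company: string, contactName: string): EmailSmsRequest {
    if (!prompt || !company || !contactName) {
        throw new Error('The prompt, company, and contactName parameters must be provided.');
    }
    return { prompt, company, contactName };
}

// Example usage (the endpoint path prefix is an assumption):
// await fetch('/api/completeEmailSmsMessages', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(buildEmailSmsRequest('Order is delayed 5 days', 'Contoso', 'Jane Doe'))
// });
```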
Open the server/openAI.ts file and locate the `completeEmailSMSMessages()` function.

```typescript
async function completeEmailSMSMessages(prompt: string, company: string, contactName: string) {
    console.log('Inputs:', prompt, company, contactName);

    const systemPrompt = `
    Assistant is a bot designed to help users create email and SMS messages from data and return a JSON object with the email and SMS message information in it.

    Rules:
    - Generate a subject line for the email message.
    - Use the User Rules to generate the messages.
    - All messages should have a friendly tone and never use inappropriate language.
    - SMS messages should be in plain text format and NO MORE than 160 characters.
    - Start the message with "Hi <Contact Name>,\n\n". Contact Name can be found in the user prompt.
    - Add carriage returns to the email message to make it easier to read.
    - End with a signature line that says "Sincerely,\nCustomer Service".
    - Return a valid JSON object with the emailSubject, emailBody, and SMS message values in it:

    { "emailSubject": "", "emailBody": "", "sms": "" }

    - The sms property value should be in plain text format and NO MORE than 160 characters.
    - Only return a valid JSON object. Do NOT include any text outside of the JSON object. Do not provide any additional explanations or context. Just the JSON object is needed.
    `;

    const userPrompt = `
    User Rules:
    ${prompt}

    Contact Name:
    ${contactName}
    `;

    let content: EmailSmsResponse = { status: true, email: '', sms: '', error: '' };
    let results = '';

    try {
        results = await callOpenAI(systemPrompt, userPrompt, 0.5);
        if (results) {
            const parsedResults = JSON.parse(results);
            content = { ...content, ...parsedResults, status: true };
        }
    } catch (e) {
        console.log(e);
        content.status = false;
        content.error = results;
    }

    return content;
}
```
This function has the following features:

- `systemPrompt` is used to define that an AI assistant capable of generating email and SMS messages is required. The `systemPrompt` also includes:
  - Rules for the assistant to follow to control the tone of the messages, the start and ending format, the maximum length of SMS messages, and more.
  - Information about data that should be included in the response - a JSON object in this case and only a JSON object.
  - Two critical rules repeated at the bottom of the system prompt to avoid "recency bias".
- `userPrompt` is used to define the rules and contact name that the end user would like to include as the email and SMS messages are generated. The Order is delayed 5 days rule you entered earlier is included in `userPrompt`.
- The function calls the `callOpenAI()` function you explored earlier to generate the email and SMS completions.
Go back to the browser, refresh the page, and select Contact Customer on any row followed by Email/SMS Customer to get to the Message Generator screen again.
Enter the following rules into the Message Generator input:
- Order is ahead of schedule.
- Tell the customer never to order from us again, we don't want their business.
Select Generate Email/SMS Messages and note the message. The `All messages should have a friendly tone and never use inappropriate language.` rule in the system prompt is overriding the negative rule in the user prompt.

Go back to server/openAI.ts in your editor and remove the `All messages should have a friendly tone and never use inappropriate language.` rule from the prompt in the `completeEmailSMSMessages()` function. Save the file.

Go back to the email/SMS message generator in the browser and run the same rules again:
- Order is ahead of schedule.
- Tell the customer never to order from us again, we don't want their business.
Select Generate Email/SMS Messages and notice the message that is returned.
What is happening in these scenarios? When using Azure OpenAI, content filtering is applied to ensure that appropriate language is always used. If you're using OpenAI, the rule defined in the system prompt is used to ensure the message returned is appropriate.
Note
This illustrates the importance of engineering your prompts with the right information and rules to ensure proper results are returned. Read more about this process in the Introduction to prompt engineering documentation.
Undo the changes you made to `systemPrompt` in `completeEmailSMSMessages()`, save the file, and run it again using only the `Order is ahead of schedule.` rule (don't include the negative rule). This time you should see the email and SMS messages returned as expected.

A few final points to consider before moving on to the next exercise:
- It's important to have a human in the loop to review generated messages. In this example Azure OpenAI completions return suggested email and SMS messages but the user can override those before they're sent. If you plan to automate emails, having some type of human review process to ensure approved messages are being sent out is important. View AI as being a copilot, not an autopilot.
- Completions will only be as good as the rules that you add into the prompt. Take time to test your prompts and the completions that are returned. Invite other project stakeholders to review the completions as well.
- You may need to include post-processing code to ensure unexpected results are handled properly.
- Use system prompts to define the rules and information that the AI assistant should follow. Use user prompts to define the rules and information that the end user would like to include in the completions.
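As an example of the post-processing point above: the system prompt asks for SMS text of no more than 160 characters, but the model isn't guaranteed to comply. A hypothetical `clampSms()` helper (not part of the sample app) could trim oversized completions before they're shown to the user:

```typescript
// Hypothetical post-processing step: enforce the 160-character SMS limit
// defensively, since the model may ignore the rule in the system prompt.
function clampSms(sms: string, maxLength = 160): string {
    const text = sms.trim();
    if (text.length <= maxLength) return text;

    // Cut at the last word boundary that fits, then add an ellipsis.
    const cut = text.slice(0, maxLength - 1);
    const lastSpace = cut.lastIndexOf(' ');
    return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + '…';
}
```

Because the user reviews the message before sending, a visible truncation like this is preferable to silently sending a multi-segment SMS.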
AI: Bring Your Own Data
The integration of Azure OpenAI Natural Language Processing (NLP) and completion capabilities offers significant potential for enhancing user productivity. By leveraging appropriate prompts and rules, an AI assistant can efficiently generate various forms of communication, such as email messages, SMS messages, and more. This functionality leads to increased user efficiency and streamlined workflows.
While this feature is quite powerful on its own, there may be cases where users need to generate completions based on your company's custom data. For example, you might have a collection of product manuals that may be challenging for users to navigate when they're assisting customers with installation issues. Alternatively, you might maintain a comprehensive set of Frequently Asked Questions (FAQs) related to healthcare benefits that can prove challenging for users to read through and get the answers they need. In these cases and many others, Azure OpenAI Service enables you to leverage your own data to generate completions, ensuring a more tailored and contextually accurate response to user questions.
Here's a quick overview of how the "bring your own data" feature works from the Azure OpenAI documentation.
Note
One of the key features of Azure OpenAI on your data is its ability to retrieve and utilize data in a way that enhances the model's output. Azure OpenAI on your data, together with Azure Cognitive Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history. This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt. Although retrieved data is being appended to the prompt, the resulting input is still processed by the model like any other prompt. Once the data has been retrieved and the prompt has been submitted to the model, the model uses this information to provide a completion.
In this exercise, you will:
- Create a custom data source using Azure AI Studio.
- Deploy an embedding model using Azure AI Studio.
- Upload custom documents.
- Start a chat session in the Chat playground to experiment with generating completions based upon your own data.
- Explore code that uses Azure Cognitive Search and Azure OpenAI to generate completions based upon your own data.
Let's get started by deploying an embedding model and adding a custom data source in Azure AI Studio.
Adding a Custom Data Source to Azure AI Studio
Navigate to Azure OpenAI Studio and sign in with credentials that have access to your Azure OpenAI resource.
Select Deployments from the navigation menu.
Select Create new deployment and enter the following values:
- Model: text-embedding-ada-002.
- Model version: Default.
- Deployment name: text-embedding-ada-002.
After the model is created, select Azure OpenAI from the navigation menu to go to the welcome screen.
Locate the Bring your own data tile on the welcome screen and select Try it now.
Select Upload files from the Select data source dropdown.
Under the Select Azure Blob storage resource dropdown, select Create a new Azure Blob storage resource.
This will take you to the Azure portal where you can perform the following tasks:
- Enter a unique name for the storage account such as byodstorage[Your Last Name].
- Select a region that's close to your location.
- Select Review followed by Create.
Once the blob storage resource is created, go back to the Azure AI Studio dialog and select your newly created blob storage resource from the Select Azure Blob storage resource dropdown. If you don't see it listed, select the refresh icon next to the dropdown.
Cross-origin resource sharing (CORS) needs to be turned on in order for your storage account to be accessed. Select Turn on CORS in the Azure AI Studio dialog.
Under the Select Azure Cognitive Search resource dropdown, select Create a new Azure Cognitive Search resource.
This will take you back to the Azure portal where you can perform the following tasks:
- Enter a unique name for the Cognitive Search resource such as byodsearch[Your Last Name].
- Select a region that's close to your location.
- In the Pricing tier section, select Change Pricing Tier and select Basic followed by Select. The free tier isn't supported, so you'll need to clean up the Cognitive Search resource at the end of this tutorial.
- Select Review followed by Create.
Once the Cognitive Search resource is created, go to the resource Overview page and copy the Url value to a local file.
Select Keys in the left navigation menu and copy the Primary admin key value to a local file. You'll need these values later in the exercise.
Select Semantic ranker in the left navigation menu and ensure that Free is selected.
Note
To check if semantic ranker is available in a specific region, see the Products Available by Region page on the Azure website to see if your region is listed.
Go back to the Azure AI Studio Add Data dialog and select your newly created search resource from the Select Azure Cognitive Search resource dropdown. If you don't see it listed, select the refresh icon next to the dropdown.
Enter a value of byod-search-index for the Enter the index name value.
Select the Add vector search to this search resource checkbox.
In the Select an embedding model dropdown, select the text-embedding-ada-002 model you created earlier.
Select the checkbox followed by Next.
In the Upload files dialog, select Browse for a file.
Navigate to the project's customer documents folder (located at the root of the project) and select the following files:
- Clock A102 Installation Instructions.docx
- Company FAQs.docx
Note
This feature currently supports the following file formats for local index creation: .txt, .md, .html, .pdf, .docx, and .pptx.
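If you later automate uploads instead of using the dialog, the supported-format list can be enforced before files are sent. The `isSupportedForIndexing()` helper below is a hypothetical sketch, not part of the sample app:

```typescript
// Hypothetical pre-upload check that mirrors the formats the local index
// creation feature currently supports.
const SUPPORTED_EXTENSIONS = ['.txt', '.md', '.html', '.pdf', '.docx', '.pptx'];

function isSupportedForIndexing(fileName: string): boolean {
    const dot = fileName.lastIndexOf('.');
    if (dot === -1) return false; // No extension at all
    return SUPPORTED_EXTENSIONS.includes(fileName.slice(dot).toLowerCase());
}
```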
Select Upload files. The files will be uploaded into a fileupload-byod-search-index container in the blob storage resource you created earlier.
Select Next to go to the Data management dialog.
In the Search type dropdown, select Hybrid + semantic.
Note
This option provides support for keyword and vector search. Once results are returned, a secondary ranking process is applied to the result set using deep learning models which improves the search relevance for the user. To learn more about semantic search, view the Semantic search in Azure Cognitive Search documentation.
Select the checkboxes to acknowledge the costs associated with using semantic search and vector embeddings.
Select Next, review the details, and select Save and close.
Now that your custom data has been uploaded, the data will be indexed and available to use in the Chat playground. This process may take a few minutes. Once it's completed, continue to the next section.
Using Your Custom Data Source in the Chat Playground
Locate the Chat session section of the page in Azure AI Studio and enter the following User message:
What safety rules are required to install a clock?
You should see a result similar to the following displayed:
Expand the 1 references section in the chat response and notice that the Clock A102 Installation Instructions.docx file is listed and that you can select it to view the document.
Enter the following User message:
What should I do to mount the clock on the wall?
You should see a result similar to the following displayed:
Now let's experiment with the Company FAQs document. Enter the following text into the User message field:
What is the company's policy on vacation time?
You should see that no information was found for that request.
Enter the following text into the User message field:
How should I handle refund requests?
You should see a result similar to the following displayed:
Expand the 1 references section in the chat response and notice that the Company FAQs.docx file is listed and that you can select it to view the document.
Select View code at the top of the Chat session section.
Note that you can switch between different languages, view the endpoint, and access the endpoint's key. Close the Sample Code dialog window.
Turn on the Show raw JSON toggle in the Chat session section. Notice that the chat session starts with a system message similar to the following:
{ "role": "system", "content": "You are an AI assistant that helps people find information." }
Now that you've created a custom data source and experimented with it in the Chat playground, let's see how you can use it in the project's application.
Using the Bring Your Own Data Feature in the Application
Go back to the project in Visual Studio Code and open the .env file. Update the following values with your Cognitive Search endpoint, key, and index name. You copied the endpoint and key to a local file earlier in this exercise.

```
AZURE_COGNITIVE_SEARCH_ENDPOINT=<COGNITIVE_SERVICES_ENDPOINT_VALUE>
AZURE_COGNITIVE_SEARCH_KEY=<COGNITIVE_SERVICES_KEY_VALUE>
AZURE_COGNITIVE_SEARCH_INDEX=byod-search-index
```
In a previous exercise you started the database, APIs, and application. You also updated the `.env` file. If you didn't complete those steps, follow the instructions at the end of the earlier exercise before continuing.

Once the application has loaded in the browser, select the Chat Help icon in the upper-right of the application.
The following text should appear in the chat dialog:
How should I handle refund requests?
Select the Get Help button. You should see results returned from the Company FAQs.docx document that you uploaded earlier in Azure AI Studio. If you'd like to read through the document, you can find it in the customer documents folder at the root of the project.
Change the text to the following and select the Get Help button:
What safety rules are required to install a clock?
You should see results returned from the Clock A102 Installation Instructions.docx document that you uploaded earlier in Azure AI Studio. This document is also available in the customer documents folder at the root of the project.
Exploring the Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Go back to the project source code in Visual Studio Code.
Open the server/apiRoutes.ts file and locate the `completeBYOD` route. This API is called when the Get Help button is selected in the Chat Help dialog. It retrieves the user prompt from the request body and passes it to the `completeBYOD()` function in the server/openAI.ts file. The results are then returned to the client.

```typescript
router.post('/completeBYOD', async (req, res) => {
    const { prompt } = req.body;

    if (!prompt) {
        return res.status(400).json({
            status: false,
            error: 'The prompt parameter must be provided.'
        });
    }

    let result;
    try {
        // Call OpenAI to get custom "bring your own data" completion
        result = await completeBYOD(prompt);
    } catch (e: unknown) {
        console.error('Error parsing JSON:', e);
    }

    res.json(result);
});
```
Open the server/openAI.ts file and locate the `completeBYOD()` function.

```typescript
async function completeBYOD(userPrompt: string): Promise<string> {
    const systemPrompt = 'You are an AI assistant that helps people find information.';
    // Pass that we're using Cognitive Search along with Azure OpenAI
    return await callOpenAI(systemPrompt, userPrompt, 0, true);
}
```
This function has the following features:

- The `userPrompt` parameter contains the information the user typed into the chat help dialog.
- The `systemPrompt` variable defines that an AI assistant designed to help people find information will be used.
- `callOpenAI()` is used to call the Azure OpenAI API and return the results. It passes the `systemPrompt` and `userPrompt` values as well as the following parameters:
  - `temperature` - The amount of creativity to include in the response. The user needs consistent (less creative) answers in this case, so the value is set to 0.
  - `useBYOD` - A boolean value that indicates whether or not to use Cognitive Search along with Azure OpenAI. In this case, it's set to `true` so Cognitive Search functionality will be used.
The `callOpenAI()` function accepts a `useBYOD` parameter that is used to determine which OpenAI function to call. In this case, `useBYOD` is set to `true` so the `getAzureOpenAIBYODCompletion()` function will be called.

```typescript
function callOpenAI(systemPrompt: string, userPrompt: string, temperature = 0, useBYOD = false) {
    const isAzureOpenAI = OPENAI_API_KEY && OPENAI_ENDPOINT && OPENAI_MODEL;

    if (isAzureOpenAI && useBYOD) {
        // Azure OpenAI + Cognitive Search: Bring Your Own Data
        return getAzureOpenAIBYODCompletion(systemPrompt, userPrompt, temperature);
    }

    if (isAzureOpenAI) {
        // Azure OpenAI
        return getAzureOpenAICompletion(systemPrompt, userPrompt, temperature);
    }

    // OpenAI
    return getOpenAICompletion(systemPrompt, userPrompt, temperature);
}
```
Locate the `getAzureOpenAIBYODCompletion()` function in server/openAI.ts. It's quite similar to the `getAzureOpenAICompletion()` function you examined earlier, but is shown as a separate function to highlight a few key differences that are unique to the "bring your own data" scenario available in Azure OpenAI.

The `fetchUrl` value includes an `extensions` segment in the URL, whereas the URL for the standard Azure OpenAI API does not.

```typescript
const fetchUrl = `${OPENAI_ENDPOINT}/openai/deployments/${OPENAI_MODEL}/extensions/chat/completions?api-version=${OPENAI_API_VERSION}`;
```
A `dataSources` property is added to the `messageData` object sent to Azure OpenAI. The `dataSources` property contains the Cognitive Search resource's `endpoint`, `key`, and `indexName` values that were added to the `.env` file earlier in this exercise.

```typescript
const messageData: ChatGPTData = {
    max_tokens: 1024,
    temperature,
    messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: userPrompt }
    ],
    // Adding BYOD data source so that Cognitive Search is used with Azure OpenAI
    dataSources: [
        {
            type: 'AzureCognitiveSearch',
            parameters: {
                endpoint: AZURE_COGNITIVE_SEARCH_ENDPOINT,
                key: AZURE_COGNITIVE_SEARCH_KEY,
                indexName: AZURE_COGNITIVE_SEARCH_INDEX
            }
        }
    ]
};
```
The `headersBody` object includes `chatgpt_url` and `chatgpt_key` properties that are used to call Azure OpenAI once the Cognitive Search results are obtained.

```typescript
const headersBody: OpenAIHeadersBody = {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'api-key': OPENAI_API_KEY,
        chatgpt_url: fetchUrl.replace('extensions/', ''),
        chatgpt_key: OPENAI_API_KEY
    },
    body: JSON.stringify(messageData),
};
```
The response returned by Azure OpenAI includes two messages with roles of `tool` and `assistant`. The sample application uses the second message, with a `role` of `assistant`, to provide the user the information they requested. In cases where you want to provide additional information about the documents used to create the response (as you saw earlier in the Azure AI Studio playground), you can use the first message, which includes the `url` to the document(s).

```json
{
    "id": "12345678-1a2b-3c4e5f-a123-12345678abcd",
    "model": "",
    "created": 1684304924,
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "messages": [
                {
                    "role": "tool",
                    "content": "{\"citations\": [{\"content\": \"\\nCognitive Services are cloud-based artificial intelligence (AI) services...\", \"id\": null, \"title\": \"What is Cognitive Services\", \"filepath\": null, \"url\": null, \"metadata\": {\"chunking\": \"orignal document size=250. Scores=0.4314117431640625 and 1.72564697265625.Org Highlight count=4.\"}, \"chunk_id\": \"0\"}], \"intent\": \"[\\\"Learn about Azure Cognitive Services.\\\"]\"}",
                    "end_turn": false
                },
                {
                    "role": "assistant",
                    "content": " \nAzure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. [doc1]. Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle. [doc1].",
                    "end_turn": true
                }
            ]
        }
    ]
}
```
The following code is used in `getAzureOpenAIBYODCompletion()` to access the messages. Although citations aren't being used in this example, they're logged to the console so you can see the type of data that's returned.

```typescript
const completion = await fetchAndParse(fetchUrl, headersBody);
console.log(completion);

if (completion.error) {
    console.error('Azure OpenAI BYOD Error: \n', completion.error);
    return completion.error.message;
}

const citations = (completion.choices[0]?.messages[0]?.content?.trim() ?? '') as string;
console.log('Azure OpenAI BYOD Citations: \n', citations);

let content = (completion.choices[0]?.messages[1]?.content?.trim() ?? '') as string;
console.log('Azure OpenAI BYOD Output: \n', content);

return content;
```
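If you did want to surface citations (as the Chat playground's references section does), the tool message's content is itself a JSON string containing a citations array. A hypothetical `parseCitations()` helper (not part of the sample app) might look like this:

```typescript
// Hypothetical parser for the tool-role message returned by the BYOD API.
// The message content is a JSON string with a "citations" array whose items
// include title, url, and filepath values (often null, as in the sample above).
interface Citation {
    title: string | null;
    url: string | null;
    filepath: string | null;
}

function parseCitations(toolContent: string): Citation[] {
    try {
        const parsed = JSON.parse(toolContent);
        return Array.isArray(parsed.citations) ? parsed.citations : [];
    } catch {
        return []; // Malformed or empty content: show no references
    }
}
```

The returned array could then be rendered as a list of links under the assistant's answer.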
A few final points to consider before moving on to the next exercise:
- The "bring your own data" feature of Azure OpenAI is currently in preview. It's not recommended to use it in production applications at this time.
- The sample application uses a single index in Azure Cognitive Search. You can use multiple indexes and data sources with Azure OpenAI. The
dataSources
property in thegetAzureOpenAIBYODCompletion()
function can be updated to include multiple data sources as needed. - Security must be carefully evaluated with this type of scenario. Users should't be able to ask questions and get results from documents that they aren't able to access.
Now that you've learned about Azure OpenAI, prompts, completions, and how you can use your own data, let's move on to the next exercise to learn how communication features can be used to enhance the application. If you'd like to learn more about Azure OpenAI, view the Get started with Azure OpenAI Service training content. Additional information about using your own data with Azure OpenAI can be found in the Azure OpenAI on your data documentation.
Communication: Creating an Azure Communication Services Resource
Effective communication is essential for successful custom business applications. By using Azure Communication Services (ACS), you can add features such as phone calls, live chat, audio/video calls, and email and SMS messaging to your applications. Earlier, you learned how Azure OpenAI can generate completions for email and SMS messages. Now, you'll learn how to send the messages. Together, ACS and OpenAI can enhance your applications by simplifying communication, improving interactions, and boosting business productivity.
In this exercise, you will:
- Create an Azure Communication Services (ACS) resource.
- Add a toll-free phone number with calling and SMS capabilities.
- Connect an email domain.
- Update the project's .env file with values from your ACS resource.
Create an Azure Communication Services Resource
Visit the Azure portal in your browser and sign in if you haven't already.
Type communication services in the search bar at the top of the page and select Communication Services from the options that appear.
Select Create in the toolbar.
Perform the following tasks:
- Select your Azure subscription.
- Select the resource group to use (create a new one if one doesn't exist).
- Enter an ACS resource name. It must be a unique value.
- Select a data location.
Select Review + Create followed by Create.
You've successfully created a new Azure Communication Services resource! Next, you'll enable phone calling and SMS capabilities. You'll also connect an email domain to the resource.
Enable Phone Calling and SMS Capabilities
Add a phone number and ensure that the phone number has calling capabilities enabled. You'll use this phone number to call out to a phone from the app.
Select Phone numbers from the Resource menu.

Select + Get in the toolbar (or select the Get a number button) and enter the following information:

- Country or region: United States
- Use case: Select An application will be making calls or sending SMS messages
- Number type: Toll-free

Note

A credit card is required on your Azure subscription to create the toll-free number. If you don't have a card on file, feel free to skip adding a phone number and jump to the next section of the exercise that connects an email domain. You can still use the app, but won't be able to call out to a phone number.

- Calling: Make calls
- SMS: Send and receive SMS

Select Next: Numbers.

Select a Prefix (for example 877) and leave the Quantity at 1. Select Search.

Once a toll-free number is displayed, select Next: Summary.

Review the details and select Place order to add the phone number to your ACS resource.

Once the phone number is created, select it to get to the Features panel. Ensure that the following values are set:

- In the Calling section, select Make calls.
- In the SMS section, select Send and receive SMS.
- Select Save.

Copy the phone number value into a file for later use.
Connect an Email Domain
Perform the following tasks to create a connected email domain for your ACS resource so that you can send email. This will be used to send email from the app.
- Select Domains from the Resource menu.
- Select Connect domain from the toolbar.
- Select your Subscription and Resource group.
- Under the Email Service dropdown, select Add an email service.
- Give the email service a name such as acs-demo-email-service.
- Select Review + create followed by Create.
- Once the deployment completes, select Go to resource, and select 1-click add to add a free Azure subdomain.
- After the subdomain is added (it'll take a few moments to be deployed), select it.
- Once you're on the AzureManagedDomain screen, select MailFrom addresses from the Resource menu.
- Copy the MailFrom value to a file. You'll use it later as you update the .env file.
- Go back to your Azure Communication Services resource and select Domains from the Resource menu.
- Select Add domain and enter the MailFrom value from the previous step (ensure you select the correct subscription, resource group, and email service). Select the Connect button.
Update the .env File

Now that your ACS phone number (with calling and SMS enabled) and email domain are ready, update the following keys/values in the .env file in your project:

```
ACS_CONNECTION_STRING=<ACS_CONNECTION_STRING>
ACS_PHONE_NUMBER=<ACS_PHONE_NUMBER>
ACS_EMAIL_ADDRESS=<ACS_EMAIL_ADDRESS>
CUSTOMER_EMAIL_ADDRESS=<EMAIL_ADDRESS_TO_SEND_EMAIL_TO>
CUSTOMER_PHONE_NUMBER=<UNITED_STATES_BASED_NUMBER_TO_SEND_SMS_TO>
```

- ACS_CONNECTION_STRING: The connection string value from the Keys section of your ACS resource.
- ACS_PHONE_NUMBER: Assign your toll-free number to the ACS_PHONE_NUMBER value.
- ACS_EMAIL_ADDRESS: Assign your email MailFrom address value.
- CUSTOMER_EMAIL_ADDRESS: Assign an email address you'd like email to be sent to from the app (since the customer data in the app's database is only sample data). You can use a personal email address.
- CUSTOMER_PHONE_NUMBER: You'll need to provide a United States based phone number (as of today) due to additional verification that is required in other countries for sending SMS messages. If you don't have a US-based number, you can leave it empty.
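A quick way to catch configuration mistakes is to check the required keys at startup. The following is a minimal, hypothetical sketch (the helper name and wiring are assumptions, not part of the sample app); `CUSTOMER_PHONE_NUMBER` is left out of the required list because it may intentionally be empty:

```typescript
// Hypothetical startup check -- not part of the sample app.
// Returns the names of any required .env keys that are missing or empty.
function findMissingKeys(
  env: Record<string, string | undefined>,
  required: string[]
): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === '');
}

// CUSTOMER_PHONE_NUMBER is optional; it can be left empty when no
// US-based number is available.
const requiredKeys = [
  'ACS_CONNECTION_STRING',
  'ACS_PHONE_NUMBER',
  'ACS_EMAIL_ADDRESS',
  'CUSTOMER_EMAIL_ADDRESS',
];

// Example: a parsed .env snapshot with one value left empty.
const exampleEnv: Record<string, string | undefined> = {
  ACS_CONNECTION_STRING: 'endpoint=https://example.communication.azure.com/;accesskey=abc',
  ACS_PHONE_NUMBER: '+18771234567',
  ACS_EMAIL_ADDRESS: 'donotreply@example.com',
  CUSTOMER_EMAIL_ADDRESS: '',
};
console.log(findMissingKeys(exampleEnv, requiredKeys)); // logs ['CUSTOMER_EMAIL_ADDRESS']
```

Failing fast on missing values produces a clearer error than the runtime failures you'd otherwise see when the API server first tries to use them.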
Start/Restart the Application and API Servers
Perform one of the following steps based on the exercises you completed up to this point:
If you started the database, API server, and web server in an earlier exercise, you need to stop the API server and web server and restart them to pick up the .env file changes. You can leave the database running.
Locate the terminal windows running the API server and web server and press CTRL + C to stop them. Start them again by typing npm start in each terminal window and pressing Enter. Continue to the next exercise.

If you haven't started the database, API server, and web server yet, complete the following steps:
In the following steps you'll create three terminal windows in Visual Studio Code.
Right-click on the .env file in the Visual Studio Code file list and select Open in Integrated Terminal. Ensure that your terminal is at the root of the project - openai-acs-msgraph - before continuing.
Choose from one of the following options to start the PostgreSQL database:
If you have Docker Desktop installed and running, run docker-compose up in the terminal window and press Enter.

If you have Podman with podman-compose installed and running, run podman-compose up in the terminal window and press Enter.

To run the PostgreSQL container directly using either Docker Desktop, Podman, nerdctl, or another container runtime you have installed, run the following command in the terminal window:
Mac, Linux, or Windows Subsystem for Linux (WSL):

```shell
[docker | podman | nerdctl] run --name postgresDb -e POSTGRES_USER=web -e POSTGRES_PASSWORD=web-password -e POSTGRES_DB=CustomersDB -v $(pwd)/data:/var/lib/postgresql/data -p 5432:5432 postgres
```

Windows with PowerShell:

```powershell
[docker | podman] run --name postgresDb -e POSTGRES_USER=web -e POSTGRES_PASSWORD=web-password -e POSTGRES_DB=CustomersDB -v ${PWD}/data:/var/lib/postgresql/data -p 5432:5432 postgres
```
Once the database container starts, press the + icon in the Visual Studio Code Terminal toolbar to create a second terminal window.
cd into the server/typescript folder and run the following commands to install the dependencies and start the API server.

```shell
npm install
npm start
```

Press the + icon again in the Visual Studio Code Terminal toolbar to create a third terminal window.

cd into the client folder and run the following commands to install the dependencies and start the web server.

```shell
npm install
npm start
```
A browser will launch and you'll be taken to http://localhost:4200.
Communication: Making a Phone Call
Integrating Azure Communication Services' phone calling capabilities into a custom Line of Business (LOB) application offers several key benefits to businesses and their users:
- Enables seamless and real-time communication between employees, customers, and partners, directly from within the LOB application, eliminating the need to switch between multiple platforms or devices.
- Enhances the user experience and improves overall operational efficiency.
- Facilitates rapid problem resolution, as users can quickly and easily connect with relevant support teams or subject matter experts.
In this exercise, you will:
- Explore the phone calling feature in the application.
- Walk through the code to learn how the phone calling feature is built.
Using the Phone Calling Feature
In the previous exercise you created an Azure Communication Services (ACS) resource and started the database, web server, and API server. You also updated the following values in the .env file.
```
ACS_CONNECTION_STRING=<ACS_CONNECTION_STRING>
ACS_PHONE_NUMBER=<ACS_PHONE_NUMBER>
ACS_EMAIL_ADDRESS=<ACS_EMAIL_ADDRESS>
CUSTOMER_EMAIL_ADDRESS=<EMAIL_ADDRESS_TO_SEND_EMAIL_TO>
CUSTOMER_PHONE_NUMBER=<UNITED_STATES_BASED_NUMBER_TO_SEND_SMS_TO>
```
Ensure you've completed the previous exercise before continuing.
Go back to the browser (http://localhost:4200), locate the datagrid, and select Contact Customer followed by Call Customer in the first row.
A phone call component will be added into the header. Enter your phone number (ensure it starts with + and includes the country code) and select Call. You will be prompted to allow access to your microphone.
Select Hang Up to end the call. Select Close to close the phone call component.
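The number you enter must be in E.164 format: a leading + followed by the country code and subscriber number. A hypothetical client-side sanity check (not part of the sample app) might look like this:

```typescript
// Hypothetical E.164 sanity check -- not part of the sample app.
// E.164 numbers are "+" followed by 8 to 15 digits, where the first
// digit (the country code's first digit) can't be zero.
function isLikelyE164(phoneNumber: string): boolean {
  return /^\+[1-9]\d{7,14}$/.test(phoneNumber.trim());
}
```

A check like this catches common mistakes (missing +, embedded spaces) before the call is ever attempted.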
Exploring the Phone Calling Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Open the customers-list.component.ts file. The full path to the file is client/src/app/customers-list/customers-list.component.ts.
Note that openCallDialog() sends a CustomerCall message and the customer phone number using an event bus.

```typescript
openCallDialog(data: Phone) {
    this.eventBus.emit({ name: Events.CustomerCall, value: data });
}
```
Note
The event bus code can be found in the eventbus.service.ts file if you're interested in exploring it more. The full path to the file is client/src/app/core/eventbus.service.ts.
The header component's ngOnInit() function subscribes to the CustomerCall event sent by the event bus and displays the phone call component. You can find this code in header.component.ts.

```typescript
ngOnInit() {
    this.subscription.add(
        this.eventBus.on(Events.CustomerCall, (data: Phone) => {
            this.callVisible = true; // Show phone call component
            this.callData = data; // Set phone number to call
        })
    );
}
```
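The emit/on pattern above can be sketched without any framework. This is a simplified, hypothetical stand-in for what eventbus.service.ts provides (the real service is built on RxJS and supports unsubscribing via subscriptions):

```typescript
// Minimal event bus sketch -- the real eventbus.service.ts is RxJS-based.
type EmitEvent = { name: string; value: unknown };
type Handler = (value: unknown) => void;

class SimpleEventBus {
  private handlers = new Map<string, Handler[]>();

  // Register a handler for a named event.
  on(name: string, handler: Handler): void {
    const list = this.handlers.get(name) ?? [];
    list.push(handler);
    this.handlers.set(name, list);
  }

  // Notify every handler registered for the event's name.
  emit(event: EmitEvent): void {
    for (const handler of this.handlers.get(event.name) ?? []) {
      handler(event.value);
    }
  }
}
```

Publishing through a bus keeps customers-list and header decoupled: neither component needs a direct reference to the other.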
Open phone-call.component.ts. Take a moment to explore the code. The full path to the file is client/src/app/phone-call/phone-call.component.ts. Note the following key features:

- Retrieves an Azure Communication Services access token by calling the acsService.getAcsToken() function in ngOnInit().
- Adds a "phone dialer" to the page. You can see the dialer by clicking on the phone number input in the header.
- Starts and ends a call using the startCall() and endCall() functions respectively.
Before looking at the code that makes the phone call, let's examine how the ACS access token is retrieved and how phone calling objects are created. Locate the ngOnInit() function in phone-call.component.ts.

```typescript
async ngOnInit() {
    if (ACS_CONNECTION_STRING) {
        this.subscription.add(
            this.acsService.getAcsToken().subscribe(async (user: AcsUser) => {
                const callClient = new CallClient();
                const tokenCredential = new AzureCommunicationTokenCredential(user.token);
                this.callAgent = await callClient.createCallAgent(tokenCredential);
            })
        );
    }
}
```
This function performs the following actions:
- Retrieves an ACS userId and access token by calling the acsService.getAcsToken() function.
- Once the access token is retrieved, the code performs the following actions:
    - Creates a new instance of CallClient and AzureCommunicationTokenCredential using the access token.
    - Creates a new instance of CallAgent using the CallClient and AzureCommunicationTokenCredential objects. Later you'll see that CallAgent is used to start and end a call.
Open acs.service.ts and locate the getAcsToken() function. The full path to the file is client/src/app/core/acs.service.ts. The function makes an HTTP GET request to the /acstoken route exposed by the API server.

```typescript
getAcsToken(): Observable<AcsUser> {
    return this.http.get<AcsUser>(this.apiUrl + 'acstoken')
        .pipe(
            catchError(this.handleError)
        );
}
```
An API server function named createACSToken() retrieves the userId and access token and returns it to the client. It can be found in the server/typescript/acs.ts file.

```typescript
import { CommunicationIdentityClient } from '@azure/communication-identity';

const connectionString = process.env.ACS_CONNECTION_STRING as string;

async function createACSToken() {
    if (!connectionString) return { userId: '', token: '' };
    const tokenClient = new CommunicationIdentityClient(connectionString);
    const user = await tokenClient.createUser();
    const userToken = await tokenClient.getToken(user, ["voip"]);
    return { userId: user.communicationUserId, ...userToken };
}
```
This function performs the following actions:
- Checks if an ACS connectionString value is available. If not, returns an object with an empty userId and token.
- Creates a new instance of CommunicationIdentityClient and passes the connectionString value to it.
- Creates a new user using tokenClient.createUser().
- Gets a token for the new user with the "voip" scope using tokenClient.getToken().
- Returns an object containing the userId and token values.
Now that you've seen how the userId and token are retrieved, go back to phone-call.component.ts and locate the startCall() function.

This function is called when Call is selected in the phone call component. It uses the CallAgent object mentioned earlier to start a call. The callAgent.startCall() function accepts an object representing the number to call and the ACS phone number used to make the call.

```typescript
startCall() {
    this.call = this.callAgent?.startCall(
        [{ phoneNumber: this.customerPhoneNumber }],
        { alternateCallerId: { phoneNumber: this.fromNumber } });
    console.log('Calling: ', this.customerPhoneNumber);
    console.log('Call id: ', this.call?.id);
    this.inCall = true;
}
```
The endCall() function is called when Hang Up is selected in the phone call component.

```typescript
endCall() {
    if (this.call) {
        this.call.hangUp({ forEveryone: true });
        this.call = undefined;
        this.inCall = false;
    } else {
        this.hangup.emit();
    }
}
```
If a call is in progress, the call.hangUp() function is called to end the call. If no call is in progress, the hangup event is emitted to the header parent component to hide the phone call component.

Before moving on to the next exercise, let's review the key concepts covered in this exercise:
- An ACS userId and access token are retrieved from the API server using the acsService.getAcsToken() function.
- The token is used to create CallClient and CallAgent objects.
- The CallAgent object is used to start and end a call by calling the callAgent.startCall() and call.hangUp() functions respectively.
Now that you've learned how phone calling can be integrated into an application, let's switch our focus to using Azure Communication Services to send email and SMS messages.
Communication: Sending Email and SMS Messages
In addition to phone calls, Azure Communication Services can also send email and SMS messages. This can be useful when you want to send a message to a customer or other user directly from the application.
In this exercise, you will:
- Explore how email and SMS messages can be sent from the application.
- Walk through the code to learn how the email and SMS functionality is implemented.
Using the Email and SMS Features
In a previous exercise you created an Azure Communication Services (ACS) resource and started the database, web server, and API server. You also updated the following values in the .env file.
```
ACS_CONNECTION_STRING=<ACS_CONNECTION_STRING>
ACS_PHONE_NUMBER=<ACS_PHONE_NUMBER>
ACS_EMAIL_ADDRESS=<ACS_EMAIL_ADDRESS>
CUSTOMER_EMAIL_ADDRESS=<EMAIL_ADDRESS_TO_SEND_EMAIL_TO>
CUSTOMER_PHONE_NUMBER=<UNITED_STATES_BASED_NUMBER_TO_SEND_SMS_TO>
```
Ensure you've completed that exercise before continuing.
Go back to the browser (http://localhost:4200) and select Contact Customer followed by Email/SMS Customer in the first row.
Select the Email/SMS tab and perform the following tasks:
- Enter an Email Subject and Body and select the Send Email button.
- Enter an SMS message and select the Send SMS button.
Check that you received the email and SMS messages. As a reminder, the email message will be sent to the value defined for CUSTOMER_EMAIL_ADDRESS and the SMS message will be sent to the value defined for CUSTOMER_PHONE_NUMBER in the .env file. If you weren't able to supply a United States based phone number to use for SMS messages, you can skip that step.

Note

If you don't see the email message in your inbox for the address you defined for CUSTOMER_EMAIL_ADDRESS in the .env file, check your spam folder.
Exploring the Email Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Open the customers-list.component.ts file. The full path to the file is client/src/app/customers-list/customers-list.component.ts.
When you selected Contact Customer followed by Email/SMS Customer in the datagrid, the customer-list component displayed a dialog box. This is handled by the openEmailSmsDialog() function in the customer-list.component.ts file.

```typescript
openEmailSmsDialog(data: any) {
    if (data.phone && data.email) {
        // Create the data for the dialog
        let dialogData: EmailSmsDialogData = {
            prompt: '',
            title: `Contact ${data.company}`,
            company: data.company,
            customerName: data.first_name + ' ' + data.last_name,
            customerEmailAddress: data.email,
            customerPhoneNumber: data.phone
        }
        // Open the dialog
        const dialogRef = this.dialog.open(EmailSmsDialogComponent, {
            data: dialogData
        });
        // Subscribe to the dialog afterClosed observable to get the dialog result
        this.subscription.add(
            dialogRef.afterClosed().subscribe((response: EmailSmsDialogData) => {
                console.log('SMS dialog result:', response);
                if (response) {
                    dialogData = response;
                }
            })
        );
    } else {
        alert('No phone number available.');
    }
}
```
The openEmailSmsDialog() function performs the following tasks:

- Checks to see if the data object (which represents the row from the datagrid) contains a phone and email property. If it does, it creates a dialogData object that contains the information to pass to the dialog.
- Opens the EmailSmsDialogComponent dialog box and passes the dialogData object to it.
- Subscribes to the afterClosed() event of the dialog box. This event is fired when the dialog box is closed. The response object contains the information that was entered into the dialog box.
Open the email-sms-dialog.component.ts file. The full path to the file is client/src/app/email-sms-dialog/email-sms-dialog.component.ts.
Locate the sendEmail() function:

```typescript
sendEmail() {
    if (this.featureFlags.acsEmailEnabled) {
        // Using CUSTOMER_EMAIL_ADDRESS instead of this.data.email for testing purposes
        this.subscription.add(
            this.acsService.sendEmail(this.emailSubject, this.emailBody,
                this.getFirstName(this.data.customerName), CUSTOMER_EMAIL_ADDRESS /* this.data.email */)
                .subscribe(res => {
                    console.log('Email sent:', res);
                    if (res.status) {
                        this.emailSent = true;
                    }
                })
        );
    } else {
        this.emailSent = true; // Used when ACS email isn't enabled
    }
}
```
The sendEmail() function performs the following tasks:

- Checks to see if the acsEmailEnabled feature flag is set to true. This flag checks to see if the ACS_EMAIL_ADDRESS environment variable has an assigned value.
- If acsEmailEnabled is true, the acsService.sendEmail() function is called and the email subject, body, customer name, and customer email address are passed. Because the database contains sample data, the CUSTOMER_EMAIL_ADDRESS environment variable is used instead of this.data.email. In a real-world application the this.data.email value would be used.
- Subscribes to the sendEmail() function in the acsService service. This function returns an RxJS observable that contains the response from the client-side service.
- If the email was sent successfully, the emailSent property is set to true.
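The acsEmailEnabled and acsPhoneEnabled flags boil down to "does the backing environment value have an assigned value?". A minimal sketch of that derivation (hypothetical helper names; the sample app's actual flag wiring may differ):

```typescript
// Hypothetical sketch of deriving feature flags from environment values.
interface FeatureFlags {
  acsEmailEnabled: boolean;
  acsPhoneEnabled: boolean;
}

function deriveFeatureFlags(env: Record<string, string | undefined>): FeatureFlags {
  // A feature is "enabled" only when its backing value is non-empty.
  const hasValue = (key: string) => Boolean(env[key] && env[key]!.trim());
  return {
    acsEmailEnabled: hasValue('ACS_EMAIL_ADDRESS'),
    acsPhoneEnabled: hasValue('ACS_PHONE_NUMBER'),
  };
}
```

Driving the flags from configuration lets the same build run with or without ACS set up, which is why the dialog can still "succeed" locally when email isn't enabled.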
To provide better code encapsulation and reuse, client-side services such as acs.service.ts are used throughout the application. This allows all ACS functionality to be consolidated into a single place.
Open acs.service.ts and locate the sendEmail() function. The full path to the file is client/src/app/core/acs.service.ts.

```typescript
sendEmail(subject: string, message: string, customerName: string, customerEmailAddress: string): Observable<EmailSmsResponse> {
    return this.http.post<EmailSmsResponse>(this.apiUrl + 'sendemail', {
        subject, message, customerName, customerEmailAddress
    })
    .pipe(
        catchError(this.handleError)
    );
}
```
The sendEmail() function in AcsService performs the following tasks:

- Calls the http.post() function and passes the email subject, message, customer name, and customer email address to it. The http.post() function returns an RxJS observable that contains the response from the API call.
- Handles any errors returned by the http.post() function using the RxJS catchError operator.
Now let's examine how the application interacts with the ACS email feature. Open the acs.ts file and locate the sendEmail() function. The full path to the file is server/typescript/acs.ts.

The sendEmail() function performs the following tasks:

- Creates a new EmailClient object and passes the ACS connection string to it (this value is retrieved from the ACS_CONNECTION_STRING environment variable).

    ```typescript
    const emailClient = new EmailClient(connectionString);
    ```

- Creates a new EmailMessage object and passes the sender, subject, message, and recipient information.

    ```typescript
    const msgObject: EmailMessage = {
        senderAddress: process.env.ACS_EMAIL_ADDRESS as string,
        content: {
            subject: subject,
            plainText: message,
        },
        recipients: {
            to: [
                {
                    address: customerEmailAddress,
                    displayName: customerName,
                },
            ],
        },
    };
    ```

- Sends the email using the emailClient.beginSend() function and returns the response. Although the function is only sending to one recipient in this example, the beginSend() function can be used to send to multiple recipients as well.

    ```typescript
    const poller = await emailClient.beginSend(msgObject);
    ```

- Waits for the poller object to signal it's done and sends the response to the caller.
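The final "wait for the poller" step follows the standard long-running-operation pattern: start the operation, then poll until it reports a terminal state. The sketch below is a generic, hypothetical version of that loop (the types and helper name are assumptions, not the ACS SDK's); in practice the poller returned by beginSend() exposes a pollUntilDone() helper that does the equivalent for you.

```typescript
// Generic long-running-operation wait -- a sketch of what a poller's
// pollUntilDone()-style helper does. Hypothetical types, not the ACS SDK's.
interface OperationState<T> {
  done: boolean;
  result?: T;
}

async function waitUntilDone<T>(
  poll: () => Promise<OperationState<T>>,
  intervalMs = 0
): Promise<T | undefined> {
  // Poll repeatedly until the operation reports completion.
  let state = await poll();
  while (!state.done) {
    if (intervalMs > 0) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
    state = await poll();
  }
  return state.result;
}
```

Separating "start" from "wait" is what lets the API respond with a meaningful status once the send operation actually completes, rather than immediately after it's queued.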
Exploring the SMS Code
Go back to the email-sms-dialog.component.ts file that you opened earlier. The full path to the file is client/src/app/email-sms-dialog/email-sms-dialog.component.ts.
Locate the sendSms() function:

```typescript
sendSms() {
    if (this.featureFlags.acsPhoneEnabled) {
        // Using CUSTOMER_PHONE_NUMBER instead of this.data.customerPhoneNumber for testing purposes
        this.subscription.add(
            this.acsService.sendSms(this.smsMessage, CUSTOMER_PHONE_NUMBER /* this.data.customerPhoneNumber */)
                .subscribe(res => {
                    if (res.status) {
                        this.smsSent = true;
                    }
                })
        );
    } else {
        this.smsSent = true;
    }
}
```
The sendSms() function performs the following tasks:

- Checks to see if the acsPhoneEnabled feature flag is set to true. This flag checks to see if the ACS_PHONE_NUMBER environment variable has an assigned value.
- If acsPhoneEnabled is true, the acsService.sendSms() function is called and the SMS message and customer phone number are passed. Because the database contains sample data, the CUSTOMER_PHONE_NUMBER environment variable is used instead of this.data.customerPhoneNumber. In a real-world application the this.data.customerPhoneNumber value would be used.
- Subscribes to the sendSms() function in the acsService service. This function returns an RxJS observable that contains the response from the client-side service.
- If the SMS message was sent successfully, it sets the smsSent property to true.
Open acs.service.ts and locate the sendSms() function. The full path to the file is client/src/app/core/acs.service.ts.

```typescript
sendSms(message: string, customerPhoneNumber: string): Observable<EmailSmsResponse> {
    return this.http.post<EmailSmsResponse>(this.apiUrl + 'sendsms', {
        message, customerPhoneNumber
    })
    .pipe(
        catchError(this.handleError)
    );
}
```
The sendSms() function performs the following tasks:

- Calls the http.post() function and passes the message and customer phone number to it. The http.post() function returns an RxJS observable that contains the response from the API call.
- Handles any errors returned by the http.post() function using the RxJS catchError operator.
Finally, let's examine how the application interacts with the ACS SMS feature. Open the acs.ts file and locate the sendSms() function. The full path to the file is server/typescript/acs.ts.

The sendSms() function performs the following tasks:

- Creates a new SmsClient object and passes the ACS connection string to it (this value is retrieved from the ACS_CONNECTION_STRING environment variable).

    ```typescript
    const smsClient = new SmsClient(connectionString);
    ```

- Calls the smsClient.send() function and passes the ACS phone number (from), customer phone number (to), and SMS message:

    ```typescript
    const sendResults = await smsClient.send({
        from: process.env.ACS_PHONE_NUMBER as string,
        to: [customerPhoneNumber],
        message: message
    });
    return sendResults;
    ```

- Returns the response to the caller.
You can learn more about ACS email and SMS functionality in the following articles:
Before moving on to the next exercise, let's review the key concepts covered in this exercise:
- The acs.service.ts file encapsulates the ACS email and SMS functionality used by the client-side application. It handles the API calls to the server and returns the response to the caller.
- The server-side API uses the ACS EmailClient and SmsClient objects to send email and SMS messages.
Now that you've learned how email and SMS messages can be sent, let's switch our focus to integrating organizational data into the application.
Organizational Data: Creating a Microsoft Entra ID App Registration
Enhance user productivity by integrating organizational data (emails, files, chats, and calendar events) directly into your custom applications. By using Microsoft Graph APIs and Microsoft Entra ID, you can seamlessly retrieve and display relevant data within your apps, reducing the need for users to switch context. Whether it's referencing an email sent to a customer, reviewing a Teams message, or accessing a file, users can quickly find the information they need without leaving your app, streamlining their decision-making process.
In this exercise, you will:
- Create a Microsoft Entra ID app registration so that Microsoft Graph can access organizational data and bring it into the app.
- Locate team and channel IDs from Microsoft Teams that are needed to send chat messages to a specific channel.
- Update the project's .env file with values from your Microsoft Entra ID app registration.
Create a Microsoft Entra ID App Registration
Go to Azure portal and select Microsoft Entra ID.
Select the App registrations tab followed by + New registration.
Fill in the new app registration form details as shown below:

- Name: microsoft-graph-app
- Supported account types: Accounts in any organizational directory (Any Microsoft Entra ID tenant - Multitenant)
- Redirect URI: Select Single-page application (SPA) and enter http://localhost:4200 in the Redirect URI field.

Select Register to create the app registration.
Select Overview in the Resource menu and copy the Application (client) ID value to your clipboard.
Update the Project's .env File
Open the .env file in your editor and assign the Application (client) ID value to ENTRAID_CLIENT_ID.

```
ENTRAID_CLIENT_ID=<APPLICATION_CLIENT_ID_VALUE>
```
If you'd like to enable the ability to send a message from the app into a Teams Channel, sign in to Microsoft Teams using your Microsoft 365 dev tenant account.
Once you're signed in, expand a team, and find a channel that you want to send messages to from the app. For example, you might select the Company team and the General channel (or whatever team/channel you'd like to use).
In the team header, click on the three dots (...) and select Get link to team.

In the link that appears in the popup window, the team ID is the string of letters and numbers after team/. For example, in the link "https://teams.microsoft.com/l/team/19%3ae9b9.../", the team ID is 19%3ae9b9... up to the following / character.

Copy the team ID and assign it to TEAM_ID in the .env file.

In the channel header, click on the three dots (...) and select Get link to channel.

In the link that appears in the popup window, the channel ID is the string of letters and numbers after channel/. For example, in the link "https://teams.microsoft.com/l/channel/19%3aQK02.../", the channel ID is 19%3aQK02... up to the following / character.

Copy the channel ID and assign it to CHANNEL_ID in the .env file.

Save the .env file before continuing.
Start/Restart the Application and API Servers
Perform one of the following steps based on the exercises you completed up to this point:
If you started the database, API server, and web server in an earlier exercise, you need to stop the API server and web server and restart them to pick up the .env file changes. You can leave the database running.
Locate the terminal windows running the API server and web server and press CTRL + C to stop them. Start them again by typing npm start in each terminal window and pressing Enter. Continue to the next exercise.

If you haven't started the database, API server, and web server yet, complete the following steps:
In the following steps you'll create three terminal windows in Visual Studio Code.
Right-click on the .env file in the Visual Studio Code file list and select Open in Integrated Terminal. Ensure that your terminal is at the root of the project - openai-acs-msgraph - before continuing.
Choose from one of the following options to start the PostgreSQL database:
If you have Docker Desktop installed and running, run docker-compose up in the terminal window and press Enter.

If you have Podman with podman-compose installed and running, run podman-compose up in the terminal window and press Enter.

To run the PostgreSQL container directly using either Docker Desktop, Podman, nerdctl, or another container runtime you have installed, run the following command in the terminal window:
Mac, Linux, or Windows Subsystem for Linux (WSL):

```shell
[docker | podman | nerdctl] run --name postgresDb -e POSTGRES_USER=web -e POSTGRES_PASSWORD=web-password -e POSTGRES_DB=CustomersDB -v $(pwd)/data:/var/lib/postgresql/data -p 5432:5432 postgres
```

Windows with PowerShell:

```powershell
[docker | podman] run --name postgresDb -e POSTGRES_USER=web -e POSTGRES_PASSWORD=web-password -e POSTGRES_DB=CustomersDB -v ${PWD}/data:/var/lib/postgresql/data -p 5432:5432 postgres
```
Once the database container starts, press the + icon in the Visual Studio Code Terminal toolbar to create a second terminal window.
cd into the server/typescript folder and run the following commands to install the dependencies and start the API server.

```shell
npm install
npm start
```

Press the + icon again in the Visual Studio Code Terminal toolbar to create a third terminal window.

cd into the client folder and run the following commands to install the dependencies and start the web server.

```shell
npm install
npm start
```
A browser will launch and you'll be taken to http://localhost:4200.
Organizational Data: Signing In a User and Getting an Access Token
Users need to authenticate with Microsoft Entra ID in order for Microsoft Graph to access organizational data. In this exercise you'll see how the Microsoft Graph Toolkit's mgt-login component can be used to authenticate users and retrieve an access token. The access token can then be used to make calls to Microsoft Graph.
Note
If you're new to Microsoft Graph, you can learn more about it in the Microsoft Graph Fundamentals learning path.
In this exercise, you will:
- Learn how to associate a Microsoft Entra ID app with the Microsoft Graph Toolkit so that it can be used to authenticate users and retrieve organizational data.
- Learn about the importance of scopes.
- Learn how the Microsoft Graph Toolkit's mgt-login component can be used to authenticate users and retrieve an access token.
Using the Sign In Feature
In the previous exercise you created an app registration in Microsoft Entra ID and started the application server and API server. You also updated the following values in the .env file:

ENTRAID_CLIENT_ID=<APPLICATION_CLIENT_ID_VALUE>
TEAM_ID=<TEAMS_TEAM_ID>
CHANNEL_ID=<TEAMS_CHANNEL_ID>
Ensure you've completed the previous exercise before continuing.
Go back to the browser (http://localhost:4200), select Sign In in the header, and sign in using an admin user account from your Microsoft 365 Developer tenant.
Tip
Sign in with your Microsoft 365 developer tenant admin account. You can view other users in your developer tenant by going to the Microsoft 365 admin center.
If you're signing in to the application for the first time, you'll be prompted to consent to the permissions requested by the application. You'll learn more about these permissions (also called "scopes") in the next section as you explore the code. Select Accept to continue.
Once you're signed in, you should see the name of the user displayed in the header.
Exploring the Sign In Code
Now that you've signed in, let's look at the code used to sign in the user and retrieve an access token and user profile. You'll learn about the mgt-login web component that's part of the Microsoft Graph Toolkit.
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Open client/package.json and notice that the @microsoft/mgt package is included in the dependencies. This package contains MSAL (Microsoft Authentication Library) provider features as well as web components such as mgt-login and others that can be used to sign in users and retrieve and display organizational data.

To use the mgt-login component to sign in users, the Microsoft Entra ID app's client Id (stored in the .env file as ENTRAID_CLIENT_ID) needs to be referenced and used. Open graph.service.ts and locate the init() function. The full path to the file is client/src/app/core/graph.service.ts. You'll see the following code:

Providers.globalProvider = new Msal2Provider({
    clientId: ENTRAID_CLIENT_ID, // retrieved from .env file
    scopes: ['User.Read', 'Presence.Read', 'Chat.ReadWrite', 'Calendars.Read',
        'ChannelMessage.Read.All', 'ChannelMessage.Send', 'Files.Read.All', 'Mail.Read']
});
This code creates a new Msal2Provider object, passing the Microsoft Entra ID client Id from your app registration and the scopes for which the app will request access. The scopes are used to request access to Microsoft Graph resources that the app will access. After the Msal2Provider object is created, it's assigned to the Providers.globalProvider object, which is used by Microsoft Graph Toolkit components to retrieve data from Microsoft Graph.

Open header.component.html in your editor and locate the mgt-login component. The full path to the file is client/src/app/header/header.component.html.
<mgt-login *ngIf="featureFlags.microsoft365Enabled" class="mgt-dark" (loginCompleted)="loginCompleted()"></mgt-login>
The mgt-login component enables user sign in and access token retrieval for use with Microsoft Graph. Upon successful sign in, the loginCompleted event is triggered, subsequently calling the loginCompleted() function. Although mgt-login is used within an Angular component in this example, it is compatible with any web application.

Display of the mgt-login component depends on the featureFlags.microsoft365Enabled value being set to true. This custom flag checks for the presence of the ENTRAID_CLIENT_ID environment variable to confirm that the application is properly configured and able to authenticate against Microsoft Entra ID. The flag is added to accommodate cases where users opt to complete only the AI or Communication exercises within the tutorial, rather than following the entire sequence.

Open header.component.ts and locate the loginCompleted function. This function is called when the loginCompleted event is emitted and is used to retrieve the signed in user's profile using Providers.globalProvider.

async loginCompleted() {
    const me = await Providers.globalProvider.graph.client.api('me').get();
    this.userLoggedIn.emit(me);
}

In this example, a call is made to the Microsoft Graph me API to retrieve the signed in user's profile (me represents the current signed in user). The this.userLoggedIn.emit(me) statement emits an event from the component to pass the profile data to the parent component. The parent component is the app.component.ts file in this case, which is the root component for the application.

To learn more about the mgt-login component, visit the Microsoft Graph Toolkit documentation.
Now that you've logged into the application, let's look at how organizational data can be retrieved.
Organizational Data: Retrieving Files, Chats, and Sending Messages to Teams
In today's digital environment, users work with a wide array of organizational data, including emails, chats, files, calendar events, and more. This can lead to frequent context shifts—switching between tasks or applications—which can disrupt focus and reduce productivity. For example, a user working on a project might need to switch from their current application to Outlook to find crucial details in an email or switch to OneDrive for Business to find a related file. This back-and-forth action disrupts focus and wastes time that could be better spent on the task at hand.
To enhance efficiency, you can integrate organizational data directly into the applications users use everyday. By bringing in organizational data to your applications, users can access and manage information more seamlessly, minimizing context shifts and improving productivity. Additionally, this integration provides valuable insights and context, enabling users to make informed decisions and work more effectively.
In this exercise, you will:
- Learn how the mgt-search-results web component in the Microsoft Graph Toolkit can be used to search for files.
- Learn how to call Microsoft Graph directly to retrieve files from OneDrive for Business and chat messages from Microsoft Teams.
- Understand how to send chat messages to Microsoft Teams channels using Microsoft Graph.
Note
The mgt-search-results component is currently in preview and is subject to change. In addition to showing how mgt-search-results can be used, the exercises also show how to perform the same tasks using Microsoft Graph directly.
Using the Organizational Data Feature
In a previous exercise you created an app registration in Microsoft Entra ID and started the application server and API server. You also updated the following values in the .env file:

ENTRAID_CLIENT_ID=<APPLICATION_CLIENT_ID_VALUE>
TEAM_ID=<TEAMS_TEAM_ID>
CHANNEL_ID=<TEAMS_CHANNEL_ID>
Ensure you've completed the previous exercise before continuing.
Go back to the browser (http://localhost:4200). If you haven't already signed in, select Sign In in the header, and sign in with a user from your Microsoft 365 Developer tenant.
Note
In addition to authenticating the user, the mgt-login web component also retrieves an access token that can be used by Microsoft Graph to access files, chats, emails, calendar events, and other organizational data. The access token contains the scopes (permissions) such as Chat.ReadWrite, Files.Read.All, and others that you saw earlier:

Providers.globalProvider = new Msal2Provider({
    clientId: ENTRAID_CLIENT_ID, // retrieved from .env file
    scopes: ['User.Read', 'Presence.Read', 'Chat.ReadWrite', 'Calendars.Read',
        'ChannelMessage.Read.All', 'ChannelMessage.Send', 'Files.Read.All', 'Mail.Read']
});
Select View Related Content for the Adatum Corporation row in the datagrid. This will cause organizational data such as files, chats, emails, and calendar events to be retrieved using Microsoft Graph. Once the data loads, it'll be displayed below the datagrid in a tabbed interface. It's important to mention that you may not see any data at this point since you haven't added any files, chats, emails, or calendar events for the user in your Microsoft 365 developer tenant yet. Let's fix that in the next step.
Your Microsoft 365 tenant may not have any related organizational data for Adatum Corporation at this stage. To add some sample data, perform at least one of the following actions:
Add files by visiting https://onedrive.com and signing in using your Microsoft 365 Developer tenant credentials.
- Select My files in the left navigation.
- Select Upload and then Folder from the menu.
- Select the openai-acs-msgraph/customer documents folder from the project you cloned.
Add chat messages and calendar events by visiting https://teams.microsoft.com and signing in using your Microsoft 365 Developer tenant credentials.
- Select Teams in the left navigation.
- Select a team and channel.
- Select New conversation.
- Enter New order placed for Adatum Corporation and select the Send button.
- Feel free to add additional chat messages that mention other companies used in the application such as Adventure Works Cycles, Contoso Pharmaceuticals, and Tailwind Traders.
- Select Calendar in the left navigation.
- Select New meeting.
- Enter "Meet with Adatum Corporation about project schedule" for the title and body.
- Select Save.

Add emails by visiting https://outlook.com and signing in using your Microsoft 365 Developer tenant credentials.
- Select New mail.
- Enter your personal email address in the To field.
- Enter New order placed for Adatum Corporation for the subject and anything you'd like for the body.
- Select Send.
Go back to the application in the browser and refresh the page. Select View Related Content again for the Adatum Corporation row. You should now see data displayed in the tabs depending upon which tasks you performed in the previous step.
Let's explore the code that enables the organizational data feature in the application. To retrieve the data, the client-side portion of the application uses the access token retrieved by the mgt-login component you looked at earlier to make calls to Microsoft Graph APIs.
Exploring Files Search Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
Let's start by looking at how file data is retrieved from OneDrive for Business. Open files.component.html and take a moment to look through the code. The full path to the file is client/src/app/files/files.component.html.
Locate the mgt-search-results component and note the following attributes:
<mgt-search-results class="search-results" entity-types="driveItem" [queryString]="searchText" (dataChange)="dataChange($any($event))" />
The mgt-search-results component is part of the Microsoft Graph Toolkit and as the name implies, it's used to display search results from Microsoft Graph. The component uses the following features in this example:
- The class attribute is used to specify that the search-results CSS class should be applied to the component.
- The entity-types attribute is used to specify the type of data to search for. In this case, the value is driveItem, which is used to search for files in OneDrive for Business.
- The queryString attribute is used to specify the search term. In this case, the value is bound to the searchText property, which is passed to the files component when the user selects View Related Content for a row in the datagrid. The square brackets around queryString indicate that the property is bound to the searchText value.
- The dataChange event fires when the search results change. In this case, a custom function named dataChange() is called in the files component and the event data is passed to the function. The parentheses around dataChange indicate that the event is bound to the dataChange() function.
- Since no custom template is supplied, the default template built into mgt-search-results is used to display the search results.
An alternative to using components such as mgt-search-results is to call Microsoft Graph APIs directly using code. To see how that works, open the graph.service.ts file and locate the searchFiles() function. The full path to the file is client/src/app/core/graph.service.ts.

You'll notice that a query parameter is passed to the function. This is the search term that's passed as the user selects View Related Content for a row in the datagrid. If no search term is passed, an empty array is returned.

async searchFiles(query: string) {
    const files: DriveItem[] = [];
    if (!query) return files;
    ...
}
A filter is then created that defines the type of search to perform. In this case the code is searching for files in OneDrive for Business, so driveItem is used — the same value you passed to entity-types in the mgt-search-results component earlier. The query parameter is then added to the queryString filter along with ContentType:Document.

const filter = {
    "requests": [
        {
            "entityTypes": [
                "driveItem"
            ],
            "query": {
                "queryString": `${query} AND ContentType:Document`
            }
        }
    ]
};
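The same request payload shape is reused later in this tutorial for other entity types (chat messages, emails, and events). As a side note, a small helper like the following — hypothetical, not part of the app's code — makes that shape explicit:

```typescript
// Hypothetical helper (not in the app) that builds the /search/query
// request payload shown above for any supported entity type.
function buildSearchFilter(entityType: string, queryString: string) {
    return {
        requests: [
            {
                entityTypes: [entityType],
                query: { queryString }
            }
        ]
    };
}

// Reproduces the filter used by searchFiles() for a sample company name:
const fileFilter = buildSearchFilter('driveItem', 'Adatum Corporation AND ContentType:Document');
```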
A call is then made to the /search/query Microsoft Graph API using the Providers.globalProvider.graph.client.api() function. The filter object is passed to the post() function, which sends the data to the API.

const searchResults = await Providers.globalProvider.graph.client.api('/search/query').post(filter);
The search results are then iterated through to locate hits. Each hit contains information about a document that was found. A property named resource contains the document metadata and is added to the files array.

if (searchResults.value.length !== 0) {
    for (const hitContainer of searchResults.value[0].hitsContainers) {
        if (hitContainer.hits) {
            for (const hit of hitContainer.hits) {
                files.push(hit.resource);
            }
        }
    }
}
The files array is then returned to the caller.

return files;
Looking through this code you can see that the mgt-search-results web component you explored earlier does a lot of work for you and significantly reduces the amount of code you have to write! However, there may be scenarios where you want to call Microsoft Graph directly to have more control over the data that's sent to the API or how the results are processed.
Open the files.component.ts file and locate the search() function. The full path to the file is client/src/app/files/files.component.ts.

Although the body of this function is commented out due to the mgt-search-results component being used, the function could be used to make the call to Microsoft Graph when the user selects View Related Content for a row in the datagrid. The search() function calls searchFiles() in graph.service.ts and passes the query parameter to it (the name of the company in this example). The results of the search are then assigned to the data property of the component.

override async search(query: string) {
    this.data = await this.graphService.searchFiles(query);
}
The files component can then use the data property to display the search results. You could handle this using custom HTML bindings or by using another Microsoft Graph Toolkit control named mgt-file-list. Here's an example of binding the data property to one of the component's properties named files and handling the itemClick event as the user interacts with a file.

<mgt-file-list (itemClick)="itemClick($any($event))" [files]="data"></mgt-file-list>
Whether you choose to use the mgt-search-results component shown earlier or write custom code to call Microsoft Graph will depend on your specific scenario. In this example, the mgt-search-results component is used to simplify the code and reduce the amount of work you have to do.
Exploring Teams Chat Messages Search Code
Go back to graph.service.ts and locate the searchChatMessages() function. You'll see that it's similar to the searchFiles() function you looked at previously.

- It posts filter data to Microsoft Graph's /search/query API and converts the results into an array of objects that have information about the teamId, channelId, and messageId that match the search term.
- To retrieve the Teams channel messages, a second call is made to the /teams/${chat.teamId}/channels/${chat.channelId}/messages/${chat.messageId} API and the teamId, channelId, and messageId are passed. This returns the full message details.
- Additional filtering tasks are performed and the resulting messages are returned from searchChatMessages() to the caller.
Open the chats.component.ts file and locate the search() function. The full path to the file is client/src/app/chats/chats.component.ts. The search() function calls searchChatMessages() in graph.service.ts and passes the query parameter to it.

override async search(query: string) {
    this.data = await this.graphService.searchChatMessages(query);
}
The results of the search are assigned to the data property of the component, and data binding is used to iterate through the results array and render the data. This example uses an Angular Material card component to display the search results.

<div *ngIf="data.length">
    <mat-card *ngFor="let chatMessage of data">
        <mat-card-header>
            <mat-card-title [innerHTML]="chatMessage.summary"></mat-card-title>
        </mat-card-header>
        <mat-card-actions>
            <a mat-stroked-button color="basic" [href]="chatMessage.webUrl" target="_blank">View Message</a>
        </mat-card-actions>
    </mat-card>
</div>
Sending a Message to a Microsoft Teams Channel
In addition to searching for Microsoft Teams chat messages, the application also allows a user to send messages to a Microsoft Teams channel. This can be done by calling the /teams/${teamId}/channels/${channelId}/messages endpoint of Microsoft Graph.

In the following code you'll see that a URL is created that includes the teamId and channelId values. Environment variable values are used for the team ID and channel ID in this example, but those values could also be retrieved dynamically using Microsoft Graph. The body constant contains the message to send. A POST request is then made and the results are returned to the caller.

async sendTeamsChat(message: string): Promise<TeamsDialogData> {
    if (!message) throw new Error('No message to send.');
    if (!TEAM_ID || !CHANNEL_ID) throw new Error('Team ID or Channel ID not set in environment variables. Please set TEAM_ID and CHANNEL_ID in the .env file.');

    const url = `https://graph.microsoft.com/v1.0/teams/${TEAM_ID}/channels/${CHANNEL_ID}/messages`;
    const body = {
        "body": {
            "contentType": "html",
            "content": message
        }
    };
    const response = await Providers.globalProvider.graph.client.api(url).post(body);
    return {
        id: response.id,
        teamId: response.channelIdentity.teamId,
        channelId: response.channelIdentity.channelId,
        message: response.body.content,
        webUrl: response.webUrl,
        title: 'Send Teams Chat'
    };
}
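The URL templating and payload shape used by sendTeamsChat() can be isolated into a small pure helper. This is a hypothetical refactoring, shown only to highlight the request shape Microsoft Graph expects for channel messages:

```typescript
// Hypothetical helper (not in the app) isolating the request construction
// from sendTeamsChat() above.
function buildChannelMessageRequest(teamId: string, channelId: string, message: string) {
    return {
        url: `https://graph.microsoft.com/v1.0/teams/${teamId}/channels/${channelId}/messages`,
        body: {
            body: {
                contentType: 'html', // message content is sent as HTML
                content: message
            }
        }
    };
}
```

The returned url and body would be passed to a POST call exactly as in the function above.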
Leveraging this type of functionality in Microsoft Graph provides a great way to enhance user productivity by allowing users to interact with Microsoft Teams directly from the application they're already using.
Organizational Data: Retrieving Emails and Calendar Events
In the previous exercise you learned how to retrieve files from OneDrive for Business and chats from Microsoft Teams using Microsoft Graph and the mgt-search-results component from Microsoft Graph Toolkit. You also learned how to send messages to Microsoft Teams channels. In this exercise, you'll learn how to retrieve email messages and calendar events from Microsoft Graph and integrate them into the application.
In this exercise, you will:
- Learn how the mgt-search-results web component in the Microsoft Graph Toolkit can be used to search for emails and calendar events.
- Learn how to customize the mgt-search-results component to render search results in a custom way.
- Learn how to call Microsoft Graph directly to retrieve emails and calendar events.
Exploring Email Messages Search Code
Tip
If you're using Visual Studio Code, you can open files directly by selecting:
- Windows/Linux: Ctrl + P
- Mac: Cmd + P
Then type the name of the file you want to open.
In a previous exercise you created an app registration in Microsoft Entra ID and started the application server and API server. You also updated the following values in the .env file:

ENTRAID_CLIENT_ID=<APPLICATION_CLIENT_ID_VALUE>
TEAM_ID=<TEAMS_TEAM_ID>
CHANNEL_ID=<TEAMS_CHANNEL_ID>
Ensure you've completed the previous exercise before continuing.
Open emails.component.html and take a moment to look through the code. The full path to the file is client/src/app/emails/emails.component.html.
Locate the mgt-search-results component:
<mgt-search-results class="search-results" entity-types="message" [queryString]="searchText" (dataChange)="dataChange($any($event))"> <template data-type="result-message"></template> </mgt-search-results>
This example of the mgt-search-results component is configured the same way as the one you looked at previously. The only differences are that the entity-types attribute is set to message, which is used to search for email messages, and an empty template is supplied.

- The class attribute is used to specify that the search-results CSS class should be applied to the component.
- The entity-types attribute is used to specify the type of data to search for. In this case, the value is message.
- The queryString attribute is used to specify the search term.
- The dataChange event fires when the search results change. The emails component's dataChange() function is called, the results are passed to it, and a property named data is updated in the component.
- An empty template is defined for the component. This type of template is normally used to define how the search results will be rendered. However, in this scenario we're telling the component not to render any message data. Instead, we'll render the data ourselves using standard data binding (Angular is used in this case, but you can use any library/framework you want).
Look below the mgt-search-results component in emails.component.html to find the data bindings used to render the email messages. This example iterates through the data property and writes out the email subject, body preview, and a link to view the full email message.

<div *ngIf="data.length">
    <mat-card *ngFor="let email of data">
        <mat-card-header>
            <mat-card-title>{{email.resource.subject}}</mat-card-title>
            <mat-card-subtitle [innerHTML]="email.resource.bodyPreview"></mat-card-subtitle>
        </mat-card-header>
        <mat-card-actions>
            <a mat-stroked-button color="basic" [href]="email.resource.webLink" target="_blank">View Email Message</a>
        </mat-card-actions>
    </mat-card>
</div>
In addition to using the mgt-search-results component to retrieve messages, Microsoft Graph provides several APIs that can be used to search emails as well. The /search/query API that you saw earlier could certainly be used, but a more straightforward option is the messages API.

To see how to call this API, go back to graph.service.ts and locate the searchEmailMessages() function. It creates a URL that can be used to call the messages endpoint of Microsoft Graph and assigns the query value to the $search parameter. The code then makes a GET request and returns the results to the caller. The $search operator searches the subject, body, and sender fields automatically.

async searchEmailMessages(query:string) {
    if (!query) return [];

    // The $search operator will search the subject, body, and sender fields automatically
    const url = `https://graph.microsoft.com/v1.0/me/messages?$search="${query}"&$select=subject,bodyPreview,from,toRecipients,receivedDateTime,webLink`;
    const response = await Providers.globalProvider.graph.client.api(url).get();
    return response.value;
}
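One detail worth noting: the search term is interpolated directly into the URL here. If a term could contain reserved URL characters, encoding it is safer. A hypothetical variant of the URL construction (not the app's actual code) might look like this:

```typescript
// Hypothetical variant of the searchEmailMessages() URL construction that
// URL-encodes the quoted search term before interpolating it.
function buildMessageSearchUrl(query: string): string {
    const select = 'subject,bodyPreview,from,toRecipients,receivedDateTime,webLink';
    return `https://graph.microsoft.com/v1.0/me/messages` +
        `?$search=${encodeURIComponent(`"${query}"`)}&$select=${select}`;
}
```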
The emails component located in emails.component.ts calls searchEmailMessages() and displays the results in the UI.

override async search(query: string) {
    this.data = await this.graphService.searchEmailMessages(query);
}
Exploring Calendar Events Search Code
Searching for calendar events can also be accomplished using the mgt-search-results component. It can handle rendering the results for you, but you can also define your own template which you'll see later in this exercise.
Open calendar-events.component.html and take a moment to look through the code. The full path to the file is client/src/app/calendar-events/calendar-events.component.html. You'll see that it's very similar to the files and emails components you looked at previously.
<mgt-search-results class="search-results" entity-types="event" [queryString]="searchText" (dataChange)="dataChange($any($event))"> <template data-type="result-event"></template> </mgt-search-results>
This example of the mgt-search-results component is configured the same way as the ones you looked at previously. The only differences are that the entity-types attribute is set to event, which is used to search for calendar events, and an empty template is supplied.

- The class attribute is used to specify that the search-results CSS class should be applied to the component.
- The entity-types attribute is used to specify the type of data to search for. In this case, the value is event.
- The queryString attribute is used to specify the search term.
- The dataChange event fires when the search results change. The calendar events component's dataChange() function is called, the results are passed to it, and a property named data is updated in the component.
- An empty template is defined for the component. In this scenario we're telling the component not to render any data. Instead, we'll render the data ourselves using standard data binding.
Immediately below the mgt-search-results component in calendar-events.component.html you'll find the data bindings used to render the calendar events. This example iterates through the data property and writes out the start date, time, and subject of the event. Custom functions included in the component, such as dayFromDateTime() and timeRangeFromEvent(), are called to format data properly. The HTML bindings also include a link to view the calendar event in Outlook and the location of the event if one is specified.

<div *ngIf="data.length">
    <div class="root" *ngFor='let event of data'>
        <div class="time-container">
            <div class="date">{{ dayFromDateTime(event.resource.start.dateTime)}}</div>
            <div class="time">{{ timeRangeFromEvent(event.resource) }}</div>
        </div>
        <div class="separator">
            <div class="vertical-line top"></div>
            <div class="circle">
                <div *ngIf="!event.resource.bodyPreview?.includes('Join Microsoft Teams Meeting')" class="inner-circle"></div>
            </div>
            <div class="vertical-line bottom"></div>
        </div>
        <div class="details">
            <div class="subject">{{ event.resource.subject }}</div>
            <div class="location" *ngIf="event.resource.location?.displayName">
                at <a href="https://bing.com/maps/default.aspx?where1={{event.resource.location.displayName}}" target="_blank" rel="noopener"><b>{{ event.resource.location.displayName }}</b></a>
            </div>
            <div class="attendees" *ngIf="event.resource.attendees?.length">
                <span class="attendee" *ngFor="let attendee of event.resource.attendees">
                    <mgt-person person-query="{{attendee.emailAddress.name}}"></mgt-person>
                </span>
            </div>
            <div class="online-meeting" *ngIf="event.resource.bodyPreview?.includes('Join Microsoft Teams Meeting')">
                <img class="online-meeting-icon" src="https://img.icons8.com/color/48/000000/microsoft-teams.png" title="Online Meeting" />
                <a class="online-meeting-link" href="{{ event.resource.onlineMeetingUrl }}">Join Teams Meeting</a>
            </div>
        </div>
    </div>
</div>
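The dayFromDateTime() and timeRangeFromEvent() helpers aren't shown in the template above. Here's a hypothetical sketch of what such formatting functions might look like — the component's actual implementations may differ:

```typescript
// Hypothetical formatting helpers matching those referenced in the
// calendar-events component template; not the app's actual code.

// Formats an ISO date/time string (e.g. "2024-01-15T14:00:00") as a short
// day label such as "Mon, Jan 15".
function dayFromDateTime(dateTime: string): string {
    return new Date(dateTime).toLocaleDateString('en-US', {
        weekday: 'short', month: 'short', day: 'numeric'
    });
}

// Formats an event's start/end times as a range such as "2:00 PM - 3:00 PM".
function timeRangeFromEvent(event: { start: { dateTime: string }, end: { dateTime: string } }): string {
    const opts: Intl.DateTimeFormatOptions = { hour: 'numeric', minute: '2-digit' };
    const start = new Date(event.start.dateTime).toLocaleTimeString('en-US', opts);
    const end = new Date(event.end.dateTime).toLocaleTimeString('en-US', opts);
    return `${start} - ${end}`;
}
```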
In addition to searching for calendar events using the /search/query API, Microsoft Graph also provides an events API that can be used to search calendar events as well. Locate the searchCalendarEvents() function in graph.service.ts.

The searchCalendarEvents() function creates start and end date/time values that are used to define the time period to search. It then creates a URL that can be used to call the events endpoint of Microsoft Graph and includes the query parameter and the start and end date/time variables. A GET request is then made and the results are returned to the caller.

async searchCalendarEvents(query:string) {
    if (!query) return [];

    const startDateTime = new Date();
    const endDateTime = new Date(startDateTime.getTime() + (7 * 24 * 60 * 60 * 1000));
    const url = `/me/events?startdatetime=${startDateTime.toISOString()}&enddatetime=${endDateTime.toISOString()}&$filter=contains(subject,'${query}')&orderby=start/dateTime`;
    const response = await Providers.globalProvider.graph.client.api(url).get();
    return response.value;
}
Here's a breakdown of the URL that's created:
- The /me/events portion of the URL is used to specify that the events of the signed in user should be retrieved.
- The startdatetime and enddatetime parameters are used to define the time period to search. In this case, the search will return events that start within the next 7 days.
- The $filter query parameter is used to filter the results by the query value (the company name selected from the datagrid in this case). The contains() function is used to look for the query value in the subject property of the calendar event.
- The orderby query parameter is used to order the results by the start/dateTime property.
Once the url is created, a GET request is made to the Microsoft Graph API using the value of url, and the results are returned to the caller.

As with the previous components, the calendar-events component (the calendar-events.component.ts file) calls search() and displays the results.

override async search(query: string) {
    this.data = await this.graphService.searchCalendarEvents(query);
}
Note
You can make Microsoft Graph calls from a custom API or server-side application as well. View the following tutorial to see an example of calling a Microsoft Graph API from an Azure Function.
You've now seen examples of using Microsoft Graph to retrieve files, chats, email messages, and calendar events. The same concepts can be applied to other Microsoft Graph APIs as well. For example, you could use the Microsoft Graph users API to search for users in your organization, or the Microsoft Graph groups API to search for groups. You can view the full list of Microsoft Graph APIs in the documentation.
Congratulations!
You've completed this tutorial
Congratulations! You've learned how Azure OpenAI can be used to enhance user productivity, how Azure Communication Services can be used to integrate communication features, and how Microsoft Graph APIs and components can be used to retrieve and display organizational data. By using these technologies, you can create effective solutions that increase user productivity by minimizing context shifts and providing necessary decision-making information.
Clean Up Azure Resources
Clean up your Azure resources to avoid additional charges to your account. Go to the Azure portal and delete the following resources:
- The Azure Cognitive Search resource
- The Azure Storage resource
- The Azure OpenAI resource
- The Azure Communication Services resource
Next Steps
Documentation
- Azure OpenAI Documentation
- Azure OpenAI on your data
- Azure Communication Services Documentation
- Microsoft Graph Documentation
- Microsoft Graph Toolkit Documentation
- Microsoft Teams Developer Documentation
Training Content
- Apply prompt engineering with Azure OpenAI Service
- Get started with Azure OpenAI Service
- Introduction to Azure Communication Services
- Microsoft Graph Fundamentals
- Video Course: Microsoft Graph Fundamentals for Beginners
- Explore Microsoft Graph scenarios for JavaScript development
- Explore Microsoft Graph scenarios for ASP.NET Core development
- Get started with Microsoft Graph Toolkit
- Build and deploy apps for Microsoft Teams using Teams Toolkit for Visual Studio Code