Use Azure Data Factory to ingest data into an Azure Operator Insights Data Product
This article covers how to set up Azure Data Factory to write data into an Azure Operator Insights Data Product. For more information on Azure Data Factory, see What is Azure Data Factory.
Warning
Data Products do not support private links. It is not possible to set up a private link between a Data Product and Azure Data Factory.
Prerequisites
- A deployed Data Product: see Create an Azure Operator Insights Data Product.
- Permission to add role assignments to the Azure Key Vault instance for the Data Product.
  - To find the key vault, search for a resource group with a name starting with <data-product-name>-HostedResources-; the key vault is in this resource group (see the sketch after this list for a scripted lookup).
- A deployed Azure Data Factory instance.
- The Data Factory Contributor role on the Data Factory instance.
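You can also locate the Data Product's key vault programmatically instead of searching in the portal. The following is a minimal sketch using the Azure SDK for Python; the subscription ID, Data Product name, and the use of DefaultAzureCredential are placeholders and assumptions, not values from this article.

```python
# Sketch: find the Data Product's managed key vault by resource group prefix.
# Assumes azure-identity, azure-mgmt-resource and azure-mgmt-keyvault are installed
# and that you are already signed in (for example via `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.keyvault import KeyVaultManagementClient

subscription_id = "<subscription-id>"       # placeholder
data_product_name = "<data-product-name>"   # placeholder

credential = DefaultAzureCredential()
resource_client = ResourceManagementClient(credential, subscription_id)
keyvault_client = KeyVaultManagementClient(credential, subscription_id)

prefix = f"{data_product_name}-HostedResources-"
for rg in resource_client.resource_groups.list():
    if rg.name.startswith(prefix):
        # The Data Product's key vault (named aoi-<uid>-kv) lives in this resource group.
        for vault in keyvault_client.vaults.list_by_resource_group(rg.name):
            print(rg.name, vault.name)
```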
Create a Key Vault linked service
To connect Azure Data Factory to another Azure service, you must create a linked service. First, create a linked service to connect Azure Data Factory to the Data Product's key vault.
- In the Azure portal, find the Azure Data Factory resource.
- From the Overview pane, launch the Azure Data Factory studio.
- Go to the Manage view, then find Connections and select Linked Services.
- Create a new linked service using the New button.
- Select the Azure Key Vault type.
- Set the target to the Data Product's key vault (the key vault is in the resource group with a name starting with <data-product-name>-HostedResources- and is named aoi-<uid>-kv).
- Set the authentication method to System Assigned Managed Identity.
- Grant Azure Data Factory permissions on the Key Vault resource.
- Go to the Data Product's key vault in the Azure portal.
- In the Access Control (IAM) pane, add a new role assignment.
- Give the Data Factory managed identity (this has the same name as the Data Factory resource) the 'Key Vault Secrets User' role.
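If you manage your Data Factory from code rather than the studio, an equivalent linked service can be created with the azure-mgmt-datafactory SDK. This is a minimal sketch under assumed names: the linked service name DataProductKeyVault and all resource identifiers are placeholders, and the key vault URL follows the aoi-<uid>-kv naming described above.

```python
# Sketch: create the Key Vault linked service programmatically (equivalent to the
# studio steps above). Assumes azure-identity and azure-mgmt-datafactory are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import LinkedServiceResource, AzureKeyVaultLinkedService

subscription_id = "<subscription-id>"             # placeholder
resource_group = "<data-factory-resource-group>"  # placeholder
factory_name = "<data-factory-name>"              # placeholder

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# The base URL points at the Data Product's key vault (aoi-<uid>-kv). Because the
# authentication method is the Data Factory's system-assigned managed identity,
# no credential is stored in the linked service itself.
key_vault_ls = LinkedServiceResource(
    properties=AzureKeyVaultLinkedService(base_url="https://aoi-<uid>-kv.vault.azure.net/")
)
adf_client.linked_services.create_or_update(
    resource_group, factory_name, "DataProductKeyVault", key_vault_ls
)
```

The Key Vault role assignment described above still has to be granted separately; creating the linked service alone doesn't give the managed identity access to the secrets.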
Create a Blob Storage linked service
Data Products expose a Blob Storage endpoint for ingesting data. Use the newly created Key Vault linked service to connect Azure Data Factory to the Data Product ingestion endpoint.
- In the Azure portal, find the Azure Data Factory resource.
- From the Overview pane, launch the Azure Data Factory studio.
- Go to the Manage view, then find Connections and select Linked Services.
- Create a new linked service using the New button.
- Select the Azure Blob Storage type.
- Set the authentication type to SAS URI.
- Choose Azure Key Vault as the source.
- Select the Key Vault linked service that you created in Create a key vault linked service.
- Set the secret name to input-storage-sas.
- Leave the secret version as the default value ('Latest version').
Now the Data Factory is connected to the Data Product ingestion endpoint.
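The same connection can be sketched in code, reusing the client and variables from the Key Vault sketch above. The linked service names DataProductKeyVault and DataProductIngestion are assumed placeholders; the secret name input-storage-sas comes from the steps above.

```python
# Sketch: create the Blob Storage linked service for the Data Product ingestion
# endpoint, resolving the SAS URI from the key vault secret input-storage-sas.
# adf_client, resource_group and factory_name are as defined in the Key Vault sketch.
from azure.mgmt.datafactory.models import (
    LinkedServiceResource,
    AzureBlobStorageLinkedService,
    AzureKeyVaultSecretReference,
    LinkedServiceReference,
)

key_vault_reference = LinkedServiceReference(
    type="LinkedServiceReference", reference_name="DataProductKeyVault"
)

ingestion_ls = LinkedServiceResource(
    properties=AzureBlobStorageLinkedService(
        # SAS URI authentication: the full SAS URI is read from Key Vault at runtime.
        # Omitting secret_version means the latest version of the secret is used.
        sas_uri=AzureKeyVaultSecretReference(
            store=key_vault_reference, secret_name="input-storage-sas"
        )
    )
)
adf_client.linked_services.create_or_update(
    resource_group, factory_name, "DataProductIngestion", ingestion_ls
)
```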
Create Blob Storage datasets
To use the Data Product as the sink for a Data Factory pipeline, you must create a sink dataset.
- In the Azure portal, find the Azure Data Factory resource.
- From the Overview pane, launch the Azure Data Factory studio.
- Go to the Author view -> Add resource -> Dataset.
- Create a new Azure Blob Storage dataset.
- Select your output type.
- Set the linked service to the Data Product ingestion linked service that you created in Create a blob storage linked service.
- Set the container name to the name of the data type that the dataset is associated with.
- This information can be found in the Required ingestion configuration section of the documentation for your Data Product.
- For example, see Required ingestion configuration for the Monitoring - MCC Data Product.
- Ensure the folder path includes at least one directory; files copied into the root of the container won't be correctly ingested.
- Set the other fields as appropriate for your data.
- Follow the Azure Data Factory documentation (for example Creating a pipeline with the UI) to create a pipeline with this new dataset as the sink.
Repeat this step for all required datasets.
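As a rough illustration, this is how one sink dataset might be defined with the SDK, continuing from the earlier sketches. Binary is used here only as an example output type; the container name my-data-type and the folder path are hypothetical and must be replaced with the data type name and folder layout required by your Data Product.

```python
# Sketch: a binary sink dataset pointing at the Data Product ingestion endpoint.
# adf_client, resource_group and factory_name are as defined in the Key Vault sketch.
from azure.mgmt.datafactory.models import (
    DatasetResource,
    BinaryDataset,
    AzureBlobStorageLocation,
    LinkedServiceReference,
)

sink_dataset = DatasetResource(
    properties=BinaryDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="DataProductIngestion"
        ),
        location=AzureBlobStorageLocation(
            container="my-data-type",        # hypothetical: use your Data Product's data type name
            folder_path="incoming/example",  # at least one directory; files in the container root aren't ingested
        ),
    )
)
adf_client.datasets.create_or_update(
    resource_group, factory_name, "DataProductSinkDataset", sink_dataset
)
```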
Important
The Data Product may use the folder prefix or the file name prefix (this can be set as part of the pipeline, for example in the Copy Activity) to determine how to process an ingested file. For your Data Product's requirements for folder prefixes or file name prefixes, see the Required ingestion configuration section of the Data Product's documentation. For example, see Required ingestion configuration for the Monitoring - MCC Data Product.
Create Data Pipelines
Your Azure Data Factory is now configured to connect to your Data Product. To ingest data using this configuration, follow the Azure Data Factory documentation to:
- Set up a connection in Azure Data Factory to the service containing the source data.
- Set up pipelines in Azure Data Factory to copy data from the source into your Data Product, using the datasets created in Create Blob Storage datasets.
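As a sketch of the second step, the following defines a pipeline with a single Copy activity that writes into the sink dataset created earlier. SourceDataset is a hypothetical dataset for your source data, and the binary source and sink types are assumptions; adjust them, and any folder or file name prefixes your Data Product requires, to match your data.

```python
# Sketch: a pipeline with one Copy activity writing into the Data Product sink dataset.
# adf_client, resource_group and factory_name are as defined in the Key Vault sketch.
from azure.mgmt.datafactory.models import (
    PipelineResource,
    CopyActivity,
    DatasetReference,
    BinarySource,
    BinarySink,
)

copy_to_data_product = CopyActivity(
    name="CopyToDataProduct",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="DataProductSinkDataset")],
    source=BinarySource(),
    sink=BinarySink(),
)

pipeline = PipelineResource(activities=[copy_to_data_product])
adf_client.pipelines.create_or_update(
    resource_group, factory_name, "IngestIntoDataProduct", pipeline
)
```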