How to run a purge command in ADX using ADF?
Approach 1: I created an Azure Data Explorer (ADX) Command activity in ADF and, in the command field, wrote the purge command copied from the Azure docs: `.purge table MyTable in database DBName allrecords`. I tried running the purge command, but it keeps returning a syntax error:…
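To help isolate whether the syntax error comes from the command text itself or from the ADF activity, here is a minimal sketch of issuing the same control command with the azure-kusto-data Python SDK, assuming a hypothetical cluster URI and using the command text exactly as quoted above:

```python
# Minimal sketch (assumptions: hypothetical cluster URI and database; azure-kusto-data
# installed and the caller signed in with `az login`). Runs the purge command as a
# management command.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://mycluster.westeurope.kusto.windows.net"  # hypothetical
database = "DBName"

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# Command text taken verbatim from the question; .purge is a control command,
# so it goes through execute_mgmt rather than execute_query.
command = ".purge table MyTable in database DBName allrecords"
response = client.execute_mgmt(database, command)
print(response.primary_results[0])
```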
Cost-Effectiveness: ADF vs. Logic Apps for Azure Blob Storage
Good Afternoon, I am currently evaluating the cost-effectiveness of using Azure Data Factory (ADF) versus Azure Logic Apps for moving and deleting data in Azure Blob Storage. Could you please share your experiences and insights on the cost differences…
JIRA Tempo REST API with Copy Data Activity connection issue
Dear all, I would like to store the data from my Jira Tempo account in my Azure Storage account via a Copy Data activity in an Azure Synapse workspace. It would be great if someone could help me with this. I created a new linked service with the REST API and put…
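For troubleshooting the connection independently of Synapse, a minimal Python sketch like the one below can confirm the base URL and authorization header work; the endpoint and token here are hypothetical placeholders and should be replaced with the values from the Tempo API documentation:

```python
# Minimal sketch (assumptions: hypothetical endpoint and token; the real Tempo URL,
# auth scheme, and paging parameters should come from the Tempo API docs). The same
# base URL and header would go into the REST linked service and Copy Data source.
import requests

base_url = "https://example-tempo-api.invalid/accounts"  # hypothetical placeholder
token = "<api-token>"  # hypothetical

resp = requests.get(base_url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
resp.raise_for_status()
print(resp.json())
```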
When I ran the pipeline I got the error below. Could you please help me out?
Operation on target Execute pl_sap_dataload_full failed: Operation on target df_commonload_copy1 failed: The request failed with status code '"BadRequest"'.
What could cause this error: Cluster creation failed due to an unexpected system error from a runtime related dependency
During one of the scheduled executions of a pipeline, an error occurred with the following details:
Error code: [5000](https://go.microsoft.com/fwlink/?linkid=2117168#error-code-5000)
Failure type: User configuration issue
Details: Cluster creation…
Azure Data Factory: Copy Data Activity: "Schema import failed: Please select a table"
Hello! I am pretty new to Azure Data Factory and trying to build my first data pipeline. So far, I have managed to install an integration runtime on my PC and establish a connection to the on-premises SQL Server. I have tested the connection and it says…
Can’t access storage in the sandbox
Hi, I am trying to log in to the sandbox environment in Azure, but I am not able to access the Azure storage account in the sandbox! Any help is appreciated.
Azure Data Factory pipeline failing on sink side: Job failed due to reason: at Sink 'sink1': Invalid object name '[Table_Name]'.
I have the following workflow: It fails on the sink step with: Job failed due to reason: at Sink 'sink1': Invalid object name '[table_name]'. The thing is, the table_name that shows up in the warning is one I'm not using anywhere in the whole data…
How to use wildcard for XML Files in Copy Data
I have an Azure Data Lake Storage Gen2 account with the following folder and file structure:
.\source
.\source\SystemA_20240618.xml
.\source\SystemB_20240618.xml
.\source\SystemA_20240619.xml
.\source\SystemB_20240619.xml
I need to process only the files…
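One way to sanity-check a wildcard pattern before putting it into the Copy activity's wildcard file name setting is a small Python sketch that lists the folder and applies the same pattern; the storage account and container names here are hypothetical:

```python
# Minimal sketch (assumptions: hypothetical account/container names;
# azure-storage-file-datalake and azure-identity installed). Lists ./source and keeps
# only files matching SystemA_*.xml, i.e. the pattern a Copy activity wildcard would use.
from fnmatch import fnmatch

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mystorageaccount.dfs.core.windows.net",  # hypothetical
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("mycontainer")  # hypothetical container

pattern = "SystemA_*.xml"  # the wildcard to reuse in the Copy activity
for path in fs.get_paths(path="source"):
    if not path.is_directory and fnmatch(path.name.rsplit("/", 1)[-1], pattern):
        print(path.name)
```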
How to transform SharePoint list items into an Excel format
I'm trying to fetch the columns inside a SharePoint list using the MS Graph API below: sites/site-id/lists/list-id/items?expand=fields Below is the output that I'm getting:
{ "@odata.context":…
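As an illustration of the transformation itself, independent of how it is wired into a pipeline, here is a minimal Python sketch that flattens the fields object of each item in that Graph response into rows and writes an Excel file; the token and output file name are placeholders:

```python
# Minimal sketch (assumptions: hypothetical access token, site/list IDs and output name;
# requests, pandas and openpyxl installed). Flattens value[*].fields from the Graph
# response into a single Excel sheet.
import pandas as pd
import requests

url = "https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items?expand=fields"
headers = {"Authorization": "Bearer <access-token>"}  # hypothetical token

data = requests.get(url, headers=headers, timeout=30).json()

# Each list item carries its column values under "fields"; keep only that part.
rows = [item.get("fields", {}) for item in data.get("value", [])]
pd.DataFrame(rows).to_excel("sharepoint_list.xlsx", index=False)
```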
How do I dynamically map columns in Data Factory to Dynamics 365?
Hi, I currently have a pipeline in Data Factory that will use a view from an Azure db to insert data into an entity in Dynamics 365. While this is working as expected, the next part of the process needs to update Dynamics 365. The number of fields that I…
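For the dynamic part, one common approach (a sketch only, assuming the Copy activity's mapping is bound to a pipeline parameter of type object) is to build a TabularTranslator-style mapping at runtime and pass it in as that parameter; the column and attribute names below are hypothetical:

```python
# Minimal sketch (assumption: the Copy activity's "mapping" is parameterized). Builds a
# TabularTranslator mapping from a simple source-column -> Dynamics-attribute dictionary.
import json

column_map = {  # hypothetical view columns and Dynamics 365 attribute names
    "AccountName": "name",
    "City": "address1_city",
}

translator = {
    "type": "TabularTranslator",
    "mappings": [
        {"source": {"name": src}, "sink": {"name": dst}}
        for src, dst in column_map.items()
    ],
}

# This JSON is what would be supplied to the pipeline parameter used by the mapping.
print(json.dumps(translator, indent=2))
```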
Azure Data Factory - Jira Connector Schema Change / missing data
Hi All, We have been using ADF to ingest Jira into our data lake for several months now. The connector has worked with no issues and we have been able to pull all issues, all the custom fields, etc. from Jira. Our Monday refresh failed due to a schema…
Data flow is not sinking all data because I have 84 JSON files in different sub-folders (of which 78 are blank and 6 have data). But when I delete the 78 blank files (keeping only the 6 files with data), the data flow sinks all results.
Hello. Please help: my data flow pipeline is not sinking all data. This happens whenever I have 84 JSON files, each in a different sub-folder (of which 78 are blank and 6 have data). But when I delete the 78 blank JSON files (and only…
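To confirm which of the files are actually empty (and, if needed, filter them out before the data flow runs), a small sketch along these lines could help; the account, container and prefix are hypothetical:

```python
# Minimal sketch (assumptions: hypothetical account/container/prefix; azure-storage-blob
# and azure-identity installed). Lists JSON files under the sub-folders and reports the
# zero-byte ones.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",  # hypothetical
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("mycontainer")  # hypothetical

empty, non_empty = [], []
for blob in container.list_blobs(name_starts_with="source/"):  # hypothetical prefix
    if blob.name.endswith(".json"):
        (empty if blob.size == 0 else non_empty).append(blob.name)

print(f"{len(empty)} blank JSON files, {len(non_empty)} with data")
```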
Data Factory triggered from blob storage creating multiple runs
I have an ADF pipeline that currently triggers from a blob storage event (blob created, ignoring empty blobs). Previously this correctly triggered a single ADF run when a new file was uploaded. I have recently introduced a Logic App which copies a new…
Base64 format of pdf file from Azure Data Factory
I am using Azure Data Factory to read a PDF file from a blob storage container and then convert it to Base64 using a Web activity. This is required to pass the Base64 value to an API. Although the Base64 value returned by the Web activity…
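For comparison, the expected Base64 payload can be produced locally with a short Python sketch (the file name is a placeholder); comparing this value with the Web activity's output can show whether the file content was altered before encoding:

```python
# Minimal sketch (assumption: a local copy of the same PDF is available). Produces the
# Base64 string that corresponds to the raw file bytes.
import base64

with open("sample.pdf", "rb") as f:  # hypothetical file name
    pdf_bytes = f.read()

b64_value = base64.b64encode(pdf_bytes).decode("ascii")
print(b64_value[:80], "...")  # print a prefix for a quick comparison
```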
Data Factory data flow for the SAP CDC connector
Hi all, I am working on getting CDC data from SAP using ADF. Currently, I am facing an issue passing variables using dynamic content in the data flow. Referring to the diagram, passing a variable in the linked service properties for the ODP context works fine…
Azure Data Factory SFTP Linked Service: Failed to read binary packet data! (ProtocolError)
Hi everyone, I'm facing a problem with an SFTP server where restrictions are applied to prevent the use of RSA keys with SHA-1 signatures. I can connect to the SFTP server without any problem using WinSCP. There are no network restrictions, so I can…
Pipelines that require a scale-up of the database are not scaling up
Hi, I have a couple of pipelines with a stored procedure activity that scales up a database before the actual processing of the data starts. This has been working fine for almost a year; however, in the past two days the scale-up times out and…
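For context, the kind of scale-up such a stored procedure typically wraps is an ALTER DATABASE call, which completes asynchronously in Azure SQL Database; here is a hedged sketch of issuing it from Python and polling for completion, with the server, database, credentials and target tier all hypothetical:

```python
# Minimal sketch (assumptions: hypothetical server/database names, credentials and target
# tier; pyodbc installed). ALTER DATABASE returns immediately in Azure SQL Database, so
# the script polls DATABASEPROPERTYEX until the new service objective is reported.
import time

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"  # hypothetical server
    "UID=admin_user;PWD=<password>",                          # hypothetical credentials
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)
cur = conn.cursor()

cur.execute("ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = 'S3');")  # hypothetical tier

while cur.execute(
    "SELECT DATABASEPROPERTYEX('mydb', 'ServiceObjective');"
).fetchone()[0] != "S3":
    time.sleep(15)  # the scale operation completes asynchronously
```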
Can we remove the artificial limitation on the ADF Snowflake V2 connector's support for multiple statements?
Issue with Snowflake V2: https://learn.microsoft.com/en-us/azure/data-factory/connector-snowflake?tabs=data-factory#differences-between-snowflake-and-snowflake-legacy How can I get feedback to the engineers who are working on this new connector for…
Concept ideas -> big table on the source where rows are getting deleted and the target is a DWH
Hi there, I need some ideas related to "efficient data load". On the source side, all tables have a primary key and created/updated timestamps. Therefore we have implemented incremental load logic in each pipeline so that we only…
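One way to handle deletions on top of an incremental load, sketched here with hypothetical key sets, is to pull only the primary keys from the source and anti-join them against the warehouse keys to find rows that no longer exist at the source:

```python
# Minimal sketch (assumptions: hypothetical key sets; in practice the two sets would be
# loaded from the source table and the DWH table). Detects rows deleted on the source side.
source_keys = {1, 2, 4, 5}     # primary keys currently present in the source table
dwh_keys = {1, 2, 3, 4, 5, 6}  # primary keys currently present in the warehouse

deleted_keys = dwh_keys - source_keys
print(f"Keys to soft-delete or remove in the DWH: {sorted(deleted_keys)}")
```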