Exam Details

  • Exam Code: DP-300
  • Exam Name: Administering Relational Databases on Microsoft Azure
  • Certification: Microsoft Certifications
  • Vendor: Microsoft
  • Total Questions: 368 Q&As
  • Last Updated: Apr 01, 2025

Microsoft Certifications DP-300 Questions & Answers

  • Question 161:

    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You have an Azure Data Lake Storage account that contains a staging zone.

    You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

    Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks notebook, and then inserts the data into the data warehouse.

    Does this meet the goal?

    A. Yes

    B. No
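The scenario above describes a watermark-style incremental load: each daily run picks up only the rows changed since the previous run, transforms them, and writes them to the warehouse. The sketch below illustrates that pattern in plain Python; the function and field names (`incremental_run`, `last_updated`) are placeholders for illustration, not Data Factory or Synapse APIs.

```python
from datetime import datetime

def incremental_run(rows, last_watermark):
    """Return rows newer than the stored watermark, plus the new watermark."""
    new_rows = [r for r in rows if r["last_updated"] > last_watermark]
    new_watermark = max((r["last_updated"] for r in new_rows), default=last_watermark)
    return new_rows, new_watermark

# Staged data in the lake: one row already loaded, one changed since then.
staged = [
    {"id": 1, "last_updated": datetime(2024, 1, 1)},
    {"id": 2, "last_updated": datetime(2024, 1, 2)},
]
rows, wm = incremental_run(staged, datetime(2024, 1, 1))
# Only the row with id 2 is picked up; the watermark advances to its date.
```

Each scheduled run persists the returned watermark so the next run starts where this one left off.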

  • Question 162:

    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You have an Azure Data Lake Storage account that contains a staging zone.

    You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

    Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that copies the data to a staging table in the data warehouse, and then uses a stored procedure to execute the R script.

    Does this meet the goal?

    A. Yes

    B. No

  • Question 163:

    You have the following Azure Data Factory pipelines:

    1. Ingest Data from System1

    2. Ingest Data from System2

    3. Populate Dimensions

    4. Populate Facts

    Ingest Data from System1 and Ingest Data from System2 have no dependencies. Populate Dimensions must execute after Ingest Data from System1 and Ingest Data from System2. Populate Facts must execute after the Populate Dimensions pipeline. All the pipelines must execute every eight hours.

    What should you do to schedule the pipelines for execution?

    A. Add a schedule trigger to all four pipelines.

    B. Add an event trigger to all four pipelines.

    C. Create a parent pipeline that contains the four pipelines and use an event trigger.

    D. Create a parent pipeline that contains the four pipelines and use a schedule trigger.
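The dependency graph in this question is what a parent pipeline expresses with Execute Pipeline activities: the two ingest pipelines have no dependencies, Populate Dimensions waits on both, and Populate Facts waits on Populate Dimensions. A minimal sketch of that ordering, with placeholder names rather than the Data Factory API:

```python
def run_in_order(pipelines, deps):
    """Return an execution order that respects the dependency map."""
    done, order = set(), []
    while len(done) < len(pipelines):
        # A pipeline is ready once all of its dependencies have completed.
        ready = [p for p in pipelines if p not in done and deps.get(p, set()) <= done]
        for p in ready:
            order.append(p)
            done.add(p)
    return order

deps = {
    "Populate Dimensions": {"Ingest Data from System1", "Ingest Data from System2"},
    "Populate Facts": {"Populate Dimensions"},
}
pipelines = ["Ingest Data from System1", "Ingest Data from System2",
             "Populate Dimensions", "Populate Facts"]
order = run_in_order(pipelines, deps)
```

A single schedule trigger on the parent then runs the whole graph every eight hours, instead of trying to align four independent triggers.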

  • Question 164:

    You have an Azure Data Factory pipeline that performs an incremental load of source data to an Azure Data Lake Storage Gen2 account.

    Data to be loaded is identified by a column named LastUpdatedDate in the source table.

    You plan to execute the pipeline every four hours.

    You need to ensure that the pipeline execution meets the following requirements:

    1. Automatically retries the execution when the pipeline run fails due to concurrency or throttling limits.

    2. Supports backfilling existing data in the table.

    Which type of trigger should you use?

    A. tumbling window

    B. on-demand

    C. event

    D. schedule
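The backfill requirement above hinges on how tumbling windows work: time is partitioned into fixed, contiguous, non-overlapping intervals from a start date, so any past window can be identified and re-run deterministically. The sketch below computes four-hour windows to illustrate the concept; it is not the Data Factory trigger API.

```python
from datetime import datetime, timedelta

def tumbling_windows(start, end, size=timedelta(hours=4)):
    """Partition [start, end) into fixed-size, back-to-back windows."""
    windows = []
    cursor = start
    while cursor + size <= end:
        windows.append((cursor, cursor + size))
        cursor += size
    return windows

# One day of four-hour windows: 00-04, 04-08, ..., 20-24.
wins = tumbling_windows(datetime(2024, 1, 1), datetime(2024, 1, 2))
```

Because each window is addressable by its boundaries, a historical range can be replayed window by window, which is what backfilling means here.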

  • Question 165:

    You have an Azure Data Factory that contains 10 pipelines.

    You need to label each pipeline with its main purpose of either ingest, transform, or load. The labels must be available for grouping and filtering when using the monitoring experience in Data Factory.

    What should you add to each pipeline?

    A. an annotation

    B. a resource tag

    C. a run group ID

    D. a user property

    E. a correlation ID
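The labeling described here amounts to attaching free-form tags to each pipeline that its runs inherit, so the monitoring view can group and filter on them. A toy illustration of that grouping, with made-up pipeline names (the dictionaries stand in for pipeline definitions, not the Data Factory API):

```python
# Hypothetical pipelines, each labeled with one of the purposes in the question.
pipelines = [
    {"name": "CopyFromLake",  "annotations": ["ingest"]},
    {"name": "CleanseSales",  "annotations": ["transform"]},
    {"name": "LoadWarehouse", "annotations": ["load"]},
    {"name": "CopyFromCrm",   "annotations": ["ingest"]},
]

def filter_by_annotation(items, label):
    """Return the names of pipelines carrying the given label."""
    return [p["name"] for p in items if label in p["annotations"]]

ingest_pipelines = filter_by_annotation(pipelines, "ingest")
```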

  • Question 166:

    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

    You have an Azure Data Lake Storage account that contains a staging zone.

    You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

    Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes mapping data flow, and then inserts the data into the data warehouse.

    Does this meet the goal?

    A. Yes

    B. No

  • Question 167:

    You need to trigger an Azure Data Factory pipeline when a file arrives in an Azure Data Lake Storage Gen2 container.

    Which resource provider should you enable?

    A. Microsoft.EventHub

    B. Microsoft.EventGrid

    C. Microsoft.Sql

    D. Microsoft.Automation
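A file-arrival trigger is event-driven rather than clock-driven: a blob-created event is published and a subscribed handler starts the pipeline, which in Azure is routed through Event Grid. The registry below is purely illustrative of that publish/subscribe shape, not the Event Grid or Data Factory API; the event type string matches the real storage event name, but the handler and payload are made up.

```python
handlers = {}

def on_event(event_type, handler):
    """Subscribe a handler to an event type."""
    handlers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    """Deliver an event to every subscribed handler; collect their results."""
    return [handler(payload) for handler in handlers.get(event_type, [])]

# A "pipeline" subscribed to blob-created events in the staging container.
on_event("Microsoft.Storage.BlobCreated",
         lambda e: f"pipeline run for {e['url']}")
runs = publish("Microsoft.Storage.BlobCreated",
               {"url": "https://lake/staging/file1.csv"})
```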

  • Question 168:

    You have an Azure SQL Database managed instance named SQLMI1. A Microsoft SQL Server Agent job runs on SQLMI1.

    You need to ensure that an automatic email notification is sent once the job completes.

    What should you include in the solution?

    A. From SQL Server Configuration Manager, enable SQL Server Agent

    B. From SQL Server Management Studio (SSMS), run sp_set_sqlagent_properties

    C. From SQL Server Management Studio (SSMS), create a Database Mail profile

    D. From the Azure portal, create an Azure Monitor action group that has an Email/SMS/Push/Voice action

  • Question 169:

    You plan to move two 100-GB databases to Azure.

    You need to dynamically scale resource consumption based on workload. The solution must minimize downtime during scaling operations.

    What should you use?

    A. two Azure SQL Databases in an elastic pool

    B. two databases hosted in SQL Server on an Azure virtual machine

    C. two databases in an Azure SQL Managed Instance

    D. two single Azure SQL databases

  • Question 170:

    You have an on-premises app named App1 that stores data in an on-premises Microsoft SQL Server 2016 database named DB1.

    You plan to deploy additional instances of App1 to separate Azure regions. Each region will have a separate instance of App1 and DB1. The separate instances of DB1 will sync by using Azure SQL Data Sync.

    You need to recommend a database service for the deployment. The solution must minimize administrative effort.

    What should you include in the recommendation?

    A. Azure SQL Managed Instance

    B. Azure SQL Database single database

    C. Azure Database for PostgreSQL

    D. SQL Server on Azure virtual machines

Tips on How to Prepare for the Exams

Nowadays, certification exams are becoming more and more important, and more and more enterprises require them when you apply for a job. But how do you prepare for the exam effectively? How do you prepare in a short time with less effort? How do you get an ideal result, and where do you find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provides not only Microsoft exam questions, answers, and explanations but also complete assistance with your exam preparation and certification application. If you are confused about your DP-300 exam preparation or your Microsoft certification application, do not hesitate to visit Vcedump.com to find your solutions.