A company is using Amazon Redshift as its data warehouse solution. The Redshift cluster handles the following types of workloads:
* Real-time inserts through Amazon Kinesis Data Firehose
* Bulk inserts through COPY commands from Amazon S3
* Analytics through SQL queries
Recently, the cluster has started to experience performance issues.
Which combination of actions should a database specialist take to improve the cluster's performance? (Choose three.)
A. Modify the Kinesis Data Firehose delivery stream to stream the data to Amazon S3 with a high buffer size and to load the data into Amazon Redshift by using the COPY command.
B. Stream real-time data into Redshift temporary tables before loading the data into permanent tables.
C. For bulk inserts, split input files on Amazon S3 into multiple files to match the number of slices on Amazon Redshift. Then use the COPY command to load data into Amazon Redshift.
D. For bulk inserts, use the parallel parameter in the COPY command to enable multi-threading.
E. Optimize analytics SQL queries to use sort keys.
F. Avoid using temporary tables in analytics SQL queries.
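For reference, a minimal sketch of the split-and-COPY approach described in option C, submitted through the boto3 Redshift Data API. The cluster identifier, database, IAM role, and S3 prefix below are placeholders, not values from the question.

```python
import boto3

# Hypothetical identifiers -- replace with real values for your environment.
CLUSTER_ID = "analytics-cluster"
DATABASE = "dev"
DB_USER = "awsuser"
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

# A COPY from an S3 prefix loads all files under the prefix in parallel,
# so splitting the input into a multiple of the cluster's slice count
# keeps every slice busy during the load.
copy_sql = """
COPY sales
FROM 's3://example-bucket/input/sales_part_'
IAM_ROLE '{role}'
FORMAT AS CSV
GZIP;
""".format(role=IAM_ROLE)

client = boto3.client("redshift-data")
response = client.execute_statement(
    ClusterIdentifier=CLUSTER_ID,
    Database=DATABASE,
    DbUser=DB_USER,
    Sql=copy_sql,
)
print("Statement id:", response["Id"])
```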
A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.
Which solution should the database specialist recommend?
A. Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.
B. Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.
C. Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.
D. Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
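As a sketch of how option C could be enabled programmatically, the boto3 RDS client's start_activity_stream call turns on Database Activity Streams; asynchronous mode avoids blocking database sessions, which keeps the performance impact low. The cluster ARN and KMS key below are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Placeholder ARNs -- not taken from the question.
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:aurora-pg-cluster"
KMS_KEY_ID = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

# Asynchronous mode writes activity records to a Kinesis data stream
# without making sessions wait on the audit write path.
response = rds.start_activity_stream(
    ResourceArn=CLUSTER_ARN,
    Mode="async",
    KmsKeyId=KMS_KEY_ID,
    ApplyImmediately=True,
)
print("Kinesis stream for the activity stream:", response["KinesisStreamName"])
```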
A company is using Amazon Redshift. A database specialist needs to allow an existing Redshift cluster to access data from other Redshift clusters, Amazon RDS for PostgreSQL databases, and AWS Glue Data Catalog tables.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
A. Take a snapshot of the required tables from the other Redshift clusters. Restore the snapshot into the existing Redshift cluster.
B. Create external tables in the existing Redshift database to connect to the AWS Glue Data Catalog tables.
C. Unload the RDS tables and the tables from the other Redshift clusters into Amazon S3. Run COPY commands to load the tables into the existing Redshift cluster.
D. Use federated queries to access data in Amazon RDS.
E. Use data sharing to access data from the other Redshift clusters.
F. Use AWS Glue jobs to transfer the AWS Glue Data Catalog tables into Amazon S3. Create external tables in the existing Redshift database to access this data.
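A rough sketch of the SQL behind the external-schema, federated-query, and data-sharing approaches mentioned in options B, D, and E, submitted through the boto3 Redshift Data API. The schema names, endpoints, ARNs, and namespace are illustrative assumptions only.

```python
import boto3

client = boto3.client("redshift-data")

# Placeholder identifiers -- adjust for your environment.
CLUSTER_ID = "existing-cluster"
DATABASE = "dev"
DB_USER = "awsuser"
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftSpectrumFederationRole"

statements = [
    # External schema backed by the AWS Glue Data Catalog (Redshift Spectrum).
    f"""CREATE EXTERNAL SCHEMA glue_schema
        FROM DATA CATALOG
        DATABASE 'glue_db'
        IAM_ROLE '{IAM_ROLE}';""",
    # Federated query schema pointing at an Amazon RDS for PostgreSQL database.
    f"""CREATE EXTERNAL SCHEMA rds_schema
        FROM POSTGRES
        DATABASE 'appdb'
        URI 'rds-instance.abcdefgh1234.us-east-1.rds.amazonaws.com'
        IAM_ROLE '{IAM_ROLE}'
        SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-creds';""",
    # Consume a datashare published by another Redshift cluster.
    """CREATE DATABASE shared_db
       FROM DATASHARE sales_share
       OF NAMESPACE 'b1c2d3e4-5678-90ab-cdef-111122223333';""",
]

for sql in statements:
    resp = client.execute_statement(
        ClusterIdentifier=CLUSTER_ID,
        Database=DATABASE,
        DbUser=DB_USER,
        Sql=sql,
    )
    print(resp["Id"])
```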
A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user.
What is the MOST operationally efficient way to restore the default permissions of the master user?
A. Modify the DB instance and set a new master user password.
B. Use AWS Secrets Manager to modify the master user password and restart the DB instance.
C. Create a new master user for the DB instance.
D. Review the IAM user that owns the DB instance, and add missing permissions.
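For context, resetting the master user password (option A) is the documented way to have RDS re-grant the master user's default privileges. A minimal boto3 sketch follows; the instance identifier and password are hypothetical, and in practice the password should come from a secrets store rather than being hard-coded.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical instance name and password -- replace with real values.
rds.modify_db_instance(
    DBInstanceIdentifier="critical-mysql-prod",
    MasterUserPassword="N3w-Str0ng-Passw0rd!",
    ApplyImmediately=True,
)
```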
A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group.
What should the Database Specialist do to automatically collect the database logs for the Administrator?
A. Enable DocumentDB to export the logs to Amazon CloudWatch Logs
B. Enable DocumentDB to export the logs to AWS CloudTrail
C. Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
D. Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
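As an illustration of exporting Amazon DocumentDB audit logs to Amazon CloudWatch Logs (option A), a boto3 sketch with a placeholder cluster identifier; the audit_logs cluster parameter still has to be enabled, as the question states.

```python
import boto3

docdb = boto3.client("docdb")

# Placeholder cluster identifier.
docdb.modify_db_cluster(
    DBClusterIdentifier="marketing-docdb-cluster",
    CloudwatchLogsExportConfiguration={
        # Export audit events (including DDL) to CloudWatch Logs.
        "EnableLogTypes": ["audit"],
    },
    ApplyImmediately=True,
)
```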
A business uses Amazon DynamoDB global tables to power an online game played by gamers around the world. As the game grew in popularity, the number of requests to DynamoDB rose substantially. Recently, gamers have complained that the game state is inconsistent between Regions. A database professional notices that the ReplicationLatency metric for many replica tables is abnormally high.
Which strategy will resolve the issue?
A. Configure all replica tables to use DynamoDB auto scaling.
B. Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.
C. Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.
D. Configure the table-level write throughput limit service quota to a higher value.
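A minimal sketch of attaching DynamoDB auto scaling to a replica table's write capacity through the Application Auto Scaling API; the table name, Region, and capacity bounds are assumptions.

```python
import boto3

# Target a specific replica Region of the global table.
autoscaling = boto3.client("application-autoscaling", region_name="eu-west-1")

RESOURCE_ID = "table/GameState"  # hypothetical global table name

# Register the replica table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=RESOURCE_ID,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=50,
    MaxCapacity=5000,
)

# Track 70% write capacity utilization so replicated writes can keep up.
autoscaling.put_scaling_policy(
    PolicyName="GameStateWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId=RESOURCE_ID,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```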
A database specialist is building an AWS CloudFormation stack. The database specialist wants to prevent the stack's Amazon RDS ProductionDatabase resource from being accidentally deleted.
Which solution will meet this requirement?
A. Create a stack policy to prevent updates. Include Effect : ProductionDatabase and Resource : Deny in the policy.
B. Create an AWS CloudFormation stack in XML format. Set xAttribute as false.
C. Create an RDS DB instance without the DeletionPolicy attribute. Disable termination protection.
D. Create a stack policy to prevent updates. Include Effect, Deny, and Resource :ProductionDatabase in the policy.
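For reference, a sketch of a stack policy that denies destructive updates to a logical resource named ProductionDatabase while allowing all other updates, applied with boto3. The stack name is a placeholder.

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Deny Update:Delete and Update:Replace on the ProductionDatabase resource,
# allow every other update in the stack.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["Update:Delete", "Update:Replace"],
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
    ]
}

cfn.set_stack_policy(
    StackName="production-db-stack",  # hypothetical stack name
    StackPolicyBody=json.dumps(stack_policy),
)
```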
A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that a new minor version is available, the database specialist issues an AWS CLI command to enable automatic minor version upgrades. The command runs successfully, but a check of the Aurora DB cluster shows that the Aurora version has not been updated.
What might account for this? (Choose two.)
A. The new minor version has not yet been designated as preferred and requires a manual upgrade.
B. Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.
C. Applying minor version upgrades requires sufficient free space.
D. The AWS CLI command did not include an apply-immediately parameter.
E. Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.
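For context, automatic minor version upgrades are an instance-level setting, so the flag is applied to each DB instance in the Aurora cluster; whether an upgrade then occurs also depends on AWS designating the minor version as preferred and on the maintenance window. A boto3 sketch with placeholder instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# AutoMinorVersionUpgrade is set per DB instance in the Aurora cluster.
for instance_id in ["aurora-instance-1", "aurora-instance-2"]:  # placeholders
    rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        AutoMinorVersionUpgrade=True,
        ApplyImmediately=True,
    )
```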
A business is migrating its on-premises database workloads to the AWS Cloud. A database professional has chosen AWS DMS to migrate an Oracle database that contains a very large table to Amazon RDS. The database professional observes that AWS DMS is taking considerable time to migrate the data.
Which actions would speed up the data migration? (Choose three.)
A. Create multiple AWS DMS tasks to migrate the large table.
B. Configure the AWS DMS replication instance with Multi-AZ.
C. Increase the capacity of the AWS DMS replication server.
D. Establish an AWS Direct Connect connection between the on-premises data center and AWS.
E. Enable an Amazon RDS Multi-AZ configuration.
F. Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.
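As a sketch of one of the levers listed above, scaling up the replication instance (option C), with a placeholder ARN and an assumed target instance class:

```python
import boto3

dms = boto3.client("dms")

# Placeholder replication instance ARN and target class.
dms.modify_replication_instance(
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
    ReplicationInstanceClass="dms.c5.4xlarge",
    ApplyImmediately=True,
)
```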
A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete.
The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.
Which combination of changes will meet these requirements? (Choose two.)
A. Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.
B. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.
C. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.
D. Use parallel load with different data boundaries for larger tables.
E. Run the DMS tasks on a larger instance class. Increase local storage on the instance.
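A sketch of the task-level settings behind options B and D: a higher MaxFullLoadSubTasks value plus a parallel-load rule with explicit range boundaries for a large table. All ARNs, table names, and boundary values are illustrative assumptions.

```python
import json
import boto3

dms = boto3.client("dms")

# Load more tables concurrently within the task (the default is 8).
task_settings = {
    "FullLoadSettings": {
        "MaxFullLoadSubTasks": 32,
    }
}

# Split a large table into primary-key ranges so the segments load in parallel.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders",
            "object-locator": {"schema-name": "APP", "table-name": "ORDERS"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "orders-parallel-load",
            "object-locator": {"schema-name": "APP", "table-name": "ORDERS"},
            "parallel-load": {
                "type": "ranges",
                "columns": ["ORDER_ID"],
                "boundaries": [["1000000"], ["2000000"], ["3000000"]],
            },
        },
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="orders-full-load-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```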