AWS DBS-C01 Free Practice Questions — Page 2

Database - Specialty • 5 questions • Answers & explanations included

Question 6

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup. The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company. Which solution will meet these requirements with minimal effort?

A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.
D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

Correct Answer: C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

Why C is correct: RDS event subscriptions let you subscribe to specific RDS events (database creation, deletion, backup, shutdown, and so on) via Amazon SNS. External tracking systems can subscribe to these SNS topics to receive real-time notifications. This is the native, minimal-effort solution designed specifically for tracking RDS operational events.

Why the other options are wrong: A: CloudWatch Events (now EventBridge) can monitor RDS, but it requires Lambda functions for processing, adding unnecessary complexity. B: CloudTrail tracks API calls, which is more granular than needed and requires Lambda processing. D: RDS doesn't write operational event logs to Kinesis Data Firehose; this isn't a valid configuration.
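As a rough sketch, such a subscription could be created with boto3's `create_event_subscription` call. The subscription name, SNS topic ARN, and account ID below are hypothetical placeholders; the event categories shown are real RDS categories covering the operations in the question (the "availability" category includes shutdown events).

```python
# Sketch: subscribe an SNS topic to RDS instance lifecycle events.
# Identifiers below are placeholders, not values from the question.
params = {
    "SubscriptionName": "db-ops-tracking",  # hypothetical name
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:db-ops",  # placeholder ARN
    "SourceType": "db-instance",
    # Real RDS event categories for creation, deletion, backup, and shutdown:
    "EventCategories": ["creation", "deletion", "backup", "availability"],
    "Enabled": True,
}

# With AWS credentials configured, the call would be:
#   import boto3
#   boto3.client("rds").create_event_subscription(**params)
```

Tracking systems then subscribe to the SNS topic directly, with no Lambda glue code in between.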

Question 7

A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely. Which approach should the Database Specialist take to securely manage the database credentials?

A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.

Correct Answer: C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.

Why C is correct: AWS Secrets Manager is designed specifically for managing database credentials with automatic rotation. It securely stores credentials, supports granular IAM permissions, automatically rotates passwords on a schedule (60 days, as required), and provides APIs for applications to retrieve credentials at runtime. This meets all security requirements with minimal operational overhead.

Why the other options are wrong: A: Storing credentials in a plaintext file in S3, even with restricted permissions, violates security best practices, and manual rotation every 60 days is error-prone. B: IAM database authentication uses short-lived tokens rather than passwords, and mapping a separate IAM user and database user to each of thousands of ecommerce users is impractical; the requirement concerns the application's credentials, not end-user passwords. D: Embedding credentials in AMIs is highly insecure and operationally complex, requiring a new AMI build and redeployment every 60 days.
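At runtime the application fetches the secret and parses its JSON payload. The sketch below shows only the parsing step against a sample payload, since the real `get_secret_value` call needs AWS credentials; the secret name is a placeholder, and the field names follow the standard layout Secrets Manager uses for RDS secrets (`username`, `password`, `host`, `port`).

```python
import json

def parse_db_secret(secret_string: str) -> dict:
    """Parse the JSON payload of an RDS-style secret into connection fields."""
    secret = json.loads(secret_string)
    return {
        "host": secret["host"],
        "user": secret["username"],
        "password": secret["password"],
        "port": secret.get("port", 5432),  # default PostgreSQL port if absent
    }

# In production the payload would come from Secrets Manager, e.g.:
#   import boto3
#   resp = boto3.client("secretsmanager").get_secret_value(
#       SecretId="prod/ecom/db")  # placeholder secret name
#   creds = parse_db_secret(resp["SecretString"])

# Sample payload mirroring the standard RDS secret structure:
sample = '{"username": "app", "password": "s3cret", "host": "db.example.com", "port": 5432}'
creds = parse_db_secret(sample)
```

Because the application reads the secret at startup (or on connection failure), rotation every 60 days requires no application redeployment.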

Question 8

A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379. Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
E. Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.
F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

Correct Answers: A. Enable in-transit and at-rest encryption on the ElastiCache cluster.; C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.; F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

Why A, C, and F are correct: A (encryption in transit and at rest): protects data from unauthorized access both during transmission and when stored. C (security group configuration): allows the cluster nodes to communicate with one another (required for cluster mode) and restricts client access on port 6379 to trusted sources only. F (auth-token): Redis AUTH provides password-based authentication, adding an authentication layer beyond network controls.

Why the other options are wrong: B: CloudWatch metrics are for monitoring, not security. D: An IAM policy granting all ElastiCache API actions is overly permissive and violates the principle of least privilege. E: This is backwards: clients initiate connections to the cluster, so they don't need inbound rules for port 6379, and allowing inbound SSH (port 22) from the cluster serves no purpose.
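A hedged sketch of the relevant parameters for boto3's `create_replication_group` call; the parameter names (`TransitEncryptionEnabled`, `AtRestEncryptionEnabled`, `AuthToken`) are real ElastiCache API parameters, while the replication group identifier is a placeholder. Note that Redis AUTH requires in-transit encryption to be enabled.

```python
import secrets

# AUTH tokens must be 16-128 printable characters; generate one securely.
auth_token = secrets.token_urlsafe(32)

params = {
    "ReplicationGroupId": "shared-cache",  # placeholder identifier
    "ReplicationGroupDescription": "Shared data service cache",
    "Engine": "redis",
    "Port": 6379,
    "TransitEncryptionEnabled": True,  # encrypt data in transit (required for AUTH)
    "AtRestEncryptionEnabled": True,   # encrypt stored data
    "AuthToken": auth_token,           # Redis AUTH password (option F)
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("elasticache").create_replication_group(**params)
```

The security-group rules from option C are configured separately on the cluster's and clients' security groups; they are network controls, not API parameters on this call.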

Question 9

A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime. What is the FASTEST way to accomplish this?

A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

Correct Answer: D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

Why D is correct: Creating an Aurora Replica from an RDS for PostgreSQL instance uses AWS's native replication with minimal downtime. The replica continuously replicates data from the source RDS instance, and when ready, you promote it to a standalone Aurora cluster. This is the fastest method with minimal downtime (just the promotion time).

Why the other options are wrong: A: AWS DMS works but is slower than native replication and requires more setup and configuration. B: pg_dump/pg_restore requires a complete database export and import, with significant downtime for a 1 TB database. C: Snapshots require a full database restoration, resulting in more downtime than continuous replication with promotion.
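A hedged sketch of how this might look via the RDS API: an Aurora cluster is created with `ReplicationSourceIdentifier` pointing at the RDS instance, then promoted at cutover with `promote_read_replica_db_cluster`. All identifiers and the ARN are placeholders.

```python
# Sketch: create an Aurora PostgreSQL cluster that replicates from the
# existing RDS for PostgreSQL instance. Identifiers/ARNs are placeholders.
create_params = {
    "DBClusterIdentifier": "aurora-pg-target",  # placeholder
    "Engine": "aurora-postgresql",
    "ReplicationSourceIdentifier":
        "arn:aws:rds:us-east-1:123456789012:db:source-postgres",  # placeholder ARN
}

# With AWS credentials configured:
#   import boto3
#   rds = boto3.client("rds")
#   rds.create_db_cluster(**create_params)
#   # ...monitor replica lag; once it reaches zero, cut over:
#   rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-target")
```

Downtime is limited to the promotion step plus repointing the application's connection endpoint.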

Question 10

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region. Where should the AWS DMS replication instance be placed for the MOST optimal performance?

A. In the same Region and VPC of the source DB instance.
B. In the same Region and VPC as the target DB instance.
C. In the same VPC and Availability Zone as the target DB instance.
D. In the same VPC and Availability Zone as the source DB instance.

Correct Answer: C. In the same VPC and Availability Zone as the target DB instance.

Why C is correct: For cross-Region AWS DMS migrations, the most performance-critical phase is the load into the target database, not extraction from the source. The 2 TB of data must cross Regions regardless of where the replication instance is placed. Putting the replication instance in the same VPC and Availability Zone as the target RDS for PostgreSQL instance minimizes network latency and maximizes throughput during the load phase, avoiding cross-AZ traffic on the final hop and letting DMS load data over the fastest possible local network path.

Why the other options are wrong: A: Co-locating with the source is a common best practice for same-Region migrations, where extraction is the bottleneck; in a cross-Region migration of a large dataset, optimizing the load phase matters more. B: Same Region and VPC as the target is good, but it does not guarantee placement in the same Availability Zone. D: Placing the replication instance far from the target worsens load performance.
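A hedged sketch of creating the replication instance in the target Region, pinned to the target's Availability Zone via a replication subnet group in the target VPC. The parameter names match the boto3 DMS API; the identifiers, instance class, and AZ are illustrative assumptions.

```python
# Sketch: DMS replication instance co-located with the target DB.
# Identifiers below are placeholders; the AZ assumes the target
# instance runs in us-west-2a.
params = {
    "ReplicationInstanceIdentifier": "oracle-to-pg-repl",  # placeholder
    "ReplicationInstanceClass": "dms.c5.2xlarge",  # example sizing for a 2 TB load
    "ReplicationSubnetGroupIdentifier": "target-vpc-subnets",  # subnet group in target VPC
    "AvailabilityZone": "us-west-2a",  # same AZ as the target DB instance (assumed)
    "MultiAZ": False,
}

# With AWS credentials configured (note the client is created in the
# TARGET Region, us-west-2):
#   import boto3
#   dms = boto3.client("dms", region_name="us-west-2")
#   dms.create_replication_instance(**params)
```

The source endpoint in us-east-1 is reached over the cross-Region network either way; only the load-side hop is optimized by this placement.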
