AWS DBS-C01 Free Practice Questions — Page 1

Database - Specialty • 5 questions • Answers & explanations included

Question 1

A company has deployed an e-commerce web application in a new AWS account. The deployment includes an Amazon RDS for MySQL Multi-AZ DB instance with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist can log in to MySQL and run queries from the bastion host using these details. When users try to use the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a `could not connect to server: Connection times out` error message to Amazon CloudWatch Logs. What is the cause of this error?

A. The user name and password the application is using are incorrect.
B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D. The user name and password are correct, but the user is not authorized to use the DB instance.

Correct Answer: C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.

Why C is correct: The security group assigned to the DB instance needs inbound rules allowing connections from the application servers. Security groups are stateful firewalls that control inbound and outbound traffic. Since the application servers initiate connections to the database on port 3306, the DB instance's security group must allow inbound traffic on that port from the application servers' security group or IP addresses. The "Connection times out" error specifically indicates a network connectivity issue, not an authentication problem.

Why the other options are wrong: A & D: Authentication problems (incorrect credentials or missing authorization) would produce different error messages, such as "Access denied" or "Authentication failed," not "Connection times out." B: This is backwards: the application servers need outbound rules (which security groups allow by default), not inbound rules, because the application initiates the connection to the database, not vice versa.
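The fix described above can be sketched with boto3. This is a minimal, hedged example; the security group IDs are hypothetical placeholders, and actually applying the rule requires AWS credentials, so the API call is isolated in its own function.

```python
# Hypothetical security group IDs for illustration only.
DB_SG_ID = "sg-0db11111111111111"    # security group on the DB instance
APP_SG_ID = "sg-0app2222222222222"   # security group on the application servers

def inbound_mysql_rule(source_sg_id):
    """Build the inbound rule the DB security group needs:
    TCP 3306 from the application servers' security group."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }

def allow_app_to_db(db_sg_id=DB_SG_ID, app_sg_id=APP_SG_ID):
    """Apply the rule. Requires AWS credentials and the boto3 package."""
    import boto3  # imported here so the sketch loads without AWS installed
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=db_sg_id,
        IpPermissions=[inbound_mysql_rule(app_sg_id)],
    )
```

Referencing the application servers' security group (rather than IP addresses) keeps the rule valid as application instances are replaced.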

Question 2

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future. Which settings will meet this requirement? (Choose three.)

A. Set DeletionProtection to True.
B. Set MultiAZ to True.
C. Set TerminationProtection to True.
D. Set DeleteAutomatedBackups to False.
E. Set DeletionPolicy to Delete.
F. Set DeletionPolicy to Retain.

Correct Answers: A. Set DeletionProtection to True.; D. Set DeleteAutomatedBackups to False.; F. Set DeletionPolicy to Retain.

Why A, D, and F are correct: A (DeletionProtection): When set to True, this prevents the RDS instance from being deleted through the console, CLI, or API, protecting against accidental deletion. D (DeleteAutomatedBackups set to False): Ensures that automated backups are retained even after the DB instance is deleted, allowing data recovery. F (DeletionPolicy: Retain): This CloudFormation-specific attribute preserves the RDS instance even when the CloudFormation stack is deleted, preventing data loss from stack deletion.

Why the other options are wrong: B (MultiAZ): Provides high availability and automatic failover, not protection against deletion. C (TerminationProtection): Termination protection applies to EC2 instances and CloudFormation stacks; it is not an RDS instance property. E (DeletionPolicy: Delete): Deletes the resource when the stack is deleted, which is the opposite of what is needed.
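The three settings combine in a single CloudFormation resource. The sketch below builds one as a Python dict, so the structure is easy to inspect; the logical ID, engine, and sizing values are illustrative only, not a complete production template.

```python
import json

def protected_db_resource():
    """A minimal CloudFormation resource sketch combining the three
    protective settings from answers A, D, and F."""
    return {
        "MyDatabase": {  # hypothetical logical ID
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",  # F: keep the instance if the stack is deleted
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "20",
                "DeletionProtection": True,       # A: block console/CLI/API deletion
                "DeleteAutomatedBackups": False,  # D: keep backups after deletion
            },
        }
    }

print(json.dumps(protected_db_resource(), indent=2))
```

Note that DeletionPolicy sits at the resource level (a sibling of Properties), while the other two are RDS properties; putting Retain inside Properties is a common template mistake.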

Question 3

A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete. What is the MOST likely cause of the 5-minute connection outage?

A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint.
B. The client-side application is caching the DNS data and its TTL is set too high.
C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections.
D. There were no active Aurora Replicas in the Aurora DB cluster.

Correct Answer: B. The client-side application is caching the DNS data and its TTL is set too high.

Why B is correct: The Aurora failover itself completed in 15 seconds (as stated), so the 5-minute outage points to a DNS caching issue. When applications cache DNS entries with high TTL (time to live) values, they keep trying to connect to the old primary instance's IP address even after failover. Aurora endpoints rely on DNS, and AWS recommends a client-side TTL of 30 seconds or less. If the application caches DNS for 5 minutes, it will not see the new primary until its cache expires.

Why the other options are wrong: A: Aurora separates compute from its distributed storage layer, so crash recovery does not depend on replaying the redo log the way traditional databases do and is nearly instantaneous. C: Aurora Replicas are already warm and serving reads; there is no warm-up period after promotion. D: The RDS events showed a failover that completed in 15 seconds, which requires an active replica to promote; with no replicas, Aurora would instead have to re-create the writer instance, a different and typically longer recovery path.
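The client-side remedy is to re-resolve the cluster endpoint on every connection attempt instead of reusing a cached IP. A minimal sketch with the standard library (the endpoint name is a placeholder):

```python
import socket

# Hypothetical Aurora cluster endpoint, for illustration only.
ENDPOINT = "mycluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com"

def resolve_current_ips(hostname, port=3306):
    """Return the IPv4 addresses the endpoint resolves to right now.
    Calling this before each connection attempt picks up the
    post-failover address, provided no resolver in the path
    (for example, a JVM with a high networkaddress.cache.ttl)
    caches beyond Aurora's short DNS TTL."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})
```

In practice the same issue often hides in language runtimes or OS-level caches rather than application code, so checking (and lowering) those TTLs is part of the fix.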

Question 4

A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine. Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses. What should the Database Specialist do to correct the Data Analysts' inability to connect?

A. Restart the DB cluster to apply the SSL change
B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
C. Add explicit mappings between the Data Analysts' IP addresses and the instance in the security group assigned to the DB cluster.
D. Modify the Data Analysts' local client firewall to allow network traffic to AWS.

Correct Answer: B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.

Why B is correct: When SSL/TLS is required for Aurora connections, clients must use the appropriate certificate to establish encrypted connections. The Database Specialist can connect because their client is configured with the certificate; the Data Analysts cannot because they lack it. The Analysts need to download the Amazon RDS root certificate and configure their connection strings to use SSL with that certificate.

Why the other options are wrong: A: Requiring SSL does not call for a cluster restart, and the Database Specialist is already connecting successfully. C: The security group already allows traffic from corporate IP addresses, and per-user IP-to-instance mappings are not how security groups work. D: The Database Specialist could connect using the Analysts' own accounts, so the network path and credentials are fine; the missing piece is the certificate configuration on the Analysts' clients, not their firewalls.
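What the Analysts' connection configuration would look like can be sketched as follows. This assumes the pymysql driver and a downloaded RDS certificate bundle; the host, user, and bundle filename are placeholders, and any MySQL client that accepts a CA file works the same way.

```python
CA_BUNDLE = "global-bundle.pem"  # placeholder name for the downloaded RDS certificate bundle

def ssl_connect_params(host, user, password, ca_path=CA_BUNDLE):
    """Connection arguments that enable TLS and verify the server
    against the downloaded RDS root certificate."""
    return {
        "host": host,
        "user": user,
        "password": password,
        "port": 3306,
        "ssl": {"ca": ca_path},  # the piece the Analysts were missing
    }

def connect(host, user, password):
    """Open the TLS-verified connection. Requires the pymysql package
    and network access to the cluster."""
    import pymysql  # imported here so the sketch loads without the driver installed
    return pymysql.connect(**ssl_connect_params(host, user, password))
```

Without the `ssl` argument, a client connecting to a cluster that requires TLS is rejected even though its credentials are valid, which matches the symptom in the question.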

Question 5

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed. What can the Database Specialist do to reduce the overall cost?

A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table
C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Correct Answer: C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

Why C is correct: DynamoDB Time to Live (TTL) is the most cost-effective, lowest-effort solution. You add a timestamp attribute indicating when each item should expire, enable TTL on the table pointing at that attribute, and DynamoDB automatically deletes expired items at no additional cost, typically within 48 hours of expiration. This is exactly the use case TTL was designed for.

Why the other options are wrong: A: AWS Glue is for ETL workloads and would be unnecessarily complex and costly for a simple deletion task. B: DynamoDB Streams only captures item-level changes; it does not delete data, so additional processing logic would still be needed. D: Exporting to Amazon S3 with AWS Data Pipeline and then truncating the table adds significant cost and operational overhead.
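The TTL setup described above can be sketched in Python. The attribute name and table name are hypothetical; the TTL attribute must hold the expiration time as a Number in epoch seconds, and the boto3 call requires AWS credentials, so it is isolated in its own function.

```python
import time

TTL_ATTRIBUTE = "expires_at"            # hypothetical attribute name
RETENTION_SECONDS = 2 * 24 * 60 * 60    # keep items for 2 days

def expiry_epoch(now=None):
    """Epoch-seconds timestamp that DynamoDB TTL compares against."""
    now = int(time.time()) if now is None else int(now)
    return now + RETENTION_SECONDS

def item_with_ttl(pk, payload, now=None):
    """An item carrying the TTL attribute as a Number (epoch seconds),
    in the low-level DynamoDB attribute-value format."""
    return {
        "pk": {"S": pk},
        "data": {"S": payload},
        TTL_ATTRIBUTE: {"N": str(expiry_epoch(now))},
    }

def enable_ttl(table_name):
    """Enable TTL on the table. Requires AWS credentials and boto3."""
    import boto3  # imported here so the sketch loads without AWS installed
    boto3.client("dynamodb").update_time_to_live(
        TableName=table_name,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": TTL_ATTRIBUTE},
    )
```

Note that TTL only deletes items written with the attribute going forward; the months-old items already in the tables would need a one-time backfill of the attribute (or a one-time cleanup) before TTL can remove them.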
