AWS DOP-C02 Free Practice Questions — Page 1

DevOps Engineer Professional • 5 questions • Answers & explanations included

Question 1

A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API. After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code. Which additional set of actions should the DevOps engineer take to gather the required metrics?

A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
B. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.

Correct Answer: A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.

Why A is correct: CloudWatch Logs metric filters are the appropriate solution for extracting custom metrics from log data. When the Lambda function writes structured log lines containing the API operation name, response code, and version number, a metric filter can parse these logs and increment CloudWatch metrics. The dimensions (response code and application version) allow granular tracking of each combination, which is exactly what is needed to monitor each application version and its response codes per operation.

Why other options are wrong: B: CloudWatch Logs Insights is a query tool for ad hoc analysis, not a mechanism for automatically populating metrics; it cannot create real-time metrics with dimensions. C: ALB access logs do not capture custom application data such as API operation names or the user-agent details that the Lambda function extracts, and response metadata from Lambda is not recorded in ALB access logs. D: X-Ray is designed for distributed tracing and request analysis, not for creating custom CloudWatch metrics with specific dimensions. While X-Ray can track subsegments, it does not natively publish custom metrics to CloudWatch in the way described.
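The mechanism in option A can be sketched as follows. This is a minimal illustration, not the exam's reference implementation: the field names (operation, responseCode, appVersion) and the filter pattern in the comment are assumptions chosen for the example.

```python
import json

def build_metric_log_line(operation: str, response_code: int, app_version: str) -> str:
    """Build a structured (JSON) log line that a CloudWatch Logs metric
    filter can parse. Field names here are illustrative assumptions."""
    return json.dumps({
        "operation": operation,
        "responseCode": response_code,
        "appVersion": app_version,
    })

# A metric filter on the log group could then use a JSON pattern such as
# { $.operation = * } to match these lines, publish a count metric per
# operation, and take its dimension values from $.responseCode and
# $.appVersion.

line = build_metric_log_line("GetOrder", 200, "2.4.1")
parsed = json.loads(line)
print(parsed["operation"], parsed["responseCode"], parsed["appVersion"])  # → GetOrder 200 2.4.1
```

Because the Lambda function already extracts all three values, emitting one such line per request is the only code change needed; the metric filter does the rest without touching the function again.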

Question 2

A company provides an application to customers. The application has an Amazon API Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table. The data load process results in long cold-start times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured. Customers report that the application intermittently takes a long time to respond to requests. The application receives thousands of requests throughout the day. In the middle of the day, the application experiences 10 times more requests than at any other time of the day. Near the end of the day, the application's request volume decreases to 10% of its normal total. A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day. Which solution will meet these requirements?

A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.

Correct Answer: C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.

Why C is correct: Provisioned concurrency keeps Lambda execution environments initialized and ready to respond immediately, eliminating cold starts. AWS Application Auto Scaling can dynamically adjust the provisioned concurrency based on demand, handling the 10x traffic spike during midday while scaling down to the minimum capacity (1) during low-traffic periods. This addresses both the cold-start latency and the variable traffic pattern efficiently.

Why other options are wrong: A: A concurrency value of 1 is insufficient for thousands of requests, especially during the 10x spike, and deleting DAX is counterproductive because it improves DynamoDB read performance. B: Reserved concurrency of 0 would prevent the function from executing at all; reserved concurrency caps maximum concurrent executions and does not eliminate cold starts. D: Reserved concurrency does not address cold starts; it only limits maximum concurrent executions. Auto Scaling on API Gateway is not the solution, since the bottleneck is the Lambda cold-start time, not API Gateway capacity.
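The configuration in option C could be expressed roughly as the request parameters passed to the Application Auto Scaling API (via boto3, for example). This is a hedged sketch: the function name and alias in FUNCTION_ALIAS, the policy name, and the 0.7 target value are assumptions for illustration.

```python
# Hypothetical resource ID in the "function:name:alias" form that Application
# Auto Scaling uses for Lambda provisioned concurrency.
FUNCTION_ALIAS = "function:order-api:live"

# Parameters one would pass to register_scalable_target on the
# "application-autoscaling" client.
register_target_params = {
    "ServiceNamespace": "lambda",
    "ResourceId": FUNCTION_ALIAS,
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "MinCapacity": 1,    # keep one warm environment during the quiet evening period
    "MaxCapacity": 100,  # headroom for the 10x midday spike
}

# Parameters for put_scaling_policy: target tracking on the predefined
# provisioned-concurrency utilization metric.
scaling_policy_params = {
    "PolicyName": "pc-utilization-target",  # hypothetical name
    "ServiceNamespace": "lambda",
    "ResourceId": FUNCTION_ALIAS,
    "ScalableDimension": "lambda:function:ProvisionedConcurrency",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 0.7,  # scale out when utilization passes ~70% (assumed target)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
}

# With boto3 these dicts would be passed as:
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**register_target_params)
#   client.put_scaling_policy(**scaling_policy_params)
print(register_target_params["MinCapacity"], register_target_params["MaxCapacity"])  # → 1 100
```

The min of 1 matches the exam answer's requirement that cold starts be avoided even at 10% traffic, while the max of 100 gives the target-tracking policy room to absorb the midday spike.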

Question 3

A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache Webserver. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group. How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?

A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
C. Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.

Correct Answer: B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.

Why B is correct: The DEPLOYMENT_GROUP_NAME environment variable is automatically provided by CodeDeploy to all deployment lifecycle scripts. This allows the script to determine dynamically which deployment group it is running in without any additional configuration. The BeforeInstall lifecycle hook runs before the application is installed, making it the appropriate time to configure log levels. Because the environment variable is built in, this approach requires no management overhead and works across all deployment groups without modification.

Why other options are wrong: A: Using EC2 tags and API calls adds unnecessary complexity and requires managing IAM permissions, and AfterInstall runs after installation, which is too late to configure settings needed during deployment. C: CodeDeploy does not support custom environment variables in the way described, and ValidateService runs after deployment to verify that the service is working, which is too late for configuration. D: DEPLOYMENT_GROUP_ID is a GUID that does not directly indicate the environment type, and the Install lifecycle hook performs the actual installation, not configuration that should happen before installation.
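A hook script like the one option B describes might look as follows. This is a sketch under stated assumptions: the deployment group names and the group-to-level mapping are hypothetical; only the DEPLOYMENT_GROUP_NAME variable itself is set by CodeDeploy.

```python
import os

# Hypothetical mapping from CodeDeploy deployment group name to the Apache
# log level that the script would write into the Apache configuration.
LOG_LEVELS = {
    "developer-group": "debug",
    "staging-group": "info",
    "production-group": "warn",
}

def resolve_log_level(group_name: str) -> str:
    """Pick the log level for a deployment group, falling back to a
    conservative level for unrecognized groups."""
    return LOG_LEVELS.get(group_name, "warn")

# CodeDeploy exports DEPLOYMENT_GROUP_NAME to lifecycle hook scripts; the
# same script therefore works unchanged in every deployment group.
group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")
print(resolve_log_level(group))
```

In the revision, the script would be referenced from appspec.yml under the BeforeInstall hook, so the log level is in place before the application files are laid down.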

Question 4

A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency. This requirement includes EBS volumes that do not require backups. The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are occasionally not tagging the EBS volumes. A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified. Which solution will meet these requirements?

A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.

Correct Answer: B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.

Why B is correct: AWS Config's managed rule for tag compliance (for example, required-tags scoped to EC2::Volume resources) specifically checks for tag compliance on EBS volumes. When a volume lacks the Backup_Frequency tag, Config flags it as noncompliant and triggers the automatic remediation action. The Systems Manager Automation runbook can then apply the default "weekly" tag. This is the most efficient solution because it uses a prebuilt managed rule designed for exactly this purpose.

Why other options are wrong: A: Creating a custom rule for all Amazon EC2 resources is overly broad and less efficient than using the managed rule that specifically targets EBS volumes (EC2::Volume). C and D: EventBridge rules on CloudTrail CreateVolume events would catch only new volumes, but the problem states that existing volumes are also missing tags, and ModifyVolume events do not necessarily indicate missing tags. This reactive approach is less comprehensive than Config's continuous compliance monitoring.
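The remediation step described above could be sketched as follows: a small function deciding whether the default tag is needed, and a comment showing how a runbook step might apply it. The function name, the example volume ID, and the tag-list shape are assumptions for illustration (the shape mirrors the tag lists used by the EC2 API).

```python
def remediation_tags(existing_tags: list[dict]) -> list[dict]:
    """Return the tags a remediation runbook should apply: add
    Backup_Frequency=weekly only when the key is absent, so that any
    explicitly chosen value (none, daily, weekly) is left untouched."""
    keys = {t["Key"] for t in existing_tags}
    if "Backup_Frequency" in keys:
        return []
    return [{"Key": "Backup_Frequency", "Value": "weekly"}]

# A runbook step using boto3 could then do something like:
#   ec2 = boto3.client("ec2")
#   tags = remediation_tags(current_tags)
#   if tags:
#       ec2.create_tags(Resources=["vol-0123456789abcdef0"], Tags=tags)

print(remediation_tags([{"Key": "Name", "Value": "data"}]))
```

Defaulting to weekly only when the tag is missing matches the requirement that backups happen "at least weekly unless a different value is specified".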

Question 5

A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster is configured with a single DB instance. The application performs read and write operations on the database by using the cluster's instance endpoint. The company has scheduled an update to be applied to the cluster during an upcoming maintenance window. The cluster must remain available with the least possible interruption during the maintenance window. What should a DevOps engineer do to meet these requirements?

A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations and the Aurora cluster's reader endpoint for read operations.
B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations and the Aurora cluster's reader endpoint for read operations.
D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.

Correct Answer: A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations and the Aurora cluster's reader endpoint for read operations.

Why A is correct: Adding a reader instance creates a read replica that can serve read traffic during maintenance. Using the cluster endpoint for writes ensures that writes always go to the primary instance, while the reader endpoint automatically distributes read traffic across the available readers. During maintenance, if the primary instance is updated, Aurora can promote the reader to primary with minimal downtime, and the endpoints adjust automatically. This provides the least interruption.

Why other options are wrong: B: Custom ANY endpoints do not provide the same automatic failover behavior during maintenance; they are intended for specific routing use cases, not for high availability during maintenance windows. C: Aurora does not have a "Multi-AZ option" in the same way as RDS for other database engines; Aurora storage is inherently replicated across multiple Availability Zones, and adding a reader instance is the correct approach. D: Same issue as B; a custom ANY endpoint does not provide the optimal high-availability configuration for maintenance windows.
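The endpoint split in option A can be sketched as a simple routing rule in the application. The hostnames below are hypothetical; their shapes mirror Aurora's cluster (writer) and reader endpoint naming, where the reader endpoint carries a "-ro-" segment.

```python
# Hypothetical Aurora endpoints; in a real cluster these come from the
# DescribeDBClusters output or the console.
CLUSTER_ENDPOINT = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"   # always the writer
READER_ENDPOINT = "mydb.cluster-ro-abc123.us-east-1.rds.amazonaws.com"  # load-balances readers

def endpoint_for(operation: str) -> str:
    """Route writes to the cluster endpoint and everything else to the
    reader endpoint, so neither side depends on a specific instance."""
    return CLUSTER_ENDPOINT if operation == "write" else READER_ENDPOINT

print(endpoint_for("write"))
print(endpoint_for("read"))
```

Because neither endpoint names a specific DB instance, the application does not need to change when Aurora fails over or patches an instance during the maintenance window; that is what the instance endpoint in the original setup could not provide.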
