AWS DVA-C02 Free Practice Questions — Page 1

Developer Associate • 5 questions • Answers & explanations included

Question 1

A company is implementing an application on Amazon EC2 instances. The application needs to process incoming transactions. When the application detects a transaction that is not valid, the application must send a chat message to the company's support team. To send the message, the application needs to retrieve the access token to authenticate by using the chat API. A developer needs to implement a solution to store the access token. The access token must be encrypted at rest and in transit. The access token must also be accessible from other AWS accounts. Which solution will meet these requirements with the LEAST management overhead?

A. Use an AWS Systems Manager Parameter Store SecureString parameter that uses an AWS Key Management Service (AWS KMS) AWS managed key to store the access token. Add a resource-based policy to the parameter to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Parameter Store. Retrieve the token from Parameter Store with the decrypt flag enabled. Use the decrypted access token to send the message to the chat.
B. Encrypt the access token by using an AWS Key Management Service (AWS KMS) customer managed key. Store the access token in an Amazon DynamoDB table. Update the IAM role of the EC2 instances with permissions to access DynamoDB and AWS KMS. Retrieve the token from DynamoDB. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.
C. Use AWS Secrets Manager with an AWS Key Management Service (AWS KMS) customer managed key to store the access token. Add a resource-based policy to the secret to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Secrets Manager. Retrieve the token from Secrets Manager. Use the decrypted access token to send the message to the chat.
D. Encrypt the access token by using an AWS Key Management Service (AWS KMS) AWS managed key. Store the access token in an Amazon S3 bucket. Add a bucket policy to the S3 bucket to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Amazon S3 and AWS KMS. Retrieve the token from the S3 bucket. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.

Correct Answer: C. Use AWS Secrets Manager with an AWS Key Management Service (AWS KMS) customer managed key to store the access token. Add a resource-based policy to the secret to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Secrets Manager. Retrieve the token from Secrets Manager. Use the decrypted access token to send the message to the chat.

AWS Secrets Manager is purpose-built for storing sensitive credentials such as API access tokens. It encrypts secrets at rest with AWS KMS and in transit with TLS, supports resource-based policies for cross-account access, and returns the decrypted value automatically on retrieval. Parameter Store SecureString can also store encrypted values, but option A pairs it with an AWS managed KMS key, which cannot be shared across accounts, so cross-account decryption would fail; sharing encrypted parameters requires a customer managed key. The DynamoDB and S3 options (B and D) require manual client-side encryption and decryption with KMS, resulting in higher management overhead.
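As a sketch of the cross-account setup in option C, a resource-based policy on the secret might look like the following (the account ID `111122223333` is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*"
    }
  ]
}
```

For cross-account retrieval to work, the key policy of the customer managed KMS key that encrypts the secret must also grant `kms:Decrypt` to the other account's principals.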

Question 2

A company is running Amazon EC2 instances in multiple AWS accounts. A developer needs to implement an application that collects all the lifecycle events of the EC2 instances. The application needs to store the lifecycle events in a single Amazon Simple Queue Service (Amazon SQS) queue in the company's main AWS account for further processing. Which solution will meet these requirements?

A. Configure Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account. Add an EventBridge rule to the event bus of the main account that matches all EC2 instance lifecycle events. Add the SQS queue as a target of the rule.
B. Use the resource policies of the SQS queue in the main account to give each account permissions to write to that SQS queue. Add to the Amazon EventBridge event bus of each account an EventBridge rule that matches all EC2 instance lifecycle events. Add the SQS queue in the main account as a target of the rule.
C. Write an AWS Lambda function that scans through all EC2 instances in the company accounts to detect EC2 instance lifecycle changes. Configure the Lambda function to write a notification message to the SQS queue in the main account if the function detects an EC2 instance lifecycle change. Add an Amazon EventBridge scheduled rule that invokes the Lambda function every minute.
D. Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule.

Correct Answer: D. Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule.

Option D is correct because it follows the standard cross-account EventBridge pattern. First, you configure the main account's event bus with a resource-based policy that accepts events from the other accounts. Then, each source account creates a rule that forwards EC2 lifecycle events to the main account's event bus. Finally, a rule in the main account routes these events to the SQS queue. Option B is incorrect because the only cross-account target an EventBridge rule supports is another event bus, so a rule cannot directly target an SQS queue in a different account. Option A is incorrect because EC2 cannot be configured to deliver events directly to another account's event bus. Option C polls on a schedule, which is inefficient and not event-driven.
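For illustration, the EventBridge event pattern that matches EC2 instance lifecycle (state-change) events, used both in the source-account forwarding rules and in the main-account rule that targets the SQS queue, looks like this:

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"]
}
```

The main account's event bus also needs a resource-based policy allowing `events:PutEvents` from each source account before the forwarded events are accepted.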

Question 3

An application is using Amazon Cognito user pools and identity pools for secure access. A developer wants to integrate the user-specific file upload and download features in the application with Amazon S3. The developer must ensure that the files are saved and retrieved in a secure manner and that users can access only their own files. The file sizes range from 3 KB to 300 MB. Which option will meet these requirements with the HIGHEST level of security?

A. Use S3 Event Notifications to validate the file upload and download requests and update the user interface (UI).
B. Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of files in the user interface (UI) by comparing the current user ID with the user ID associated with the file in the table.
C. Use Amazon API Gateway and an AWS Lambda function to upload and download files. Validate each request in the Lambda function before performing the requested operation.
D. Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3.

Correct Answer: D. Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3.

Amazon Cognito identity pools let you use IAM policy variables to restrict users to their own S3 prefixes. By using the ${cognito-identity.amazonaws.com:sub} variable in an IAM policy, each authenticated user can access only objects under their unique identity prefix. This provides the highest level of security because access control is enforced by IAM itself rather than by application code. Options A and B rely on application-level filtering, which can be bypassed. Option C adds operational overhead and, because API Gateway limits request payloads to 10 MB, cannot handle files up to 300 MB.
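A minimal sketch of such a policy, attached to the identity pool's authenticated role (the bucket name `example-user-files` is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-user-files/${cognito-identity.amazonaws.com:sub}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-user-files",
      "Condition": {
        "StringLike": { "s3:prefix": ["${cognito-identity.amazonaws.com:sub}/*"] }
      }
    }
  ]
}
```

Because the policy variable resolves to the caller's unique Cognito identity ID at evaluation time, IAM denies any request outside the user's own prefix with no application code involved.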

Question 4

A company is building a scalable data management solution by using AWS services to improve the speed and agility of development. The solution will ingest large volumes of data from various sources and will process this data through multiple business rules and transformations. The solution requires business rules to run in sequence and to handle reprocessing of data if errors occur when the business rules run. The company needs the solution to be scalable and to require the least possible maintenance. Which AWS service should the company use to manage and automate the orchestration of the data flows to meet these requirements?

A. AWS Batch
B. AWS Step Functions
C. AWS Glue
D. AWS Lambda

Correct Answer: B. AWS Step Functions

AWS Step Functions is the ideal service for orchestrating workflows that require sequential execution and error handling. Step Functions provides built-in support for running tasks in sequence, retrying and catching errors, and passing state between steps. It integrates natively with other AWS services and scales automatically. AWS Batch runs batch computing jobs but does not orchestrate multi-step business logic. AWS Glue is an ETL service; its workflow feature orchestrates only Glue jobs and crawlers, not arbitrary business rules. Lambda alone has no built-in mechanism for sequencing steps or reprocessing on error without custom code.
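As a rough sketch, an Amazon States Language definition can chain business rules and retry a failed step automatically (the Lambda function ARNs are placeholders):

```json
{
  "StartAt": "RuleOne",
  "States": {
    "RuleOne": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:rule-one",
      "Retry": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 5,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "Next": "RuleTwo"
    },
    "RuleTwo": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:rule-two",
      "End": true
    }
  }
}
```

The `Retry` block re-runs a failed rule with exponential backoff, covering the reprocessing-on-error requirement without custom code.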

Question 5

A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created. However, the function fails when it attempts to write to the DynamoDB table. What is the MOST likely cause of this issue?

A. The Lambda function's concurrency limit has been exceeded.
B. The DynamoDB table requires a global secondary index (GSI) to support writes.
C. The Lambda function does not have IAM permissions to write to DynamoDB.
D. The DynamoDB table is not running in the same Availability Zone as the Lambda function.

Correct Answer: C. The Lambda function does not have IAM permissions to write to DynamoDB.

The most likely cause is that the Lambda function's execution role lacks the IAM permissions needed to write to the DynamoDB table. Lambda functions require explicit IAM permissions to interact with other AWS services. Since the function successfully reads from S3 (triggered by the S3 event) but fails when writing to DynamoDB, the execution role is missing DynamoDB write permissions such as dynamodb:PutItem or dynamodb:UpdateItem. Option A would cause throttling errors, not write failures. Option B is incorrect because GSIs support queries, not writes. Option D is incorrect because DynamoDB is a regional service and does not require AZ alignment with Lambda.
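A minimal identity-based policy on the function's execution role that would fix the failure might look like this (the table name, Region, and account ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/example-table"
    }
  ]
}
```

The role still needs its existing S3 read permissions and CloudWatch Logs permissions (for example, via the AWSLambdaBasicExecutionRole managed policy) for logging.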

Ready for the Full DVA-C02 Experience?

Access all 112 pages of practice questions, track your progress, and simulate the real exam with timed mode.
