AWS SAP-C02 Free Practice Questions — Page 2

Solutions Architect Professional • 5 questions • Answers & explanations included

Question 6

A retail company needs to provide a series of data files to another company, which is its business partner. These files are saved in an Amazon S3 bucket under Account A, which belongs to the retail company. The business partner company wants one of its IAM users, User_DataProcessor, to access the files from its own AWS account (Account B). Which combination of steps must the companies take so that User_DataProcessor can access the S3 bucket successfully? (Choose two.)

A. Turn on the cross-origin resource sharing (CORS) feature for the S3 bucket in Account A.
B. In Account A, set the S3 bucket policy to the following: { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*" }
C. In Account A, set the S3 bucket policy to the following: { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::AccountABucketName/*" ] }
D. In Account B, set the permissions of User_DataProcessor to the following: { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*" }
E. In Account B, set the permissions of User_DataProcessor to the following: { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::AccountABucketName/*" ] }

Correct Answers: C. In Account A, set the S3 bucket policy to the following: { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor" }, "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::AccountABucketName/*" ] }; D. In Account B, set the permissions of User_DataProcessor to the following: { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": "arn:aws:s3:::AccountABucketName/*" }

For cross-account S3 access, two policies must grant access: the resource-based bucket policy in Account A and the identity-based IAM policy in Account B. Option C correctly sets the bucket policy in Account A with a Principal that names the exact IAM user ARN from Account B, along with the necessary actions (GetObject and ListBucket). Option D correctly grants User_DataProcessor in Account B permission to call those actions on the bucket's resources. The two policies work together: the bucket policy authorizes the external principal, and the IAM policy authorizes the user to make the API calls. Option A is wrong because CORS governs cross-origin requests from web browsers, not cross-account access. Option B is wrong because a bucket policy without a Principal element is invalid. Option E is wrong because identity-based IAM policies do not use a Principal element; that element belongs only in resource-based policies.
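As a concrete illustration, the two policies from options C and D can be sketched in Python. The account ID and bucket name are placeholders; note also that `s3:ListBucket` operates on the bucket ARN itself, so a fully working policy lists both the bucket ARN and the object ARN pattern:

```python
import json

# Placeholder identifiers, not real values.
ACCOUNT_B_ID = "222222222222"
BUCKET = "AccountABucketName"

# Resource-based policy: attached to the bucket in Account A.
# Note the Principal element naming the external IAM user.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_B_ID}:user/User_DataProcessor"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

# Identity-based policy: attached to User_DataProcessor in Account B.
# IAM policies carry no Principal; the attached identity is implied.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

print(json.dumps(bucket_policy, indent=2))
```

Access is granted only when both evaluate to Allow: removing either one breaks the cross-account path.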

Question 7

A company is running a traditional web application on Amazon EC2 instances. The company needs to refactor the application as microservices that run on containers. Separate versions of the application exist in two distinct environments: production and testing. Load for the application is variable, but the minimum load and the maximum load are known. A solutions architect needs to design the updated application with a serverless architecture that minimizes operational complexity. Which solution will meet these requirements MOST cost-effectively?

A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing.
B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.
C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters.
D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.

Correct Answer: B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.

ECS with Fargate is the most cost-effective serverless container solution that meets all the requirements. Fargate provides serverless container orchestration with auto scaling, so the variable load is handled efficiently and the company pays only for the resources it uses. Two separate ECS clusters give the production and testing environments clear separation and independent scaling. Option A is incorrect because Lambda imposes significant limits on container workloads (a 15-minute timeout, a 10 GB memory limit, and images that must implement the Lambda runtime interface). Option C is incorrect because EKS adds unnecessary complexity and cost compared to ECS: each EKS cluster carries a control-plane charge and higher operational overhead, which this use case does not need. Option D is incorrect because Elastic Beanstalk, while simpler to set up, is not truly serverless and does not minimize operational complexity for containerized microservices as effectively as ECS on Fargate.
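To make the chosen architecture concrete, here is the rough shape of a Fargate task definition and service as they might be passed to boto3's `ecs.register_task_definition` and `ecs.create_service`. Every name, size, and image URI below is an illustrative placeholder:

```python
# Sketch of one microservice's Fargate task definition. Fargate requires
# the awsvpc network mode and CPU/memory from its supported combinations.
task_definition = {
    "family": "web-service-prod",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # mandatory for Fargate tasks
    "cpu": "512",              # 0.5 vCPU
    "memory": "1024",          # 1 GB
    "containerDefinitions": [{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
}

# A matching service sets launchType="FARGATE" and attaches the ALB's
# target group; scaling between the known minimum and maximum load is
# then configured through Application Auto Scaling on the service.
service_config = {
    "serviceName": "web-service-prod",
    "launchType": "FARGATE",
    "desiredCount": 2,
}
```

The testing environment would repeat the same two structures in its own cluster, giving the independent scaling the explanation describes.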

Question 8

A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application's data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record. The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy. What should a solutions architect recommend to meet these requirements?

A. Reconfigure the application's Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.
B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application's Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
C. Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application's Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3.
D. Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.

Correct Answer: B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application's Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.

This solution provides automated failover within the 15-minute RTO requirement. Route 53 health checks actively monitor the application's availability, and a failover routing policy shifts DNS to the backup Region when the primary becomes unhealthy. The Lambda function in the backup Region automates the two critical recovery tasks: promoting the read replica to become the primary database and scaling the Auto Scaling group up from zero to handle traffic. SNS reliably delivers the health-check notification to the Lambda function. Option A is incorrect because latency-based routing does not provide failover; it routes on latency, not health, so traffic would continue to reach the failed primary Region. Option C is incorrect because keeping the backup Region's Auto Scaling group at full size is effectively an active-active setup, which violates the budget constraint, and snapshot-based replication through Amazon S3 is not continuous, so it cannot meet the recovery objective. Option D is incorrect because Global Accelerator with equal-weighted targets would keep routing traffic to the unhealthy Region, and a 5XX-count alarm is a less reliable failover trigger than Route 53 health checks.

Question 9

A company is hosting a critical application on a single Amazon EC2 instance. The application uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The application uses an Amazon RDS for MariaDB DB instance for a relational database. For the application to function, each piece of the infrastructure must be healthy and must be in an active state. A solutions architect needs to improve the application's architecture so that the infrastructure can automatically recover from failure with the least possible downtime. Which combination of steps will meet these requirements? (Choose three.)

A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
B. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode.
C. Modify the DB instance to create a read replica in the same Availability Zone. Promote the read replica to be the primary DB instance in failure scenarios.
D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.
E. Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto Scaling group that has a minimum capacity of two instances.
F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.

Correct Answers: A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.; D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.; F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.

Option A provides high availability for the application tier by distributing traffic across multiple EC2 instances in an Auto Scaling group with a minimum capacity of two, ensuring automatic recovery if an instance fails. Option D enables an RDS Multi-AZ deployment, which replicates data synchronously to a standby instance in another Availability Zone and fails over automatically (typically within one to two minutes). Option F creates an ElastiCache replication group with Multi-AZ enabled, providing automatic failover for the in-memory cache layer. Option B is incorrect because unlimited mode governs CPU credits on burstable (T-family) instances, not high availability. Option C is incorrect because a read replica in the same Availability Zone does not protect against an AZ failure, and promoting an RDS read replica is a manual step, which fails the requirement to recover automatically. Option E is incorrect because ElastiCache does not use Auto Scaling groups; it achieves high availability through replication groups with automatic failover or Multi-AZ.
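The database and cache changes in options D and F come down to a handful of API parameters. A sketch of what would be passed to boto3's `rds.modify_db_instance` and `elasticache.create_replication_group`; the identifiers and node type are placeholders:

```python
# D: convert the MariaDB instance to a Multi-AZ deployment (synchronous
# standby in a second AZ with automatic failover).
rds_params = {
    "DBInstanceIdentifier": "app-mariadb",
    "MultiAZ": True,
    "ApplyImmediately": True,
}

# F: replace the single-node Redis cluster with a replication group that
# has automatic failover and Multi-AZ enabled.
redis_params = {
    "ReplicationGroupId": "app-redis",
    "ReplicationGroupDescription": "HA cache for the application",
    "Engine": "redis",
    "CacheNodeType": "cache.t3.medium",
    "NumCacheClusters": 2,            # primary plus one replica
    "AutomaticFailoverEnabled": True,
    "MultiAZEnabled": True,
}
```

With these settings, both data stores fail over without manual intervention, matching the EC2 tier's self-healing behavior from option A.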

Question 10

A retail company is operating its ecommerce application on AWS. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an Amazon RDS DB instance as the database backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones. After an update of the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads the webpage immediately after the error occurs. While the company is working on the problem, the solutions architect needs to provide a custom error page instead of the standard ALB error page to visitors. Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)

A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a public accessible web server.
E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

Correct Answers: A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.; E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.

Option A creates an S3 bucket to host the custom error page, a simple, low-overhead home for static content. Option E configures a CloudFront custom error response, which intercepts 502 errors from the ALB origin and returns the S3-hosted custom page instead. This combination has the least operational overhead because CloudFront intercepts the errors automatically, with no additional logic or monitoring. Options B and D are incorrect because they rely on CloudWatch alarms and Lambda functions to rewrite ALB forwarding rules, which adds significant operational overhead and still does not display a custom error page at the moment the error occurs. Option C is incorrect because Route 53 health checks and DNS failover redirect users to a different endpoint entirely rather than displaying a custom error page, and DNS-level failover reacts too slowly for an intermittent, per-request error.
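A sketch of the CloudFront side of the fix. The structure below matches the shape of `CustomErrorResponses` inside a CloudFront `DistributionConfig`; the error-page path is a placeholder, and the distribution would also need the S3 bucket added as a second origin with a cache behavior covering that path:

```python
# Custom error response: when the ALB origin returns a 502, serve the
# static page hosted in S3 instead of the default ALB error page.
custom_error_responses = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 502,
        "ResponsePagePath": "/errors/502.html",  # resolved via the S3 origin
        "ResponseCode": "502",                   # keep the original status code
        "ErrorCachingMinTTL": 10,                # seconds; error is intermittent
    }],
}
```

Keeping `ErrorCachingMinTTL` low matters here: since a reload immediately succeeds, caching the error page for long would hide a working backend from visitors.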

Ready for the Full SAP-C02 Experience?

Access all 106 pages of practice questions, track your progress, and simulate the real exam with timed mode.
