AWS SAP-C02 Free Practice Questions — Page 1

Solutions Architect Professional • 5 questions • Answers & explanations included

Question 1

A company needs to architect a hybrid DNS solution. This solution will use an Amazon Route 53 private hosted zone for the domain cloud.example.com for the resources stored within VPCs. The company has the following DNS resolution requirements: On-premises systems should be able to resolve and connect to cloud.example.com. All VPCs should be able to resolve cloud.example.com. There is already an AWS Direct Connect connection between the on-premises corporate network and AWS Transit Gateway. Which architecture should the company use to meet these requirements with the HIGHEST performance?

A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.
B. Associate the private hosted zone to all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the conditional forwarder.
C. Associate the private hosted zone to the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the outbound resolver.
D. Associate the private hosted zone to the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.

Correct Answer: A. Associate the private hosted zone to all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for cloud.example.com that point to the inbound resolver.

Option A is correct because it implements hybrid DNS resolution for all of the requirements. A Route 53 inbound resolver endpoint lets on-premises DNS servers forward queries for cloud.example.com to Route 53, so on-premises systems can resolve private hosted zone records. Associating the private hosted zone with all VPCs ensures that resources in every VPC can resolve the domain natively. The inbound resolver provides the highest performance because it is a managed AWS service optimized for DNS resolution over Direct Connect, and attaching all VPCs to the transit gateway provides network connectivity between on premises and every VPC. Option B uses EC2, which adds operational overhead and is less performant than the managed Route 53 Resolver service. Option C incorrectly uses an outbound resolver, which handles VPC-to-on-premises resolution, not on-premises-to-VPC. Option D associates the private hosted zone only with the shared services VPC, leaving it unavailable to the other VPCs.
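The forwarding rule on the on-premises DNS server is a simple conditional: queries under cloud.example.com go to the inbound resolver endpoint, everything else to the default resolver. A minimal sketch of that decision in Python (the resolver IPs are hypothetical placeholders, not values from the question):

```python
# Illustrative model of conditional DNS forwarding on the on-premises server.
# The resolver endpoint IPs below are hypothetical placeholders.

INBOUND_RESOLVER_IPS = ["10.0.1.10", "10.0.2.10"]  # one per AZ, for redundancy
DEFAULT_RESOLVER_IPS = ["192.168.0.2"]             # on-premises default resolver

def upstream_for(query_name: str) -> list:
    """Return the resolver IPs a conditional forwarder would use for a query."""
    name = query_name.rstrip(".").lower()
    if name == "cloud.example.com" or name.endswith(".cloud.example.com"):
        return INBOUND_RESOLVER_IPS
    return DEFAULT_RESOLVER_IPS

print(upstream_for("db.cloud.example.com"))  # forwarded to the inbound resolver
print(upstream_for("intranet.example.com"))  # stays with the default resolver
```

In the real deployment the inbound resolver endpoint spans at least two subnets in different Availability Zones, which is why two forwarding targets are shown.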

Question 2

A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region. Which solution will meet these requirements?

A. Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables.
B. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.
D. Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.

Correct Answer: C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.

Option C correctly implements multi-Region failover for the API. Deploying a separate API Gateway API and Lambda functions in another Region creates a complete replica of the API infrastructure. A Route 53 failover routing policy automatically routes traffic to the backup Region when health checks detect that the primary Region is unhealthy, providing automated disaster recovery. DynamoDB global tables provide automatic bidirectional replication between Regions, keeping data consistent. Option A is incorrect because an API Gateway edge-optimized endpoint cannot directly target Lambda functions in multiple Regions. Option B uses multivalue answer routing, which distributes traffic across healthy records rather than implementing true primary/secondary failover. Option D is incorrect because Lambda functions cannot be made global; they are Region-specific resources.
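The failover routing decision in option C can be sketched as: Route 53 answers with the primary record while its health check passes, and with the secondary record once it fails. A minimal simulation (the endpoint hostnames and Regions are hypothetical placeholders):

```python
# Simplified model of a Route 53 failover routing decision.
# The API endpoint hostnames and Regions are hypothetical placeholders.

RECORDS = {
    "PRIMARY":   "api-primary.execute-api.us-east-1.amazonaws.com",
    "SECONDARY": "api-standby.execute-api.us-west-2.amazonaws.com",
}

def resolve_failover(primary_healthy: bool) -> str:
    """Return the record a failover routing policy would answer with."""
    return RECORDS["PRIMARY"] if primary_healthy else RECORDS["SECONDARY"]

print(resolve_failover(True))   # primary Region serves traffic
print(resolve_failover(False))  # health check fails, so secondary Region answers
```

This is the key contrast with multivalue answer routing (option B), which can return several healthy records at once and lets the client pick, rather than enforcing a strict primary/secondary ordering.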

Question 3

A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the Production OU. Administrators use deny list SCPs in the root of the organization to manage access to restricted services. The company recently acquired a new business unit and invited the new unit's existing AWS account to the organization. Once onboarded, the administrators of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company's policies. Which option will allow administrators to make changes and continue to enforce the current policies without introducing additional long-term maintenance?

A. Remove the organization's root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company's standard AWS Config rules and deploy them throughout the organization, including the new account.
B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete.
C. Convert the organization's root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization's root that allows AWS Config actions for principals only in the new account.
D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.

Correct Answer: D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization's root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.

Why D is correct: This solution provides temporary relief for the new account while maintaining existing policies for all other accounts. Creating a temporary Onboarding OU with an SCP that allows AWS Config actions gives the new business unit's administrators the access they need. Moving the root's deny list SCPs to the Production OU preserves the existing restrictions for all current accounts, since they all reside in the Production OU. Once the AWS Config adjustments are complete, moving the new account to the Production OU applies the standard restrictions, and the temporary Onboarding OU can then be deleted, so no long-term maintenance is introduced.

Why the others are wrong:

A: Removing the organization's root SCPs eliminates security controls for all accounts in the organization, creating a significant security risk. AWS Service Catalog products do not enforce policies the way SCPs do; they only make resources available for deployment. This introduces ongoing maintenance of Service Catalog products, does not prevent unauthorized AWS Config changes, and fundamentally shifts the security model from preventive controls (SCPs) to operational guidance (Service Catalog).

B: Creating a temporary Onboarding OU is the right idea, but this option leaves the deny list SCPs at the root. SCPs at the root apply to all OUs and accounts in the organization, including the Onboarding OU, and a deny inherited from a parent cannot be overridden by an allow in a child. The SCP in the Onboarding OU would therefore be ineffective.

C: Converting from deny list to allow list SCPs is a massive undertaking that requires explicitly allowing every service and action needed across the entire organization, which is complex and high-risk. Temporarily applying an SCP at the root "for principals only in the new account" also does not work, because SCPs are not evaluated per principal in a way that supports this pattern, and maintaining allow lists would introduce significant long-term maintenance.

Key SCP concepts applied: SCPs flow down the organization hierarchy (root → OU → account). An explicit deny at any level cannot be overridden by an allow at a lower level. Moving SCPs from the root to an OU excludes accounts in other OUs from those restrictions. Temporary OUs are a valid pattern for onboarding accounts with different policy requirements.
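The inheritance rule that makes option B ineffective can be modeled simply: an action is permitted only if no SCP on the path from root to account denies it. A minimal sketch (real SCPs are JSON policy documents; here each level is reduced to a set of denied actions for illustration):

```python
# Simplified model of SCP deny-list evaluation along the org hierarchy.
# Real SCPs are JSON policy documents; each level here just lists denied actions.

def action_allowed(action: str, scps_along_path: list) -> bool:
    """An explicit deny at ANY level (root, OU, account) wins; a child
    cannot re-allow what a parent denied."""
    return all(action not in denied for denied in scps_along_path)

root_deny = {"config:PutConfigRule"}   # deny list SCP at the organization root
onboarding_ou_deny = set()             # Onboarding OU SCP allows Config actions

# Option B: the root deny stays in place, so the Onboarding OU allow is moot.
print(action_allowed("config:PutConfigRule", [root_deny, onboarding_ou_deny]))

# Option D: the deny moved to the Production OU; the Onboarding path has no deny.
print(action_allowed("config:PutConfigRule", [onboarding_ou_deny]))
```

The first call returns False and the second True, which is exactly why the SCPs must be moved off the root (option D) rather than merely supplemented in a child OU (option B).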

Question 4

A company is running a two-tier web-based application in an on-premises data center. The application tier consists of a single server running a stateful application. The application connects to a PostgreSQL database running on a separate server. The application's user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing. Which solution will provide a consistent user experience that will allow the application and database tiers to scale?

A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
B. Enable Aurora Auto Scaling for Aurora writes. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
D. Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

Correct Answer: C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.

Why C is correct: This solution addresses all requirements for a stateful application with a growing user base.

Aurora Auto Scaling for Aurora Replicas: Aurora has a single writer instance and can have multiple read replicas. Auto Scaling for replicas scales read capacity as the user base grows; the application directs read traffic to the replicas while writes go to the primary writer instance.

Application Load Balancer (ALB): Operates at Layer 7 (HTTP/HTTPS), which is appropriate for web-based applications. ALBs provide advanced routing, health checks, and tight integration with Auto Scaling groups.

Round robin routing: Distributes requests evenly across healthy targets, providing good load distribution.

Sticky sessions: Critical for stateful applications. They ensure that requests from the same user are routed to the same EC2 instance, maintaining session state and providing a consistent user experience.

Why the others are wrong:

A: A Network Load Balancer (NLB) operates at Layer 4 (TCP/UDP) and is designed for ultra-high-performance, low-latency scenarios or non-HTTP protocols. For a web-based application an ALB is more appropriate, since it understands HTTP/HTTPS and provides richer routing. Also, "least outstanding requests" is an ALB routing algorithm, not an NLB algorithm (NLB uses a flow hash algorithm). The Aurora Replicas auto scaling is correct, but the NLB choice is suboptimal.

B: Aurora Auto Scaling for writers is not supported. Aurora has only one writer instance at a time (in a single-master configuration), so writer instances cannot be auto scaled; Aurora Auto Scaling applies only to read replicas. This makes the option technically invalid even though the ALB with round robin and sticky sessions would be correct for the application tier.

D: Same fundamental problem as option B: Aurora Auto Scaling for writers is not supported. Additionally, an NLB is less suitable for a web application than an ALB, and "least outstanding requests" is not an NLB routing algorithm.

Key concepts applied: Aurora uses a single-writer, multiple-read-replica model, and Aurora Auto Scaling applies only to read replicas. Stateful applications require sticky sessions to maintain session state. Use an ALB for HTTP/HTTPS web applications and an NLB for TCP/UDP or extreme performance needs. ALB sticky sessions use cookie-based persistence to direct a user's requests to the same target.
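The interaction between round robin and sticky sessions can be sketched as: a first request from a session is assigned the next target in rotation, and every later request with the same session cookie is pinned to that target. A minimal model (target names and the cookie mechanism are simplified placeholders, not the real ALB cookie implementation):

```python
# Minimal model of round robin load balancing with sticky sessions.
# Target names and the cookie mechanism are simplified placeholders.

from itertools import cycle

class StickyRoundRobin:
    def __init__(self, targets):
        self._rotation = cycle(targets)   # round robin over healthy targets
        self._sessions = {}               # session cookie -> pinned target

    def route(self, session_id: str) -> str:
        """New sessions get the next target in rotation; existing sessions
        stay pinned to the target that served their first request."""
        if session_id not in self._sessions:
            self._sessions[session_id] = next(self._rotation)
        return self._sessions[session_id]

lb = StickyRoundRobin(["i-app-1", "i-app-2", "i-app-3"])
print(lb.route("alice"))  # i-app-1 (first in rotation)
print(lb.route("bob"))    # i-app-2
print(lb.route("alice"))  # i-app-1 again: sticky
```

In a real ALB the pinning is carried in a load-balancer-generated cookie rather than a server-side table, but the routing outcome for a stateful application is the same.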

Question 5

A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers. The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions. Which solution will meet these requirements?

A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.
B. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
C. Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API.
D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.

Correct Answer: A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.

Why A is correct: Option A uses CloudFront with a CloudFront Function to manipulate headers based on the User-Agent. CloudFront Functions are lightweight JavaScript functions that run at CloudFront edge locations and are designed for simple, high-performance header manipulation. They execute during the viewer request or viewer response phase, making them ideal for removing problematic headers from responses before they reach older devices. The architecture is serverless (Lambda functions plus CloudFront), cost-effective, and low latency since the functions run at the edge. The ALB provides a standard way to invoke the Lambda functions, and CloudFront adds the caching and header manipulation capabilities the company needs.
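CloudFront Functions themselves are written in JavaScript, but the viewer-response logic option A describes is straightforward to model: if the User-Agent identifies a legacy device, drop the unsupported headers from the response. A sketch in Python (the header names and device patterns are hypothetical examples, not values from the question):

```python
# Model of the viewer-response logic a CloudFront function would apply:
# strip headers that legacy devices cannot handle, identified by User-Agent.
# The header names and device patterns below are hypothetical examples.

PROBLEM_HEADERS = {"x-custom-metadata", "strict-transport-security"}
LEGACY_AGENT_MARKERS = ("OldTV/", "InternetRadio/1.")

def filter_response_headers(user_agent: str, headers: dict) -> dict:
    """Drop unsupported headers when the request came from a legacy device;
    pass the response through unchanged for everything else."""
    if any(marker in user_agent for marker in LEGACY_AGENT_MARKERS):
        return {k: v for k, v in headers.items()
                if k.lower() not in PROBLEM_HEADERS}
    return dict(headers)

resp = {"Content-Type": "application/json", "X-Custom-Metadata": "v2"}
print(filter_response_headers("OldTV/2.3", resp))    # metadata header removed
print(filter_response_headers("Mozilla/5.0", resp))  # response unchanged
```

The same logic could run in Lambda@Edge (option D), but a CloudFront Function is cheaper and lower latency for pure header manipulation, which is why A edges out D.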
