AWS SCS-C02 Free Practice Questions — Page 1

Security - Specialty • 5 questions • Answers & explanations included

Question 1

A company has an AWS Lambda function that creates image thumbnails from larger images. The Lambda function needs read and write access to an Amazon S3 bucket in the same AWS account. Which solutions will provide the Lambda function this access? (Select TWO)

A. Create an IAM user that has only programmatic access. Create a new access key pair. Add environment variables to the Lambda function with the access key ID and secret access key. Modify the Lambda function to use the environment variables at runtime during communication with Amazon S3.
B. Generate an Amazon EC2 key pair. Store the private key in AWS Secrets Manager. Modify the Lambda function to retrieve the private key from Secrets Manager and to use the private key during communication with Amazon S3.
C. Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket.
D. Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function's IAM role as the principal.
E. Create a security group. Attach the security group to the Lambda function. Attach a bucket policy that allows access to the S3 bucket through the security group ID.

Correct Answers: C. Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket.; D. Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function's IAM role as the principal.

Why C is correct: Creating an IAM role for the Lambda function and attaching an IAM policy that allows S3 access is the AWS best practice. Lambda functions should use IAM roles (execution roles) rather than embedding credentials. The role is automatically assumed by Lambda at runtime, providing temporary credentials through AWS STS.

Why D is correct: This is another valid approach using resource-based policies. You can create an IAM role for Lambda and then use an S3 bucket policy that explicitly allows that role (as the principal) to access the bucket. This demonstrates cross-service authorization using resource-based policies combined with identity-based roles.

Why A is wrong: Storing access keys in environment variables is a security anti-pattern. It exposes long-term credentials that could be compromised. AWS explicitly recommends against embedding credentials in code or configuration.

Why B is wrong: EC2 key pairs are used for SSH access to EC2 instances, not for AWS API authentication. This answer confuses instance access with service-to-service authentication. Lambda doesn't use SSH keys to communicate with S3.

Why E is wrong: Security groups control network traffic (layers 3/4), not API access to S3. S3 access requires IAM permissions, not network security groups. Additionally, Lambda functions in a VPC can have security groups, but this doesn't grant S3 API permissions.
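As a sketch of what options C and D look like in practice, the snippet below builds both policy documents; the account ID, role name, and bucket name are placeholder values, not from the question:

```python
import json

# Placeholder values for illustration only.
ACCOUNT_ID = "111122223333"
ROLE_NAME = "thumbnail-lambda-role"
BUCKET = "example-thumbnail-bucket"

# Option C: identity-based policy attached to the Lambda execution role.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# Option D: resource-based bucket policy naming the function's role
# as the principal.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowThumbnailLambdaRole",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that in option C no principal is needed, because an identity-based policy applies to the role it is attached to; in option D the bucket policy must name the role's ARN explicitly.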

Question 2

A security engineer is configuring a new website that is named example.com. The security engineer wants to secure communications with the website by requiring users to connect to example.com through HTTPS. Which of the following is a valid option for storing SSL/TLS certificates?

A. Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS).
B. Default SSL certificate that is stored in Amazon CloudFront.
C. Custom SSL certificate that is stored in AWS Certificate Manager (ACM).
D. Default SSL certificate that is stored in Amazon S3.

Correct Answer: C. Custom SSL certificate that is stored in AWS Certificate Manager (ACM).

Why C is correct: AWS Certificate Manager (ACM) is specifically designed to store and manage SSL/TLS certificates for AWS services. ACM provides free public certificates, handles automatic renewal, and integrates seamlessly with services like CloudFront, Application Load Balancers, and API Gateway. This is the standard AWS solution for SSL/TLS certificate management.

Why A is wrong: AWS KMS is designed for encryption key management, not SSL/TLS certificate storage. KMS manages cryptographic keys used for data encryption, but it's not the appropriate service for storing SSL/TLS certificates used for HTTPS connections.

Why B is wrong: CloudFront doesn't provide a "default SSL certificate" for custom domains like example.com. CloudFront has a default certificate only for CloudFront distribution domains (*.cloudfront.net), not for custom domains. Custom domains require either ACM certificates or third-party certificates.

Why D is wrong: S3 is not designed to store SSL/TLS certificates for use with web services. While you could technically store certificate files in S3, there's no native integration with HTTPS endpoints, and this would not be a secure or functional solution for serving HTTPS traffic.
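For reference, requesting a free public certificate through ACM can be done with the AWS CLI. The command below is a sketch only: it requires valid AWS credentials and the ability to create the DNS validation records for the domain.

```shell
# Request a public certificate for example.com using DNS validation.
# CloudFront can only use ACM certificates from the us-east-1 Region.
aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names www.example.com \
    --validation-method DNS \
    --region us-east-1
```

ACM then renews the certificate automatically as long as the DNS validation record remains in place.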

Question 3

A security engineer needs to develop a process to investigate and respond to potential security events on a company's Amazon EC2 instances. All the EC2 instances are backed by Amazon Elastic Block Store (Amazon EBS). The company uses AWS Systems Manager to manage all the EC2 instances and has installed Systems Manager Agent (SSM Agent) on all the EC2 instances. The process that the security engineer is developing must comply with AWS security best practices and must meet the following requirements:

- A compromised EC2 instance's volatile memory and non-volatile memory must be preserved for forensic purposes.
- A compromised EC2 instance's metadata must be updated with corresponding incident ticket information.
- A compromised EC2 instance must remain online during the investigation but must be isolated to prevent the spread of malware.
- Any investigative activity during the collection of volatile data must be captured as part of the process.

Which combination of steps should the security engineer take to meet these requirements with the LEAST operational overhead? (Choose THREE)

A. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
B. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Move the instance to an isolation subnet that denies all source and destination traffic. Associate the instance with the subnet to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
C. Use Systems Manager Run Command to invoke scripts that collect volatile data.
D. Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data.
E. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.
F. Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance. Tag the instance with any relevant metadata and incident ticket information.

Correct Answers: A. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.; C. Use Systems Manager Run Command to invoke scripts that collect volatile data.; E. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.

Why A is correct: This step properly isolates the compromised instance by restricting its security groups, preventing malware spread while keeping the instance running. Enabling termination protection prevents accidental deletion. Detaching from Auto Scaling groups and ELB ensures the instance isn't terminated automatically or receiving traffic, which is critical for forensic investigation.

Why C is correct: Using Systems Manager Run Command is the best practice for collecting volatile data (memory, running processes) because it provides a remote, auditable way to execute scripts without direct SSH/RDP access. The activity is logged through CloudTrail, meeting the requirement to capture investigative activity. SSM Agent is already installed per the question.

Why E is correct: Creating an EBS snapshot preserves non-volatile storage for forensic analysis. Tagging with metadata and incident information meets the requirement to update the instance's metadata. EBS snapshots are point-in-time backups that can be analyzed separately without affecting the running instance.

Why B is wrong: Moving to an isolation subnet that "denies all source and destination traffic" would make the instance completely unreachable, preventing any forensic data collection or investigation. This is too restrictive compared to using security groups to selectively block traffic while allowing investigative access.

Why D is wrong: Establishing direct SSH/RDP sessions is not AWS best practice because it doesn't provide the same level of audit logging as Systems Manager Run Command. It also introduces the risk of the investigator's actions contaminating the evidence or altering the volatile state of the system.

Why F is wrong: State Manager associations are for ongoing configuration management, not one-time snapshot creation. This adds unnecessary operational overhead compared to simply creating a snapshot directly. State Manager is designed for maintaining desired states across instances, not for incident response.
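The Run Command and snapshot steps from options C and E can be sketched with the AWS CLI as follows. The instance ID, volume ID, ticket number, and collection commands are placeholders; the commands require AWS credentials and a healthy SSM Agent on the target instance.

```shell
# Option C: collect volatile data remotely via SSM Run Command.
# Every invocation is recorded in CloudTrail, satisfying the audit requirement.
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-0123456789abcdef0" \
    --parameters 'commands=["ps aux","ss -antp","lsmod"]'

# Option E: preserve non-volatile storage and record the incident
# ticket on the instance as tags.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Forensic snapshot - INC-1234"
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=IncidentTicket,Value=INC-1234 Key=Status,Value=Quarantined
```

Isolation itself (option A) would be a separate `aws ec2 modify-instance-attribute` or security-group change, applied before volatile data collection begins.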

Question 4

A company has an organization in AWS Organizations. The company wants to use AWS CloudFormation StackSets in the organization to deploy various AWS design patterns into environments. These patterns consist of Amazon EC2 instances, Elastic Load Balancing (ELB) load balancers, Amazon RDS databases, and Amazon Elastic Kubernetes Service (Amazon EKS) clusters or Amazon Elastic Container Service (Amazon ECS) clusters. Currently, the company's developers can create their own CloudFormation stacks to increase the overall speed of delivery. A centralized CI/CD pipeline in a shared services AWS account deploys each CloudFormation stack. The company's security team has already provided requirements for each service in accordance with internal standards. If there are any resources that do not comply with the internal standards, the security team must receive notification to take appropriate action. The security team must implement a notification solution that gives developers the ability to maintain the same overall delivery speed that they currently have. Which solution will meet these requirements in the MOST operationally efficient way?

A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create a custom AWS Lambda function that will run the aws cloudformation validate-template AWS CLI command on all CloudFormation templates before the build stage in the CI/CD pipeline. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create custom rules in CloudFormation Guard for each resource configuration. In the CI/CD pipeline, before the build stage, configure a Docker image to run the cfn-guard command on the CloudFormation template. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic and an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the security team's email addresses to the SNS topic. Create an Amazon S3 bucket in the shared services AWS account. Include an event notification to publish to the SQS queue when new objects are added to the S3 bucket. Require the developers to put their CloudFormation templates in the S3 bucket. Launch EC2 instances that automatically scale based on the SQS queue depth. Configure the EC2 instances to use CloudFormation Guard to scan the templates and deploy the templates if there are no issues. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
D. Create a centralized CloudFormation stack set that includes a standard set of resources that the developers can deploy in each AWS account. Configure each CloudFormation template to meet the security requirements. For any new resources or configurations, update the CloudFormation template and send the template to the security team for review. When the review is completed, add the new CloudFormation stack to the repository for the developers to use.

Correct Answer: B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create custom rules in CloudFormation Guard for each resource configuration. In the CI/CD pipeline, before the build stage, configure a Docker image to run the cfn-guard command on the CloudFormation template. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.

Why B is correct: CloudFormation Guard is an open-source policy-as-code tool designed specifically to validate CloudFormation templates against security and compliance rules before deployment. Implementing this in the CI/CD pipeline's pre-build stage allows developers to maintain delivery speed while catching non-compliant resources early. Custom rules can enforce the security team's internal standards, and SNS notifications alert the security team only when issues are found. This is operationally efficient because it's automated and doesn't slow down compliant deployments.

Why A is wrong: The aws cloudformation validate-template command only checks for valid JSON/YAML syntax and template structure, not compliance with security policies or internal standards. It cannot validate whether resources meet specific security requirements like encryption settings, network configurations, or other compliance controls.

Why C is wrong: This solution is operationally inefficient. It requires maintaining EC2 instances, managing SQS queue processing, handling auto-scaling, and creating a separate S3 upload process. This adds significant infrastructure overhead and complexity compared to running validation directly in the CI/CD pipeline. It also slows down delivery since developers must upload templates separately.

Why D is wrong: This approach severely limits developer agility by requiring security team review for every new resource or configuration. It creates a bottleneck that contradicts the requirement to "maintain the same overall delivery speed." Centralized templates also don't accommodate the diverse design patterns mentioned (EC2, ELB, RDS, EKS, ECS).
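To illustrate option B, a Guard rule enforcing one hypothetical internal standard might look like the sketch below (Guard 2.x rule syntax; the rule name and the specific property being checked are assumptions, not from the question):

```
# Hypothetical rule: every S3 bucket in the template must declare
# server-side encryption.
let s3_buckets = Resources.*[ Type == 'AWS::S3::Bucket' ]

rule s3_buckets_encrypted when %s3_buckets !empty {
    %s3_buckets.Properties.BucketEncryption exists
}
```

In the pipeline, a pre-build step would run something like `cfn-guard validate --data template.yaml --rules internal-standards.guard` and publish to the SNS topic when validation fails.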

Question 5

A company is migrating one of its legacy systems from an on-premises data center to AWS. The application server will run on AWS, but the database must remain in the on-premises data center for compliance reasons. The database is sensitive to network latency. Additionally, the data that travels between the on-premises data center and AWS must have IPsec encryption. Which combination of AWS solutions will meet these requirements? (Choose TWO)

A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. AWS VPN CloudHub
D. VPC peering
E. NAT gateway

Correct Answers: A. AWS Site-to-Site VPN; B. AWS Direct Connect

Why A is correct: AWS Site-to-Site VPN provides IPsec encryption by default, meeting the encryption requirement. It creates encrypted tunnels over the internet between the on-premises data center and AWS VPC, allowing the application server in AWS to securely communicate with the on-premises database.

Why B is correct: AWS Direct Connect provides a dedicated private network connection between on-premises and AWS, which significantly reduces network latency, a critical factor for the latency-sensitive database mentioned in the question. However, Direct Connect alone doesn't provide encryption, so it must be combined with Site-to-Site VPN (VPN over Direct Connect) to meet the IPsec encryption requirement.

Why C is wrong: AWS VPN CloudHub is used to provide connectivity between multiple on-premises sites through AWS, using a hub-and-spoke model. It's not designed for connecting AWS workloads to a single on-premises data center. The question describes a simple AWS-to-on-premises connection scenario.

Why D is wrong: VPC peering connects two VPCs within AWS, not a VPC to an on-premises data center. It's used for inter-VPC communication within AWS, not for hybrid connectivity.

Why E is wrong: NAT gateway provides outbound internet access for resources in private subnets and performs network address translation. It doesn't establish VPN connections or provide encrypted connectivity to on-premises networks. It's purely for internet access, not site-to-site connectivity.
