AWS AIF-C01 Free Practice Questions — Page 2

AWS Certified AI Practitioner • 5 questions • Answers & explanations included

Question 6

A company uses Amazon SageMaker for its ML pipeline in a production environment. The company has large input data sizes up to 1 GB and processing times up to 1 hour. The company needs near real-time latency. Which SageMaker inference option meets these requirements?

A. Real-time inference
B. Serverless inference
C. Asynchronous inference
D. Batch transform

Correct Answer: C. Asynchronous inference

Asynchronous inference is designed for large payloads and long processing times while still returning results with near real-time latency. It queues incoming requests and supports payloads of up to 1 GB and processing times of up to one hour per request, which matches both requirements. Real-time inference limits payloads to a few megabytes and expects fast responses, so it could not accommodate 1 GB inputs. Serverless inference has similar payload and timeout limitations. Batch transform is for offline processing and does not provide near real-time responses.
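As an illustration, the request below sketches what distinguishes an asynchronous endpoint configuration: an `AsyncInferenceConfig` block that routes results to S3 instead of returning them synchronously. The model name, instance type, and S3 paths are hypothetical placeholders, not values from the question.

```python
# Minimal sketch: building keyword arguments for a SageMaker asynchronous
# inference endpoint config. All names and paths are hypothetical.

def build_async_endpoint_config(model_name: str, output_s3_uri: str) -> dict:
    """Return kwargs suitable for sagemaker.create_endpoint_config."""
    return {
        "EndpointConfigName": f"{model_name}-async-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
        }],
        # This block is what makes the endpoint asynchronous: responses are
        # written to S3 rather than returned in the HTTP response.
        "AsyncInferenceConfig": {
            "OutputConfig": {"S3OutputPath": output_s3_uri},
            "ClientConfig": {"MaxConcurrentInvocationsPerInstance": 4},
        },
    }

config = build_async_endpoint_config("my-model", "s3://my-bucket/async-results/")
print(config["AsyncInferenceConfig"]["OutputConfig"]["S3OutputPath"])
```

With boto3, such a config would be passed to `create_endpoint_config`, and callers would then invoke the endpoint with `invoke_endpoint_async`, pointing `InputLocation` at the large input object in S3 and polling the S3 output location for the result.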

Question 7

A company is using domain-specific models. The company wants to avoid creating new models from the beginning. The company instead wants to adapt pre-trained models to create models for new related tasks. Which ML strategy meets these requirements?

A. Increase the number of epochs.
B. Use transfer learning.
C. Decrease the number of epochs.
D. Use unsupervised learning.

Correct Answer: B. Use transfer learning.

Transfer learning takes a pre-trained model and adapts it to new, related tasks without training from scratch. The approach reuses knowledge learned in one domain and applies it to another, saving training time and computational resources. Increasing or decreasing the number of epochs changes how long a model trains but does not adapt an existing model to a new task. Unsupervised learning is a category of ML, not a strategy for adapting existing models.
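The idea can be sketched in a toy example: keep a "pre-trained" feature extractor frozen and fit only a small new head on the related task. The frozen weights here are random stand-ins for real pre-trained parameters, and the task is synthetic; this is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a fixed projection standing in for
# weights learned on a source task. It is frozen and never updated.
W_pretrained = rng.normal(size=(10, 32))

def extract_features(x):
    # Reuse the source-task representation for the new task.
    return np.tanh(x @ W_pretrained)

# New, related task: only the lightweight head is trained
# (closed-form least squares here, for simplicity).
X_new = rng.normal(size=(200, 10))
y_new = (X_new[:, 0] > 0).astype(float)

H = extract_features(X_new)
head, *_ = np.linalg.lstsq(H, y_new, rcond=None)  # train just the head

preds = (H @ head) > 0.5
accuracy = (preds == y_new.astype(bool)).mean()
print(f"head-only training accuracy: {accuracy:.2f}")
```

Only the small `head` vector is fit; the bulk of the model (`W_pretrained`) is reused as-is, which is what makes transfer learning cheaper than training from scratch.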

Question 8

A company is building a solution to generate images for protective eyewear. The solution must have high accuracy and must minimize the risk of incorrect annotations. Which solution will meet these requirements?

A. Human-in-the-loop validation by using Amazon SageMaker Ground Truth Plus
B. Data augmentation by using an Amazon Bedrock knowledge base
C. Image recognition by using Amazon Rekognition
D. Data summarization by using Amazon QuickSight Q

Correct Answer: A. Human-in-the-loop validation by using Amazon SageMaker Ground Truth Plus

Amazon SageMaker Ground Truth Plus provides human-in-the-loop validation, in which human reviewers verify and correct annotations. This expert oversight delivers high accuracy and minimizes the risk of incorrect annotations. Data augmentation expands a dataset but does not address annotation accuracy. Amazon Rekognition analyzes images; it does not validate annotations. Amazon QuickSight Q is for data visualization and summarization, not image generation or annotation.
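Conceptually, a human-in-the-loop workflow routes low-confidence machine annotations to human reviewers instead of accepting them automatically. A minimal sketch of that routing logic, with a hypothetical confidence threshold and made-up records:

```python
# Sketch of the human-in-the-loop pattern: annotations below a confidence
# threshold are queued for human review. Threshold and data are hypothetical.

CONFIDENCE_THRESHOLD = 0.90

def route_annotations(annotations):
    """Split machine annotations into auto-accepted vs. human-review queues."""
    accepted, needs_review = [], []
    for ann in annotations:
        if ann["confidence"] >= CONFIDENCE_THRESHOLD:
            accepted.append(ann)
        else:
            needs_review.append(ann)
    return accepted, needs_review

batch = [
    {"image": "eyewear_001.jpg", "label": "goggles", "confidence": 0.97},
    {"image": "eyewear_002.jpg", "label": "goggles", "confidence": 0.62},
    {"image": "eyewear_003.jpg", "label": "face shield", "confidence": 0.88},
]

accepted, needs_review = route_annotations(batch)
print(f"auto-accepted: {len(accepted)}, sent to human review: {len(needs_review)}")
```

Ground Truth Plus manages this kind of review workforce and workflow as a service; the sketch only shows why human review catches the low-confidence cases that cause incorrect annotations.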

Question 9

A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon S3 bucket. The data is encrypted with Amazon S3 managed keys (SSE-S3). The FM encounters a failure when attempting to access the S3 bucket data. Which solution will meet these requirements?

A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key.
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.

Correct Answer: A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key.

The solution is to ensure proper IAM permissions for the role that Amazon Bedrock assumes. Because SSE-S3 keys are managed by Amazon S3, the service decrypts objects transparently once the role is allowed to read them, so the role needs the appropriate S3 access permissions (with SSE-KMS, the role would additionally need permission to use the KMS key). Making the bucket public would create a security vulnerability rather than fix a permissions issue. Prompt engineering cannot resolve permission failures. Removing sensitive data does not address the access failure.
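A sketch of the kind of identity-based policy the Bedrock service role would need is shown below as a Python dict (the bucket name is hypothetical). Note that for SSE-S3 no key-specific action is required; `s3:GetObject` on the objects is the decisive permission.

```python
import json

# Hypothetical policy for the role Amazon Bedrock assumes. Bucket name is a
# placeholder. SSE-S3 objects are decrypted transparently by S3 once
# s3:GetObject is allowed; SSE-KMS would also require kms:Decrypt on the key.

bedrock_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBedrockToReadBucketData",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-kb-bucket",
                "arn:aws:s3:::my-kb-bucket/*",
            ],
        }
    ],
}

print(json.dumps(bedrock_role_policy, indent=2))
```

The role's trust policy must also allow the Bedrock service to assume it; the statement above covers only the data-access side.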

Question 10

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible. Which solution will meet these requirements?

A. Deploy optimized small language models (SLMs) on edge devices.
B. Deploy optimized large language models (LLMs) on edge devices.
C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.

Correct Answer: A. Deploy optimized small language models (SLMs) on edge devices.

Deploying optimized small language models directly on edge devices provides the lowest latency because inference happens locally, with no network round trip. SLMs are designed to run efficiently on resource-constrained hardware, whereas LLMs are generally too large to deploy at the edge. Centralized APIs add network latency, and asynchronous communication adds further delay, so local edge deployment is the only option that eliminates network-related latency entirely.
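The reasoning reduces to simple latency arithmetic. The numbers below are illustrative assumptions, not benchmarks: even if cloud inference itself is faster, the network round trip and queueing dominate.

```python
# Back-of-the-envelope latency comparison. All figures are assumed values
# for illustration only, not measured benchmarks.

slm_on_device_ms = 40     # optimized SLM running locally on the edge device
network_rtt_ms = 80       # round trip to a centralized API
queue_wait_ms = 50        # extra wait from asynchronous request handling
cloud_inference_ms = 25   # inference on a large, fast cloud instance

edge_total = slm_on_device_ms
cloud_total = network_rtt_ms + queue_wait_ms + cloud_inference_ms

print(f"edge SLM: {edge_total} ms, centralized API: {cloud_total} ms")
```

Under these assumptions the local SLM wins despite slower raw inference, because the network terms apply to every centralized request and are zero for on-device inference.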

Ready for the Full AIF-C01 Experience?

Access all 31 pages of practice questions, track your progress, and simulate the real exam with timed mode.
