Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?
A. Helps decrease the model's complexity
B. Improves model performance over time
C. Decreases the training time requirement
D. Optimizes model inference time
Correct Answer: B
Ongoing pre-training when fine-tuning a foundation model (FM) improves model performance over time by continuously learning from new data.
Question 72:
A company built a deep learning model for object detection and deployed the model to production.
Which AI process occurs when the model analyzes a new image to identify objects?
A. Training
B. Inference
C. Model deployment
D. Bias correction
Correct Answer: B
Inference is the correct answer because it is the AI process that occurs when a deployed model analyzes new data (such as an image) to make predictions or identify objects.
Question 73:
A company is building a contact center application and wants to gain insights from customer conversations. The company wants to analyze and extract key information from the audio of the customer calls. Which solution meets these requirements?
A. Build a conversational chatbot by using Amazon Lex.
B. Transcribe call recordings by using Amazon Transcribe.
C. Extract information from call recordings by using Amazon SageMaker Model Monitor.
D. Create classification labels by using Amazon Comprehend.
Correct Answer: B
Amazon Transcribe is the correct solution for converting audio from customer calls into text, allowing the company to analyze and extract key information from the conversations.
Question 74:
A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.
Which solution will meet these requirements?
A. Use Amazon SageMaker Serverless Inference to deploy the model.
B. Use Amazon CloudFront to deploy the model.
C. Use Amazon API Gateway to host the model and serve predictions.
D. Use AWS Batch to host the model and serve predictions.
Correct Answer: A
Amazon SageMaker Serverless Inference is the correct solution for deploying an ML model to production in a way that allows a web application to use the model without the need to manage the underlying infrastructure.
Amazon SageMaker Serverless Inferenceprovides a fully managed environment for deploying machine learning models. It automatically provisions, scales, and manages the infrastructure required to host the model, removing the need for the
company to manage servers or other underlying infrastructure.
Question 75:
A company uses Amazon SageMaker for its ML pipeline in a production environment. The company has large input data sizes up to 1 GB and processing times up to 1 hour. The company needs near real-time latency.
Which SageMaker inference option meets these requirements?
A. Real-time inference
B. Serverless inference
C. Asynchronous inference
D. Batch transform
Correct Answer: A
Real-time inference is designed to provide immediate, low-latency predictions, which is necessary when the company requires near real-time latency for its ML models. This option is optimal when there is a need for fast responses, even with
large input data sizes and substantial processing times.
Option A (Correct): "Real-time inference":This is the correct answer because it supports low-latency requirements, which are essential for real-time applications where quick response times are needed.
Option B:"Serverless inference" is incorrect because it is more suited for intermittent, small-scale inference workloads, not for continuous, large-scale, low- latency needs.
Option C:"Asynchronous inference" is incorrect because it is used for workloads that do not require immediate responses.
Option D:"Batch transform" is incorrect as it is intended for offline, large-batch processing where immediate response is not necessary.
AWS AI Practitioner References:
Amazon SageMaker Inference Options:AWS documentation describes real-time inference as the best solution for applications that require immediate prediction results with low latency.
Question 76:
A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.
Which action will reduce these risks?
A. Create a prompt template that teaches the LLM to detect attack patterns.
B. Increase the temperature parameter on invocation requests to the LLM.
C. Avoid using LLMs that are not listed in Amazon SageMaker.
D. Decrease the number of input tokens on invocations of the LLM.
Correct Answer: A
Creating a prompt template that teaches the LLM to detect attack patterns is the most effective way to reduce the risk of the model being manipulated through prompt engineering.
Question 77:
A company needs to build its own large language model (LLM) based on only the company's private data. The company is concerned about the environmental effect of the training process.
Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?
A. Amazon EC2 C series
B. Amazon EC2 G series
C. Amazon EC2 P series
D. Amazon EC2 Trn series
Correct Answer: D
The Amazon EC2 Trn series (Trainium) instances are designed for high-performance, cost- effective machine learning training while being energy-efficient. AWS Trainium-powered instances are optimized for deep learning models and have
been developed to minimize environmental impact by maximizing energy efficiency. Option D (Correct): "Amazon EC2 Trn series":This is the correct answer because the Trn series is purpose-built for training deep learning models with lower
energy consumption, which aligns with the company's concern about environmental effects.
Option A:"Amazon EC2 C series" is incorrect because it is intended for compute- intensive tasks but not specifically optimized for ML training with environmental considerations.
Option B:"Amazon EC2 G series" (Graphics Processing Unit instances) is optimized for graphics-intensive applications but does not focus on minimizing environmental impact for training.
Option C:"Amazon EC2 P series" is designed for ML training but does not offer the same level of energy efficiency as the Trn series.
AWS AI Practitioner References:
AWS Trainium Overview:AWS promotes Trainium instances as their most energy- efficient and cost-effective solution for ML model training.
Question 78:
A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and use an AI model responsibly to minimize bias that could negatively affect some customers. Which actions should the company take to meet these requirements? (Select TWO.)
A. Detect imbalances or disparities in the data.
B. Ensure that the model runs frequently.
C. Evaluate the model's behavior so that the company can provide transparency to stakeholders.
D. Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate.
E. Ensure that the model's inference time is within the accepted limits.
Correct Answer: AC
To build an AI model responsibly and minimize bias, it is essential to ensure fairness and transparency throughout the model development and deployment process. This involves detecting and mitigating data imbalances and thoroughly
evaluating the model's behavior to understand its impact on different groups.
Option A (Correct): "Detect imbalances or disparities in the data":This is correct because identifying and addressing data imbalances or disparities is a critical step in reducing bias. AWS provides tools like Amazon SageMaker Clarify to detect
bias during data preprocessing and model training. Option C (Correct): "Evaluate the model's behavior so that the company can provide transparency to stakeholders":This is correct because evaluating the model's behavior for fairness and
accuracy is key to ensuring that stakeholders understand how the model makes decisions. Transparency is a crucial aspect of responsible AI.
Option B:"Ensure that the model runs frequently" is incorrect because the frequency of model runs does not address bias.
Option D:"Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate" is incorrect because ROUGE is a metric for evaluating the quality of text summarization models, not for
minimizing bias.
Option E:"Ensure that the model's inference time is within the accepted limits" is incorrect as it relates to performance, not bias reduction.
AWS AI Practitioner References:
Amazon SageMaker Clarify:AWS offers tools such as SageMaker Clarify for detecting bias in datasets and models, and for understanding model behavior to ensure fairness and transparency.
Responsible AI Practices:AWS promotes responsible AI by advocating for fairness, transparency, and inclusivity in model development and deployment.
Question 79:
A company is building an ML model. The company collected new data and analyzed the data by creating a correlation matrix, calculating statistics, and visualizing the data.
Which stage of the ML pipeline is the company currently in?
A. Data pre-processing
B. Feature engineering
C. Exploratory data analysis
D. Hyperparameter tuning
Correct Answer: C
Exploratory data analysis (EDA) involves understanding the data by visualizing it, calculating statistics, and creating correlation matrices. This stage helps identify patterns, relationships, and anomalies in the data, which can guide further
steps in the ML pipeline. Option C (Correct): "Exploratory data analysis":This is the correct answer as the tasks described (correlation matrix, calculating statistics, visualizing data) are all part of the EDA process.
Option A:"Data pre-processing" is incorrect because it involves cleaning and transforming data, not initial analysis.
Option B:"Feature engineering" is incorrect because it involves creating new features from raw data, not analyzing the data's existing structure. Option D:"Hyperparameter tuning" is incorrect because it refers to optimizing model parameters,
not analyzing the data.
AWS AI Practitioner References:
Stages of the Machine Learning Pipeline:AWS outlines EDA as the initial phase of understanding and exploring data before moving to more specific preprocessing, feature engineering, and model training stages.
Question 80:
A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.
Which factor will drive the inference costs?
A. Number of tokens consumed
B. Temperature value
C. Amount of data used to train the LLM
D. Total training time
Correct Answer: A
In generative AI models, such as those built on Amazon Bedrock, inference costs are driven by the number of tokens processed. A token can be as short as one character or as long as one word, and the more tokens consumed during the
inference process, the higher the cost.
Option A (Correct): "Number of tokens consumed":This is the correct answer because the inference cost is directly related to the number of tokens processed by the model.
Option B:"Temperature value" is incorrect as it affects the randomness of the model's output but not the cost directly.
Option C:"Amount of data used to train the LLM" is incorrect because training data size affects training costs, not inference costs. Option D:"Total training time" is incorrect because it relates to the cost of training the model, not the cost of
inference.
AWS AI Practitioner References:
Understanding Inference Costs on AWS:AWS documentation highlights that inference costs for generative models are largely based on the number of tokens processed.
Nowadays, the certification exams become more and more important and required by more and more enterprises when applying for a job. But how to prepare for the exam effectively? How to prepare for the exam in a short time with less efforts? How to get a ideal result and how to find the most reliable resources? Here on Vcedump.com, you will find all the answers. Vcedump.com provide not only Amazon exam questions, answers and explanations but also complete assistance on your exam preparation and certification application. If you are confused on your AIF-C01 exam preparations and Amazon certification application, do not hesitate to visit our Vcedump.com to find your solutions here.