An e-commerce company wants to build a solution to determine customer sentiments based on written customer reviews of products.
Which AWS services meet these requirements? (Select TWO.)
A. Amazon Lex
B. Amazon Comprehend
C. Amazon Polly
D. Amazon Bedrock
E. Amazon Rekognition
Correct Answer: BD
To determine customer sentiment from written reviews, the company can use Amazon Comprehend, which provides a built-in sentiment analysis API, or Amazon Bedrock, which gives access to foundation models that can classify sentiment through prompting.
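As an illustrative sketch of the Amazon Comprehend path, the snippet below assembles the parameters for Comprehend's DetectSentiment API (a real operation; the `build_sentiment_request` helper and the sample review text are invented for this example), with the actual boto3 call shown only in comments:

```python
import json

def build_sentiment_request(review_text: str, language_code: str = "en") -> dict:
    """Assemble the parameters for Amazon Comprehend's DetectSentiment API."""
    return {"Text": review_text, "LanguageCode": language_code}

request = build_sentiment_request("The product arrived quickly and works great!")
print(json.dumps(request))

# With boto3 installed and AWS credentials configured, the real call would be:
#   import boto3
#   comprehend = boto3.client("comprehend")
#   response = comprehend.detect_sentiment(**request)
# response["Sentiment"] is one of POSITIVE, NEGATIVE, NEUTRAL, or MIXED.
```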
Question 12:
A law firm wants to build an AI application by using large language models (LLMs). The application will read legal documents and extract key points from the documents.
Which solution meets these requirements?
A. Build an automatic named entity recognition system.
B. Create a recommendation engine.
C. Develop a summarization chatbot.
D. Develop a multi-language translation system.
Correct Answer: C
A summarization chatbot is ideal for extracting key points from legal documents. Large language models (LLMs) can summarize complex texts, such as legal documents, making them more accessible and understandable.
Option C (Correct): "Develop a summarization chatbot" is correct because a summarization chatbot uses LLMs to condense and extract key information from text, which is precisely the requirement for reading and summarizing legal documents.
Option A: "Build an automatic named entity recognition system" is incorrect because it focuses on identifying specific entities, not summarizing documents.
Option B: "Create a recommendation engine" is incorrect because it is used to suggest products or content, not summarize text.
Option D: "Develop a multi-language translation system" is incorrect because translation is unrelated to summarizing text.
AWS AI Practitioner References:
Using LLMs for Text Summarization on AWS: AWS supports developing summarization tools using its AI services, including Amazon Bedrock.
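To make the summarization approach concrete, here is a minimal sketch of building a key-point extraction prompt for an LLM (the helper name, wording, and sample document are invented; the resulting string would then be sent to a model on Amazon Bedrock, for example via boto3's bedrock-runtime client):

```python
def build_summarization_prompt(document: str, max_points: int = 5) -> str:
    """Compose an instruction prompt asking an LLM to extract key points."""
    return (
        f"Summarize the following legal document into at most {max_points} "
        f"key points, in plain language:\n\n{document}"
    )

prompt = build_summarization_prompt("This agreement is made between the parties ...")
print(prompt)

# The prompt would then be passed to a foundation model on Amazon Bedrock,
# e.g. with boto3.client("bedrock-runtime") and its invoke_model operation.
```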
Question 13:
A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the model to production, the model's performance decreased significantly.
What should the company do to mitigate this problem?
A. Reduce the volume of data that is used in training.
B. Add hyperparameters to the model.
C. Increase the volume of data that is used in training.
D. Increase the model training time.
Correct Answer: C
When a model performs well on the training data but poorly in production, the cause is often overfitting: the model has learned patterns and noise specific to the training data that do not generalize to new, unseen data.
Option C (Correct): "Increase the volume of data that is used in training" is correct because a larger, more diverse, and more representative dataset helps the model learn generalized patterns rather than features specific to the training set, reducing overfitting and improving performance in production.
Option A: "Reduce the volume of data that is used in training" is incorrect because reducing the data volume would likely worsen the overfitting problem.
Option B: "Add hyperparameters to the model" is incorrect because adjusting hyperparameters alone does not address the underlying lack of data diversity or model generalization.
Option D: "Increase the model training time" is incorrect because training longer does not prevent overfitting; if anything, it can make the model fit the training data even more closely.
AWS AI Practitioner References:
Best Practices for Model Training on AWS: AWS recommends using a larger and more diverse training dataset to improve a model's generalization capability and reduce the risk of overfitting.
Question 14:
A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions.
Which business objective should the company use to evaluate the effect of the LLM chatbot?
A. Website engagement rate
B. Average call duration
C. Corporate social responsibility
D. Regulatory compliance
Correct Answer: B
The business objective for evaluating an LLM chatbot that reduces the actions call center employees must take is average call duration: if the chatbot helps resolve customer questions with fewer steps, calls should become shorter.
Question 15:
A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months.
Which AWS solution should the company use to automate the generation of graphs?
A. Amazon Q in Amazon EC2
B. Amazon Q Developer
C. Amazon Q in Amazon QuickSight
D. Amazon Q in AWS Chatbot
Correct Answer: C
Amazon QuickSight is a fully managed business intelligence (BI) service for creating and publishing interactive dashboards with visualizations such as graphs, charts, and tables. Amazon Q in QuickSight is the natural language query feature within the service: users ask questions about their data in plain language and receive visual responses such as graphs.
Option C (Correct): "Amazon Q in Amazon QuickSight" is correct because it lets users explore their data through natural language queries and can automatically generate graphs, making it ideal for displaying total sales for top-selling products across various retail locations.
Options A, B, and D are incorrect: Amazon Q in Amazon EC2 is not a BI visualization offering, Amazon Q Developer is an assistant focused on software development tasks, and Amazon Q in AWS Chatbot answers operational questions in chat channels; none of these automate graph generation from business data.
AWS AI Practitioner References:
Amazon Q in QuickSight is designed to provide insights from data through natural language queries, making it a powerful tool for generating automated graphs and visualizations directly from queried data.
Business Intelligence (BI) on AWS: AWS services such as Amazon QuickSight provide business intelligence capabilities, including automated reporting and visualization features, which are ideal for companies seeking to visualize data such as sales trends over time.
Question 16:
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company wants to know how much information can fit into one prompt.
Which consideration will inform the company's decision?
A. Temperature
B. Context window
C. Batch size
D. Model size
Correct Answer: B
The context window is the maximum number of tokens a large language model (LLM) can process in a single request, so it determines how much information, prompt plus generated output, can fit into one interaction with a model on Amazon Bedrock.
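A rough sketch of how such a check might look in practice (the 4-characters-per-token ratio is a common rule of thumb for English text, not an exact tokenizer, and the helper names are hypothetical):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough estimate: English text averages about 4 characters per token."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(prompt: str, context_window: int,
                    reserved_for_output: int = 500) -> bool:
    """Check whether a prompt, plus room for the model's reply, fits the window."""
    return estimate_tokens(prompt) + reserved_for_output <= context_window

review = "The battery died after two days. Very disappointed. " * 10
print(fits_in_context(review, context_window=8000))
```

Exact token counts depend on the model's tokenizer, so a real application would use the model provider's tokenizer or token-count metadata rather than this heuristic.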
Question 17:
A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon S3 bucket.
The data is encrypted with Amazon S3 managed keys (SSE-S3).
The FM encounters a failure when attempting to access the S3 bucket data.
Which solution will meet these requirements?
A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key.
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.
Correct Answer: A
Amazon Bedrock needs an appropriate IAM role with permission to access and decrypt the data stored in Amazon S3. If the data is encrypted with Amazon S3 managed keys (SSE-S3), the role that Amazon Bedrock assumes must have the required permissions to read the objects; S3 then decrypts them transparently.
Option A (Correct): "Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key" is correct because it gives the model secure access to the encrypted data without changing the encryption settings or compromising data security.
Option B: "Set the access permissions for the S3 buckets to allow public access" is incorrect because it violates security best practices by exposing potentially sensitive data to the public.
Option C: "Use prompt engineering techniques to tell the model to look for information in Amazon S3" is incorrect because prompting does not address the encryption and permission issue.
Option D: "Ensure that the S3 data does not contain sensitive information" is incorrect because it does not solve the access problem related to encryption.
AWS AI Practitioner References:
Managing Access to Encrypted Data in AWS: AWS recommends using proper IAM roles and policies to control access to encrypted data stored in S3.
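A minimal sketch of the identity-based policy such a role might need, assuming a hypothetical bucket name. With SSE-S3, no explicit KMS permission is required because S3 handles decryption transparently; a separate kms:Decrypt grant would only matter for SSE-KMS encrypted objects:

```python
import json

# Hypothetical bucket name used for illustration.
BUCKET = "my-bedrock-knowledge-data"

# Minimal identity-based policy for the role Amazon Bedrock assumes.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",        # ListBucket applies to the bucket
                f"arn:aws:s3:::{BUCKET}/*",      # GetObject applies to the objects
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```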
Question 18:
What are tokens in the context of generative AI models?
A. Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units.
B. Tokens are the mathematical representations of words or concepts used in generative AI models.
C. Tokens are the pre-trained weights of a generative AI model that are fine-tuned for specific tasks.
D. Tokens are the specific prompts or instructions given to a generative AI model to generate output.
Correct Answer: A
Tokens in generative AI models are the smallest units the model processes, typically representing words, subwords, or characters. They are essential for the model to understand and generate language, breaking text down into manageable parts for processing.
Option A (Correct): "Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units" is the correct definition of tokens in the context of generative AI models.
Option B: "Mathematical representations of words" describes embeddings, not tokens.
Option C: "Pre-trained weights of a model" refers to the parameters of a model, not tokens.
Option D: "Prompts or instructions given to a model" refers to the queries or commands provided to a model, not tokens.
AWS AI Practitioner References:
Understanding Tokens in NLP: AWS provides detailed explanations of how tokens are used in natural language processing tasks by AI models, such as in Amazon Comprehend and other AWS AI services.
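To make the idea concrete, here is a toy greedy longest-match subword tokenizer. It illustrates the concept only: real models use learned tokenizers such as BPE, and the vocabulary here is invented:

```python
def simple_subword_tokenize(text: str, vocab: set) -> list:
    """Greedy longest-match subword tokenization over a toy vocabulary.

    Each word is split into the longest vocabulary pieces available,
    falling back to single characters for unknown fragments.
    """
    tokens = []
    for word in text.lower().split():
        while word:
            for end in range(len(word), 0, -1):
                piece = word[:end]
                if piece in vocab or end == 1:
                    tokens.append(piece)
                    word = word[end:]
                    break
    return tokens

vocab = {"un", "believ", "able", "the", "result", "was"}
print(simple_subword_tokenize("The result was unbelievable", vocab))
# "unbelievable" is split into the subword tokens "un", "believ", "able"
```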
Question 19:
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately.
Which Amazon SageMaker inference option will meet these requirements?
A. Batch transform
B. Real-time inference
C. Serverless inference
D. Asynchronous inference
Correct Answer: A
Batch transform in Amazon SageMaker is designed for offline processing of large datasets. It is ideal when immediate predictions are not required and inference must run over datasets that are multiple gigabytes in size. It processes data in batches, making it suitable for analyzing archived data without real-time access to predictions.
Option A (Correct): "Batch transform" is correct because it is optimized for handling large datasets when immediate access to predictions is not required.
Option B: "Real-time inference" is incorrect because it serves low-latency, real-time prediction needs, which this scenario does not have.
Option C: "Serverless inference" is incorrect because it is designed for intermittent or unpredictable request traffic, not for large batch processing.
Option D: "Asynchronous inference" is incorrect because it targets individual requests with large payloads or long processing times that are queued and answered near real time; batch transform is the better fit for offline inference over an entire multi-gigabyte dataset.
AWS AI Practitioner References:
Batch Transform on Amazon SageMaker: AWS recommends using batch transform for large datasets when real-time processing is not needed, ensuring cost-effectiveness and scalability.
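As a sketch, the snippet below assembles the parameters for SageMaker's CreateTransformJob API (the field names are the real API parameters; the job, model, instance type, and bucket names are hypothetical), with the actual boto3 call noted only in a comment:

```python
def build_batch_transform_request(job_name: str, model_name: str,
                                  input_s3: str, output_s3: str) -> dict:
    """Assemble the parameters for SageMaker's CreateTransformJob API."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": input_s3}
            },
            "SplitType": "Line",  # one record per line in the input files
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    }

request = build_batch_transform_request(
    "archived-data-inference", "my-price-model",
    "s3://my-bucket/archive/", "s3://my-bucket/predictions/")
print(request["TransformJobName"])

# With boto3 and AWS credentials configured:
#   boto3.client("sagemaker").create_transform_job(**request)
# Predictions land in the S3 output path once the job completes.
```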
Question 20:
A company has built a chatbot that can respond to natural language questions with images. The company wants to ensure that the chatbot does not return inappropriate or unwanted images.
Which solution will meet these requirements?
A. Implement moderation APIs.
B. Retrain the model with a general public dataset.
C. Perform model validation.
D. Automate user feedback integration.
Correct Answer: A
Moderation APIs, such as Amazon Rekognition's content moderation API (DetectModerationLabels), can filter and block inappropriate or unwanted images before a chatbot returns them. These APIs are specifically designed to detect and manage undesirable content in images.
Option A (Correct): "Implement moderation APIs" is correct because moderation APIs identify and filter inappropriate content, ensuring the chatbot does not return unwanted images.
Option B: "Retrain the model with a general public dataset" is incorrect because retraining does not directly prevent inappropriate content from being returned.
Option C: "Perform model validation" is incorrect because validation ensures model correctness, not content moderation.
Option D: "Automate user feedback integration" is incorrect because user feedback does not prevent inappropriate images in real time.
AWS AI Practitioner References:
AWS Content Moderation Services: AWS provides moderation APIs for filtering unwanted content from applications.
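A sketch of how a moderation check might gate the chatbot's images, using the shape of the ModerationLabels list returned by Rekognition's DetectModerationLabels API (the threshold, helper name, and sample labels are illustrative):

```python
def is_image_safe(moderation_labels: list, confidence_threshold: float = 80.0) -> bool:
    """Decide whether to return an image given Rekognition-style moderation labels.

    `moderation_labels` mimics the ModerationLabels list returned by
    DetectModerationLabels; an empty list means nothing objectionable
    was detected above the API's own minimum confidence.
    """
    return not any(
        label["Confidence"] >= confidence_threshold for label in moderation_labels
    )

# Simulated responses; real ones would come from
# boto3.client("rekognition").detect_moderation_labels(Image=...).
clean = []
flagged = [{"Name": "Explicit Nudity", "Confidence": 97.2, "ParentName": ""}]

print(is_image_safe(clean))    # image may be returned to the user
print(is_image_safe(flagged))  # image is suppressed by the chatbot
```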