A customer is concerned that the consolidation rate displayed in identity resolution is quite low compared to their initial estimates.
Which configuration change should a consultant consider in order to increase the consolidation rate?
A. Change reconciliation rules to Most Occurring.
B. Increase the number of matching rules.
C. Include additional attributes in the existing matching rules.
D. Reduce the number of matching rules.
Correct Answer: B
The consolidation rate is the rate at which source profiles are combined into unified profiles, calculated as 1 - (number of unified individuals / number of source individuals). For example, if you ingest 100 source records and create 80 unified profiles, your consolidation rate is 20%. To increase the consolidation rate, you need to increase the number of matches between source profiles, which can be done by adding more match rules. Match rules define the criteria for matching source profiles based on their attributes, and each additional rule gives source profiles another chance to match. The other options tend to have the opposite effect: changing reconciliation rules affects only how attribute values are chosen for unified profiles, not how profiles match; including additional attributes in the existing match rules makes the matching criteria stricter; and reducing the number of match rules removes opportunities to match. References: Identity Resolution Calculated Insight: Consolidation Rates for Unified Profiles, Identity Resolution Ruleset Processing Results, Configure Identity Resolution Rulesets
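As a quick check of the arithmetic, here is a minimal Python sketch (the record counts are hypothetical):

source_individuals = 100   # source profiles ingested
unified_individuals = 80   # unified profiles produced by identity resolution
consolidation_rate = 1 - (unified_individuals / source_individuals)
print(f"{consolidation_rate:.0%}")  # prints: 20%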
Question 32:
Northern Trail Outfitters uploads new customer data to an Amazon S3 Bucket on a daily basis to be ingested in Data Cloud.
In what order should each process be run to ensure that freshly imported data is ready and available to use for any segment?
A. Calculated Insight > Refresh Data Stream > Identity Resolution
B. Refresh Data Stream > Calculated Insight > Identity Resolution
C. Identity Resolution > Refresh Data Stream > Calculated Insight
D. Refresh Data Stream > Identity Resolution > Calculated Insight
Correct Answer: D
To ensure that freshly imported data from an Amazon S3 Bucket is ready and available to use for any segment, the following processes should be run in this order:
Refresh Data Stream: This process updates the data lake objects in Data Cloud with the latest data from the source system. It can be configured to run automatically or manually, depending on the data stream settings. Refreshing the data stream ensures that Data Cloud has the most recent and accurate data from the Amazon S3 Bucket.
Identity Resolution: This process creates unified individual profiles by matching and consolidating source profiles from different data streams based on the identity resolution ruleset. It runs daily by default, but can be triggered manually as well. Identity resolution ensures that Data Cloud has a single view of each customer across different data sources.
Calculated Insight: This process performs calculations on data lake objects or CRM data and returns a result as a new data object. It can be used to create metrics or measures for segmentation or analysis purposes. Calculated insights ensure that Data Cloud has the derived data that can be used for personalization or activation.
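The required ordering can be illustrated with a minimal sketch (the function names are placeholders standing in for the actual Data Cloud jobs, not real API calls):

# Placeholder steps standing in for the actual Data Cloud jobs.
def refresh_data_stream():
    print("1. Ingest the latest S3 files into data lake objects")

def run_identity_resolution():
    print("2. Match and consolidate source profiles into unified profiles")

def run_calculated_insight():
    print("3. Recompute derived metrics on the refreshed, unified data")

# Each step consumes the output of the previous one, so order matters.
for step in (refresh_data_stream, run_identity_resolution, run_calculated_insight):
    step()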
References: Configure Data Stream Refresh and Frequency - Salesforce
Question 33:
A consultant is setting up a data stream with transactional data.
Which field type should the consultant choose to ensure that leading zeros in the purchase order number are preserved?
A. Text
B. Number
C. Decimal
D. Serial
Correct Answer: A
The field type Text should be chosen to ensure that leading zeros in the purchase order number are preserved. This is because text fields store values as strings of alphanumeric characters and do not alter them, while number, decimal, and serial fields store values as numbers and automatically drop any leading zeros when displaying or exporting the data. Therefore, text fields are more suitable for storing data that needs to retain its original format, such as purchase order numbers, zip codes, and phone numbers.
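The difference is easy to demonstrate (a minimal Python sketch; the purchase order value is hypothetical):

po_number = "00042917"       # stored as text: leading zeros preserved
print(po_number)             # prints: 00042917
as_number = int(po_number)   # stored as a number: leading zeros lost
print(as_number)             # prints: 42917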
References: Zeros at the start of a field appear to be omitted in Data Exports; Keep First '0' When Importing a CSV File; Import and export address fields that begin with a zero or contain a plus symbol
Question 34:
Which information is provided in a .csv file when activating to Amazon S3?
A. An audit log showing the user who activated the segment and when it was activated
B. The activated data payload
C. The metadata regarding the segment definition
D. The manifest of origin sources within Data Cloud
Correct Answer: B
When activating to Amazon S3, the information that is provided in a .csv file is the activated data payload. The activated data payload is the data that is sent from Data Cloud to the activation target, which in this case is an Amazon S3 bucket. It contains the attributes and values of the individuals or entities that are included in the segment being activated, and it can be used for various purposes, such as marketing, sales, service, or analytics. The other options are incorrect because they are not provided in a .csv file when activating to Amazon S3. Option A is incorrect because an audit log is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Activation History tab. Option C is incorrect because the metadata regarding the segment definition is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Segmentation tab. Option D is incorrect because the manifest of origin sources within Data Cloud is not provided in a .csv file, but it can be viewed in the Data Cloud UI under the Data Sources tab. References: Data Activation Overview, Create and Activate Segments in Data Cloud, Data Activation Use Cases, View Activation History, Segmentation Overview, Data Sources Overview
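For illustration, an activated payload file might look like the following (a hypothetical sketch; the columns depend entirely on the attributes selected during activation):

Id,FirstName,LastName,Email
001,Alice,Nguyen,alice@example.com
002,Bruno,Marques,bruno@example.com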
Question 35:
What does the Ignore Empty Value option do in identity resolution?
A. Ignores empty fields when running any custom match rules
B. Ignores empty fields when running reconciliation rules
C. Ignores Individual object records with empty fields when running identity resolution rules
D. Ignores empty fields when running the standard match rules
Correct Answer: B
The Ignore Empty Value option in identity resolution allows customers to ignore empty fields when running reconciliation rules. Reconciliation rules are used to determine the final value of an attribute for a unified individual profile, based on the values from different sources. The Ignore Empty Value option can be set to true or false for each attribute in a reconciliation rule. If set to true, the reconciliation rule will skip any source that has an empty value for that attribute and move on to the next source in the priority order. If set to false, the reconciliation rule will consider any source that has an empty value for that attribute as a valid source and use it to populate the attribute value for the unified individual profile.
The other options are not correct descriptions of what the Ignore Empty Value option does in identity resolution. The Ignore Empty Value option does not affect the custom match rules or the standard match rules, which are used to identify and link individuals across different sources based on their attributes. The Ignore Empty Value option also does not ignore individual object records with empty fields when running identity resolution rules, as identity resolution rules operate on the attribute level, not the record level.
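The described behavior can be sketched in a few lines (a minimal Python sketch of a source-priority reconciliation rule; an illustration, not Data Cloud's actual implementation):

def reconcile(sources, attribute, ignore_empty=True):
    # Sources are assumed to already be ordered by reconciliation priority.
    for source in sources:
        value = source.get(attribute, "")
        if ignore_empty and value == "":
            continue         # skip empty values and try the next source
        return value         # first acceptable value wins
    return ""

sources = [{"email": ""}, {"email": "a@example.com"}]
print(reconcile(sources, "email", ignore_empty=True))   # prints: a@example.com
print(reconcile(sources, "email", ignore_empty=False))  # prints an empty line (empty value accepted)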
References: Data Cloud Identity Resolution Reconciliation Rule Input, Configure Identity Resolution Rulesets, Data and Identity in Data Cloud
Question 36:
A user is not seeing suggested values from newly modeled data when building a segment.
What is causing this issue?
A. Value suggestion will only return results for the first 50 values of a specific attribute.
B. Value suggestion can only work on direct attributes and not related attributes.
C. Value suggestion requires Data Aware Specialist permissions at a minimum.
D. Value suggestion is still processing and takes up to 24 hours to be available.
Correct Answer: D
The most likely cause of this issue is that value suggestion is still processing, which can take up to 24 hours to complete. Value suggestion is a feature that enables you to see suggested values for data model object (DMO) fields when creating segment filters. However, this feature needs to be enabled for each DMO field, and it can take up to 24 hours for the suggested values to appear after enabling the feature. Therefore, if a user is not seeing suggested values from newly modeled data, the data has most likely not yet been processed by the value suggestion feature.
References:
Use Value Suggestions in Segmentation
Question 37:
A segment fails to refresh with the error "Segment references too many data lake objects (DLOs)".
Which two troubleshooting tips should help remedy this issue? Choose 2 answers
A. Split the segment into smaller segments.
B. Use calculated insights in order to reduce the complexity of the segmentation query.
C. Refine segmentation criteria to limit up to five custom data model objects (DMOs).
D. Space out the segment schedules to reduce DLO load.
Correct Answer: AB
The error "Segment references too many data lake objects (DLOs)" occurs when a segment query exceeds the limit of 50 DLOs that can be referenced in a single query. This can happen when the segment has too many filters, nested segments, or exclusion criteria that involve different DLOs. To remedy this issue, the consultant can try the following troubleshooting tips: Split the segment into smaller segments. The consultant can divide the segment into multiple segments that have fewer filters, nested segments, or exclusion criteria. This can reduce the number of DLOs that are referenced in each segment query and avoid the error. The consultant can then use the smaller segments as nested segments in a larger segment, or activate them separately. Use calculated insights in order to reduce the complexity of the segmentation query. The consultant can create calculated insights that are derived from existing data using formulas. Calculated insights can simplify the segmentation query by replacing multiple filters or nested segments with a single attribute. For example, instead of using multiple filters to segment individuals based on their purchase history, the consultant can create a calculated insight that calculates the lifetime value of each individual and use that as a filter.
The other options are not troubleshooting tips that can help remedy this issue. Refining segmentation criteria to limit up to five custom data model objects (DMOs) is not a valid option, as the limit of 50 DLOs applies to both standard and custom DMOs. Spacing out the segment schedules to reduce DLO load is not a valid option, as the error is not related to the DLO load, but to the segment query complexity.
References:
1.
Troubleshoot Segment Errors
2.
Create a Calculated Insight
3.
Create a Segment in Data Cloud
Question 38:
When performing segmentation or activation, which time zone is used to publish and refresh data?
A. Time zone specified on the activity at the time of creation
B. Time zone of the user creating the activity
C. Time zone of the Data Cloud Admin user
D. Time zone set by the Salesforce Data Cloud org
Correct Answer: D
The time zone used to publish and refresh data when performing segmentation or activation is the time zone set by the Salesforce Data Cloud org. This time zone is configured in the org settings when Data Cloud is provisioned, and it applies to all users and activities in Data Cloud. It determines when segments are scheduled to refresh and when activations are scheduled to publish. Therefore, it is important to consider the time zone difference between the Data Cloud org and the destination systems or channels when planning segmentation and activation strategies. References: Salesforce Data Cloud Consultant Guide, Segmentation, Activation
Question 39:
Cloud Kicks received a Request to be Forgotten by a customer.
In which two ways should a consultant use Data Cloud to honor this request?
Choose 2 answers
A. Delete the data from the incoming data stream and perform a full refresh.
B. Add the Individual ID to a headerless file and use the delete from file functionality.
C. Use Data Explorer to locate and manually remove the Individual.
D. Use the Consent API to suppress processing and delete the Individual and related records from source data streams.
Correct Answer: BD
To honor a Request to be Forgotten by a customer, a consultant should use Data Cloud in two ways:
Add the Individual ID to a headerless file and use the delete from file functionality. This option allows the consultant to delete multiple Individuals from Data Cloud by uploading a CSV file with their IDs. The deletion process is asynchronous and can take up to 24 hours to complete.
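For illustration, the headerless file contains nothing but one Individual ID per line (the IDs below are hypothetical):

a1b2c3d4-0001
a1b2c3d4-0002
a1b2c3d4-0003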
Use the Consent API to suppress processing and delete the Individual and related records from source data streams. This option allows the consultant to submit a Data Deletion request for an Individual profile in Data Cloud using the Consent API. A Data Deletion request deletes the specified Individual entity and any entities where a relationship has been defined between that entity's identifying attribute and the Individual ID attribute. The deletion request is reprocessed at 30, 60, and 90 days to ensure a full deletion.
The other options are not correct because:
Deleting the data from the incoming data stream and performing a full refresh will not delete the existing data in Data Cloud, only the new data from the source system.
Using Data Explorer to locate and manually remove the Individual will not delete the related records from the source data streams, only the Individual entity in Data Cloud.
References: Delete Individuals from Data Cloud, Requesting Data Deletion or Right to Be Forgotten, Data Refresh for Data Cloud, Data Explorer
Question 40:
Which two requirements must be met for a calculated insight to appear in the segmentation canvas? Choose 2 answers
A. The metrics of the calculated insights must only contain numeric values.
B. The primary key of the segmented table must be a metric in the calculated insight.
C. The calculated insight must contain a dimension including the Individual or Unified Individual Id.
D. The primary key of the segmented table must be a dimension in the calculated insight.
Correct Answer: CD
A calculated insight is a custom metric or measure that is derived from one or more data model objects or data lake objects in Data Cloud. A calculated insight can be used in segmentation to filter or group the data based on the calculated value. However, not all calculated insights can appear in the segmentation canvas. There are two requirements that must be met for a calculated insight to appear in the segmentation canvas:
The calculated insight must contain a dimension including the Individual or Unified Individual Id. A dimension is a field that can be used to categorize or group the data, such as name, gender, or location. The Individual or Unified Individual Id is a unique identifier for each individual profile in Data Cloud. The calculated insight must include this dimension to link the calculated value to the individual profile and to enable segmentation based on the individual profile attributes.
The primary key of the segmented table must be a dimension in the calculated insight. The primary key is a field that uniquely identifies each record in a table. The segmented table is the table that contains the data that is being segmented, such as the Customer or the Order table. The calculated insight must include the primary key of the segmented table as a dimension to ensure that the calculated value is associated with the correct record in the segmented table and to avoid duplication or inconsistency in the segmentation results.
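For example, a lifetime value insight satisfying both requirements could be defined along these lines (a hedged sketch: Calculated Insights are authored in SQL, but the object and field names here are hypothetical):

SELECT
    SUM(SalesOrder__dlm.amount__c) AS customer_ltv__c,               -- metric
    UnifiedIndividual__dlm.ssot__Id__c AS unified_individual_id__c   -- dimension with the Unified Individual Id
FROM SalesOrder__dlm
JOIN UnifiedIndividual__dlm
    ON SalesOrder__dlm.individual_id__c = UnifiedIndividual__dlm.ssot__Id__c
GROUP BY unified_individual_id__c

The Unified Individual Id dimension is what lets the segmentation canvas join the computed metric back to the profiles being segmented.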
References: Create a Calculated Insight, Use Insights in Data Cloud, Segmentation