A Prism data administrator is ready to create a Prism data source. As data is updated in Prism, the goal is to update the data in the Prism data source concurrently, enabling immediate incremental updates. How should the administrator create the Prism data source?
A. Create a table and select the Enable for Analysis checkbox.
B. Create a table and select Publish.
C. Publish a derived dataset with the Prism: Default to Dataset Access Domain.
D. Set Data Source Security on a derived dataset and select Publish.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, creating a Prism data source that supports immediate incremental updates as data is updated in Prism requires a specific configuration. According to the official Workday Prism Analytics study path documents, the administrator should create a table and select the Enable for Analysis checkbox (option A). The "Enable for Analysis" option, when selected during table creation, allows the table to be used directly as a Prism data source with real-time updates. This setting ensures that as data in the table is updated (e.g., through a Data Change task), the changes are immediately reflected in the Prism data source, enabling incremental updates without the need for republishing. This is particularly useful for scenarios requiring near-real-time data availability in reporting or analytics.
The other options do not achieve the goal of immediate incremental updates:
B. Create a table and select Publish: Publishing a table creates a static Prism data source, but updates to the table require republishing, which does not support immediate incremental updates.
C. Publish a derived dataset with the Prism: Default to Dataset Access Domain: Publishing a derived dataset creates a data source, but updates to the underlying data require republishing the dataset, which is not concurrent or incremental.
D. Set Data Source Security on a derived dataset and select Publish: Setting security and publishing a derived dataset follows the same process as option C, requiring republishing for updates, which does not meet the requirement for immediate updates.
Selecting the "Enable for Analysis" checkbox when creating a table ensures the Prism data source supports concurrent, incremental updates as data changes in Prism.
You created a derived dataset that imports data from a table, which will become your Stage 1. What can you add to this dataset?
A. As many transformation stages of any type as your scenario requires.
B. As many transformation stages of any type as long as they are in a particular order.
C. Up to five transformation stages.
D. Up to two Manage Fields transformation stages.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a derived dataset (DDS) allows users to transform data by adding various transformation stages after the initial import stage (Stage 1). According to the official Workday Prism Analytics study path documents, you can add as many transformation stages of any type as your scenario requires (option A). Prism Analytics supports a variety of transformation stages, such as Join, Union, Filter, Manage Fields, and Calculate Field, among others. There are no strict limits on the number of stages or their types, and they can be added in any order that makes sense for the data transformation logic, as long as the stages are configured correctly to produce the desired output. This flexibility allows users to build complex transformation pipelines tailored to their specific use case.
The other options are incorrect:
B. As many transformation stages of any type as long as they are in a particular order: While the order of stages matters for the transformation logic (e.g., a Filter before a Join), there is no predefined order requirement for all stages; the order depends on the scenario.
C. Up to five transformation stages: There is no limit of five transformation stages in Prism Analytics; you can add more as needed.
D. Up to two Manage Fields transformation stages: There is no restriction to only two Manage Fields stages; you can add as many as required.
The ability to add as many transformation stages as needed provides maximum flexibility in shaping the data within a derived dataset.
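Since Prism is configured through a GUI, the following Python sketch is only an analogy: it models a derived dataset as an ordered chain of arbitrarily many transformation stages applied to the imported rows (Stage 1). The stage names mirror Prism stage types; the data and field names are hypothetical.

```python
rows = [  # Stage 1: imported data (hypothetical)
    {"employee": "Ana",  "region": "EMEA", "salary": 50000},
    {"employee": "Ben",  "region": "NA",   "salary": 60000},
    {"employee": "Caro", "region": "EMEA", "salary": 55000},
]

def filter_stage(data):            # Filter stage: keep EMEA rows only
    return [r for r in data if r["region"] == "EMEA"]

def calculated_field_stage(data):  # Calculated Field stage: add a bonus field
    return [{**r, "bonus": r["salary"] * 0.1} for r in data]

def manage_fields_stage(data):     # Manage Fields stage: drop the salary field
    return [{k: v for k, v in r.items() if k != "salary"} for r in data]

# Stages run in the order you add them; add as many as the scenario needs.
pipeline = [filter_stage, calculated_field_stage, manage_fields_stage]
result = rows
for stage in pipeline:
    result = stage(result)
```

The point of the sketch is that the pipeline is just an ordered list: there is no fixed cap on stage count or stage type, only the order you choose for your scenario.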
The final derived dataset in a Prism pipeline is complete and ready to publish. What should be done prior to publishing?
A. Add a Group By stage to the final derived dataset to add summary calculations.
B. Create a table without the Enable for Analysis checkbox selected.
C. Create a derived dataset with the PDS suffix.
D. Edit the Dataset API Name to reflect in the name of the Prism data source.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, before publishing a derived dataset as a Prism data source (PDS), it’s important to ensure that the dataset is properly configured for downstream use. According to the official Workday Prism Analytics study path documents, one key step to take prior to publishing is to edit the Dataset API Name so that it is reflected in the name of the Prism data source (option D). The Dataset API Name determines the name of the published Prism data source, which is used in reporting, discovery boards, and integrations. Setting a meaningful and descriptive API name (e.g., "Expense_Reports_by_Location") ensures that the data source is easily identifiable and aligns with naming conventions, improving usability and manageability in the Workday ecosystem. This step is a best practice to avoid confusion and ensure clarity for report writers and analysts.
The other options are not required or relevant:
A. Add a Group By stage to the final derived dataset to add summary calculations: Adding a Group By stage is not mandatory unless the use case specifically requires summarizations, which is not indicated here.
B. Create a table without the Enable for Analysis checkbox selected: Creating a new table is unnecessary, as the dataset is already complete, and the "Enable for Analysis" checkbox is relevant for real-time updates, not a requirement for publishing a derived dataset.
C. Create a derived dataset with the PDS suffix: Creating a new dataset is not needed, as the final derived dataset is already prepared, and adding a "PDS" suffix is not a required step for publishing.
Editing the Dataset API Name ensures the Prism data source has a clear and meaningful name, facilitating its use in reporting and analytics.
When should a Prism configurator leverage advanced filter logic over basic filter logic?
A. The filter needs to remove NULL values.
B. The filter needs to use operators such as "equal to" or "not equal to".
C. The filter needs to leverage operators such as "greater than or equal to" or "less than or equal to".
D. The filter needs a combination of AND/OR operators.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, filters in a derived dataset can be applied using either basic (Simple) or advanced filter logic. According to the official Workday Prism Analytics study path documents, a Prism configurator should leverage advanced filter logic over basic filter logic when the filter needs a combination of AND/OR operators (option D). Basic filter logic (Simple Filter) allows for a list of conditions with a single operator ("If All" for AND, "If Any" for OR), but it cannot handle nested or mixed logical expressions (e.g., Condition1 AND (Condition2 OR Condition3)). Advanced filter logic, on the other hand, supports complex expressions with combinations of AND and OR operators, enabling more sophisticated filtering scenarios.
The other options do not necessitate advanced filter logic:
A. The filter needs to remove NULL values: Removing NULL values (e.g., using ISNOTNULL(field)) can be done with a Simple Filter using a single condition, so advanced logic is not required.
B. The filter needs to use operators such as "equal to" or "not equal to": These operators are supported in Simple Filters, so advanced logic is not necessary.
C. The filter needs to leverage operators such as "greater than or equal to" or "less than or equal to": These comparison operators are also supported in Simple Filters, making advanced logic unnecessary for this purpose.
Advanced filter logic is specifically required when combining AND and OR operators to create complex filtering conditions, providing the flexibility needed for such scenarios.
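The distinction can be illustrated outside of Prism with a small Python sketch (field names and data are hypothetical): basic filter logic applies one operator across a flat list of conditions, while advanced logic allows grouped, mixed AND/OR expressions.

```python
rows = [
    {"status": "Active", "region": "EMEA", "hours": 20},
    {"status": "Active", "region": "NA",   "hours": 45},
    {"status": "Closed", "region": "EMEA", "hours": 50},
]

# Basic (simple) filter logic: one operator across all conditions,
# i.e. "If All" (AND) or "If Any" (OR) -- a flat list, no grouping.
if_all = [r for r in rows if r["status"] == "Active" and r["hours"] > 40]

# Advanced filter logic: mixed AND/OR with explicit grouping, e.g.
# status = Active AND (region = EMEA OR hours > 40).
advanced = [
    r for r in rows
    if r["status"] == "Active" and (r["region"] == "EMEA" or r["hours"] > 40)
]
```

The `advanced` expression cannot be flattened into a single all-AND or all-OR list without changing its meaning, which is exactly why such conditions require advanced filter logic.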
What is a feature of using an sFTP connection on a data change task?
A. You can copy sFTP connections.
B. You can reuse an sFTP connection in multiple data change tasks.
C. You can import an XLSX file from an sFTP server.
D. You can select multiple target tables in the data change task.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a secure File Transfer Protocol (sFTP) connection can be configured as a source for a Data Change task to import data into a table. According to the official Workday Prism Analytics study path documents, a key feature of using an sFTP connection is that it can be reused across multiple Data Change tasks. Once an sFTP connection is defined in the Prism Analytics environment, it is stored and can be selected as the source connection for different Data Change tasks, promoting efficiency and consistency in data ingestion workflows. This reusability reduces the need to redefine connection parameters for each task, streamlining the configuration process.
The other options are not accurate:
A. You can copy sFTP connections: While connections can be managed, there is no specific feature in Prism Analytics to "copy" sFTP connections as a distinct action.
C. You can import an XLSX file from an sFTP server: While sFTP connections support various file formats (e.g., CSV), the ability to import XLSX files is not guaranteed and depends on the system’s configuration, making this option less definitive.
D. You can select multiple target tables in the data change task: A Data Change task is designed to load data into a single target table, not multiple tables simultaneously, regardless of the connection type.
The ability to reuse an sFTP connection across multiple Data Change tasks is a core feature that enhances the flexibility and scalability of data import processes in Prism Analytics.
A custom report uses your recently published Prism data source, but you noticed a minor error in the published data. You need to delete the published rows to fix it. What happens to your custom report?
A. The report definition remains intact and will work after republishing.
B. The report definition will need to be manually recreated.
C. The report definition will be copied and a new version will appear after republishing.
D. The report definition will need to be edited to reflect changes.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, deleting published rows from a Prism data source (PDS) is a step taken to correct errors in the published data, often followed by republishing the dataset with corrected data. According to the official Workday Prism Analytics study path documents, when you delete the published rows, the report definition remains intact and will work after republishing (option A). The custom report’s definition, which is based on the Prism data source, is not affected by the deletion of published rows because the report definition references the data source’s structure (e.g., fields and metadata), not the specific data rows. Once the dataset is republished with the corrected data, the report will automatically reflect the updated data without requiring any changes to the report definition, assuming the structure of the data source remains the same.
The other options are incorrect:
B. The report definition will need to be manually recreated: The report definition is not deleted or invalidated by deleting published rows, so recreation is not necessary.
C. The report definition will be copied and a new version will appear after republishing: Workday does not automatically copy or version report definitions when a data source is republished.
D. The report definition will need to be edited to reflect changes: No edits are required unless the structure of the data source (e.g., field names or types) changes, which is not indicated in this scenario.
The report definition’s integrity is maintained, and it will function as expected after republishing the corrected data.
A Prism data writer needs to create a new Prism calculated field on a derived dataset using the CASE function. When creating a calculated field, what symbol do you use to view a list of fields that you can select from in the dataset?
A. [
B. (
C. #
D. {
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, when creating a calculated field in a derived dataset, users often need to reference existing fields in the dataset within their expressions, such as in a CASE function. According to the official Workday Prism Analytics study path documents, to view and select from a list of available fields in the dataset while building a calculated field expression, the user types the [ symbol (left square bracket). This symbol triggers a dropdown list of all fields in the dataset, allowing the user to select the desired field without manually typing its name, reducing the risk of errors. For example, typing [ and selecting a field like "Employee_ID" will insert [Employee_ID] into the expression, which can then be used in the CASE function logic.
The other symbols do not serve this purpose:
B. (: Parentheses are used for grouping expressions or defining function parameters, not for field selection.
C. #: The hash symbol is not used in Prism Analytics for field selection; it may be associated with other functionalities in different contexts.
D. {: Curly braces are not used for field selection in Prism Analytics; they may be used in other systems for different purposes, such as templating.
The use of the [ symbol ensures an efficient and accurate way to reference fields in a calculated field expression, streamlining the creation process in Prism Analytics.
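The bracketed field references and the CASE function belong to Prism's own expression editor; as a language-neutral analogy, the branching a CASE expression performs can be sketched in Python (the field name `Years_Of_Service` and the category labels here are hypothetical).

```python
# Python analogue of CASE-style branching in a calculated field.
# In Prism itself, typing "[" in the expression editor lists dataset
# fields, inserting references such as [Years_Of_Service].

def tenure_band(row):
    # Roughly: CASE WHEN [Years_Of_Service] >= 10 THEN 'Veteran'
    #               WHEN [Years_Of_Service] >= 3  THEN 'Established'
    #               ELSE 'New' END
    years = row["Years_Of_Service"]
    if years >= 10:
        return "Veteran"
    elif years >= 3:
        return "Established"
    return "New"

rows = [{"Years_Of_Service": 12}, {"Years_Of_Service": 4}, {"Years_Of_Service": 1}]
bands = [tenure_band(r) for r in rows]
```

Selecting field names from the bracket-triggered dropdown rather than typing them matters most in expressions like this, where a misspelled field reference would invalidate the whole CASE logic.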
When using a window function to calculate averages in Prism, what field type must the function operate on?
A. Text
B. Boolean
C. Numeric
D. Date
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, window functions are used to perform calculations across a set of rows, such as calculating averages with a function like AVG. According to the official Workday Prism Analytics study path documents, the AVG window function, which computes the average, must operate on a field of type Numeric. This is because averaging requires numerical values to perform arithmetic operations (e.g., summing the values and dividing by the count of rows). Non-numeric field types, such as Text or Date, cannot be averaged, and Boolean fields (true/false) are not suitable for this type of calculation. For example, a window function like AVG(salary) OVER (PARTITION BY department) would calculate the average salary per department, where "salary" must be a Numeric field.
The other options are incorrect:
A. Text: Text fields cannot be used for arithmetic operations like averaging.
B. Boolean: Boolean fields (true/false) are not suitable for calculating averages.
D. Date: Date fields cannot be directly averaged; they require conversion to a numeric representation (e.g., days since a reference date) first.
The requirement for a Numeric field type ensures that the AVG window function can perform the necessary mathematical computations accurately.
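What AVG(salary) OVER (PARTITION BY department) computes can be mimicked in plain Python (data and field names are hypothetical): unlike a Group By, a window function keeps every input row and attaches the partition average to each one.

```python
from collections import defaultdict

rows = [
    {"employee": "Ana",  "department": "Sales", "salary": 50000},
    {"employee": "Ben",  "department": "Sales", "salary": 70000},
    {"employee": "Caro", "department": "IT",    "salary": 80000},
]

totals = defaultdict(lambda: [0, 0])  # department -> [sum, count]
for r in rows:
    totals[r["department"]][0] += r["salary"]   # salary must be Numeric
    totals[r["department"]][1] += 1

# Every row is preserved; each gains the average for its partition.
windowed = [
    {**r, "avg_dept_salary": totals[r["department"]][0] / totals[r["department"]][1]}
    for r in rows
]
```

The sum and division steps make the Numeric requirement concrete: neither operation is defined for Text, Boolean, or Date values.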
While viewing your lineage, you realize you have forgotten to add a description to some of your derived datasets. From the lineage, you double-click on a dataset to view the dataset details. What is the next step to add the missing descriptions?
A. Select the pencil icon next to the dataset name and Edit Transformations.
B. Select the pencil icon next to the Import stage to update the description.
C. Select Related Actions next to the dataset name and Edit Transformations.
D. Select Add Field from the dataset details to create a description.
To add or update the description of a derived dataset in Workday Prism Analytics, you should access the Edit Dataset Transformations task. This can be done by selecting the Related Actions next to the dataset name and choosing Edit Transformations. This method allows you to modify various aspects of the dataset, including its description.
This process is outlined in the Workday Prism Analytics User Guide, which states:
"If you have permission to edit a dataset, you can access the Edit Dataset Transformations task using these methods:
• Right-click the dataset name on the Data Catalog report and select Edit Transformations.
• Select Edit Transformations from the Quick Actions on the View Dataset Details report.
• Access the Edit Dataset task and select the dataset name that you want to edit."
Once in the Edit Dataset Transformations task, you can update the dataset's description by clicking on the configuration icon (often represented as a gear or pencil icon) and editing the description field.
You explode the Language Skills multi-instance field on your derived dataset and you want to change the business object that the new Language Skills Exploded instance field is mapped to. What steps should you take?
A. Select from the list of suggested BO values in the Explode stage configuration.
B. Click on the Related Actions next to the business object in the insight panel.
C. Add a Manage Fields before the Explode stage and modify the business object.
D. Add a Manage Fields after the Explode stage and modify the business object.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, the Explode stage transforms a multi-instance field (e.g., Language Skills) into multiple rows, creating a new single-instance field (e.g., Language Skills Exploded). The resulting field inherits the business object (BO) mapping from the original multi-instance field, but this mapping can be modified if needed. According to the official Workday Prism Analytics study path documents, to change the business object that the new Language Skills Exploded instance field is mapped to, you should add a Manage Fields stage after the Explode stage and modify the business object (option D).
The Manage Fields stage allows you to edit field properties, including the business object mapping, for the exploded field. After the Explode stage creates the new single-instance field, the Manage Fields stage can be used to reassign the business object by selecting a different Workday business object (e.g., changing from a generic object to a specific one like "Language"). This step ensures the field is mapped correctly for downstream reporting or integration with Workday reports.
The other options are incorrect:
A. Select from the list of suggested BO values in the Explode stage configuration: The Explode stage does not provide an option to modify business object mappings during its configuration; it focuses on exploding the multi-instance field.
B. Click on the Related Actions next to the business object in the insight panel: The insight panel provides metadata insights but does not allow direct modification of business object mappings for fields.
C. Add a Manage Fields before the Explode stage and modify the business object: Modifying the business object before the Explode stage affects the original multi-instance field, but the Explode stage will still create the new field with the inherited mapping, so this does not achieve the goal.
Adding a Manage Fields stage after the Explode stage is the correct approach to modify the business object mapping of the new exploded field.
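The row-level effect of an Explode stage can be sketched in Python (employee names and languages are hypothetical): each value of the multi-instance field becomes its own row, carried in a new single-instance field.

```python
# Analogy for the Explode stage: the multi-instance field Language_Skills
# is fanned out so each value gets its own row in a new single-instance
# field, Language_Skills_Exploded.
rows = [
    {"employee": "Ana", "Language_Skills": ["English", "Spanish"]},
    {"employee": "Ben", "Language_Skills": ["French"]},
]

exploded = [
    {"employee": r["employee"], "Language_Skills_Exploded": skill}
    for r in rows
    for skill in r["Language_Skills"]
]
```

Because the new field only comes into existence at this step, any change to its properties, such as its business object mapping, has to happen in a stage placed after the explode, which is why the Manage Fields stage goes downstream.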
You are adding a Join stage and choose Join type of Left Outer Join, causing Workday to search for a matching row in the imported pipeline. What happens if no matching rows exist?
A. A duplicate row will be generated.
B. The row will be omitted.
C. Included fields from the imported pipeline will have NULL values.
D. Included fields from both pipelines will have NULL values.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, a Left Outer Join in a Join stage includes all rows from the primary pipeline (the left pipeline) and attempts to match them with rows from the imported pipeline (the right pipeline) based on the join condition. According to the official Workday Prism Analytics study path documents, if no matching rows exist in the imported pipeline for a given row in the primary pipeline, the row from the primary pipeline is still included in the output, but the fields from the imported pipeline will have NULL values. This behavior ensures that all data from the primary pipeline is retained, while the absence of a match in the imported pipeline is represented by NULLs for the corresponding fields.
The other options are incorrect:
A. A duplicate row will be generated: A Left Outer Join does not generate duplicate rows; duplicates would occur only if multiple matches exist in the imported pipeline, which is not the case here.
B. The row will be omitted: In a Left Outer Join, rows from the primary pipeline are never omitted, even if no match is found; this behavior is specific to an Inner Join.
D. Included fields from both pipelines will have NULL values: Only the fields from the imported pipeline will have NULL values; the fields from the primary pipeline retain their original values.
This behavior of Left Outer Join ensures that all records from the primary pipeline are preserved, with NULLs indicating the absence of matching data from the imported pipeline.
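The same semantics can be demonstrated with a minimal Python sketch (keys and field names are hypothetical): every primary-pipeline row survives, and a missing match fills the imported fields with None, the analogue of Prism's NULL.

```python
# Illustrative left outer join: all rows of the primary (left) pipeline
# are kept; when no match exists in the imported (right) pipeline, the
# imported fields come back as None (displayed as NULL in Prism).
primary = [
    {"emp_id": 1, "name": "Ana"},
    {"emp_id": 2, "name": "Ben"},
]
imported = [
    {"emp_id": 1, "cost_center": "CC-100"},
]

imported_by_key = {r["emp_id"]: r for r in imported}

joined = []
for left in primary:
    match = imported_by_key.get(left["emp_id"])
    joined.append({
        **left,
        "cost_center": match["cost_center"] if match else None,  # NULL on no match
    })
```

Note that Ben's row is neither dropped (that would be inner-join behavior) nor duplicated; only his `cost_center` field is NULL, while his primary-pipeline fields keep their values.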
A Prism administrator wants to hide a field that contains employee salary information but still allow the Prism data writers to view average salaries for employees by cost center. What is the reason for hiding this field?
A. To protect sensitive data.
B. To hide Prism-calculated fields used for interim processing.
C. To hide unpopulated or sparse data fields.
D. To use computed values instead of base values.
Comprehensive and Detailed Explanation From Exact Extract:
In Workday Prism Analytics, hiding a field is a common practice to control access to sensitive information while still allowing necessary analytics to be performed. According to the official Workday Prism Analytics study path documents, the primary reason for hiding a field like employee salary information is to protect sensitive data. Employee salary is considered personally identifiable information (PII) or sensitive data, and hiding the field ensures that individual salary details are not exposed to unauthorized users or in published data sources. However, by hiding the field, Prism data writers can still use it in calculations—such as computing the average salary by cost center—because hidden fields remain accessible for transformation and aggregation purposes within the dataset but are not visible in the final output or to end users of the published data source.
The other options do not align with the scenario:
B. To hide Prism-calculated fields used for interim processing: The salary field is a base field, not a calculated field used for interim processing, so this reason does not apply.
C. To hide unpopulated or sparse data fields: There is no indication that the salary field is unpopulated or sparse; the concern is about its sensitivity, not its data quality.
D. To use computed values instead of base values: Hiding the field does not inherently involve replacing it with computed values; the goal is to restrict visibility while still allowing computations like averages.
Hiding the salary field protects sensitive data while enabling aggregated analytics, aligning with Prism’s security and governance capabilities.
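The governance pattern described above can be sketched in Python (data and field names are hypothetical): the sensitive salary field feeds the aggregation internally but is excluded from the published output that end users see.

```python
rows = [
    {"employee": "Ana",  "cost_center": "CC-100", "salary": 50000},
    {"employee": "Ben",  "cost_center": "CC-100", "salary": 70000},
    {"employee": "Caro", "cost_center": "CC-200", "salary": 80000},
]

sums, counts = {}, {}
for r in rows:
    cc = r["cost_center"]
    sums[cc] = sums.get(cc, 0) + r["salary"]   # hidden field still usable here
    counts[cc] = counts.get(cc, 0) + 1

# Published output: averages by cost center only; no individual salary
# values are exposed, mirroring the effect of hiding the salary field.
published = [
    {"cost_center": cc, "avg_salary": sums[cc] / counts[cc]}
    for cc in sums
]
```

The raw salaries exist only in the intermediate step; the published rows carry just the cost center and its average, which is the outcome the administrator wants.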