Salesforce Einstein Analytics Data Transformation – computeRelative
What is the core value of Einstein Analytics?
When Salesforce first launched Einstein Analytics, there were a lot of questions about the business value the product could bring beyond what was already offered for free with Salesforce’s core Operational Reporting functionality. Why would someone pay another monthly license cost if there was a free alternative that offered similar functionality? Though at face value this is a fair question, a closer look shows that the two tools not only have different functionality, but also address different use cases in terms of actual business value.
While Salesforce's core Operational Reporting is best suited to displaying static lists of metrics confined to one or two objects, Einstein Analytics lets users go deeper into their data with the data-heavy features traditional BI tools generally offer. Einstein Analytics does have some “front-end” friendly differentiators, such as an improved UI with more configuration and visualization options, but the key differentiator is that Einstein Analytics is its own BI platform: it expands data capacity, allows data transformations, and enables integrations and connectors that combine external data sources with local Salesforce data.
Over the next few months, we will be exploring these key differentiators to provide some additional education and insights around why companies should invest in Einstein Analytics and to demystify some of the “how to” by exploring actual use-cases for each functionality.
What is Data Transformation?
Today, we will be looking at the different Data Transformation features within Einstein Analytics. From there, we will analyze and present examples of each data transformation with its respective use case. Data Transformation is the modification and manipulation of data through the Dataflow Editor in Einstein Analytics.
In other words, Data Transformation is the translation of data extracted from a Salesforce object or other data source into our own defined format. To achieve this, we use different transformations such as sfdcDigest, edgemart, augment, computeExpression, computeRelative, flatten, and sfdcRegister. We will cover each of these in further detail in upcoming blogs, but today we want to focus on one of the more complex data transformations available today: computeRelative.
computeRelative: What is it and how can it be used?
computeRelative: We can use this transformation to analyze trends in our data by creating derived fields whose values are calculated from other rows in the same dataset. Because the comparison happens directly in the dataflow, this transformation makes it easy to surface insights and row-to-row comparisons that support complex business decisions.
There are several use cases where we can use this transformation, such as:
- To get the age of a pipeline opportunity
- To track how an opportunity’s revenue has changed since it was created
- To create a computed field that flags duplicate records based on custom logic
- To get the latest or earliest value from multiple child records related to a parent object record
To add this transformation to our dataflow, we need to define several parameters:
- Partition By: The field by which the data is partitioned. Rows that share the same value of this field are grouped together, so the computation runs separately within each partition, that is, per unique value of the partitioning field.
For example, if we are working with Work Order and Service Appointment records, we would partition our Service Appointment records by Work Order:
- Order By: The field by which we want to sort the rows within each partition, in either ascending or descending order.
- Compute/Derived Field: There are two ways to create a derived field (a sketch of a full computeRelative node follows this list):
1. SAQL expression - we can write a SAQL expression that calculates the value of the derived field.
2. Source field - we can reference an existing field and pull its value from another row of the partition (via an offset such as first, previous, next, or current) into the derived field.
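Behind the Dataflow Editor, each of these settings ends up as a parameter on a computeRelative node in the dataflow’s JSON definition. The skeleton below is only a rough sketch with placeholder names (the node, source, and field names are hypothetical, and the exact attributes may vary slightly by release), but it shows where Partition By, Order By, and the two kinds of derived fields fit:

```json
"myComputeRelativeNode": {
  "action": "computeRelative",
  "parameters": {
    "source": "nameOfUpstreamNode",
    "partitionBy": ["FieldToPartitionBy"],
    "orderBy": [
      { "name": "FieldToSortBy", "direction": "asc" }
    ],
    "computedFields": [
      {
        "name": "DerivedFieldFromSaql",
        "expression": {
          "saqlExpression": "current(Amount) - previous(Amount)",
          "type": "Numeric",
          "scale": 2,
          "default": "0"
        }
      },
      {
        "name": "DerivedFieldFromSource",
        "expression": {
          "sourceField": "ExistingField",
          "offset": "previous()",
          "default": "current()"
        }
      }
    ]
  }
}
```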
Below is an example where computeRelative is used. In the table below, we want to see the latest date from the “ActualEndDate” field in order to calculate the time frame of the parent object record.
As we can see in the figure, this dataset is partitioned by “Parent Record ID” and then ordered by date in descending order, so that the latest date is the first row within each partition. In the second image, the computed field uses the “first” offset function, which returns the first value of the sorted partition, in this case the latest date, for every row.
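In dataflow JSON terms, the node described in those images might look roughly like the sketch below. The node, source, and field names (“getLastEndDate”, “extract_ChildRecords”, “ParentRecordId”, “LastEndDate”) are assumptions standing in for the screenshots; sorting “ActualEndDate” in descending order puts the latest date in the first row of each partition, which the first() offset then copies onto every row:

```json
"getLastEndDate": {
  "action": "computeRelative",
  "parameters": {
    "source": "extract_ChildRecords",
    "partitionBy": ["ParentRecordId"],
    "orderBy": [
      { "name": "ActualEndDate", "direction": "desc" }
    ],
    "computedFields": [
      {
        "name": "LastEndDate",
        "expression": {
          "sourceField": "ActualEndDate",
          "offset": "first()",
          "default": "current()"
        }
      }
    ]
  }
}
```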
So, after using this computeRelative Transformation, the output will look like this:
Now, in the above use case, we want to remove the duplicate parent records so that we have only one row per parent, since our ultimate goal is to get a time frame for each parent record. We can achieve this by using computeRelative with a SAQL expression. Below you will find an image of what this looks like.
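As a rough illustration of that approach (the “Duplicate” field name comes from the filter step below, while the exact expression is an assumption rather than a copy of the original screenshot), a SAQL-based derived field added to the same computeRelative node could flag the first row of each partition as “Keep” and every other row as “Remove”:

```json
{
  "name": "Duplicate",
  "expression": {
    "saqlExpression": "case when current(Id) == first(Id) then \"Keep\" else \"Remove\" end",
    "type": "Text"
  }
}
```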
So, the final output of this table will look like this:
We can achieve the rest of the data transformation by using the computeExpression and filter transformations (a sketch of these two nodes follows the list):
1. Using a computeExpression transformation, we can create a numeric field that calculates the number of days between the “CreatedDate” and “LastEndDate” fields with the daysBetween function.
2. Then, with the filter transformation, we can filter on the Duplicate field so that only rows with the value “Keep” remain.
3. The end result will look like the following:
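As a loose sketch of steps 1 and 2 above (the node names, the *_sec_epoch date fields, and the precision settings are assumptions to verify against your own dataset), the computeExpression and filter nodes might look like this:

```json
"calcDaysToComplete": {
  "action": "computeExpression",
  "parameters": {
    "source": "getLastEndDate",
    "mergeWithSource": true,
    "computedFields": [
      {
        "name": "DaysToComplete",
        "type": "Numeric",
        "precision": 10,
        "scale": 0,
        "saqlExpression": "daysBetween(toDate(CreatedDate_sec_epoch), toDate(LastEndDate_sec_epoch))"
      }
    ]
  }
},
"keepOneRowPerParent": {
  "action": "filter",
  "parameters": {
    "source": "calcDaysToComplete",
    "filter": "Duplicate:EQ:Keep"
  }
}
```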
As you can see, by using this transformation we were able to derive a trend from our data and compare values across rows, giving us a much better understanding of the data.