
Ingest data using CRM source connectors

Learn how to batch ingest data from CRM sources into Adobe Experience Platform’s Real-Time Customer Profile and data lake. For more detailed product documentation, see customer relationship management (CRM) on the Source Connectors overview page.

Standard workflow

Learn how to configure the source connector for Salesforce CRM using the standard workflow. The standard workflow requires upfront creation of schemas and identity namespaces. Other CRM source connectors may only support the standard workflow.


Transcript

Hi there. I’m going to give you a quick overview of how to ingest data from your CRM systems into Adobe Experience Platform. Data ingestion is a fundamental step to getting your data into Experience Platform so you can use it to build 360-degree, real-time customer profiles and use them to provide meaningful experiences. Adobe Experience Platform allows data to be ingested from various external sources by giving you the ability to structure, label, and enhance incoming data using Platform services. You can ingest data from a wide variety of sources, such as Adobe applications, cloud-based storage, databases, and many others. Experience Platform provides tools to ensure that the ingested data is XDM compliant and helps prepare the data for Real-Time Customer Profile and other services. When you log in to Platform, you will see Sources in the left navigation. Clicking Sources takes you to the source catalog screen, where you can see all of the source connectors currently available in Platform. Note that there are source connectors for Adobe applications, CRM solutions, cloud storage providers, and more. Let’s explore how to ingest data from CRM systems into Experience Platform. Each source has its specific configuration details, but the general configuration for CRM source connectors is similar. For our video, let’s use the Salesforce CRM system. Select the desired source. When setting up a source connector for the very first time, you are provided with an option to configure it. For an already configured source connector, you are given an option to add data. Since this is our first time creating a Salesforce account, let’s click Create new account and provide the source connection details. Complete the required fields for account authentication, and then initiate a source connection request. If the connection is successful, click Next to proceed to data selection.
In this step, you can explore the list of accessible objects in Salesforce CRM. Let’s search for the loyalty object and quickly preview the object data before we continue. Let’s proceed to the next step to assign a target dataset for the incoming data. You can choose an existing dataset or create a new dataset. Let’s choose the new dataset option and provide a dataset name and description. To create a dataset, you need an associated schema. Using the schema finder, assign a schema to this dataset. Upon selecting a schema for the dataset, Experience Platform performs a mapping between the source file fields and the target fields. This mapping is based on the title and type of each field. The pre-mapping of standard fields is editable. You can quickly clear all mappings and add a custom mapping between a source field and a target field. To do so, choose an attribute from the source file and map it to a corresponding schema attribute. To select a source field, you can either use the dropdown option or type to find the field, and then map it to a target field. Just like we mapped the loyalty field, let’s map the CRM ID target field to our schema field. Similarly, you can complete the mapping for other fields. The Add calculated field option lets you run functions on source fields to prepare the data for ingestion. You can choose from a list of pre-defined functions that can be applied to your source fields. For example, we can combine the first name field and the last name field into a calculated field using the concatenation function before ingesting the data into a dataset field. Upon selecting a function, notice the function documentation on the right-hand side of the screen. You can also preview the sample result of a calculated field. Let’s save all the changes and leave the window. You can see the calculated field displayed as a source field. Now, let’s quickly map the calculated field to a schema target field.
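The calculated-field step described above can be sketched in code: a function such as concatenation derives a new column on each source record before the record is mapped to the target schema. This is a minimal illustration, not the actual Data Prep implementation; the function and field names here are assumed for the example.

```python
def concat(*parts, separator=" "):
    # Illustrative stand-in for a Data Prep concatenation function;
    # the real mapper offers a library of similar pre-defined functions.
    return separator.join(p for p in parts if p)

def apply_calculated_field(record, target_field, source_fields):
    # Derive a new column on the source record before mapping,
    # as the UI's "Add calculated field" step does.
    enriched = dict(record)
    enriched[target_field] = concat(*(record.get(f, "") for f in source_fields))
    return enriched

# Hypothetical source row from the Salesforce loyalty object.
row = {"first_name": "Ada", "last_name": "Lovelace", "crm_id": "003XX01"}
result = apply_calculated_field(row, "full_name", ["first_name", "last_name"])
print(result["full_name"])  # Ada Lovelace
```

The derived `full_name` column then appears alongside the original source fields and can be mapped to a schema target field like any other.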
After reviewing the field mapping, you can also preview data to see how the ingested data will be stored in your dataset. If the mapping looks good, let’s move to the next step. Scheduling lets you choose the frequency at which data should flow from the source to a dataset. Let’s select a frequency of 15 minutes for this video and set a start time for the dataflow. To allow historical data to be ingested, enable the Backfill option. Backfill is a Boolean value that determines what data is initially ingested. If backfill is enabled, all current files in the specified path will be ingested during the first scheduled ingestion. If backfill is disabled, only the files that are loaded in between the first run of ingestion and the start time will be ingested. Files loaded before the start time will not be ingested. Set Load incremental data by, assigning a field that helps us distinguish between new and existing data. Let’s move to the Dataflow step. Provide a name for your dataflow.
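The backfill behavior described above can be summarized in a short sketch. This is a simplified model of the selection logic, assuming file load timestamps are available; the exact boundary handling in the product may differ.

```python
from datetime import datetime

def select_files_for_first_run(files, start_time, backfill):
    """files: list of (path, load_time) tuples in the source location.
    With backfill enabled, everything currently in the path is picked up
    on the first scheduled run; with it disabled, only files loaded at or
    after the dataflow's start time are ingested (simplified model)."""
    if backfill:
        return [path for path, _ in files]
    return [path for path, loaded in files if loaded >= start_time]

start = datetime(2024, 5, 1, 9, 0)
files = [
    ("loyalty_april.csv", datetime(2024, 4, 10, 12, 0)),  # loaded before start
    ("loyalty_latest.csv", datetime(2024, 5, 1, 9, 30)),  # loaded after start
]
print(select_files_for_first_run(files, start, backfill=True))   # both files
print(select_files_for_first_run(files, start, backfill=False))  # only the new file
```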

In the Dataflow detail step, the partial ingestion toggle allows you to enable or disable the use of partial batch ingestion. The error threshold allows you to set the percentage of acceptable errors before the entire batch fails. By default, this value is set to 5%. Let’s review the source configuration details and then save the changes. We do not see any dataflow run statuses yet because we set a frequency of 15 minutes for our dataflow runs. So let’s wait for the dataflow to run. Let’s refresh the page, and you can now see that our dataflow run status is Completed. Open the dataflow run to view more details about the activity. Our last dataflow run completed successfully without any failed records. If there were any failed records, since we enabled error diagnostics for our dataflows, we would be able to view the error code and error description for the failed records. Experience Platform also lets users preview or download the error diagnostics to determine what went wrong with the failed records. Let’s go back to the Dataflow activity tab. At this point, we have verified that the dataflow completed successfully from the source to our dataset. Let’s open our dataset to verify the dataflow and activities. You can open the Luma customer loyalty dataset right from the dataflow window, or you can access it using the Datasets option from the left navigation. Under the dataset activity, you can see a quick summary of ingested batches and failed batches during a specific time window. Scroll down to view the ingested batch ID. Each batch represents actual data ingestion from a source connector to a target dataset. Let’s quickly preview the dataset to ensure that the data ingestion was successful and our calculated fields are populated. We now have the dataset populated with data from Salesforce CRM. Finally, let’s see how to enable this data for Real-Time Customer Profile.
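The error-threshold rule for partial batch ingestion is simple enough to express directly. The sketch below assumes the batch is accepted when the failure rate is at or below the threshold; whether the comparison is strict at exactly the threshold is an assumption, not confirmed product behavior.

```python
def batch_succeeds(total_records, failed_records, error_threshold_pct=5.0):
    # Partial batch ingestion accepts a batch as long as the share of
    # failed records stays at or below the error threshold (5% by default).
    if total_records == 0:
        return True
    error_pct = failed_records / total_records * 100
    return error_pct <= error_threshold_pct

print(batch_succeeds(1000, 30))  # True: 3% errors is under the 5% threshold
print(batch_succeeds(1000, 80))  # False: 8% errors fails the entire batch
```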
In Real-Time Customer Profile, you can see a holistic view of each customer that combines data from multiple channels, including online, offline, CRM, and third-party data. To enable our dataset for Real-Time Customer Profile, ensure that the associated schema is enabled for profile. Once the schema is enabled for profile, it cannot be disabled or deleted, and fields cannot be removed from the schema after this point. These implications are essential to keep in mind when working with data in your production environment. It is recommended to verify and test the data ingestion process to capture and address any issues that may arise before enabling the dataset and schema for profile. Now, let’s enable profile for our dataset and save all the changes. In the next successful batch run, the data ingested into our dataset will be used to create real-time customer profiles. Adobe Experience Platform allows data to be ingested from external sources by providing you with the ability to structure, label, and enhance incoming data using Platform services.
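Besides the UI toggle shown in the video, a dataset can also be tagged for Real-Time Customer Profile through the Catalog Service API. The sketch below only builds the request body; the tag name and value format follow published examples but should be verified against the current Catalog API reference, and the endpoint shown in the comment is indicative.

```python
def profile_enable_payload():
    # Sketch of the PATCH body that tags a dataset for Real-Time Customer
    # Profile via the Catalog Service API. The "unifiedProfile" tag and
    # "enabled:true" value follow public documentation examples; verify
    # against the current API reference before relying on them.
    return {"tags": {"unifiedProfile": ["enabled:true"]}}

# The request itself (not executed here) would look something like:
# PATCH https://platform.adobe.io/data/foundation/catalog/dataSets/{DATASET_ID}
# with this JSON body and the usual IMS authentication headers.
print(profile_enable_payload())
```

Remember that the same caveat applies through the API as in the UI: once the associated schema is enabled for profile, that choice cannot be reversed.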

Template workflow (Salesforce)

Learn how to configure the source connector for Salesforce CRM using the template workflow. This workflow auto-generates assets needed for ingesting Salesforce data based on templates. It saves you upfront time, and the assets can be customized according to your needs. This workflow is not supported for all CRM source connectors.


Transcript
In this video, I’ll show you how to use templates to auto-generate assets needed for ingesting Salesforce data into Experience Platform. These are the areas I’ll cover. Use the blue chapter markers below the video to advance to or replay these sections. Data ingestion is a fundamental step to getting data into Experience Platform so you can use it to build robust customer profiles and use them to provide meaningful experiences. Adobe Experience Platform allows you to ingest data from external sources. This data can be structured, labeled, and enhanced using Platform services. The focus of this video is ingesting data from Salesforce, a third-party CRM system. There are a couple of ways to do this, but I’ll demonstrate using templates that auto-generate the assets listed on this slide. There are other videos that explain schemas and identities, so if you’re unfamiliar with these topics, review those videos first. I also suggest you review the data ingestion overview video if you haven’t done this yet. Using templates, you reap several benefits, as shown here. In earlier versions of this data connector, getting to this step of ingestion, and thus to value, was very time-consuming. Schemas, identities, datasets, mapping rules, and dataflows had to be manually created. The template workflow does all of this for you, and you can even customize it afterwards. Let’s get to the demo. I’m logged into Experience Platform. I’ll display sources by selecting the navigation link from the left. Now I’ll select the CRM category to jump to those connectors. Selecting Add Data under Salesforce will kick off the workflow. There are two available paths here. I can accelerate data ingestion by using templates provided by the system to auto-generate all the assets required prior to ingestion. If I had already set up the schemas and identities for the Salesforce data I want to ingest, I’d use the second workflow. I’ll proceed with using templates.
Now, my organization has pre-existing authenticated Salesforce accounts. If this is a first-time setup, start with New Account. Here you’d provide the authentication credentials for your Salesforce account. The fields with an asterisk are required, and you’d then choose Connect to Source. I’ll go back to my existing account to show you the remaining workflow. Once I select the account name, I’m presented with a list of templates I can choose to generate my assets, based on the account database used with Salesforce. There are both B2B and B2C types. It looks like some templates for B2B have already been configured. Notice the check boxes for these data tables are grayed out. This means the assets associated with them have already been created in the system. You can open a preview to explore sample data for a template. This is helpful when you want to verify that you’re selecting the correct template for the Salesforce data table you plan to ingest. Don’t worry if this doesn’t map one-to-one with the data coming over. You can modify the mapping in the workflow if needed. I’ll show you this soon. I’m going to select additional templates for the other Salesforce tables I want to ingest data against. I’ll select opportunities and opportunity contact roles. The schedule for data ingestion is configurable. Set it to Once to create a one-time ingestion, or change the frequency to minute, hour, day, or week to reflect your needs for data availability. I’ll select Finish, and this is when the magic happens. The system is doing all the work to generate the assets to support this data ingestion, including schemas, identities, identity namespaces, datasets, and dataflows. This also includes the identity relationships across the multiple schemas. Now, this usually takes a minute or so to finish, so if you want to go off and do some other things and come back, feel free to do that. Once the assets are created, this review page displays.
This lists the dataflows, datasets, schemas, and identity namespaces created or reused for the Salesforce data tables selected in the previous step. Some of the assets are reused because they were created from previous template configurations, and those same assets are used for this new template configuration. One of the benefits I discussed earlier for using templates is the acceleration to ingestion and the amount of work and time that saves users. I’ll open one of the generated schemas in a new browser tab. This B2B person schema was generated by the system. You can see the breadth and depth of the hierarchies used here for the different fields, and this is only one of the many schemas involved in this data ingestion. Using the manual workflow in the data connector, these would have had to be created prior to setting up the dataflow, so that’s a lot of time and effort saved. Back in the review template assets screen, I want to point out identity namespaces. These have all been generated by the system as well. This is another task that would have had to be in place using the manual workflow. Not only that, but also the relationships of those identities across the schemas involved. If you recall, I selected the templates for opportunities and opportunity contact roles. Even though the schemas used by these datasets were already generated through a previous Salesforce template configuration, the datasets are net new. Just a quick high-level call-out: the datasets contain the data, whereas the schemas validate the format of the data to ensure you’re ingesting quality data into Experience Platform. Back to dataflows, there are two key features you can access from here. First, we’ll review preview mappings. I’ll select this for the first dataflow, and it opens in a modal window. This shows me the system-generated mappings for the template.
On the left side are the Salesforce fields, and on the right are the Experience Platform fields relative to the dataset and schema it’s targeting. Use this to review the field mappings. Changing any mappings, though, happens elsewhere; I’ll show you that next. As well as updating mappings, you can change the draft status for a workflow. If you recall, I mentioned earlier that you can make customizations to template-generated assets. With dataflows, you’ll want to make sure the mappings are validated before you set them to active, so that batch uploads coming from Salesforce don’t fail. I’ll select Update dataflow from the three-dot menu for the first dataflow. Once the dataflow opens, I’ll go to the dataflow detail screen to point out some updates you can make there. Enable partial ingestion by toggling the setting. This lets you configure an error threshold, expressed as the percentage of acceptable errors before the entire batch fails. Toward the bottom, you can configure alerts. Alerts allow you to receive notifications on the status of your sources dataflow. You can get updates when your dataflow has started, is successful, has failed, or didn’t ingest any data. If you want to save these changes and come back to finish the mapping validation, select Save as draft here; otherwise, select Next. On the mapping step, you can update the mappings between the source fields from Salesforce and the target fields in Experience Platform. At the top, it provides a high-level status for the number of mapped fields, required fields, and identity fields. Below that is the detail for the mappings. On the right side are the fields for Experience Platform, and the left has the Salesforce fields. Calculated fields are also automatically generated by the system, which is really helpful. When I select Next to go to the scheduling step, the mapping validation happens. Now there’s an issue with the mapping.
The currency ISO code from Salesforce doesn’t exist in the Experience Platform schema. If I needed this field from Salesforce, I’d go to the schema and add the field there. However, I’m going to go back and remove this field from my dataflow. I’ll click the remove icon to the right of the field; just to note, I could have done this on the mapping validation error page as well. Now when I select Next, I no longer have a validation error. On the scheduling step, you can modify the schedule for the batch ingestion using the same frequency and calendar options we reviewed when setting up the templates earlier. I’ll select Next up here at the top. Now that everything looks good, I’ll select Finish. This takes me back to all the Salesforce dataflows I’ve configured for the account. I’ve confirmed that the B2B opportunity dataflow has been updated from draft to active, so now I can start receiving data from Salesforce for this dataflow. You also have access to dataflow features using the three-dot menu. You can update the dataflow again, disable it if you no longer need the data, or view it in monitoring once you begin to receive data. You should now feel comfortable configuring dataflows using templates from the Salesforce data connector workflow in Experience Platform. Thanks and good luck!
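The validation step above amounts to checking that every mapped target field actually exists in the target schema. The sketch below models that check; the field names are illustrative, not the real XDM paths from the generated B2B schemas.

```python
def find_unmapped_targets(mappings, schema_fields):
    """mappings: {source_field: target_field}. Returns target fields that
    are mapped but missing from the target schema, mirroring the kind of
    validation error the UI raised for the unmatched currency ISO code."""
    return sorted(t for t in mappings.values() if t not in schema_fields)

# Hypothetical schema paths and Salesforce source fields.
schema = {"person.name.firstName", "person.name.lastName", "crmId"}
mappings = {
    "FirstName": "person.name.firstName",
    "LastName": "person.name.lastName",
    "CurrencyIsoCode": "currencyCode",  # not in the schema -> flagged
}
print(find_unmapped_targets(mappings, schema))  # ['currencyCode']
```

As in the video, the fix is either to add the missing field to the schema or to drop that mapping from the dataflow.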

