
Get started for administrators and developers

This tutorial gives an overview of the key administrative and data management functionality of Campaign v8. It is for administrators and technical marketers migrating from Campaign Standard to Campaign v8.

Understand the Campaign v8 Architecture

See Get Started with Campaign architecture to familiarize yourself with the Campaign architecture before you start structuring and organizing your instance.

Install the client console

The main administration and configuration tasks are performed in the client console. The first step is to set up your environment. The following video explains how to download and install the Adobe Campaign Client Console and manage your connection to your instance.


Transcript
In this video, you'll learn how to download and install the Campaign Client Console, create and manage your connections to multiple environments, and verify access to the Campaign Client Console. The Campaign Client Console is a lightweight native application. We are not limited to a single Client Console installation, meaning we can install it on both our laptop and our desktop. The Client Console uses web services APIs to connect to the Campaign application server. The application server executes the workflows and renders the dynamic content and deliveries. It also accesses the database, which contains the data mart, the campaign configurations, and the system tables. Within the Client Console, we can manage our connections to multiple environments. In a typical deployment, we may have a development environment, a staging environment, and a production environment. From the Client Console, we can simply add a connection for each environment that we're connecting to. We can connect to multiple environments simultaneously by opening multiple sessions of the Client Console. In a moment, we will talk through downloading, installing, and configuring the Campaign Client Console. To complete these steps, the following prerequisites are required. First, we need a supported Windows operating system. To get the most up-to-date list of supported Windows operating systems, consult the Adobe Campaign Compatibility Matrix. This can be found in Campaign's online documentation. We'll also need access to a browser connected to the internet. Additionally, we will need the URL used to access the setupClient.exe for our Client Console, and the login credentials provided to us for our instance of Campaign. Our URL and login credentials can be obtained from an administrator. To download the Campaign Client installer, open a browser and enter the URL you were provided to access the setupClient.exe. Then, log in using the credentials you were provided. Note that the login username and password are both case sensitive. Once you log in, a list of supported operating systems is provided, as well as the required disk space to install the Client Console. Make sure these requirements are met before proceeding. Next, download the executable by selecting the download link. Once the download has finished, navigate to the location where the file was saved, typically the Downloads folder. Once you find the setupClient.exe, double-click it to run the installer. The setup wizard opens. Select Next to proceed. We are asked to choose an installation folder for the Client Console. I'm just going to keep the default location, but if you want to change it, you can select the folder icon and choose another location. Select Next to continue. The Customize Installation step appears. We do not need to change anything here, so let's just select Next to continue. The Program Group page appears. We can choose to only install the client for our own user, or keep the default to make it accessible for all users. I'm going to keep the default. Select Next and then Finish, and the installation will begin automatically. Once the installation is done, select Finish to complete. Now let's open the Client Console. Start by opening the Windows Start menu. A Campaign Client folder should be available. Within that folder, select the Adobe Campaign Client Console. Once we've opened the Client Console, we need to configure our links to the different instances of Campaign. In the top right of the Client Console is a link.
We want to select this link to open the Connection dialog. The Connection dialog is where we configure the server connection settings to all our Campaign instances. Select Add in the top left of the dialog, then select Connection from the dropdown and set the name for the Campaign instance. Once you've finished naming the connection, hit Tab or Enter. Next, in the Connection section, enter the URL for your Campaign instance. This is essentially the URL we used earlier that points to our Campaign instance, but without the last part, /nl/jsp/install.jsp. Once complete, select OK. Next, we need to add our login credentials again. These are the same credentials used earlier to download the installer, and we can also select the Remember Password checkbox. For security reasons, only use this option if you're the only one accessing this computer. Select Login to continue. Once we log in, we'll see the homepage. This means we have successfully installed and set up the Campaign Client Console. But we still don't know how to manage multiple connections, so let's go back to the Login dialog. To go back to the Login page, let's first disconnect. Select the Disconnect option in the top left to disconnect. Next, select the Server link once more and, in the Connections dialog, select Add, but this time select Folder from the dropdown. Provide a name for the folder. We can use these folders to organize our multiple connections. Next, select the connection we created and drag it into the Training folder. Using this folder structure, we can create multiple links to our instances and organize them properly. To demonstrate, let's create a new one. With the Training folder selected, select Add, then Connection, to create a new connection. Typically, we can copy the URL from the previous connection because they all follow the same pattern. Note that this is not always the case. Once complete, hit Enter and we have our next connection. This completes the exercise for downloading, installing, and setting up the Campaign Client Console. Thanks for watching.

For more information, see Connect to Campaign with the client console.

Set up and manage access

Adobe Campaign lets you define and manage the rights assigned to users:

  • Access to certain capabilities

  • Access to certain data

  • Access to certain actions (create, modify, delete)

See Manage user permissions for more details.

Configure your instance

Deployment

Data Management

Fundamentals of data management with Adobe Campaign workflows

Learn what targeting dimensions and working tables are, and how ÃÛ¶¹ÊÓÆµ Campaign manages data across different data sources.


Transcript
In this video, I will cover the fundamentals of data management with Adobe Campaign, focusing on how the system handles multiple data sources. I will explain the two key fundamentals of data management with Campaign: the targeting dimensions and the working tables. We will then look at the basic use cases for data management with workflows, and I will explain how the system handles them. The targeting dimension is the type of data a workflow is handling. It is defined by what you are querying. So if you are querying a recipient, then your targeting dimension is the recipient. If you query an order, your targeting dimension is the order. In most cases, the targeting dimension is defined by the first query activity and lives across the whole workflow until the end. For example, if you start by querying on recipients, the outbound transition will contain data of type recipient, and the next activity will know that we are working on recipients. The targeting dimension is bound to a data schema, which means you can access any information of the linked schemas. You can change the targeting dimension within a workflow, but only to the targeting dimension of a schema that is linked to your initial targeting dimension. So for some complex use cases, you may have two different workflow lanes on the canvas, and each lane can have its own targeting dimension. The second fundamental is the working table. In the context of multiple data sources, you need to understand what a working table is and how it works. The working table is also known as a temporary table, and it stores the results of any query in the workflow. It is visible from the outbound transition of each workflow activity, so when you look at the outbound transition, the results that are displayed are stored in the working table; it is not directly the data from the main table you're looking at. Working tables are created by default on the same database as the targeting dimension to ensure high performance. But when an activity requires reconciling data with another data source, which is on another database, the working table is first moved to the secondary data source and then reconciled with the additional data. The reconciliation can be done through an enrichment or union activity, for example. Adobe Campaign manages the multiple data sources by copying the working table from one place to another. It uses the workflow to speed the process up. Once the operation is completed, the result is automatically copied back to the location of the targeting dimension. When the workflow is finished, the working table is deleted, unless of course you have enabled the option to keep it. Now let's take a look at the three basic use cases, which will exemplify what I just explained. In the first use case, we query the recipients table, and then enrich it with data from the orders table. The recipients table is located on the local database. The orders table, which is linked to the recipient schema, is stored on the remote database. So what happens when you query the recipients? A working table is created on the local database, where the recipient targeting dimension is located. When the recipients' data is enriched with the order data, the recipients' working table is first copied to the remote database, where the orders table is located. Then the enrichment is performed, and the final result is copied back to the local database, where the targeting dimension, the recipients, is located. Let's see how it works in the campaign instance.
So we see that we have a recipients table that is on the local database, and a second table, the orders table, that is located on the remote database, which for this instance is Snowflake. When we look at the example, we have a temporary table, and the targeting dimension is recipients. When we enrich the data with the data from the orders table, which is remote, and display the schema on the outbound transition, we have another temporary table for orders. But note that the targeting dimension remains the recipient. Let's take a closer look at the SQL logs and see what has happened. We see that the working table was first created on the local database when querying the recipients. The table was then moved from the local database to the remote database. Actually, it was copied to the remote database. And once the enrichment was completed, the working table was copied back from the remote database to the local database. In the second use case, we invert the queries. So we will get exactly the same result, but in this case we start by querying the orders, and then enrich these with the recipient data. So there's one fundamental difference between the two use cases. Because we started by querying the orders, the targeting dimension is now the order. So when the data is enriched with the recipient data, which is on the other database, the working tables will be copied back and forth, like in the first example, but in the other direction. So the data is copied from the remote database to the local database, and the results will then flow back to the targeting dimension on the remote database. If we look at the second example, with the query on the orders, we see that we have a temporary table which is linked to the order, so the targeting dimension is the order. And when we enrich with the recipients, the targeting dimension remains unchanged, and it is still the order. So it's really the same thing as with use case one, just the other way around. And when we look at the logs, we see that we started querying on the external database, then the work table was copied to the local database, and once the enrichment was done, it was copied back to the remote database. The results, as I said, are the same as in the first use case. The bottom line is, if you start querying on Snowflake, the remote database, your results will land on Snowflake, and if you start on the local database, they will land on the local database. In the last use case, we will change the targeting dimension. And if you remember, this only works with linked schemas. So we start querying on the orders, which are on the remote database. Then we change the targeting dimension from orders to recipients. So the work table will be copied from the remote to the local database, and it will remain there because the new targeting dimension is located there. So let's go to the instance and take a look. We query on the orders. You can see the targeting dimension is order. But when we change the dimension, you can see we are now on the recipients. And if we look at the logs, we see only one single copy from remote to local. In a nutshell, Adobe Campaign is able to handle a data model that is split across multiple data sources. It works automatically, seamlessly, and efficiently, using bulk load to copy the data, which makes it pretty fast. Most of the time, you don't need to care about where the data is stored.
Things happen under the hood and the workflow takes care of it. It's all automated. But it is important for you to understand the fundamentals of working tables and targeting dimensions, and also when the data will be copied from one place to the other. This will help you optimize the design of your queries based on the amount of data you are manipulating. You can also optimize the performance of your workflow by changing the targeting dimension to force the workflow to use a database engine that is different from the one hosting the initial targeting dimension. Thank you for watching!
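As the video notes, changing the targeting dimension only works between linked schemas. As a purely illustrative sketch (the cus namespace, schema name, attributes, and labels below are assumptions, not taken from the demo instance), an orders table linked to the recipients table could be declared as follows; the type="link" element is what lets a workflow enrich orders with recipient data or switch its targeting dimension from orders to recipients, whichever database each table lives on.

<srcSchema name="order" namespace="cus" label="Orders">
  <element name="order" label="Orders" autopk="true" autouuid="true">
    <!-- Illustrative columns only -->
    <attribute name="orderId" type="string" length="64" label="Order ID"/>
    <attribute name="orderDate" type="datetime" label="Order date"/>
    <!-- The link to nms:recipient is what makes the change of targeting dimension possible -->
    <element name="recipient" type="link" target="nms:recipient" label="Recipient"/>
  </element>
</srcSchema>

In the demo, the orders table sits on the remote (Snowflake) database while the recipients table sits on the local database; the link between the two schemas is what Campaign follows when it copies the working table from one database to the other.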

Create and extend a schema

Learn how to create a schema and how to extend an existing schema.


Transcript
Welcome! In this video I will explain how to create schemas and I will show you how to update an existing schema. If you are familiar with Adobe Campaign Classic, I will also point out the differences between the two versions. Let's jump right in and create a schema. Navigate to the Explorer tab, and then under Administration > Configuration you will find the data schemas. Here you can see all data schemas that are available in the product. Let's create a new one. So we will create a new table. I've prepared an example. Let me paste that into the field. As you can see, this is a list of products. It's a very simple example. It has three attributes, meaning three columns: productSku, productTitle, and productDescription. All of these are strings. In addition, we will create an internal key on the root node, so we will have an additional column which contains the key, and it should be auto-generated. We need to make sure that autopk is set to true as well as autouuid. This is specific to Campaign v8. In v8, the system creates UUIDs, unlike in v7, where the IDs are numerical and counted up by one every time a new record is inserted. You can also see that by default the data source is the cloud database, whereas with v7 it's the local database. There's one more thing you need to be aware of. If you create a table and want it to be exposed to the API, and you know you will have a lot of access to this table, mainly unitary calls, updates, and so on, then you should enable the staging mechanism by setting autoStg to true. If you want to learn more about the staging mechanism, we have a separate video available on this. So I will save the schema. You can see it listed here. Now we need to update the database structure to make sure the changes are applied. You can see that two tables will be created. The new table will be located on the cloud database and will contain the list of products. And, because we have enabled the staging mechanism, there's a second table which will be on the local database. It is a copy of the first table, so it will contain the same data structure. Let's take a closer look. Remember, we created a new table with three columns and one internal key. We now have two tabs because we have multiple sources. The first one is for the cloud database. So what will happen there? We will create a new table, the list of products, with the three attributes, the description, the SKU, and the title, as well as the internal key, which will be the UUID, auto-generated each time a new record is inserted. The second tab, the default tab, is a table on the local database which was created because we enabled the staging mechanism. It is a copy of the first table with some additional attributes. When I click on Start, it'll execute the SQL on both sides, first on the cloud database and then on the local one. Okay, so the tables have been created. When you look at the tables, we now have a table with the products. It has the three attributes plus the key. And we have the local staging table, which of course is still empty. The way these tables will be used is the following. If we want to ingest the data through APIs, we will use the local staging table. If, however, we want to ingest the product list through data management and batch workflows, we will use the main product table on the cloud database. Next, let's extend a schema. We will extend the out-of-the-box operator table, xtk:operator.
So let's add a new attribute, the business unit. I will save the extension and update the structure. And yes, the new column will be added. So let me execute. Okay, now let's take a look and see what happened. You will need to refresh the page, and there should be the new attribute, the business unit. We have extended the xtk schema on the local database. And due to the data replication mechanism in Campaign v8, the xxl schema on the cloud database was synced and updated as well. You can see it here, but you will not be able to see the xxl table in your UI, as all of this happens automatically in the background. If you want to know more about the data replication mechanism, this is covered in a separate video. So now you know how to create new schemas and how to extend a schema in Campaign v8. Thank you for watching.
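To complement the walkthrough, here is a minimal sketch of what the two schemas discussed in the video could look like. The cus namespace, attribute lengths, and labels are illustrative assumptions; the parts taken from the video are the three string attributes, the autopk/autouuid key, the autoStg flag that enables the staging mechanism, and an extension of the operator schema that adds a business unit attribute.

<!-- New product list schema: autopk/autouuid generate the UUID key,
     autoStg creates the staging table on the local database -->
<srcSchema name="product" namespace="cus" label="Products">
  <element name="product" label="Products" autopk="true" autouuid="true" autoStg="true">
    <attribute name="productSku" type="string" length="64" label="Product SKU"/>
    <attribute name="productTitle" type="string" length="128" label="Product title"/>
    <attribute name="productDescription" type="string" length="255" label="Product description"/>
  </element>
</srcSchema>

<!-- Extension of the built-in operator schema adding a business unit column -->
<srcSchema name="operator" namespace="cus" extendedSchema="xtk:operator" label="Operators (extension)">
  <element name="operator">
    <attribute name="businessUnit" type="string" length="64" label="Business unit"/>
  </element>
</srcSchema>

After saving either schema, run Update database structure so that the generated SQL is executed on the cloud database and, where applicable, on the local database, as shown in the video.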