
Web SDK Migration Essentials

Understand the differences between AppMeasurement/at.js and the Web SDK, how to migrate, considerations for timing the migration, migration options, and expected data differences.

Key topics covered

  • What is Web SDK?
  • Migration Steps and Considerations
  • Timing the Migration
  • Common Missteps and Pitfalls to Avoid


Transcript
Okay, awesome. Hello, good morning, and welcome — thank you for joining today’s session, focused on the essentials of migrating to the Adobe Experience Platform Web SDK. My name is Moses Maxon and I work on the Adobe Ultimate Success team as a principal field engineer. Today I’m joined by a couple of amazing colleagues who will introduce themselves in a moment and will be walking us through the presentation. I’m going to go ahead and kick off the session. First and foremost, thank you for your time and attendance today. Just to note that this session is being recorded, and a link to the recording will be sent out to everyone who registered. We are in listen-only mode today; however, feel free to share any questions in the chat and Q&A pod. Our team will do their best to answer, and in addition we have reserved some time at the end of today’s session to review any questions that surface throughout. If there are any questions we don’t get to during the session, the team will take note and follow up with you afterward. We will also be sharing a survey at the end of the presentation that we would love your participation in to help us shape future sessions, and just a friendly reminder of our upcoming webinars happening in May — the registration links are in the chat pod, so be sure to register for some of these great webinars. Okay, taking a look at today’s agenda: we’ll be covering an overview of the AEP Web SDK, the essential migration steps and key considerations, and data differences between implementation methods and what their causes may be, followed by time for Q&A and a quick poll for today’s session. Now let’s introduce today’s presenters. I’ve been with Adobe for eight-plus years working as a technical consultant; I enjoy playing video games and gardening, and I am an avid eclipse explorer — I have seen the last two total solar eclipses that crossed the U.S. Over to you, Rachel. Thanks, Moses. A quick intro before I hand it over to Riley. I am Rachel Fenwick and I’ll be presenting part of this webinar today. I’m based in the New York area with my husband, two daughters, and an old dog. I celebrated seven years at Adobe this past December; I started my career with Adobe Consulting for almost five years, then moved over to the Ultimate Success org, where I am now, and prior to life at Adobe I was on the client side managing Adobe Analytics implementations in-house. And now over to Riley, who will give a brief intro and then get into the meat of today’s content. All right, thanks, Rachel. Riley Johnson, technical consultant, focused mainly on Adobe Analytics and Adobe Target. I’ve got a beautiful wife, a four-year-old son, and a one-year-old daughter — no dog, unlike the other two here with us today — and I’ve been at Adobe for three and a half years. I also love playing video games; Moses and I actually play quite a bit of Rocket League together, so it’s quite fun. I love golf and just hanging out with my wife, son, and daughter. But like Rachel mentioned, I’m going to jump into the beginning of our content, which is: what is the Web SDK? Just as a brief overview — to know what the Web SDK can do for us, we need to understand some of the potential pain points of the architecture of Adobe solutions and the different interactions across solutions.
So prior to the Web SDK, every solution had its own library. If you had multiple solutions, you were loading multiple JavaScript files onto your site, each with its own rule set and different requirements for data collection. None of these libraries were really built to work together — they were developed apart from each other and then integrated with each other after the fact. So for any cross-solution or cross-platform use cases that require them to work together, they had to be manually coded together, manually integrated, which could cause quite a bit of deployment friction. The big pain points we see from implementations that did not utilize the Web SDK: library sizes. You have multiple library files loading on the page — one for Target, one for Analytics, in the example we’re covering today. All of your rules for Adobe Analytics, everything that’s being evaluated, plus all of the mboxes that are being evaluated and delivered to the site — those individual files can get quite large and take up quite a bit of load time. That leads to the next main point, performance: those libraries loading can potentially cause issues with page load times. We have to wait for them to finish loading so we can fire the appropriate calls to the appropriate solution, and if we wait a while for those libraries, page load times obviously increase. Multiple calls for a single use case is a big one — specifically for an A4T integration, Analytics for Target, where we’re delivering Target data into Adobe Analytics to do additional reporting within Analysis Workspace. We have multiple calls firing: Adobe Target calls firing, collecting and delivering mboxes, and then Adobe Analytics collecting that data and firing an additional Adobe Analytics call with that Target data. Another pain point is waiting for the ECID to return before we can fire personalization calls, which can cause some lag while the ID is set and evaluated before we can personalize the site. Fractured data collection: Target has no idea what an eVar is, and Analytics has no idea what profile information is. There are different data collection methods across the solutions, and we have to tie those together, which can get confusing. And then schema confusion between solutions: just like I mentioned for A4T, we’re collecting data differently for Target than we are for Adobe Analytics, and so it gets confusing as to what’s going on in each solution. If we take a quick snapshot — this is the current state, a little watered down obviously. A user lands on your website and we’re firing these different calls: we’re calling the demdex servers for the visitor IDs; we’re getting the Adobe Target at.js JavaScript library that loads and fires Adobe Target; and then we have AppMeasurement.js, the Analytics JavaScript library that contains all of the information we need for AppMeasurement to fire — all of your tracking, all of your eVars and props and events, and the out-of-the-box metrics like orders and products.
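To make that “before” picture concrete, here is a rough, hypothetical sketch of the kind of page code this architecture implies. It is not code from the session — the org ID, mbox name, and variables are placeholders — but the three separate libraries and calls it shows are the pattern described above.

```javascript
// Hypothetical pre-Web SDK page: three separate libraries, three separate calls,
// manually stitched together for cross-solution use cases such as A4T.
// All names and IDs below are placeholders.

// 1. Identity: the Visitor API (visitorapi.js) requests the Experience Cloud ID.
var visitor = Visitor.getInstance("XXXX@AdobeOrg");

// 2. Personalization: at.js requests and applies a Target offer for an mbox.
adobe.target.getOffer({
  mbox: "homepage-hero",
  success: function (offer) {
    adobe.target.applyOffer({ mbox: "homepage-hero", offer: offer });
  },
  error: function (status, error) { console.error(error); }
});

// 3. Analytics: AppMeasurement.js fires its own, separate page view call.
s.pageName = "home";
s.eVar1 = "en-US";
s.events = "event1";
s.t();
```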
So we run into this issue where we’re firing multiple libraries, all working independently of each other, and then we have to try to tie them together with special implementations to get them to work well together. So if we move to the Web SDK — what is the Web SDK, and how can it fix those pain points we were experiencing previously? The Web SDK is a JavaScript library, just like at.js and AppMeasurement; the JavaScript library for the Web SDK is called alloy.js. It allows you to interact with every service or solution from a single call, from a single library. The Web SDK sends data in a solution-agnostic way, which is the XDM format — I’ll get into that a little more, and then Rachel will dive further into the XDM format. What happens is the data gets sent to an Edge Network and then forwarded out to whatever solution or destination that data is set up for, and it sends it all in real time, so there’s very minimal delay in forwarding the data to the appropriate solutions. With the Web SDK we see quite a few benefits or improvements over those pain points. Compared to the individual JavaScript libraries, the biggest one is performance: the Web SDK is much smaller than the combination of all of the JavaScript libraries, so we can use just the Web SDK and load one library quickly rather than loading all of them. Control: we have a lot of control over the data being forwarded through. We have insight into where the data is at almost every millisecond of its journey — from the firing on the website to the application that’s consuming that data, we can identify all the steps along the way and see where it is. It really modernizes how Adobe is collecting data. It sets us up for the future of data collection in a solution-agnostic way, helps us navigate the move away from third-party cookies, and makes sure we’re implementing first-party cookies on first-party domains managed by Adobe. And then time to value: after the implementation work is done, all the other Adobe solutions and Adobe Experience Platform services can quite simply be turned on or off with toggles. Destinations are how we set up where we want the data to go, and we can turn a destination on or off very simply. So if we take another snapshot of what we have here: we have our single library loading on the site, alloy.js, and that’s the Web SDK. From the Web SDK we go to the Edge Network, and the Edge Network passes off all of our calls to the respective solution or service, or returns information as expected — obviously we’re not just sending Target data, we also need to get data back from Target. All of this communication happens from a single JavaScript library rather than implementing multiple JavaScript libraries. Okay, so some basic terminology we need for migrating to the Web SDK, or even just learning about it. The Web SDK, also known as alloy.js, is again just a JavaScript library that gets loaded onto your site and directs data collection from the site, or whatever platform you’re collecting data on, to the Edge Network.
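As a point of comparison with the multi-library sketch above, a minimal Web SDK implementation looks roughly like this. The datastream ID and org ID are placeholders, and option names should be checked against the Web SDK version you deploy; this is a sketch, not a full configuration.

```javascript
// One library (alloy.js), one call to the Edge Network, which forwards the data
// to Analytics, Target, and any other configured destinations. Placeholder IDs.
alloy("configure", {
  datastreamId: "YOUR-DATASTREAM-ID",
  orgId: "YOUR-ORG-ID@AdobeOrg"
});

alloy("sendEvent", {
  renderDecisions: true,                   // also request/render personalization content
  xdm: {
    web: {
      webPageDetails: { name: "home" }     // page view data, expressed in XDM
    }
  }
});
```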
Tags, formerly known as Adobe Launch, is the Experience Cloud tag manager. The tag manager is what’s currently delivering all the solutions and their libraries to the site, but Tags can then be used to deliver and simplify the deployment of the Web SDK, alloy.js, onto the site. The Edge Network is an Experience Cloud service that routes the data we’re collecting from the Web SDK into Experience Platform or into Experience Cloud solutions — Adobe Target or Adobe Analytics — and into other third-party destinations as well. We can set up integrations with, say, a Facebook pixel, or the Edge can send data to other marketing tools you’re utilizing; it doesn’t have to be just Adobe solutions receiving the Web SDK data. A schema is simply a blueprint of your data: how the data is configured, what format and what values are going to come through in those calls from the Web SDK. It helps Experience Platform know what structure to expect and how to consume and forward that data. XDM is the Experience Data Model, a standardized model of data. If you’re familiar with Adobe Analytics tagging, you’re likely utilizing a data layer, which is just a JSON object that lives in the HTML of your site. XDM is also a JSON structure, just with a predefined, standardized template or format that will be accepted by a schema. And then key concepts and supported features: the list of features the Web SDK supports is quite extensive. Regarding Adobe Analytics specifically, the only one that is not going to be supported is hierarchy reporting, and the ones that are currently not supported but will be are Activity Map and video/media tracking. That is the comprehensive list of features supported and not supported by the Web SDK — I won’t go through every single supported item, but you can review those on your own here. All right, Rachel, I’m going to kick it over to you and you can walk us through migration steps and considerations. All right, great — sorry, just finishing up answering one question in chat there. Let’s jump into some of the steps to migrate, and a couple more key concepts we’ll walk through before getting into the actual steps and prerequisites. One of the major things we need to know before we discuss migrating is the difference in data collection and the format of the data that’s collected, with regard to Analytics. As we know, legacy Adobe Analytics is built on props, eVars, and events. This doesn’t change when we migrate to the Web SDK — the Analytics servers don’t care about the source of the data as long as it arrives in this recognizable structure of props, eVars, and events. That structure amounts to the numbered-variable key-value pairs we’re used to — eVar1 equals a certain string — and it was previously sent using our AppMeasurement JavaScript library. On mobile, this is sent in the form of context data, and then we map those variables using processing rules. Data collected using the Edge library now — the Web SDK — is solution-agnostic when it leaves the client-side application.
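That difference in data shape is the crux of the migration. A simplified, hedged comparison of the same page view in both formats (placeholder values) looks like this:

```javascript
// Legacy AppMeasurement: numbered key/value pairs, exactly the structure the
// Analytics servers expect (props, eVars, events).
s.pageName = "home";
s.eVar1 = "en-US";   // page language
s.events = "event1";
s.t();

// Web SDK: a solution-agnostic XDM object. Nothing here says "eVar1" — the
// translation back to numbered variables has to happen via one of the mapping
// options discussed later.
alloy("sendEvent", {
  xdm: {
    web: { webPageDetails: { name: "home" } }
  }
});
```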
So this is new: the data is structured using the XDM model, and therefore not structured using the key-value pairs that the Adobe Analytics servers are expecting. The big takeaway here is that a critical task for migrating to the new library will be documenting and mapping your Edge-collected data and data structure, in most cases, to the format that Analytics is expecting. You will need some sort of transformation between sending the data and when it arrives on the Analytics servers. Next slide, Moses. Another key concept we’ll talk about before jumping into migration steps is the next important piece of this puzzle in mapping and data format, which is the schema. The schema is a preconfigured data structure used for sending our client-side data via the Edge. Schemas more broadly are used across Experience Platform; they’re receiving and organizing the data we’re sending in from all kinds of different data sets and other sources — not just alloy.js, not just our web data, but any source you can imagine if you’re using Platform. By standardizing the format of this data, it allows for combining all of these data sets in one platform, so it improves the ability to share the data across multiple solutions and bring it in from multiple sources. Mapping the data to the Analytics variables is made a lot easier if you’re using some of our specific schema field groups. We have schema field groups that are built out of the box and cater to what we typically see with Adobe Analytics implementations — there are some variations for commerce versus retail and some other small tweaks — but essentially, if you’re using one of these out-of-the-box field groups, there’s a built-in translator function between the Edge Network and our Analytics servers that transforms those XDM values into the numbered variables we’re used to seeing. The schema field group most clients are using is called the Adobe Analytics ExperienceEvent Full Extension. This includes the numbered dimension fields, and together with other more specific field groups — like I mentioned, for commerce, marketing, environment details — that should give you most of what you need for automatic mapping. We’ll get into more options for mapping in the next few slides. Next slide, please. All right, let’s talk high-level steps to migrate — the first things you need to do to get yourself set up. First, configure your permissions in the Adobe Admin Console for Data Collection. You want to make sure everyone’s set up with the right product profiles and the right access — make sure you have access to things like schemas, datastreams, and tags — and do a good audit of your product profiles there, because there will be new ones you’re using with the Web SDK that you weren’t using before with Analytics. Next is configuring your schema for the structured data you want to pass in. You log in and choose a schema — like I said, most clients use the Adobe Analytics ExperienceEvent schema out of the box because it covers just about everything you’re collecting on your website in a traditional Analytics implementation, so it’s a really good starting point. The next thing you want to do is create a datastream.
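Before moving on to the datastream setup, here is a hedged illustration of that built-in translator: an XDM payload built only from the out-of-the-box field groups, with the numbered variable each path is generally mapped to shown in the comments. Treat the paths and mappings as an illustration and verify them against the current Analytics/Web SDK mapping documentation.

```javascript
alloy("sendEvent", {
  xdm: {
    web: {
      webPageDetails: { name: "home" }            // → pageName
    },
    _experience: {
      analytics: {
        customDimensions: {
          eVars: { eVar1: "en-US" },               // → eVar1
          props: { prop1: "homepage" }             // → prop1
        },
        event1to100: {
          event1: { value: 1 }                     // → event1
        }
      }
    }
  }
});
```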
All of this — configuring permissions, creating the schema, creating and setting up the datastream — is done in a point-and-click UI within the Adobe Experience Platform Data Collection interface. Creating your datastream is super easy: you click New Datastream, and it walks you through a wizard to set it up. It will ask which solutions you want to send your data to, so you’ll add any of your applicable solutions — Analytics, Target, any other solution you’re sending the data to, Platform — and if you’ve selected Analytics, you’ll configure the datastream to feed the data to the appropriate report suite within Analytics; that’s where you enter that information. Lastly, you’ll set up your datastream for the appropriate environment using your Web SDK extension in Tags and any manual configurations you may have. That’s the initial setup for anyone — whether it’s a new implementation or you’re migrating from an existing Analytics implementation, everyone has to complete those steps. There are a couple of additional steps if you’re migrating from an existing Analytics or Target implementation. You’ll want to enable ID migration to maintain visitor ID continuity. This is really important for avoiding any visitor cliffing with the new solution — you don’t want your switch over to alloy.js to result in a lot of new visitor IDs as people come to your site. So you’ll turn on the ID migration enabled setting in your Tags property within the Web SDK extension, and this allows it to read previously set AMCV (visitor ID) cookies — basically letting alloy.js know, hey, we may already have cookies set for this visitor, so we may not need to set a new one. A super important step if you are migrating from an existing implementation. All right, moving right along. The next thing to consider, now that we’ve got all of the initial setup in the console done, is how we’re going to map our data. You have a few different options for mapping from XDM down to the Analytics servers. The first option is client-side mapping. Just as the name indicates, this means you as the client will be mapping your data layer values to XDM keys to pass into each server call — so instead of what you’re probably doing today, sending a data layer value from your site, you would change that data layer object over to XDM format. The second option is what’s called Data Prep for data collection, also known as the mapper. I would say this is probably our most commonly used option. It’s similar to option one in that you’re mapping a value from one object to another before it gets to its endpoint, but in this case the mapping is done on the Edge servers and configured within the datastreams interface — so less dev work on your site and less of a lift in terms of migrating; very straightforward. There are a couple of caveats with this one that I’ll get into in the coming slides. The last option you have is processing rules. This is not our recommended option, but we’ll call it out — it is there. In this case you would be mapping the values to Analytics dimensions in the admin interface using processing rules, and we’ll get into the downsides and why we don’t recommend it coming up. All right, next slide. So option one, client-side mapping: like I said, it’s a very similar process to mapping your data layer values — you’re just switching those over to XDM keys.
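As a rough sketch of what that client-side switch can look like — using a hypothetical data layer shape, not one from the session — option one in plain custom code might be:

```javascript
// Hypothetical data layer already on the page.
window.digitalData = window.digitalData || {
  page: { name: "home", language: "en-US" }
};

// Option 1, client-side mapping: build the XDM object from data layer values
// before the call is sent. In Tags this is typically done with a variable data
// element plus an Update Variable action; plain custom code is shown here.
var xdm = {
  web: { webPageDetails: { name: window.digitalData.page.name } },
  _experience: {
    analytics: {
      customDimensions: {
        eVars: { eVar1: window.digitalData.page.language }  // page language → eVar1
      }
    }
  }
};

alloy("sendEvent", { xdm: xdm });
```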
Mapping can be done using Tags capabilities in the interface or using custom code in a rule within Tags. Whether you do it in the Tags interface or with custom code, the output will be a JSON object that aligns with the schema you’ve set up. One thing to note: doing it this way relies on incorporating the specific out-of-the-box schema field groups that enable the automatic mapping of the keys. Configuration for this option is done in Tags. You’ll create a variable data element, an object that aligns to the schema you’ve set up and includes the field groups mentioned for automatic mapping; you’ll see all of this as options to pull into your data element in the UI. In a new rule that you create, you’ll use an Update Variable action that maps the values from your existing data elements — already created in your Tags property — to the associated keys for automatic mapping, the keys from the schema field groups. For example, if you currently collect page language in eVar1, you would map your new data element to capture page language in something that looks like _experience.analytics.customDimensions.eVars.eVar1 — that long string I just read out would be an object from a schema field group, so it directly maps to the schema and the name of that object within the schema you’ve already set up. You would repeat this process for all of the numbered dimensions and events you want to pass in. Then, for any given rule in your Tags property, the Update Variable action configuration we just walked through should be followed by a Send Event action. It can be in the current rule or a subsequent rule, but it needs to be followed by a Send Event where the data element you’ve updated is the same data element used in the XDM field of the Send Event action. All right, moving right along to option two, the mapper — also known as Data Prep for data collection, though most people call it the mapper. It’s very similar to what we just walked through: we’re intercepting the data and mapping it before it gets to the Analytics servers. The incoming data can be XDM or can be the data object in the Web SDK payload, and the destination paths for the mappings are the schema paths associated with the schema configured like we just walked through — a similar concept. The one thing to note with the mapper is that it is critical you’re using the out-of-the-box schema field groups here; that’s really the catch with this one. With client-side mapping you have a little more freedom to add variables that aren’t necessarily preconfigured in our out-of-the-box schema field groups; the mapper is only configured for certain schemas, certain field groups, so you’re relying on what’s been preconfigured for you in that schema. And then mapping option three, processing rules. The biggest difference with processing rules is that we’re not intercepting the data before it gets to the servers — the values are hitting the Analytics servers and we’re mapping them in processing rules like we would context data variables, similar to how we do on a mobile implementation today. So when you’re sending variables to the Analytics servers this way, they will show up as context data variables in the processing rules interface.
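As a hedged illustration of how that surfaces: an XDM field with no automatic mapping (here a made-up custom field group, "_mycompany") arrives in processing rules as context data with an a.x. prefix, where it can then be mapped to a prop or eVar.

```javascript
// Hypothetical custom field group that the Analytics translator does not know about.
alloy("sendEvent", {
  xdm: {
    web: { webPageDetails: { name: "search results" } },   // auto-mapped → pageName
    _mycompany: {
      search: { keywords: "running shoes" }                // no automatic mapping
    }
  }
});

// On the Analytics side, the unmapped field shows up in the processing rules
// interface as context data, roughly like:
//   a.x._mycompany.search.keywords  →  map to an eVar/prop with a processing rule
```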
To use this option, you navigate to the processing rules interface in your Admin Console and create a new rule there dedicated to mapping variables, with a new action in each rule for the mapping. Note that when the keys appear as context data, you’ll see the prefix a.x. — for example, you may see something that looks like a.x.search.keywords (context data); this is how an XDM variable comes through in your processing rules. Once you’ve saved and configured your rule, you can debug it using the Experience Cloud Debugger’s Edge Trace feature, which shows you in real time how the processing rules are applied to the incoming data. The reason we don’t recommend using processing rules is that it’s not very scalable. You do hit limits on how many rules you can have, and it takes a really long time if you have a somewhat extensive implementation — you’re going through one by one and creating these rules — and it’s pretty heavy on the processing: as your data comes through, we’re evaluating however many rules there are before the data hits reporting, which can add to your processing time. So it’s just not our recommended option for mapping. Option one or two is going to be your best bet, and deciding between those two really depends on how many variables you need that may not already be covered in a schema field group. If you’re creating custom schema field groups, you’re going to have to go with option one. If you’re using out-of-the-box schema field groups, which cover most clients’ needs, then you can use the mapper, and it’s a really low lift in terms of actually doing any mapping. All right, next slide. Pivoting a little bit: we’ve gone over the initial setup steps to configure your datastreams and get everyone’s access set up, and how to map your data structure. Now let’s talk about one major consideration in planning all of this out, and that’s your migration order — specifically for clients with Analytics and Target together. This is a major consideration when there are multiple solutions at play, in particular Analytics and Target. Like I mentioned, you can’t mix and match legacy libraries with the new Web SDK library on the same page of a site and expect A4T to still work, and most clients using Analytics and Target rely heavily on A4T, so this is a big consideration to think through. Phased migrations, though — where the old libraries exist on one set of pages and the new library exists on another set of pages — are supported, provided that your Web SDK is configured for the Target migration. So let’s talk through how we might accomplish this without compromising A4T. The first thing to think about is starting with Analytics. This is a good migration flow — we’ll send this content out, but save it and reference it when you plan a migration. It allows you to test the new Web SDK implementation side by side with the legacy AppMeasurement implementation before actually pushing to production, while still preserving your A4T integration. So the first thing you’ll do is duplicate your dev and prod Analytics report suites — whichever report suites you are migrating and using the most, I would pick those and create duplicates, both dev and prod.
These will be used for comparing dev and prod data one-to-one during your development work. As an optional step, you can duplicate your existing production Tags property. This is not required, but it sometimes allows for a little cleaner development work: if you’re working on the migration for a period of time but you also have normal tagging operations and dev work that need to go on in the interim, you may not want those two things to interfere with each other — the libraries could get in the way of each other as you’re testing — so, optionally, you can create a duplicate property to work in for the migration. The third step is creating your datastream: create the datastream, make sure it’s configured for Analytics, and configure it to service the report suites we dedicated as the dev and prod Web SDK copy report suites in step one. This allows the dev and prod Analytics data captured using the Web SDK to be compared directly to your existing dev and prod data collected using AppMeasurement. Next, you’ll add the Web SDK extension to your Tags property and configure your dev, stage, and prod datastreams there. From there, you’ll duplicate each existing Analytics tag rule and prefix the copied rule with “web SDK” so it’s apparent which are the duplicates you’ll be updating. Within those new rules, you’ll add your new Web SDK actions using the Update Variable and Send Event actions like we described in our mapping options above. Keep your existing rule conditions and Analytics rule actions intact alongside the new rules. Step seven: perform runtime validation. Using your Update Variable and Send Event actions in the rule, start validating on your dev site at this point — go out and use your debugger to make sure you’re seeing the calls go out as expected. Step eight: create an Analysis Workspace project so you can compare the data side by side — create two different panels, even, and look at both report suites in the same workspace — and give this a period of time where you’re testing and viewing both sets of data. While the numbers may not line up one-to-one — we wouldn’t expect them to, it’s two different libraries — directionally you should see the same data, and you should not see a huge variance; things should be moving in the same direction, with no more than five to maybe ten percent variance in any given metric. We’ll talk about what to expect with variances and data differences coming up. Once you’ve completed this comparison and done your validation in dev, you’ll push your Web SDK Analytics-only implementation to production, and from there you’ll validate the production data the same way we did with dev, with side-by-side workspaces. All right, next step: once we’ve got all of this done for Analytics, let’s tackle Target. Target will be migrated starting in dev, once your side-by-side validation shows the Analytics AppMeasurement data is comparable and in a good place with the Web SDK-collected data. An important thing here: to support A4T, the Analytics service in the datastream can only be configured for the true production report suite once Target goes live, so the Web SDK-based personalization cannot be validated side by side in production the same way we just did for Analytics.
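Since a phased migration like this depends on visitor and mbox continuity, this is also a good point to double-check the ID migration settings mentioned earlier. In the Web SDK extension these are checkboxes; in code, the equivalent configure options look roughly like the sketch below (option names as they appear in current Web SDK documentation — verify against the version you deploy; IDs are placeholders).

```javascript
alloy("configure", {
  datastreamId: "YOUR-DATASTREAM-ID",   // placeholder
  orgId: "YOUR-ORG-ID@AdobeOrg",        // placeholder
  idMigrationEnabled: true,     // read previously set visitor ID (AMCV) cookies to avoid visitor cliffing
  targetMigrationEnabled: true  // read/write the legacy mbox cookie during a phased Target migration
});
```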
So know that thorough validation in development will be necessary for Target. For Target validation, we’ll remove our Analytics actions from our new Web SDK rules — once we’re ready for this, ready for Target to go live, and comfortable with where Analytics is, we’ll remove those old Analytics actions from the new Web SDK rules so that all that remains is our new Web SDK actions. We’ll duplicate our existing Target rules in the Tags interface, remove any references to Target actions in those rules, and instead change those over to Send Event actions, including all data formerly captured as mbox parameters, profile, or entity data — update all of this as described in the Target enablement documentation. Next, you’ll disable those original Target rules, remove your Target, Analytics, and Visitor API extensions from the Tags property, and then identify representative activities for validation in your lower environment. Validate all of that and make sure you’re comfortable with where things are. And then lastly, moving on to the next slide: we should be ready to go live at this point, and all of the following things should be true when you’re ready. Analytics is live in production but reporting to our prod copy report suite, and at this point the prod copy report suite data is comparable to the AppMeasurement-collected data we looked at in our Analysis Workspace. Our Target activities have been thoroughly tested in the lower environment. All of our legacy extensions have been removed — Analytics, Target, Visitor API are not in our Tags property anymore. Now we’re ready to push to production and go live, and we’ve kept A4T intact: we’ve compared Analytics side by side in both dev and production environments, we’ve tested our Target implementation thoroughly, and everything goes live to the same true production report suite all at once. All right — now that we’ve gone live, migrated, and gotten comfortable with our data, let’s talk about some expected variances or differences in the data that a lot of clients notice and sometimes run into questions about. It’s good to talk through this and to know about it ahead of time so you can socialize it across the org and prepare everyone who’s using the data; it leads to fewer questions and less uncertainty if you can get ahead of some of these things. Existing Analytics customers are likely to encounter data variances, both in volume and in metric counts, when comparing the data from the two different libraries. We consider any variance of less than 5% to be acceptable for a migration like this, provided, like I said, the compared reports are directionally the same. Anything larger than 5%, or variances where the reporting is not directionally the same, could be evidence of a larger implementation issue that should be corrected or tested further. There are lots of different sources of variation between the two measurement systems for web analytics — specifically, there’s a lot of complexity in how the libraries are invoked in the context of a web page, as well as how the data is transmitted to the Analytics servers. We just went through all of this; everything is a very different process with the Web SDK than it was with AppMeasurement. So, some of the variances you could expect to see: metrics showing higher volume. What’s the reason behind this?
Well, the Web SDK is a smaller and faster library that can capture additional data, especially in low-bandwidth situations where AppMeasurement may have missed a hit or the server call may not have successfully fired when it should have — so slightly higher volume in your metrics across the board could be expected. In a different vein, your visits and visitors may show lower volume. The Web SDK is known to be better at stitching together visitors and sessions, so if you see slightly lower volume in visits and visitors, it’s nothing to be alarmed about. Like we said, a difference of 5% or more warrants a second look, but within that 5% threshold this would be expected — a performance improvement. Link clicks too high and page views too low: link clicks being counted as page views by the Web SDK is a recurring issue we’ve sometimes seen overlooked during an implementation. This is easily addressed with a fix to the hit payload that correctly characterizes link clicks using the web.webInteraction.name and .type keys in the XDM payload. The underlying issue is that we’ve seen some clients using the link-tracking field group to track page views — they’re passing the page name into a field group that is meant for link clicks. A simple payload switch and this issue can be corrected. A spike in referrer instances: AppMeasurement only sets the referrer once per page load. With the Web SDK this is different — it sends the referrer on every event. This can be changed using custom code, but Target and other solutions need that referrer to be present on every event, so if you’re using Target and other solutions, we would not recommend changing the fact that the referrer is set on every event. Inflated serialized event metric values in the Web SDK-specific report suites: serialized event values are report-suite specific, so if you’re comparing a serialized event count in a recently created report suite to one that has been around a lot longer, the existing report suite will be discarding some events that the newer report suite does not. Anyone using serialized metrics, just know that if you create a brand-new report suite, that metric count starts over, so it may not be a direct one-to-one comparison there. The biggest takeaway is that the Web SDK library has much improved performance relative to our old point-solution libraries, and this often accounts for the differences you can expect to see between two otherwise identical implementations. But again, I would call this out: as you’re migrating, socialize this with other business users and analytics users in your company just to get ahead of it. The last thing you want is to migrate and then have someone come to you and say, hey, my reports look different, what’s going on, can I trust the data? If you get ahead of it, a lot of times you can avoid those conversations. All right, and that wraps us up for today, so we’ve got time for Q&A. Thank you, Rachel and Riley, for that excellent overview — I know I learned a lot here myself as well. As we get into the Q&A portion of our session, there will be a quick two-question poll launching to get your feedback and to help shape future sessions, so thank you for your participation there. Okay.
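Referring back to the link-clicks-versus-page-views variance above, the fix is usually just using the right field group for each hit type. A hedged sketch with placeholder values:

```javascript
// Page view: page details go under web.webPageDetails.
alloy("sendEvent", {
  xdm: {
    eventType: "web.webpagedetails.pageViews",
    web: { webPageDetails: { name: "home" } }
  }
});

// Link click: the interaction name/type go under web.webInteraction, so the hit
// is characterized as link tracking rather than a page view.
alloy("sendEvent", {
  xdm: {
    eventType: "web.webinteraction.linkClicks",
    web: {
      webInteraction: {
        name: "header nav - pricing",
        type: "other",                 // "other", "download", or "exit"
        linkClicks: { value: 1 }
      }
    }
  }
});
```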
Checking out the Q&A pod, we definitely had a lot of great questions in there, and I think we got to most of them, so maybe we’ll pick a couple to review here — and feel free to continue to add questions as needed; we’ll try to get to them either live on the call or shortly afterwards. I’ll take a couple of the Target ones, and then, Riley, I’ll ask you to review some from the Analytics side. I saw a good one here: is the flicker-handling script still needed for Target with the AEP Web SDK, as it was with at.js? The answer there is yes. If you’re deploying these libraries asynchronously and looking for ways to reduce flicker of your Target experiences, we have a Web SDK-compatible version of the flicker management JavaScript, so it does require swapping out the flicker management JS. We have Experience League documentation that covers that and provides the code snippet. Next question for Target: as we shift to Adobe Target using the Web SDK, do we need to overhaul existing A/B or Recommendations activities, or can we keep things as they are with the rendering and decisioning part of the activity? This is a really great question, and probably too long an answer for this forum; however, generally, decisioning and activity setup remain the same — the main difference between the implementation methods is really how the data gets into Target. I did paste a link to our Experience League documentation that covers this topic specifically, so thank you for your question there. Okay, Riley, any good examples from the Analytics side we should quickly review? Yeah, we just got two new ones in the chat — one from Colin, one from Jen — and we’ll get to those in just a sec. There was an unanswered one from the very first question about utilizing Adobe Tags to deploy the Web SDK. No, it’s not required to use Adobe Tags to deploy the Web SDK, but it is recommended. The question here is: will utilizing Tags solve cross-device tracking — for example, if the user refreshes cookies, will the first-party ID persist? When we’re talking about cross-device tracking, the best way to do that is going to be utilizing Adobe Experience Platform and Customer Journey Analytics. I know it’s a bit of an upsell and a different topic, but with AEP it’s much, much easier to stitch together known customer IDs — like the customer IDs from your own platforms — with all of the available ECIDs that were provided for that logged-in user. So no, the Web SDK alone won’t solve cross-device tracking; the solution is going to be utilizing AEP and CJA. Rachel, I’ll ask you this question on A4T — actually, I guess I’ll take this one because I might be a little familiar with it. A4T didn’t used to have its own hits; from what they can see, it does now. Are these hits coming through as link clicks, or is there a way to have A4T without any extra hits? So, Jen, A4T has always been sending additional hits into Adobe Analytics, but they are not counted as server calls — it’s just supplemental data tied to specific user IDs. They don’t come through as link clicks, they don’t come through as page views; it’s just that A4T hit type, and when you’re doing reporting, Analysis Workspace takes that into account, so we’re not reporting on the A4T data hits that bring that supplemental Target data into Adobe Analytics.
So there are no concerns with over-tracking or anything there. And Colin: I know a huge advantage of the Web SDK is that you can use first-hit targeting with Analytics audiences; however, to do this you have to fire Target and Analytics at the same time. Do you recommend making Analytics fire at page load start, or Target fire at page load completed? Moses, I think we’ll tag-team this one. From an Analytics perspective, we want to make sure all the variables are available on the page. If we fire Analytics too early on the page, we won’t be able to capture the variables within your data layer, so we could get null or incorrect values passed through to the schema or into the respective Analytics calls. And I can add on to that — thanks, Riley. When the Web SDK first released, yes, you had to fire Target and Analytics at the same time through the Send Event call. More recently, within the last six to eight months, our product teams released the ability to mimic how the classic implementations worked, where you could have Target at page top and Analytics at page bottom. So if that fits your implementation type, you can request personalization as early as possible on the page — again, depending on what data is available at the time of that request that’s needed for Target’s decisioning — followed by data collection with Analytics at page bottom. So yes, we have the option for both. Great question. Rachel, I’ll pass this one to you: in the past we had a JavaScript plugin for Adobe Analytics cross-domain tracking — is there a replacement that can work with the Web SDK? Good question. We don’t have a plugin like we had for Adobe Analytics, but there is an API, or function, you can use for cross-domain tracking in the URL — it essentially appends the user IDs to the URL that’s passed over to the new domain. We can follow up with that, or let me see if I can find the link before we hop off here. Awesome. And while Rachel is gathering that, I do want to remind you that if there were additional questions or items we didn’t cover today, I definitely recommend reaching out to your account team or CSM for additional support. I also want to thank everybody for joining today’s session, remind you to complete the poll for today’s session, and thank Riley and Rachel again for their time and their overview of the essentials of migrating to the Web SDK — thank you both for all the hard work that went into this. And next slide there, Riley, if we could pull up the upcoming webinars. So again, a friendly reminder: thanks everybody for your time today, thank you to our main presenters Rachel and Riley, and we hope to have your company again on future webinars. The links to register for these webinars are in the chat pod today — these are all happening within the next few weeks in May — and this recording will be shared out to the attendees who registered. So thanks again, everyone, for your time today; have a great rest of your day and rest of your week. Thank you.
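For reference on that cross-domain question, the Web SDK exposes a command for appending the visitor identity to an outbound URL. A hedged sketch, with a placeholder destination URL (confirm the command name and behavior against the documentation for the SDK version you deploy):

```javascript
// Append the identity (adobe_mc query parameter) to a cross-domain link so the
// destination domain can pick up the same ECID.
alloy("appendIdentityToUrl", {
  url: "https://www.example-other-domain.com/"
}).then(function (result) {
  window.location = result.url;   // navigate with the identity appended
});
```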

Summary

The meeting centered around the essentials of migrating to the Web SDK, a JavaScript library that offers benefits for interacting with services in a solution-agnostic manner. Key points discussed included migration steps such as configuring permissions, setting up schemas, creating datastreams, and mapping data. Considerations were covered for handling data variances and determining the migration order for clients using both Analytics and Target. Insights were shared on cross-device tracking methods, firing Analytics at page load start, and the significance of utilizing Adobe Tags. The meeting concluded with recommendations to complete the poll and register for upcoming webinars, and with expressions of gratitude to the presenters and participants for their time and engagement.
