
Introduction to AEM as a Cloud Service

Learn how to think differently about AEM as a Cloud Service implementations.

Transcript
This is “AEM as a Cloud Service. Introduction: Architecture and Thinking Differently.” I am Darin Kuntze, Senior Cloud Architect, and I’m going to give you a quick presentation today. On the agenda, we’re going to go over an overview, the architecture, and a section I like to call “thinking differently,” to show the kinds of deltas and differences from what you’re probably used to when doing an implementation on AEM. We’ll follow up by setting the table for what’s to come, with a short description of mutable versus immutable content.
So, the history and journey that AEM has been making: you may have noticed it as you looked at the different releases over the years. In 2017, we started separating the content and apps into what’s called the Composite Repository; you may not have noticed. In 2018 came Twelve-Factor and containerization, so we started building out the system to make it container-ready. In 2019, we really kicked it into high gear and federated some of the services and the control system, extracting some of the key services out. And in 2020, we launched the fully cloud-native AEM as a Cloud Service. That leads us to a future where we’re able to continuously analyze, roll out, measure, and refactor behind the scenes, without you worrying about the upgrade cycles or different mechanisms you may have had to plan out in the past. Adobe is taking care of those types of things.
The key benefits we see in AEM as a Cloud Service are that it’s always current; it’s modular, scalable, and global, so we’re always monitoring performance and able to scale up as needs arise; and there’s performance resiliency, so it’s very redundant and we have active monitoring going on every minute of the day, so you’re always assured of performance and the resiliency of the platform. It’s also secure by default: all those activities that you would typically deal with on an on-prem system, or even a managed services system, around penetration testing and various security practices, are handled by default on AEM as a Cloud Service. Let’s look at some of the bullets underneath each one of these. Going back to always current: there’s a lower TCO with automatic product updates, so you’re finally able to leverage capabilities as they come out, and you’re more agile with deployment cycles now that we have CI/CD built in and self-service provisioning, so there’s little or no human interaction in provisioning additional environments, systems, or repositories. Under modular, scalable, and global, we talked about performance, but we also have a dedicated CDN and auto-scaling. We also have faster asset ingestion and renditions with a dedicated microservice: instead of the AEM application itself rendering those renditions and doing the processing, it’s handled by a dedicated microservice, so you get faster throughput and faster processing, and those ingestion activities have essentially no impact on the performance of the system. Performance resiliency means geo-redundancy, so you can place your different environments in different regions, and high reliability with self-healing: the system is able to recognize if a pod or instance is behaving badly and replace it automatically.
It replaces those, and adds more if necessary, depending on the performance requirements. There’s a backup and recovery strategy: the system is automatically backed up, and the recovery mechanism is built in. All these pieces are integral to making a very performant and resilient system. On the secure-by-default side, we have enterprise-level isolation, so each of your programs runs individually; you’re not sharing infrastructure with other customers or environments, and they’re pre-configured with security rules that are very tight and locked down. We also provide authentication with our identity management system, IMS, so you can use your Adobe ID to log in to AEM, just like you would for Analytics or Target. And it’s compliant with industry-recognized security standards, which I believe were on the previous slide. Now let’s look at architecture. This slide shows the breakdown of the different services that Cloud Service provides. It’s not a completely exhaustive list, but it gives you a pretty good idea. From previous on-prem and managed services implementations, you’ll instantly recognize the content repository service; that’s the JCR, if you will. You have the different nodes per environment, publish and author, plus a shared one for blobs. Then there’s the new stuff down on the bottom-right, coming from the I/O Runtime: the Asset Compute Service, able to process assets, renditions, and so on as a service instead of on the repository itself, and testing services for various testing activities, also running on I/O Runtime. The replication service, on the top-left, is new in that it is unlike the replication agents you would typically see in AEM. This replication service uses a pipeline service to distribute content to the different nodes as needed.
The CI/CD service is key to getting content, tests, and other automation and orchestration onto the different instances you’ll be using. The identity management service, again, as mentioned before, gives you the ability to use IMS and your Adobe ID or your corporate ID to log in to the system and the various other applications like Target, Analytics, and so on. In the center is the orchestration service itself, and that’s one of the key components of AEM as a Cloud Service: it gives us the ability to run each of these different pieces in its own container, shown with the dashed lines around them. For example, the different maintenance jobs run in their own containers, and the resulting artifacts can affect any number of other containers. There’s the author tier with multiple author containers, the publish tier where you can see the dispatcher/publisher combinations, and everything fronted by the CDN service at the top. Since we’re talking about containers, here are some of the key points of packaging AEM into containers. One, it allows for easier rollback to previous versions, so if problems occur, in most cases it automatically rolls back. Rolling updates: we’re able to roll out key features and bug fixes with zero downtime. Truly dynamic auto-scaling: the health metrics of the different containers are monitored, so if need be, we can scale up or down based on those metrics. Automated container building and testing: the CI/CD pipelines are all automated and behind the scenes, so everything is tested and deployed without any user interaction. And there’s consistent and reliable testing due to the immutable nature of the containers: those containers have what’s called immutable, basically read-only, content.
Each of the images those containers run is consistent; it’s the same for each publish instance, for example, or the same for each author instance.
It’s always current. One of the key features of being always current and always getting the latest releases is continuous innovation. When you see a new feature at a conference and wonder how long it’s going to take to upgrade or get on the latest path for those innovations, with continuous upgrades and maintenance, you get on the new release automatically when it comes out. Maintenance-type releases come out continuously on a periodic basis, once a week or multiple times a week, while the major, more feature-based releases come out on a monthly basis. Next, continuous integration and delivery. When we talk about CI/CD, this is the process by which, in a basic way, we build the images for release to your environment. The release orchestration service takes the key Adobe AEM Cloud Service release and your custom code, merges them together in the custom build, step number one, and builds an immutable, read-only image out of those two things. That image is the one that gets released to the different environments and used for the different run modes and so on. Once you have that image, it goes through steps two, three, four, and five: going to stage, testing, performance testing, and then eventually production deployment. Next, let’s get to thinking differently. In the cloud-native world, things are just different. Capacity is scaled horizontally based on actual traffic and activity. So instead of thinking about building a service that may require more memory, more RAM, or more disk to function properly, you’ve got to think about these environments scaling horizontally; they’re all exactly the same.
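The image build and promotion flow described above can be sketched conceptually. This is an illustrative model only, not an actual Adobe API: the class and stage names are assumptions chosen to mirror the talk's description of merging the base AEM release with customer code into a single read-only image that moves through each environment unchanged.

```python
# Conceptual sketch of the Cloud Manager release flow described in the talk.
# All names here are illustrative, not part of any real Adobe API.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen=True mirrors the image's read-only (immutable) nature
class ReleaseImage:
    aem_release: str       # the Adobe AEM base release
    customer_code: str     # your custom code artifact

    def tag(self) -> str:
        return f"{self.aem_release}+{self.customer_code}"


def run_pipeline(aem_release: str, customer_code: str) -> list:
    # Step 1: the custom build merges the base release with customer code
    # into a single immutable image.
    image = ReleaseImage(aem_release, customer_code)
    log = [f"built {image.tag()}"]
    # Steps 2-5: the *same* image is promoted through each environment;
    # nothing is rebuilt per environment, only run modes differ.
    for stage in ("stage deployment", "stage testing",
                  "performance testing", "production deployment"):
        log.append(f"{stage}: {image.tag()}")
    return log


print(run_pipeline("2024.1.1234", "mysite-1.0.0")[0])
# prints: built 2024.1.1234+mysite-1.0.0
```

The frozen dataclass is the point of the sketch: once built, the image cannot be modified, only replaced by running the pipeline again.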
If you need more capacity, capacity scales horizontally, so you might get another publish instance or another author instance based on what activity and traffic are doing. When you’re designing your applications, think about that, versus just saying we need to throw more CPU at it. The AEM pods themselves, since they’re based on a container architecture, can be recycled and restarted at any time, so when you’re building content or an application, keep that in mind; any resiliency you can build into the application will benefit you greatly. So think in terms of containerization concepts. The author tier itself is clustered, with an eventually consistent model that’s transparent to the end user. The capacity and traffic monitoring we see on the publish tier applies to author as well, so there could be use cases where an additional author instance is spun up and used in your authoring capacity. Also think about the updates happening automatically: when you’re planning your builds, make sure you’re up to date with the different API jars so you can take advantage of the latest and greatest updates. Cloud Manager is used for managing the program environments, so a lot of this is self-serve. There are a few things that still require additional activity, but as more updates happen, more of these self-service activities will be released in Cloud Manager.
Also, the Adobe IMS system is used for single sign-on to all cloud applications, so you log in to AEM the same way you log in to Analytics. In fact, since it’s single sign-on, once you log in to one of those applications, you’re able to log in to AEM as a Cloud Service. Monitoring and ops are built in, so you don’t have to build all those different monitoring and log consumption tasks; they’re part of the system. And many of the heavy tasks that you’d typically see on an on-premise or managed services instance are done through dedicated microservices or offloaded to an additional container, so there’s minimal impact on the running application. Now, thinking differently as a cloud developer. One of the things you should really avoid is saving any stateful objects to the file system. Think about user-generated content: you don’t want to save any of that content to the file system. Use Sling jobs for any background, long-running tasks. As I mentioned before, a lot of the heavy-lifting tasks that AEM would typically run are now running as containerized jobs. You can also use these Sling jobs for resiliency, since they have the ability to resume if, for instance, a pod is restarted or shut down; the job can be picked up from the queue and run on a different instance.
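The resiliency idea behind Sling jobs, that a queued job survives a pod restart and gets picked up elsewhere, can be illustrated with a small sketch. Note the real Sling jobs API is Java (`org.apache.sling.event.jobs`); this Python model only demonstrates the pattern: a job isn't removed from the persisted queue until it finishes, so a crashed worker leaves it there for the next one.

```python
# Conceptual sketch of why queued jobs survive a pod restart.
# This mimics the *idea* behind Sling jobs, not the actual (Java) Sling API.
from collections import deque


class JobQueue:
    def __init__(self):
        self.pending = deque()   # in real AEM this queue is persisted, not in-memory
        self.done = []

    def add(self, job):
        self.pending.append(job)

    def process(self, worker, fail_on=None):
        """Process jobs; a 'crashed' worker leaves its current job in the queue."""
        while self.pending:
            job = self.pending[0]        # peek only: don't remove until finished
            if job == fail_on:
                return                   # simulate the pod being recycled mid-job
            self.pending.popleft()       # job finished; now safe to remove
            self.done.append(f"{job} by {worker}")


q = JobQueue()
for j in ("render-pdf", "reindex", "import-assets"):
    q.add(j)
q.process("pod-1", fail_on="reindex")   # pod-1 finishes render-pdf, then "dies"
q.process("pod-2")                      # pod-2 resumes with reindex still queued
```

Because the job is only dequeued after completion, nothing is lost when pod-1 goes away; pod-2 simply picks up where the queue left off. That is the resiliency you get for free by routing long-running work through Sling jobs instead of ad-hoc threads.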
We also have the Developer Console. The Developer Console abstracts a lot of the tools and information you would typically grab right out of AEM itself. Outside of the development environment, there is no CRXDE Lite, no access to the OSGi console, and no real manual package installation, so you’re not in there pushing out service packs or anything like that. The logs can’t be tailed directly without going through a command-line tool, but there is access to the logs through the Developer Console. An interesting point here is that there’s also a local Quickstart SDK, basically a local AEM that you can use for local development, but it should be noted that it’s not an exact one-to-one mapping to Cloud Service. There are instances where you can write to certain areas on the local Quickstart that you couldn’t on Cloud Service, so it’s not completely analogous to the environment, but it does have the latest and greatest code merged into it. One point related to the stateful objects mentioned before: it makes sense that there’s no reverse replication, since UGC isn’t recommended, and no creating custom replication agents. There’s also no streaming of binaries, so you shouldn’t be picking up the assets themselves off the file system and streaming them back to the client; all of that pipes through the dispatcher and the CDN to the end user.
Let’s look at a quick example and definition of mutable versus immutable.
Mutable is defined as something that’s liable to change; it’s basically writable. Typical cases of mutable content are the default content, so your content trees, your different nodes, assets, and so on. Search index definitions are definitely mutable. ACLs and permissions, since they can change based on the user or group. And of course, the service users and the user groups themselves. Immutable means it cannot change, and you can see the different paths here for mutable versus immutable. It should be noted that while etc appears up there at the top under mutable, we don’t recommend it at all; it’s not a best practice to change anything in the etc path. Most of the nodes and content you would typically change under etc have been moved to other directories, so just an FYI there. The immutable content cannot change, so that’s apps and libs. Typically, the application you’re creating will live in the apps directory, but it will be part of that build process we described earlier: taking the AEM application code, merging it with your customer code, and building it into the image itself. When that image is built, your code goes into those apps and libs directories.
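The mutable/immutable split above boils down to a path prefix rule. As a minimal sketch, assuming the path lists from the talk (apps and libs baked into the image; content, index definitions, users, and the legacy etc path remaining writable at runtime), the distinction could be expressed like this:

```python
# Illustrative classifier for the repository paths discussed above.
# The path lists follow the talk: /apps and /libs are immutable (baked
# into the image); the rest remains mutable at runtime. The exact mutable
# list here is a simplification, not an exhaustive one.
IMMUTABLE_ROOTS = ("/apps", "/libs")
MUTABLE_ROOTS = ("/content", "/conf", "/etc", "/home", "/var")


def is_immutable(path: str) -> bool:
    """True if the path lives in a subtree that is baked into the image."""
    return any(path == root or path.startswith(root + "/")
               for root in IMMUTABLE_ROOTS)


print(is_immutable("/apps/mysite/components/header"))  # True  - part of the image
print(is_immutable("/content/mysite/en"))              # False - writable at runtime
```

Anything on the immutable side can only be changed by building and deploying a new image through the pipeline; anything on the mutable side is regular repository content.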
And that flows into bullet number one there: immutable content is baked into the image, on both author and publish nodes, and it ensures that each node is identical. Again, if we’re scaling up to multiple publish nodes or multiple author nodes, each one of those nodes starts off with exactly the same image. Changes to code and configuration can only be made through the Cloud Manager pipeline: going back to that pipeline, taking the existing AEM version, merging it with your code, building an image out of it, and deploying it after running the various tests. Well, that wraps up our introduction to Cloud Service.