Use case: limit throughput with External Data Sources and Custom Actions
Description of the use case
Adobe Journey Optimizer allows practitioners to send API calls to external systems through the use of Custom Actions and Data Sources.
This can be done with:
- Data Sources: to gather information from external systems and use it in the journey context, for example to get weather information about the profile's city and build a dedicated journey flow based on it.
- Custom Actions: to send information to external systems, for example to send emails through an external solution using Journey Optimizer's orchestration capabilities alongside profile information, audience data and journey context.
If you're working with external data sources or custom actions, you may want to protect your external systems by limiting journey throughput, which can reach up to 5,000 instances/second for unitary journeys and up to 20,000 instances/second for audience-triggered ones.
For custom actions, throttling capabilities are available at the product level. Refer to this page.
For external data sources, you can define capping limits at the endpoint level through Journey Optimizer's Capping APIs to avoid overwhelming those external systems. However, once the limit is reached, all remaining requests are dropped rather than deferred. In this section, you will find workarounds that you can use to optimize your throughput.
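To see why capping alone is not enough, the following sketch (hypothetical logic, not an actual Journey Optimizer API) models a hard capping limit: within each second, requests beyond the cap are dropped, not queued for later.

```python
# Hypothetical illustration of an endpoint-level capping limit: excess
# requests within a second are dropped, never deferred to the next second.

def apply_capping(requests_per_second, cap):
    """For each second, return (sent, dropped) under a hard capping limit."""
    results = []
    for incoming in requests_per_second:
        sent = min(incoming, cap)        # at most `cap` requests go through
        dropped = incoming - sent        # the rest are lost
        results.append((sent, dropped))
    return results

# A journey reading 500 profiles/second against an endpoint capped at 100 req/s:
per_second = apply_capping([500, 500, 500], cap=100)
total_sent = sum(s for s, _ in per_second)
total_dropped = sum(d for _, d in per_second)
# Only 300 requests go through while 1,200 are lost, which is why shaping
# throughput upstream is preferable to relying on capping alone.
```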
For more information on how to integrate with external systems, refer to this page.
Implementation
For audience-triggered journeys, you can define the reading rate of your Read Audience activity, which directly determines journey throughput. Read more
You can modify this value from 500 to 20,000 instances per second. If you need to go lower than 500/s, you can also add "percentage split" conditions with Wait activities to split your journey into multiple branches and have each one execute at a specific time.
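The branch layout can be derived from the read rate and the external system's capacity. This is a sketch under stated assumptions (the `plan_branches` helper and the even split across branches are illustrative, not a Journey Optimizer feature):

```python
import math

def plan_branches(read_rate, external_capacity, population, stagger_seconds):
    """Sketch: derive a percentage-split / Wait layout that keeps the
    per-second load on an external system at or below its capacity."""
    branches = math.ceil(read_rate / external_capacity)   # branches needed
    split_pct = 100 / branches                            # even percentage split
    per_branch_per_sec = read_rate / branches             # load per branch
    read_duration = math.ceil(population / read_rate)     # seconds to read all
    waits = [stagger_seconds * (i + 1) for i in range(branches)]
    return branches, split_pct, per_branch_per_sec, read_duration, waits

# 500 profiles/s read rate, 10,000 profiles, external system at 100 req/s,
# branches staggered in 30-second increments:
branches, split, per_branch, duration, waits = plan_branches(500, 100, 10_000, 30)
```

With these inputs the plan is 5 branches at a 20% split, 100 profiles/second per branch, 20 seconds of reading, and waits of 30, 60, 90, 120 and 150 seconds. Note that the stagger (30 s) must be at least the read duration (20 s) so branch windows never overlap.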
Let's take the example of an audience-triggered journey working with a population of 10,000 profiles and sending data to an external system supporting 100 requests/second.
- You can define your Read Audience activity to read profiles at a throughput of 500 profiles/second, meaning it will take 20 seconds to read all your profiles: on second 1 you read 500 of them, on second 2 another 500, and so on.
- You can then add a "percentage split" Condition activity with a 20% split so that, each second, 100 profiles enter each branch.
- After that, add a Wait activity with a specific timer in each branch. Here we've staggered the timers in 30-second increments (30 s, 60 s, and so on). At every second, 100 profiles flow into each branch.
- On branch 1, profiles will wait for 30 seconds, meaning that:
- on second 1, 100 profiles will wait until second 31
- on second 2, 100 profiles will wait until second 32, etc.
- On branch 2, profiles will wait for 60 seconds, meaning that:
- on second 1, 100 profiles will wait until second 61 (1′01″)
- on second 2, 100 profiles will wait until second 62 (1′02″), etc.
- Since reading all profiles takes at most 20 seconds, there will be no overlap between branches, second 20 being the last one where profiles flow into the condition. Between second 31 and second 51, all profiles in branch 1 will be processed; between second 61 (1′01″) and second 81 (1′21″), all profiles in branch 2 will be processed, and so on.
- As a guardrail, you can also add a sixth branch so that fewer than 100 profiles enter each branch per second (about 83 with six branches), which is especially useful if your external system supports only 100 requests/second.
- As an additional guardrail, you can also use Capping capabilities.