IDM 7.2.2

Scheduled tasks across a cluster

In a clustered environment, the scheduler service looks for pending jobs and handles them as follows:

  • Non-persistent (in-memory) jobs execute only on the node that created them.

  • Persistent scheduled jobs are picked up and executed by any available node in the cluster that has been configured to execute persistent jobs.

  • Jobs that are configured as persistent but not concurrent run on only one instance in the cluster at a time. Such a job does not run again, at its scheduled time, on any instance in the cluster until the current run is complete.

    For example, a reconciliation operation that runs for longer than the interval between scheduled runs does not trigger a duplicate job while it is still running (see the sample schedule configuration after this list).
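As an illustration, a persistent, non-concurrent reconciliation schedule might look like the following sketch. The file name (for example, project-dir/conf/schedule-reconcile.json) and the mapping name systemLdapAccounts_managedUser are hypothetical placeholders:

    {
        "enabled" : true,
        "persisted" : true,
        "concurrentExecution" : false,
        "type" : "cron",
        "schedule" : "0 0/15 * * * ?",
        "invokeService" : "sync",
        "invokeContext" : {
            "action" : "reconcile",
            "mapping" : "systemLdapAccounts_managedUser"
        }
    }

With "persisted" set to true, any node configured to execute persistent schedules can claim the job. With "concurrentExecution" set to false, a run that exceeds the 15-minute interval suppresses the next trigger until it completes.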

In clustered environments, the scheduler service obtains an instance ID, as well as check-in and timeout settings, from the cluster management service (defined in the project-dir/conf/cluster.json file).
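For reference, a cluster.json file typically resembles the following sketch, based on the default configuration. The timeout and check-in values are in milliseconds, and the instance ID is usually resolved from a property such as openidm.node.id set in boot.properties:

    {
        "instanceId" : "&{openidm.node.id}",
        "instanceTimeout" : "30000",
        "instanceRecoveryTimeout" : "30000",
        "instanceCheckInInterval" : "5000",
        "instanceCheckInOffset" : "0",
        "enabled" : true
    }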

IDM instances in a cluster claim jobs in a random order. If one instance fails, the cluster manager automatically reassigns unstarted jobs that were claimed by that failed instance.

For example, if instance A claims a job but does not start it, and then loses connectivity, instance B can claim that job. If instance A claims a job, starts it, and then loses connectivity, other instances in the cluster cannot claim that job. If the failed instance does not complete the task, the next action depends on the misfire policy, defined in the scheduler configuration.
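The misfire policy is a per-schedule setting. For example, adding the following property to a schedule configuration (such as the sketch shown earlier) causes a missed run to fire once, immediately, when the scheduler recovers:

    "misfirePolicy" : "fireAndProceed"

The alternative value, doNothing, skips the missed run and waits for the next scheduled trigger.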

You can override this default job-distribution behavior with an external load balancer.

If a liveSync operation leads to multiple changes, a single instance processes all changes related to that operation.

Because all nodes in a cluster read their configuration from a single repository, you must use an instance’s resolver/boot.properties file to define a specific scheduler configuration for that instance. Settings in the boot.properties file are not persisted in the repository, so you can use this file to set different values for a property across different nodes in the cluster.

For example, if your deployment has a four-node cluster and you want only two of those nodes to execute persisted schedules, you can disable persisted schedules in the boot.properties files of the remaining two nodes. If you set these values directly in the scheduler.json file, the values are persisted to the repository and are therefore applied to all nodes in the cluster.
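For example, the boot.properties file on each of the two excluded nodes (hypothetically, node3 and node4) would contain:

    # resolver/boot.properties on the two nodes that must not run persisted schedules
    openidm.scheduler.execute.persistent.schedules=false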

By default, instances in a cluster can execute persistent schedules. The boot.properties setting that governs this behavior is:

openidm.scheduler.execute.persistent.schedules=true

To prevent a specific instance from claiming pending jobs or processing clustered schedules, set openidm.scheduler.execute.persistent.schedules=false in that instance's boot.properties file.

Changing the value of the openidm.scheduler.execute.persistent.schedules property in the boot.properties file changes which scheduler manages scheduled tasks on that node. Because the persistent and in-memory schedulers are managed separately, two separate schedules can end up with the same schedule name.

For more information about persistent schedules, see Persistent schedules.
