ForgeRock Identity Platform
Does not apply to Identity Cloud

FAQ: Clusters in IDM

Last updated Jan 12, 2023

The purpose of this FAQ is to provide answers to commonly asked questions regarding clusters in IDM. Clusters are used to provide high availability in IDM.

Frequently asked questions

Q. What are the main features of a cluster configuration?

A. The main features of a cluster configuration are:

  • It keeps the keystore/truststore in sync between nodes and prevents conflicts. See IDM in a cluster for further information.
  • It provides a recovery mechanism for persisted schedules in the event of node failures.

Q. How does the keystore get loaded in a clustered environment?

A. In a clustered environment, you must copy the initialized keystore to each instance in the cluster or point to a single, centralized keystore. See IDM in a cluster for more information. The keystore is not persisted in the repository.

Q. What does IDM's cluster management service do?

A. The cluster management service performs the following tasks:

  • Detects when a node has failed or timed out.
  • Notifies other components (for example, the scheduler) about a node failure.
  • Provides a REST interface for getting information about the cluster. See Manage Nodes Over REST for further information.
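
For example, you can list the registered cluster nodes over REST. This is an illustrative command only; the hostname, port, and default admin credentials are assumptions for a local evaluation deployment:

```shell
# List all nodes registered in the cluster (default admin credentials assumed).
curl --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request GET \
     "http://localhost:8080/openidm/cluster"
```

The response includes each node's instance ID and state, which is useful when verifying that failed nodes have been detected.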

The logic for managing the cluster is defined in the cluster.json file. See IDM in a cluster for further information.
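
For reference, a representative conf/cluster.json looks like the following sketch (the timeout values are in milliseconds and are illustrative; check the defaults shipped with your IDM version):

```json
{
    "instanceId" : "&{openidm.node.id}",
    "instanceTimeout" : "30000",
    "instanceRecoveryTimeout" : "30000",
    "instanceCheckInInterval" : "5000",
    "instanceCheckInOffset" : "0",
    "enabled" : true
}
```

Each node must have a unique openidm.node.id; the check-in interval controls how often a node reports that it is still alive, and the timeout values control when other nodes treat it as failed.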

Q. Do nodes in a cluster share the same configuration?

A. Yes, the configuration should always be the same; this is achieved by directing each instance to the same database.

Configuration is found in the following three locations:

  • Database - this configuration is always the same since it is shared. It is initially loaded from the filesystem configuration on the first node in the cluster and then propagated to the additional nodes.
  • Configuration on filesystem - this configuration should be identical on all nodes; typically, each instance is deployed from the same project configuration.
  • Configuration in memory - this configuration could diverge if an instance is cut off from its cluster by a networking issue or a misconfigured load balancer. In that case, changes from the database may not be detected, and the configuration in memory will not be updated.

Q. How do I upgrade nodes in a cluster?

A. See Upgrade a Clustered Deployment for the necessary steps.

Q. Can I configure a cluster over multiple subnets or data centers?

A. IDM clustering requires a shared repository for all nodes in the cluster; there is otherwise no direct communication between the nodes. Configuring a cluster across multiple subnets is therefore effectively the same as configuring it on a single subnet. Ensure you still observe the following cluster requirements (from the documentation):

  • Each instance in the cluster must be configured to use the same repository, that is, the database connection configuration file (datasource.jdbc-default.json) for each instance must point to the same port number and IP address for the database.
  • In a clustered environment, each instance points to the same external repository. If the database is also clustered, IDM points to the database cluster as a single system.
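
As a hedged sketch, the relevant part of datasource.jdbc-default.json might look like this on every node (the driver, hostname, and credentials shown are illustrative; the point is that the JDBC URL must be identical across all nodes):

```json
{
    "driverClass" : "org.postgresql.Driver",
    "jdbcUrl" : "jdbc:postgresql://repo.example.com:5432/openidm",
    "databaseName" : "openidm",
    "username" : "openidm",
    "password" : "openidm"
}
```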

See IDM in a cluster for further information.


Clustering across multiple subnets has not been configured or tested by us; it is strongly recommended that you test this configuration in a development environment first to ensure it meets your needs.

Q. How do I stop liveSync running the same scheduled job on multiple nodes in a cluster?

A. You must ensure the persisted property is set to true in the schedule file (for example, the schedule-livesync.json file):

{ "enabled" : true, "persisted" : true, ... }

If this property is excluded or set to false, each node runs its own copy of the schedule, and liveSync will process the same changes concurrently.
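
A fuller persisted liveSync schedule might look like the following sketch (the cron expression and source path are illustrative; adjust them for your connector):

```json
{
    "enabled" : true,
    "persisted" : true,
    "type" : "cron",
    "schedule" : "0/30 * * * * ?",
    "invokeService" : "provisioner",
    "invokeContext" : {
        "action" : "liveSync",
        "source" : "system/ldap/account"
    }
}
```

With persisted set to true, the schedule is stored in the repository, so only one node at a time acquires and runs each scheduled job.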

See Configure Persistent Schedules for further information.

Q. How do I ensure the scheduler runs consistently in a cluster setup?

A. When you run the scheduler in a cluster setup, you should store the scheduler file on a shared drive, which is accessible to all nodes. This configuration ensures the scheduler will run regardless of which node it is invoked from.

If this is not possible, and the scheduler file is stored on one node, you will notice it does not run if it is invoked from a different node (because the file is not available). To avoid this situation, you should make the scheduler run only on the node where the file is located. You can do this as follows:

  1. Disable the scheduler on the nodes where you do not want it to run by setting enabled to false. You can do this via REST or the Admin UI: Schedule Tasks and Events.
  2. Use Property Value Substitution to specify which node should handle all scheduled jobs. See Property Value Substitution for further information.
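
For example, assuming a property named schedule.enabled (the property name is illustrative), you could set it per node in conf/boot/boot.properties:

```
# On the node that should run scheduled jobs:
schedule.enabled=true

# On every other node:
schedule.enabled=false
```

Each schedule file can then reference the property, for example "enabled" : "&{schedule.enabled}", so only the designated node picks up the job.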

Q. Why am I seeing errors about current Object revision being different than expected during synchronization?

A. When you are synchronizing changes in a clustered environment, you may see errors such as:

Update rejected as current Object revision 293 is different than expected by caller 291, the object has changed since retrieval.

This could happen in the following scenario:

  1. Node 1 starts a sync task and reads the source user that has changed.
  2. Node 2 starts a sync task and completes it.
  3. Node 1 then finishes processing but cannot save because the object has changed since it was read, resulting in the revision clash.

This can also happen when both nodes run liveSync against the same system object, causing the liveSync token version to change before one of the nodes has a chance to update it.
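
The rejection is standard optimistic concurrency control: an update succeeds only if the caller read the latest revision. The following minimal sketch (illustrative, not IDM's actual code) reproduces the two-node scenario above using an in-memory dictionary in place of the repository:

```python
# Minimal sketch of an optimistic-concurrency (revision) check.
# The in-memory "store" stands in for the shared repository.

class ConflictError(Exception):
    pass

store = {"user1": {"_rev": "291", "mail": "user1@example.com"}}

def update(object_id, expected_rev, changes):
    """Apply changes only if the caller read the latest revision."""
    current = store[object_id]
    if current["_rev"] != expected_rev:
        raise ConflictError(
            "Update rejected as current Object revision %s is different "
            "than expected by caller %s, the object has changed since "
            "retrieval." % (current["_rev"], expected_rev))
    current.update(changes)
    current["_rev"] = str(int(current["_rev"]) + 1)

# Node 1 reads the object at revision 291.
rev_seen_by_node1 = store["user1"]["_rev"]

# Node 2 completes its sync task first, bumping the revision to 292.
update("user1", "291", {"mail": "new@example.com"})

# Node 1 now tries to save its result and is rejected.
try:
    update("user1", rev_seen_by_node1, {"mail": "stale@example.com"})
except ConflictError as error:
    print(error)
```

Node 1's stale write is rejected rather than silently overwriting node 2's result, which is exactly the behavior the error message reports.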

Provided you are not experiencing any other issues, such as users not being synced correctly, you can ignore these errors.

See Also

Administering and configuring IDM

Copyright and Trademarks Copyright © 2023 ForgeRock, all rights reserved.