IDM 7.3.1

IDM cluster configuration

Setting up multiple IDM instances in a cluster involves the following main steps:

  1. Ensure that each instance is shut down.

  2. Configure each instance to use the same external repository and the same keystore and truststore.

  3. Set a unique node ID for each instance.

  4. Configure the entire clustered system to use a load balancer or reverse proxy.

To configure an IDM instance as a part of a clustered deployment, follow these steps:

  1. Shut down the server if it is running.

  2. If you have not already done so, set up a supported repository, as described in Select a repository.

    Each instance in the cluster must be configured to use the same repository; that is, the database connection configuration file (datasource.jdbc-default.json) for each instance must point to the same port number and IP address for the database.

    The configuration file datasource.jdbc-default.json must be the same on all nodes.

    Do not run the data definition language script file in Select a repository for each instance in the cluster—run it just once to set up the tables required for IDM.

    If an instance is not participating in the cluster, it must not share a repository with nodes that are participating in the cluster. Having non-clustered nodes use the same repository as clustered nodes will result in unexpected behavior.
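As an illustration, a shared datasource.jdbc-default.json might resemble the following sketch. The driver, host name, credentials, and pool settings are placeholders for your own repository details, not values prescribed by this guide; the point is that every node carries an identical copy pointing at the same database host and port:

```json
{
    "driverClass" : "com.mysql.cj.jdbc.Driver",
    "jdbcUrl" : "jdbc:mysql://repo.example.com:3306/openidm?allowMultiQueries=true&characterEncoding=utf8",
    "databaseName" : "openidm",
    "username" : "openidm",
    "password" : "openidm",
    "connectionTimeout" : 30000,
    "connectionPool" : {
        "type" : "hikari",
        "minimumIdle" : 20,
        "maximumPoolSize" : 50
    }
}
```

Because all nodes connect to repo.example.com:3306 rather than to a local database, any node can safely be added or removed without changing where cluster state lives.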
  3. Specify a unique node ID (openidm.node.id) for each instance, in one of the following ways:

    • Set the value of openidm.node.id in the resolver/boot.properties file of the instance. For example: openidm.node.id=node1
    • Set the value of openidm.node.id in the OPENIDM_OPTS environment variable and export that variable before starting the instance. You must include the JVM memory options when you set this variable. For example:

      export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dopenidm.node.id=node1"
      ./startup.sh
      Executing ./startup.sh...
      Using OPENIDM_HOME:   /path/to/openidm
      Using PROJECT_HOME:   /path/to/openidm
      Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Dopenidm.node.id=node1
      Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
      Using boot properties at /path/to/openidm/resolver/boot.properties
      -> OpenIDM version "7.3.1"
      OpenIDM ready

      You can set any value for the node ID, as long as the value is unique within the cluster. The cluster manager detects unavailable instances by their node ID.

      You must set a node ID for each instance, otherwise the instance fails to start. The default resolver/boot.properties file sets the node ID to node1.
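When provisioning many hosts, one common way to guarantee unique node IDs is to derive them from the host name. The following is a minimal sketch, assuming host names are unique within the cluster; the file path is an illustrative stand-in for /path/to/openidm/resolver/boot.properties:

```shell
#!/bin/sh
# Sketch: derive openidm.node.id from the short host name so each host
# gets a distinct, stable node ID. BOOT_PROPS is illustrative; in a real
# deployment it would be /path/to/openidm/resolver/boot.properties.
BOOT_PROPS=/tmp/boot.properties
NODE_ID="$(hostname -s)"

# Remove any existing openidm.node.id line, then append the new one.
touch "$BOOT_PROPS"
grep -v '^openidm\.node\.id=' "$BOOT_PROPS" > "$BOOT_PROPS.tmp" || true
mv "$BOOT_PROPS.tmp" "$BOOT_PROPS"
echo "openidm.node.id=$NODE_ID" >> "$BOOT_PROPS"

# Show the resulting setting.
grep '^openidm\.node\.id=' "$BOOT_PROPS"
```

Because the ID is stable across restarts, the cluster manager can correctly recognize a returning node rather than treating it as a new one.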

  4. Set the cluster configuration in the conf/cluster.json file.

    By default, configuration changes are persisted in the repository so changes that you make in this file apply to all nodes in the cluster.

    The default version of the cluster.json file assumes that the cluster management service is enabled:

      {
          "instanceId" : "&{openidm.node.id}",
          "instanceTimeout" : 30000,
          "instanceRecoveryTimeout" : 30000,
          "instanceCheckInInterval" : 5000,
          "instanceCheckInOffset" : 0,
          "enabled" : true
      }

    instanceId

    The ID of this node in the cluster. By default, this is set to the value of the instance's node ID (openidm.node.id) that you set in the previous step.


    instanceTimeout

    The length of time (in milliseconds) that a member of the cluster can be "down" before the cluster manager considers that instance to be in recovery mode.

    Recovery mode indicates that the instanceTimeout of an instance has expired, and that another instance in the cluster has detected that event. The scheduler component of the second instance then moves any incomplete jobs into the queue for the cluster.


    instanceRecoveryTimeout

    Specifies the time (in milliseconds) that an instance can be in recovery mode before it is considered to be offline.

    This property sets a limit after which other members of the cluster stop trying to access an unavailable instance.


    instanceCheckInInterval

    Specifies the frequency (in milliseconds) with which instances check in with the cluster manager to indicate that they are still online.


    instanceCheckInOffset

    Specifies an offset (in milliseconds) for the check-in timing, used when multiple instances in a cluster are started simultaneously.

    The check-in offset prevents multiple instances from checking in simultaneously, which would strain the cluster manager.


    enabled

    Specifies whether the cluster management service is enabled when you start the server. This property is set to true by default.

    If you disable the cluster manager while clustered nodes are running (by setting "enabled" : false in an instance’s cluster.json file), the following happens:

    • The cluster manager thread that causes instances to check in is deactivated.

    • Nodes in the cluster no longer receive cluster events, which are used to broadcast configuration changes made over the REST interface.

    • Nodes are unable to detect and attempt to recover failed instances within the cluster.

    • Persisted schedules associated with failed instances cannot be recovered by other nodes.
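To make the interplay of the timing properties concrete, the following sketch is not IDM code; it simply works through the arithmetic implied by the property definitions above, using the default values, to show the worst-case time before a failed node is noticed and then declared offline:

```shell
#!/bin/sh
# Sketch of the failure-detection timeline implied by the default
# cluster.json timing values (all in milliseconds). Assumption (not
# stated by the product docs): detection can lag by up to one check-in
# interval, since absence is only noticed at the next check-in sweep.
INSTANCE_TIMEOUT=30000            # down this long -> recovery mode
INSTANCE_RECOVERY_TIMEOUT=30000   # in recovery this long -> offline
CHECK_IN_INTERVAL=5000            # how often live nodes check in

WORST_RECOVERY_DETECTION=$((INSTANCE_TIMEOUT + CHECK_IN_INTERVAL))
WORST_OFFLINE=$((WORST_RECOVERY_DETECTION + INSTANCE_RECOVERY_TIMEOUT))

echo "worst-case time to enter recovery mode: ${WORST_RECOVERY_DETECTION} ms"
echo "worst-case time to be declared offline: ${WORST_OFFLINE} ms"
```

With the defaults, a node can therefore be unreachable for roughly a minute before the rest of the cluster stops trying to reach it, which is the window during which its persisted schedules wait to be recovered.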

  5. Specify how the instance reads configuration changes. For more information, refer to How IDM Instances Read Configuration Changes.

  6. If you are using scheduled tasks, configure persistent schedules so that jobs and tasks are launched only once across the cluster.
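As an illustration, a persisted schedule configuration (a file such as conf/schedule-example.json) might resemble the following sketch; the cron expression and invoked script are placeholders, and the key point is "persisted" : true, which stores the schedule in the shared repository so the cluster fires it only once:

```json
{
    "enabled" : true,
    "persisted" : true,
    "type" : "cron",
    "schedule" : "0 0/1 * * * ?",
    "misfirePolicy" : "fireAndProceed",
    "invokeService" : "script",
    "invokeContext" : {
        "script" : {
            "type" : "text/javascript",
            "file" : "script/example.js"
        }
    }
}
```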

  7. Configure each node in the cluster to work with host headers. If you are using a load balancer, adjust the default jetty.xml configuration, as described in Deploy Securely Behind a Load Balancer.

  8. Make sure that each node in the cluster has the same keystore and truststore. You can do this in one of the following ways:

    • When the first instance has been started, copy the initialized keystore (/path/to/openidm/security/keystore.jceks) and truststore (/path/to/openidm/security/truststore) to all other instances in the cluster.

    • Use a single keystore that is shared between all the nodes. The shared keystore might be on a mounted filesystem, a Hardware Security Module (HSM), or something similar. If you use this method, set the keystore and truststore location properties in the resolver/boot.properties file of each instance to point to the shared keystore and truststore.

    • The configuration file secrets.json in the /path/to/openidm/conf directory must be the same on all the nodes.
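After copying the stores, it is worth confirming that every node really holds identical bytes. The following is a minimal local sketch of that check using SHA-256 checksums; the /tmp paths are illustrative stand-ins for /path/to/openidm/security on two different nodes:

```shell
#!/bin/sh
# Sketch: verify that two copies of the keystore are byte-identical by
# comparing SHA-256 checksums. The /tmp paths below stand in for the
# security directories of two cluster nodes.
mkdir -p /tmp/node1/security /tmp/node2/security
printf 'dummy-keystore-bytes' > /tmp/node1/security/keystore.jceks
cp /tmp/node1/security/keystore.jceks /tmp/node2/security/keystore.jceks

SUM1=$(sha256sum /tmp/node1/security/keystore.jceks | cut -d' ' -f1)
SUM2=$(sha256sum /tmp/node2/security/keystore.jceks | cut -d' ' -f1)

if [ "$SUM1" = "$SUM2" ]; then
    echo "keystores match"
else
    echo "keystores differ" >&2
    exit 1
fi
```

In practice you would gather the checksum from each host (for example over SSH) and compare them centrally; mismatched stores are a common cause of nodes failing to decrypt shared repository data.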

  9. Start each instance in the cluster.

The audit service logs configuration changes only on the modified instance. Although configuration changes are persisted in the repository, and replicated on other instances by default, those changes are not logged separately for each instance.

Configuration changes are persisted by default, but changes to workflows and scripts, and extensions to the UI are not. Any changes that you make in these areas must be manually copied to each node in the cluster.

Copyright © 2010-2024 ForgeRock, all rights reserved.