Configure an IDM Instance as Part of a Cluster

Setting up multiple IDM instances in a cluster involves the following main steps:

  1. Ensure that each instance is shut down.

  2. Configure each instance to use the same external repository and the same keystore and truststore.

  3. Set a unique node ID for each instance.

  4. Configure the entire clustered system to use a load balancer or reverse proxy.

To configure an IDM instance as part of a clustered deployment, follow these steps:

  1. Shut down the server if it is running.

  2. If you have not already done so, set up a supported repository, as described in Select a Repository.

    Each instance in the cluster must be configured to use the same repository; that is, the database connection configuration file (datasource.jdbc-default.json) on each instance must point to the same IP address and port number for the database (see the example below).

    Note

    The configuration file datasource.jdbc-default.json must be the same on all nodes.
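
    For example, with a MySQL repository, the datasource.jdbc-default.json file on every node might contain connection details similar to the following. The exact fields depend on your repository type and release; the host name repo.example.com, port, and credentials shown here are placeholders that must be identical on all nodes:

      {
        "driverClass" : "com.mysql.cj.jdbc.Driver",
        "jdbcUrl" : "jdbc:mysql://repo.example.com:3306/openidm",
        "databaseName" : "openidm",
        "username" : "openidm",
        "password" : "openidm"
      }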

    In Select a Repository, you will see a reference to a data definition language script file. Do not run that script for each instance in the cluster—run it just once to set up the tables required for IDM.

    Important

    If an instance is not participating in the cluster, it must not share a repository with nodes that are participating in the cluster. Having non-clustered nodes use the same repository as clustered nodes will result in unexpected behavior.

  3. Specify a unique node ID (openidm.node.id) for each instance, in one of the following ways:

    • Set the value of openidm.node.id in the resolver/boot.properties file of the instance, for example:

      openidm.node.id = node1

    • Set the value in the OPENIDM_OPTS environment variable and export that variable before starting the instance. You must include the JVM memory options when you set this variable. For example:

      export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dopenidm.node.id=node1"
      ./startup.sh
      Executing ./startup.sh...
      Using OPENIDM_HOME:   /path/to/openidm
      Using PROJECT_HOME:   /path/to/openidm
      Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Dopenidm.node.id=node1
      Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
      Using boot properties at /path/to/openidm/resolver/boot.properties
      -> OpenIDM version "7.1.6"
      OpenIDM ready

    You can set any value for openidm.node.id, as long as the value is unique within the cluster. The cluster manager detects unavailable instances by their node IDs.

    You must set a node ID for each instance; otherwise, the instance fails to start. The default resolver/boot.properties file sets the node ID to openidm.node.id=node1.

  4. Set the cluster configuration in the conf/cluster.json file.

    By default, configuration changes are persisted in the repository, so changes that you make in this file apply to all nodes in the cluster.

    The default version of the cluster.json file assumes that the cluster management service is enabled:

    {
      "instanceId" : "&{openidm.node.id}",
      "instanceTimeout" : 30000,
      "instanceRecoveryTimeout" : 30000,
      "instanceCheckInInterval" : 5000,
      "instanceCheckInOffset" : 0,
      "enabled" : true
    }

    instanceId

    The ID of this node in the cluster. By default, this is set to the value of the instance's openidm.node.id that you set in the previous step.

    instanceTimeout

    The length of time (in milliseconds) that a member of the cluster can be "down" before the cluster manager considers that instance to be in recovery mode.

    Recovery mode indicates that the instanceTimeout of an instance has expired, and that another instance in the cluster has detected that event. The scheduler component of the second instance then moves any incomplete jobs into the queue for the cluster.

    instanceRecoveryTimeout

    Specifies the time (in milliseconds) that an instance can be in recovery mode before it is considered to be offline.

    This property sets a limit after which other members of the cluster stop trying to access an unavailable instance.

    instanceCheckInInterval

    Specifies the frequency (in milliseconds) at which instances check in with the cluster manager to indicate that they are still online.

    instanceCheckInOffset

    Specifies an offset (in milliseconds) for the check-in timing when multiple instances in a cluster are started simultaneously.

    The check-in offset staggers the check-ins of multiple instances, which would otherwise strain the cluster manager resource.

    enabled

    Specifies whether the cluster management service is enabled when you start the server. This property is set to true by default.

    Important

    If you disable the cluster manager while clustered nodes are running (by setting "enabled" : false in an instance's cluster.json file), the following happens:

    • The cluster manager thread that causes instances to check in is not deactivated.

    • Nodes in the cluster no longer receive cluster events, which are used to broadcast configuration changes made over the REST interface.

    • Nodes are unable to detect and attempt to recover failed instances within the cluster.

    • Persisted schedules associated with failed instances cannot be recovered by other nodes.

  5. Specify how the instance reads configuration changes. For more information, see "How IDM Instances Read Configuration Changes".

  6. If you are using scheduled tasks, configure persistent schedules so that jobs and tasks are launched only once across the cluster.
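
    Persistent schedules are stored in the repository rather than in memory, so a scheduled job is launched by one node only, rather than by every node that has the schedule defined. The following sketch of a schedule file (for example, conf/schedule-reconcile.json; the file name, cron expression, and mapping name are illustrative) enables persistence for a reconciliation job:

      {
        "enabled" : true,
        "persisted" : true,
        "type" : "cron",
        "schedule" : "0 0 * * * ?",
        "misfirePolicy" : "fireAndProceed",
        "invokeService" : "sync",
        "invokeContext" : {
          "action" : "reconcile",
          "mapping" : "systemLdapAccounts_managedUser"
        }
      }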

  7. Configure each node in the cluster to work with host headers. If you are using a load balancer, adjust the default jetty.xml configuration, as described in "Deploy Securely Behind a Load Balancer".

  8. Make sure that each node in the cluster has the same keystore and truststore. You can do this in one of the following ways:

    • When the first instance has been started, copy the initialized keystore (/path/to/openidm/security/keystore.jceks) and truststore (/path/to/openidm/security/truststore) to all other instances in the cluster (see the example after this list).

    • Use a single keystore that is shared between all the nodes. The shared keystore might be on a mounted filesystem, on a Hardware Security Module (HSM), or in a similar location. If you use this method, set the following properties in the resolver/boot.properties file of each instance to point to the shared keystore:

      openidm.keystore.location=path/to/keystore
      openidm.truststore.location=path/to/truststore

      For information on configuring IDM to use an HSM device, see "Configuring IDM For a Hardware Security Module (HSM) Device".

    Whichever method you use, the secrets.json configuration file in the /path/to/openidm/conf directory must be the same on all nodes.
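
    For example, if you copy the initialized files from the first instance, you might run commands similar to the following, repeating them for every other node. The remote user admin and the host name node2.example.com are placeholders:

      scp /path/to/openidm/security/keystore.jceks \
          /path/to/openidm/security/truststore \
          admin@node2.example.com:/path/to/openidm/security/
      scp /path/to/openidm/conf/secrets.json \
          admin@node2.example.com:/path/to/openidm/conf/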

  9. Start each instance in the cluster.

Important

The audit service logs configuration changes only on the modified instance. Although configuration changes are persisted in the repository and replicated to other instances by default, those changes are not logged separately for each instance.

Configuration changes are persisted by default, but changes to workflows and scripts, and extensions to the UI, are not. Any changes that you make in these areas must be copied manually to each node in the cluster.
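
For example, assuming your customizations are in the default script and ui/extension directories, you might copy them to another node with commands similar to the following (the remote user, host name, and directory list are placeholders; adjust them to the artifacts you have changed):

  rsync -av /path/to/openidm/script/ admin@node2.example.com:/path/to/openidm/script/
  rsync -av /path/to/openidm/ui/extension/ admin@node2.example.com:/path/to/openidm/ui/extension/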
