Deploying multiple IDM instances in a cluster provides high availability. If you place a load balancer in front of the clustered deployment, you can run an active-active high availability cluster.
The following list provides an overview of the main considerations for clustering in IDM:
- Each instance must have a unique node ID (openidm.node.id).
- If you are using an external DS instance as the repository:
  - Each IDM instance must point to the same external repository.
  - The repository must be a single DS instance that is available for reads and writes.
  - Do not use a replicated DS topology as the repository, because replication delay can cause major issues for clustering and synchronization, for example, when a query runs immediately after an update but before that update has replicated.
  - You can configure the REST to LDAP (Rest2Ldap) interface, which IDM uses to connect to DS, to handle failover. In particular, configure the primaryLdapServers and secondaryLdapServers settings in the gateway configuration file. See the REST to LDAP reference for further information.
- If you are using a JDBC database as the repository:
  - Each IDM instance must point to the same database.
  - If the database itself is clustered, point the IDM instances at the cluster as a single logical system rather than at individual database nodes.
- Each IDM instance must point to the same connector configuration.
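To illustrate the DS failover settings mentioned above, a Rest2Ldap gateway configuration can list primary and secondary LDAP servers in a connection factory. The sketch below is an assumption-laden example: the factory name, hostnames, and ports are placeholders, and the exact structure depends on your DS version, so verify it against the REST to LDAP reference for your release:

```json
{
  "ldapConnectionFactories": {
    "default": {
      "primaryLdapServers": [
        { "hostname": "ds1.example.com", "port": 1389 }
      ],
      "secondaryLdapServers": [
        { "hostname": "ds2.example.com", "port": 1389 }
      ]
    }
  }
}
```

With a configuration along these lines, the gateway directs requests to the primary servers and fails over to the secondary servers only when no primary is reachable.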
Refer to IDM cluster configuration for instructions on configuring an IDM instance as part of a cluster. Pay particular attention to the step on specifying the cluster configuration in the conf/cluster.json file of each instance.
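For reference, a typical conf/cluster.json resembles the following sketch. The instanceId is resolved from the openidm.node.id property, which is why each node must define a unique value for that property; the timeout and check-in values shown here are illustrative defaults, so confirm them against your own installation:

```json
{
  "instanceId" : "&{openidm.node.id}",
  "instanceTimeout" : "30000",
  "instanceRecoveryTimeout" : "30000",
  "instanceCheckInInterval" : "5000",
  "instanceCheckInOffset" : "0",
  "enabled" : true
}
```

Because instanceId is resolved from a property rather than hard-coded, the same cluster.json file can be deployed to every node unchanged.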
By default, IDM reads its configuration from the JSON files in its conf/ directory at startup and monitors those files for changes; any change is applied immediately and persisted in the repository. This setup is useful during development and testing, because configuration changes made in the JSON files take effect almost immediately.
However, in production, we strongly recommend that the repository is the authoritative source for configuration for the following reasons:
- Untested changes are not applied automatically, since these can be disruptive in a production environment. Instead, apply configuration changes using REST calls, which gives you control over what is applied and when. Since configuration changes in production should be rare, this approach is not onerous. See Configure the server over REST for further information.
- File-based configurations are more likely to result in lost changes if a node restarts, because each node holds its own version of the configuration in memory only (it is not persisted). Similarly, file-based configurations can easily become corrupted or fall out of sync with other nodes unless a rigorous change management process is in place and strictly followed.
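As an example of applying a configuration change over REST rather than by editing files, you can replace a configuration object with a PUT request to the openidm/config endpoint. In this sketch, the object name (audit), the payload file, the credentials, and the URL are illustrative and must be adjusted for your deployment:

```
curl -X PUT \
  -H "X-OpenIDM-Username: openidm-admin" \
  -H "X-OpenIDM-Password: openidm-admin" \
  -H "Accept-API-Version: resource=1.0" \
  -H "Content-Type: application/json" \
  -d @audit.json \
  http://localhost:8080/openidm/config/audit
```

Because the change goes through the repository-backed config service, it is persisted centrally and picked up by the other nodes in the cluster.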
Changes for production
To configure the repository as the authoritative source for configuration in production, ensure that openidm.fileinstall.enabled is set to false and openidm.repo.enabled is set to true in the conf/system.properties file on all nodes.
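For example, the relevant lines in conf/system.properties on each node would read:

```
openidm.fileinstall.enabled=false
openidm.repo.enabled=true
```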
See Disable automatic configuration updates for further information.
All nodes must have the same settings with regards to whether they are file-based or repository-based configurations. Nodes within a cluster cannot handle configuration changes differently.
To troubleshoot a clustered deployment effectively, collect logs and configuration data from all nodes, not just the node showing the problem. The IDM log files (openidm0.log.n) are located in the /path/to/idm/logs directory.
You can obtain the full configuration for each node by querying the openidm/config endpoint, for example:
- IDM 7 and later: $ curl -X GET -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" -H "Accept-API-Version: resource=1.0" -H "Content-Type: application/json" http://localhost:8080/openidm/config?_queryFilter=true
- Pre-IDM 7: $ curl -X GET -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" -H "Content-Type: application/json" http://localhost:8080/openidm/config?_queryFilter=true
See Troubleshooting IDM for further information.