How To
ForgeRock Identity Platform
Does not apply to Identity Cloud

How do I repair replication configuration in DS 6.x when dsreplication has failed?

Last updated Jan 11, 2023

The purpose of this article is to provide information on re-aligning replication configuration in DS when the dsreplication tool has been unable to make a full set of necessary updates.

DS replication configuration is split between two main locations:

  • Each DS or RS server has local configuration in its cn=config (config.ldif) backend.
  • Shared configuration (including public key entries and global admin users) is stored in the replicated backend: cn=admin data (admin-backend.ldif).

When enabling and disabling replication, changes are made to the local configuration of each server in the topology; this is done by the dsreplication tool directly against the admin port on each server. Changes to the global configuration are made once and replicated to the other instances.

Because of this split, there are a number of scenarios in which configuration can become inconsistent across the topology. For example, suppose Server4 is removed from a 4-server topology while Server1 is unavailable (offline or otherwise unable to apply configuration changes, for example because it is out of disk space). Only Server2 and Server3 have their local configuration updated, although the replicated admin data is also updated. When Server1 comes back online later, it still picks up the replicated admin data changes via another RS changelog, but it never receives the local configuration changes.

Checking and repairing local replication configuration with dsconfig

You can list the replication domains and servers in the local configuration of a DS instance with the following commands:

  • To list replication domains in the local configuration:

    $ ./dsconfig list-replication-domains --provider-name "Multimaster Synchronization" \
        --hostname --port 4444 --bindDN "cn=Directory Manager" --bindPassword password \
        --no-prompt --trustAll

    Replication Domain : server-id : replication-server : base-dn
    -------------------:-----------:--------------------:--------------------
    cn=admin data      : 300       : ,,                 : cn=admin data
    cn=schema          : 23155     : ,,                 : cn=schema
    dc=example,dc=com  : 15187     : ,,                 : "dc=example,dc=com"

  • To list replication servers in the local configuration:

    $ ./dsconfig list-replication-server --provider-name "Multimaster Synchronization" \
        --hostname --port 4444 --bindDN "cn=Directory Manager" --bindPassword password \
        --no-prompt

    Replication Server : replication-server-id : replication-port : replication-server
    -------------------:-----------------------:------------------:-------------------
    replication-server : 14275                 : 18989            : ,,
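To spot drift, the same listing can be captured from every server and compared. The following is a minimal sketch only, assuming two reachable servers named ds-1.example.com and ds-2.example.com (substitute your own hostnames and credentials); if a server is unreachable its output file is simply left empty here:

```shell
# Capture each server's local replication-server configuration,
# then diff the captures to spot local-configuration drift.
for host in ds-1.example.com ds-2.example.com; do
  ./dsconfig list-replication-server \
      --provider-name "Multimaster Synchronization" \
      --hostname "$host" --port 4444 \
      --bindDN "cn=Directory Manager" --bindPassword password \
      --no-prompt --trustAll \
      > "repl-config-$host.txt" 2>/dev/null || true
done
# Any output from diff indicates the two servers disagree
diff repl-config-ds-1.example.com.txt repl-config-ds-2.example.com.txt || true
```

Repeat the comparison for list-replication-domains output in the same way.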

In this example, the output shows the local configuration on ds-1, but ds-3 has already been removed from the topology. To remove the stale ds-3 references, you can either navigate the interactive dsconfig menus or use the non-interactive commands as follows:

  • To remove the departed server from the replication server configuration:

    $ ./dsconfig set-replication-server-prop --provider-name "Multimaster Synchronization" \
        --remove --hostname --port 4444 \
        --bindDN "cn=Directory Manager" --bindPassword password --no-prompt

  • To remove it from the replication domain (dc=example,dc=com):

    $ ./dsconfig set-replication-domain-prop --provider-name "Multimaster Synchronization" \
        --domain-name dc=example,dc=com --remove --hostname --port 4444 \
        --bindDN "cn=Directory Manager" --bindPassword password --no-prompt

Repeat for each replication domain, including cn=schema and cn=admin data.
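Since the same removal must be issued for every domain, a small loop saves repetition. This is a sketch only: the server being repaired (ds-1.example.com), the departed replication server address (ds-3.example.com:8989), and the credentials are all assumptions to adjust for your topology. The value passed to --remove uses the standard dsconfig property:value syntax:

```shell
HOST=ds-1.example.com                  # assumed: server whose local config is being repaired
RS="ds-3.example.com:8989"             # assumed: host:port of the departed replication server
for domain in "dc=example,dc=com" "cn=schema" "cn=admin data"; do
  echo "removing $RS from domain: $domain"
  ./dsconfig set-replication-domain-prop \
      --provider-name "Multimaster Synchronization" \
      --domain-name "$domain" \
      --remove replication-server:"$RS" \
      --hostname "$HOST" --port 4444 \
      --bindDN "cn=Directory Manager" --bindPassword password \
      --no-prompt --trustAll 2>/dev/null || true   # || true only so the sketch is safe to dry-run
done
```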

Checking and repairing cn=admin data

Inconsistencies in the admin data configuration are less likely. This is a replicated backend that can catch up from the changelog if a server is temporarily unavailable when topology changes are made.

The most important thing is to ensure that all admin data backends contain the same data. Since this is a file-based backend, you can find all the data in LDIF format in admin-backend.ldif (located in /db/adminRoot in DS 6.5.x or /db/admin in DS 6). Use the ldifdiff tool to compare the file from one instance to another. The only differences present should be for ds-sync-* attributes (replication housekeeping).

You can copy an admin-backend.ldif file from a good server over the one on a bad server (stop the bad server first and start it again afterwards). If everything else is in order, this resolves any inconsistencies.
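The copy procedure can be wrapped in a small helper. This is a sketch under assumptions: a DS 6.5.x layout (db/adminRoot; use db/admin on DS 6.0), and instance-directory paths passed as arguments. The function name and both paths are illustrative, not part of the product:

```shell
# Replace a bad server's admin backend with a known-good copy.
#   good_ldif: path to a healthy admin-backend.ldif
#   bad_root:  instance root of the server being repaired
replace_admin_backend() {
  good_ldif="$1"
  bad_root="$2"
  ldif="$bad_root/db/adminRoot/admin-backend.ldif"   # assumed DS 6.5.x layout; db/admin on DS 6.0
  "$bad_root/bin/stop-ds"                            # stop the server first
  cp "$ldif" "$ldif.bak"                             # keep the bad copy for reference
  cp "$good_ldif" "$ldif"                            # overwrite with the good copy
  "$bad_root/bin/start-ds"                           # start it again afterwards
}
```

For example: replace_admin_backend /path/to/good/admin-backend.ldif /path/to/bad/instance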

There is one particular case where you may need to make changes to the admin backend manually. This is when a server has been completely removed and is no longer available, but replication was not disabled beforehand. You can use the following suggested manual process:

  1. Create a copy of admin-backend.ldif called admin-backend-new.ldif.
  2. Manually edit this new ldif file and delete references to the server being removed:
    • The uniqueMember reference in 'cn=all-servers,cn=Server Groups,cn=admin data'
    • The entry for the server under 'cn=Servers,cn=admin data', for example: cn=<server>,cn=Servers,cn=admin data
  3. Run a ldifdiff between the existing and new ldif files: $ ./ldifdiff --outputLdif changes.ldif admin-backend.ldif admin-backend-new.ldif
  4. Execute changes.ldif against a server in the topology:

    $ ./ldapmodify --hostname --port 4444 --bindDN "cn=Directory Manager" \
        --bindPassword password --continueOnError --useSSL changes.ldif
  5. Check that the change has been replicated to all the other servers.
  6. Perform the steps in the previous section (local configuration) for each server still in the topology.
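Steps 1 through 4 above can be sketched as a short script. The hostname ds-1.example.com and the relative path to the admin backend are illustrative assumptions, and step 2 remains a manual edit; the || true guards are only there so the sketch is safe to dry-run:

```shell
ADMIN_LDIF=../db/adminRoot/admin-backend.ldif   # assumed location; db/admin on DS 6.0
# Step 1: work on a copy of the admin backend
cp "$ADMIN_LDIF" admin-backend-new.ldif 2>/dev/null || true
# Step 2: edit admin-backend-new.ldif by hand, removing the uniqueMember
#         value and the cn=<server>,cn=Servers,cn=admin data entry
# Step 3: generate the delta as LDIF change records
./ldifdiff --outputLdif changes.ldif "$ADMIN_LDIF" admin-backend-new.ldif 2>/dev/null || true
# Step 4: apply the delta to one server; replication propagates it to the rest
./ldapmodify --hostname ds-1.example.com --port 4444 \
    --bindDN "cn=Directory Manager" --bindPassword password \
    --continueOnError --useSSL changes.ldif 2>/dev/null || true
```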

See Also

How do I delete an AM 6.x instance from a site along with the replicated embedded DS server?

How do I troubleshoot replication issues in DS 6.x?

Replication in DS

Replication Server

Replication Domain

Replication Synchronization Provider

Copyright and Trademarks Copyright © 2023 ForgeRock, all rights reserved.