How To
ForgeRock Identity Platform
Does not apply to Identity Cloud

How do I design and implement my backup and restore strategies for DS 5.x and 6.x?

Last updated Apr 8, 2021

The purpose of this article is to provide information to help you design and implement your backup and restore strategies in DS.



Backup Strategies

Note

This article does not apply to DS 7 and later, because DS 7 introduces a simplified implementation for backup and restore operations. See Release Notes › What's New (Backup and Restore) for further information.

Each DS integration is different and the backup strategies must be evaluated based on your business requirements.

Full Backups

Full backups are normally taken on a scheduled basis, but they are a Point In Time (PIT) snapshot and are therefore never 100% up to date.

Considerations:

  • Take the DS instance out of the load balancer until the backup is complete.
  • Include the Other Required Files (see Other Required Files section below) by manually copying these files to a safe location.
  • Ensure you have enough disk space to store sufficient backup snapshots to adhere to your business requirements.
  • Consider splitting the suffixes into separate backend databases if you have multiple suffixes and you expect your database to be massive. Each backup will be smaller and take less time to complete. It will also allow restoration and maintenance of individual backends without affecting the rest.
  • Consider adding Incremental Backups (see the Incremental Backups sub-section below) to augment the full backup. This will allow for a more complete restore point, but can take more time as each incremental backup is restored.

Frequency:

  • Create full backups at least once a week, with daily incremental backups.
  • At a minimum, create full backups at least once per replication purge delay interval, with incremental backups taken more frequently.
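For example, a weekly full backup of all backends can be scheduled as a recurring server task with the backup command. This is a sketch only: the backup directory, host name, port and credentials below are placeholders to adjust for your deployment.

```shell
# Schedule a recurring full backup of all backends every Sunday at 02:00.
# The server must be running, because the recurring task is registered online.
bin/backup --backUpAll \
 --backupDirectory /path/to/backups \
 --recurringTask "00 02 * * 0" \
 --hostname ds1.example.com --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll
```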

Incremental Backups

Incremental backups can be used on their own to back up data over time, but to restore the database to a usable state, every incremental backup taken since the last full backup must be applied in order on top of that full backup.

Considerations:

  • Consider using a daily full backup followed by incremental backups every six hours or more frequently.

Frequency:

  • Daily incremental backups are recommended.
  • The frequency can be increased if your business requirements stipulate more up-to-date backups.
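As a sketch, incremental backups of a single backend can be scheduled in the same way as full backups; the directory, host name and credentials below are placeholders.

```shell
# Schedule an incremental backup of the userRoot backend every six hours.
bin/backup --backendID userRoot --incremental \
 --backupDirectory /path/to/backups \
 --recurringTask "00 */6 * * *" \
 --hostname ds1.example.com --port 4444 \
 --bindDN "cn=Directory Manager" --bindPassword password \
 --trustAll
```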

Expectations:

When entry counts are small, incremental backups can look like full backups in the logs.

In the following example, the backup contained only one database file, which made the incremental backup look like a full backup. In fact, the entry counts were small and only one jdb database file was in use:

ds-task-backup-incremental: true
ds-task-log-message: [19/Dec/2018:10:00:00 +0000] severity="NOTICE" msgCount=0 msgID=9896349 message="Backup task BackupTask-3294216c-2ed8-4f08-98b9-757aca98fcf1-20150319100000000 started execution"
ds-task-log-message: [19/Dec/2018:10:00:00 +0000] severity="NOTICE" msgCount=1 msgID=10944792 message="Starting backup for backend userRoot"
ds-task-log-message: [19/Dec/2018:10:00:00 +0000] severity="NOTICE" msgCount=2 msgID=8847442 message="Not changed: 00000000.jdb"
ds-task-log-message: [19/Dec/2018:10:00:00 +0000] severity="NOTICE" msgCount=3 msgID=10944795 message="The backup process completed successfully"

The number of jdb files present can vary depending on the DS version you are using, the number of entries in the database and the size of the entries. The default database size (db-log-file-max) varies by version:

  • DS 6.x - default database size 1 GB.
  • DS 5.x - default database size 100 MB.

The following test was completed on a non-replicated DS 5 instance with no special configuration (default database size of 100 MB). When example template entries are added at this database file size, a new database file is created after approximately 56800 entries. In tests, the second jdb file was created when between 56802 and 56902 entries had been imported:

-rw-r--r-- 1 opendj opendj 99999153 Dec 20 08:17 db/userRoot/00000000.jdb
-rw-r--r-- 1 opendj opendj    31697 Dec 20 08:17 db/userRoot/00000001.jdb

See Administration Guide › Database Cache Settings for further information.

System Backups

Considerations:

  • Full system-level backups can be used in place of full backups.
  • It is best to stop the DS instance before doing a full system backup. This ensures the database and relevant files have been properly closed at the OS level.

Frequency:

  • It is recommended you create full system-level backups at least once a week.
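A minimal offline system-level backup might look like the following sketch, assuming /path/to/ds is the instance root and /path/to/backups already exists; both paths are placeholders.

```shell
# Stop the instance so the database and related files are closed cleanly
# at the OS level, archive the whole instance directory, then restart.
bin/stop-ds
tar -czf /path/to/backups/ds-system-$(date +%Y%m%d).tar.gz -C /path/to ds
bin/start-ds
```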

Backup Commands

DS comes with two commands that can be used individually to back up data from the DS instance: backup and export-ldif. Both are suitable for routine, everyday backups, but they miss a few data points that need to be considered (see the Other Required Files section below).

backup 

The DS backup command can be used:

  • For ad hoc or scheduled backups.
  • To back up individual backends or all backends.
  • To back up data in full or incrementally.
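For example, an ad hoc backup of a single backend can be taken offline (with the server stopped) by omitting the connection options; the backup directory is a placeholder.

```shell
# Offline full backup of the userRoot backend only.
bin/backup --backendID userRoot --backupDirectory /path/to/backups/userRoot
```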

export-ldif 

The export-ldif command allows administrators to extract a text-based copy of a backend database, such as userRoot, in the form of an LDIF (LDAP Data Interchange Format) file.
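A minimal offline export might look like this sketch; the output path is a placeholder.

```shell
# Export the userRoot backend to LDIF with the server stopped;
# add --compress to write a gzip-compressed file instead.
bin/export-ldif --backendID userRoot --ldifFile /path/to/backups/userRoot-export.ldif
```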

See Administration Guide › Backing Up Directory Data for further information.

Files Backed Up

backup

Files captured with the backup command can include jdb database files, schema files and task LDIF files.

Note

backup --backUpAll backs up all of the following each time this command option is used:

  • Database files: userRoot backend:

    -rw-r--r-- 1 opendj opendj 99999973 Jan 19 14:54 00000000.jdb
    -rw-r--r-- 1 opendj opendj 99999797 Jan 19 14:55 00000001.jdb
    -rw-r--r-- 1 opendj opendj 99999864 Jan 19 14:55 00000002.jdb
    -rw-r--r-- 1 opendj opendj 99997251 Jan 19 14:55 00000003.jdb

  • Schema files: schema backend:

    -rw-r--r-- 1 opendj opendj  43693 Jan 19 11:04 00-core.ldif
    -rw-r--r-- 1 opendj opendj   6596 Jan 19 11:04 01-pwpolicy.ldif
    -rw-r--r-- 1 opendj opendj 194661 Jan 19 11:04 02-config.ldif
    -rw-r--r-- 1 opendj opendj   5058 Jan 19 11:04 03-changelog.ldif
    -rw-r--r-- 1 opendj opendj   1227 Jan 19 11:04 03-pwpolicyextension.ldif
    -rw-r--r-- 1 opendj opendj   3561 Jan 19 11:04 03-rfc2713.ldif
    -rw-r--r-- 1 opendj opendj   2133 Jan 19 11:04 03-rfc2714.ldif
    -rw-r--r-- 1 opendj opendj   3235 Jan 19 11:04 03-rfc2739.ldif
    -rw-r--r-- 1 opendj opendj   3272 Jan 19 11:04 03-rfc2926.ldif
    -rw-r--r-- 1 opendj opendj   1710 Jan 19 11:04 03-rfc3112.ldif
    -rw-r--r-- 1 opendj opendj  12652 Jan 19 11:04 03-rfc3712.ldif
    -rw-r--r-- 1 opendj opendj  15858 Jan 19 11:04 03-uddiv3.ldif
    -rw-r--r-- 1 opendj opendj  12207 Jan 19 11:04 04-rfc2307bis.ldif
    -rw-r--r-- 1 opendj opendj   6339 Jan 19 11:04 05-rfc4876.ldif
    -rw-r--r-- 1 opendj opendj  12223 Jan 19 11:04 05-samba.ldif
    -rw-r--r-- 1 opendj opendj  14141 Jan 19 11:04 05-solaris.ldif
    -rw-r--r-- 1 opendj opendj   1125 Jan 19 11:04 06-compat.ldif
    -rw-r--r-- 1 opendj opendj 122837 Jan 19 12:09 99-user.ldif

  • Scheduled Tasks: tasks backend:

    -rw------- 1 opendj opendj 572 Jan 19 11:05 tasks.ldif

export-ldif

The export-ldif command exports entry data from a DS database backend into a text-based LDIF file; it extracts the binary data from the *.jdb database files shown above and saves it in LDIF form.

Format of Files Backed Up

Each backup command saves the files backed up in various ways.

backup

All files from each backend backup are contained within a compressed .zip file. Files within the backup retain their original file names, except in schema backend backups, where an .instance suffix is appended.

  • Database files: userRoot backend:

    backup-userRoot-20180204231456Z: Zip archive data, at least v2.0 to extract
    userRoot/$ unzip -l backup-userRoot-20180204231456Z
    Archive:  backup-userRoot-20180204231456Z
    OpenDJ backup 20180204231456Z of backend userRoot
      Length     Date   Time    Name
     --------    ----   ----    ----
      6849913  02-04-18 16:14   00000000.jdb
     --------                   -------
      6849913                   1 file

  • Schema files: schema backend:

    schema-backup-20180204231456Z: Zip archive data, at least v2.0 to extract
    schema/$ unzip -l schema-backup-20180204231456Z
    Archive:  schema-backup-20180204231456Z
    OpenDJ schema backup 20180204231456Z
      Length     Date   Time    Name
     --------    ----   ----    ----
            0  02-04-18 16:14   schema.comment
        43693  02-04-18 16:14   00-core.ldif.instance
         6596  02-04-18 16:14   01-pwpolicy.ldif.instance
       194661  02-04-18 16:14   02-config.ldif.instance
         5058  02-04-18 16:14   03-changelog.ldif.instance
         1227  02-04-18 16:14   03-pwpolicyextension.ldif.instance
         3561  02-04-18 16:14   03-rfc2713.ldif.instance
         2133  02-04-18 16:14   03-rfc2714.ldif.instance
         3235  02-04-18 16:14   03-rfc2739.ldif.instance
         3272  02-04-18 16:14   03-rfc2926.ldif.instance
         1710  02-04-18 16:14   03-rfc3112.ldif.instance
        12652  02-04-18 16:14   03-rfc3712.ldif.instance
        15858  02-04-18 16:14   03-uddiv3.ldif.instance
        12207  02-04-18 16:14   04-rfc2307bis.ldif.instance
         6339  02-04-18 16:14   05-rfc4876.ldif.instance
        12223  02-04-18 16:14   05-samba.ldif.instance
        14141  02-04-18 16:14   05-solaris.ldif.instance
         1125  02-04-18 16:14   06-compat.ldif.instance
     --------                   -------
       339691                   18 files

  • Scheduled Tasks: tasks backend:

    tasks-backup-20180204231456Z: Zip archive data, at least v2.0 to extract
    tasks/$ unzip -l tasks-backup-20180204231456Z
    Archive:  tasks-backup-20180204231456Z
    OpenDJ tasks backup 20180204231456Z
      Length     Date   Time    Name
     --------    ----   ----    ----
         1277  02-04-18 16:14   tasks.ldif
     --------                   -------
         1277                   1 file

export-ldif

As mentioned, export-ldif saves a copy of the database in a text based file in LDIF format:

bin/$ file userRoot-export.ldif
userRoot-export.ldif: ASCII English text
-rw------- 1 opendj opendj 18053 Feb 12 16:21 userRoot-export.ldif

Example of an LDIF export (truncated):

dn: dc=example,dc=com
objectClass: domain
objectClass: top
dc: example
entryUUID: 5340ce76-ad75-3e8a-98a0-6e7674680f45

dn: ou=People,dc=example,dc=com
objectClass: organizationalunit
objectClass: top
ou: People
entryUUID: bbc3fc6f-7476-3428-9b3e-dd0b264720da

dn: uid=user.0,ou=People,dc=example,dc=com
objectClass: person
objectClass: inetorgperson
objectClass: organizationalperson
objectClass: top
postalAddress: Aaccf Amar$01251 Chestnut Street$Panama City, DE 50369
postalCode: 50369
uid: user.0
description: This is the description for Aaccf Amar.
userPassword: {SSHA}XOkWvX53G4YBu7KtLBi3pIIcQiJ2uzMeqh5XBg==
...

bin/$ file userRoot-export.ldif*
userRoot-export.ldif: ASCII English text
userRoot-export.ldif.gzip: gzip compressed data, from FAT filesystem (MS-DOS, OS/2, NT)

Other Required Files

While the backup and export-ldif commands are good for backing up database backends, they miss crucial files that are required to fully restore an instance to a working state. It is recommended that you include each of the following files (as needed) that are most relevant to your integration.

Critical files

  • config.ldif - contains the main configuration for the DS instance, including replication data. This file is located in /path/to/ds/config.
  • admin-backend.ldif - contains the certificate for each replicated instance and the admin entry used for replication. The admin backend "cn=admin data" is created at startup using this file. This file is located in /db/adminRoot in DS 6.5.x, /db/admin in DS 6 or /config in DS 5.x.
  • keystore/truststores - contains client (truststore and keystore), replication (ads-truststore) and administration (admin-truststore and admin-keystore) key pairs. These files are located in /path/to/ds/config; as of DS 6.x, ads-truststore is located in /path/to/ds/db/ads-truststore for new installs.
  • java.properties - contains the java properties used by the various command lines when executed. This file is located in /path/to/ds/config.

Non-critical files

  • http-config.json - contains the REST to LDAP configuration. This file is located in /path/to/ds/config.
  • wordlist.txt - contains all the words that all password policies use to match against when adding or changing passwords. This file is located in /path/to/ds/config.
  • Init scripts - all init.d scripts used to start and stop DS instances when the system boots or shuts down.
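The copies can be as simple as the following sketch. DS_HOME and BACKUP_DIR are example variables, and the exact locations of admin-backend.ldif and ads-truststore vary by version as noted above.

```shell
# Copy the extra files needed for a complete restore to a safe location.
DS_HOME=/path/to/ds
BACKUP_DIR=/path/to/backups/config-files
mkdir -p "$BACKUP_DIR"
cp "$DS_HOME"/config/config.ldif "$BACKUP_DIR"
cp "$DS_HOME"/config/admin-backend.ldif "$BACKUP_DIR"     # under db/ in DS 6.x
cp "$DS_HOME"/config/keystore* "$DS_HOME"/config/truststore* "$BACKUP_DIR"
cp "$DS_HOME"/config/admin-keystore* "$DS_HOME"/config/admin-truststore* "$BACKUP_DIR"
cp "$DS_HOME"/config/ads-truststore* "$BACKUP_DIR"        # db/ads-truststore in new DS 6.x installs
cp "$DS_HOME"/config/java.properties "$BACKUP_DIR"
cp "$DS_HOME"/config/http-config.json "$BACKUP_DIR"       # non-critical files, if used
cp "$DS_HOME"/config/wordlist.txt "$BACKUP_DIR"
```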

Restoration Strategies

Replicated Servers

Special considerations must be taken when restoring replicated servers.

See How do I restore old backup data to a DS 5.x or 6.x replication topology? and Administration Guide › To Restore a Replica for further information on the process of restoring replicated servers.

Restore Considerations for Replicated Servers: (*)

For all types of restores for a Replicated server, the following considerations must be adhered to:

  • Use a backup that is not older than the replication-purge-delay.
  • If the available backups are older than the purge delay, initialization from an up-to-date Master is a better option.
  • Restores using backups that are older than the purge delay will succeed, but there will be a gap in the replicated changes, as discussed in the documentation linked above, and the server will need to be initialized.

Full Restores

Considerations:

  • Other than the Purge Delay restriction above (*), no special considerations are needed.
  • Offline (server stopped) restores will complete faster than online (server started) restores.

Incremental Restores

Incremental Restores must be completed in the following way:

Example: If a full backup was taken on Sunday and incremental backups were taken daily, Monday through Saturday, the administrator would have 7 backups to restore the instance up to the latest restore point. This should be done as follows:

  1. Restore the latest full backup.
  2. Restore each incremental backup starting with Monday and ending with Saturday's backup.
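The steps above can be sketched as follows. The backup directory and IDs are placeholders; you can find the actual IDs by listing the backup directory with restore --listBackups.

```shell
# 1. Restore Sunday's full backup.
bin/restore --backupDirectory /path/to/backups/userRoot --backupID "$FULL_SUNDAY_ID"

# 2. Apply each incremental backup in order, Monday through Saturday.
for id in "$MON_ID" "$TUE_ID" "$WED_ID" "$THU_ID" "$FRI_ID" "$SAT_ID"; do
  bin/restore --backupDirectory /path/to/backups/userRoot --backupID "$id"
done
```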

Considerations:

  • Other than the Purge Delay restriction above (*), no special considerations are needed.
  • Offline (server stopped) restores will complete faster than online (server started) restores.

System Restores

Considerations:

  • Other than the Purge Delay restriction above (*), no special considerations are needed.

Bare Metal or Ground Zero Restores

When restoring to a newly installed instance, the instance must be configured with the ./setup command before the restore takes place. If you issue the restore command without first configuring the new unzipped instance, the restore will fail with the following error:

[root@bin]# ./restore --backupDirectory /opt/opendj/bak/userRoot/ --backupID BackupTask-97308e2e-30b2-487f-8ee5-0eae8e411128-20150129020000000
ERROR An error occurred while trying to load the Directory Server schema: An error occurred at or near
line 1700 while trying to parse the configuration from LDIF file /opt/opendj/config/config.ldif:
org.opends.server.util.LDIFException: Entry cn=Character Set,cn=Password Validators,cn=config read from
LDIF starting at line 1700 includes a duplicate attribute ds-cfg-character-set with value
1:ABCDEFGHIJKLMNOPQRSTUVWXYZ. The second occurrence of that attribute value has been skipped

The following process must be adhered to if you need to build a new server or restore a server from scratch:

  1. Install and configure the instance to a basic level, that is, using the setup command.
  2. Execute the restore command with the appropriate --backupID.
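As a sketch, using DS 6.x setup syntax (options differ slightly in DS 5.x) with placeholder host name, ports, base DN and credentials:

```shell
# 1. Configure the new instance to a basic level.
./setup directory-server \
 --rootUserDN "cn=Directory Manager" --rootUserPassword password \
 --hostname ds1.example.com \
 --ldapPort 1389 --adminConnectorPort 4444 \
 --baseDN dc=example,dc=com \
 --acceptLicense

# 2. Restore the data; $BACKUP_ID is the ID of the backup to restore.
bin/restore --backupDirectory /path/to/backups/userRoot --backupID "$BACKUP_ID"
```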

Considerations:

  • Other than the Purge Delay restriction above (*), no special considerations are needed.

Point In Time Restores

If you need to restore all servers to a certain point in time you must do so as described in How do I roll back an entire network of DS 5.x or 6.x replicas to a previous backup?

Considerations:

  • Other than the Purge Delay restriction above (*), no special considerations are needed.

See Also

How do I configure DS 5.x or 6.x to ensure accidentally deleted or changed data can be restored when replication is enabled?

Generation IDs do not match error after restoring a DS (All versions) replica

FAQ: Backup and restore in DS 5.x and 6.x

Installing and Administering DS

Administration Guide › Backing Up and Restoring Data

Related Training

ForgeRock Directory Services Core Concepts (DS-400)

Related Issue Tracker IDs

N/A


Copyright and Trademarks Copyright © 2021 ForgeRock, all rights reserved.