What's New

New in 7.0.1

  • The DS password synchronization plugin for IDM now supports OAuth 2.0 access token bearer authentication.

    For details, see Synchronizing Passwords With ForgeRock Directory Services (DS) in the IDM Password Synchronization Plugin Guide.

  • DS command options that have secrets as arguments now support :env and :file modifier suffixes. Use these with the following options to provide the secret in an environment variable (:env), or in a file (:file):

    • --bindPassword[:env|:file]

    • --deploymentKeyPassword[:env|:file]

    • --keyStorePassword[:env|:file]

    • --monitorUserPassword[:env|:file]

    • --rootUserPassword[:env|:file]

    • --set[:env|:file] (for setup profile parameters)

    • --trustStorePassword[:env|:file]

    For example, if the bind password is stored in a ~/.pass file, use --bindPassword:file ~/.pass. If the password is stored in the environment variable PASS, use --bindPassword:env PASS.

  • The supportextract command now uses the jcmd command, if available, for heap dumps. Otherwise, it uses the jmap command.

    Issue: OPENDJ-7662
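The new modifiers keep the secret itself off the command line and out of the shell history. The following sketch shows both modifiers with the ldapsearch command; the host, port, bind DN, and base DN are placeholders for illustration:

```shell
# Store the bind password in a file with owner-only permissions.
umask 077
printf '%s' 'secret12' > /tmp/ds-bind.pass

# Reference the file with the :file modifier:
ldapsearch --hostname localhost --port 1636 --useSsl \
  --bindDn uid=admin --bindPassword:file /tmp/ds-bind.pass \
  --baseDn dc=example,dc=com "(uid=bjensen)"

# Or export the secret and use the :env modifier instead:
export PASS=secret12
ldapsearch --hostname localhost --port 1636 --useSsl \
  --bindDn uid=admin --bindPassword:env PASS \
  --baseDn dc=example,dc=com "(uid=bjensen)"
```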

New in 7.0.0

Access Control

This release supports use of aliases in addition to OIDs for LDAP controls and extended operations in ACIs, making those ACIs significantly more human-readable. For details, see "Directory Server ACIs".

Because previous releases support only OIDs, use aliases in ACIs only after upgrading all directory servers. Otherwise, older servers log warning messages for the unrecognized aliases, such as the following:

Access Control Instruction (ACI) targetcontrol expression value "value" is invalid.
 A valid targetcontrol keyword expression value requires one or more valid control OID strings in the following format:
 oid [|| oid1] ... [|| oidN]

Configuration objects that resolve identities can now reference multiple identity mappers. When resolving an identity, the server uses the first identity mapper that finds a match. If multiple identity mappers match different entries, however, the server returns LDAP error code 19, Constraint Violation.

For background information, see "Identity Mappers".

Backup and Restore

The release provides a new, simplified implementation for DS backup and restore operations:

  • The new implementation replaces backup archives with collections of backup files.

    The collection includes backend files and backup metadata. The files always follow the same layout, regardless of what you back up.

    You manage backup files by retaining an entire backup directory. You are no longer required to use a separate backup directory for each backend.

  • You can now stream backup files directly to cloud storage, and restore directly from cloud storage.

  • You no longer have to make a choice between full and incremental backup operations. Backup operations are incremental by design. When you reuse the same backup directory, the process only backs up new data.

  • The new implementation includes a purge command for removing old backup files. You can purge old files either as an external command, or as a server task.

  • In the event of a disaster, you can restore from a backup directory stored off-site using only the deployment key and password, and a backup copy of the server configurations.

    The new implementation protects (encrypts) the backup encryption keys with the shared master key. It stores the encrypted encryption keys in the backup files.

    You no longer need to configure replication between new replicas and a server from the existing topology. Instead, you first set up replacement replicas with the deployment key and password, restoring the backed up server configurations to match those servers lost in the disaster. You then restore the data using the off-site backup directory.

  • The new implementation always signs and verifies the integrity of backups, and always encrypts backup files.

    The new implementation encrypts the keys used for signing and encryption with the shared master key. It stores the encrypted keys in the backup files.

    You can verify the integrity and ability to decrypt backups before restoring a backend.

  • The new implementation makes it possible to list and verify backups while the server is online.

  • The new implementation improves restore performance compared to restore of incremental backups in previous versions.

    The previous implementation restored files from the full backup archive, and then restored files from each incremental backup archive. Files could be restored and then removed, or restored multiple times.

    The new implementation only restores one version of each file in the backup directory.

  • A new command, dsbackup, replaces the backup and restore commands.

    The dsbackup command performs operations formerly performed using separate commands:

    • dsbackup create performs backup operations.

    • dsbackup list displays a summary of available backups, and lets you verify them.

    • dsbackup purge removes old backup files.

    • dsbackup restore performs restore operations.

  • The new dsbackup restore command has a --backendName option, which lets you restore only the specified backend.

For examples, see Backup and Restore.
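The subcommands above can be sketched as follows. The backup location, retention period, and backend name are placeholders, and the purge option shown is an assumption; check dsbackup --help for the exact options:

```shell
# Back up all backends. Reusing the same directory makes later
# backups incremental by design.
dsbackup create --backupLocation /var/ds/backups --offline

# List available backups, verifying integrity and the ability to decrypt.
dsbackup list --backupLocation /var/ds/backups --verify --offline

# Remove backup files older than the retention period.
dsbackup purge --backupLocation /var/ds/backups --olderThan 30d --offline

# Restore a single backend from the same directory.
dsbackup restore --backupLocation /var/ds/backups \
  --backendName dsEvaluation --offline
```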

Cloud Deployments

This release lifts restrictions on running DS servers in Docker and Kubernetes deployments. Many individual improvements make this possible, including the following:

  • Replication improvements let you scale the number of DS replicas in your stateful sets up and down.

  • The new dsrepl command runs well in Docker containers.

ForgeRock supports customers deploying DS in Docker containers and Kubernetes platforms, as highlighted in the important note under Requirements.

To get started, try the following:

  • Use the forgeops repository and the unsupported, evaluation-only base images for the ForgeRock Identity Platform. The images are available in ForgeRock's public Docker registry.

    For details, see Base Docker Images in the ForgeRock DevOps documentation.

  • Build your own sample DS Docker image.

    Unpack the .zip distribution, then see the opendj/samples/docker/README.md file for instructions.

Collective Attributes

DS servers now support specifying the relative parent in collective attribute subentries.

For details, see "Inherit From a Parent Entry".

Data Storage

By default, DS servers now share cache memory among JE database backends. The server keeps JE database internal and leaf nodes in the database heap cache.

For existing servers, the upgrade command does not change the database cache behavior. Consider setting the global property je-backend-shared-cache-enabled:true, and the JE backends' properties db-cache-mode:cache-ln after upgrade.

This release upgrades JE backend databases to Berkeley DB Java Edition 18.3.12.


Different DS server versions continue to replicate data during the upgrade process. However, the JE upgrade has the following implications for the portability of local DS data. Once you upgrade the data in a JE backend database:

  • You cannot downgrade a directory server without also restoring JE backend data from a pre-upgrade server.

  • You cannot restore backups of an upgraded JE backend on a pre-upgrade directory server.

In addition, several JE backend properties that affect cache sizing and database maintenance can now be changed at runtime without restarting the backend. For details, see "JE Backend".

Data Encryption

DS servers now store symmetric keys, encrypted with the shared master key, with the data they encrypt.

It is no longer necessary for disaster recovery to maintain a file system backup of a server from each replication topology in your deployment. It is now sufficient to keep the backup directory and a means to recover the shared master key. As long as a server has the same shared master key as the server that encrypted the data, it can recover symmetric keys needed to decrypt data.

Be aware that this feature is new, and not provided in previous versions of DS software. Replication is fully compatible with previous server versions, but backup files are not. For this feature to work, you must use a backup from an upgraded or new server.

DS directory servers now support Galois/Counter Mode (GCM) with AES for encrypted data confidentiality. GCM is efficient and improves integrity protection for encrypted backend data.

Set the data encryption cipher transformation, as described in Data Encryption. The default setting for the backend property, cipher-transformation, is now AES/GCM/NoPadding.

Email Notifications

Email notifications now support SMTP authentication and use of TLS.

For details, see "Send Account Status Mail", and "Mail Server".


Interoperability

DS software now supports the * character in malformed attribute options for interoperability with the Microsoft Active Directory "range retrieval" mechanism.


Logging

ForgeRock Common Audit loggers now whitelist all fields that are safe to log by default. The whitelist is processed before the blacklist, so blacklist settings overwrite the whitelist defaults.

For details, see "Whitelist Log Message Fields".

DS servers can now send error messages to standard output.

For details, see "Log Errors to Standard Output".

DS servers now record additional information about LDAP operations in access log messages:

  • For LDAP bind operations, the security strength factor (SSF) negotiated for secure client connections appears in the response field of the access log message.

  • For persistent searches, the log messages include "additionalItems":"persistent".

When a connection handler fails to start, DS servers now log an error message indicating the cause.


Monitoring

DS servers now replicate the monitor user created at setup time (default DN: uid=monitor).

This lets commands like dsrepl status use the same account credentials to retrieve monitoring information from all servers. You can use the account in the same way for multi-server monitoring operations.


Network Connections

DS servers now have a new, required, global property, advertised-listen-address. This setting specifies the hostname or IP address that clients should use when connecting to the server. The advertised-listen-address property can be multi-valued on systems with multiple network interfaces. DS servers also now have a global property, listen-address. The listen-address property can be set to the wildcard IP address, 0.0.0.0, but the advertised-listen-address property cannot. By default, replication and connection handlers inherit their listen address settings from these global properties.

This improvement lets DS servers make fewer DNS requests than before.

When setting up a new server, the setup command sets the advertised-listen-address property to the IP address or the FQDN provided as the --hostname argument.

During upgrade, the value for the advertised-listen-address property is assigned using the hostname derived from administrative data under cn=admin data. If any listen-address properties are set to the same value, then those settings are removed during upgrade, and the values are inherited instead.


Passwords

DS software now supports Salted Challenge Response Authentication Mechanism (SCRAM) SASL binds.

A SASL SCRAM mechanism provides a secure alternative to transmitting plaintext passwords during binds. It is an appropriate replacement for DIGEST-MD5 and CRAM-MD5.

With a SCRAM SASL bind, the client proves that it has the original plaintext password by performing computationally intensive processing during the bind. This computation is like what the server performs for PBKDF2, but the password itself is not communicated during the bind.

Once the server has stored the password, the client pays the computational cost to perform the bind. The server only pays a high computational cost when the password is updated, for example, when an entry with a password is added or during a password modify operation. A SASL SCRAM mechanism therefore offers a way to offload the high computational cost of secure password storage to client applications during authentication.

Password storage using a SCRAM storage scheme is compatible with simple binds and SASL PLAIN binds. When a password is stored using a SCRAM storage scheme, the server pays the computational cost to perform the bind during a simple bind or SASL PLAIN bind.

The SCRAM password storage scheme must match the SASL SCRAM mechanism used for authentication. In other words, SASL SCRAM-SHA-256 requires a SCRAM-SHA-256 password storage scheme. SASL SCRAM-SHA-512 requires a SCRAM-SHA-512 password storage scheme.

DS software offers the following in the configuration for new servers:

Password Storage Scheme    SASL Mechanism
SCRAM-SHA-256              SCRAM-SHA-256
SCRAM-SHA-512              SCRAM-SHA-512

For additional information, see "Password Storage" for the server, and "Gateway LDAP Connections" for the REST to LDAP gateway.
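As an illustration, a SASL SCRAM bind from the command line might look like the following sketch. The mechanism name matches the table above; the host, port, account, and password file are assumptions:

```shell
# Bind with SASL SCRAM-SHA-256. The account's password must be
# stored with the matching SCRAM-SHA-256 storage scheme.
ldapsearch --hostname localhost --port 1636 --useSsl \
  --saslOption mech=SCRAM-SHA-256 \
  --saslOption authid=u:bjensen \
  --bindPassword:file /tmp/ds-bind.pass \
  --baseDn dc=example,dc=com "(uid=bjensen)"
```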

DS servers now support LDAP subentry password policies that match all features available in per-server password policies.

Servers store subentry policies in the directory data, and therefore replicate them. This improvement significantly simplifies password policy management across multiple replicas.

For details, see "DS Subentry Password Policies". Many samples in the documentation now demonstrate features of the improved subentry password policies.

DS servers now support additional password storage schemes, PBKDF2-HMAC-SHA256 and PBKDF2-HMAC-SHA512.

The new password storage schemes use SHA-256 and SHA-512 hash-based message authentication code settings. The PBKDF2 password storage scheme uses SHA-1.

To migrate passwords to a new storage scheme, see "Deprecate a Password Storage Scheme".

Salted hashed password storage schemes now use 128-bit salt when generating a hash.

This change applies to the following password storage schemes:

Salted MD5
Salted SHA-1
Salted SHA-256
Salted SHA-384
Salted SHA-512

You can now configure BCrypt and PBKDF2-based password storage schemes to recalculate password hashes after the iterations settings are changed. DS servers recalculate and store an account's password hash when the user binds successfully with their password.

For details, see the reference for the rehash-policy property of the BCrypt and PBKDF2-based password storage schemes.

DS servers support a new control to request password quality advice when changing a password. Should the request fail due to low password quality, the response control indicates which password validator settings led to the failure.

The ldappasswordmodify and ldapmodify commands support the new control. Use them to test and debug password policy validation settings.

The new LDAP control has interface stability: Evolving. It may be removed in a future release, or replaced with a more general mechanism.

For details, see "Check Password Quality".


Performance

The export-ldif command can now complete an export up to twice as fast as before. This improvement is particularly useful with large data sets, including tens or hundreds of millions of entries.

DS servers now perform better for REST to LDAP searches, and operations that rely on ETags for MVCC.


Proxy

The setup command now lets a proxy backend bind to remote servers with mutual TLS. The setup profile for a proxy server configures the server to use mutual TLS to authenticate when binding to backend servers. As a result, you must provision the key manager for the proxy with the proxy service account keys, and include the certificate in the proxy user account when using the DS proxy server setup profile.

For details, see Install Directory Proxy.

When setting up new DS replicas, use the ds-proxied-server setup profile to prepare the replicas for use with new DS proxy servers.

For details, see Install DS For Use With DS Proxy.
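A sketch of applying the profile at setup time follows. The profile parameter name and the other setup options are assumptions based on the setup profiles described in this release:

```shell
# Set up a replica that new DS proxy servers can bind to with mutual TLS.
./setup --serverId ds1 \
  --deploymentKey "$DEPLOYMENT_KEY" --deploymentKeyPassword password \
  --hostname ds1.example.com \
  --ldapsPort 1636 --adminConnectorPort 4444 \
  --rootUserDn uid=admin --rootUserPassword password \
  --profile ds-user-data --set ds-user-data/baseDn:dc=example,dc=com \
  --profile ds-proxied-server --set ds-proxied-server/baseDn:dc=example,dc=com
```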


REST to LDAP

REST to LDAP mappings now support references by resource paths, simplifying access to all resource fields. RESTful clients can use this to issue graph-like queries, such as returning the groups that Babs Jensen's manager belongs to.


For an example, see "Graph-Like Queries".

To demonstrate this feature, the sample REST to LDAP mapping now uses resource paths. The configuration is simpler than the configuration with base DN references.

For example, this excerpt shows a manager reference from the version that uses a base DN:

  "manager": {
    "type": "reference",
    "ldapAttribute": "manager",
    "baseDn": "..",
    "primaryKey": "uid",
    "mapper": {
      "type": "object",
      "properties": {
        "_id": {
          "type": "simple",
          "ldapAttribute": "uid",
          "isRequired": true
        },
        "displayName": {
          "type": "simple",
          "ldapAttribute": "cn",
          "writability": "readOnlyDiscardWrites"
        }
      }
    }
  }

The same manager reference using a resource path now looks like this:

  "manager": {
    "type": "reference",
    "resourcePath": ".."
  }

The latter definition ensures access to all fields defined for the referenced resource.

REST to LDAP mappings now support reverse references.

Reverse references are similar to the isMemberOf LDAP attribute used for groups. For example, use a reverse reference mapping to list a user's devices, or to list a manager's reports:

  "reports": {
    "type": "reverseReference",
    "resourcePath": "..",
    "propertyName": "manager"
  }

For an example in context, see "Reverse References".

REST to LDAP now supports passwordQualityAdvice and dryRun query string parameters.

The passwordQualityAdvice parameter relies on the DS LDAP password quality advice control, which users must have access to request. The dryRun parameter relies on the LDAP no-op control.

The password quality advice control and the passwordQualityAdvice parameter have interface stability: Evolving. They may be removed in a future release, or replaced with a more general mechanism.

For details, see "Check Password Quality".
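For illustration, a REST request could combine both parameters. The endpoint, credentials, CA certificate, and patch body below are assumptions about a typical REST to LDAP deployment:

```shell
# Dry-run a password change and request password quality advice.
curl \
  --user bjensen:hifalutin \
  --request PATCH \
  --header "Content-Type: application/json" \
  --cacert ca-cert.pem \
  --data '[{"operation": "replace", "field": "password", "value": "weak"}]' \
  "https://localhost:8443/api/users/bjensen?dryRun=true&passwordQualityAdvice=true"
```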

REST to LDAP now includes an accountUsability action.

For details, see "Account Usability Action".

The REST to LDAP gateway now supports SASL EXTERNAL and SASL SCRAM binds.

For details, see "Gateway LDAP Connections".

DS servers now let you create per-server (configuration-based) password policies over REST.

For an example, see "Per-Server Password Policies".

The REST to LDAP gateway supports using attributes with NameAndOptionalJSON syntax as references.

For details, see "API Configuration".


Replication

The setup command now lets you configure replication at setup time.

You therefore no longer need to get all peer servers running before configuring replication. The server begins replicating with peer servers when it comes online, and when it can contact the peers. For this reason, the setup command no longer starts the server by default. To ensure replication proceeds smoothly from the beginning, finish configuring the server before starting it for the first time.

These new setup command options enable replication:

  • When you set the -r, --replicationPort option, the server runs a replication service and maintains a changelog.

    If you add local application data at setup time, the server replicates the data with other replicas. There is no need to configure and initialize replication separately.

  • When you set the --bootstrapReplicationServer option, the server contacts the specified replication server(s) to discover peer replicas and replication servers. This option is required when replicating between multiple servers.

    Use this option multiple times to specify redundant bootstrap servers for availability. Specify the same list of bootstrap servers each time you set up a replica.

    Your first bootstrap server(s) must have replication ports, because they must play the replication server role.

For examples, see the Installation Guide.
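Putting the options above together, a first replica might be set up as follows. Server names, ports, passwords, and the profile parameters are placeholders:

```shell
# Set up a replica that also provides the replication service.
./setup --serverId ds1 \
  --deploymentKey "$DEPLOYMENT_KEY" \
  --deploymentKeyPassword password \
  --hostname ds1.example.com \
  --ldapsPort 1636 --adminConnectorPort 4444 \
  --rootUserDn uid=admin --rootUserPassword password \
  --replicationPort 8989 \
  --bootstrapReplicationServer ds1.example.com:8989 \
  --bootstrapReplicationServer ds2.example.com:8989 \
  --profile ds-user-data \
  --set ds-user-data/baseDn:dc=example,dc=com
```

Specify the same list of bootstrap servers when setting up each additional replica; the servers begin replicating once they are started and can contact each other.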

After configuring servers to replicate as part of the setup process, use the new dsrepl command to manage replication.

For details, see Replication.

This release lets you set server IDs to alphanumeric strings, such as ds1-us-west.

When you set a server ID, take care to choose a relatively short string.

The server ID appears in historical data values that include a change sequence number. For example, it shows up in monitoring metrics, and in the values of ds-sync-state and ds-sync-hist attributes in application data on DS replicas. As a result, historical data is potentially easier to interpret, but larger than in previous versions where server IDs were numbers.

Servers are now identified by a single, global server ID. For details, see server-id.

For new servers, use the setup command to specify the server ID, or accept the generated default string.

For existing servers, the upgrade command derives the ID in the following way:

  1. The command uses the existing global server ID, if available.

  2. Otherwise, the command uses the first server ID found in cn=admin data. Other server ID values are no longer used.

  3. If replication has not yet been configured, the command generates a new ID for the server.

Servers now have a single, global group ID. For details, see group-id.

For existing servers with group IDs, the upgrade command determines which ID is used most, and uses that ID as the single, global ID.

This release introduces replication receive delay and replay delay monitoring metrics. These metrics provide the best means yet to help you estimate whether the data in your directory server replicas is converging toward a consistent state.

For details, see "Replication Delay (LDAP)", or "Replication Delay (Prometheus)".

This release improves replication replay performance, reduces disk space used by the replication changelog database, and reduces replication delay in deployments under extreme load.

Servers now replicate changes made offline to an LDIF backend. The server replicates the offline changes once it starts again.

This release purges out-of-date replicas from the changelog. The replica is purged when it has been out of contact for longer than the replication purge delay.

This enables DS servers to eventually discard information about replicas that you have removed from service, for example.

You can also use the dsrepl purge-meta-data command to eliminate stale historical data. For details, see "Manual Purge".

This release introduces a new replication server property to exclude domains from the changelog indexes, changelog-enabled-excluded-domains. Use this to prevent applications that read the external change log from having to process update notifications for entries that are not relevant to them.

This property eliminates the need for a separate external changelog domain configuration.

For an example, see "Exclude a Domain".
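The property is set on the replication server configuration. The following sketch shows one way to set it; the dsconfig subcommand, domain DN, and connection options are assumptions:

```shell
# Exclude a domain from the changelog indexes.
dsconfig set-replication-server-prop \
  --provider-name "Multimaster Synchronization" \
  --set changelog-enabled-excluded-domains:ou=tokens,dc=example,dc=com \
  --hostname localhost --port 4444 \
  --bindDn uid=admin --bindPassword:file /tmp/ds-admin.pass \
  --usePkcs12TrustStore config/keystore \
  --trustStorePassword:file config/keystore.pin \
  --no-prompt
```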

The am-cts setup profile now excludes the CTS base DN from change number indexing.

There is no need to update the changelog configuration manually after installing a new DS replica as a CTS store.

DS servers now log additional information about naming conflicts, which helps you identify the server that generated the conflicting operation.


Samples

The DS server distribution now includes a sample Dockerfile and related files for building custom DS Docker images.

The DS server distribution now includes an updated sample monitoring dashboard for use with Grafana and Prometheus.


Schema

DS servers now support an attribute syntax for a DN optionally prepended with a JSON object. The associated matching rules let the server index and match the prepended JSON, or ignore it.

For details, see:

DS servers now support an option to require strict compliance for boolean attribute values.

By default, DS servers accept a range of values for boolean attributes. For details, see strict-format-boolean.
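For illustration, the option might be enabled as follows. The schema provider name and the connection options are assumptions; only the strict-format-boolean property comes from this release:

```shell
# Require strict compliance for boolean attribute values.
dsconfig set-schema-provider-prop \
  --provider-name "Core Schema" \
  --set strict-format-boolean:true \
  --hostname localhost --port 4444 \
  --bindDn uid=admin --bindPassword:file /tmp/ds-admin.pass \
  --usePkcs12TrustStore config/keystore \
  --trustStorePassword:file config/keystore.pin \
  --no-prompt
```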


Security

Default settings for new DS servers are more secure than before.

The explicit --productionMode option has been removed, as server configurations and profiles are now secure by default. New server installations require:

Secure connections

All operations except bind requests and StartTLS requests, and base object searches on the root DSE, require secure connections.

This behavior is governed by the global configuration property, unauthenticated-requests-policy, which is now set to allow-discovery, instead of allow, unless the last setup profile applied is the ds-evaluation profile.

For details on securing connections, see Secure Connections.


Restricted anonymous access

By default, servers deny anonymous access to most LDAP operations, controls, and extended operations.

For details on access control, see Access Control.

Additional access policies

By default, servers deny access to directory data. You must configure access policies to grant access to directory data. For details on granting access, see Access Control.

Only the evaluation setup profile is more lenient. It grants global permission to perform operations over insecure connections, and open access to sample Example.com data. For details, see "Learn About the Evaluation Setup Profile".

Stronger passwords

Passwords must have at least 8 characters. Common passwords are rejected.

For details on changing password policy, see "Configure Password Policies".

Permission to read log files

Log files are now readable and writable only by the DS server user.

For details on log file permissions, see "File Permissions".

As the upgrade process preserves the existing configuration, upgraded servers are not affected.

Review the changes in "Default Security Settings".

The setup and dskeymgr commands simplify creation and management of a public key infrastructure (PKI).

This release introduces the concept of a deployment key and deployment key password. The deployment key and password serve as an alternative to a private CA, simplifying evaluation, development, testing, and management of directory services. They also serve to derive a shared master key to protect secret keys. The deployment key and password are required as part of the setup process. For details, see "Key Management".

When you use an existing CA, you can continue to use key pairs with CA-signed certificates.

For public-facing directory services, you can continue to configure connection handlers with additional key and trust manager providers using certificates signed by a well-known CA. For details, see "Key Management".

To manage deployment keys, key pairs, CA certificates, and master keys after setting up a server, use the dskeymgr command.

Many examples in the documentation now demonstrate use of deployment keys and passwords.
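A sketch of the workflow follows; the subcommand options, keystore paths, and hostnames are assumptions to be checked against dskeymgr --help:

```shell
# Generate a deployment key (the command prints the key).
dskeymgr create-deployment-key --deploymentKeyPassword password

# Use the deployment key to create a TLS key pair in a server keystore.
dskeymgr create-tls-key-pair \
  --deploymentKey "$DEPLOYMENT_KEY" \
  --deploymentKeyPassword password \
  --keyStoreFile /path/to/keystore \
  --keyStorePassword:file /path/to/keystore.pin \
  --hostname localhost --hostname ds.example.com
```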

DS servers now reload file-based keystores and truststores when their contents change.

This lets you rotate certificates and keys without restarting the key manager or trust manager components.

This release greatly simplifies rotating the key pairs used to secure replication connections. By default, replication now uses the same keys as the other connection handlers.

For details on changing key pairs, see "Key Management".

PKCS#11 key managers and trust managers now let you set the keystore or truststore type. The default type is PKCS11.

If your JVM supports other types, set the keystore or truststore type with the key-store-type property (key managers) or the trust-store-type property (trust managers).

A number of configuration objects can now reference multiple "Trust Manager Provider" objects.

Use this feature to allow trust for both well-known CAs whose certificates are stored in the JVM truststore, and internal or deployment-specific CAs whose certificates are stored in a separate truststore.

An external SASL mechanism handler can now reference multiple certificate-mapper configurations. The server uses the first certificate mapper that finds a successful match.

When you create a user data backend using the ds-user-data setup profile, the setup process now configures equality indexes for the ds-certificate-fingerprint and ds-certificate-subject-dn attributes. Certificate mappers use these indexes during certificate-based authentication.

DS servers now record additional items in access log messages when multiple password policy subentries apply to a user. The messages are logged only for bind, add, and modify operations. The messages show the DN of the user having more than one applicable policy, and the DN of the policy the server actually used for the operation. The server logs a message such as the following for a bind request with two conflicting policies:

"additionalItems":{"pwdpolicywarning":"Found 2 conflicting password policy subentries for user <user-dn>,
used <policy-dn>","ssf":"0"}

As described in "Assign Password Policies", you must not assign more than one password policy to the same account.


Setup and Tools

A new command, setup-profile, enables configuration of setup profiles following initial installation. Use the setup-profile command when the server is offline.

This command is intended for use in DevOps deployments where you apply additional configuration to a base image that is the same for all deployments.

If you have changes that apply to each server you set up, you can create and maintain your own setup profile. For details, see "Create Your Own".

All setup command profiles, except the ds-evaluation profile, now allow you to set the domain or the base DN. For details, see Setup Profiles.

The create-rc-script command now produces a systemd service file when you use the --systemdService option.

The ds-evaluation setup profile now lets you generate an arbitrarily large number of similar user entries. By default, the profile adds 100,000 generated users in addition to users previously included, such as Babs Jensen and Kirsten Vaughan.

Each user entry has a uid RDN like user.number. Each user entry's password is password.

This capability replaces the setup command option -d, --sampleData.

The addrate, modrate, and searchrate commands now support proxied authorization with the -Y, --proxyAs {authzID} option.

Formatted integers can now be supplied to some integer arguments, making commands easier to read.

When setting the number of generated sample entries as an argument to the setup command, and when setting integer arguments for the addrate, authrate, modrate, and searchrate commands, you can now use formatted integers. For example, "10 000 000" is equivalent to 10000000.

Templates for the makeldif command can also accept formatted integers for numbers declared in a subordinate template.

The manage-tasks command now has --status and --type options.

When used with the --summary option, these options filter the list to include only tasks of the specified type and status. The option arguments are case insensitive, and must be provided in the JVM locale. For example, to list only unscheduled tasks on a JVM with the French locale, use --status "non planifié" instead of --status unscheduled.

When you schedule a task, you can now set its identifier with the --taskId option, and its description with the --description option. The identifiers and descriptions appear in output and messages that describe the task.

These new options are especially useful for recurring tasks. Use the task identifier when managing the task in subsequent commands, for example.
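For example, a recurring backup task might be scheduled and then managed by its identifier. The connection options and cron schedule below are placeholders:

```shell
# Schedule a recurring backup with a task ID and description.
dsbackup create \
  --backupLocation /var/ds/backups \
  --recurringTask "00 0 * * *" \
  --taskId NightlyBackup \
  --description "Nightly backup of all backends" \
  --hostname localhost --port 4444 \
  --bindDn uid=admin --bindPassword:file /tmp/ds-admin.pass \
  --usePkcs12TrustStore config/keystore \
  --trustStorePassword:file config/keystore.pin

# Later, manage the task by its identifier.
manage-tasks --info NightlyBackup \
  --hostname localhost --port 4444 \
  --bindDn uid=admin --bindPassword:file /tmp/ds-admin.pass \
  --usePkcs12TrustStore config/keystore \
  --trustStorePassword:file config/keystore.pin
```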

The following tools now never write to JE backend databases when reading JE information:

  • backendstat

  • export-ldif

  • verify-index

The status command now displays the same types of information independently of the server configuration, and regardless of whether the command runs in online or offline mode.

The command still displays more detailed information in online mode than in offline mode.

  • The supportextract command now also collects:

    • The directory superuser and monitor user account files.

    • The archived configuration files.

    • The profile and backend database version files.

    • Information about the changelog database.

    • The server.out log file before capturing stack traces.

    • The server PID in a message in the tool's log.

    • The cpuinfo, meminfo, slabinfo, and buddyinfo files on UNIX and Linux systems.

    • Stack traces with the jcmd tool, falling back to the jstack tool, and then to SIGQUIT (or kill -3 on Linux) as necessary.

    • Environment variables used in configuration expressions.

  • The extract generated by the tool is now compatible with the Java 11 JVM unified logging framework.

The ldifdiff command now supports a new -x, --exactMatch option for byte-by-byte LDIF comparisons.

This is useful for comparing LDAP schema files, for example.
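A minimal sketch of the new option; the file paths are examples:

```shell
# Compare two schema files byte for byte.
ldifdiff --exactMatch /path/to/old/00-core.ldif /path/to/new/00-core.ldif
```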
