How To
ForgeRock Identity Platform
Does not apply to Identity Cloud

How do I use the Access log to troubleshoot DS (All versions)?

Last updated Jun 14, 2021

The purpose of this article is to provide information on understanding the Access log to help you troubleshoot DS.


This article provides information to help you understand the Access log and use it to troubleshoot DS.

You can also analyze operation performance in the log files using a tool called slowops; the source for the slowops tool can be found here: GitHub: OpenDJ Utilities - slowops.

Log details

The default log format is JSON, with each entry providing details of the request (REQ) and the response (RES). The examples in this article are based on the JSON log format.

In all versions of DS, you can choose which log format to use: About Logs. The following table indicates which log formats are available depending on your version of DS:

Access log format   DS 7.x         DS 6.5.x       DS 6           DS 5.x
JSON                Yes (default)  Yes (default)  Yes (default)  Yes (default)
CSV                 Yes            Yes            Yes            Yes
Syslog              Yes            Yes            Yes            Yes
JDBC                Yes            Yes            Yes            Yes
Elasticsearch       --             Yes            Yes            Yes
JMS                 Yes            Yes            Yes            Yes
Splunk              Yes *          Yes            Yes            Yes
Standard Output     Yes            Yes            --             --

* The Elasticsearch and Splunk audit event handlers are deprecated in DS 7.1.

The log files are located in the following directory depending on whether you are using an external or embedded DS:

  • External: /path/to/ds/logs, where /path/to/ds is the directory in which DS is installed.
  • Embedded: /path/to/openam/opends/logs

The default log files are called ldap-access.audit.json.<datestamp>, for example: ldap-access.audit.json.20210722112257

Identifying the source of a change

You can identify whether a change was made locally or via replication by checking the logs as follows:

  • Local changes are made on a real connection as indicated by actual client/server addresses and the connection value ("connId":nnnn), for example: {"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"ADD","connId":5799,"msgId":361,"dn":"uid=user1,ou=People,ou=employees,dc=example,dc=com"},"transactionId":"0"
  • Replicated changes are shown with internal IP addresses ("ip":"internal","port":-1), a negative connection value ("connId":-n) and a sync type operation ("opType":"sync"), for example: {"eventName":"DJ-LDAP","client":{"ip":"internal","port":-1},"server":{"ip":"internal","port":-1},"request":{"protocol":"internal","operation":"ADD","connId":-4,"msgId":361,"opType":"sync","dn":"uid=user101,ou=People,ou=employees,dc=example,dc=com"},"transactionId":"0"
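As a rough sketch (not a supported ForgeRock tool), the distinction above can be automated by inspecting each parsed JSON log entry; the field names match the examples in this section:

```python
import json

def change_source(entry):
    """Classify a parsed access-log entry as local or replicated.

    Replicated changes use a negative connId and an opType of "sync",
    as shown in the examples above.
    """
    req = entry.get("request", {})
    if req.get("opType") == "sync" or req.get("connId", 0) < 0:
        return "replicated"
    return "local"

# Minimal entries modelled on the log lines above
local = json.loads('{"request":{"operation":"ADD","connId":5799}}')
synced = json.loads('{"request":{"operation":"ADD","connId":-4,"opType":"sync"}}')

print(change_source(local))   # local
print(change_source(synced))  # replicated
```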

See How do I troubleshoot replication issues in DS 5.x and 6.x? for further information on troubleshooting any replication issues that are identified. 

Identifying missing entries

Missing entries are indicated by a "statusCode":"32" in your logs. These log entries can indicate a genuine missing entry or have an innocent explanation (such as a client application testing to see if the entry exists by doing a read):

  • If the entry is really missing, you will need to add the entry or restructure your operation to use an entry that does exist.
  • If you have a misconfigured client application, for example pointing to the wrong base DN, you need to correct the configuration.
  • If you have a client application doing test reads, you can ignore these entries.

Example log entries

  • Search issue:
    "response":{"status":"FAILED","statusCode":"32","elapsedTime":2,"elapsedTimeUnits":"MILLISECONDS","detail":"The search base entry 'ou=agent1,ou=default,ou=1.0,ou=AgentService,ou=services,dc=example,dc=com' does not exist","nentries":0},"timestamp":"2021-07-21T15:04:53.850Z","_id":"2fff31b5-4682-4760-b32a-fbbfed6bd9ac-433"}
  • Add issues:
    "response":{"status":"FAILED","statusCode":"32","elapsedTime":1,"elapsedTimeUnits":"MILLISECONDS","detail":"The provided entry entryuuid=f3182f2c-0ed1-83e1-7d61-4f1c712e7fd2+ou=People,o=employees cannot be added because its suffix is not defined as one of the suffixes within the Directory Server"},"timestamp":"2021-07-21T15:04:53.850Z","_id":"2fff31b5-4682-4760-b32a-fbbfed6bd9ac-433"}
    "response":{"status":"FAILED","statusCode":"32","elapsedTime":0,"elapsedTimeUnits":"MILLISECONDS","detail":"Entry uid=jdoe,ou=People,ou=employees,dc=example,dc=com cannot be added because its parent entry ou=People,ou=employees,dc=example,dc=com does not exist in the server"},"timestamp":"2021-07-21T15:04:53.850Z","_id":"2fff31b5-4682-4760-b32a-fbbfed6bd9ac-433"}
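As an illustrative sketch (missing_entry_details is a hypothetical helper, not a ForgeRock tool), you could collect the detail messages of all statusCode 32 responses to review which missing entries are genuine:

```python
import json

def missing_entry_details(lines):
    """Collect the 'detail' messages of responses with statusCode 32
    (no such entry) so genuine missing entries can be reviewed."""
    details = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        resp = entry.get("response", {})
        if resp.get("statusCode") == "32":
            details.append(resp.get("detail", "(no detail)"))
    return details

# Shortened lines modelled on the examples above
log_lines = [
    '{"response":{"status":"FAILED","statusCode":"32","detail":"The search base entry does not exist"}}',
    '{"response":{"status":"SUCCESSFUL","statusCode":"0"}}',
]
print(missing_entry_details(log_lines))
```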

Understanding slow searches

Slow searches are indicated by high elapsedTime values in the access log and are often the result of unindexed searches. elapsedTime is measured in milliseconds by default, although you can configure the server to use nanoseconds instead. The log entry also records other details of the search that can help you identify the cause of an issue.

See How do I troubleshoot issues with my indexes in DS (All versions)? for further information on resolving index issues.

The following examples show some common log entries and their meanings/resolution.

Unindexed search

{"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"SEARCH","connId":159078,"msgId":154,"dn":"ou=groups,dc=AMusers","scope":"wholeSubtree","filter":"(memberUid=JDoe)","attrs":["1.1"]},"transactionId":"0","response":{"status":"SUCCESSFUL","statusCode":"0","elapsedTime":11999,"elapsedTimeUnits":"MILLISECONDS","additionalItems":" unindexed","nentries":0},"timestamp":"2021-07-21T15:04:51.850Z","_id":"e3a44b25-4800-8cfa-1bad-6e2bb6461607-38233"}

The search is flagged as unindexed and has a high elapsedTime value (elapsedTime values of 2 or 3 are typical for indexed searches). A high elapsedTime value on its own is also an indicator that the search may be unindexed.

This could be resolved by appropriate indexing as detailed in Unindexed searches causing slow searches and poor performance on DS (All versions) server.
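The same checks can be scripted. The sketch below (a hypothetical helper, with an arbitrary 100 ms threshold) flags SEARCH operations that are either marked unindexed or have a high elapsedTime:

```python
import json

SLOW_MS = 100  # arbitrary threshold for illustration; tune for your deployment

def slow_or_unindexed(lines, threshold_ms=SLOW_MS):
    """Yield (filter, elapsedTime, unindexed) for SEARCH operations that
    are marked unindexed or exceed the elapsedTime threshold."""
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        req = entry.get("request", {})
        if req.get("operation") != "SEARCH":
            continue
        resp = entry.get("response", {})
        unindexed = "unindexed" in resp.get("additionalItems", "")
        elapsed = resp.get("elapsedTime", 0)
        if unindexed or elapsed >= threshold_ms:
            yield req.get("filter"), elapsed, unindexed

# Shortened lines modelled on the examples in this section
log_lines = [
    '{"request":{"operation":"SEARCH","filter":"(memberUid=JDoe)"},'
    '"response":{"elapsedTime":11999,"additionalItems":" unindexed"}}',
    '{"request":{"operation":"SEARCH","filter":"(uid=jdoe)"},'
    '"response":{"elapsedTime":2}}',
]
results = list(slow_or_unindexed(log_lines))
print(results)  # only the unindexed 11999 ms search is reported
```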

Unindexed search (objectclass=person filter)

{"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"SEARCH","connId":159078,"msgId":135,"dn":"dc=example,dc=com","scope":"sub","filter":"(objectclass=person)","attrs":"cn,memberof,ismemberof"},"transactionId":"0","response":{"status":"SUCCESSFUL","statusCode":"0","elapsedTime":11999,"elapsedTimeUnits":"MILLISECONDS","additionalItems":" unindexed","nentries":500},"timestamp":"2021-07-21T15:04:51.850Z","_id":"e3a44b25-4800-8cfa-1bad-6e2bb6461607-38233"}

This example also indicates the search is unindexed.

Additionally, you can see the search is using the objectclass=person filter. It is very likely that all user entries have an objectclass=person value, which would take the number of entry IDs maintained for the objectclass=person index key over the index-entry-limit. Once that limit is exceeded, the index key is no longer used and all subsequent searches are forced to scan the entire backend database to find candidate entries. Only the filter touches the objectclass index; the requested return attributes (cn, memberof, ismemberof) do not touch any indexes, as their values are simply read from the matching entries and returned by DS.

This could be resolved by appropriate indexing and also by reconstructing the search to avoid using the objectclass=person filter.

Unindexed search (objectclass=* complex filter) 

{"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"SEARCH","connId":159078,"msgId":135,"dn":"dc=example,dc=com","scope":"sub","filter":"(|(objectclass=*)(myAttr=12345))","attrs":"cn,uid,mail"},"transactionId":"0","response":{"status":"SUCCESSFUL","statusCode":"0","elapsedTime":20379,"elapsedTimeUnits":"MILLISECONDS","additionalItems":" unindexed","nentries":500},"timestamp":"2021-07-21T15:04:51.850Z","_id":"e3a44b25-4800-8cfa-1bad-6e2bb6461607-38233"}

This example also indicates the search is unindexed.

objectclass=* is a presence filter, meaning it matches every entry that has any objectclass value. Since all LDAP entries have at least one objectclass, this type of search will always be slow and come back unindexed unless your directory is tiny (fewer than 4000 entries); above that size, searches for objectclass=* are always unindexed.

To further troubleshoot these types of issues, you may also need to log internal operations. You can enable this logging using the dsconfig command appropriate to your version:

  • DS 7.1 and later: $ ./dsconfig set-log-publisher-prop --hostname --port 4444 --bindDN uid=admin --bindPassword password --publisher-name "Json File-Based Access Logger" --set suppress-internal-operations:false --usePkcs12TrustStore /path/to/ds/config/keystore --trustStorePassword:file /path/to/ds/config/ --no-prompt
  • DS 7: $ ./dsconfig set-log-publisher-prop --hostname --port 4444 --bindDN uid=admin --bindPassword password --publisher-name "Json File-Based Access Logger" --set suppress-internal-operations:false --usePkcs12TrustStore /path/to/ds/config/keystore --trustStorePasswordFile /path/to/ds/config/ --no-prompt
  • Pre-DS 7: $ ./dsconfig set-log-publisher-prop --hostname --port 4444 --bindDN "cn=Directory Manager" --bindPassword password --publisher-name "Json File-Based Access Logger" --set suppress-internal-operations:false --trustAll --no-prompt

High elapsedTime

"response":{"status":"SUCCESSFUL","statusCode":"0","elapsedTime":1187,"elapsedTimeUnits":"MILLISECONDS","nentries":1},"timestamp":"2021-07-21T15:04:51.850Z","_id":"e3a44b25-4800-8cfa-1bad-6e2bb6461607-38233"}

This example has a high elapsedTime but is not flagged as unindexed. The unindexed flag only appears in the access log if no indexes at all were used to process the search; a high elapsedTime without the flag still suggests a partially unindexed search.

This could be resolved by appropriate indexing.

Understanding slow write operations

Slow write operations can cause their own issues, and if they are too slow, they can eventually time out, causing the write to fail. The following example shows entry locks that prevent the write from taking place:

{"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"MODIFY","connId":2131,"msgId":47,"dn":"uid=jdoe,ou=People,ou=employees,dc=example,dc=com"},"transactionId":"0","response":{"status":"FAILED","statusCode":"51","elapsedTime":9000,"elapsedTimeUnits":"MILLISECONDS","detail":"Entry uid=jdoe,ou=People,ou=employees,dc=example,dc=com cannot be modified because the server failed to obtain a write lock for this entry after multiple attempts"},"timestamp":"2021-08-08T14:22:43.683Z","_id":"c2a5dbd3-4960-b7d2-96e2-20370bf25b31-737"}

These types of errors indicate potential contention issues, with clients updating the same entries at the same time. If one client updates an entry on a highly loaded system and another client then tries to update the same entry, you will see these write errors.
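To see which entries are suffering from this contention, you can count the FAILED statusCode 51 responses per target DN; the helper below is an illustrative sketch, not a supported tool:

```python
import json
from collections import Counter

def contended_entries(lines):
    """Count FAILED responses with statusCode 51 (write-lock failures)
    per target DN, to spot entries several clients update concurrently."""
    counts = Counter()
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        resp = entry.get("response", {})
        if resp.get("status") == "FAILED" and resp.get("statusCode") == "51":
            counts[entry.get("request", {}).get("dn", "?")] += 1
    return counts.most_common()

# Shortened lines modelled on the example above
log_lines = [
    '{"request":{"operation":"MODIFY","dn":"uid=jdoe,ou=People,dc=example,dc=com"},'
    '"response":{"status":"FAILED","statusCode":"51"}}',
    '{"request":{"operation":"MODIFY","dn":"uid=jdoe,ou=People,dc=example,dc=com"},'
    '"response":{"status":"SUCCESSFUL","statusCode":"0"}}',
]
print(contended_entries(log_lines))
```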

Expensive indexes

Some index types (for example, substring) are expensive for the server to maintain, particularly in terms of write performance. Indexes should be configured for the types of searches specific to your client application's needs, and unnecessary indexes should be avoided. See the links in the See Also section for further information.

Static groups

If you see writes taking place against static groups, for example: 


You need to ask yourself the following questions to help understand why these write operations are not as fast as expected:

  • How big is this static group? That is, how many uniqueMembers does it contain?
  • How is the application doing the MODIFY operation on these static groups? Is it replacing the whole group or just adding a single member?
  • Are you performing any isMemberOf searches while the group modifications are taking place?
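To help answer these questions from the access log, a sketch like the following can summarize MODIFY operations against a groups branch (the ou=groups branch name and the group_modify_stats helper are assumptions for illustration):

```python
import json

def group_modify_stats(lines, group_branch="ou=groups"):
    """Count MODIFY operations on entries under a groups branch and
    record the slowest elapsedTime seen, to gauge how expensive the
    static-group writes are."""
    count, worst_ms = 0, 0
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        req = entry.get("request", {})
        if req.get("operation") == "MODIFY" and group_branch in req.get("dn", ""):
            count += 1
            worst_ms = max(worst_ms, entry.get("response", {}).get("elapsedTime", 0))
    return count, worst_ms

# Shortened sample lines; only the group modification is counted
log_lines = [
    '{"request":{"operation":"MODIFY","dn":"cn=admins,ou=groups,dc=example,dc=com"},'
    '"response":{"elapsedTime":850}}',
    '{"request":{"operation":"MODIFY","dn":"uid=jdoe,ou=People,dc=example,dc=com"},'
    '"response":{"elapsedTime":3}}',
]
print(group_modify_stats(log_lines))  # (1, 850)
```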

Finding out more about the operation

Once you have identified your issue, you can grep on a specific connId value to discover more about that operation to help pinpoint the issue. For example:

Identify the origin of the call

grepping for CONNECT in your logs for a specific connId value will tell you where that call originated; in the following example, the call came from the client IP address shown:

grep 'connId":159078' ldap-access.audit.* | grep CONNECT

ldap-access.audit.json.20210722112257:{"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"CONNECT","connId":159078},"transactionId":"0","response":{"status":"SUCCESSFUL","statusCode":"0","elapsedTime":0,"elapsedTimeUnits":"MILLISECONDS"},"timestamp":"2021-07-21T00:38:33.437Z","_id":"3a6d298f-f9e7-4bbf-bc03-0b2c82224c15-1961"}

Identify who made the call

grepping for BIND in your logs for a specific connId value will tell you about the authentication request; in the following example, the call was made by uid=admin:

grep 'connId":159078' ldap-access.audit.* | grep BIND

ldap-access.audit.json.20210722112257:{"eventName":"DJ-LDAP","client":{"ip":"","port":8443},"server":{"ip":"","port":1636},"request":{"protocol":"LDAPS","operation":"BIND","connId":159078,"msgId":1,"version":"3","dn":"uid=admin","authType":"SIMPLE"},"transactionId":"1f80c37c-436a-800f-9528-33ec2b6707be-0/16","response":{"status":"SUCCESSFUL","statusCode":"0","elapsedTime":37,"elapsedTimeUnits":"MILLISECONDS"},"userId":"uid=admin","timestamp":"2021-07-21T00:38:33.437Z","_id":"1f80c37c-800f-436a-9528-33ec2b6707be-661"}
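The two grep steps above can be combined: the sketch below (a hypothetical helper) follows a single connId through the log so you can see the CONNECT, BIND and subsequent operations together:

```python
import json

def connection_history(lines, conn_id):
    """Collect every operation on one connId, in log order, so the
    origin (CONNECT), identity (BIND) and requests appear together."""
    ops = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        req = entry.get("request", {})
        if req.get("connId") == conn_id:
            ops.append((req.get("operation"), req.get("dn")))
    return ops

# Shortened lines modelled on the examples in this section
log_lines = [
    '{"request":{"operation":"CONNECT","connId":159078}}',
    '{"request":{"operation":"BIND","connId":159078,"dn":"uid=admin"}}',
    '{"request":{"operation":"SEARCH","connId":159078,"dn":"ou=groups,dc=AMusers"}}',
    '{"request":{"operation":"BIND","connId":42,"dn":"uid=other"}}',
]
for op, dn in connection_history(log_lines, 159078):
    print(op, dn)
```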

See Also

Unindexed searches causing slow searches and poor performance on DS (All versions) server

How do I troubleshoot issues with my indexes in DS (All versions)?

How do I use the Support Extract tool in DS (All versions) to capture troubleshooting data?

How do I troubleshoot replication issues in DS 5.x and 6.x?

How do I change the location of log files for DS (All versions)?

How do I configure DS (All versions) to use the Syslog audit event handler?

Troubleshooting DS

Logging Guide

Monitoring Guide


Copyright and Trademarks

Copyright © 2021 ForgeRock, all rights reserved.