LDAP-Based Monitoring
DS servers publish whether they are alive and able to handle requests in the root DSE.
They publish monitoring information over LDAP under the entry cn=monitor.
The following example reads all available monitoring entries:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN cn=monitor \
"(&)"
The monitoring entries under cn=monitor reflect activity since the server started.
Many different types of metrics are exposed. For details, see LDAP Metrics Reference.
Monitor Privilege
The following example assigns the required privilege to Kirsten Vaughan’s entry to read monitoring data, and shows monitoring information for the backend holding Example.com data:
$ ldapmodify \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=admin \
--bindPassword password << EOF
dn: uid=kvaughan,ou=People,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: monitor-read
EOF
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=kvaughan,ou=People,dc=example,dc=com \
--bindPassword bribery \
--baseDN cn=monitor \
"(ds-cfg-backend-id=dsEvaluation)"
dn: ds-cfg-backend-id=dsEvaluation,cn=backends,cn=monitor
ds-mon-backend-is-private: false
ds-mon-backend-entry-count: <count>
ds-mon-backend-writability-mode: enabled
ds-mon-backend-degraded-index-count: <count>
ds-mon-backend-ttl-is-running: <boolean>
ds-mon-backend-ttl-last-run-time: <timestamp>
ds-mon-backend-ttl-thread-count: <count>
ds-mon-backend-ttl-queue-size: <size>
ds-mon-backend-ttl-entries-deleted: <summary>
ds-mon-backend-filter-use-start-time: <timestamp>
ds-mon-backend-filter-use-indexed: <count>
ds-mon-backend-filter-use-unindexed: <count>
ds-mon-db-version: <version>
ds-mon-db-cache-evict-internal-nodes-count: <count>
ds-mon-db-cache-evict-leaf-nodes-count: <count>
ds-mon-db-cache-total-tries-internal-nodes: <count>
ds-mon-db-cache-total-tries-leaf-nodes: <count>
ds-mon-db-cache-misses-internal-nodes: <count>
ds-mon-db-cache-misses-leaf-nodes: <count>
ds-mon-db-cache-size-active: <size>
ds-mon-db-log-size-active: <size>
ds-mon-db-log-cleaner-file-deletion-count: <count>
ds-mon-db-log-utilization-min: <percentage>
ds-mon-db-log-utilization-max: <percentage>
ds-mon-db-log-size-total: <size>
ds-mon-db-log-files-open: <count>
ds-mon-db-log-files-opened: <count>
ds-mon-db-checkpoint-count: <count>
objectClass: top
objectClass: ds-monitor
objectClass: ds-monitor-backend
objectClass: ds-monitor-backend-pluggable
objectClass: ds-monitor-backend-db
ds-cfg-backend-id: dsEvaluation
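Attributes such as ds-mon-backend-degraded-index-count lend themselves to scripted checks. The following sketch parses captured monitoring output rather than querying a live server; the sample entry and its values are made up for illustration:

```shell
# Check the degraded index count from captured backend monitoring output.
# The sample reuses attributes from the entry above with made-up values.
sample='dn: ds-cfg-backend-id=dsEvaluation,cn=backends,cn=monitor
ds-mon-backend-entry-count: 100000
ds-mon-backend-degraded-index-count: 0'

count=$(printf '%s\n' "$sample" | awk -F': ' '/^ds-mon-backend-degraded-index-count/ { print $2 }')
if [ "$count" -eq 0 ]; then
  echo "indexes healthy"
else
  echo "degraded indexes: $count"
fi
```

In practice, replace the sample text with the output of the ldapsearch command shown above.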
Server Health (LDAP)
Anonymous clients can monitor the health status of the DS server by reading the alive attribute of the root DSE:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--baseDN "" \
--searchScope base \
"(&)" \
alive
dn:
alive: true
When alive is true, the server's internal tests have not found any errors requiring administrative action.
When it is false, get error information as described in Server Health Details (LDAP), fix the errors, and either restart or replace the server.
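This check can be scripted as a liveness probe. The sketch below is illustrative: the check_alive helper is a hypothetical name, and the sample text stands in for the output of the root DSE search shown above, which you would pipe into the function in a real probe:

```shell
# check_alive reads ldapsearch root-DSE output on stdin and succeeds
# only when the server reports alive: true.
check_alive() {
  grep -q '^alive: true$'
}

# Captured sample output, as shown in the search example above.
sample_output='dn:
alive: true'

if printf '%s\n' "$sample_output" | check_alive; then
  status=alive
else
  status=down
fi
echo "$status"
```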
Server Health Details (LDAP)
The default monitor user can check whether the server is alive and able to handle requests by reading cn=health status,cn=monitor:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN "cn=health status,cn=monitor" \
--searchScope base \
"(&)"
dn: cn=health status,cn=monitor
ds-mon-alive: true
ds-mon-healthy: true
objectClass: top
objectClass: ds-monitor
objectClass: ds-monitor-health-status
cn: health status
When the server is either not alive or not able to handle requests, this entry includes error diagnostics as strings on the ds-mon-alive-errors and ds-mon-healthy-errors attributes.
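A monitoring script can surface these diagnostics automatically. This sketch parses captured health-status output; the sample entry and its error message are hypothetical:

```shell
# Extract error diagnostics when the server is not healthy.
# The sample below is made-up output for an unhealthy server.
sample='dn: cn=health status,cn=monitor
ds-mon-alive: true
ds-mon-healthy: false
ds-mon-healthy-errors: Disk space is low'

if printf '%s\n' "$sample" | grep -q '^ds-mon-healthy: true$'; then
  errors=""
else
  # Strip the attribute name, keeping only the diagnostic strings.
  errors=$(printf '%s\n' "$sample" | sed -n 's/^ds-mon-[a-z]*-errors: //p')
fi
echo "$errors"
```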
Replication Delay (LDAP)
The following example uses the default monitor user account to check the delay in replication:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN cn=monitor \
"(ds-mon-receive-delay=*)" \
ds-mon-receive-delay
dn: ds-mon-domain-name=dc=example\,dc=com,cn=replicas,cn=replication,cn=monitor
ds-mon-receive-delay: <delay>
dn: ds-mon-server-id=<id>,cn=remote replicas,ds-mon-domain-name=dc=example\,dc=com,cn=replicas,cn=replication,cn=monitor
ds-mon-receive-delay: <delay>
DS replicas measure replication delay as the local delay when receiving and replaying changes. A replica calculates these delays based on changes received from other replicas, so it can only account for changes it has actually received. Network outages therefore make delay metrics inaccurate.
A replica calculates delay metrics based on times reflecting the following events:
- t0: the remote replica records the change in its data
- t1: the remote replica sends the change to a replica server
- t2: the local replica receives the change from a replica server
- t3: the local replica applies the change to its data
These events always occur in the order t0, t1, t2, t3.
Replication keeps track of changes using change sequence numbers (CSNs), opaque and unique identifiers for each change that indicate when and where each change first occurred. The tn values are CSNs.
When the CSNs for the last change received and the last change replayed are identical, the replica has applied all the changes it has received. In this case, there is no known delay. The receive and replay delay metrics are set to 0 (zero).
When the last received and last replayed CSNs differ:
- Receive delay is set to the time t2 - t0 for the last change received. Another name for receive delay is current delay.
- Replay delay is approximately t3 - t2 for the last change replayed. In other words, it is an approximation of how long it took the last change to be replayed.
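The delay definitions above amount to simple timestamp arithmetic. A minimal illustration with made-up millisecond timestamps:

```shell
# Illustrative arithmetic only; the timestamps (milliseconds) are made up.
t0=1700000000000   # t0: remote replica records the change
t2=1700000000250   # t2: local replica receives the change
t3=1700000000400   # t3: local replica finishes replaying the change

receive_delay=$((t2 - t0))   # receive (current) delay for the last change received
replay_delay=$((t3 - t2))    # approximate replay delay for the last change replayed
echo "receive delay: ${receive_delay} ms, replay delay: ${replay_delay} ms"
```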
As long as replication delay tends toward zero regularly and over the long term, temporary spikes and increases in delay measurements are normal. When all replicas remain connected and yet replication delay stays high and keeps increasing over the long term, that indicates a problem: replication is not converging, and the service is failing to achieve eventual consistency.
For a current snapshot of replication delays, you can also use the dsrepl status command. For details, see Replication Status.
Replication Status (LDAP)
The following example uses the default monitor user account to check the replication status of the local replica:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN cn=monitor \
"(ds-mon-status=*)" \
ds-mon-status
dn: ds-mon-domain-name=dc=example\,dc=com,cn=replicas,cn=replication,cn=monitor
ds-mon-status: Normal
If the status is not Normal, how you react depends on the value of the ds-mon-status attribute:
Status | Explanation | Actions to Take
---|---|---
Bad generation id | Replication is broken. Internally, DS replicas store a shorthand form of the initial state called a generation ID, a hash of the first 1000 entries in a backend. If the replicas' generation IDs match, the servers can replicate data without user intervention. If the generation IDs do not match for a given backend, the replicas cannot share changes for that backend. | Whenever you see this status, manually initialize replication between the replicas to force the same initial state on all of them.
Degraded | Unless this status is persistent, replication is operating normally. The replica has fallen further behind its peers than the degraded-status-threshold, 5000 changes by default. The number of pending changes is an approximation calculated internally from change sequence numbers, which are not necessarily sequential. This status can arise periodically during normal operation when, for example, replication absorbs a burst of updates; in a directory service that sustains 5000 updates a second, a temporary Degraded status is expected. | If the status persists or recurs regularly, check whether the replica has the network and system resources to keep up with the update load.
Full update | Replication is operating normally. You have chosen to initialize replication over the network. The time to complete the operation depends on the network bandwidth and the volume of data to synchronize. | Monitor the server output and wait for initialization to complete.
Normal | Replication is operating normally. | Nothing to do.
Request Statistics (LDAP)
DS server connection handlers respond to client requests. The following example uses the default monitor user account to read statistics about client operations on each of the available connection handlers:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN "cn=connection handlers,cn=monitor" \
"(&)"
For details about the content of metrics returned, see Metric Types Reference.
Work Queue (LDAP)
DS servers have a work queue to track request processing by worker threads, and whether the server has rejected any requests due to a full queue. If enough worker threads are available, then no requests are rejected. The following example uses the default monitor user account to read statistics about the work queue:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN "cn=work queue,cn=monitor" \
"(&)"
For details about the content of metrics returned, see Metric Types Reference. To adjust the number of worker threads, see the settings for Traditional Work Queue.
Database Size (LDAP)
DS servers maintain counts of the number of entries in each backend and under each base DN. The following example uses the default monitor user account to read the counts:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN cn=monitor \
"(|(ds-mon-backend-entry-count=*)(ds-mon-base-dn-entry-count=*))" \
ds-mon-backend-entry-count ds-mon-base-dn-entry-count
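The per-base-DN counts can be totaled with a short script. This sketch sums ds-mon-base-dn-entry-count values from captured output; the sample DNs and counts are hypothetical:

```shell
# Sum per-base-DN entry counts from captured ldapsearch output.
# The DNs and counts below are made-up sample data.
sample='dn: ds-mon-base-dn=dc=example\,dc=com,cn=monitor
ds-mon-base-dn-entry-count: 100000

dn: ds-mon-base-dn=dc=other\,dc=com,cn=monitor
ds-mon-base-dn-entry-count: 2000'

total=$(printf '%s\n' "$sample" \
  | awk -F': ' '/^ds-mon-base-dn-entry-count/ { sum += $2 } END { print sum }')
echo "total entries: $total"
```

In practice, pipe the output of the ldapsearch command above into the awk filter instead of using sample text.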
Active Users (LDAP)
DS server connection handlers respond to client requests. The following example uses the default monitor user account to read the metrics about active connections on each connection handler:
$ ldapsearch \
--hostname localhost \
--port 1636 \
--useSsl \
--usePkcs12TrustStore /path/to/opendj/config/keystore \
--trustStorePassword:file /path/to/opendj/config/keystore.pin \
--bindDN uid=monitor \
--bindPassword password \
--baseDN cn=monitor \
"(objectClass=ds-monitor-connection*)" \
ds-mon-active-connections-count ds-mon-active-persistent-searches ds-mon-connection ds-mon-listen-address
For details about the content of metrics returned, see Metric Types Reference.