Troubleshooting IDM/OpenIDM

This book provides information on troubleshooting various issues in IDM/OpenIDM, including collecting useful troubleshooting information such as logs, heap dumps and stack traces.

## Sending troubleshooting data to ForgeRock Support for analysis

#### Sending data for analysis

You can attach troubleshooting and diagnostic data to tickets in BackStage. When using BackStage to access tickets, there is no limit on attachment size; this is the recommended method of sending us troubleshooting and diagnostic data.

## How do I debug server-side Groovy code in IDM/OpenIDM (All versions)?

#### Starting IDM/OpenIDM in Debug Mode

Debugging of server-side Groovy scripts within IDM/OpenIDM requires the server instance to be started in debug mode.

You can start an IDM/OpenIDM instance in debug mode as follows:

1. Shut down the instance if it is already running.
2. Use the command line to start IDM/OpenIDM interactively and specify the jpda option as a parameter for the startup.sh script:
./startup.sh jpda

Executing ./startup.sh...
Using OPENIDM_HOME:   /opt/server/openidm
Using PROJECT_HOME:   /opt/server/openidm
Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Djava.compiler=NONE -Xnoagent -Xdebug -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n
Using LOGGING_CONFIG: -Djava.util.logging.config.file=/opt/server/openidm/conf/logging.properties
Listening for transport dt_socket at address: 5005
Using boot properties at /opt/server/openidm/conf/boot/boot.properties
-> OpenIDM ready
##### Note

When started in debug mode, the IDM/OpenIDM JVM listens on port 5005 for connections from a remote debugger. The defaults for suspend and address used by IDM/OpenIDM when running in debug mode can be modified within the startup.sh script.
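If you need a different port, or want the JVM to pause until a debugger attaches, you can override these options without editing startup.sh, assuming your startup script applies an exported OPENIDM_OPTS as the startup output above suggests (verify against your own script before relying on it). An illustrative sketch, where the port 5006 and suspend=y values are examples, not defaults:

```shell
# Illustrative only: debug options with a different port (5006) and suspend=y,
# so the JVM waits for a debugger to attach before completing startup.
# Assumes startup.sh honors an exported OPENIDM_OPTS, as its output suggests.
export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Djava.compiler=NONE -Xnoagent -Xdebug -Xrunjdwp:transport=dt_socket,address=5006,server=y,suspend=y"
./startup.sh
```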

#### Debugging a Groovy script using JDB

Although there are many IDEs available that allow you to debug Groovy scripts within IDM/OpenIDM, one of the simplest methods available is to use the JDB bundled with the Oracle® JDK. In essence, you attach JDB to the running IDM/OpenIDM instance, set the sourcepath to the location of your Groovy script(s), and then set a breakpoint within the script you want to debug.

Example

In the following example, we are debugging the echo.groovy script included with the IDM/OpenIDM Custom Endpoint sample. The echo.groovy script has been modified to add the following at line 34.

def guid = java.util.UUID.randomUUID()


You can debug the echo.groovy script using JDB as follows:

1. Execute the JDB utility and attach it to port 5005 on which IDM/OpenIDM is listening:
$ jdb -attach 5005
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
2. Use the sourcepath command to set the location of the Groovy script(s) you intend to debug:
> sourcepath /Users/jdoe/server/openidm/script
3. Use the stop in command to set a breakpoint within the echo.groovy script for the Groovy Script.run() method:
> stop in echo.run
Set breakpoint echo.run
4. Execute a REST call against the echo endpoint to invoke it. You can use a curl command such as:
$ curl -u openidm-admin:openidm-admin http://localhost:8080/openidm/endpoint/echo
Once you have executed the curl command to invoke the custom echo endpoint, the JDB debugger will stop when the Groovy script engine executes the echo.groovy script. You should see output similar to the following within your JDB session:
>
Breakpoint hit: "thread=qtp80929581-100", echo.run(), line=34 bci=4
34    def guid = java.util.UUID.randomUUID()
5. Use the next command with the thread identified in the previous step to execute the call to UUID.randomUUID() on line 34:
qtp80929581-100[1] next
>
Step completed: "thread=qtp80929581-100", echo.run(), line=36 bci=18
36    if (request instanceof CreateRequest) {
You can now inspect the contents of the guid variable, along with the bindings passed to the Groovy script by IDM/OpenIDM, and perform other actions. For example:
• Use the locals command with the identified thread to list local variables that have been defined:
qtp80929581-100[1] locals
Method arguments:
Local variables:
guid = instance of java.util.UUID(id=7776)
• Print the contents of the guid variable:
qtp80929581-100[1] print guid
guid = "b862a202-cb20-4cf3-b060-3a434e901b79"
• Print the value of the request property bound to the script execution context by IDM/OpenIDM:
qtp80929581-100[1] print binding.getProperty("request")
binding.getProperty("request") = "{ "resourcePath": "", "method": "read", "fields": [  ] }"
• Continue execution of the Groovy script using the cont command:
qtp80929581-100[1] cont

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I query individual reconciliation synchronization failures using REST in IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on querying individual recon sync failures using REST in IDM/OpenIDM. This can be useful when troubleshooting why a reconciliation synchronization has failed.

#### Querying individual reconciliation synchronization failures

You can use the following REST request to query the details of individual synchronizations that have failed:

$ curl --cacert self-signed.crt -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" -X GET 'https://localhost:8443/openidm/audit/recon' -G --data-urlencode '_queryFilter=status+eq+"FAILURE"'

Example response (with data removed to only show the response summary with resultCount):

{
  "result": [ { ... } ],
  "resultCount": 6,
  "pagedResultsCookie": null,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}

Specific reconciliation IDs

If you want to query failed synchronization operations for a specific reconciliation, you can include the actual reconciliation ID in the query filter. For example, you can use the following REST request:

$ curl --cacert self-signed.crt -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" -X GET 'https://localhost:8443/openidm/audit/recon' -G --data-urlencode '_queryFilter=status+eq+"FAILURE"+and+reconId+eq+"208154c0-c97b-41d3-bebb-4cf22112c110-138"'


Example response (which shows a resultCount of 1):

{
"result": [
{
"_id": "208154c0-c97b-41d3-bebb-4cf22112c110-176",
"_rev": "1",
"status": "FAILURE",
"reconId": "208154c0-c97b-41d3-bebb-4cf22112c110-138",
"transactionId": "208154c0-c97b-41d3-bebb-4cf22112c110-138",
"timestamp": "2016-08-17T12:50:54.447Z",
"eventName": "recon",
"action": "CREATE",
"exception": "org.forgerock.openidm.sync.impl.SynchronizationException: Create rejected as Object with same ID already exists. Cannot index record managed_user{displayName:Morley Waghorne-2,description:null,givenName:Morley,mail:WaghornM@ns-mail3.com,telephoneNumber:+1 818 885-8439,sn:Waghorne,userName:WaghornM,ldapGroups:[0],accountStatus:active,effectiveRoles:[0],effectiveAssignments:[0],_openidm_id:68a9bac8-2994-4f1d-b388-100fc59e5d12}: found duplicated key 'WaghornM' in index 'managed_user!userName!Idx' previously assigned to the record #12:8\n\tat org.forgerock.openidm.sync.impl.ObjectMapping.createTargetObject(ObjectMapping.java:568)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping.access$1400(ObjectMapping.java:69)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping$SyncOperation.performAction(ObjectMapping.java:1776)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping$SourceSyncOperation.sync(ObjectMapping.java:2164)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping$2.recon(ObjectMapping.java:1174)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping$ReconTask.call(ObjectMapping.java:1302)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping$ReconTask.call(ObjectMapping.java:1275)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:745)\nCaused by: org.forgerock.json.resource.PreconditionFailedException: Create rejected as Object with same ID already exists. 
Cannot index record managed_user{displayName:Morley Waghorne-2,description:null,givenName:Morley,mail:WaghornM@ns-mail3.com,telephoneNumber:+1 818 885-8439,sn:Waghorne,userName:WaghornM,ldapGroups:[0],accountStatus:active,effectiveRoles:[0],effectiveAssignments:[0],_openidm_id:68a9bac8-2994-4f1d-b388-100fc59e5d12}: found duplicated key 'WaghornM' in index 'managed_user!userName!Idx' previously assigned to the record #12:8\n\tat org.forgerock.openidm.repo.orientdb.impl.OrientDBRepoService.create(OrientDBRepoService.java:295)\n\tat org.forgerock.openidm.repo.orientdb.impl.OrientDBRepoService.handleCreate(OrientDBRepoService.java:180)\n\tat org.forgerock.json.resource.Router.handleCreate(Router.java:255)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:69)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:62)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:62)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:62)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.audit.filter.AuditFilter.filterCreate(AuditFilter.java:110)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:60)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.servlet.internal.ServletConnectionFactory$5.filterCreate(ServletConnectionFactory.java:499)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.maintenance.impl.PassthroughFilter.filterCreate(PassthroughFilter.java:48)\n\tat 
org.forgerock.openidm.maintenance.impl.MaintenanceService.filterCreate(MaintenanceService.java:229)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:60)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.json.resource.FilterChain.handleCreate(FilterChain.java:213)\n\tat org.forgerock.json.resource.InternalConnection.createAsync(InternalConnection.java:44)\n\tat org.forgerock.json.resource.AbstractAsynchronousConnection.create(AbstractAsynchronousConnection.java:44)\n\tat org.forgerock.json.resource.AbstractConnectionWrapper.create(AbstractConnectionWrapper.java:96)\n\tat org.forgerock.openidm.servlet.internal.ServletConnectionFactory$1$1.create(ServletConnectionFactory.java:177)\n\tat org.forgerock.json.resource.AbstractConnectionWrapper.create(AbstractConnectionWrapper.java:96)\n\tat org.forgerock.openidm.managed.ManagedObjectSet.createInstance(ManagedObjectSet.java:650)\n\tat org.forgerock.json.resource.InterfaceCollectionHandler.handleCreate(InterfaceCollectionHandler.java:40)\n\tat org.forgerock.json.resource.Router.handleCreate(Router.java:255)\n\tat org.forgerock.openidm.managed.ManagedObjectService$ManagedObjectSetRequestHandler.handleCreate(ManagedObjectService.java:180)\n\tat org.forgerock.json.resource.Router.handleCreate(Router.java:255)\n\tat org.forgerock.openidm.managed.ManagedObjectService.handleCreate(ManagedObjectService.java:283)\n\tat org.forgerock.json.resource.Router.handleCreate(Router.java:255)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:69)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:62)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.servlet.internal.ScriptedFilter$2.apply(ScriptedFilter.java:96)\n\tat org.forgerock.openidm.servlet.internal.ScriptedFilter$2.apply(ScriptedFilter.java:93)\n\tat 
org.forgerock.util.promise.Promises$CompletedPromise.thenAsync(Promises.java:221)\n\tat org.forgerock.util.promise.Promises$CompletedPromise.thenAsync(Promises.java:205)\n\tat org.forgerock.openidm.servlet.internal.ScriptedFilter.filterResourceResponseRequest(ScriptedFilter.java:167)\n\tat org.forgerock.openidm.servlet.internal.ScriptedFilter.filterCreate(ScriptedFilter.java:92)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:60)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:62)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.audit.filter.AuditFilter.filterCreate(AuditFilter.java:110)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:60)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.servlet.internal.ServletConnectionFactory$5.filterCreate(ServletConnectionFactory.java:499)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.openidm.maintenance.impl.PassthroughFilter.filterCreate(PassthroughFilter.java:48)\n\tat org.forgerock.openidm.maintenance.impl.MaintenanceService.filterCreate(MaintenanceService.java:229)\n\tat org.forgerock.json.resource.Filters$ConditionalFilter.filterCreate(Filters.java:60)\n\tat org.forgerock.json.resource.FilterChain$Cursor.handleCreate(FilterChain.java:67)\n\tat org.forgerock.json.resource.FilterChain.handleCreate(FilterChain.java:213)\n\tat org.forgerock.json.resource.InternalConnection.createAsync(InternalConnection.java:44)\n\tat org.forgerock.json.resource.AbstractAsynchronousConnection.create(AbstractAsynchronousConnection.java:44)\n\tat org.forgerock.json.resource.AbstractConnectionWrapper.create(AbstractConnectionWrapper.java:96)\n\tat 
org.forgerock.openidm.servlet.internal.ServletConnectionFactory$1$1.create(ServletConnectionFactory.java:177)\n\tat org.forgerock.json.resource.AbstractConnectionWrapper.create(AbstractConnectionWrapper.java:96)\n\tat org.forgerock.openidm.sync.impl.ObjectMapping.createTargetObject(ObjectMapping.java:561)\n\t... 12 more\nCaused by: com.orientechnologies.orient.core.storage.ORecordDuplicatedException: Cannot index record managed_user{displayName:Morley Waghorne-2,description:null,givenName:Morley,mail:WaghornM@ns-mail3.com,telephoneNumber:+1 818 885-8439,sn:Waghorne,userName:WaghornM,ldapGroups:[0],accountStatus:active,effectiveRoles:[0],effectiveAssignments:[0],_openidm_id:68a9bac8-2994-4f1d-b388-100fc59e5d12}: found duplicated key 'WaghornM' in index 'managed_user!userName!Idx' previously assigned to the record #12:8 RID=#12:8\n\tat com.orientechnologies.orient.core.index.OIndexTxAwareOneValue.checkEntry(OIndexTxAwareOneValue.java:52)\n\tat com.orientechnologies.orient.core.index.OClassIndexManager.checkIndexedPropertiesOnCreation(OClassIndexManager.java:517)\n\tat com.orientechnologies.orient.core.index.OClassIndexManager.checkIndexes(OClassIndexManager.java:495)\n\tat com.orientechnologies.orient.core.index.OClassIndexManager.onRecordBeforeCreate(OClassIndexManager.java:61)\n\tat com.orientechnologies.orient.core.hook.ODocumentHookAbstract.onTrigger(ODocumentHookAbstract.java:237)\n\tat com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.callbackHooks(ODatabaseRecordAbstract.java:1466)\n\tat com.orientechnologies.orient.core.db.record.ODatabaseRecordAbstract.executeSaveRecord(ODatabaseRecordAbstract.java:1103)\n\tat com.orientechnologies.orient.core.tx.OTransactionNoTx.saveRecord(OTransactionNoTx.java:84)\n\tat com.orientechnologies.orient.core.db.record.ODatabaseRecordTx.save(ODatabaseRecordTx.java:322)\n\tat com.orientechnologies.orient.core.db.record.ODatabaseRecordTx.save(ODatabaseRecordTx.java:40)\n\tat 
com.orientechnologies.orient.core.record.ORecordAbstract.save(ORecordAbstract.java:335)\n\tat com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1439)\n\tat com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1428)\n\tat com.orientechnologies.orient.core.record.impl.ODocument.save(ODocument.java:1417)\n\tat org.forgerock.openidm.repo.orientdb.impl.OrientDBRepoService.create(OrientDBRepoService.java:287)\n\t... 72 more\n", "linkQualifier": "default", "mapping": "systemLdapAccounts_managedUser", "message": "Create rejected as Object with same ID already exists. Cannot index record managed_user{displayName:Morley Waghorne-2,description:null,givenName:Morley,mail:WaghornM@ns-mail3.com,telephoneNumber:+1 818 885-8439,sn:Waghorne,userName:WaghornM,ldapGroups:[0],accountStatus:active,effectiveRoles:[0],effectiveAssignments:[0],_openidm_id:68a9bac8-2994-4f1d-b388-100fc59e5d12}: found duplicated key 'WaghornM' in index 'managed_user!userName!Idx' previously assigned to the record #12:8. Root cause: Cannot index record managed_user{displayName:Morley Waghorne-2,description:null,givenName:Morley,mail:WaghornM@ns-mail3.com,telephoneNumber:+1 818 885-8439,sn:Waghorne,userName:WaghornM,ldapGroups:[0],accountStatus:active,effectiveRoles:[0],effectiveAssignments:[0],_openidm_id:68a9bac8-2994-4f1d-b388-100fc59e5d12}: found duplicated key 'WaghornM' in index 'managed_user!userName!Idx' previously assigned to the record #12:8", "messageDetail": { "code": 412, "reason": "Precondition Failed", "message": "Create rejected as Object with same ID already exists. 
Cannot index record managed_user{displayName:Morley Waghorne-2,description:null,givenName:Morley,mail:WaghornM@ns-mail3.com,telephoneNumber:+1 818 885-8439,sn:Waghorne,userName:WaghornM,ldapGroups:[0],accountStatus:active,effectiveRoles:[0],effectiveAssignments:[0],_openidm_id:68a9bac8-2994-4f1d-b388-100fc59e5d12}: found duplicated key 'WaghornM' in index 'managed_user!userName!Idx' previously assigned to the record #12:8" }, "situation": "ABSENT", "sourceObjectId": "system/ldap/account/cn=Morley Waghorne-2,ou=People,dc=example,dc=com", "targetObjectId": "managed/user/null", "reconciling": "source", "ambiguousTargetObjectIds": "", "entryType": "entry" } ], "resultCount": 1, "pagedResultsCookie": null, "totalPagedResultsPolicy": "NONE", "totalPagedResults": -1, "remainingPagedResults": -1 }

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I troubleshoot the DS/OpenDJ Password Synchronization plugin in IDM/OpenIDM (All versions)?

The purpose of this article is to provide assistance if you are experiencing issues with the DS/OpenDJ password sync plugin in IDM/OpenIDM. This article also includes information on enabling debug logging for just the DS/OpenDJ password sync plugin to minimize the size of the resulting DS/OpenDJ logs.

#### Overview

This article includes the following details that you should check (and rectify) in order to troubleshoot the DS/OpenDJ password sync plugin in IDM/OpenIDM:

##### Note

If you are still unable to solve your issues, you should attach the collected log files and outputs when you raise a ticket to enable us to help you more quickly. See Sending troubleshooting data to ForgeRock Support for analysis for further information.

#### DS/OpenDJ password sync flow

The expected DS/OpenDJ password sync plugin flow is as follows:

1. DS/OpenDJ receives an operation to modify an account password.
2. The DS/OpenDJ password sync plugin is invoked and passes the cleartext password.
3. The DS/OpenDJ password sync plugin encrypts the password and performs a Patch operation against the corresponding IDM/OpenIDM managed user's ldapPassword attribute.
4. The onUpdate trigger within managed.json assigns the decrypted ldapPassword attribute value to the password attribute.
5. The implicit sync process is triggered for the managed user to LDAP mapping.
6. The condition associated with the password attribute mapping is evaluated and returns false, as the cleartext ldapPassword and password attribute values are the same.
7. The LDAP Connector does not perform an update to DS/OpenDJ as there are no modifications to be performed.

Avoiding loops

To ensure this flow is followed correctly and you do not end up with a loop whereby the password change originating from DS/OpenDJ results in an update from IDM/OpenIDM back to DS/OpenDJ for the same LDAP entry, the condition associated with the password attribute mapping (sync.json file) should be similar to the following:

{
    "source" : "password",
    "condition" : {
        "type" : "text/javascript",
        "source" : "object.ldapPassword != null && (openidm.decrypt(object.password) != openidm.decrypt(object.ldapPassword))"
    }
}

This mapping applies a condition to the sync of the password attribute, which is dependent upon the managed user's ldapPassword attribute value being different from the password attribute value.

##### Note

The ldapPassword attribute referred to above is configurable and may be different in your configuration; however, the DS/OpenDJ password sync plugin must be configured to Patch an attribute other than password when performing a bi-directional sync, otherwise it is impossible to determine whether an update is needed. If DS/OpenDJ syncs passwords with IDM/OpenIDM in both directions, you need to ensure you have configured IDM/OpenIDM correctly to avoid an infinite loop. See Password Synchronization Plugin Guide › Preventing Infinite Loops for further information.

#### Keystores and certificates

Incorrect keystore and certificate details can cause issues with the DS/OpenDJ password sync plugin (some of which have been shown in the IDM/OpenIDM log section below). The key things to check are:

1. The IDM/OpenIDM certificate has been imported into the DS/OpenDJ truststore.
2. The DS/OpenDJ certificate has been imported into the IDM/OpenIDM keystore. This step only applies if you have implemented mutual SSL authentication (server and client certificate authentication); the password sync plugin is configured to use mutual SSL authentication by default.
3. The encryption key used for the password is correct and present in both the DS/OpenDJ truststore and the IDM/OpenIDM keystore.

IDM/OpenIDM certificate

This process is documented in the Password Synchronization Plugin Guide: To Import the IDM Certificate into the DS Keystore. You can check that the IDM/OpenIDM certificate has been imported properly as follows:

1. Use the keytool command to list the certificate aliases in the IDM/OpenIDM keystore to check which certificate is being used; this could be the openidm-localhost certificate or a CA certificate depending on your setup.
2. Use the keytool command to list the certificate aliases in the DS/OpenDJ truststore and check that the certificate alias returned in step 1 is present.

The following example keytool command shows that the openidm-localhost certificate has been imported into the DS/OpenDJ truststore:

$ keytool -list -keystore truststore -storetype JKS -storepass password -v

Keystore type: JKS
Keystore provider: SUN

Alias name: openidm-localhost
Creation date: 13/07/2016
Entry type: trustedCertEntry

Owner: CN=localhost, O=OpenIDM Self-Signed Certificate, OU=None, L=None, ST=None, C=None
Issuer: CN=localhost, O=OpenIDM Self-Signed Certificate, OU=None, L=None, ST=None, C=None
Serial number: 152f283769d
Valid from: Tue Jan 19 14:54:07 AEDT 2016 until: Wed Feb 18 14:54:07 AEDT 2026
Certificate fingerprints:
MD5:  D4:B4:E7:A5:3C:E9:42:3C:F5:EB:57:5B:5C:D3:18:DC
SHA1: A6:82:85:49:95:35:71:DF:33:2F:E4:F5:92:AB:B9:65:51:97:2A:D6
SHA256: D4:55:A9:E8:B6:61:75:39:F4:E7:FE:EE:AE:B0:A2:46:2D:B2:82:81:4A:97:EB:31:BB:8C:E8:35:1E:80:D3:6C
Signature algorithm name: SHA512withRSA
Version: 3

*******************************************
*******************************************


DS/OpenDJ certificate

This process is documented in the Password Synchronization Plugin Guide: To Import the DS Certificate into the IDM Truststore and is only required for mutual authentication, although the password sync plugin is configured to use mutual SSL authentication by default.

You can check that the DS/OpenDJ certificate has been imported properly as follows:

1. Use the keytool command to list the certificate aliases in the IDM/OpenIDM keystore and check that the server-cert certificate (or equivalent CA certificate depending on your setup) is being used.
2. Check that the certificate DN has been added as a value of the allowedAuthenticationIdPatterns property for the CLIENT_CERT authentication module in the authentication.json file (located in the /path/to/idm/conf directory).

Encryption key

The DS/OpenDJ password is encrypted before it is sent to IDM/OpenIDM. The encryption key used is openidm-localhost by default but could be different if you have changed it.

You should check the encryption key as follows:

1. Check which encryption key is actually being used by DS/OpenDJ using one of the following methods:
• Look for the value of the ds-cfg-realm property in the config.ldif file (located in the /path/to/ds/config directory).
• Check the value of the private-key-alias property using the dsconfig command.
If this is not the key you expected, you should update the ds-cfg-realm property in the config.ldif file with the correct key.
2. Check that this encryption key is present in both the DS/OpenDJ truststore and the IDM/OpenIDM keystore, otherwise IDM/OpenIDM will not be able to decrypt the password when it is received. If it is not there, import it into the appropriate stores per the documentation links above.
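As a quick sketch of the first check above, you can grep the DS/OpenDJ configuration for the key alias. A throwaway config.ldif is created below so the example is self-contained and safe to run anywhere; the attribute value shown is illustrative, and on a real system you would grep /path/to/ds/config/config.ldif instead.

```shell
# Illustrative only: fabricate a minimal config.ldif for the example.
cat > config.ldif <<'EOF'
ds-cfg-realm: openidm-localhost
EOF
# Locate the configured encryption key alias:
grep -i 'ds-cfg-realm' config.ldif
```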

#### IDM/OpenIDM log

The IDM/OpenIDM log (openidm.log, located in the /path/to/idm/logs directory) can be useful for identifying any issues with the password synchronization process from the IDM/OpenIDM side. You should increase logging in the IDM/OpenIDM debug log to FINEST (set the .level property to FINEST in logging.properties, which is located in the /path/to/idm/conf directory).
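The logging.properties change can be scripted. The following sketch edits a throwaway copy of the file so it is safe to run anywhere; on a real system, point sed at /path/to/idm/conf/logging.properties (the .level=INFO starting value below is illustrative) and keep the .bak backup it creates so you can revert after troubleshooting.

```shell
# Illustrative only: fabricate a minimal logging.properties for the example.
cat > logging.properties <<'EOF'
.level=INFO
EOF
# Edit in place, keeping a .bak backup of the original file:
sed -i.bak 's/^\.level=.*/.level=FINEST/' logging.properties
# Confirm the new root log level:
grep '^\.level' logging.properties
```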

SSL and keys

If you see the following message in your IDM/OpenIDM log after changing a password in DS/OpenDJ, it means the SSL connection is working and the request has been received and successfully decrypted:

FINEST: Request: { "content": { "password": { "$crypto": { "value": { "data": "6m3EOH9Fs7gqKNlTphW8Iw==", "cipher": "AES/ECB/PKCS5Padding", "key": { "data": "SKZIzkHc4wSa56sfivwKpblZrN3a/d8H/M10C48VOiaMFwlGM311ISl6s620ysWWXNXyqU4E/+hQFRx4M9pG80aYWWbHZvaicAtXadEs3kPhC8ffpPu8cYmfYbryKzsgQ0CqR+Ke3qERWdLDXRRX+ionkC9CfY4nMMKGhYVXNW61/TtKBrn3g5/3RvmL+mI9IxDeMDTgPsMuqmJtkSwHe2do/WmgeTnMpzMHb3GuXBcbphilzsTtLQvfGFQMC/0WTgwy0tovASIcY0rPRiEm6ymVrYRZTD+Zqy9pTeSVPH+XrqJTJUafIbIQlA1gf54ZnOrB/7Sauyeo7FxS8CzhIQ==", "cipher": "RSA/ECB/OAEPWithSHA1AndMGF1Padding", "key": "openidm-localhost" } }, "type": "x-simple-encryption" } } }, "resourcePath": "policy/managed/user/7a2594ea-ae24-4084-a963-9fa95e1dccc5", "additionalParameters": { "external": "true" }, "action": "validateProperty", "method": "action", "fields": [ ] }

Although the SSL connection is working, you could still see errors if the provided key is unknown to IDM/OpenIDM, for example:

FINEST: Resource exception: 500 Internal Server Error: "Wrapped org.forgerock.json.JsonException: org.forgerock.json.crypto.JsonCryptoException: key not found: openidm-unknown (/path/to/idm/bin/defaults/script/policy.js#690) in /path/to/idm/bin/defaults/script/policy.js at line number 690 at column number 0"

If the SSL connection is not working, you will see an SSL handshake exception, for example:

javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1959)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1077)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1339)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1323)

##### Note

If you are experiencing issues with SSL or keys, you can enable SSL debugging as detailed in the SSL debugging section.

Configuration issues

The following message indicates there is an issue with your configuration that is causing a loop whereby the password change originating from DS/OpenDJ results in an update from IDM/OpenIDM back to DS/OpenDJ for the same LDAP entry:

Caused by: org.identityconnectors.framework.common.exceptions.ConnectorException: javax.naming.ServiceUnavailableException: [LDAP: error code 51 - Entry uid=user.1,ou=People,dc=forgerock,dc=com cannot be modified because the server failed to obtain a write lock for this entry after multiple attempts]

To resolve this issue, you must prevent the managed user's password being synced to DS/OpenDJ if the change originated from the DS/OpenDJ password sync plugin. See the DS/OpenDJ password sync flow section above for more details on achieving this. Alternatively, you can change the value of the ds-cfg-update-interval property (in the openidm-pwsync-plugin-config.ldif or openidm-accountchange-plugin-sample-config file) to 1 or 2 seconds; this changes the sync operation to an asynchronous process and may resolve this issue.

#### SSL debugging

SSL debugging traces the SSL handshaking phase. You can enable SSL debugging via the OPENIDM_OPTS environment variable.

On Unix® and Linux® systems:

$ cd /path/to/idm/
$ export OPENIDM_OPTS="-Djavax.net.debug=all"
$ ./startup.sh

On Microsoft® Windows® systems:

C:\> cd \path\to\idm
C:\path\to\idm> set OPENIDM_OPTS=-Djavax.net.debug=all
C:\path\to\idm> startup.bat

##### Note

You can also edit the startup.sh or startup.bat files to update the default OPENIDM_OPTS values.
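The output produced by -Djavax.net.debug=all is very verbose, so it often helps to filter the captured console output for handshake alerts. The following sketch uses a fabricated log excerpt; the file name and its contents are illustrative, not actual IDM/OpenIDM output.

```shell
# Illustrative only: a fabricated fragment of javax.net.debug output.
cat > openidm-ssl-debug.log <<'EOF'
qtp80929581-100, READ: TLSv1.2 Handshake, length = 226
qtp80929581-100, RECV TLSv1.2 ALERT: fatal, certificate_unknown
qtp80929581-100, called closeSocket()
EOF
# Narrow the capture to handshake failures, with line numbers:
grep -nE 'fatal|certificate_unknown|handshake_failure' openidm-ssl-debug.log
```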

#### DS/OpenDJ configuration files

The following configuration files are useful to check how DS/OpenDJ has been configured and how the password sync plugin has been configured on the DS/OpenDJ side:

• config.ldif (located in the /path/to/ds/config directory).
• openidm-pwsync-plugin-config.ldif or openidm-accountchange-plugin-sample-config (located in the /path/to/ds/config directory). The actual file depends on the password sync plugin version.

You can then compare the configuration on the DS/OpenDJ side with the configuration on the IDM/OpenIDM side to ensure they correspond and correct if necessary.

#### IDM/OpenIDM configuration files

The following configuration files (located in the /path/to/idm/conf directory) are useful to check how synchronization has been configured on the IDM/OpenIDM side and compare with the DS/OpenDJ configuration:

• sync.json
• managed.json
• authentication.json
• provisioner configuration file for DS/OpenDJ, for example provisioner.openicf-ldap.json.
• any associated scripts used to manage or transform the password attribute during sync, such as a separate onUpdate script (these are typically located in the /path/to/idm/script directory).
##### Note

It can be very useful to add logging to your scripts to help identify the point at which an issue can occur. Refer to How do I add logging to JavaScript files in IDM/OpenIDM (All versions)? and How do I add logging to Groovy scripts in IDM/OpenIDM (All versions)? for further information.

#### DS/OpenDJ logs

The DS/OpenDJ log files (access, errors and replication) can be useful for identifying any issues with the password synchronization process from the DS/OpenDJ side. The logs collected should cover the period during which you are experiencing issues and the timestamps should correspond to the logs collected for IDM/OpenIDM.

Enabling debug logging for the DS/OpenDJ password sync plugin

You can enable debug logging for just the DS/OpenDJ password sync plugin to minimize the size of the resulting DS/OpenDJ logs as follows:

1. Enable debug logging using the following dsconfig command:
$ ./dsconfig set-log-publisher-prop --hostname ds1.example.com --port 4444 --bindDN "cn=Directory Manager" --bindPassword password --publisher-name "File-Based Debug Logger" --set enabled:true --no-prompt --trustAll

In OpenDJ 2.6.x, you must include the --set default-debug-level:all option as well.

2. Set the debug target class using one of the following dsconfig commands, depending on your version:

• OpenDJ 3.5.x / DS:
$ ./dsconfig create-debug-target --hostname ds1.example.com --port 4444 --bindDN "cn=Directory Manager" --bindPassword password --publisher-name "File-Based Debug Logger" --type generic --target-name org.forgerock.openidm.accountchange.OpenidmAccountStatusNotificationHandler --set enabled:true --no-prompt --trustAll

• OpenDJ 3.0 and earlier:
$ ./dsconfig create-debug-target --hostname ds1.example.com --port 4444 --bindDN "cn=Directory Manager" --bindPassword password --publisher-name "File-Based Debug Logger" --type generic --target-name org.forgerock.openidm.agent.accountchange.OpenidmAccountStatusNotificationHandler --set enabled:true --no-prompt --trustAll

#### IDM/OpenIDM audit logs

The IDM/OpenIDM audit logs (located in the /path/to/idm/audit directory) can be useful for tracking system activity; the activity.csv file is particularly relevant.

#### See Also

#### Related Training

#### Related Issue Tracker IDs

## How do I troubleshoot Active Directory password synchronization issues in IDM/OpenIDM (All versions)?

The purpose of this article is to provide assistance if you are experiencing issues with the Active Directory® password synchronization service (AD Password Sync Plugin) in IDM/OpenIDM. Likely issues include Active Directory password changes not being populated in IDM/OpenIDM or the plugin not polling for changes.

#### Troubleshooting synchronization issues

This article includes the following details that you should check (and rectify) in order to troubleshoot the AD password sync plugin in IDM/OpenIDM:

Avoiding loops

If AD syncs passwords with IDM/OpenIDM in both directions, you need to ensure you have configured IDM/OpenIDM correctly to avoid an infinite loop. See Password Synchronization Plugin Guide › Preventing Infinite Loops for further information.

##### Note

If you are still unable to solve your issues, you should attach the collected log files and outputs when you raise a ticket to enable us to help you more quickly. See Sending troubleshooting data to ForgeRock Support for analysis for further information.

#### Keystores and certificates

Incorrect keystore and certificate details can cause issues with the AD password sync plugin (some of which are shown in the IDM/OpenIDM log section below).
The key things to check are:

• The content of the keystore used by the AD password sync plugin corresponds to the content in the IDM/OpenIDM keystore.
• The certificate DN has been added to the IDM/OpenIDM configuration.

Finally, you should review the steps in Password Synchronization Plugin Guide › Synchronizing Passwords With Active Directory again to ensure you have not missed a step.

Keystores

You can check that the content of the AD and IDM/OpenIDM keystores correspond as follows:

1. Use the keytool command to list the certificate aliases in the AD keystore (keystore.pk12) to check which certificate is being used; the default is ad-pwd-plugin-localhost, but it could be different depending on your setup.
2. Use the keytool command to list the certificate aliases in the IDM/OpenIDM keystore to check that the certificate alias returned in step 1 is present.

The following example keytool command shows that the ad-pwd-plugin-localhost certificate exists in the IDM/OpenIDM keystore:

$ keytool -list -keystore security/keystore.jceks -storetype JCEKS -storepass password -v

Keystore type: JCEKS
Keystore provider: SunJCE

Creation date: Jan 28, 2017
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Serial number: 20be2082
Valid from: Sat Jan 28 10:08:00 EDT 2017 until: Sat Jan 30 10:08:00 EDT 2027
Certificate fingerprints:
MD5:  A7:29:7A:B7:3F:0D:2D:A1:D2:79:9E:77:E9:6E:8A:1E
SHA1: 54:C2:81:FF:B2:3B:89:6B:D0:AB:D9:A1:3C:95:B1:85:ED:5C:2B:35
SHA256: 3C:90:BA:9E:3B:78:1B:8D:64:A5:89:4B:64:EC:A2:A6:FA:FA:E3:78:4E:95:26:0A:50:C7:89:65:22:1F:F9:CB
Signature algorithm name: SHA256withRSA
Version: 3

*******************************************
*******************************************


Certificate DN

Check that the certificate DN has been added as a value of the allowedAuthenticationIdPatterns property for the CLIENT_CERT authentication module in the authentication.json file (located in the /path/to/idm/conf directory). For example:

{
    "name" : "CLIENT_CERT",
    "properties" : {
        "queryOnResource" : "managed/user",
        "defaultUserRoles" : [
            "internal/role/openidm-cert"
        ],
        "allowedAuthenticationIdPatterns" : [ "CN=localhost, O=AD-pwd-plugin Self-Signed Certificate" ]
    },
    "enabled" : true
}


You will know that authentication has been successful when you see something similar to the following in the openidm.log:

FINE: Authentication client certificate subject CN=localhost, O=AD-pwd-plugin Self-Signed Certificate
Jan 31, 2017 04:27:49 PM org.forgerock.caf.authentication.framework.AuthModules$LoggingAuthModule$4 handleResult
FINE: ClientCert has successfully authenticated the client

Password synchronization messages are logged to idm.log and these messages can help identify the cause of your issues. This file is located in the directory you specified when you installed the AD password sync plugin; you can confirm the location by checking the logPath registry entry under the HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync registry key.

Changing the log level to debug

It is recommended that the log level is set to debug during the install to increase the level of logging. If you didn't set it this way during the install, you can change it by setting the logLevel registry entry under the HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync registry key to debug, for example, "logLevel"="debug". You do not need to restart the domain controller if you change this registry entry as it will be picked up dynamically by the AD password sync plugin.
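For illustration, the same change can be expressed as a .reg file fragment that you import via Regedit (this assumes the default PasswordSync key path described above; back up the registry before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync]
"logLevel"="debug"
```

As noted above, the plugin picks up this change dynamically, so no domain controller restart is needed.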

Common errors

Some common error messages you might find in the idm.log are as follows, along with possible solutions:

| Error message | Possible solution |
|---|---|
| ERROR [468:1732] encrypt_key(): couldn't import key file "\path\to\ad-pwd-plugin-idm.p12" | Check that the pkcs12 file password exists and is valid. |
| ERROR [468:5372] password_change_worker(): change request for user "user_ID" failed, network status: 0, response: (empty) | Check that the network connection between the AD password sync plugin and IDM/OpenIDM is working correctly. |
| ERROR [468:6692] http_status() error parsing status header | Change the netTimeout registry entry to set the network connection/send/read timeout (in seconds) to prevent requests to IDM/OpenIDM failing if IDM/OpenIDM takes too long to respond. |

The netTimeout registry entry is under the HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync registry key; a typical netTimeout setting is between 2 and 6 seconds (for example, "netTimeout"="5" for 5 seconds). You do not need to restart the domain controller if you change the netTimeout registry entry as it will be picked up dynamically by the AD password sync plugin.

#### IDM/OpenIDM log

The IDM/OpenIDM log (openidm.log, located in the /path/to/idm/logs directory) can be useful for identifying any issues with the password synchronization process from the IDM/OpenIDM side. You should increase logging in the IDM/OpenIDM debug log to FINEST (set the .level property to FINEST in logging.properties, which is located in the /path/to/idm/conf directory).
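As a sketch, the relevant logging.properties change looks like this (FINEST is very verbose, so revert to the default INFO once you have finished troubleshooting):

```
# /path/to/idm/conf/logging.properties
# Global logging level - raised to FINEST while troubleshooting password sync
.level=FINEST
```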

Some common error messages you might find in the openidm.log are as follows, along with their meanings:

| Error message | Meaning |
|---|---|
| WARNING: JSON resource exception org.forgerock.openidm.objset.InternalServerErrorException: org.forgerock.json.fluent.JsonException: org.forgerock.json.crypto.JsonCryptoException: key not found: ad-pwd-plugin-idm | This message indicates there is a problem with the IDM/OpenIDM configuration; in particular, the key/alias name used to encrypt the password is unknown to IDM/OpenIDM. You should check your configuration and verify that it is correct. |
| FINE: Client certificate subject CN=localhost, O=AD-pwd-plugin Self-Signed Certificate did not match allowed patterns | This message indicates that the client certificate is not recognized; you must add the certificate DN to the IDM/OpenIDM configuration as detailed in the Keystores and certificates section. |
One of the following error messages is shown:
FINE: Failed to evaluate path/to/openidm/bin/defaults/script/router-authz.js script.
FINEST: Resource exception: 403 Forbidden: "Access denied"
...
Caused by: org.mozilla.javascript.JavaScriptException: [object Object] (/path/to/idm/bin/defaults/script/router-authz.js#484)


You may also see this response:

{"code":403,"reason":"Forbidden","message":"Access denied"}

These messages indicate an issue with the configuration in access.js (located in the /path/to/idm/script directory).

In particular, this can happen if you have changed the default 'password' attribute in the AD password sync plugin configuration. See the Password attribute subsection below for resolution.

You can obtain the AD password sync plugin configuration using one of the following approaches:

• output settings and validate.
• export configuration.

Output settings and validate

The idmsync.exe --validate command output provides information about the current password synchronization service settings so these can be checked for any obvious errors. Essentially, this command outputs the values of the registry entries under the HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync registry key and provides feedback about them being valid or not. You should correct any that are invalid.

You can run this command as follows from the command prompt:

C:\> cd \path\to\PasswordSync
C:\path\to\PasswordSync> idmsync.exe --validate

Export configuration

You can export the AD password sync plugin configuration by navigating to the HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync registry key in Regedit, right-clicking on PasswordSync and then selecting export followed by text format. This exports the configuration to a text file.

#### IDM/OpenIDM configuration files

The following configuration files (located in the /path/to/idm/conf directory) are useful to check how synchronization has been configured on the IDM/OpenIDM side and compare with the AD configuration:

• sync.json
• managed.json
• authentication.json
• access.js (located in the /path/to/idm/script directory).
• any associated scripts used to manage or transform the password attribute during sync, such as a separate onUpdate script (these are typically located in the /path/to/idm/script directory).
##### Note

It can be very useful to add logging to your scripts to help identify the point at which an issue can occur. Refer to How do I add logging to JavaScript files in IDM/OpenIDM (All versions)? and How do I add logging to Groovy scripts in IDM/OpenIDM (All versions)? for further information.

If you have changed the default 'password' attribute in the AD password sync plugin configuration, you must also update access.js to use the new attribute. For example, if you have changed the attribute to adPassword, you would update the customAuthz field in the following section to replace 'password' with 'adPassword'; this attribute is case sensitive:

// Clients authenticated via SSL mutual authentication
{
    "pattern"   : "managed/user",
    "roles"     : "internal/role/openidm-cert",
    "methods"   : "patch,action",
    "actions"   : "patch",
},
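The exact customAuthz expression varies by IDM/OpenIDM version; as an illustrative sketch only (the restrictPatchToFields helper shown here may not match your version, so check your own access.js for the real expression), the change amounts to replacing the attribute name inside that expression:

```
// Illustrative sketch only - the helper name may differ in your access.js
"customAuthz" : "restrictPatchToFields(['adPassword'])"
```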


If you are using Microsoft Windows 2003, there is a known issue where the password synchronization service may not start if there are blank lines in the Notification Packages registry entry within the list of dlls. You should check this entry (under the HKLM\SYSTEM\CurrentControlSet\Control\Lsa registry key) and remove any blank lines. If you change this registry entry, you must restart the domain controller for this to take effect.

## How do I collect data for troubleshooting high CPU utilization on IDM/OpenIDM (All versions) servers?

The purpose of this article is to provide troubleshooting guidance if you are experiencing consistently high CPU utilization on the IDM/OpenIDM server. If CPU utilization is in the 70-80% range under load, you should investigate the cause, as performance is likely to be impacted.

#### Troubleshooting high CPU usage

High CPU utilization is usually experienced in conjunction with other symptoms such as servers becoming unresponsive and poor performance.

##### Note

On Linux®, Unix® and Solaris® systems, there is a script you can run to capture the stack traces and process information as detailed in How do I use the msnapshots script to capture information for troubleshooting IDM/OpenIDM (All versions)?

There are a number of different types of data you should collect to troubleshoot high CPU utilization. If you are seeing high CPU usage, collect the following data and submit it when you raise the ticket to enable us to help you more quickly:

In addition, please include any information about other symptoms you are seeing at the same time as high CPU usage and any changes you have made to your system that may have contributed to the problem.

##### Note

It is important that you collect all the data at the point you are experiencing high CPU usage so that the timestamps of the data collected and the high CPU usage correspond. Please notify us of the exact timestamp when high CPU was noticed so we can focus our attention on the relevant information.

#### Server and audit logs

FINEST level server logs record all system and custom log messages for the server and are useful for troubleshooting. You can enable FINEST level logging as described in Integrator's Guide › Configuring Server Logs. Additionally, it is advisable to add thread IDs to log statements for enhanced troubleshooting as described in How do I add Thread IDs to log statements in IDM/OpenIDM (All versions)?

Audit logs record all system activity into three separate files (access, activity and reconciliation), which can provide useful information on the activities that may be contributing to the high CPU. See Integrator's Guide › Setting up Audit Logging for further information.

#### JVM stack traces and current options

JVM stack traces must be collected before killing the affected process or restarting the server; otherwise the information relating to the affected process is lost forever, which may make it impossible to identify the root cause. In addition to collecting stack traces, you should also provide details of the current JVM settings as they can provide an opportunity for tuning, which can reduce CPU usage.

Collecting stack traces and finding current JVM settings is described in How do I collect JVM data for troubleshooting IDM/OpenIDM (All versions)?

Detailed process / thread information is useful for identifying the exact thread within a process that is experiencing the high CPU. You should collect this data at the same time as the stack trace so we can look for the identified thread in the stack trace for more information.

• On Unix and Linux systems, you can use the top command to output process information for all threads to a file:
$ top -b -n 1 -H -p [pid] > top-output.txt
replacing [pid] with the process ID of the affected process.
• On Solaris, you can use the prstat command to capture this information:
$ prstat -L -p [pid]
replacing [pid] with the process ID of the affected process.
• On Microsoft® Windows®, you will need to use the Process Monitor.

See How do I find which thread is consuming CPU in a Java process in IDM/OpenIDM (All versions)? for further information on identifying the thread that is causing high CPU usage.

#### GC logs

GC logs also provide useful information about tuning issues that may be contributing to the high CPU usage. Enabling GC logging is described in How do I enable Garbage Collector (GC) Logging for IDM/OpenIDM (All versions)?

Once you have collected your GC logs, you should refer to Best practice for JVM Tuning for advice on resolving common issues identified in the logs.
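As a sketch, GC logging is typically enabled by adding JVM flags to OPENIDM_OPTS in startup.sh; the flags below assume Java 8 (Java 9 and later use the unified -Xlog:gc* syntax instead), so follow the linked article for your exact version:

```
# Example OPENIDM_OPTS for Java 8 (assumption - adjust paths and heap sizes for your deployment)
OPENIDM_OPTS="-Xmx1024m -Xms1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/idm/logs/gc.log"
```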

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I find which thread is consuming CPU in a Java process in IDM/OpenIDM (All versions)?

You can identify which thread is consuming CPU in a Java process as follows:

1. Determine the process ID (PID) of the affected server process using the following command:
$ jps -v

2. Determine which thread in the PID identified in step 1 is consuming the CPU:

• On Unix® and Linux® systems, you can use the top command:
$ top -n 1 -H -p [pid]
replacing [pid] with the process ID of the affected process.
• On Solaris®, you can use the prstat command:
$ prstat -L -p [pid]
replacing [pid] with the process ID of the affected process.
• On Microsoft® Windows®, you can use the Process Monitor.

These commands all give an output listing all the threads in the selected process along with various details, including the CPU usage percentage. For example, the top command gives an output similar to this:

Tasks: 152 total,   0 running, 152 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.0%us,  1.4%sy,  0.0%ni, 94.7%id,  0.8%wa,  0.0%hi,  1.1%si,  0.0%st
Mem:   5008484k total,  4854428k used,   154056k free,    85624k buffers
Swap:  5013496k total,     6516k used,  5006980k free,  3393948k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8706 user.1    20   0 2433m 561m  10m R 32.0 11.5   0:35.45 java
 8624 user.1    20   0 2433m 561m  10m S  0.0 11.5   0:00.01 java
 8625 user.1    20   0 2433m 561m  10m S  0.0 11.5   0:10.12 java
 8626 user.1    20   0 2433m 561m  10m S  0.0 11.5   0:35.60 java
 8627 user.1    20   0 2433m 561m  10m S  0.0 11.5   0:00.01 java
 8628 user.1    20   0 2433m 561m  10m S  0.0 11.5   0:00.02 java

In this example, we can see that thread ID 8706 is consuming 32% of the CPU.

3. Produce a stack trace for the PID identified in step 1 using the following command:
$ jstack [PID] > jstack-output.txt
4. Convert the thread ID identified as problematic in step 2 to a hex value. In our example output, the thread ID decimal value 8706 converts to hex value 2202.
5. Search the stack trace output for this hex value using grep; you are looking for a thread nid that matches this hex value.
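As a sketch, steps 4 and 5 can be combined on Unix and Linux systems, since jstack records the native thread ID (nid) in hex:

```shell
# Convert the decimal thread ID reported by top (8706 in the example above)
# to the hex native thread ID (nid) format used in jstack output
printf 'nid=0x%x\n' 8706
```

This prints nid=0x2202, which is the string to search for in the stack trace output, for example with grep "nid=0x2202" jstack-output.txt.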
#### JSON Parsing Error

You might encounter this error message in the logs:

"Configuration for org.forgerock.openidm.provisioner.openicf could not be parsed and may not be valid JSON : Unexpected character ('}' (code 125)): was expecting double-quote to start field name"

The error message usually indicates the precise point where the JSON file has the syntax problem. The error above was caused by an extra comma in the JSON file, {"attributeName":{},{},}. The second comma is redundant. This situation usually results in the service that the specific JSON file configures being left in the unsatisfied state.

After fixing your configuration, change to the /path/to/idm/ directory and use the cli.sh validate command to check that your configuration files are valid:

$ cd /path/to/idm
$ ./cli.sh validate
Executing ./cli.sh...
Starting shell in /path/to/idm
Jun 05, 2017 2:14:30 PM org.forgerock.openidm.core.FilePropertyAccessor loadProps
INFO: Using properties at /path/to/idm/conf/boot/boot.properties
...................................................................
[Validating] Load JSON configuration files from:
[Validating] /path/to/idm/conf
[Validating] audit.json .................................. SUCCESS
[Validating] authentication.json ......................... SUCCESS
[Validating] cluster.json ................................ SUCCESS
[Validating] emailTemplate-resetPassword.json ............ SUCCESS
[Validating] emailTemplate-welcome.json .................. SUCCESS
[Validating] endpoint-getavailableuserstoassign.json ..... SUCCESS
...
[Validating] ui.context-selfservice.json ................. SUCCESS
[Validating] workflow.json ............................... SUCCESS

#### Bad Connector Host Reference in Provisioner Configuration

You might see the following error when a provisioner configuration loads:

Wait for meta data for config org.forgerock.openidm.provisioner.openicf-hrdb

In this case the configuration fails to load because information is missing. One possible cause is an incorrect value for connectorHostRef in the provisioner configuration file.

For local Java® connector servers, the following rules apply:

• If the connector .jar is installed as a bundle under /path/to/idm/bundle, then the value must be:
"connectorHostRef" : "osgi:service/org.forgerock.openicf.framework.api.osgi.ConnectorManager",
• If the connector .jar is installed as a connector under /path/to/idm/connectors, then the value must be:
"connectorHostRef" : "#LOCAL",

#### Missing Name Attribute

In this case, the situation in the audit recon log shows "NULL". A missing name attribute error, followed by an IllegalArgumentException, points to misconfiguration of the correlation rule, with the correlation query pointing to the external system.
Such queries usually reference the "name" field which, if empty, leads to the error below:

Jan 20, 2012 1:59:58 PM org.forgerock.openidm.provisioner.openicf.commons.AttributeInfoHelper build
SEVERE: Failed to build name attribute out of [null]
Jan 20, 2012 1:59:58 PM org.forgerock.openidm.provisioner.openicf.impl.OpenICFProvisionerService query
SEVERE: Operation [query, system/ad/account] failed with Exception on system object: java.lang.IllegalArgumentException: Attribute value must be an instance of String.
Jan 20, 2012 1:59:58 PM org.forgerock.openidm.router.JsonResourceRouterService handle
WARNING: JSON resource exception
org.forgerock.json.resource.JsonResourceException: IllegalArgumentException
at org.forgerock.openidm.provisioner....OpenICFProvisionerService.query...
at org.forgerock.openidm.provisioner.....OpenICFProvisionerService.handle...
at org.forgerock.openidm.provisioner.impl.SystemObjectSetService.handle...
at org.forgerock.json.resource.JsonResourceRouter.handle...

Check your correlationQuery. Another symptom of a broken correlation query is that the audit recon log shows a situation of "NULL", and no onCreate, onUpdate or similar scripts are executed.

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I debug server-side JavaScript code in OpenIDM 3.x and 4.x?

The purpose of this article is to provide information on debugging server-side JavaScript® code in OpenIDM using Eclipse® IDE for Java EE developers. Debugging JavaScript code can help you troubleshoot issues with your code that cannot be resolved by adding logging to your scripts.

#### Debugging JavaScript code

##### Note

JavaScript debugging is only supported with Eclipse Luna; the more recent releases are not supported.

To configure JavaScript debugging via Eclipse:

1. Download the Java EE version of Eclipse Luna from the Eclipse Downloads page.
2. Back up and delete the following files from the /path/to/openidm/bundle directory:
• rhino.debugger-1.0.300.v201109150503.jar
• transport-1.0.100.v201109150330.jar
3. Copy the following files from the Eclipse plugins directory to the /path/to/openidm/bundle directory:
• org.eclipse.wst.jsdt.debug.rhino.debugger_1.0.400.v201402051800.jar
• org.eclipse.wst.jsdt.debug.transport_1.0.200.v201402051800.jar
4. Delete the contents of the /path/to/openidm/felix-cache directory.
5. Edit the script.json file (located in the /path/to/openidm/project/conf directory) and uncomment the following line:
"javascript.debug" : "transport=socket,suspend=y,address=9888,trace=true",
6. Start Eclipse and create a new JavaScript Project.
7. Select JavaScript > Include Path > Source within the Project Properties and add a link to the folder(s) containing the JavaScript files you want to debug via the Link External Folder... button.
8. Start the OpenIDM instance.
9. Attach the debugger to OpenIDM via the Run > Debug Configurations... menu item in Eclipse.

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

## Logging

## Where can I find useful logs for troubleshooting ForgeRock products?

The purpose of this article is to provide information on finding logs across ForgeRock products (AM/OpenAM, DS/OpenDJ, IDM/OpenIDM and IG/OpenIG) for troubleshooting purposes. This article shows default log locations, although most of them can be changed.

#### AM/OpenAM logs

The following logs are available:

| Log type | Location |
|---|---|
| AM/OpenAM debug logs | $HOME/[am_instance]/openam/debug |
| AM/OpenAM audit logs | $HOME/[am_instance]/openam/log |
| ssoadm logs | /path/to/openam-tools/[ssoadm_install_folder]/debug |
| Fedlet debug logs | $HOME/fedlet/debug |
| Container logs if deployed on Apache Tomcat™ (localhost_access_log.YYYY-MM-DD.log and catalina.out) | /path/to/tomcat/logs |
| SSL debug logs if deployed on Apache Tomcat (catalina.out) | /path/to/tomcat/logs |

#### Agents logs

The following logs are available:

| Log type | Location |
|---|---|
| Web agents debug.log | /path/to/agent/instances/agent_n/logs/debug directory where the agent is installed. |
| Web agents audit.log | /path/to/agent/instances/agent_n/logs/audit directory where the agent is installed. |
| Web agents install log (installYYYYMMDDHHMMSS.log) | /path/to/agent/log directory where the agent is installed. |
| Web agents system information: system_n.log (Agents 5.5 and later); agent.log (pre-Agents 5.5) | /path/to/agent/log directory where the agent is installed. |
| Validator log (validate_xx.log) - Agents 5.5 and later | /path/to/agent/log directory where the agent is installed. |
| Java agents debug log (amAgent) | /path/to/agent/agent_n/logs/debug directory where the agent is installed. |
| Java agents audit logs | /path/to/agent/agent_n/logs/audit directory where the agent is installed. |
| Java agents install logs | /path/to/agent/installer-logs directory where the agent is installed. |
| Validator log (validate_xx.log) - Agents 5.5 and later | /path/to/agent/log directory where the agent is installed. |

#### DS/OpenDJ logs

The following logs are available:

| Log type | Location |
|---|---|
| Embedded DS/OpenDJ logs (access, errors and replication) | $HOME/[am_instance]/opends/logs |
| External DS/OpenDJ logs (access, errors and replication) | /path/to/ds/logs directory where DS/OpenDJ is installed. |
| SSL debug logs (server.out) | /path/to/ds/logs directory where DS/OpenDJ is installed. |

#### IDM/OpenIDM logs

The following logs are available:

| Log type | Location |
|---|---|
| IDM/OpenIDM log (openidm0.log.n) | /path/to/idm/logs |
| IDM/OpenIDM audit logs | /path/to/idm/audit |
| Jetty® Request logs | /path/to/idm/logs |
| DS/OpenDJ password sync plugin | /path/to/ds/logs directory where DS/OpenDJ is installed. |
| Active Directory® password sync plugin (idm.log) | The location is specified in the logPath registry entry under the HKEY_LOCAL_MACHINE\SOFTWARE\ForgeRock\OpenIDM\PasswordSync registry key. |

#### IG/OpenIG logs

The following logs are available:

| Log type | Location |
|---|---|
| IG 5 and later | $HOME/.openig/logs |
| OpenIG 3.x and 4.x | stdout (catalina.out if you are using Apache Tomcat™) or /tmp/gateway.log depending on how you have configured logging. |
| SAML logs | $HOME/.openig/SAML/debug |

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I add logging to JavaScript files in IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on adding logging to JavaScript® files in IDM/OpenIDM. You may want to add logging to troubleshoot a script that is not behaving as expected or desired.

#### Background information

JavaScript in IDM/OpenIDM uses the SLF4J logging facility. By default, the logger outputs logging to the standard IDM/OpenIDM log. There are five different logger levels available for JavaScript files, where ERROR is the least verbose as it only produces logs in the most serious circumstances through to TRACE, which is the most verbose as it provides detailed information about program execution. These loggers take the following format:

logger.error(string message, optional params)
logger.warn(string message, optional params)
logger.info(string message, optional params)
logger.debug(string message, optional params)
logger.trace(string message, optional params)

These five logger levels correspond to the log levels in the logging.properties file (located in the /path/to/idm/conf directory) as follows:

| loggers | logging.properties |
|---|---|
| ERROR | SEVERE |
| WARN | WARNING |
| INFO | INFO |
| DEBUG | FINER and FINE |
| TRACE | FINEST |

#### Logging in JavaScript scripts

Logging Level

Within logging.properties, you can set different logging levels, which affect how logging is output in JavaScript files:

1. Global logging level - this logging level applies to everything, including JavaScript files, and defaults to INFO:
.level
2. All JavaScript files logging level - this logging level applies to all JavaScript files and overrides the global logging level when set. The property used for setting the log level varies by version:
• IDM 5.5: org.forgerock.openidm.script.javascript.JavaScript.level
• Pre-IDM 5.5: org.forgerock.script.javascript.JavaScript.level
3. Individual JavaScript files logging level - this logging level applies to specific JavaScript files and overrides the All JavaScript files logging level when set. The property used for setting the log level varies by version:
• IDM 5.5: org.forgerock.openidm.script.javascript.JavaScript.script-name.level
• Pre-IDM 5.5: org.forgerock.script.javascript.JavaScript.script-name.level

For example, in IDM 5.5, you could leave the global logging level set to the default INFO. You could then set the following logging levels in logging.properties to generate only WARN logging for all JavaScript files but output DEBUG information for a specific file (myScript):

.level=INFO
org.forgerock.openidm.script.javascript.JavaScript.level=WARNING
org.forgerock.openidm.script.javascript.JavaScript.myScript.level=FINE

By setting logging levels in this way, you are not outputting DEBUG and INFO messages for all JavaScript files, only the one you want to debug.

JavaScript files

Within your JavaScript files, you can then add the appropriate logger code to log to the IDM/OpenIDM log. For example, in the JavaScript file that you want to debug (myScript in the above example), you could add the following:

logger.debug("Test error message logged for user {}", userName);

This example logger will output to your IDM/OpenIDM log as:

May 24, 2014 11:27:28 AM org.forgerock.openidm.script.javascript.LoggerPropertyFactory$1$3 call
FINE: Test error message logged for user jdoe

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

## How do I add logging to Groovy scripts in IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on adding logging to Groovy scripts in IDM/OpenIDM. You may want to add logging to troubleshoot a script that is not behaving as expected or desired.

#### Background Information

Groovy in IDM/OpenIDM can use either the SLF4J logging facility or java.util logging, depending on which Logger class you import into your script.
By default, the logger outputs logging to the standard IDM/OpenIDM log.

SLF4J and Groovy log levels

There are five different log levels available for Groovy files when you use SLF4J logging, where ERROR is the least verbose as it only produces logs in the most serious circumstances, through to TRACE, which is the most verbose as it provides detailed information about program execution. These logs take the following format:

logger.error(string message, optional params)
logger.warn(string message, optional params)
logger.info(string message, optional params)
logger.debug(string message, optional params)
logger.trace(string message, optional params)

And there are six different log levels available for Groovy files when you use java.util logging, where SEVERE is the least verbose as it only produces logs in the most serious circumstances, through to FINEST, which is the most verbose as it provides detailed information about program execution. These logs take the following format:

logger.severe(string message, optional params)
logger.warning(string message, optional params)
logger.info(string message, optional params)
logger.fine(string message, optional params)
logger.finer(string message, optional params)
logger.finest(string message, optional params)

Logging Level

The following table indicates how these log levels correspond to each other and also to the log levels in the logging.properties file (located in the /path/to/idm/conf directory):

| log levels for SLF4J logging | log levels for java.util logging | logging.properties |
| ---------------------------- | -------------------------------- | ------------------ |
| ERROR                        | SEVERE                           | SEVERE             |
| WARN                         | WARNING                          | WARNING            |
| INFO                         | INFO                             | INFO               |
| DEBUG                        | FINER and FINE                   | FINER and FINE     |
| TRACE                        | FINEST                           | FINEST             |

The Global logging level in the logging.properties file applies to everything, including Groovy files, and defaults to INFO:

.level=INFO

You can change this if you require a different log level globally.
Alternatively, you can use SLF4J logging within a Groovy script to set logging levels that are specific to one or more Groovy scripts, as demonstrated below.

#### SLF4J logging in Groovy scripts (called directly)

When Groovy scripts are called directly, for example from an endpoint, trigger or workflow, you can use either the SLF4J logging facility or java.util logging. This section details the use of the SLF4J logging facility.

With SLF4J logging, you can use logger names (either explicitly or by passing a class) to configure different log levels that only apply to specific Groovy scripts.

Explicit logger name

This example demonstrates defining a custom logger name (myGroovyLogger); you can add the code from steps 1 and 2 to multiple Groovy scripts if you want to define the same log level in multiple scripts:

1. Include the following code at the top of your Groovy script to import the SLF4J logger and define your custom logger name:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final Logger logger = LoggerFactory.getLogger("myGroovyLogger");

2. Include the appropriate logger code within your Groovy script where logging output is required. For example, you could add the following to output a simple message:

logger.trace("Test log output message");

3. Add your logger name and required log level to the logging.properties file, for example to output at the finest log level:

myGroovyLogger.level=FINEST

This example log will output to your IDM/OpenIDM log as:

Oct 24, 2017 11:24:12 AM org.slf4j.Logger$trace call
FINEST: Test log output message

Logger name by passing a class

This example demonstrates using a logger name by passing a class, where the class file name is the base name of the *.groovy script when compiled. For example, if you wanted to debug the echo.groovy script (from the echo endpoint sample), you would use echo.class and echo.level in your Groovy script as follows:

1. Include the following code at the top of your Groovy script to import the SLF4J logger and define your logger name:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final Logger logger = LoggerFactory.getLogger(echo.class);

2. Include the appropriate logger code within your Groovy script where logging output is required. For example, you could add the following to output a simple message:
logger.trace("Test log output message for echo endpoint sample");

3. Add the logger name (in the format scriptName.level) and the required log level to the logging.properties file, for example to output at the finest log level:
echo.level=FINEST
This example log will output to your IDM/OpenIDM log as:
Oct 24, 2017 11:24:12 AM org.slf4j.Logger$trace call
FINEST: Test log output message for echo endpoint sample

#### java.util logging in Groovy scripts (called directly)

When Groovy scripts are called directly, for example from an endpoint, trigger or workflow, you can use either the SLF4J logging facility or java.util logging. This section details the use of java.util logging.

To configure java.util logging:

1. Change the Global logging level in the logging.properties file if you want a level other than INFO.
2. Include the following code at the top of your Groovy script to import the java.util logger and define the logger:

java.util.logging.Logger logger = java.util.logging.Logger.getLogger ""

3. Include the appropriate logger code within your Groovy script where logging output is required. For example, you could add the following to output a simple message:

logger.info("Test log output message")

This example log will output to your IDM/OpenIDM log as:

Oct 24, 2017 11:24:12 AM org.forgerock.openidm.script.javascript.LoggerPropertyFactory$1$3 call
INFO: Test log output message

#### Logging in Groovy scripts (called from Groovy connector)

When Groovy scripts are called from the Groovy connector, you can only use the SLF4J logging facility:

1. Change the Global logging level in the logging.properties file if you want a level other than INFO.
2. Include the following code at the top of your Groovy script to import the SLF4J logger:

import org.identityconnectors.common.logging.Log

3. Include the following code to define the logger:

logger = LoggerFactory.getLogger('logger');

4. Include the appropriate logger code within your Groovy script where logging output is required.
For example, you could add the following to output a simple message:

logger.info("Test log output message");

This example log will output to your IDM/OpenIDM log as:

[2017-10-12 11:27:26.506] [INFO ] [org.forgerock.openicf.misc.scriptedcommon.ScriptedConfiguration evaluate]: Test log output message

You can alternatively use the log.info format as demonstrated in the scripted-rest-with-dj sample (scriptedrest2dj) (located in the /path/to/idm/samples directory). The only difference between the sample and the process above is the code used to define the logger and the format of the logger code:

• Step 3 - use the following code to define the logger:

def log = log as Log

• Step 4 - use the following logger code format, that is log. instead of logger.:

log.info("Test log output message");

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

## How do I add Thread IDs to log statements in IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on adding thread IDs to log statements, including how to correlate thread IDs with transaction IDs for tracing transactions through standard audit and openidm0.log.0 logs. This information can help with troubleshooting and tracking performance issues.

#### Overview

IDM/OpenIDM uses java.util logging, which allows you to extend logging using custom handlers, formatters and so on. This article provides examples of how you can use a custom log formatter to extend java.util.logging.SimpleFormatter to include thread IDs in log statements.

##### Note

These examples are intended to show you what is possible; they are not recommendations from ForgeRock, nor are they supported. It is your responsibility to verify that they are suitable for your needs.

#### Adding thread IDs to log statements

1. Write a custom log formatter using Java® that includes the thread ID.
For example:

package org.forgerock.idm.sample;

import java.util.logging.LogRecord;

public class ThreadIdFormatter extends java.util.logging.SimpleFormatter {
    public String format(LogRecord record) {
        String formatted = super.format(record);
        return "ThreadID=" + Thread.currentThread().getId()
                + " ThreadName=" + Thread.currentThread().getName()
                + " - " + formatted;
    }
}

2. Compile and package this log formatter into a jar file, for example:

$ mkdir classes
$ javac -d classes ThreadIdFormatter.java
$ cd classes
$ jar cvf ../thread-id-formatter.jar org

3. Shutdown the IDM/OpenIDM instance:

$ cd /path/to/idm
$ ./shutdown.sh

4. Copy the packaged jar file to the /path/to/idm/bin directory, for example:

$ cp thread-id-formatter.jar /path/to/idm/bin

5. Delete the felix cache:

$ cd /path/to/idm
$ rm -rf felix-cache/*

6. Update the logging.properties file (located in the /path/to/idm/conf directory) to include this log formatter and comment out the SimpleFormatter, for example:
# java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.formatter = org.forgerock.idm.sample.ThreadIdFormatter

7. Update the logging.properties file to include the following new line, which includes milliseconds in the log times:
java.util.logging.SimpleFormatter.format = %1$tb %1$td, %1$tY %1$tl:%1$tM:%1$tS.%1$tL %1$Tp %2$s%n%4$s: %5$s%6$s%n

8. Start the IDM/OpenIDM server:
$cd /path/to/idm$ ./startup.sh

This will add thread IDs to all log statements; the additions are the ThreadId and ThreadName fields, for example:

ThreadId : 156 - /openidm/managed/user/?_queryFilter=true - ThreadName : qtp440289386-156 - May 02, 2018 11:07:44.387 AM org.forgerock.openidm.repo.jdbc.impl.GenericQueryResultMapper mapQueryToObject
FINEST: Query result for queryId: null type: managed/user converted obj: {mail=jdoe@example.com, sn=doe, givenName=john, userName=jdoe, accountStatus=active, effectiveRoles=[], effectiveAssignments=[], _id=546dfe6d-7c7b-4a00-abe7-f1b4d8d1e5ee, _rev=0}


#### Correlating transaction IDs with thread IDs

If you have configured the audit logs to include the transactionId per Integrator's Guide › Configuring the Audit Service (IDM 5 and later; OpenIDM 4.x), you can use this log formatter in conjunction with the logging scripts used in router.json (located in the /path/to/idm/conf directory) to match transaction IDs with thread IDs.

For example, you could add a block similar to the following to router.json:

    {
        "pattern": ".*",
        "onRequest": {
            "type": "text/javascript",
            "source": "logger.debug(\"ONREQUEST \" + request.method + \" \" + request.resourcePath + \" xact=\" + context.transactionId.transactionId.value);"
        },
        "onResponse": {
            "type": "text/javascript",
            "source": "logger.debug(\"ONRESPONSE \" + request.method + \" \" + request.resourcePath + \" xact=\" + context.transactionId.transactionId.value);"
        },
        "onFailure": {
            "type": "text/javascript",
            "source": "logger.debug(\"ONFAILURE \" + request.method + \" \" + request.resourcePath + \" xact=\" + context.transactionId.transactionId.value);"
        }
    }


This change adds thread IDs to all log statements for these scripts; when FINE logging is enabled, the transaction IDs are shown as well, which allows you to correlate thread IDs with transaction IDs. For example (note the transaction ID at the end of the first line):

FINE: ONREQUEST read info/version xact=db061a02-fe76-4c34-b99c-ac8839ea763d-60
ThreadId=616 ThreadName=qtp405829296-616 - /openidm/info/version - May 02, 2018 11:15:12.593 AM org.forgerock.openidm.filter.ScriptedFilter evaluateOnResponse

From this example, you can see that thread ID 616 correlates to transaction ID db061a02-fe76-4c34-b99c-ac8839ea763d-60. This information can then be used to trace transactions through the standard IDM/OpenIDM log using the thread ID.
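Once both IDs appear in the log, the correlation can be scripted rather than done by eye. A minimal sketch using grep on a two-line sample modeled on the example output above (the log extract is illustrative):

```shell
# Two sample log lines of the kind shown above (inlined here for illustration)
cat > sample.log <<'EOF'
FINE: ONREQUEST read info/version xact=db061a02-fe76-4c34-b99c-ac8839ea763d-60
ThreadId=616 ThreadName=qtp405829296-616 - /openidm/info/version - May 02, 2018 11:15:12.593 AM org.forgerock.openidm.filter.ScriptedFilter evaluateOnResponse
EOF

# Find the transaction of interest, then pull the thread ID from the
# adjacent formatter line (grep -A1 keeps the line that follows the match).
thread_id=$(grep -A1 'xact=db061a02-fe76-4c34-b99c-ac8839ea763d-60' sample.log \
  | grep -o 'ThreadId=[0-9]*' | cut -d= -f2)
echo "transaction maps to thread $thread_id"
```

In a real log the same grep works against openidm0.log.0 directly; only the transaction ID changes.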

N/A

## How do I configure Jetty Request logging for IDM/OpenIDM (All versions)?

The purpose of this article is to provide assistance with configuring Jetty® Request logging for IDM/OpenIDM. Request logs provide details of every request that the server processes and can be a useful troubleshooting tool.

#### Configuring Jetty Request logging

You can configure Jetty Request logging by adding something similar to the following to the end of the jetty.xml file (located in the /path/to/idm/conf directory):

  <Get name="handler">
    <Call name="addHandler">
      <Arg>
        <New id="RequestLog" class="org.eclipse.jetty.server.handler.RequestLogHandler"/>
      </Arg>
    </Call>
  </Get>

  <Ref id="RequestLog">
    <Set name="requestLog">
      <New id="RequestLogImpl" class="org.eclipse.jetty.server.NCSARequestLog">
        <Arg><SystemProperty name="jetty.logs" default="./logs"/>/yyyy_mm_dd.request.log</Arg>
        <Set name="retainDays">90</Set>
        <Set name="append">true</Set>
        <Set name="extended">false</Set>
        <Set name="LogTimeZone">GMT</Set>
      </New>
    </Set>
  </Ref>
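For reference, an entry in an NCSA-format request log is a single space-delimited line, so it lends itself to quick command-line analysis. A sketch pulling the status code and path out of a fabricated sample line (not taken from a real server):

```shell
# Illustrative NCSA common log format line:
# host ident authuser [date] "request" status bytes
line='192.168.0.10 - - [02/May/2018:11:07:44 +0000] "GET /openidm/info/ping HTTP/1.1" 200 99'

# Field 9 is the status code; the quoted request holds method and path.
status=$(echo "$line" | awk '{print $9}')
path=$(echo "$line" | awk -F'"' '{split($2, r, " "); print r[2]}')
echo "$status $path"
```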


This example configuration would output the Request log to $IDM_HOME/logs and the log filenames would include the date (in yyyy_mm_dd format). Request log files would be kept for 90 days before being deleted. Existing log files would be appended to and the standard (non-extended) NCSA format would be used in the GMT timezone. You should change this example configuration as per your logging requirements. See Class NCSARequestLog for further configuration options.

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I enable database logging in IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on enabling database logging in IDM/OpenIDM. Database logging is useful for troubleshooting query-related performance issues.

#### Enabling database logging

You can enable database logging by amending the logging.properties file (located in the /path/to/idm/conf directory).

##### Note

All the examples show the level being set to FINEST, which is the most detailed logging level and recommended for troubleshooting; however, you can also set them to FINER, FINE, CONFIG, INFO, WARNING and SEVERE (least detailed) to achieve different levels of logging.

You can change the global logging level to FINEST (although this affects all logging) by amending the .level value as follows:

.level=FINEST

Alternatively, you can set logging levels for specific items to override the global level:

• To set logging to FINEST for JDBC databases only, you can add the following line:

org.forgerock.openidm.repo.jdbc.level = FINEST

• Where the logging entries are being generated by the Microsoft® SQL driver rather than IDM/OpenIDM, you need to add the following line instead:

com.microsoft.sqlserver.jdbc.level = FINEST

#### See Also

#### Related Training

#### Related Issue Tracker IDs

## How do I enable OpenICF logging for the .NET Connector Server in IDM/OpenIDM (All versions) to investigate server crashes?
The purpose of this article is to provide information on enabling OpenICF logging for the .NET Connector Server in IDM/OpenIDM. Enabling logging is useful for investigating .NET Connector Server crashes.

#### Enabling OpenICF logging

In the ConnectorServer.exe.Config file (located in the directory in which the Connector Server is installed), you should change the value of the add key property from false to true:

<add key="logging.proxy" value="true"/>

and change the initializeData type to Error (it is set to Information by default):

<filter type="System.Diagnostics.EventTypeFilter" initializeData="Error"/>

##### Note

There are other logging levels available, which can be set by changing the initializeData type. The available levels are Error, Warning, Information (default), Verbose and All. Verbose and All output very detailed logging; the level of logging has a direct effect on the performance of the .NET Connector Server, so care should be taken when changing this setting.

Your file should now look similar to this:

<configuration>
  <appSettings>
    <add key="logging.proxy" value="true"/>
  </appSettings>
  <system.diagnostics>
    <trace autoflush="true" indentsize="4">
      <listeners>
        <remove name="Default" />
        <add name="myListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\connectorserver.log" traceOutputOptions="DateTime">
          <filter type="System.Diagnostics.EventTypeFilter" initializeData="Error"/>
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

##### Note

You should also enable Microsoft® Windows® crash dumps as described here: Collecting User-Mode Dumps

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I create a HAR file for troubleshooting IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on creating a HAR file (HTTP ARchive) for troubleshooting IDM/OpenIDM.

#### Overview

A HAR file is output by browser developer tools and other browser extensions. It is essentially a recording of interactions between the browser and application in JSON format; it can be very useful for troubleshooting as it can help identify where the issue is occurring.

##### Note

The equivalent file in Internet Explorer® 9, 10 and 11 is an XML file based on the HTTP Archive format.

#### Creating a HAR file

You can create a HAR file as follows:

Chrome™ browser

1. Launch the Developer Tools in Chrome and select the Network tab.
2. Enter the URL where you have been experiencing issues and re-create the issue. Do not open a new browser tab as Chrome records details on the tab initially shown when the Developer Tools were launched.
3. Right-click within the Network window and select the Save as HAR with content option.

Firefox browser

1. Launch the Developer Tools in Firefox and select the Network option.
2. Select the Persist Logs option on the Network tab to ensure the entire request is saved, even if you are redirected.
3. Enter the URL where you have been experiencing issues and re-create the issue.
4. Right-click within the Network window and select the Save All As HAR option.

Microsoft® Edge

1. Launch the Developer Tools in Edge and select the Network tab.
2. Click the green play button to start recording.
3. Enter the URL where you have been experiencing issues and re-create the issue. Do not open a new browser tab as Edge records details on the tab initially shown when the Developer Tools were launched.
4. Click the Save button to save the HAR file.

#### Creating an XML file (Internet Explorer 9, 10 and 11)

You can create an XML file based on the HTTP Archive format as follows:

1. Launch the Developer Tools in Internet Explorer and select the Network tab.
2. Click the Start Capturing button.
3. Enter the URL where you have been experiencing issues and re-create the issue.
4. Click the Save button and choose to save the file as XML.
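Whichever browser produced it, a saved HAR file is plain JSON, so a quick scan of which requests it captured needs nothing more than grep. A minimal sketch with a tiny illustrative HAR inline (real files contain many more fields per entry):

```shell
# Tiny illustrative HAR extract (real HAR files are much larger)
cat > sample.har <<'EOF'
{ "log": { "entries": [
  { "request": { "method": "GET", "url": "http://idm.example.com:8080/openidm/info/ping" },
    "response": { "status": 401 } },
  { "request": { "method": "GET", "url": "http://idm.example.com:8080/openidm/info/version" },
    "response": { "status": 200 } }
] } }
EOF

# List the captured request URLs
urls=$(grep -o '"url": "[^"]*"' sample.har | cut -d'"' -f4)
echo "$urls"
```

Pairing the URL list with the response status fields is often enough to spot the failing request before opening the file in a HAR viewer.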
#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I enable Garbage Collector (GC) Logging for IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on enabling GC Logging for IDM/OpenIDM. It assumes that you already have a working IDM/OpenIDM server that is installed.

#### Enabling GC Logging

You should enable GC logging via the OPENIDM_OPTS environment variable. You can either add the options prior to starting IDM/OpenIDM as demonstrated below, or you can edit the startup.sh or startup.bat files to update the default OPENIDM_OPTS values.

##### Note

You should ensure there is enough disk space for the GC logs; if not, you should enable GC log rotation as well by including the following options when you enable GC logging: -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=n, -XX:GCLogFileSize=n.

On Unix® and Linux® systems:

Enter the following prior to starting IDM/OpenIDM to enable GC logging:

$ cd /path/to/idm/
$ export OPENIDM_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintGCCause -Xloggc:[filename]"
$ ./startup.sh

replacing [filename] with the path to the file that you would like to create to store the log file. The Xloggc property is optional; if omitted, the GC data is written to the openidm.log file (located in the /path/to/idm/logs directory). This can be useful when trying to correlate GC events with events occurring within the server instance.
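If you also enable GC log rotation as suggested in the note above, the exported options line might look like the following sketch (the file count, size and log path are illustrative values, not recommendations):

```shell
# GC logging with rotation: keep 5 files of up to 10MB each (illustrative values)
export OPENIDM_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintGCCause -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=10M -Xloggc:/path/to/idm/logs/gc.log"
echo "$OPENIDM_OPTS"
```

Note that setting OPENIDM_OPTS this way replaces the default heap options shown in startup.sh, so include any -Xms/-Xmx values you rely on.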

Once IDM/OpenIDM has started, there should be a GC log file located in the directory specified in the -Xloggc: option or GC data included in the openidm.log file. You can use the Universal GC Log Analyzer to analyze your GC log. This is a third-party website that we suggest can be used for analysis but is not supported by ForgeRock.

On Microsoft® Windows® systems:

Enter the following prior to starting IDM/OpenIDM to enable GC logging:

C:\> cd \path\to\idm
C:\path\to\idm> set OPENIDM_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintGCCause -Xloggc:[filename]
C:\path\to\idm> startup.bat


replacing [filename] with the path to the file that you would like to create to store the log file. The Xloggc property is optional; if omitted, the GC data is written to the openidm.log file (located in the \path\to\idm\logs directory). This can be useful when trying to correlate GC events with events occurring within the server instance.

Once IDM/OpenIDM has started, there should be a GC log file located in the directory specified in the -Xloggc: option or GC data included in the openidm.log file. You can use the Universal GC Log Analyzer to analyze your GC log. This is a third-party website that we suggest can be used for analysis but is not supported by ForgeRock.
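If you just need a quick look before reaching for an analyzer, pause times can be pulled straight out of the log with standard tools. A minimal sketch against a fabricated two-line extract in typical -XX:+PrintGCDetails form (the timings and sizes are invented):

```shell
# Fabricated GC log extract in typical -XX:+PrintGCDetails form
cat > gc.log <<'EOF'
2018-05-02T11:07:44.387+0000: 12.345: [GC (Allocation Failure) [PSYoungGen: 512K->128K(1024K)] 812K->428K(2048K), 0.0042345 secs]
2018-05-02T11:08:01.120+0000: 29.078: [Full GC (Ergonomics) [PSYoungGen: 128K->0K(1024K)] 428K->300K(2048K), 0.0312345 secs]
EOF

# Count Full GC events and list every pause duration
full_gcs=$(grep -c 'Full GC' gc.log)
grep -o '[0-9.]* secs' gc.log
echo "Full GCs: $full_gcs"
```

Frequent Full GC lines or steadily growing pause durations are the usual first signs worth investigating further.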

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I collect JVM data for troubleshooting IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on collecting JVM data for troubleshooting IDM/OpenIDM. JVM data (such as stack traces, heap histograms and heap dumps) can be very useful for troubleshooting IDM/OpenIDM issues such as poor reconciliation performance, Out of Memory errors and suspected memory leaks. This data must be collected before killing the affected process or restarting the server; otherwise the information relating to the affected process is lost forever, which may make it impossible to identify the root cause.

#### Collecting JVM data

##### Note

The following utilities are all JDK utilities and are not included with the JRE; you must have the JDK installed in your production environments in order to collect this data. Additionally, you must be the owner of the identified process to be able to collect this data.

To collect JVM data, you must first determine the process ID of the affected server process using the following command:

$ jps -v

You can then collect the following types of data:

#### Stack Traces

Stack traces allow us to see what is happening with JVM threads and are sometimes referred to as thread dumps. You should collect several different stack traces (minimum of three) at roughly one minute intervals to enable us to compare them. It is recommended that you also identify the threads associated with the process ID for troubleshooting high CPU as detailed in How do I find which thread is consuming CPU in a Java process in IDM/OpenIDM (All versions)?

You can use the following commands to collect stack traces:

• You can collect long listing stack traces using the jstack command:

$ jstack -F -l [pid] > [outputfile.txt]
replacing [pid] with the process ID of the affected server and [outputfile.txt] with the output file for the stack trace.
• You can use the kill -3 command (Unix® and Linux® systems only), which consistently produces better results:
$ kill -3 [pid]

replacing [pid] with the process ID of the affected server. The kill -3 command outputs the stack trace to the log file where the standard output is directed (typically the error log file). The kill -3 command does not stop the running process; it will continue to run after outputting the stack trace.

You can use the Java Thread Dump Analyzer to analyze your stack traces. This is a third-party website that we suggest can be used for analysis but is not supported by ForgeRock.

#### Heap Histograms

Heap histograms enable us to visualize the objects in the JVM heap - we are only interested in live objects. You can collect heap histograms using the jmap command:

$ jmap -histo:live [pid]

replacing [pid] with the process ID of the hung container instance.
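The histogram output is a simple columnar report (rank, instance count, bytes, class name), so the biggest consumers can be picked out with sort. A sketch against a fabricated three-line extract (the counts and the connector class name are invented for illustration):

```shell
# Fabricated extract of jmap -histo:live output
# (columns: num, instances, bytes, class name)
cat > histo.txt <<'EOF'
   1:        120000      9600000  java.lang.String
   2:         40000      5120000  [C
   3:          2000       480000  org.example.SampleSyncObject
EOF

# Largest consumer by bytes (field 3)
top=$(sort -k3 -rn histo.txt | head -1)
echo "$top"
```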

##### Caution

The jmap utility must come from the same version of JVM that the container is running.

#### Heap Dumps

Heap dumps allow us to see what is happening with memory at the point the heap dump is taken, which can help us to identify high memory usage issues or memory leaks. You should collect heap dumps as close in time to the stack traces as possible to enable us to compare the contents. You can collect heap dumps using the jmap command:

$ jmap -F -dump:file=[target file] [pid]

replacing [target file] with the output file for the heap dump and [pid] with the process ID of the affected server.

#### Current JVM settings

You can find the current JVM settings for the server using one of the following commands (the jinfo command is the preferred way):

• jinfo:

$ jinfo [pid]
replacing [pid] with the required process ID.
• jps:
$ jps -v

On Unix® and Linux® systems, you can also list the running IDM/OpenIDM JVM and its options using:

$ ps -ef | grep java
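Since jps -v prints one line per running JVM, the IDM process can be picked out by matching on an option unique to it, such as the logging configuration path shown at startup. A sketch against fabricated jps output (the PIDs and lines are invented):

```shell
# Fabricated jps -v output: one line per running JVM
cat > jps.out <<'EOF'
4242 Main -Xmx1024m -Djava.util.logging.config.file=/opt/server/openidm/conf/logging.properties
5150 Jps -Dapplication.home=/usr/lib/jvm/jdk
EOF

# The openidm path identifies the IDM process; field 1 is the pid
pid=$(grep 'openidm' jps.out | awk '{print $1}')
echo "IDM pid: $pid"
```

The resulting pid is what you would pass to jstack, jmap or jinfo in the commands above.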


#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I perform an anonymous health check on the IDM/OpenIDM server (All versions)?

The purpose of this article is to provide information on performing an anonymous health check on the IDM/OpenIDM server. Anonymous access may be required if you have an application or load balancer that needs to check the availability of the IDM/OpenIDM instance but you don't want to provide any user credentials.

#### Performing anonymous health checks

Basic health checks can be performed against the info/ping endpoint to check that IDM/OpenIDM is available.

To perform an anonymous health check, you must specifically include the following headers in your REST call to identify yourself as anonymous; otherwise, the call will fail:

X-OpenIDM-Password: anonymous
X-OpenIDM-Username: anonymous

The anonymous user (internal/role/openidm-reg role) has access to the info/ping endpoint by default.

Example

To perform an anonymous health check, you would use a REST call such as the following:

$ curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -X GET "http://idm.example.com:8080/openidm/info/ping"

Example response:

{
  "_id" : "",
  "state" : "ACTIVE_READY",
  "shortDesc" : "OpenIDM ready"
}

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

## How do I monitor LiveSync activity using REST in IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on monitoring LiveSync activity in IDM/OpenIDM using the REST API. You can use this information to help you troubleshoot failed synchronization jobs.

#### Monitoring LiveSync activity

The following example demonstrates returning details for the last two LiveSync schedules and then following up on a specific schedule that failed. You will need to adjust these queries to fit your environment, but they should help you to monitor and resolve your own LiveSync schedule issues.

##### Note

Using the syncToken endpoint is not a reliable indicator of the health of a job because a failed LiveSync will not update the syncToken.

Example

Return details of the last two schedule syncs using the following query, where you will need to update userId to match the LiveSync schedule:

$ curl -X GET -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" "http://localhost:8080/openidm/audit/access?_queryFilter=userId+sw+\"Scheduled\"&_fields=timestamp,transactionId,userId,response,request&_sortKeys=-timestamp&_pageSize=2"


Example response (where both LiveSync schedules failed):

{
  "result": [
    {
      "_id": "d33e369d-7b82-4f4f-9a70-30d196aacddd-24438",
      "timestamp": "2017-11-14T00:15:45.500Z",
      "transactionId": "d33e369d-7b82-4f4f-9a70-30d196aacddd-24420",
      "userId": "Scheduled d4528b10-cb4f-4b2f-8c13-8deff9677d1b-Fri Nov 10 15:24:30 CDT 2017",
      "response": {
        "status": "FAILED",
        "statusCode": null,
        "elapsedTime": "75475",
        "elapsedTimeUnits": "MILLISECONDS"
      },
      "request": {
        "protocol": "CREST",
        "detail": {}
      }
    },
    {
      "_id": "d33e369d-7b82-4f4f-9a70-30d196aacddd-24382",
      "timestamp": "2017-11-14T00:15:21.500Z",
      "transactionId": "d33e369d-7b82-4f4f-9a70-30d196aacddd-24379",
      "userId": "Scheduled d4528b10-cb4f-4b2f-8c13-8deff9677d1b-Fri Nov 10 15:19:00 CDT 2017",
      "response": {
        "status": "FAILED",
        "statusCode": null,
        "elapsedTime": "75517",
        "elapsedTimeUnits": "MILLISECONDS"
      },
      "request": {
        "protocol": "CREST",
        "detail": {}
      }
    }
  ],
  "resultCount": 2,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}



You can then find out more about a failed sync in the audit log using the details returned from the above call. For example, you could use one of the following calls with the corresponding transactionId, depending on what type of events you are interested in:

• Access events:
$ curl -X GET -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" "http://localhost:8080/openidm/audit/access/d33e369d-7b82-4f4f-9a70-30d196aacddd-24420"

This call will indicate the status of the job (either SUCCESS or FAILED) and report the latest timestamp.

• Activity events:

$ curl -X GET -H "X-OpenIDM-Username: openidm-admin" -H "X-OpenIDM-Password: openidm-admin" "http://localhost:8080/openidm/audit/activity?_queryFilter=userId+sw+\"Scheduled\"+and+status+eq+\"FAILED\"+and+transactionId+eq+\"d33e369d-7b82-4f4f-9a70-30d196aacddd-24420\"&_sortKeys=-timestamp"

This call will query the activity logs to see if any failures were reported there.
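When checking many schedules, the JSON response from the first audit query can be triaged with standard tools rather than read by eye. A minimal sketch over an abbreviated copy of the example response above:

```shell
# Abbreviated audit/access response, reduced from the example above
cat > audit.json <<'EOF'
{ "result": [
  { "transactionId": "d33e369d-7b82-4f4f-9a70-30d196aacddd-24420",
    "response": { "status": "FAILED" } },
  { "transactionId": "d33e369d-7b82-4f4f-9a70-30d196aacddd-24379",
    "response": { "status": "FAILED" } }
] }
EOF

# Count failed syncs and list the transaction IDs to follow up on
failed=$(grep -c '"status": "FAILED"' audit.json)
grep -o '"transactionId": "[^"]*"' audit.json | cut -d'"' -f4
echo "failed syncs: $failed"
```

Each transaction ID listed can then be fed into the follow-up audit queries shown above.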

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I monitor IDM 5 and OpenIDM 3.x, 4.x using SmartEvent?

#### Enabling SmartEvent

IDM/OpenIDM includes an event monitoring facility called SmartEvent. You can enable SmartEvent by using the openidm.smartevent.enabled system property. If you want to view a summary or statistics in the openidm log, you can enable openidm.smartevent.summarylogging.

To get started with SmartEvent, enable it via JAVA_OPTS during startup as follows:

$ env JAVA_OPTS="-Dopenidm.smartevent.enabled=true -Dopenidm.smartevent.summarylogging=true" ./startup.sh

The SmartEvent summary information is then output to the openidm.log.0 file, for example:

Jun 16, 2016 14:22:51 PM org.forgerock.openidm.smartevent.core.StatisticsHandler$1 run
INFO: Summary: {openidm/internal/repo/orientdb/raw/query/query-all-ids=Invocations: 1 total time: 18.448 ms mean: 18.448 ms, openidm/internal/repo/orientdb/raw/query/query-cluster-events=Invocations: 119 total time: 31.958 ms mean: 0.269 ms, openidm/internal/repo/orientdb/raw/query/query-cluster-failed-instances=Invocations: 119 total time: 54.032 ms mean: 0.454 ms, openidm/internal/router/repo/scheduler/read=Invocations: 26 total time: 98.177 ms mean: 3.776 ms}
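Each entry in the summary line carries an invocation count and timings; when the line grows long, the per-event mean times can be pulled out with grep. A quick sketch against a copy of (part of) the example line above:

```shell
# Summary line as logged by the SmartEvent statistics handler (shortened copy of the example)
line='INFO: Summary: {openidm/internal/repo/orientdb/raw/query/query-all-ids=Invocations: 1 total time: 18.448 ms mean: 18.448 ms, openidm/internal/router/repo/scheduler/read=Invocations: 26 total time: 98.177 ms mean: 3.776 ms}'

# Pull out every "mean: N ms" figure
means=$(echo "$line" | grep -o 'mean: [0-9.]* ms')
echo "$means"
```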

IDM/OpenIDM SmartEvent MBeans are also available locally by attaching a monitoring tool such as JConsole or VisualVM. These tools allow you to attach to a local process. If you wish to attach remotely, see the following section.

#### Accessing SmartEvent remotely using JMX

To access SmartEvent remotely using JMX, you will need to specify the following JAVA_OPTS when starting IDM/OpenIDM:

$ env JAVA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dopenidm.smartevent.enabled=true -Dopenidm.smartevent.summarylogging=false" ./startup.sh

Attach a monitoring tool to a remote process in the form <hostname>:<port>.

#### See Also

#### Related Issue Tracker IDs

## Performance Tuning

## FAQ: IDM/OpenIDM performance and tuning

The purpose of this FAQ is to provide answers to commonly asked questions regarding performance and tuning for IDM/OpenIDM.

#### Frequently asked questions

#### Q. Are there any recommendations for sizing IDM/OpenIDM servers?

A. No, the performance of IDM/OpenIDM depends entirely on your specific environment and the exact nature of scripted customizations. To establish appropriate sizing, you will need to perform adequate benchmark testing. See Release Notes › Fulfilling Memory Requirements for further information.

##### Note

Sizing and/or tuning recommendations are outside the scope of ForgeRock support; if you want more tailored advice, consider engaging Deployment Support Services.

#### Q. Is there any best practice advice for benchmark testing?

A. ForgeRock recommends the following:

• Maintain a staging environment that matches production where you can simulate the load you will have in production. This allows you to resolve any issues you identify in a non-production environment.
• Establish a simple benchmark prior to adding external resources; you can then incrementally add resources and processes to establish what impact each one has.

#### Q. What is the recommended Java Virtual Machine (JVM) heap size for IDM/OpenIDM?

A. There are no definitive rules for the JVM heap size required as it will vary across individual environments and applications, but you should refer to Best practice for JVM Tuning for best practice advice.
Additionally, ensure you configure JVM garbage collection appropriately, as GC times can increase with large heap sizes, causing significant impacts to application performance and CPU utilization. See How do I change the JVM heap size for IDM/OpenIDM (All versions)? for further information. You can monitor JVM memory usage using one of the health endpoints as detailed in Integrator's Guide › Memory Health Check.

##### Note

For a 32-bit JVM or a 32-bit operating system, the limit for the process size is 4GB, that is, 2^32; this cannot be exceeded regardless of the amount of heap space allocated.

#### Q. How can I troubleshoot performance issues?

A. If you have no benchmark tests for comparison and encounter performance issues, the recommendation is to reduce your system to the bare minimum and then incrementally add back resources and processes in order to identify which one is causing the bottleneck. The following tips should also help:

#### Q. How can I improve reconciliation performance?

A. Firstly, you should identify reconciliation performance issues per the advice in How do I identify reconciliation performance issues in IDM/OpenIDM (All versions)?; this article also offers tuning advice. In addition to this, you can:

#### Q. Are there any recommendations for sizing the database needed for the IDM/OpenIDM repository?

A. Database sizing for the IDM/OpenIDM repository depends on various factors such as the number of managed objects (users, groups, roles etc), relationships, links, audit logs, configurations, workflow, reconciliations etc. Since all these factors vary from one deployment to another, it is not possible to give specific recommendations. However, the following guidelines should help when estimating the size of database required:

Managed objects

You can calculate the size of your IDM/OpenIDM deployment by first calculating the size of your database for sample data (such as 20 users) and then multiplying that by the expected number of users.
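As a rough illustration of that extrapolation (the sample figures below are hypothetical, not measured values):

```python
def estimate_repo_size(sample_bytes, sample_users, expected_users):
    """Linearly extrapolate repository size from a small sample data set.

    sample_bytes: measured database size for the sample data.
    sample_users: number of managed users in the sample (e.g. 20).
    expected_users: number of users expected in production.
    """
    per_user_bytes = sample_bytes / sample_users
    return per_user_bytes * expected_users

# Hypothetical figures: if 20 sample users occupy 4 MB, then 100,000
# users would need roughly 20 GB (before audit logs and indexes).
print(estimate_repo_size(4 * 1024**2, 20, 100_000))
```

Linear extrapolation is only a starting point; relationships, links and audit data do not necessarily grow linearly with the number of users.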
The following tables typically grow as more managed objects are added:

• managedobjects - one entry per managed object.
• managedobjectproperties - N entries per managed object, where N is the number of indexed / searchable properties: Integrator's Guide › Creating and Modifying Managed Object Types.
• links - the total number of links is less than or equal to the number of managed objects multiplied by the number of unique mappings. For example, if you have 20 managed users and 3 mappings (systemLdapAccounts_managedUser, managedUser_systemLdapAccounts and systemADAccount_managedUser), the total number of links would be less than or equal to 40 (20 * 2). The systemLdapAccounts_managedUser and managedUser_systemLdapAccounts mappings are bidirectional syncs and may use the same links: Integrator's Guide › Mapping Source Objects to Target Objects.
• relationships, relationshipproperties - relationships are especially sensitive to growth when searchablebydefault is set to true (the various repo.jdbc.json files provided in the /path/to/idm/db directory have different defaults, so you may end up generating more of this data than you need or expect). Additionally, roles utilize the relationships table as of OpenIDM 4.
• Activiti - these tables can grow over time as running workflows persist data in these tables.

Most of the other tables such as configobjects, internaluser etc are static once you have a working IDM/OpenIDM configuration and should not have much impact on database sizing.

Audit logs

Audit logs can have a huge impact on database sizing and disk space. By default, IDM/OpenIDM stores audit logs in both CSV files and in the database. The size of audit logs depends on IDM/OpenIDM usage. There are different types of audit logs and their corresponding events: Integrator's Guide › Audit Event Topics.
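Returning to the links calculation above, the upper bound on the links table can be sketched as follows (an illustrative model, not product code; bidirectional mapping pairs that share links are counted once):

```python
def max_links(num_objects, num_mappings, shared_bidirectional_pairs=0):
    """Upper bound on the number of rows in the links table.

    Each unique mapping contributes at most one link per managed object;
    a bidirectional pair of mappings can share the same links, so each
    such pair counts as a single mapping.
    """
    unique_mappings = num_mappings - shared_bidirectional_pairs
    return num_objects * unique_mappings

# The example from the text: 20 managed users and 3 mappings, where
# systemLdapAccounts_managedUser / managedUser_systemLdapAccounts form
# one bidirectional pair sharing links, giving at most 20 * 2 = 40 links.
print(max_links(20, 3, shared_bidirectional_pairs=1))  # 40
```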
The following tables in particular grow with each reconciliation:

• recon/audit - this table reports the status for each user and grows by one entry for each source record. Additionally, a further entry is created for each target record that is not linked or correlated (CONFIRMED, FOUND). The size of a single reconciliation report depends on the data size of source and target; the overall data requirement depends on how many reports you keep. You can reduce the size of the recon/audit table for actions you're not interested in by setting the action to NOREPORT.
• recon/activity - this table reports all activity and grows by one entry for each modifying action regardless of the source of the action, for example, reconciliation, synchronization etc. The overall size depends on how many changes are processed and how long these records are kept.

The following best practices should be heeded:

#### Q. How can I manage the QueuedThreadPool in Jetty?

A. You can change the Jetty thread pool settings by updating the config.properties file (located in the /path/to/idm/conf directory) or by setting the OPENIDM_OPTS environment variable when you start IDM/OpenIDM. See Integrator's Guide › Adjusting Jetty Thread Settings for further information.

#### See Also

#### Related Training

N/A

## How do I identify reconciliation performance issues in IDM/OpenIDM (All versions)?

The purpose of this article is to provide guidance on isolating and diagnosing IDM/OpenIDM reconciliation performance issues. This article also includes tips on tuning reconciliation performance based on your findings.

#### Overview

Isolating the cause(s) of performance bottlenecks within a large IDM/OpenIDM topology comes down to a process of elimination. Identifying bottlenecks requires the system to be broken down into its individual components; each component must then be tested and tuned to achieve the necessary throughput.
For example, given the following basic topology, there are multiple components which might limit the throughput of the system:

Although far from exhaustive, the following are some of the possible bottlenecks which might be encountered within the above topology:

• Source LDAP Read ops/s
• Target LDAP Read/Write ops/s
• PostgreSQL Read/Write ops/s
• JVM Heap limitations
• Host Disk I/O limitations
• Host CPU limitations
• Undersized / incorrectly configured virtualization
• IDM/OpenIDM configuration issues
• IDM/OpenIDM triggers/custom code
• IDM/OpenIDM product defect

##### Note

One configuration change that can help with reconciliation performance is to increase the number of connector instances in the connection pool as detailed in How do I configure pooled connections for a connector in IDM/OpenIDM (All versions)?

#### Reconciliation performance

Measuring the performance of individual components within the topology can be done outside the scope of IDM/OpenIDM by using external benchmarking tools and processes, or via IDM/OpenIDM itself. Benchmarking components via IDM/OpenIDM consists of using targeted operations and configuration changes to isolate a single component within the system and measure its performance as it relates to reconciliation. You should consider the following components:

#### External source system

During reconciliation, IDM/OpenIDM queries the source system in order to obtain the complete list of source object IDs to be reconciled. It may also perform multiple queries to retrieve the complete source object during the reconciliation process or rely on a cache of the source objects returned by the configured sourceQuery. To begin to understand the performance of the source system, the following can be performed:

• LDAP searchrate command to measure raw read ops capability. This command is provided in the OpenDJ LDAP Toolkit.
• Measure execution time via targeted queries against the source system using IDM/OpenIDM REST APIs:
• Execute the generic ‘query-all-ids’ query against the source system.
• Execute any ‘sourceQuery’ queries which have been configured.
• Execute individual source object queries via Query Filter, specifying the individual source object attributes used within the configured ‘sourceQuery’ queries.

Based on the results of the above actions, tune the source system to resolve performance bottlenecks. Examples of things which might reduce read performance on a source LDAP system are:

• Lack of necessary indexes, resulting in un-indexed (full DB scan) searches.
• Insufficient JVM Heap.
• Insufficient CPU resources to handle the load.
• Insufficient Disk I/O performance (non-SSD drives) to handle the load.

#### External target system

During reconciliation, IDM/OpenIDM queries the target system in order to obtain the complete list of target object IDs to be reconciled. It may also perform multiple queries to retrieve the complete target object during the reconciliation process or rely on a cache of the target objects returned by the configured ‘targetQuery’. Target objects calculated based on the configured mappings are then written to the target system. To begin to understand the performance of the target system, the following can be performed:

• LDAP searchrate and modrate commands to measure raw read/write ops capability. These commands are provided in the OpenDJ LDAP Toolkit.
• Measure execution time via targeted queries against the target system using IDM/OpenIDM REST APIs:
• Execute the generic ‘query-all-ids’ query against the target system.
• Execute any ‘targetQuery’ queries which have been configured.
• Execute any ‘correlationQuery’ queries which have been configured.
• Execute individual target object queries via Query Filter, specifying the individual target object attributes used within both the configured ‘targetQuery’ and ‘correlationQuery’ queries.
Based on the results of the above actions, tune the target system to resolve performance bottlenecks. Examples of things which might reduce write performance on a target LDAP system are:

• Excessive indexes, resulting in increased overhead when writing entries. In particular, note that substring indexes are extremely costly and may greatly reduce performance.
• Insufficient JVM Heap.
• Insufficient CPU resources to handle the load.
• Insufficient Disk I/O performance (non-SSD drives) to handle the load.

#### JDBC repository

In order to isolate the JDBC repository from the performance of the reconciliation engine or the target system, the following can be performed:

Read/Write Test

Mapping: LDAP Source -> Managed Object

Situation Action
ABSENT CREATE (Default)

Read Test

Mapping: Managed Object -> LDAP Target

Situation Action
ALL ASYNC (Read Only)

#### Reconciliation engine

##### Note

Measuring the performance of the Reconciliation Engine should be done only after having resolved any performance issues observed during testing of the source and target systems.

The IDM/OpenIDM reconciliation engine is responsible for the bidirectional synchronization of objects between different data stores. In the case of our topology above, this is the synchronization of the objects between the source and target LDAP systems. During reconciliation, IDM/OpenIDM determines the state of the source and target objects (situation assessment), identifies the necessary actions to be performed and performs the actions against the source and target systems. Included within each phase of the reconciliation process are various JavaScript triggers which may or may not impact the performance of the reconciliation process depending on their efficiency.
In order to isolate the reconciliation engine from the performance of the IDM/OpenIDM repository or the target system, the following can be performed:

Read-Only Test

Mapping: LDAP Source -> LDAP Target

Situation Action
ALL ASYNC (Read Only)

##### Note

The following should be performed with a source system which has been populated with production-like data and an EMPTY target system.

1. Set the actions associated with all of the situation policies to ‘Read-Only’ via the IDM/OpenIDM Admin UI. This ensures that during the reconciliation process no actions are performed and that nothing is written to the IDM/OpenIDM repository or the target system.
2. Perform a full reconciliation between the source and target system. You should monitor both the CPU usage of the IDM/OpenIDM instance as well as JVM heap usage via verbose GC logging.

Read-Write Test

Mapping: LDAP Source -> LDAP Target

Situation Action
ABSENT CREATE (Default)

1. Set the action associated with the ABSENT situation policy to CREATE. Set all other situation policies to ‘Read Only’. This ensures that during the reconciliation process objects are created and links are established between the source and target objects. No other actions are performed.
2. Perform a full reconciliation between the source and target system. You should monitor both the CPU usage of the IDM/OpenIDM instance as well as JVM heap usage via verbose GC logging.

Link Test

Mapping: LDAP Source -> LDAP Target

Situation Action
FOUND LINK (Custom)

1. Ensure you have defined your desired ‘correlationQuery’ within your mapping.
2. Delete the contents of the ‘links’ table within the IDM/OpenIDM repository.
3. Set the action associated with the FOUND situation policy to LINK. Set all other situation policies to ‘Read Only’. This ensures that during the reconciliation process the ‘correlationQuery’ will be executed to correlate the existing target objects (created during the read-write test) to source objects.
For each correlated object a new link will be written to the IDM repository.

4. Perform a full reconciliation between the source and target system. You should monitor both the CPU usage of the IDM/OpenIDM instance as well as JVM heap usage via verbose GC logging.

Source Phase

Mapping: LDAP Source -> LDAP Target

Situation Action
CONFIRMED UPDATE (Default)

1. Set the ‘runTargetPhase’ property within the mapping to ‘false’. This ensures that ONLY the source phase of the reconciliation process is executed.
2. Perform a full reconciliation between the source and target system. You should monitor both the CPU usage of the IDM/OpenIDM instance as well as JVM heap usage via verbose GC logging.

Target Phase

Mapping: LDAP Source -> LDAP Target

Situation Action
CONFIRMED UPDATE (Default)

1. Delete the contents of the ‘links’ table within the IDM/OpenIDM repository.
2. Set the ‘sourceQuery’ within the mapping to point to a single specific source object. The goal is to reduce the scope of the source objects processed by the reconciliation engine down to a single object.
3. Set the ‘runTargetPhase’ property within the mapping to ‘true’. Ensure that either you do NOT define a ‘targetQuery’ or that your ‘targetQuery’ is set to your desired value within production. Do not set the ‘targetQuery’ to point to a single object as is done with the ‘sourceQuery’. This ensures that all of the target objects will be processed by the target phase of the reconciliation.
4. Perform a full reconciliation between the source and target system. You should monitor both the CPU usage of the IDM/OpenIDM instance as well as JVM heap usage via verbose GC logging.

#### See Also

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## How do I change the JVM heap size for IDM/OpenIDM (All versions)?

The purpose of this article is to provide information on changing the JVM heap size for IDM/OpenIDM.
#### Changing the JVM heap size

Changing the JVM heap size can improve performance and reduce the time it takes to run reconciliations. You should try different heap sizes to see what impact each has in order to determine the best heap size for your setup.

You can set the JVM heap size via the OPENIDM_OPTS environment variable. If OPENIDM_OPTS is undefined, the JVM maximum heap size defaults to 1GB. For example, to set the minimum and maximum heap sizes to 2GB, you would enter the following prior to starting IDM/OpenIDM:

On Unix® and Linux® systems:

$ cd /path/to/idm/
$ export OPENIDM_OPTS="-Xms2048m -Xmx2048m"
$ ./startup.sh

On Microsoft® Windows® systems:

C:\> cd \path\to\idm
C:\path\to\idm> set OPENIDM_OPTS=-Xms2048m -Xmx2048m
C:\path\to\idm> startup.bat

##### Note

You can also edit the startup.sh or startup.bat files to update the default OPENIDM_OPTS values.

##### Note

It is recommended that you set the minimum and maximum heap sizes to the same value for best performance. Otherwise, the JVM runs a full Garbage Collector (GC) cycle when increasing its heap, during which time it can pause ongoing operations for up to a few seconds. Generally, a smaller heap will increase the frequency at which the JVM GC executes but reduce the duration; similarly, a larger heap will reduce the frequency and increase the duration. When tuning the JVM heap, the goal is to strike a balance between frequency and duration so as to reduce the impact of the GC on the application.

#### Related Training

N/A

#### Related Issue Tracker IDs

N/A

## Best practice for JVM Tuning

The purpose of this article is to provide best practice advice on JVM tuning, including key symptoms, understanding heap space and GC logs. This best practice advice applies to AM/OpenAM, DS/OpenDJ, IDM/OpenIDM and IG/OpenIG. JVM tuning is not an exact science and will vary across individual environments and applications. Consistent load and stress testing of your environment is the only true way to gauge the effect of any changes made.

#### JVM tuning considerations

Before you begin JVM tuning, you should consider the following points:

• Costs - depending on your environments, it may be more cost effective to add more hardware to improve performance rather than spending the time to tune.
• Desired outcome - tuning for stability is more important in the long run than tuning for performance, although there is an overlap.
• Ongoing problems - tuning may only delay or mask a problem temporarily if there is an underlying issue you need to resolve.
• Memory leaks - these will always cause Garbage Collection (GC) issues regardless of tuning.

#### What is Java® heap and how do I know it needs tuning?

The Java heap is the amount of memory allocated to applications running in the JVM. Objects in heap memory can be shared between threads. Garbage collection (GC) refers to the automatic process of managing the runtime memory. The JVM is subject to GC pauses that vary in frequency and duration; if these pauses become more frequent and/or last longer, you will see issues (such as application pauses) which indicate that you may need to tune your Java heap. The heap is initially created when the JVM starts and is split into different spaces or generations, the key ones being Young (Eden or Nursery) and Tenured (Old):

• The Young generation is used for new objects. A GC process (ParNew) automatically runs when it becomes full which removes unused objects and moves all objects that have lived long enough to the Tenured generation, thereby freeing up space in the Young generation for more new objects. Typically objects in the Young generation are short lived and temporary. The Young generation is small but active and GC happens frequently but with limited impact on performance.
• The Tenured generation is used for the longer lived objects. Another GC process (CMS) runs when it becomes full to remove any unused objects. The Tenured generation is larger and less active than the Young generation but GC tends to have more impact on performance.

The following diagram illustrates the heap and the corresponding GC options that can be set (Java 7 and earlier):

Key symptoms that indicate you need to tune your Java heap are:

• High CPU usage. When GC threads consume a lot of CPU, you will see spikes in overall CPU usage.
• Application hangs. When your application appears to hang and stops responding, there are gaps in your log files or you experience general slowness when performing routine tasks, poorly tuned GC can be the cause.

The first thing to check if you experience these symptoms is your GC logs; these are easy to check and may help you identify your issue quickly.

GC Logging

There are various different GCs available; ForgeRock recommends the use of CMS (ConcurrentMarkSweep), a low-pause collector that tends to give the best performance as it runs concurrently with the application, meaning fewer pauses. You should specify this collector by setting the following JVM option:

-XX:+UseConcMarkSweepGC


You can enable GC logging as described in the following articles:

##### Note

Typically you can leave GC logging on without impacting your system; alternatively, you can enable it only when you are experiencing issues.

#### Understanding GC Logs

These example log files come from a CMS collector, where ParNew refers to the Young generation and CMS refers to the Tenured generation. These examples also show typical GC logs for an appropriately tuned system.

ParNew section

An example GC log output for the ParNew section looks like this:

24.245: [GC24.245: [ParNew: 545344K->39439K(613440K), 0.0424360 secs] 594671K->88767K(4101632K), 0.0426850 secs] [Times: user=0.17 sys=0.00, real=0.04 secs]
29.747: [GC29.747: [ParNew: 584783K->27488K(613440K), 0.0889130 secs] 634111K->104723K(4101632K), 0.0892180 secs] [Times: user=0.24 sys=0.01, real=0.09 secs]

Each line in this log file represents one GC and shows the following types of information for ParNew:

• Timestamp information - if you did not set the PrintGCTimeStamps and PrintGCDateStamps options, this just shows as the number of seconds from JVM startup:
24.245: [GC24.245:
29.747: [GC29.747:
This information is useful as it shows the frequency of GCs. You should be aiming for GCs to occur once every 1 to 5 seconds; if they are occurring more than once a second, you need to tune your JVM and it is likely that you will be seeing high CPU usage.
• Young generation information:
[ParNew: 545344K->39439K(613440K), 0.0424360 secs]
[ParNew: 584783K->27488K(613440K), 0.0889130 secs]
This information shows the initial size of the Young space before doing the GC, the size after doing the GC, the total size available and the time it took to do the GC. If the time taken to do the GC is large, that is, more than 0.1 seconds, then it is possible that your Young generation heap size is too big. The total size available will not grow if you set NewSize and MaxNewSize to the same value and this is what you want to see in your logs.
• Overall heap information:
594671K->88767K(4101632K), 0.0426850 secs]
634111K->104723K(4101632K), 0.0892180 secs]
This information shows initial size of the overall heap before doing the GC, the size after doing the GC and the total size available. With the second lot of figures (size after doing the GC), you can see how this incrementally grows up to the point when the GC is done and then reduces back down to a baseline figure. If you have a memory leak, you will see the baseline figure growing incrementally as it becomes increasingly full even after doing GC.
• System information:
[Times: user=0.17 sys=0.00, real=0.04 secs]
[Times: user=0.24 sys=0.01, real=0.09 secs]
This information shows how much time was spent in user space, how much time spent in kernel or system space and what was the real impact to the user. If the user time is high, it indicates there isn't enough CPU for the number of user threads. If the system time is high, it indicates your system is swapping memory to disk which means that there isn't enough physical memory for your heap size.
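When you have thousands of such lines, pulling the fields out programmatically makes it easier to spot GC frequency and pause-time trends. A minimal sketch (the regular expression targets the ParNew format shown above; other collectors log differently):

```python
import re

# Matches lines such as:
# 24.245: [GC24.245: [ParNew: 545344K->39439K(613440K), 0.0424360 secs] ...
PARNEW = re.compile(
    r"^(?P<ts>\d+\.\d+): .*?\[ParNew: (?P<before>\d+)K->(?P<after>\d+)K"
    r"\((?P<total>\d+)K\), (?P<pause>\d+\.\d+) secs\]"
)

def parse_parnew(line):
    """Return (timestamp, before_kb, after_kb, total_kb, pause_secs), or None."""
    m = PARNEW.search(line)
    if m is None:
        return None
    return (float(m["ts"]), int(m["before"]), int(m["after"]),
            int(m["total"]), float(m["pause"]))

line = ("24.245: [GC24.245: [ParNew: 545344K->39439K(613440K), 0.0424360 secs] "
        "594671K->88767K(4101632K), 0.0426850 secs]")
print(parse_parnew(line))
```

Comparing successive timestamps then gives the GC frequency (ideally one GC every 1 to 5 seconds, per the guidance above).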

CMS section

The CMS section of the log file is different to the ParNew section as it shows multiple lines for the same GC. An example GC log output for the CMS section looks like this:

40.146: [GC [1 CMS-initial-mark: 26386K(786432K)] 26404K(1048384K), 0.0074495 secs]
40.154: [CMS-concurrent-mark-start]
40.683: [CMS-concurrent-mark: 0.521/0.529 secs]
40.683: [CMS-concurrent-preclean-start]
40.701: [CMS-concurrent-preclean: 0.017/0.018 secs]
40.704: [GC40.704: [Rescan (parallel) , 0.1790103 secs]40.883: [weak refs processing, 0.0100966 secs] [1 CMS-remark: 26386K(786432K)] 52644K(1048384K), 0.1897792 secs]
40.894: [CMS-concurrent-sweep-start]
41.020: [CMS-concurrent-sweep: 0.126/0.126 secs]
41.020: [CMS-concurrent-reset-start]
41.147: [CMS-concurrent-reset: 0.127/0.127 secs]

This log shows the different phases the GC goes through. All the ones marked concurrent are happening while your application is running and therefore have little impact. The ones that don't say concurrent are stopping your application from running temporarily and therefore can have an impact. The CMS-initial-mark and CMS-remark show heap sizing details.
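That distinction can be expressed as a simple check when scanning CMS logs (a sketch, keyed off the phase names shown above):

```python
def is_stop_the_world(cms_line):
    """CMS phases whose names lack 'concurrent' pause the application."""
    return "concurrent" not in cms_line

lines = [
    "40.146: [GC [1 CMS-initial-mark: 26386K(786432K)] 26404K(1048384K), 0.0074495 secs]",
    "40.683: [CMS-concurrent-mark: 0.521/0.529 secs]",
    "40.704: [GC40.704: [Rescan (parallel) , 0.1790103 secs] [1 CMS-remark: ...]",
]
print([is_stop_the_world(l) for l in lines])  # [True, False, True]
```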

##### Note

Typically, you will see these sections intermingled, for example, some ParNews then a CMS, some more ParNews, another CMS and so on as the GC processes automatically occur to manage the heap.

#### Common issues shown in the GC logs and solutions

This section identifies common issues you might find in your GC logs and how you can resolve them. You can also use the Universal GC Log Analyzer to analyze your GC log. This is a third-party website that we suggest can be used for analysis but is not supported by ForgeRock.

Young generation - too small

The following log snippet shows an issue where the Young generation heap size is too small:

1.813: [GC1.813: [ParNew: 1152K->128K(1152K), 0.0008100 secs] 16620K->15756K(26936K), 0.0008350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1.816: [GC1.816: [ParNew: 1152K->128K(1152K), 0.0006430 secs] 16780K->15913K(26936K), 0.0006640 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1.819: [GC1.819: [ParNew: 1152K->128K(1152K), 0.0005370 secs] 16937K->16038K(26936K), 0.0005570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]

As you can see, there are multiple GCs occurring within a second. This means you need to increase the size of the Young generation (NewSize and MaxNewSize) and may also need to increase the total heap size to compensate for this change (Xms and Xmx). Alternatively, you could change the NewRatio option (this is set to 2 by default, which means the Tenured generation heap is twice as big as the Young generation heap, or that the Young generation heap is 1/3 of the total heap size).
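The NewRatio arithmetic above can be sketched as follows (a simplified model that ignores survivor-space subdivision within the Young generation):

```python
def generation_sizes(total_heap_mb, new_ratio=2):
    """Split the total heap into Young and Tenured using -XX:NewRatio.

    NewRatio is the ratio of Tenured to Young, so with the default of 2
    the Young generation receives 1/(NewRatio + 1) of the total heap.
    """
    young = total_heap_mb // (new_ratio + 1)
    tenured = total_heap_mb - young
    return young, tenured

# With a 3072 MB heap and the default NewRatio of 2:
print(generation_sizes(3072))  # (1024, 2048)
```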

Young generation - too large / Tenured generation - too small

The following log snippet shows an issue where the Young generation heap size is too large and consequently the Tenured generation heap size is too small:

276.716: [GC (CMS Initial Mark) [1 CMS-initial-mark: 104176K(135168K)] 2758985K(6741248K), 0.8762860 secs] [Times: user=0.88 sys=0.00, real=0.88 secs]
277.592: [CMS-concurrent-mark-start]
277.657: [CMS-concurrent-mark: 0.063/0.065 secs] [Times: user=0.12 sys=0.00, real=0.06 secs]
277.657: [CMS-concurrent-preclean-start]
277.658: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
277.658: [CMS-concurrent-abortable-preclean-start]
277.658: [CMS-concurrent-abortable-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
277.658: [GC (CMS Final Remark)[YG occupancy: 2657419 K (6606080 K)]277.658: [Rescan (parallel) , 0.9815460 secs]278.639: [weak refs processing, 0.0000320 secs]278.640: [scrub string table, 0.0011700 secs] [1 CMS-remark: 104176K(135168K)] 2761595K(6741248K), 0.9828250 secs] [Times: user=7.18 sys=0.09, real=0.99 secs]
278.641: [CMS-concurrent-sweep-start]
278.668: [CMS-concurrent-sweep: 0.026/0.027 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
278.668: [CMS-concurrent-reset-start]
278.668: [CMS-concurrent-reset: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
280.673: [GC (CMS Initial Mark) [1 CMS-initial-mark: 104079K(135168K)] 2774091K(6741248K), 0.9033730 secs] [Times: user=0.90 sys=0.00, real=0.90 secs]
281.577: [CMS-concurrent-mark-start]
281.640: [CMS-concurrent-mark: 0.063/0.063 secs] [Times: user=0.13 sys=0.00, real=0.07 secs]
281.640: [CMS-concurrent-preclean-start]
281.641: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
281.641: [CMS-concurrent-abortable-preclean-start]
281.641: [CMS-concurrent-abortable-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
281.641: [GC (CMS Final Remark)[YG occupancy: 2670011 K (6606080 K)]281.641: [Rescan (parallel) , 0.9914290 secs]282.633: [weak refs processing, 0.0000110 secs]282.633: [scrub string table, 0.0008100 secs] [1 CMS-remark: 104079K(135168K)] 2774091K(6741248K), 0.9923100 secs] [Times: user=7.14 sys=0.11, real=0.99 secs]
282.634: [CMS-concurrent-sweep-start]
282.659: [CMS-concurrent-sweep: 0.024/0.025 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
282.659: [CMS-concurrent-reset-start]
282.659: [CMS-concurrent-reset: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 

As you can see, the CMS collections are occurring too frequently, without any ParNew collections in between. This is a result of the Young generation heap size being too large so that each time a ParNew collection happens, the Tenured heap size instantly becomes full and a CMS collection runs. This could happen if you increased your Young generation heap size without increasing your total heap size. This log snippet could also happen if you have memory leaks.

CMS failure

The following log snippet shows a CMS failure (concurrent mode failure):

(concurrent mode failure): 1551948K->1548050K(1572864K), 15.1154540 secs] 1986510K->1548050K(2044736K), [CMS Perm : 78597K->78588K(524288K)], 15.4406700 secs] [Times: user=15.52 sys=0.00, real=15.44 secs]
703240.983: [GC [1 CMS-initial-mark: 1548050K(1572864K)] 1561545K(2044736K), 0.0059690 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]


This error means that the CMS collector cannot keep up with the amount of garbage being generated (the Tenured generation heap space is about 90% full) and GCs are taking longer, meaning your application stops for longer periods of time. This example shows the application stopping for over 15 seconds, which will have a significant impact on performance.

You can add the following GC options to try to avoid the CMS failure:

• -XX:CMSInitiatingOccupancyFraction=<percentage-of-tenured>
• -XX:+UseCMSInitiatingOccupancyOnly

You should set -XX:CMSInitiatingOccupancyFraction to 70 or 80% (instead of the default of ~90%), which may enable the CMS collector to keep up with the amount of garbage being generated.

Full GC

The following log snippet shows a Full GC:

25.466: [Full GC )25.466: [CMS: 68756K->75361K(1987392K), 0.2548820 secs] 96571K->75361K(2064064K), [CMS Perm : 61069K->60722K(61376K)], 0.2551850 secs] [Times: user=0.25 sys=0.01, real=0.25 secs]
26.185: [Full GC (System.gc())26.185: [CMS: 75361K->73283K(1987392K), 0.2398450 secs] 76136K->73283K(2064064K), [CMS Perm : 60723K->60723K(101204K)], 0.2400120 secs] [Times: user=0.24 sys=0.00, real=0.24 secs]
27.293: [Full GC (System.gc())27.293: [CMS: 73283K->73291K(1987392K), 0.2111960 secs] 74096K->73291K(2064064K), [CMS Perm : 60723K->60723K(101396K)], 0.2112730 secs] [Times: user=0.21 sys=0.00, real=0.22 secs]
68.081: [Full GC (Heap Inspection Initiated GC)68.081: [CMS: 73291K->73187K(1987392K), 0.2172670 secs] 77634K->73187K(2064064K), [CMS Perm : 60742K->60742K(101396K)], 0.2174520 secs] [Times: user=0.22 sys=0.00, real=0.21 secs]

These log snippets refer to Full GCs occurring, meaning that the entire heap is being cleared (Young and Tenured); this often happens due to system processes such as RMI which performs a Full GC every hour on the hour. The PrintGCCause option is very useful to help you find out why the GC is happening (the last three lines in the log above show the additional information output by this option, making it clear which ones are system related). You can also set the DisableExplicitGC option to prevent system related Full GCs from happening.

Permanent generation

##### Note

The following information does not apply to Java 8 since the Permanent generation space has been removed.

The following log snippet shows a Full GC as a result of the Permanent generation space growing (the Permanent generation space contains metadata and other data that should be permanently stored and does not typically grow):

1.241: [Full GC (Ergonomics) [PSYoungGen: 6125K->0K(45056K)] [ParOldGen: 17659K->20350K(20480K)] 23785K->20350K(65536K) [PSPermGen: 13550K->13546K(27136K)], 0.0531680 secs] [Times: user=0.25 sys=0.00, real=0.05 secs]
1.513: [Full GC (Ergonomics) [PSYoungGen: 38912K->2783K(45056K)] [ParOldGen: 20350K->20424K(20480K)] 59262K->23208K(65536K) [PSPermGen: 15188K->15188K(31744K)], 0.0992090 secs] [Times: user=0.65 sys=0.02, real=0.10 secs]

These log snippets show the Permanent generation space growing (the figure in parentheses after PSPermGen shows its capacity increasing from 27136K to 31744K). You can increase the MaxPermSize option to resolve this issue.
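For example (a sketch for Java 7 and earlier only, since Java 8 removed the Permanent generation; the 256m value is an illustrative placeholder):

```shell
# Raise the Permanent generation ceiling to avoid perm-gen-driven Full GCs.
export OPENIDM_OPTS="$OPENIDM_OPTS -XX:MaxPermSize=256m"
```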

#### Best practice advice on tuning

Initially, you should configure your JVM as per the recommended settings for your product (if available). You can then tune your JVM as and when required.

You should always follow this basic process when tuning:

1. Establish a baseline using a valid load testing process. This step is essential as it is the only way you will know if your tuning is having the desired effect.
2. Change one thing at a time and document each change. This allows you to determine the impact of each change and revert if necessary.
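To support this process, it helps to capture GC logs for every test run so each change can be compared against the baseline. A possible set of options (these flags apply to Java 7 HotSpot JVMs; the log path is an example):

```shell
# Log GC activity with timestamps and causes for later comparison.
export OPENIDM_OPTS="$OPENIDM_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -XX:+PrintGCCause -Xloggc:/var/log/openidm-gc.log"
```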

##### Heap tuning guidelines

Although we cannot give recommended heap settings because they vary considerably between environments and applications, you should follow these guidelines when tuning:

• You should always use the latest supported software versions.
• You should use the CMS GC as recommended by ForgeRock.
• Your total heap size must not exceed the amount of physical RAM available.
• You must leave some physical RAM for other applications to run on your machine.
• You should set Xms and Xmx to the same value for best performance. These options determine your total heap size.
• You should set NewSize and MaxNewSize to the same value. These options determine how the heap is allocated between generations.
• You should set the DisableExplicitGC option to prevent system related full GCs from running.
• You must not add lots of options at once or add options you don't understand.
• You must not copy and paste options from other applications.
• You should use GC logs to help identify issues with your tuning.
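As a quick way to inspect GC logs of the format shown earlier, you can extract the pause times with standard shell tools (the sample lines and file name below are illustrative; in practice point the pipeline at your real log):

```shell
# Write a couple of sample GC log lines for demonstration.
printf '%s\n' \
  '[Times: user=0.25 sys=0.01, real=0.25 secs]' \
  '[Times: user=0.24 sys=0.00, real=0.10 secs]' > gc.log

# List the longest 'real' pause times recorded in the log.
grep -o 'real=[0-9.]*' gc.log | cut -d= -f2 | sort -rn | head -5
```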

Getting your total heap size and its allocation between generations right is key; once you have these set correctly, you likely won't need to do any more tuning.

#### Summary

In summary, the most important GC knobs to get right are:

1. The collector, which should be CMS.
2. The total heap size, where Xmx = Xms.
3. The split between generations, using NewSize or NewRatio, where NewSize = MaxNewSize.
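Putting these three points together, a hypothetical set of startup options might look like this (all sizes are placeholders to illustrate the shape of the settings, not recommendations):

```shell
# 1. CMS collector  2. Fixed total heap (Xms = Xmx)
# 3. Fixed young generation (NewSize = MaxNewSize)
export OPENIDM_OPTS="-Xms2048m -Xmx2048m \
  -XX:NewSize=512m -XX:MaxNewSize=512m \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+DisableExplicitGC"
./startup.sh
```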
