Guide to configuring and integrating OpenIDM into identity management solutions. OpenIDM identity management software offers flexible, open source services for automating management of the identity life cycle.

Preface

In this guide you will learn how to integrate OpenIDM as part of a complete identity management solution.

1. Who Should Use This Guide

This guide is written for systems integrators building identity management solutions based on OpenIDM services. This guide describes OpenIDM, and shows you how to set up OpenIDM as part of your identity management solution.

You do not need to be an OpenIDM wizard to learn something from this guide, though a background in identity management and building identity management solutions can help.

2. Formatting Conventions

Most examples in the documentation are created in GNU/Linux or Mac OS X operating environments. If distinctions are necessary between operating environments, examples are labeled with the operating environment name in parentheses. To avoid repetition, file system directory names are often given only in UNIX format, as in /path/to/server, even if the text applies to C:\path\to\server as well.

Absolute path names usually begin with the placeholder /path/to/. This path might translate to /opt/, C:\Program Files\, or somewhere else on your system.

Command-line and terminal sessions are formatted as follows:

$ echo $JAVA_HOME
/path/to/jdk

Command output is sometimes formatted for narrower, more readable output even though formatting parameters are not shown in the command.

Program listings are formatted as follows:

class Test {
    public static void main(String[] args) {
        System.out.println("This is a program listing.");
    }
}

3. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

4. Using the ForgeRock.org Site

The ForgeRock.org site has links to source code for ForgeRock open source software, as well as links to the ForgeRock forums and technical blogs.

If you are a ForgeRock customer, raise a support ticket instead of using the forums. ForgeRock support professionals will get in touch to help you.

Chapter 1. Architectural Overview

This chapter introduces the OpenIDM architecture, and describes the modules and services that make up the OpenIDM product.

In this chapter you will learn:

  • How OpenIDM uses the OSGi framework as a basis for its modular architecture

  • How the infrastructure modules provide the features required for OpenIDM's core services

  • What those core services are and how they fit in to the overall architecture

  • How OpenIDM provides access to the resources it manages

1.1. OpenIDM Modular Framework

OpenIDM implements infrastructure modules that run in an OSGi framework. It exposes core services through RESTful APIs to client applications.

The following figure provides an overview of the OpenIDM architecture, which is covered in more detail in subsequent sections of this chapter.

[Figure: The OpenIDM Modular Architecture]

The OpenIDM framework is based on OSGi:

OSGi

OSGi is a module system and service platform for the Java programming language that implements a complete and dynamic component model. For a good introduction to OSGi, see the OSGi site. OpenIDM currently runs in Apache Felix, an implementation of the OSGi Framework and Service Platform.

Servlet

The Servlet layer provides RESTful HTTP access to the managed objects and services. OpenIDM embeds the Jetty Servlet Container, which can be configured for either HTTP or HTTPS access.

1.2. Infrastructure Modules

OpenIDM infrastructure modules provide the underlying features needed for core services:

BPMN 2.0 Workflow Engine

OpenIDM provides an embedded workflow and business process engine based on Activiti and the Business Process Model and Notation (BPMN) 2.0 standard.

For more information, see "Integrating Business Processes and Workflows".

Task Scanner

OpenIDM provides a task-scanning mechanism that performs a batch scan for a specified property in OpenIDM data, on a scheduled interval. The task scanner then executes a task when the value of that property matches a specified value.

For more information, see "Scanning Data to Trigger Tasks".

Scheduler

The scheduler provides a cron-like scheduling component implemented using the Quartz library. Use the scheduler, for example, to enable regular synchronizations and reconciliations.
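
For example, a schedule is defined in a JSON configuration file in your project's conf/ directory, named schedule-schedule-name.json. The following sketch (the mapping name here is hypothetical) would run a reconciliation every 30 minutes:

{
    "enabled" : true,
    "type" : "cron",
    "schedule" : "0 0/30 * * * ?",
    "invokeService" : "sync",
    "invokeContext" : {
        "action" : "reconcile",
        "mapping" : "systemLdapAccounts_managedUser"
    }
}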

For more information, see "Scheduling Tasks and Events".

Script Engine

The script engine is a pluggable module that provides the triggers and plugin points for OpenIDM. OpenIDM currently supports JavaScript and Groovy.

Policy Service

OpenIDM provides an extensible policy service that applies validation requirements to objects and properties, when they are created or updated.

For more information, see "Using Policies to Validate Data".

Audit Logging

Auditing logs all relevant system activity to the configured log stores. This includes the data from reconciliation as a basis for reporting, as well as detailed activity logs to capture operations on the internal (managed) and external (system) objects.

For more information, see "Using Audit Logs".

Repository

The repository provides a common abstraction for a pluggable persistence layer. OpenIDM 4 supports reconciliation and synchronization with several major external data stores in production, including relational databases, LDAP servers, and even flat CSV and XML files.

The repository API uses a JSON-based object model with RESTful principles consistent with the other OpenIDM services. To facilitate testing, OpenIDM includes an embedded instance of OrientDB, a NoSQL database. For production, incorporate a supported internal repository, as described in "Installing a Repository For Production" in the Installation Guide.

1.3. Core Services

The core services are the heart of the OpenIDM resource-oriented unified object model and architecture:

Object Model

Artifacts handled by OpenIDM are Java object representations of the JavaScript object model as defined by JSON. The object model supports interoperability and potential integration with many applications, services, and programming languages.

OpenIDM can serialize and deserialize these structures to and from JSON as required. OpenIDM also exposes a set of triggers and functions that system administrators can define, in either JavaScript or Groovy, which can natively read and modify these JSON-based object model structures.

Managed Objects

A managed object is an object that represents the identity-related data managed by OpenIDM. Managed objects are configurable, JSON-based data structures that OpenIDM stores in its pluggable repository. The default configuration of a managed object is that of a user, but you can define any kind of managed object, for example, groups or roles.

You can access managed objects over the REST interface with a query similar to the following:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/..."
System Objects

System objects are pluggable representations of objects on external systems. For example, a user entry that is stored in an external LDAP directory is represented as a system object in OpenIDM.

System objects follow the same RESTful resource-based design principles as managed objects. They can be accessed over the REST interface with a query similar to the following:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/system/..."

A default implementation, based on the OpenICF framework, allows any connector object to be represented as a system object.
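
For example, if you had configured an LDAP connector (the connector name ldap and the object type account here are illustrative), a query similar to the following would return the IDs of all accounts on that system:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/system/ldap/account?_queryId=query-all-ids"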

Mappings

Mappings define policies between source and target objects and their attributes during synchronization and reconciliation. Mappings can also define triggers for validation, customization, filtering, and transformation of source and target objects.

For more information, see "Synchronizing Data Between Resources".

Synchronization and Reconciliation

Reconciliation enables on-demand and scheduled resource comparisons between the OpenIDM managed object repository and source or target systems. Comparisons can result in different actions, depending on the mappings defined between the systems.

Synchronization enables creating, updating, and deleting resources from a source to a target system, either on demand or according to a schedule.

For more information, see "Synchronizing Data Between Resources".
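
You can also launch a reconciliation run on demand over the REST interface. The following sketch assumes a mapping named systemLdapAccounts_managedUser defined in sync.json:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser"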

1.4. Secure Commons REST Commands

Representational State Transfer (REST) is a software architecture style for exposing resources, using the technologies and protocols of the World Wide Web. For more information on the ForgeRock REST API, see "REST API Reference".

REST interfaces are commonly tested with a curl command, and many such commands are used in this document. They work with the standard ports associated with Java EE communications: 8080 (HTTP) and 8443 (HTTPS).

To run curl over the secure port, 8443, you must include either the --insecure option, or follow the instructions shown in "Restrict REST Access to the HTTPS Port". You can use those instructions with the self-signed certificate generated when OpenIDM starts, or with a *.crt file provided by a certificate authority.

In many examples in this guide, curl commands to the secure port are shown with a --cacert self-signed.crt option. Instructions for creating that self-signed.crt file are shown in "Restrict REST Access to the HTTPS Port".
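
For example, before you have generated the self-signed.crt file, you can test against the secure port with the --insecure option. Use this only in test environments, because it disables certificate verification:

$ curl \
 --insecure \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/info/ping"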

1.5. Access Layer

The access layer provides the user interfaces and public APIs for accessing and managing the OpenIDM repository and its functions:

RESTful Interfaces

OpenIDM provides REST APIs for CRUD operations, for invoking synchronization and reconciliation, and to access several other services.

For more information, see "REST API Reference".

User Interfaces

User interfaces provide password management, registration, self-service, and workflow services.

Chapter 2. Starting and Stopping OpenIDM

This chapter covers the scripts provided for starting and stopping OpenIDM, and describes how to verify the health of a system, that is, whether all requirements are met for a successful system startup.

2.1. To Start and Stop OpenIDM

By default you start and stop OpenIDM in interactive mode.

To start OpenIDM interactively, open a terminal or command window, change to the openidm directory, and run the startup script:

  • startup.sh (UNIX)

  • startup.bat (Windows)

The startup script starts OpenIDM, and opens an OSGi console with a -> prompt where you can issue console commands.

To stop OpenIDM interactively in the OSGi console, run the shutdown command:

-> shutdown

You can also start OpenIDM as a background process on UNIX and Linux. Follow these steps before starting OpenIDM for the first time.

  1. If you have already started OpenIDM, shut down OpenIDM and remove the Felix cache files under openidm/felix-cache/:

    -> shutdown
    ...
    $ rm -rf felix-cache/*
  2. Start OpenIDM in the background. The nohup command keeps OpenIDM running after you log out, 2>&1 redirects both standard output and standard error to the noted console.out file, and the trailing & runs the process in the background:

    $ nohup ./startup.sh > logs/console.out 2>&1&
    [1] 2343
    

To stop OpenIDM running as a background process, use the shutdown.sh script:

$ ./shutdown.sh
./shutdown.sh
Stopping OpenIDM (2343)

Incidentally, the process identifier (PID) shown during startup should match the PID shown during shutdown.

Note

Although installations on OS X systems are not supported in production, you might want to run OpenIDM on OS X in a demo or test environment. To run OpenIDM in the background on an OS X system, take the following additional steps:

  • Remove the org.apache.felix.shell.tui-*.jar bundle from the openidm/bundle directory.

  • Disable ConsoleHandler logging, as described in "Disabling Logs".

2.2. Specifying the OpenIDM Startup Configuration

By default, OpenIDM starts with the configuration, script, and binary files in the openidm/conf, openidm/script, and openidm/bin directories. You can launch OpenIDM with a different set of configuration, script, and binary files for test purposes, to manage different OpenIDM projects, or to run one of the included samples.

The startup.sh script enables you to specify the following elements of a running OpenIDM instance:

  • --project-location or -p /path/to/project/directory

    The project location specifies the directory with OpenIDM configuration and script files.

    All configuration objects and any artifacts that are not in the bundled defaults (such as custom scripts) must be included in the project location. These objects include all files otherwise included in the openidm/conf and openidm/script directories.

    For example, the following command starts OpenIDM with the configuration of Sample 1, with a project location of /path/to/openidm/samples/sample1:

    $ ./startup.sh -p /path/to/openidm/samples/sample1

    If you do not provide an absolute path, the project location path is relative to the system property, user.dir. OpenIDM then sets launcher.project.location to that relative directory path. Alternatively, if you start OpenIDM without the -p option, OpenIDM sets launcher.project.location to /path/to/openidm/conf.

    Note

    When we refer to "your project" in ForgeRock's OpenIDM documentation, we're referring to the value of launcher.project.location.

  • --working-location or -w /path/to/working/directory

    The working location specifies the directory to which OpenIDM writes its database cache, audit logs, and Felix cache. The working location includes everything in the default db/, audit/, and felix-cache/ subdirectories.

    The following command specifies that OpenIDM writes its database cache and audit data to /Users/admin/openidm/storage:

    $ ./startup.sh -w /Users/admin/openidm/storage

    If you do not provide an absolute path, the path is relative to the system property, user.dir. If you do not specify a working location, OpenIDM writes this data to the openidm/db, openidm/felix-cache and openidm/audit directories.

    Note that this property does not affect the location of the OpenIDM system logs. To change the location of the OpenIDM logs, edit the conf/logging.properties file.

    You can also change the location of the Felix cache by editing the conf/config.properties file, or by starting OpenIDM with the -s option, described later in this section.

  • --config or -c /path/to/config/file

    A customizable startup configuration file (named launcher.json) enables you to specify how the OSGi Framework is started.

    Unless you are working with a highly customized deployment, you should not modify the default framework configuration. This option is therefore described in more detail in "Advanced Configuration".

  • --storage or -s /path/to/storage/directory

    Specifies the OSGi storage location of the cached configuration files.

    You can use this option to redirect output if you are installing OpenIDM on a read-only filesystem volume. For more information, see "Installing OpenIDM on a Read-Only Volume" in the Installation Guide. This option is also useful when you are testing different configurations. Sometimes when you start OpenIDM with two different sample configurations, one after the other, the cached configurations are merged and cause problems. Specifying a storage location creates a separate felix-cache directory in that location, and the cached configuration files remain completely separate.
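
    For example, the following command (the paths here are illustrative) starts OpenIDM with the Sample 2 configuration and keeps its Felix cache in a separate storage location:

    $ ./startup.sh -p samples/sample2 -s /path/to/storage/sample2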

By default, properties files are loaded in the following order, and property values are resolved in the reverse order:

  1. system.properties

  2. config.properties

  3. boot.properties

If both system and boot properties define the same attribute, the property substitution process locates the attribute in boot.properties and does not attempt to locate the property in system.properties.
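
For example, assuming that both files define the standard openidm.port.http property, the value in boot.properties is the one that OpenIDM uses:

# In conf/system.properties
openidm.port.http=8090

# In conf/boot/boot.properties (this value wins)
openidm.port.http=8080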

You can use variable substitution in any .json configuration file with the install, working, and project locations described previously. You can substitute the following properties:

install.location
install.url
working.location
working.url
project.location
project.url

Property substitution takes the following syntax:

&{launcher.property}

For example, to specify the location of the OrientDB database, you can set the dbUrl property in repo.orientdb.json as follows:

"dbUrl" : "local:&{launcher.working.location}/db/openidm",
     

The database location is then relative to a working location defined in the startup configuration.

You can find more examples of property substitution in many other files in your project's conf/ subdirectory.

Note that property substitution does not work for connector reference properties. So, for example, the following configuration would not be valid:

"connectorRef" : {
    "connectorName" : "&{connectorName}",
    "bundleName" : "org.forgerock.openicf.connectors.ldap-connector",
    "bundleVersion" : "&{LDAP.BundleVersion}"
    ...
  

The "connectorName" must be the precise string from the connector configuration. If you need to specify multiple connector version numbers, use a range of versions, for example:

"connectorRef" : {
    "connectorName" : "org.identityconnectors.ldap.LdapConnector",
    "bundleName" : "org.forgerock.openicf.connectors.ldap-connector",
    "bundleVersion" : "[1.4.0.0,2.0.0.0)",
    ...
  

2.3. Monitoring the Basic Health of an OpenIDM System

Due to the highly modular, configurable nature of OpenIDM, it is often difficult to assess whether a system has started up successfully, or whether the system is ready and stable after dynamic configuration changes have been made.

OpenIDM includes a health check service, with options to monitor the status of internal resources.

To monitor the status of external resources such as LDAP servers and external databases, use the commands described in "Checking the Status of External Systems Over REST".

2.3.1. Basic Health Checks

The health check service reports on the state of the OpenIDM system and outputs this state to the OSGi console and to the log files. The system can be in one of the following states:

  • STARTING - OpenIDM is starting up

  • ACTIVE_READY - all of the specified requirements have been met to consider the OpenIDM system ready

  • ACTIVE_NOT_READY - one or more of the specified requirements have not been met and the OpenIDM system is not considered ready

  • STOPPING - OpenIDM is shutting down

You can verify the current state of an OpenIDM system with the following REST call:

 $ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/info/ping"

{
  "_id" : "",
  "state" : "ACTIVE_READY",
  "shortDesc" : "OpenIDM ready"
}

The information is provided by the following script: openidm/bin/defaults/script/info/ping.js.

2.3.2. Getting Current OpenIDM Session Information

You can get more information about the current OpenIDM session, beyond basic health checks, with the following REST call:

$ curl \
--cacert self-signed.crt \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
"https://localhost:8443/openidm/info/login" 
{
  "_id" : "",
  "class" : "org.forgerock.services.context.SecurityContext",
  "name" : "security",
  "authenticationId" : "openidm-admin",
  "authorization" : {
    "id" : "openidm-admin",
    "component" : "repo/internal/user",
    "roles" : [ "openidm-admin", "openidm-authorized" ],
    "ipAddress" : "127.0.0.1"
  },
  "parent" : {
    "class" : "org.forgerock.caf.authentication.framework.MessageContextImpl",
    "name" : "jaspi",
    "parent" : {
      "class" : "org.forgerock.services.context.TransactionIdContext",
      "id" : "2b4ab479-3918-4138-b018-1a8fa01bc67c-288",
      "name" : "transactionId",
      "transactionId" : {
        "value" : "2b4ab479-3918-4138-b018-1a8fa01bc67c-288",
        "subTransactionIdCounter" : 0
      },
      "parent" : {
        "class" : "org.forgerock.services.context.ClientContext",
        "name" : "client",
        "remoteUser" : null,
        "remoteAddress" : "127.0.0.1",
        "remoteHost" : "127.0.0.1",
        "remotePort" : 56534,
        "certificates" : "",
...

The information is provided by the following script: openidm/bin/defaults/script/info/login.js.

2.3.3. Monitoring OpenIDM Tuning and Health Parameters

You can extend OpenIDM monitoring beyond what you can check on the openidm/info/ping and openidm/info/login endpoints. Specifically, you can get more detailed information about the state of the following:

  • The operating system, on the openidm/health/os endpoint

  • Memory, on the openidm/health/memory endpoint

  • JDBC pooling, on the openidm/health/jdbc endpoint

  • Reconciliation, on the openidm/health/recon endpoint

You can regulate access to these endpoints, as described in "access.js".

2.3.3.1. Operating System Health Check

With the following REST call, you can get basic information about the host operating system:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/health/os"
{
    "_id" : "",
    "_rev" : "",
    "availableProcessors" : 1,
    "systemLoadAverage" : 0.06,
    "operatingSystemArchitecture" : "amd64",
    "operatingSystemName" : "Linux",
    "operatingSystemVersion" : "2.6.32-504.30.3.el6.x86_64"
}

From the output, you can see that this particular system has one 64-bit CPU with a load average of 6 percent, and runs Linux with the kernel version shown in operatingSystemVersion.

2.3.3.2. Memory Health Check

With the following REST call, you can get basic information about overall JVM memory use:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/health/memory"
{
    "_id" : "",
    "_rev" : "",
    "objectPendingFinalization" : 0,
    "heapMemoryUsage" : {
        "init" : 1073741824,
        "used" : 88538392,
        "committed" : 1037959168,
        "max" : 1037959168
    },
    "nonHeapMemoryUsage" : {
        "init" : 24313856,
        "used" : 69255024,
        "committed" : 69664768,
        "max" : 224395264
    }
}

The output includes information on JVM heap and non-heap memory, in bytes. Briefly:

  • JVM heap memory is used to store Java objects.

  • JVM non-heap memory is used by Java to store loaded classes and related metadata.

2.3.3.3. JDBC Health Check

With the following REST call, you can get basic information about the status of the configured internal JDBC database:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/health/jdbc"
{
   "_id" : "",
   "_rev" : "",
   "com.jolbox.bonecp:type=BoneCP-547b64b7-6765-4915-937b-e940cf74ed82" : {
      "connectionWaitTimeAvg" : 0.010752126251079611,
      "statementExecuteTimeAvg" : 0.8933237895474139,
      "statementPrepareTimeAvg" : 8.45602988656923,
      "totalLeasedConnections" : 0,
      "totalFreeConnections" : 7,
      "totalCreatedConnections" : 7,
      "cacheHits" : 0,
      "cacheMiss" : 0,
      "statementsCached" : 0,
      "statementsPrepared" : 27840,
      "connectionsRequested" : 19683,
      "cumulativeConnectionWaitTime" : 211,
      "cumulativeStatementExecutionTime" : 24870,
      "cumulativeStatementPrepareTime" : 3292,
      "cacheHitRatio" : 0.0,
      "statementsExecuted" : 27840
   },
   "com.jolbox.bonecp:type=BoneCP-856008a7-3553-4756-8ae7-0d3e244708fe" : {
      "connectionWaitTimeAvg" : 0.015448195945945946,
      "statementExecuteTimeAvg" : 0.6599738874458875,
      "statementPrepareTimeAvg" : 1.4170901010615866,
      "totalLeasedConnections" : 0,
      "totalFreeConnections" : 1,
      "totalCreatedConnections" : 1,
      "cacheHits" : 0,
      "cacheMiss" : 0,
      "statementsCached" : 0,
      "statementsPrepared" : 153,
      "connectionsRequested" : 148,
      "cumulativeConnectionWaitTime" : 2,
      "cumulativeStatementExecutionTime" : 152,
      "cumulativeStatementPrepareTime" : 107,
      "cacheHitRatio" : 0.0,
      "statementsExecuted" : 231
   }
}

The statistics shown cover connection pool usage and the time taken to prepare and execute SQL statements.

Note

To check the health of a JDBC repository, you need to make two changes to your configuration:

  • Install a JDBC repository, as described in "Installing a Repository For Production" in the Installation Guide.

  • Open the boot.properties file in your project-dir/conf/boot directory, and enable the statistics MBean for the BoneCP JDBC connection pool:

    openidm.bonecp.statistics.enabled=true

2.3.3.4. Reconciliation Health Check

With the following REST call, you can get basic information about the system demands related to reconciliation:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/health/recon"
{
    "_id" : "",
    "_rev" : "",
    "activeThreads" : 1,
    "corePoolSize" : 10,
    "largestPoolSize" : 1,
    "maximumPoolSize" : 10,
    "currentPoolSize" : 1
}

From the output, you can review the number of active threads that reconciliation is using, as well as the size of the available thread pool.

2.3.4. Customizing Health Check Scripts

You can extend or override the default information by creating your own script file and a corresponding configuration file, named openidm/conf/info-name.json. Custom script files can be located anywhere, although a best practice is to place them in openidm/script/info. A sample customized script file that extends the default ping service is provided in openidm/samples/infoservice/script/info/customping.js. The corresponding configuration file is provided in openidm/samples/infoservice/conf/info-customping.json.

The configuration file has the following syntax:

{
    "infocontext" : "ping",
    "type" : "text/javascript",
    "file" : "script/info/customping.js"
}

The parameters in the configuration file are as follows:

  • infocontext specifies the relative name of the info endpoint under the info context. The information can be accessed over REST at this endpoint, for example, setting infocontext to mycontext/myendpoint would make the information accessible over REST at https://localhost:8443/openidm/info/mycontext/myendpoint.

  • type specifies the type of the information source. JavaScript ("type" : "text/javascript") and Groovy ("type" : "groovy") are supported.

  • file specifies the path to the JavaScript or Groovy file, if you do not provide a "source" parameter.

  • source specifies the actual JavaScript or Groovy script, if you have not provided a "file" parameter.

Additional properties can be passed to the script, as shown in the sample configuration file (openidm/samples/infoservice/conf/info-name.json).
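
For example, given the infocontext value mycontext/myendpoint described previously, you could retrieve the custom information with a request similar to the following:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/info/mycontext/myendpoint"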

Script files in openidm/samples/infoservice/script/info/ have access to the following objects:

  • request - the request details, including the method called and any parameters passed.

  • healthinfo - the current health status of the system.

  • openidm - access to the JSON resource API.

  • Any additional properties that are defined in the configuration file (openidm/samples/infoservice/conf/info-name.json).

2.3.5. Verifying the State of Health Check Service Modules

The configurable OpenIDM health check service can verify the status of required modules and services for an operational system. During system startup, OpenIDM checks that these modules and services are available and reports on whether any requirements for an operational system have not been met. If dynamic configuration changes are made, OpenIDM rechecks that the required modules and services are functioning, to allow ongoing monitoring of system operation.

Examples of Required Modules

OpenIDM checks all required modules. Examples of those modules are shown here:

     "org.forgerock.openicf.framework.connector-framework"
     "org.forgerock.openicf.framework.connector-framework-internal"
     "org.forgerock.openicf.framework.connector-framework-osgi"
     "org.forgerock.openidm.audit"
     "org.forgerock.openidm.core"
     "org.forgerock.openidm.enhanced-config"
     "org.forgerock.openidm.external-email"
     ...
     "org.forgerock.openidm.system"
     "org.forgerock.openidm.ui"
     "org.forgerock.openidm.util"
     "org.forgerock.commons.org.forgerock.json.resource"
     "org.forgerock.commons.org.forgerock.json.resource.restlet"
     "org.forgerock.commons.org.forgerock.restlet"
     "org.forgerock.commons.org.forgerock.util"
     "org.forgerock.openidm.security-jetty"
     "org.forgerock.openidm.jetty-fragment"
     "org.forgerock.openidm.quartz-fragment"
     "org.ops4j.pax.web.pax-web-extender-whiteboard"
     "org.forgerock.openidm.scheduler"
     "org.ops4j.pax.web.pax-web-jetty-bundle"
     "org.forgerock.openidm.repo-jdbc"
     "org.forgerock.openidm.repo-orientdb"
     "org.forgerock.openidm.config"
     "org.forgerock.openidm.crypto"

Examples of Required Services

OpenIDM checks all required services. Examples of those services are shown here:

     "org.forgerock.openidm.config"
     "org.forgerock.openidm.provisioner"
     "org.forgerock.openidm.provisioner.openicf.connectorinfoprovider"
     "org.forgerock.openidm.external.rest"
     "org.forgerock.openidm.audit"
     "org.forgerock.openidm.policy"
     "org.forgerock.openidm.managed"
     "org.forgerock.openidm.script"
     "org.forgerock.openidm.crypto"
     "org.forgerock.openidm.recon"
     "org.forgerock.openidm.info"
     "org.forgerock.openidm.router"
     "org.forgerock.openidm.scheduler"
     "org.forgerock.openidm.scope"
     "org.forgerock.openidm.taskscanner"

You can replace the list of required modules and services, or add to it, by adding the following lines to your project's conf/boot/boot.properties file. Bundles and services are specified as a list of symbolic names, separated by commas:

  • openidm.healthservice.reqbundles - overrides the default required bundles.

  • openidm.healthservice.reqservices - overrides the default required services.

  • openidm.healthservice.additionalreqbundles - specifies required bundles (in addition to the default list).

  • openidm.healthservice.additionalreqservices - specifies required services (in addition to the default list).
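
For example, to add a custom bundle to the default list of required bundles, you might add a line similar to the following to your project's conf/boot/boot.properties file (the bundle's symbolic name here is illustrative):

openidm.healthservice.additionalreqbundles=org.example.openidm.custom-bundle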

By default, OpenIDM gives the system 15 seconds to start up all the required bundles and services, before the system readiness is assessed. Note that this is not the total start time, but the time required to complete the service startup after the framework has started. You can change this default by setting the value of the servicestartmax property (in milliseconds) in your project's conf/boot/boot.properties file. This example sets the startup time to five seconds:

openidm.healthservice.servicestartmax=5000

2.4. Displaying Information About Installed Modules

On a running OpenIDM instance, you can list the installed modules and their states by running the following command in the OSGi console (the output varies by configuration):

-> scr list 
  
   Id   State          Name
[  12] [active       ] org.forgerock.openidm.endpoint
[  13] [active       ] org.forgerock.openidm.endpoint
[  14] [active       ] org.forgerock.openidm.endpoint
[  15] [active       ] org.forgerock.openidm.endpoint
[  16] [active       ] org.forgerock.openidm.endpoint
      ...
[  34] [active       ] org.forgerock.openidm.taskscanner
[  20] [active       ] org.forgerock.openidm.external.rest
[   6] [active       ] org.forgerock.openidm.router
[  33] [active       ] org.forgerock.openidm.scheduler
[  19] [unsatisfied  ] org.forgerock.openidm.external.email
[  11] [active       ] org.forgerock.openidm.sync
[  25] [active       ] org.forgerock.openidm.policy
[   8] [active       ] org.forgerock.openidm.script
[  10] [active       ] org.forgerock.openidm.recon
[   4] [active       ] org.forgerock.openidm.http.contextregistrator
[   1] [active       ] org.forgerock.openidm.config
[  18] [active       ] org.forgerock.openidm.endpointservice
[  30] [unsatisfied  ] org.forgerock.openidm.servletfilter
[  24] [active       ] org.forgerock.openidm.infoservice
[  21] [active       ] org.forgerock.openidm.authentication
->

To display additional information about a particular module or service, run the following command, substituting the Id of that module from the preceding list:

-> scr info Id

The following example displays additional information about the router service:

-> scr info 6 
  
ID: 6
Name: org.forgerock.openidm.router
Bundle: org.forgerock.openidm.core (41)
State: active
Default State: enabled
Activation: immediate
Configuration Policy: optional
Activate Method: activate (declared in the descriptor)
Deactivate Method: deactivate (declared in the descriptor)
Modified Method: modified
Services: org.forgerock.json.resource.JsonResource
Service Type: service
Reference: ref_JsonResourceRouterService_ScopeFactory
    Satisfied: satisfied
    Service Name: org.forgerock.openidm.scope.ScopeFactory
    Multiple: single
    Optional: mandatory
    Policy: dynamic
Properties:
    component.id = 6
    component.name = org.forgerock.openidm.router
    felix.fileinstall.filename = file:/openidm/samples/sample1/conf/router.json
    jsonconfig = {
    "filters" : [
        {
            "onRequest" : {
                "type" : "text/javascript",
                "file" : "bin/defaults/script/router-authz.js"
            }
        },
        {
            "onRequest" : {
                "type" : "text/javascript",
                "file" : "bin/defaults/script/policyFilter.js"
            },
            "methods" : [
                "create",
                "update"
            ]
        }
    ]
}
    openidm.restlet.path = /
    service.description = OpenIDM internal JSON resource router
    service.pid = org.forgerock.openidm.router
    service.vendor = ForgeRock AS 
->

2.5. Starting OpenIDM in Debug Mode

To debug custom libraries, you can start OpenIDM with the option to use the Java Platform Debugger Architecture (JPDA):

  • Start OpenIDM with the jpda option:

    $ cd /path/to/openidm
    $ ./startup.sh jpda
    Executing ./startup.sh...
    Using OPENIDM_HOME:   /path/to/openidm
    Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Denvironment=PROD -Djava.compiler=NONE
       -Xnoagent -Xdebug -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n
    Using LOGGING_CONFIG:
       -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
    Listening for transport dt_socket at address: 5005
    Using boot properties at /path/to/openidm/conf/boot/boot.properties
    -> OpenIDM version "4.0.0" (revision: xxxx)
    OpenIDM ready

    The relevant JPDA options are outlined in the startup script (startup.sh).

  • In your IDE, attach a Java debugger to the JVM via socket, on port 5005.

Caution

This interface is internal and subject to change. If you depend on this interface, contact ForgeRock support.

2.6. Running OpenIDM As a Service on Linux Systems

OpenIDM provides a script that generates an initialization script to run OpenIDM as a service on Linux systems. You can start the script as the root user, or configure it to start during the boot process.

When OpenIDM runs as a service, logs are written to the directory in which OpenIDM was installed.

To run OpenIDM as a service, take the following steps:

  1. If you have not yet installed OpenIDM, follow the procedure described in "Installing OpenIDM Services" in the Installation Guide.

  2. Run the RC script:

    $ cd /path/to/openidm/bin
    $ ./create-openidm-rc.sh
  3. As a user with administrative privileges, copy the openidm script to the /etc/init.d directory:

    $ sudo cp openidm /etc/init.d/
  4. If you run Linux with SELinux enabled, change the file context of the newly copied script with the following command:

    $ sudo restorecon /etc/init.d/openidm

    You can verify the change to SELinux contexts with the ls -Z /etc/init.d command. For consistency, change the user context to match other scripts in the same directory with the sudo chcon -u system_u /etc/init.d/openidm command.

  5. Run the appropriate commands to add OpenIDM to the list of RC services:

    • On Red Hat-based systems, run the following commands:

      $ sudo chkconfig --add openidm
      $ sudo chkconfig openidm on
    • On Debian/Ubuntu systems, run the following command:

      $ sudo update-rc.d openidm defaults
      Adding system startup for /etc/init.d/openidm ...
      /etc/rc0.d/K20openidm -> ../init.d/openidm
      /etc/rc1.d/K20openidm -> ../init.d/openidm
      /etc/rc6.d/K20openidm -> ../init.d/openidm
      /etc/rc2.d/S20openidm -> ../init.d/openidm
      /etc/rc3.d/S20openidm -> ../init.d/openidm
      /etc/rc4.d/S20openidm -> ../init.d/openidm
      /etc/rc5.d/S20openidm -> ../init.d/openidm

      Note the output, as Debian/Ubuntu adds start and kill scripts to appropriate runlevels.

      When you run the command, you may get the following warning message: update-rc.d: warning: /etc/init.d/openidm missing LSB information. You can safely ignore that message.

  6. As an administrative user, start the OpenIDM service:

    $ sudo /etc/init.d/openidm start

    Alternatively, reboot the system to start the OpenIDM service automatically.

  7. (Optional) The following commands stop and restart the service:

    $ sudo /etc/init.d/openidm stop
    $ sudo /etc/init.d/openidm restart

If you have set up a deployment of OpenIDM in a custom directory, such as /path/to/openidm/production, you can modify the /etc/init.d/openidm script.

Open the openidm script in a text editor and navigate to the START_CMD line.

At the end of the command, you should see the following line:

org.forgerock.commons.launcher.Main -c bin/launcher.json > logs/server.out 2>&1 &"

Include the path to the production directory. In this case, you would add -p production as shown:

org.forgerock.commons.launcher.Main -c bin/launcher.json -p production > logs/server.out 2>&1 &"

Save the openidm script file in the /etc/init.d directory. The sudo /etc/init.d/openidm start command should now start OpenIDM with the files in your production subdirectory.

Chapter 3. OpenIDM Command-Line Interface

This chapter describes the basic command-line interface provided with OpenIDM. The command-line interface includes a number of utilities for managing an OpenIDM instance.

All of the utilities are subcommands of the cli.sh (UNIX) or cli.bat (Windows) scripts. To use the utilities, you can either run them as subcommands, or launch the cli script first, and then run the utility. For example, to run the encrypt utility on a UNIX system:

$ cd /path/to/openidm 
$ ./cli.sh 
Using boot properties at /path/to/openidm/conf/boot/boot.properties
openidm# encrypt ....

or

$ cd /path/to/openidm
$ ./cli.sh encrypt ... 

By default, the command-line utilities run with the properties defined in your project's conf/boot/boot.properties file.

If you run the cli.sh command by itself, it opens an OpenIDM-specific shell prompt:

openidm#

The startup and shutdown scripts are not discussed in this chapter. For information about these scripts, see "Starting and Stopping OpenIDM".

The following sections describe the subcommands and their use. Examples assume that you are running the commands on a UNIX system. For Windows systems, use cli.bat instead of cli.sh.

For a list of subcommands available from the openidm# prompt, run the cli.sh help command. The help and exit options shown below are self-explanatory. The other subcommands are explained in the subsections that follow:

local:keytool               Export or import a SecretKeyEntry.
                            The Java Keytool does not allow for exporting or
                            importing SecretKeyEntries.
local:encrypt               Encrypt the input string.
local:secureHash            Hash the input string.
local:validate              Validates all json configuration files in the
                            configuration (default: /conf) folder.
basic:help                  Displays available commands.
basic:exit                  Exit from the console.
remote:update               Update the system with the provided update file.
remote:configureconnector   Generate connector configuration.
remote:configexport         Exports all configurations.
remote:configimport         Imports the configuration set from local file/directory.

The configexport, configimport, and configureconnector subcommands support up to four options:

-u or --user USER[:PASSWORD]

Allows you to specify the server user and password. Specifying a username is mandatory. If you do not specify a username, the following error is output to the OSGi console: Remote operation failed: Unauthorized. If you do not specify a password, you are prompted for one. This option is used by all three subcommands.

--url URL

The URL of the OpenIDM REST service. The default URL is http://localhost:8080/openidm/. This can be used to import configuration files from a remote running instance of OpenIDM. This option is used by all three subcommands.

-P or --port PORT

The port number associated with the OpenIDM REST service. If specified, this option overrides any port number specified with the --url option. The default port is 8080. This option is used by all three subcommands.

-r or --replaceall or --replaceAll

Replaces the entire list of configuration files with the files in the specified backup directory. This option is used only with the configimport command.

3.1. Using the configexport Subcommand

The configexport subcommand exports all configuration objects to a specified location, enabling you to reuse a system configuration in another environment. For example, you can test a configuration in a development environment, then export it and import it into a production environment. This subcommand also enables you to inspect the active configuration of an OpenIDM instance.

OpenIDM must be running when you execute this command.

Usage is as follows:

$ ./cli.sh configexport --user username:password export-location

For example:

$ ./cli.sh configexport --user openidm-admin:openidm-admin /tmp/conf

On Windows systems, the export-location must be provided in quotation marks, for example:

C:\openidm\cli.bat configexport --user openidm-admin:openidm-admin "C:\temp\openidm"

Configuration objects are exported as .json files to the specified directory. The command creates the directory if needed. Configuration files that are present in this directory are renamed as backup files, with a timestamp, for example, audit.json.2014-02-19T12-00-28.bkp, and are not overwritten. The following configuration objects are exported:

  • The internal repository table configuration (repo.orientdb.json or repo.jdbc.json) and the datasource connection configuration, for JDBC repositories (datasource.jdbc-default.json)

  • Default and custom configuration directories (script.json)

  • The log configuration (audit.json)

  • The authentication configuration (authentication.json)

  • The cluster configuration (cluster.json)

  • The configuration of a connected SMTP email server (external.email.json)

  • Custom configuration information (info-name.json)

  • The managed object configuration (managed.json)

  • The connector configuration (provisioner.openicf-*.json)

  • The router service configuration (router.json)

  • The scheduler service configuration (scheduler.json)

  • Any configured schedules (schedule-*.json)

  • Standard knowledge-based authentication questions (selfservice.kba.json)

  • The synchronization mapping configuration (sync.json)

  • If workflows are defined, the configuration of the workflow engine (workflow.json) and the workflow access configuration (process-access.json)

  • Any configuration files related to the user interface (ui-*.json)

  • The configuration of any custom endpoints (endpoint-*.json)

  • The configuration of servlet filters (servletfilter-*.json)

  • The policy configuration (policy.json)

3.2. Using the configimport Subcommand

The configimport subcommand imports configuration objects from the specified directory, enabling you to reuse a system configuration from another environment. For example, you can test a configuration in a development environment, then export it and import it into a production environment.

The command updates the existing configuration from the import-location over the OpenIDM REST interface. By default, if configuration objects are present in the import-location and not in the existing configuration, these objects are added. If configuration objects are present in the existing configuration but not in the import-location, those objects are left untouched.

If you include the --replaceAll parameter, the command wipes out the existing configuration and replaces it with the configuration in the import-location. Objects in the existing configuration that are not present in the import-location are deleted.

Usage is as follows:

$ ./cli.sh configimport --user username:password [--replaceAll] import-location

For example:

$ ./cli.sh configimport --user openidm-admin:openidm-admin --replaceAll /tmp/conf

On Windows systems, the import-location must be provided in quotation marks, for example:

C:\openidm\cli.bat configimport --user openidm-admin:openidm-admin --replaceAll "C:\temp\openidm"

Configuration objects are imported as .json files from the specified directory to the conf directory. The configuration objects that are imported are the same as those for the export command, described in the previous section.

3.3. Using the configureconnector Subcommand

The configureconnector subcommand generates a configuration for an OpenICF connector.

Usage is as follows:

$ ./cli.sh configureconnector --user username:password --name connector-name

Select the type of connector that you want to configure. The following example configures a new XML connector:

$ ./cli.sh configureconnector --user openidm-admin:openidm-admin --name myXmlConnector
 Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
0. CSV File Connector version 1.5.0.0
1. Database Table Connector version 1.1.0.1
2. Scripted Poolable Groovy Connector version 1.4.2.0
3. Scripted Groovy Connector version 1.4.2.0
4. Scripted CREST Connector version 1.4.2.0
5. Scripted SQL Connector version 1.4.2.0
6. Scripted REST Connector version 1.4.2.0
7. LDAP Connector version 1.4.1.0
8. XML Connector version 1.1.0.2
9. Exit
Select [0..9]: 8
Edit the configuration file and run the command again. The configuration was 
saved to /openidm/temp/provisioner.openicf-myXmlConnector.json

The basic configuration is saved in a file named /openidm/temp/provisioner.openicf-connector-name.json. Edit the configurationProperties parameter in this file to complete the connector configuration. For an XML connector, you can use the schema definitions in Sample 1 for an example configuration:

  "configurationProperties" : {
    "xmlFilePath" : "samples/sample1/data/resource-schema-1.xsd",
    "createFileIfNotExists" : false,
    "xsdFilePath" : "samples/sample1/data/resource-schema-extension.xsd",
    "xsdIcfFilePath" : "samples/sample1/data/xmlConnectorData.xml"
  },  
  

For more information about the connector configuration properties, see "Configuring Connectors".

When you have modified the file, run the configureconnector command again so that OpenIDM can pick up the new connector configuration:

$ ./cli.sh configureconnector --user openidm-admin:openidm-admin --name myXmlConnector
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Configuration was found and read from: /path/to/openidm/temp/provisioner.openicf-myXmlConnector.json

You can now copy the new provisioner.openicf-myXmlConnector.json file to the conf/ subdirectory.

You can also configure connectors over the REST interface, or through the Admin UI. For more information, see "Creating Default Connector Configurations" and "Adding New Connectors from the Admin UI".

3.4. Using the encrypt Subcommand

The encrypt subcommand encrypts an input string, or JSON object, provided at the command line. This subcommand can be used to encrypt passwords, or other sensitive data, to be stored in the OpenIDM repository. The encrypted value is output to standard output and provides details of the cryptography key that is used to encrypt the data.

Usage is as follows:

$ ./cli.sh encrypt [-j] string

The -j option specifies that the string to be encrypted is a JSON object. If you do not enter the string as part of the command, the command prompts for the string to be encrypted. If you enter the string as part of the command, any special characters, for example quotation marks, must be escaped.

The following example encrypts a normal string value:

$ ./cli.sh encrypt mypassword
Executing ./cli.sh
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Activating cryptography service of type: JCEKS provider:  location: security/keystore.jceks
Available cryptography key: openidm-sym-default
Available cryptography key: openidm-localhost
CryptoService is initialized with 2 keys.
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "value" : {
      "iv" : "M2913T5ZADlC2ip2imeOyg==",
      "data" : "DZAAAM1nKjQM1qpLwh3BgA==",
      "cipher" : "AES/CBC/PKCS5Padding",
      "key" : "openidm-sym-default"
    },
    "type" : "x-simple-encryption"
  }
}
------END ENCRYPTED VALUE------ 

The following example encrypts a JSON object. The input string must be a valid JSON object:

$ ./cli.sh encrypt -j {\"password\":\"myPassw0rd\"}
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Activating cryptography service of type: JCEKS provider:  location: security/keystore.jceks
Available cryptography key: openidm-sym-default
Available cryptography key: openidm-localhost
CryptoService is initialized with 2 keys.
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "value" : {
      "iv" : "M2913T5ZADlC2ip2imeOyg==",
      "data" : "DZAAAM1nKjQM1qpLwh3BgA==",
      "cipher" : "AES/CBC/PKCS5Padding",
      "key" : "openidm-sym-default"
    },
    "type" : "x-simple-encryption"
  }
}
------END ENCRYPTED VALUE------  

The following example prompts for a JSON object to be encrypted. In this case, you do not need to escape the special characters:

$ ./cli.sh encrypt -j
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Enter the Json value

> Press ctrl-D to finish input
Start data input:
{"password":"myPassw0rd"}
^D        
Activating cryptography service of type: JCEKS provider:  location: security/keystore.jceks
Available cryptography key: openidm-sym-default
Available cryptography key: openidm-localhost
CryptoService is initialized with 2 keys.
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "value" : {
      "iv" : "6e0RK8/4F1EK5FzSZHwNYQ==",
      "data" : "gwHSdDTmzmUXeD6Gtfn6JFC8cAUiksiAGfvzTsdnAqQ=",
      "cipher" : "AES/CBC/PKCS5Padding",
      "key" : "openidm-sym-default"
    },
    "type" : "x-simple-encryption"
  }
}
------END ENCRYPTED VALUE------

3.5. Using the secureHash Subcommand

The secureHash subcommand hashes an input string, or JSON object, using the specified hash algorithm. This subcommand can be used to hash password values, or other sensitive data, to be stored in the OpenIDM repository. The hashed value is output to standard output and provides details of the algorithm that was used to hash the data.

Usage is as follows:

$ ./cli.sh secureHash --algorithm algorithm [-j] string

The -a or --algorithm option specifies the hash algorithm to use. OpenIDM supports the following hash algorithms: MD5, SHA-1, SHA-256, SHA-384, and SHA-512. If you do not specify a hash algorithm, SHA-256 is used.

The -j option specifies that the string to be hashed is a JSON object. If you do not enter the string as part of the command, the command prompts for the string to be hashed. If you enter the string as part of the command, any special characters, for example quotation marks, must be escaped.

The following example hashes a password value (mypassword) using the SHA-1 algorithm:

$ ./cli.sh secureHash --algorithm SHA-1 mypassword
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Activating cryptography service of type: JCEKS provider:  location: security/keystore.jceks
Available cryptography key: openidm-sym-default
Available cryptography key: openidm-localhost
CryptoService is initialized with 2 keys.
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "YNBVgtR/jlOaMm01W8xnCBAj2J+x73iFpbhgMEXl7cOsCeWm"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------

The following example hashes a JSON object. The input string must be a valid JSON object:

$ ./cli.sh secureHash --algorithm SHA-1 -j {\"password\":\"myPassw0rd\"}
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Activating cryptography service of type: JCEKS provider:  location: security/keystore.jceks
Available cryptography key: openidm-sym-default
Available cryptography key: openidm-localhost
CryptoService is initialized with 2 keys.
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "ztpt8rEbeqvLXUE3asgA3uf5gJ77I3cED2OvOIxd5bi1eHtG"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------

The following example prompts for a JSON object to be hashed. In this case, you do not need to escape the special characters:

$ ./cli.sh secureHash --algorithm SHA-1 -j
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Enter the Json value

> Press ctrl-D to finish input
Start data input:
{"password":"myPassw0rd"}
^D        
Activating cryptography service of type: JCEKS provider:  location: security/keystore.jceks
Available cryptography key: openidm-sym-default
Available cryptography key: openidm-localhost
CryptoService is initialized with 2 keys.
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "ztpt8rEbeqvLXUE3asgA3uf5gJ77I3cED2OvOIxd5bi1eHtG"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------

3.6. Using the keytool Subcommand

The keytool subcommand exports or imports secret key values.

The Java keytool command enables you to export and import public keys and certificates, but not secret or symmetric keys. The OpenIDM keytool subcommand provides this functionality.

Usage is as follows:

$ ./cli.sh keytool [--export | --import] alias

For example, to export the default OpenIDM symmetric key, run the following command:

$ ./cli.sh keytool --export openidm-sym-default
   Using boot properties at /openidm/conf/boot/boot.properties
Use KeyStore from: /openidm/security/keystore.jceks
Please enter the password: 
[OK] Secret key entry with algorithm AES
AES:606d80ae316be58e94439f91ad8ce1c0  

The default keystore password is changeit. For security reasons, you must change this password in a production environment. For information about changing the keystore password, see "Change the Default Keystore Password".

To import a new secret key named my-new-key, run the following command:

$ ./cli.sh keytool --import my-new-key   
Using boot properties at /openidm/conf/boot/boot.properties
Use KeyStore from: /openidm/security/keystore.jceks
Please enter the password: 
Enter the key: 
AES:606d80ae316be58e94439f91ad8ce1c0 

If a secret key of that name already exists, OpenIDM returns the following error:

"KeyStore contains a key with this alias"

3.7. Using the validate Subcommand

The validate subcommand validates all .json configuration files in your project's conf/ directory.

Usage is as follows:

$ ./cli.sh validate
Executing ./cli.sh
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
...................................................................
[Validating] Load JSON configuration files from:
[Validating] 	/path/to/openidm/conf
[Validating] audit.json .................................. SUCCESS
[Validating] authentication.json ......................... SUCCESS

    ...

[Validating] sync.json ................................... SUCCESS
[Validating] ui-configuration.json ....................... SUCCESS
[Validating] ui-countries.json ........................... SUCCESS
[Validating] ui-secquestions.json ........................ SUCCESS
[Validating] workflow.json ............................... SUCCESS
  

3.8. Using the update Subcommand

The update subcommand supports updates of OpenIDM 4 for patches and migrations. For an example of this process, see "Updating OpenIDM" in the Installation Guide.

Chapter 4. OpenIDM Web-Based User Interfaces

OpenIDM provides a customizable, browser-based user interface. The functionality is subdivided into Administrative and Self-Service User Interfaces.

If you are administering OpenIDM, navigate to the Administrative User Interface, also known as the Admin UI. If OpenIDM is installed on the local system, you can get to the Admin UI at the following URL: https://localhost:8443/admin. In the Admin UI, you can configure connectors, customize managed objects, set up attribute mappings, manage accounts, and more.

The Self-Service User Interface, also known as the Self-Service UI, provides role-based access to tasks based on BPMN2 workflows, and allows users to manage certain aspects of their own accounts, including configurable self-service registration. When OpenIDM starts, you can access the Self-Service UI at https://localhost:8443/.

Warning

The default password for the OpenIDM administrative user, openidm-admin, is openidm-admin. To protect your deployment in production, change this password.

All users, including openidm-admin, can change their password through the Self-Service UI. After you have logged in, click Change Password.
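If you prefer to script this change, you can update the internal admin user over the REST interface. The following is a minimal sketch; it assumes the default repo/internal/user endpoint and a self-signed certificate (hence --insecure), so verify the endpoint and the required fields for your OpenIDM version:

$ curl \
 --insecure \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header "If-Match: *" \
 --request PUT \
 --data '{
   "_id" : "openidm-admin",
   "userName" : "openidm-admin",
   "password" : "Passw0rd",
   "roles" : [ "openidm-admin", "openidm-authorized" ]
 }' \
 "https://localhost:8443/openidm/repo/internal/user/openidm-admin"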

4.1. Configuring OpenIDM from the Admin UI

You can set up a basic configuration for OpenIDM with the Administrative User Interface (Admin UI).

Through the Admin UI, you can connect to resources, configure attribute mapping and scheduled reconciliation, and set up and manage objects, such as users, groups, and devices.

When you log into the Admin UI, the first screen you should see is the Dashboard.

The OpenIDM Administrative UI Dashboard
OpenIDM Administrative User Interface: Main Screen

The Admin UI includes a fixed top menu bar. As you navigate around the Admin UI, you should see the same menu bar throughout. You can click the Dashboard link on the top menu bar to return to the Dashboard.

The Dashboard is split into four sections:

  • Quick Start cards support one-click access to common administrative tasks, and are described in detail in the following section.

  • Last Reconciliation includes data from the most recent reconciliation between data stores. After you run a reconciliation, you should see data similar to:

    OpenIDM Administrative User Interface: Reconciliation Statistics
  • System Health includes data on current CPU and memory usage.

  • Resources include an abbreviated list of configured connectors, mappings, and managed objects.

The Quick Start cards allow quick access to the labeled configuration options.

You can configure more of OpenIDM than what is shown in the Quick Start cards. In the top menu bar, select the Configure and Manage drop-down menus and see what happens when you select each option.

4.2. Working With the Self-Service UI

For all users, the Self-Service UI includes Dashboard and Profile links in the top menu bar.

To access the Self-Service UI, start OpenIDM, then navigate to https://localhost:8443/. If you have not installed a certificate that is trusted by a certificate authority, you are prompted with an Untrusted Connection warning the first time you log in to the UI.

The Dashboard includes a list of tasks assigned to the user who has logged in, tasks assigned to the relevant group, processes available to be invoked, and current notifications for that user, along with Quick Start cards for that user's profile and password.

The OpenIDM Self-Service UI Dashboard
OpenIDM Self-Service User Interface

For examples of these tasks, processes, and notifications, see "Workflow Samples" in the Samples Guide.

4.3. Configuring User Self-Service

The following sections describe how you can configure three functions of user self-service: User Registration, Forgotten Username, and Password Reset.

  • User Registration: You can configure limited access that allows an anonymous user to create an account. To aid in this process, you can configure reCAPTCHA, email validation, and KBA questions.

  • Forgotten Username: You can set up OpenIDM to allow users to recover forgotten usernames via their email addresses or first and last names. OpenIDM can then display that username on the screen, email it to the user, or both.

  • Password Reset: You can set up OpenIDM to verify user identities via KBA questions. If email configuration is included, OpenIDM emails a link that allows users to reset their passwords.

All three self-service functions can use email. To enable this, configure an outgoing email service for OpenIDM, as described in "Sending Email".
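For reference, the outgoing email service is configured in the external.email.json file in your project's conf/ directory. The following is a minimal sketch, assuming an SMTP server that requires authentication and STARTTLS; the host, credentials, and addresses are placeholders:

{
    "host" : "smtp.example.net",
    "port" : 587,
    "from" : "admin@example.net",
    "auth" : {
        "enable" : true,
        "username" : "smtp-user",
        "password" : "smtp-password"
    },
    "starttls" : {
        "enable" : true
    }
}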


Note

If you disable email validation only for user registration, you should perform one of the following actions:

  • Disable validation for mail in the managed user schema. Click Configure > Managed Objects > User > Schema. Under Schema Properties, click Mail, scroll down to Validation Policies, and set Required to false.

  • Configure the User Registration template to support user email entries. To do so, use "Customizing the User Registration Page", and substitute mail for employeeNum.

Without these changes, users who try to register accounts will see a Forbidden Request Error.

You can configure user self-service through the UI and through configuration files.

  • In the UI, log into the Admin UI. You can enable these features when you click Configure > User Registration, Configure > Forgotten Username, and Configure > Password Reset.

  • In the command-line interface, copy the following files from samples/misc to your working project-dir/conf directory:

    User Registration: selfservice-registration.json
    Forgotten username: selfservice-username.json
    Password reset: selfservice-reset.json

    Examine the ui-configuration.json file in the same directory. You can activate or deactivate User Registration and Password Reset by changing the value associated with the selfRegistration and passwordReset properties:

    {
       "configuration" : {
          "selfRegistration" : true,
          "passwordReset" : true,
          "forgotUsername" : true,
        ...

For each of these functions, you can configure several options, including:

reCAPTCHA

Google reCAPTCHA helps prevent bots from registering users or resetting passwords on your system. For Google documentation, see Google reCAPTCHA. For directions on how to configure reCAPTCHA for user self-service, see "Configuring Google reCAPTCHA".

Email Validation / Email Username

You can configure the email messages that OpenIDM sends to users, as a way to verify identities for user self-service. For more information, see "Configuring Self-Service Email Messages".

If you configure email validation, you must also configure an outgoing email service in OpenIDM. To do so, click Configure > System Preferences > Email. For more information, read "Sending Email".

User Details

You can modify the Identity Email Field associated with user registration; by default, it is set to mail.

User Query

When configuring password reset and forgotten username functionality, you can modify the fields that a user is allowed to query. If you do, you may need to modify the HTML templates that appear to users who request such functionality. For more information, see "Modifying Valid Query Fields".

Valid Query Fields

Property names that you can use to help users find their usernames or verify their identity, such as userName, mail, or givenName.

Identity ID Field

Property name associated with the User ID, typically _id.

Identity Email Field

Property name associated with the user email field, typically something like mail or email.

Identity Service URL

The path associated with the identity data store, such as managed/user.

KBA Stage

You can modify Knowledge-based Authentication (KBA) questions. Users can then select the questions of their choice to help verify their own identities. For directions on how to configure KBA questions, see "Configuring Self-Service Questions". For User Registration, you cannot configure these questions in the Admin UI.

Registration Form

You can change the Identity Service URL for the target repository, to an entry such as managed/user.

Password Reset Form

You can change the Identity Service URL for the target repository, to an entry such as managed/user. You can also cite the property associated with user passwords, such as password.

Display Username

For forgotten username retrieval, you can configure OpenIDM to display the username on the website, instead of (or in addition to) sending that username to the associated email account.

Snapshot Token

OpenIDM User Self-Service uses JSON Web Tokens (JWTs), with a default token lifetime of 1800 seconds.
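You can adjust that lifetime through the tokenExpiry value in the snapshotToken block of the relevant selfservice-*.json file. The following excerpt is representative; verify the field names against the sample files shipped with your version:

"snapshotToken" : {
    "type" : "jwt",
    "jweAlgorithm" : "RSAES_PKCS1_V1_5",
    "encryptionMethod" : "A128CBC_HS256",
    "jwsAlgorithm" : "HS256",
    "tokenExpiry" : 1800
},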

You can reorder the sequence in which OpenIDM applies the relevant self-service options, specifically reCAPTCHA, KBA questions, and email validation. Based on the following screen, users who need to reset their passwords will go through reCAPTCHA, followed by email validation, and then answer any configured KBA questions.

OpenIDM Self-Service UI - Password Reset Sequence
OpenIDM Self-Service UI - Password Reset Sequence

To reorder the steps, either "drag and drop" the options in the Admin UI, or change the sequence in the associated configuration file, in the project-dir/conf directory.

OpenIDM generates a token for each process. For example, users who forget their usernames and passwords go through two steps:

  • The user goes through the Forgotten Username process, gets a JWT token, and has the token lifetime (default = 1800 seconds) to get to the next step in the process.

  • With username in hand, that user may then start the Password Reset process. That user gets a second JWT token, with the token lifetime configured for that process.

4.3.1. Common Configuration Details

This section describes configuration details common to both User Registration and Password Reset.

4.3.1.1. Configuring Self-Service Email Messages

When a user requests a new account, or a Password Reset, you can configure OpenIDM to send that user an email message, to confirm the request. That email can include a link that the user would select to continue the process.

You can configure that email message either through the UI or the associated configuration files, as illustrated in the following excerpt of the selfservice-registration.json file:

{
   "stageConfigs" : [
      {
         "name" : "emailValidation",
         "identityEmailField" : "mail",
         "emailServiceUrl" : "external/email",
         "from" : "admin@example.net",
         "subject" : "Register new account",
         "mimeType" : "text/html",
         "subjectTranslations" : {
            "en" : "Create a new username",
            "fr" : "Créer un nouveau nom d'utilisateur"
         },
         "messageTranslations" : {
            "en" : "<h3>This is your confirmation email.</h3><h4><a href=\"%link%\">Click to continue</a></h4>",
            "fr" : "<h3>Ceci est votre e-mail de confirmation.</h3><h4><a href=\"%link%\">Cliquez pour continuer</a></h4>"
         },
         "verificationLinkToken" : "%link%",
         "verificationLink" : "https://openidm.example.net:8443/#register/"
      },
...

Note the two languages in the subjectTranslations and messageTranslations code blocks. You can add translations for languages other than US English (en) and French (fr). Use the appropriate two-letter code based on ISO 639. End users will see the message in the language configured in their web browsers.

You can set up similar emails for password reset and forgotten username functionality, in the selfservice-reset.json and selfservice-username.json files. For templates, see the /path/to/openidm/samples/misc directory.

One difference between User Registration and Password Reset is in the "verificationLink"; for Password Reset, the corresponding URL is:

...
     "verificationLink" : "https://openidm.example.net:8443/#passwordReset/"
...

4.3.1.2. Configuring Google reCAPTCHA

To use Google reCAPTCHA, you will need a Google account and your domain name (RFC 2606-compliant URLs such as localhost and example.com are acceptable for test purposes). Google then provides a Site key and a Secret key that you can include in the self-service function configuration.

For example, you can add the following reCAPTCHA code block (with appropriate keys as defined by Google) into the selfservice-registration.json, selfservice-reset.json or the selfservice-username.json configuration files:

{
   "stageConfigs" : [
      {
         "name" : "captcha",
         "recaptchaSiteKey" : "< Insert Site Key Here >",
         "recaptchaSecretKey" : "< Insert Secret Key Here >",
         "recaptchaUri" : "https://www.google.com/recaptcha/api/siteverify"
      },

You may also add the reCAPTCHA keys through the UI.

4.3.1.3. Configuring Self-Service Questions

OpenIDM uses Knowledge-based Authentication (KBA) to help users prove their identity during self-service functions such as User Registration and Password Reset. Users get a choice of the questions configured in the following file: selfservice.kba.json.

The default version of this file is straightforward:

{
   "questions" : {
      "1" : {
         "en" : "What's your favorite color?",
         "en_GB" : "What is your favourite colour?",
         "fr" : "Quelle est votre couleur préférée?"
      },
      "2" : {
         "en" : "Who was your first employer?"
      }
   }
}

You may change or add the questions of your choice, in JSON format.
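For example, to add a third question, extend the questions object with a new numbered key. The question text and translations here are illustrative:

{
   "questions" : {
      "1" : {
         "en" : "What's your favorite color?",
         "en_GB" : "What is your favourite colour?",
         "fr" : "Quelle est votre couleur préférée?"
      },
      "2" : {
         "en" : "Who was your first employer?"
      },
      "3" : {
         "en" : "What was the name of your first pet?",
         "fr" : "Quel était le nom de votre premier animal de compagnie?"
      }
   }
}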

At this time, OpenIDM supports editing KBA questions only through the noted configuration file. However, individual users can configure their own questions and answers, during the User Registration process.

After a regular user logs into the Self-Service UI, that user can modify, add, and delete KBA questions under the Profile tab:

Users can modify their own KBA questions

4.3.1.4. Setting a Minimum Number of Self-Service Questions

You can set a minimum number of questions that users must define when they register for their accounts. To do so, open the associated configuration file, selfservice-registration.json, in your project-dir/conf directory, and look for the code block that starts with kbaSecurityAnswerDefinitionStage:

{
     "name" : "kbaSecurityAnswerDefinitionStage",
     "numberOfAnswersUserMustSet" : 1,
     "kbaConfig" : null
},

In a similar fashion, you can set a minimum number of questions that users have to answer before OpenIDM allows them to reset their passwords. The associated configuration file is selfservice-reset.json, and the relevant code block is:

{
     "name" : "kbaSecurityAnswerVerificationStage",
     "kbaPropertyName" : "kbaInfo",
     "identityServiceUrl" : "managed/user",
     "numberOfQuestionsUserMustAnswer" : "1",
     "kbaConfig" : null
},

4.3.2. The End User and Commons User Self-Service

When all self-service features are enabled, OpenIDM includes three links on the self-service login page: Reset your password, Register, and Forgot Username?.

When the account registration page is used to create an account, OpenIDM normally creates a managed object in the OpenIDM repository, and applies default policies for managed objects.

4.4. Customizing a UI Template

You may want to customize information included in the Self-Service UI.

These procedures do not address actual data store requirements. If you add text boxes in the UI, it is your responsibility to set up associated properties in your repositories.

To customize these screens, copy the existing default template files in the openidm/ui/selfservice/default subdirectory to the associated extension/ subdirectories.

To simplify the process, you can copy some or all of the content from the openidm/ui/selfservice/default/templates to the openidm/ui/selfservice/extension/templates directory.
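For example, on UNIX/Linux:

$ cd /path/to/openidm/ui/selfservice
$ cp -r default/templates/. extension/templates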

You can use a similar process to modify what is shown in the Admin UI.

4.4.1. Customizing User Self-Service Screens

In the following procedure, you will customize the screen that users see during the User Registration process. You can use a similar process to customize what a user sees during the Password Reset and Forgotten Username processes.

For user Self-Service features, you can customize options in three files. Navigate to the extension/templates/user/process subdirectory, and examine the following files:

  • User Registration: registration/userDetails-initial.html

  • Password Reset: reset/userQuery-initial.html

  • Forgotten Username: username/userQuery-initial.html

The following procedure demonstrates the process for User Registration.

Customizing the User Registration Page
  1. When you configure user self service, as described in "Configuring User Self-Service", anonymous users who choose to register will see a screen similar to:

    The OpenIDM Self-Registration Screen
  2. The screen you see is from the following file: userDetails-initial.html, in the selfservice/extension/templates/user/process/registration subdirectory. Open that file in a text editor.

  3. Assume that you want new users to enter an employee ID number when they register.

    Create a new form-group stanza for that number. For this procedure, the stanza appears after the stanza for Last Name (or surname) sn:

    <div class="form-group">
        <label class="sr-only" for="input-employeeNum">{{t 'common.user.employeeNum'}}</label>
        <input type="text" placeholder="{{t 'common.user.employeeNum'}}" id="input-employeeNum" name="user.employeeNum" class="form-control input-lg" />
    </div>
  4. Edit the relevant translation.json file. As this is the customized file for the Self-Service UI, you will find it in the selfservice/extension/locales/en directory that you set up in "Customizing the UI".

    You need to find the right place to enter text associated with the employeeNum property. Look for the other properties in the userDetails-initial.html file.

    The following excerpt illustrates the employeeNum property as added to the translation.json file.

    ...
    "givenName" : "First Name",
    "sn" : "Last Name",
    "employeeNum" : "Employee ID Number",
    ...
  5. The next time an anonymous user tries to create an account, that user should see a screen similar to:

    A Customized OpenIDM Self-Registration Screen

In the following procedure, you will customize what users can modify when they navigate to their User Profile page:

Adding a Custom Tab to the User Profile Page

If you want to allow users to modify additional data on their profiles, this procedure is for you.

  1. Log in to the Self-Service UI. Click the Profile tab. You should see at least the following tabs: Basic Info and Password. In this procedure, you will add a Mobile Phone tab.

  2. OpenIDM generates the user profile page from the following file: UserProfileTemplate.html. Assuming you set up custom extension subdirectories, as described in "Customizing a UI Template", you should find a copy of this file in the following directory: selfservice/extension/templates/user.

  3. Examine the first few lines of that file. Note how the tablist includes the tabs in the Self-Service UI user profile. The following excerpt includes a third tab, with the userTelephoneNumberTab property:

    <div class="container">
       <div class="page-header">
          <h1>{{t "common.user.userProfile"}}</h1>
       </div>
       <div class="tab-menu">
          <ul class="nav nav-tabs" role="tablist">
             <li class="active"><a href="#userDetailsTab" role="tab" data-toggle="tab">{{t "common.user.basicInfo"}}</a></li>
             <li><a href="#userPasswordTab" role="tab" data-toggle="tab">{{t "common.user.password"}}</a></li>
             <li><a href="#userTelephoneNumberTab" role="tab" data-toggle="tab">{{t "common.user.telephoneNumber"}}</a></li>
          </ul>
       </div>
    ... 
  4. Next, you should provide information for the tab. Based on the comments in the file, and the entries in the Password tab, the following code sets up a Mobile Phone number entry:

    <div role="tabpanel" class="tab-pane panel panel-default fr-panel-tab" id="userTelephoneNumberTab">
       <form class="form-horizontal" id="telephoneNumber">
          <div class="panel-body">
             <div class="form-group">
                <label class="col-sm-3 control-label" for="input-telephoneNumber">{{t "common.user.telephoneNumber"}}</label>
                <div class="col-sm-6">
                   <input class="form-control" type="tel" id="input-telephoneNumber" name="telephoneNumber" value="" />
                </div>
             </div>
          </div>
          <div class="panel-footer clearfix">
             {{> form/_basicSaveReset}}
          </div>
       </form>
    </div>
        ...

    Note

    For illustration, this procedure uses the HTML tags found in the UserProfileTemplate.html file. You can use any standard HTML content within tab-pane tags, as long as they include a standard form tag and standard input fields. OpenIDM picks up this information when the tab is saved, and uses it to PATCH user content.

  5. Review the managed.json file. Make sure the telephoneNumber property is viewable and userEditable, as shown in the following excerpt:

    "telephoneNumber" : {
         "type" : "string",
         "title" : "Mobile Phone",
         "viewable" : true,
         "userEditable" : true,
         "pattern" : "^\\+?([0-9\\- \\(\\)])*$"
    }, 
  6. Review the result. Log in to the Self-Service UI, and click Profile. Note the entry for the Mobile Phone tab.

    A Custom OpenIDM User Profile Screen

4.4.2. Modifying Valid Query Fields

For Password Reset and Forgotten Username functionality, you may choose to modify Valid Query Fields, such as those described in "Configuring User Self-Service".

For example, if you click Configure > Password Reset > User Query Form, you can make changes to Valid Query Fields.

Valid Query Fields in a User Lookup Form

If you add, delete, or modify any Valid Query Fields, you will have to change the corresponding userQuery-initial.html file.

Assuming you set up custom extension subdirectories, as described in "Customizing a UI Template", you can find this file in the following directory: selfservice/extension/templates/user/process.

If you change any Valid Query Fields, you should make corresponding changes to the associated template files:

  • For Forgotten Username functionality, you would modify the username/userQuery-initial.html file.

  • For Password Reset functionality, you would modify the reset/userQuery-initial.html file.

For a model of how you can change the userQuery-initial.html file, see "Customizing the User Registration Page".

4.5. Managing Accounts

Only administrative users (with the role openidm-admin) can add, modify, and delete accounts from the Admin UI. Regular users can modify certain aspects of their own accounts from the Self-Service UI.

4.5.1. Account Configuration

In the Admin UI, you can manage most details associated with an account, as shown in the following screenshot.

Account, UI Configuration
Account Configuration Options in the UI

You can configure different functionality for an account under each tab:

Details

The Details tab includes basic identifying data for each user, with two special entries:

Status

By default, accounts are shown as active. To suspend an account, such as for a user who has taken a leave of absence, set that user's status to inactive.

Manager

You can assign a manager from the existing list of managed users.

Password

As an administrator, you can create new passwords for users in the managed user repository.

Provisioning Roles

Used to specify how objects are provisioned to an external system. For more information, see "Working With Managed Roles".

Authorization Roles

Used to specify the authorization rights of a managed user within OpenIDM. For more information, see "Working With Managed Roles".

Direct Reports

Users who are listed as managers of others have entries under the Direct Reports tab, as shown in the following illustration:

You can display direct reports in a chart
Linked Systems

Used to display account information reconciled from external systems.

4.5.2. Procedures for Managing Accounts

With the following procedures, you can add, update, and deactivate accounts for managed objects such as users.

The managed object does not have to be a user. It can be a role, a group, or even a physical item such as an IoT device. The basic process for adding, modifying, deactivating, and deleting other objects is the same as it is with accounts. However, the details may vary; for example, many IoT devices do not have telephone numbers.

To Add a User Account
  1. Log in to the Admin UI at https://localhost:8443/admin.

  2. Click Manage > User.

  3. Click New User.

  4. Complete the fields on the New User page.

    Most of these fields are self-explanatory. Be aware that the user interface is subject to policy validation, as described in "Using Policies to Validate Data". So, for example, the email address must be a valid email address, and the password must comply with the password validation settings that appear if you enter an invalid password.

In a similar way, you can create accounts for other managed objects.

You can review new managed object settings in the managed.json file of your project-dir/conf directory.
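You can also create a managed user over the REST interface. The following is a minimal sketch, assuming the default administrative credentials and a self-signed certificate (hence --insecure):

$ curl \
 --insecure \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "userName" : "bjensen",
   "givenName" : "Barbara",
   "sn" : "Jensen",
   "mail" : "bjensen@example.com",
   "password" : "Passw0rd"
 }' \
 "https://localhost:8443/openidm/managed/user?_action=create"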

In the following procedures, you learn how to update, deactivate, and delete user accounts, as well as how to view that account in different user resources. You can follow essentially the same procedures for other managed objects such as IoT devices.

To Update a User Account
  1. Log in to the Admin UI at https://localhost:8443/admin as an administrative user.

  2. Click Manage > User.

  3. Click the Username of the user that you want to update.

  4. On the profile page for the user, modify the fields you want to change and click Update.

    The user account is updated in the OpenIDM repository.

To Delete a User Account
  1. Log in to the Admin UI at https://localhost:8443/admin as an administrative user.

  2. Click Manage > User.

  3. Select the checkbox next to the desired Username.

  4. Click the Delete Selected button.

  5. Click OK to confirm the deletion.

    The user is deleted from the internal repository.

To View an Account in External Resources

The Admin UI displays the details of the account in the OpenIDM repository (managed/user). When a mapping has been configured between the repository and one or more external resources, you can view details of that account in any external system to which it is linked. As this view is read-only, you cannot update a user record in a linked system from within the Admin UI.

By default, implicit synchronization is enabled for mappings from the managed/user repository to any external resource. This means that when you update a managed object, any mappings defined in the sync.json file that have the managed object as the source are automatically executed to update the target system. You can see these changes in the Linked Systems section of a user's profile.
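For example, in "Sample 2b - LDAP Two Way", the mapping from managed users to LDAP accounts resembles the following excerpt; because its source is managed/user, an update to a managed user is pushed to the LDAP server automatically:

{
    "mappings" : [
        {
            "name" : "managedUser_systemLdapAccounts",
            "source" : "managed/user",
            "target" : "system/ldap/account",
            ...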

To view a user's linked accounts:

  1. Log in to the Admin UI at https://localhost:8443/admin.

  2. Click Manage > User, select a Username, and click the Linked Systems tab.

  3. The Linked Systems panel indicates the external mapped resource or resources.

  4. Select the resource in which you want to view the account, from the Linked Resource list.

    The user record in the linked resource is displayed.

4.6. Configuring Account Relationships

This section will help you set up relationships between human users and devices, such as IoT devices.

You'll set this up with the help of the Admin UI schema editor, which allows you to create and customize managed objects such as Users and Devices as well as relationships between managed objects. You can also create these options in the managed.json file for your project.

When complete, you will have users who can own multiple unique devices. If you try to assign the same device to more than one owner, OpenIDM will stop you with an error message.

This section assumes that you've started OpenIDM with "Sample 2b - LDAP Two Way" in the Samples Guide.

After you've started OpenIDM with "Sample 2b", work through the following procedures.

Configuring Schema for a Device

This procedure illustrates how you might set up a Device managed object, with schema that configures relationships to users.

After you configure the schema for the Device managed object, you can collect information such as model, manufacturer, and serial number for each device. In the next procedure, you'll set up an owner schema property that includes a relationship to the User managed object.

  1. Click Configure > Managed Objects > New Managed Object. Give that object an appropriate IoT name. For this procedure, specify Device. You should also select a managed object icon. Click Save.

  2. You should now see four tabs: Details, Schema, Scripts, and Properties. Click the Schema tab.

    OpenIDM Admin UI - New Managed Object
  3. The items that you can add to the new managed object depend on the associated properties.

    The Schema tab includes the Readable Title of the device; in this case, set it to Device.

  4. You can add schema properties as needed in the UI. Click the Property button. Include the properties shown in the illustration: model, serialNumber, manufacturer, description, and category.

  5. Initially, the new property is named Property 1. As soon as you enter a property name such as model, OpenIDM changes that property name accordingly.

  6. To support UI-based searches of devices, make sure to set the Searchable option to true for all configured schema properties, unless a property includes extensive text. In this case, you should set Searchable to false for the description property.

    The Searchable option is used in the data grid for the given object. When you click Manage > Device (or another object such as User), OpenIDM displays searchable properties for that object.

  7. After you save the properties for the new managed object type, OpenIDM saves those entries in the managed.json file in the project-dir/conf directory.

  8. Now click Manage > Device > New Device. Add a device as shown in the following illustration.

    OpenIDM Devices - All Properties
  9. You can continue adding new devices to the managed object, or reconcile that managed object with another data store. The other procedures in this section assume that you have set up the devices as shown in the next illustration.

  10. When complete, you can review the list of devices. Based on this procedure, click Manage > Device.

    OpenIDM list of generic IoT Devices
  11. Select one of the listed devices. You'll note that the label for the device in the Admin UI matches the name of the first property of the device.

    One OpenIDM IoT Device

    You can change the order of schema properties for the Device managed object by clicking Configure > Managed Objects > Device > Schema, and selecting the property that you want to move up or down the list.

    Alternatively, you can make the same changes to this (or any managed object schema) in the managed.json file for your project.

Configure a Relationship from the Device Managed Object

In this procedure, you will add a property to the schema of the Device managed object.

  1. In the Admin UI, click Configure > Managed Objects > Device > Schema.

  2. Under the Schema tab, add a new property. For this procedure, we call it owner. Unlike other schema properties, set the Searchable property to false.

  3. Scroll down to Validation Policies; click the Type box and select Relationship. This opens additional relationship options.

  4. Set up a Reverse Property Name of IoT_Devices. You'll use that reverse property name in the next procedure, "Configure a Relationship From the User Managed Object".

    Device Validation for a Relationship

    Be sure to set the Reverse Relationship and Validate options to true, which ensures that each device is associated with no more than one user.

  5. Scroll down and add a Resource Collection. Set up a link to the managed/user object, with a label that matches the User managed object.

  6. Enable queries of the User managed object by setting Query Filter to true. The Query Filter value for this Device object allows you to identify the user who "owns" each device. For more information, see "Common Filter Expressions".

    Resource Collection in a Relationship
  7. Set up fields from managed/user properties. The properties shown in the illustration are just examples, based on "Sample 2b - LDAP Two Way" in the Samples Guide.

  8. Add one or more Sort Keys from the configured fields.

  9. Save your changes.
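When you save these settings, OpenIDM writes them to the managed.json file for your project. The following sketch suggests what the resulting owner property might look like; the entry that OpenIDM generates may differ in detail:

"owner" : {
    "type" : "relationship",
    "reverseRelationship" : true,
    "reversePropertyName" : "IoT_Devices",
    "validate" : true,
    "searchable" : false,
    "resourceCollection" : [
        {
            "path" : "managed/user",
            "label" : "User",
            "query" : {
                "queryFilter" : "true",
                "fields" : [ "userName", "givenName", "sn" ],
                "sortKeys" : [ "userName" ]
            }
        }
    ]
}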

Configure a Relationship From the User Managed Object

In this procedure, you will configure an existing User Managed Object with schema to match what was created in "Configure a Relationship from the Device Managed Object".

With the settings you create, OpenIDM supports a relationship between a single user and multiple devices. In addition, this procedure prevents multiple users from "owning" any single device.

  1. In the Admin UI, click Configure > Managed Objects > User > Schema.

  2. Under the Schema tab, add a new property, called IoT_Devices.

  3. Make sure the Searchable option is set to false, to minimize confusion in the relationship. Otherwise, you'll see every device owned by every user when you click Manage > User.

  4. For validation policies, you'll set up an array with a relationship. Note how the reverse property name matches the property that you configured in "Configure a Relationship from the Device Managed Object".

    User Relationship Array with Devices

    Be sure to set the Reverse Relationship and Validate options to true, which ensures that no more than one user gets associated with a specific device.

  5. Scroll down to Resource Collection, and add references to the managed/device resource, as shown in the next illustration.

  6. Enter true in the Query Filter text box. In this relationship, OpenIDM will read all information from the managed/device managed object, with information from the device fields and sort keys that you configured in "Configure a Relationship from the Device Managed Object".

    User Relationship Resource Collection
Demonstrating an IoT Relationship

This procedure assumes that you've already taken the steps described in the previous procedures in this section, specifically, "Configuring Schema for a Device", "Configure a Relationship from the Device Managed Object", and "Configure a Relationship From the User Managed Object".

This procedure also assumes that you started OpenIDM with "Sample 2b - LDAP Two Way" in the Samples Guide, and have reconciled to set up users.

  1. From the Admin UI, click Manage > User. Select a user, and in this case, click the IoT Devices tab. See how you can select any of the devices that you may have added in "Configuring Schema for a Device".

    Assigning a Device to a User
  2. Alternatively, try to assign a device to an owner. To do so, click Manage > Device, and select a device. You'll see either an Add Owner or Update Owner button, which allows you to assign a device to a specific user.

    If you try to assign a device already assigned to a different user, you'll get the following message: Conflict with Existing Relationship.

4.7. Managing Workflows From the Self-Service UI

The Self-Service UI is integrated with the embedded Activiti workflow engine, enabling users to interact with workflows. Available workflows are displayed under the Processes item on the Dashboard. In order for a workflow to be displayed here, the workflow definition file must be present in the openidm/workflow directory.

A sample workflow integration with the Self-Service UI is provided in openidm/samples/workflow, and documented in "Sample Workflow - Provisioning User Accounts" in the Samples Guide. Follow the steps in that sample for an understanding of how the workflow integration works.

General access to workflow-related endpoints is based on the access rules defined in the script/access.js file. The configuration defined in the conf/process-access.json file determines who can invoke workflows. By default, all users with the role openidm-authorized or openidm-admin can invoke any available workflow. The default process-access.json file is as follows:

{
    "workflowAccess" : [
        {
            "propertiesCheck" : {
                "property" : "_id",
                "matches" : ".*",
                "requiresRole" : "openidm-authorized"
            }
        },
        {
            "propertiesCheck" : {
                "property" : "_id",
                "matches" : ".*",
                "requiresRole" : "openidm-admin"
            }
        }
    ]
}  
"property"

Specifies the property used to identify the process definition. By default, process definitions are identified by their _id.

"matches"

A regular expression match is performed on the process definitions, according to the specified property. The default ("matches" : ".*") implies that all process definition IDs match.

"requiresRole"

Specifies the OpenIDM role that is required for users to have access to the matched process definition IDs. In the default file, users with the role openidm-authorized or openidm-admin have access.

To extend the process access definition file, identify the processes to which users should have access, and specify the qualifying user roles. For example, to restrict access to a process definition whose ID is 567 to users with the role ldap, you would add the following to the process-access.json file:

{
    "propertiesCheck" : {
        "property" : "_id",
        "matches" : "567",
        "requiresRole" : "ldap"
    }
} 

4.8. Customizing the UI

OpenIDM allows you to customize both the Admin and Self-Service UIs. When you install OpenIDM, you can find the default UI configuration files in two directories:

  • Admin UI: openidm/ui/admin/default

  • Self-Service UI: openidm/ui/selfservice/default

OpenIDM looks for custom themes and templates in the following directories:

  • Admin UI: openidm/ui/admin/extension

  • Self-Service UI: openidm/ui/selfservice/extension

Before starting the customization process, you should create these directories. If you are running UNIX/Linux, the following commands create a copy of the appropriate subdirectories:

$ cd /path/to/openidm/ui
$ cp -r selfservice/default/. selfservice/extension
$ cp -r admin/default/. admin/extension

OpenIDM also includes templates that may help, in two other directories:

  • Admin UI: openidm/ui/admin/default/templates

  • Self-Service UI: openidm/ui/selfservice/default/templates

4.9. Changing the UI Theme

You can customize the theme of the user interface. OpenIDM uses the Bootstrap framework. You can download and customize the OpenIDM UI with the Bootstrap themes of your choice. OpenIDM is also configured with the Font Awesome CSS toolkit.

Note

If you use Brand Icons from the Font Awesome CSS Toolkit, be aware of the following statement:

All brand icons are trademarks of their respective owners. The use of these trademarks does not indicate endorsement of the trademark holder by ForgeRock, nor vice versa.

4.9.1. OpenIDM UI Themes and Bootstrap

You can configure a few features of the OpenIDM UI in the ui-themeconfig.json file in your project's conf/ subdirectory. However, to change most theme-related features of the UI, you must copy target files to the appropriate extension subdirectory, and then modify them as discussed in "Customizing the UI".

The default configuration files for the Admin and Self-Service UIs are identical for theme configuration.

By default the UI reads the stylesheets and images from the respective openidm/ui/function/default directories, where function is either admin or selfservice. Do not modify the files in these directories. Your changes may be overwritten the next time you update or even patch your system.

To customize your UI, first set up matching subdirectories for your system (openidm/ui/admin/extension and openidm/ui/selfservice/extension). For example, assume you want to customize colors, logos, and so on.

You can set up a new theme, primarily through custom Bootstrap CSS files, in appropriate extension/ subdirectories, such as openidm/ui/selfservice/extension/libs and openidm/ui/selfservice/extension/css.

You may also need to update the "stylesheets" listing in the ui-themeconfig.json file for your project, in the project-dir/conf directory.

...
"stylesheets" : ["css/bootstrap-3.3.5-custom.css", "css/structure.css", "css/theme.css"],
...

You can find these stylesheets in the /css subdirectory.

  • bootstrap-3.3.5-custom.css: Includes custom settings that you can get from various Bootstrap configuration sites, such as the Bootstrap Customize and Download website.

    You may find the ForgeRock version of this in the config.json file in the ui/selfservice/default/css/common/structure/ directory.

  • structure.css: Supports configuration of structural elements of the UI.

  • theme.css: Includes customizable options for UI themes such as colors, buttons, and navigation bars.

If you want to set up custom versions of these files, copy them to the extension/css subdirectories.

4.9.3. Changing the Language of the UI

Currently, the UI is provided only in US English. You can translate the UI and specify that your own locale is used. The following example shows how to translate the UI into French:

  1. Assuming you set up custom extension subdirectories, as described in "Customizing the UI", you can copy the default (en) locale to a new (fr) subdirectory as follows:

    $ cd /path/to/openidm/ui/selfservice/extension/locales
    $ cp -R en fr

    The new locale (fr) now contains the default translation.json file:

    $ ls fr/
    translation.json
  2. Translate the values of the properties in the fr/translation.json file. Do not translate the property names. For example:

    ...
    "UserMessages" : {
       "changedPassword" : "Mot de passe a été modifié",
       "profileUpdateFailed" : "Problème lors de la mise à jour du profil",
       "profileUpdateSuccessful" : "Profil a été mis à jour",
       "userNameUpdated" : "Nom d'utilisateur a été modifié",
    .... 
  3. Change the UI configuration to use the new locale by setting the value of the lang property in the project-dir/conf/ui-configuration.json file, as follows:

    "lang" : "fr",
  4. Refresh your browser window, and OpenIDM applies your change.

You can also change the labels for accounts in the UI. To do so, navigate to the Schema Properties for the managed object to be changed.

To change the labels for user accounts, navigate to the Admin UI. Click Configure > Managed Objects > User, and scroll down to Schema.

Under Schema Properties, select a property and modify the Readable Title. For example, you can modify the Readable Title for userName to a label in another language, such as Nom d'utilisateur.

4.9.4. Creating a Project-Specific UI Theme

You can create specific UI themes for different projects and then point a particular UI instance to use a defined theme on startup. To create a complete custom theme, follow these steps:

  1. Shut down the OpenIDM instance, if it is running. In the OSGi console, type:

    -> shutdown
  2. Copy the entire default Self-Service UI theme to an accessible location. For example:

    $ cd /path/to/openidm/ui/selfservice
    $ cp -r default /path/to/openidm/new-project-theme
  3. If desired, repeat the process with the Admin UI; just remember to copy files to a different directory:

    $ cd /path/to/openidm/ui/admin
    $ cp -r default /path/to/openidm/admin-project-theme
  4. In the copied theme, modify the required elements, as described in the previous sections. Note that nothing is copied to the extension folder in this case; changes are made in the copied theme.

  5. In the conf/ui.context-selfservice.json file, modify the values for defaultDir and extensionDir to the directory with your new-project-theme:

    {
        "enabled" : true,
        "urlContextRoot" : "/",
        "defaultDir" : "&{launcher.install.location}/ui/selfservice/default",
        "extensionDir" : "&{launcher.install.location}/ui/selfservice/extension"
    }
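    For example, assuming you copied the theme to /path/to/openidm/new-project-theme, the modified file might read:

    {
        "enabled" : true,
        "urlContextRoot" : "/",
        "defaultDir" : "/path/to/openidm/new-project-theme",
        "extensionDir" : "/path/to/openidm/new-project-theme"
    }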
  6. If you want to repeat the process for the Admin UI, make parallel changes to the project-dir/conf/ui.context-admin.json file.

  7. Restart OpenIDM.

    $ cd /path/to/openidm
    $ ./startup.sh
  8. Relaunch the UI in your browser. The UI is displayed with the new custom theme.

4.10. Using an External System for Password Reset

By default, the Password Reset mechanism is handled internally, in OpenIDM. You can reroute Password Reset in the event that a user has forgotten their password, by specifying an external URL to which Password Reset requests are sent. Note that this URL applies to the Password Reset link on the login page only, not to the security data change facility that is available after a user has logged in.

To set an external URL to handle Password Reset, set the passwordResetLink parameter in the UI configuration file (conf/ui-configuration.json). The following example sets the passwordResetLink to https://accounts.example.com/reset-password:

"passwordResetLink" : "https://accounts.example.com/reset-password"

The passwordResetLink parameter takes either an empty string as a value (which indicates that no external link is used) or a full URL to the external system that handles Password Reset requests.

Note

External Password Reset and security questions for internal Password Reset are mutually exclusive. Therefore, if you set a value for the passwordResetLink parameter, users will not be prompted with any security questions, regardless of the setting of the securityQuestions parameter.

4.11. Providing a Logout URL to External Applications

By default, a UI session is invalidated when a user clicks on the Log out link. In certain situations your external applications might require a distinct logout URL to which users can be routed, to terminate their UI session.

The logout URL is #logout, appended to the UI URL, for example, https://localhost:8443/#logout/.

The logout URL effectively performs the same action as clicking on the Log out link of the UI.
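For example, an external application could route users to a link such as the following; adjust the host and port to match your deployment:

<a href="https://localhost:8443/#logout/">Log out of OpenIDM</a>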

4.12. Changing the UI Path

By default, the self service UI is registered at the root context and is accessible at the URL https://localhost:8443. To specify a different URL, edit the project-dir/conf/ui.context-selfservice.json file, setting the urlContextRoot property to the new URL.

For example, to change the URL of the self service UI to https://localhost:8443/exampleui, edit the file as follows:

"urlContextRoot" : "/exampleui",

Alternatively, to change the Self-Service UI URL in the Admin UI, follow these steps:

  1. Log in to the Admin UI.

  2. Select Configure > System Preferences, and select the Self-Service UI tab.

  3. Specify the new context route in the Relative URL field.

4.13. Disabling the UI

The UI is packaged as a separate bundle that can be disabled in the configuration before server startup. To disable the registration of the UI servlet, edit the project-dir/conf/ui.context-selfservice.json file, setting the enabled property to false:

"enabled" : false,

Chapter 5. Managing the OpenIDM Repository

OpenIDM stores managed objects, internal users, and configuration objects in a repository. By default, OpenIDM uses OrientDB for its internal repository. In production, you must replace OrientDB with a supported JDBC repository, as described in "Installing a Repository For Production" in the Installation Guide.

This chapter describes the JDBC repository configuration, the use of mappings in the repository, and how to configure a connection to the repository over SSL. It also describes how to interact with the OpenIDM repository over the REST interface.

5.1. Understanding the JDBC Repository Configuration File

OpenIDM provides configuration files for each supported JDBC repository, as well as example configurations for other repositories. These configuration files are located in the /path/to/openidm/db/database/conf directory. The configuration is defined in two files:

  • datasource.jdbc-default.json, which specifies the connection details to the repository.

  • repo.jdbc.json, which specifies the mapping between OpenIDM resources and the tables in the repository, and includes a number of predefined queries.

Copy the configuration files for your specific database type to your project's conf/ directory.
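For example, for a MySQL repository on UNIX/Linux:

$ cd /path/to/openidm
$ cp db/mysql/conf/datasource.jdbc-default.json project-dir/conf/
$ cp db/mysql/conf/repo.jdbc.json project-dir/conf/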

5.1.1. Understanding the Connection Configuration File

The default database connection configuration file for a MySQL database follows:

{
    "driverClass" : "com.mysql.jdbc.Driver",
    "jdbcUrl" : "jdbc:mysql://localhost:3306/openidm?allowMultiQueries=true&characterEncoding=utf8",
    "databaseName" : "openidm",
    "username" : "openidm",
    "password" : "openidm",
    "connectionTimeout" : 30000,
    "connectionPool" : {
        "type" : "bonecp"
    }
} 

The configuration file includes the following properties:

driverClass, jndiName, or jtaName

Depending on the mechanism you use to acquire the data source, set one of these properties:

  • "driverClass" : string

    To use the JDBC driver manager to acquire a data source, set this property, as well as "jdbcUrl", "username", and "password". The driver class must be the fully qualified class name of the database driver to use for your database.

    Using the JDBC driver manager to acquire a data source is the most likely option, and the only one supported "out of the box". The remaining options in the sample repository configuration file assume that you are using a JDBC driver manager.

    Example: "driverClass" : "com.mysql.jdbc.Driver"

  • "jndiName" : string

    If you use JNDI to acquire the data source, set this property to the JNDI name of the data source.

    This option might be relevant if you want to run OpenIDM inside your own web container.

    Example: "jndiName" : "jdbc/my-datasource"

  • "jtaName" : string

    If you use an OSGi service to acquire the data source, set this property to a stringified version of the OsgiName.

    This option would only be relevant in a highly customized deployment, for example, if you wanted to develop your own connection pool.

    Example: "jtaName" : "osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/openidm)"

"jdbcUrl"

The connection URL to the JDBC database. The URL should include all of the parameters required by your database. For example, to specify the encoding in MySQL use 'characterEncoding=utf8'.

Example: "jdbcUrl" : "jdbc:mysql://localhost:3306/openidm?characterEncoding=utf8"

"databaseName"

The name of the database to which OpenIDM connects. By default, this is openidm.

"username"

The username with which to access the JDBC database.

"password"

The password with which to access the JDBC database. OpenIDM automatically encrypts clear string passwords. To replace an existing encrypted value, replace the whole crypto-object value, including the brackets, with a string of the new password.

"connectionTimeout"

The period of time, in milliseconds, after which OpenIDM should consider an attempted connection to the database to have failed. The default period is 30000 milliseconds (30 seconds).

"connectionPool"

The library that manages database connection pooling. Currently OpenIDM supports bonecp only.

5.1.2. Understanding the Database Table Configuration

An excerpt from a database table configuration file follows:

{
    "dbType" : "MYSQL",
    "useDataSource" : "default",
    "maxBatchSize" : 100,
    "maxTxRetry" : 5,
    "queries" : {...},
    "commands" : {...},
    "resourceMapping" : {...}
}

The configuration file includes the following properties:

"dbType" : string, optional

The type of database. The database type might affect the queries used and other optimizations. Supported database types include MYSQL, SQLSERVER (MS SQL), ORACLE, and DB2.

"useDataSource" : string, optional

This option refers to the connection details that are defined in the configuration file described previously. The default configuration file is named datasource.jdbc-default.json, so the default value of "useDataSource" is "default". If you want to specify a different connection configuration file, instead of overwriting the details in the default file, name your connection configuration file datasource.jdbc-name.json and set the value of "useDataSource" to the name you have used.

"maxBatchSize"

The maximum number of SQL statements that will be batched together. This parameter allows you to optimize the time taken to execute multiple queries. Certain databases do not support batching, or limit how many statements can be batched. A value of 1 disables batching.

"maxTxRetry"

The maximum number of times that a specific transaction should be attempted before that transaction is aborted.

"queries"

Enables you to create predefined queries that can be referenced from the configuration. For more information about predefined queries, see "Parameterized Queries". The queries are divided between those for "genericTables" and those for "explicitTables".

The following sample extract from the default MySQL configuration file shows two credential queries, one for a generic mapping, and one for an explicit mapping. Note that the lines have been broken here for legibility only. In a real configuration file, the query would be all on one line.

"queries" : {
    "genericTables" : {
        "credential-query" : "SELECT fullobject FROM ${_dbSchema}.${_mainTable}
          obj INNER JOIN ${_dbSchema}.${_propTable} prop ON
          obj.id = prop.${_mainTable}_id INNER JOIN ${_dbSchema}.objecttypes
          objtype ON objtype.id = obj.objecttypes_id WHERE prop.propkey='/userName'
          AND prop.propvalue = ${username} AND objtype.objecttype = ${_resource}",
        ...
    "explicitTables" : {
        "credential-query" : "SELECT * FROM ${_dbSchema}.${_table}
          WHERE objectid = ${username} and accountStatus = 'active'",
        ...
    }
}    

Options supported for query parameters include the following:

  • A default string parameter, for example:

    openidm.query("managed/user", { "_queryId": "for-userName", "uid": "jdoe" });

    For more information about the query function, see "openidm.query(resourceName, params, fields)".

  • A list parameter (${list:propName}).

    Use this parameter to specify a set of indeterminate size as part of your query. For example:

    WHERE targetObjectId IN (${list:filteredIds})
  • An integer parameter (${int:propName}).

    Use this parameter if you need to query for non-string values in the database. This is particularly useful with explicit tables.
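    For example, the following fragment compares an integer column in an explicit table (the column and parameter names are hypothetical):

    WHERE loginCount >= ${int:minLogins}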

"commands"

Specific commands configured to manage the database over the REST interface. Currently, only two default commands are included in the configuration:

  • purge-by-recon-expired

  • purge-by-recon-number-of

Both of these commands assist with removing stale reconciliation audit information from the repository, and preventing the repository from growing too large. For more information about repository commands, see "Running Queries and Commands on the Repository".

"resourceMapping"

Defines the mapping between OpenIDM resource URIs (for example, managed/user) and JDBC tables. The structure of the resource mapping is as follows:

"resourceMapping" : {
    "default" : {
        "mainTable" : "genericobjects",
        "propertiesTable" : "genericobjectproperties",
        "searchableDefault" : true
    },
    "genericMapping" : {...},
    "explicitMapping" : {...}
}    

The default mapping object represents a default generic table in which any resource that does not have a more specific mapping is stored.

The generic and explicit mapping objects are described in the following section.

5.2. Using Explicit or Generic Object Mapping With a JDBC Repository

For JDBC repositories, there are two ways of mapping OpenIDM objects to the database tables:

  • Generic mapping, which allows arbitrary objects to be stored without special configuration or administration.

  • Explicit mapping, which allows for optimized storage and queries by explicitly mapping objects to tables and columns in the database.

These two mapping strategies are discussed in the following sections.

5.2.1. Using Generic Mappings

Generic mapping speeds up development, and can make system maintenance more flexible by providing a more stable database structure. However, generic mapping can have a performance impact and does not take full advantage of the database facilities (such as validation within the database and flexible indexing). In addition, queries can be more difficult to set up.

In a generic table, the entire object content is stored in a single large-character field named fullobject in the mainTable for the object. To search on specific fields, you refer to them in the corresponding properties table for that object. The disadvantage of generic objects is that, because every property you might like to filter by is stored in a separate table, you must join to that table each time you need to filter by anything.

The following diagram shows a pared down database structure for the default generic table, and indicates the relationship between the main table and the corresponding properties table for each object.

Generic Tables Entity Relationship Diagram
Generic tables entity relationship diagram

These separate tables can make the query syntax particularly complex. For example, a simple query to return user entries based on a user name would need to be implemented as follows:

SELECT fullobject FROM ${_dbSchema}.${_mainTable} obj INNER JOIN ${_dbSchema}.${_propTable} prop
    ON obj.id = prop.${_mainTable}_id INNER JOIN ${_dbSchema}.objecttypes objtype
    ON objtype.id = obj.objecttypes_id WHERE prop.propkey='/userName' AND prop.propvalue = ${uid}
    AND objtype.objecttype = ${_resource}

The query can be broken down as follows:

  1. Select the full object from the main table:

    SELECT fullobject FROM ${_dbSchema}.${_mainTable} obj
  2. Join to the properties table and locate the object with the corresponding ID:

    INNER JOIN ${_dbSchema}.${_propTable} prop  ON obj.id = prop.${_mainTable}_id
  3. Join to the object types table to restrict returned entries to objects of a specific type. For example, you might want to restrict returned entries to managed/user objects, or managed/role objects:

    INNER JOIN ${_dbSchema}.objecttypes objtype ON objtype.id = obj.objecttypes_id
  4. Filter records by the userName property, where the userName is equal to the specified uid and the object type is the specified type (in this case, managed/user objects):

    WHERE prop.propkey='/userName'
    AND prop.propvalue = ${uid}
    AND objtype.objecttype = ${_resource}

    The value of the uid field is provided as part of the query call, for example:

    openidm.query("managed/user", { "_queryId": "for-userName", "uid": "jdoe" });

Tables for user definable objects use a generic mapping by default.

The following sample generic mapping object illustrates how managed/ objects are stored in a generic table:

"genericMapping" : {
      "managed/*" : {
          "mainTable" : "managedobjects",
          "propertiesTable" : "managedobjectproperties",
          "searchableDefault" : true,
          "properties" : {
              "/picture" : {
                  "searchable" : false
              }
          }
      }
  },
mainTable (string, mandatory)

Indicates the main table in which data is stored for this resource.

The complete object is stored in the fullobject column of this table. The table includes an entityType foreign key that is used to distinguish the different objects stored within the table. In addition, the revision of each stored object is tracked in the rev column of the table, enabling multiversion concurrency control (MVCC). For more information, see "Manipulating Managed Objects Programmatically".

propertiesTable (string, mandatory)

Indicates the properties table, used for searches.

The contents of the properties table are a defined subset of the properties, copied from the character large object (CLOB) that is stored in the fullobject column of the main table. The properties are stored in a separate, one-to-many style table. The set of properties stored here is determined by the properties that are defined as searchable.

The stored set of searchable properties makes these values available as discrete rows that can be accessed with SQL queries, specifically, with WHERE clauses. It is not otherwise possible to query specific properties of the full object.

The properties table includes the following columns:

  • ${_mainTable}_id corresponds to the id of the full object in the main table, for example, managedobjects_id, or genericobjects_id.

  • propkey is the name of the searchable property, stored in JSON pointer format (for example /mail).

  • proptype is the data type of the property, for example java.lang.String. The property type is obtained from the Class associated with the value.

  • propvalue is the value of the property, extracted from the full object that is stored in the main table.

    Regardless of the property data type, this value is stored as a string, so queries against it should treat it as such.

searchableDefault (boolean, optional)

Specifies whether all properties of the resource should be searchable by default. Properties that are searchable are stored and indexed. You can override the default for individual properties in the properties element of the mapping. The preceding example indicates that all properties are searchable, with the exception of the picture property.

For large, complex objects, making all properties searchable implies a substantial performance impact. In such a case, a separate insert statement is made in the properties table for each element in the object, every time the object is updated. Also, because these are indexed fields, the recreation of these properties incurs a cost in the maintenance of the index. You should therefore set searchable to true only for those properties that must be used as part of a WHERE clause in a query.

properties

Lists any individual properties for which the searchable default should be overridden.

Note that if an object was originally created with a subset of searchable properties, changing this subset (by adding a new searchable property in the configuration, for example) will not cause the existing values to be updated in the properties table for that object. To add the new property to the properties table for that object, you must update or recreate the object.
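
If you do need to backfill the properties table after such a configuration change, the following minimal sketch (assuming managed/user objects and the standard openidm script functions) re-saves each object so that its searchable property rows are regenerated:

// Sketch: re-save every managed user so that newly searchable properties
// are written to the properties table. query-all-ids is a default query.
var users = openidm.query("managed/user", { "_queryId" : "query-all-ids" });
users.result.forEach(function (entry) {
    var user = openidm.read("managed/user/" + entry._id);
    // An update, even without content changes, rewrites the property rows.
    openidm.update("managed/user/" + entry._id, user._rev, user);
});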

5.2.2. Improving Search Performance for Generic Mappings

All properties in a generic mapping are searchable by default. In other words, the value of the searchableDefault property is true unless you explicitly set it to false. Although there are no individual indexes in a generic mapping, you can improve search performance by setting only those properties that you need to search as searchable. Properties that are searchable are created within the corresponding properties table. The properties table exists only for searches or look-ups, and has a composite index, based on the resource, then the property name.

The sample JDBC repository configuration files (db/database/conf/repo.jdbc.json) restrict searches to specific properties by setting the searchableDefault to false for managed/user mappings. You must explicitly set searchable to true for each property that should be searched. The following sample extract from repo.jdbc.json indicates searches restricted to the userName property:

"genericMapping" : {
    "managed/user" : {
        "mainTable" : "manageduserobjects",
        "propertiesTable" : "manageduserobjectproperties",
        "searchableDefault" : false,
        "properties" : {
            "/userName" : {
            "searchable" : true
            }
        }
    }
}, 

With this configuration, OpenIDM creates entries in the properties table only for userName properties of managed user objects.

If the global searchableDefault is set to false, properties that do not have a searchable attribute explicitly set to true are not written in the properties table.
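
To illustrate the effect of this restriction, the following hedged sketch assumes the mapping above. Only filters on userName can be satisfied from the properties table; the mail filter is shown purely as a counter-example:

// Works: userName is searchable, so rows exist in the properties table.
var found = openidm.query("managed/user",
    { "_queryFilter" : "userName eq \"bjensen\"" });

// Does not work with this mapping: mail is not searchable, so there are
// no property rows to match against.
var notFound = openidm.query("managed/user",
    { "_queryFilter" : "mail eq \"bjensen@example.com\"" });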

5.2.3. Using Explicit Mappings

Explicit mapping is more difficult to set up and maintain, but can take complete advantage of the native database facilities.

An explicit table offers better performance and simpler queries. There is less work in the reading and writing of data, since the data is all in a single row of a single table. In addition, it is easier to create different types of indexes that apply only to specific fields in an explicit table. The disadvantage of explicit tables is the additional work required to create the table in the schema. Also, because rows in a table are inherently simpler, it is more difficult to deal with complex objects. Any non-simple key:value pair in an object associated with an explicit table is converted to a JSON string and stored in the cell in that format. This makes the value difficult to use from the perspective of a query attempting to search within it.

Note that it is possible to have a generic mapping configuration for most managed objects, and to have an explicit mapping that overrides the default generic mapping in certain cases. The sample configuration provided in /path/to/openidm/db/mysql/conf/repo.jdbc-mysql-explicit-managed-user.json has a generic mapping for managed objects, but an explicit mapping for managed user objects.

OpenIDM uses explicit mapping for internal system tables, such as the tables used for auditing.

Depending on the types of usage your system is supporting, you might find that an explicit mapping performs better than a generic mapping. Operations such as sorting and searching (such as those performed in the default UI) tend to be faster with explicitly-mapped objects, for example.

The following sample explicit mapping object illustrates how internal/user objects are stored in an explicit table:

"explicitMapping" : {
    "internal/user" : {
        "table" : "internaluser",
        "objectToColumn" : {
            "_id" : "objectid",
            "_rev" : "rev",
            "password" : "pwd",
            "roles" : "roles"
        }
    },
    ...
}   
<resource-uri> (string, mandatory)

Indicates the URI for the resources to which this mapping applies, for example, "internal/user".

table (string, mandatory)

The name of the database table in which the object (in this case internal users) is stored.

objectToColumn (string, mandatory)

The way in which specific managed object properties are mapped to columns in the table.

The mapping can be a simple one-to-one mapping, for example "userName": "userName", or a more complex JSON map or list. When a column is mapped to a JSON map or list, the syntax is as shown in the following examples:

"messageDetail" : { "column" : "messagedetail", "type" : "JSON_MAP" }

or

"roles": { "column" : "roles", "type" : "JSON_LIST" }

Caution

Support for data types in columns is restricted to String (VARCHAR in the case of MySQL). If you use a different data type, such as DATE or TIMESTAMP, your database must attempt to convert from String to the other data type. This conversion is not guaranteed to work.

If the conversion does work, the format might not be the same when it is read from the database as it was when it was saved. For example, your database might parse a date in the format 12/12/2012 and return the date in the format 2012-12-12 when the property is read.
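
A minimal sketch of the safe approach, assuming a hypothetical dateJoined property on managed users, is to keep such values as plain strings when you create objects from a script:

// Sketch: keep date-like values as strings so that an explicitly mapped
// VARCHAR column never depends on database-side type conversion.
// The dateJoined property is hypothetical.
var user = {
    "userName"   : "bjensen",
    "dateJoined" : "2012-12-12"   // ISO-style string, not a DATE column
};
openidm.create("managed/user", null, user);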

5.3. Configuring SSL with a JDBC Repository

To configure SSL with a JDBC repository, you need to import the CA certificate file for the server into the OpenIDM truststore. That certificate file could have a name like ca-cert.pem. If you have a different genuine or self-signed certificate file, substitute accordingly.

To import the CA certificate file into the OpenIDM truststore, use the keytool command native to the Java environment, typically located in the /path/to/jre-version/bin directory. On some UNIX-based systems, /usr/bin/keytool may link to that command.

Preparing OpenIDM for SSL with a JDBC Repository
  1. Import the ca-cert.pem certificate into the OpenIDM truststore file with the following command:

    $ keytool \
     -importcert \
     -trustcacerts \
     -file ca-cert.pem \
     -alias "DB cert" \
     -keystore /path/to/openidm/security/truststore

    You're prompted for a keystore password. Be sure to use the same password as is shown in the boot.properties file for your project. The default is:

    openidm.keystore.password=changeit

    After entering a keystore password, you're prompted with the following question. Assuming you've included an appropriate ca-cert.pem file, enter yes.

    Trust this certificate? [no]: 
  2. Open the repository connection configuration file, datasource.jdbc-default.json.

    Look for the jdbcUrl property. Append ?characterEncoding=utf8&useSSL=true to the end of that URL.

    The jdbcUrl that you configure depends on your JDBC repository. The following entries correspond to appropriate jdbcUrl properties for MySQL, MSSQL, PostgreSQL, and Oracle DB, respectively:

    "jdbcUrl" : "jdbc:mysql://localhost:3306/openidm?characterEncoding=utf8&useSSL=true"
    "jdbcUrl" : "jdbc:sqlserver://localhost:1433;instanceName=default;
         databaseName=openidm;applicationName=OpenIDM?characterEncoding=utf8&useSSL=true"
    "jdbcUrl" : "jdbc:postgresql://localhost:5432/openidm?characterEncoding=utf8&useSSL=true"
    "jdbcUrl" : "jdbc:oracle:thin:@//localhost:1521/openidm?characterEncoding=utf8&useSSL=true"
  3. Open your project's conf/config.properties file. Find the org.osgi.framework.bootdelegation property. Make sure that property includes a reference to the javax.net.ssl option. If you started with the default version of config.properties that line should now read as follows:

    org.osgi.framework.bootdelegation=sun.*,com.sun.*,apple.*,com.apple.*,javax.net.ssl
  4. Open your project's conf/system.properties file. Add the following line to that file. If appropriate, substitute the path to your own truststore:

    # Set the truststore
    javax.net.ssl.trustStore=&{launcher.install.location}/security/truststore

    Even if you are setting up this instance of OpenIDM as part of a cluster, you still need to configure this initial truststore. After this instance joins a cluster, the SSL keys in this particular truststore are replaced. For more information on clustering, see "Configuring OpenIDM for High Availability".

  5. If (and only if) you are using MySQL, add the client certificate and key to the OpenIDM keystore:

    1. Create the client certificate file, client.packet, with the following command:

      $ openssl \
       pkcs12 \
       -export \
       -inkey client-key.pem \
       -in client-cert.pem \
       -out client.packet

      In this case, the openssl command combines the client key, client-key.pem, with the client certificate, client-cert.pem, exporting the output to a client certificate file named client.packet, in PKCS12 format.

    2. When you're prompted to Enter Export Password:, make sure it matches the openidm.keystore.password setting in your project's boot.properties file.

      Warning

      If the export password you enter does not match the existing OpenIDM keystore password, OpenIDM will not provide the client certificate when negotiating the SSL connection.

    3. Add the client certificate to the OpenIDM keystore:

      $ keytool \
       -importkeystore \
       -srckeystore client.packet \
       -srcstoretype pkcs12 \
       -destkeystore /path/to/openidm/security/keystore.jceks \
       -storetype JCEKS

      This command should prompt you for the source and destination keystore password, which should also match the openidm.keystore.password setting in your project's boot.properties file.

      If you are successful, you will see the following message:

      Entry for alias 1 successfully imported.
      Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

5.4. Interacting With the Repository Over REST

The OpenIDM repository is accessible over the REST interface, at the openidm/repo endpoint.

In general, you must ensure that external calls to the openidm/repo endpoint are protected. Native queries and free-form command actions on this endpoint are disallowed by default, as the endpoint is vulnerable to injection attacks. For more information, see "Running Queries and Commands on the Repository".

5.4.1. Changing the Repository Password

In the case of an embedded OrientDB repository, the default username and password are admin and admin. You can change the default password by sending the following POST request on the repo endpoint:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/repo?_action=updateDbCredentials&user=admin&password=newPassword"

You must restart OpenIDM for the change to take effect.

5.4.2. Running Queries and Commands on the Repository

Free-form commands and native queries on the repository are disallowed by default and should remain so in production to reduce the risk of injection attacks.

Common filter expressions, called with the _queryFilter keyword, enable you to form arbitrary queries on the repository, using a number of supported filter operations. For more information on these filter operations, see "Constructing Queries". Parameterized or predefined queries and commands (using the _queryId and _commandId keywords) can be authorized on the repository for external calls if necessary. For more information, see "Parameterized Queries".

Running commands on the repository is supported primarily from scripts. Certain scripts that interact with the repository are provided by default, for example, the scripts that enable you to purge the repository of reconciliation audit records.

You can define your own commands, and specify them in the database table configuration file (either repo.orientdb.json or repo.jdbc.json). In the following simple example, a command is called to clear out UI notification entries from the repository, for specific users.

The command is defined in the repository configuration file, as follows:

"commands" : {
"delete-notifications-by-id" : "DELETE FROM ui_notification WHERE receiverId = ${username}"
...
}, 

The command can be called from a script, as follows:

openidm.action("repo/ui/notification", "command", {},
    { "commandId" : "delete-notifications-by-id", "userName" : "scarter"});

Exercise caution when allowing commands to be run on the repository over the REST interface, as there is an attached risk to the underlying data.

Chapter 6. Configuring OpenIDM

OpenIDM configuration is split between .properties files, container configuration files, and dynamic configuration objects. Most of OpenIDM's configuration files are stored in your project's conf/ directory, as described in "File Layout".

OpenIDM stores configuration objects in its internal repository. You can manage the configuration by using REST access to the configuration objects, or by using the JSON file-based views. Several aspects of the configuration can also be managed by using the Admin UI, as described in "Configuring OpenIDM from the Admin UI".
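
For example, a script can read and write configuration objects through the router. The following is a hedged sketch; the audit configuration object and the handlerForQueries value are used purely for illustration:

// Sketch: read a configuration object over the router, modify it, and
// write it back. Configuration objects are not subject to MVCC, so no
// revision is passed to the update call.
var auditConfig = openidm.read("config/audit");
auditConfig.auditServiceConfig.handlerForQueries = "repo";
openidm.update("config/audit", null, auditConfig);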

6.1. OpenIDM Configuration Objects

OpenIDM exposes internal configuration objects in JSON format. Configuration elements can be either single instance or multiple instance for an OpenIDM installation.

6.1.1. Single Instance Configuration Objects

Single instance configuration objects correspond to services that have at most one instance per installation. JSON file views of these configuration objects are named object-name.json.

The following list describes the single instance configuration objects:

  • The audit configuration specifies how audit events are logged.

  • The authentication configuration controls REST access.

  • The cluster configuration defines how one OpenIDM instance can be configured in a cluster.

  • The endpoint configuration controls any custom REST endpoints.

  • The info configuration points to script files for the customizable information service.

  • The managed configuration defines managed objects and their schemas.

  • The policy configuration defines the policy validation service.

  • The process access configuration defines access to configured workflows.

  • The repo.repo-type configuration (for example, repo.orientdb or repo.jdbc) configures the internal repository.

  • The router configuration specifies filters to apply for specific operations.

  • The script configuration defines the parameters that are used when compiling, debugging, and running JavaScript and Groovy scripts.

  • The sync configuration defines the mappings that OpenIDM uses when it synchronizes and reconciles managed objects.

  • The ui configuration defines the configurable aspects of the default user interfaces.

  • The workflow configuration defines the configuration of the workflow engine.

OpenIDM stores managed objects in the repository, and exposes them under /openidm/managed. System objects on external resources are exposed under /openidm/system.

The following image shows the paths to objects in the OpenIDM namespace.

OpenIDM Namespaces and Object Paths
OpenIDM namespace and object paths

6.1.2. Multiple Instance Configuration Objects

Multiple instance configuration objects correspond to services that can have many instances per installation. Multiple instance configuration objects are named objectname/instancename, for example, provisioner.openicf/xml.

JSON file views of these configuration objects are named objectname-instancename.json, for example, provisioner.openicf-xml.json.

OpenIDM provides the following multiple instance configuration objects:

  • Multiple schedule configurations can run reconciliations and other tasks on different schedules.

  • Multiple provisioner.openicf configurations correspond to the resources connected to OpenIDM.

  • Multiple servletfilter configurations can be used for different servlet filters such as the Cross Origin and GZip filters.

6.2. Changing the Default Configuration

When you change OpenIDM's configuration objects, take the following points into account:

  • OpenIDM's authoritative configuration source is the internal repository. JSON files provide a view of the configuration objects, but do not represent the authoritative source.

    OpenIDM updates JSON files after making configuration changes, whether those changes are made through REST access to configuration objects, or through edits to the JSON files.

  • OpenIDM recognizes changes to JSON files when it is running. OpenIDM must be running when you delete configuration objects, even if you do so by editing the JSON files.

  • Avoid editing configuration objects directly in the internal repository. Rather, edit the configuration over the REST API, or in the configuration JSON files to ensure consistent behavior and that operations are logged.

  • OpenIDM stores its configuration in the internal database by default. If you remove an OpenIDM instance and do not specifically drop the repository, the configuration remains in effect for a new OpenIDM instance that uses that repository. For testing or evaluation purposes, you can disable this persistent configuration in the conf/system.properties file by uncommenting the following line:

    # openidm.config.repo.enabled=false

    Disabling persistent configuration means that OpenIDM will store its configuration in memory only. You should not disable persistent configuration in a production environment.

6.3. Configuring an OpenIDM System for Production

Out of the box, OpenIDM is configured to make it easy to install and evaluate. Specific configuration changes are required before you deploy OpenIDM in a production environment.

6.3.1. Configuring a Production Repository

By default, OpenIDM uses OrientDB for its internal repository so that you do not have to install a database in order to evaluate OpenIDM. Before you use OpenIDM in production, you must replace OrientDB with a supported repository.

For more information, see "Installing a Repository For Production" in the Installation Guide.

6.3.2. Disabling Automatic Configuration Updates

By default, OpenIDM polls the JSON files in the conf directory periodically for any changes to the configuration. In a production system, it is recommended that you disable automatic polling for updates to prevent untested configuration changes from disrupting your identity service.

To disable automatic polling for configuration changes, edit the conf/system.properties file for your project, and uncomment the following line:

# openidm.fileinstall.enabled=false

This setting also disables the file-based configuration view, which means that OpenIDM reads its configuration only from the repository.

Before you disable automatic polling, you must have started the OpenIDM instance at least once to ensure that the configuration has been loaded into the repository. Be aware that, while automatic polling is enabled, OpenIDM immediately picks up changes to scripts that are called from a JSON configuration file.

When your configuration is complete, you can disable writes to configuration files. To do so, add the following line to the conf/config.properties file for your project:

felix.fileinstall.enableConfigSave=false

6.3.3. Communicating Through a Proxy Server

To set up OpenIDM to communicate through a proxy server, you need to set JVM properties that identify the proxy server host and port.

If you've configured OpenIDM behind a proxy server, include JVM properties from the following table, in the OpenIDM startup script:

JVM Proxy Properties

JVM Property        Example Values                   Description
-Dhttps.proxyHost   proxy.example.com, 192.168.0.1   Hostname or IP address of the proxy server
-Dhttps.proxyPort   8443, 9443                       Port number on which the proxy server listens

If an insecure port is acceptable, you can also use the -Dhttp.proxyHost and -Dhttp.proxyPort options. You can add these JVM proxy properties to the value of OPENIDM_OPTS in your startup script (startup.sh or startup.bat):

# Only set OPENIDM_OPTS if not already set
[ -z "$OPENIDM_OPTS" ] && OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8443"

6.4. Configuring OpenIDM Over REST

OpenIDM exposes configuration objects under the /openidm/config context path.

You can list the configuration on the local host by performing a GET on https://localhost:8443/openidm/config. The examples shown in this section are based on the first OpenIDM sample, described in "First OpenIDM Sample - Reconciling an XML File Resource" in the Samples Guide.

The following REST call includes excerpts of the default configuration for an OpenIDM instance started with Sample 1:

$ curl \
 --request GET \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --cacert self-signed.crt \
 https://localhost:8443/openidm/config
{
  "_id" : "",
  "configurations" : [ {
    "_id" : "endpoint/usernotifications",
    "pid" : "endpoint.95b46fcd-f0b7-4627-9f89-6f3180c826e4",
    "factoryPid" : "endpoint"
  }, {
    "_id" : "router",
    "pid" : "router",
    "factoryPid" : null
  },
   ...
  {
    "_id" : "endpoint/reconResults",
    "pid" : "endpoint.ad3f451c-f34e-4096-9a59-0a8b7bc6989a",
    "factoryPid" : "endpoint"
  }, {
    "_id" : "endpoint/gettasksview",
    "pid" : "endpoint.bc400043-f6db-4768-92e5-ebac0674e201",
    "factoryPid" : "endpoint"
  },
  ...
  {
    "_id" : "workflow",
    "pid" : "workflow",
    "factoryPid" : null
  }, {
    "_id" : "ui.context/selfservice",
    "pid" : "ui.context.537a5838-217b-4f67-9301-3fde19a51784",
    "factoryPid" : "ui.context"
  } ]
}

Single instance configuration objects are located under openidm/config/object-name. The following example shows the Sample 1 audit configuration:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "https://localhost:8443/openidm/config/audit"
{
  "_id" : "audit",
  "auditServiceConfig" : {
    "handlerForQueries" : "repo",
    "availableAuditEventHandlers" : [
      "org.forgerock.audit.handlers.csv.CsvAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RouterAuditEventHandler"
    ],
    "filterPolicies" : {
      "value" : {
        "excludeIf" : [
          "/access/http/request/headers/Authorization",
          "/access/http/request/headers/X-OpenIDM-Password",
          "/access/http/request/cookies/session-jwt",
          "/access/http/response/headers/Authorization",
          "/access/http/response/headers/X-OpenIDM-Password"
        ],
        "includeIf" : [ ]
      }
    }
  },
  "eventHandlers" : [ {
    "class" : "org.forgerock.audit.handlers.csv.CsvAuditEventHandler",
    "config" : {
      "name" : "csv",
      "logDirectory" : "/root/openidm/audit",
      "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
    }
  }, {
    "class" : "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
    "config" : {
      "name" : "repo",
      "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
    }
  } ],
  "eventTopics" : {
    "config" : {
      "filter" : {
        "actions" : [ "create", "update", "delete", "patch", "action" ]
      }
    },
    "activity" : {
      "filter" : {
        "actions" : [ "create", "update", "delete", "patch", "action" ]
      },
      "watchedFields" : [ ],
      "passwordFields" : [ "password" ]
    }
  },
  "exceptionFormatter" : {
    "type" : "text/javascript",
    "file" : "bin/defaults/script/audit/stacktraceFormatter.js"
  }
}  

Multiple instance configuration objects are found under openidm/config/object-name/instance-name.

The following example shows the configuration for the XML connector provisioner shown in the first OpenIDM sample. The output has been cropped for legibility:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "https://localhost:8443/openidm/config/provisioner.openicf/xml"
{
  "_id" : "provisioner.openicf/xml",
  "name" : "xmlfile",
  "connectorRef" : {
    "bundleName" : "org.forgerock.openicf.connectors.xml-connector",
    "bundleVersion" : "1.1.0.2",
    "connectorName" : "org.forgerock.openicf.connectors.xml.XMLConnector"
  },
  ...
  "configurationProperties" : {
    "xsdIcfFilePath" : "/root/openidm/samples/sample1/data/resource-schema-1.xsd",
    "xsdFilePath" : "/root/openidm/samples/sample1/data/resource-schema-extension.xsd",
    "xmlFilePath" : "/root/openidm/samples/sample1/data/xmlConnectorData.xml"
  },
  "syncFailureHandler" : {
    "maxRetries" : 5,
    "postRetryAction" : "logged-ignore"
  },
  "objectTypes" : {
    "account" : {
      "$schema" : "http://json-schema.org/draft-03/schema",
      "id" : "__ACCOUNT__",
      "type" : "object",
      "nativeType" : "__ACCOUNT__",
      "properties" : {
        "description" : {
          "type" : "string",
          "nativeName" : "__DESCRIPTION__",
          "nativeType" : "string"
        },
        ...
        "roles" : {
          "type" : "string",
          "required" : false,
          "nativeName" : "roles",
          "nativeType" : "string"
        }
      }
    }
  },
  "operationOptions" : { }
}

You can change the configuration over REST by using an HTTP PUT or HTTP PATCH request to modify the required configuration object.

The following example uses a PUT request to modify the configuration of the scheduler service, increasing the maximum number of threads that are available for the concurrent execution of scheduled tasks:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
    "threadPool": {
        "threadCount": "20"
    },
    "scheduler": {
        "executePersistentSchedules": "&{openidm.scheduler.execute.persistent.schedules}"
    }
}' \
 "https://localhost:8443/openidm/config/scheduler"
{
  "_id" : "scheduler",
  "threadPool": {
    "threadCount": "20"
  },
  "scheduler": {
    "executePersistentSchedules": "true"
  }
}

The following example uses a PATCH request to reset the number of threads to their original value.

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
    {
      "operation" : "replace",
      "field" : "/threadPool/threadCount",
      "value" : "10"
    }
 ]' \
 "https://localhost:8443/openidm/config/scheduler"
{
  "_id": "scheduler",
  "threadPool": {
    "threadCount": "10"
  },
  "scheduler": {
    "executePersistentSchedules": "true"
  }
}

For more information about using the REST API to update objects, see "REST API Reference".

6.5. Using Property Value Substitution In the Configuration

In an environment where you have more than one OpenIDM instance, you might require a configuration that is similar, but not identical, across the different OpenIDM hosts. OpenIDM supports variable replacement in its configuration, which means that you can modify the effective configuration according to the requirements of a specific environment or OpenIDM instance.

Property substitution enables you to achieve the following:

  • Define a configuration that is specific to a single OpenIDM instance, for example, setting the location of the keystore on a particular host.

  • Define a configuration whose parameters vary between different environments, for example, the URLs and passwords for test, development, and production environments.

  • Disable certain capabilities on specific nodes. For example, you might want to disable the workflow engine on specific instances.

When OpenIDM starts up, it combines the system configuration, which might contain specific environment variables, with the defined OpenIDM configuration properties. This combination makes up the effective configuration for that OpenIDM instance. By varying the environment properties, you can change specific configuration items that vary between OpenIDM instances or environments.

Property references are contained within the construct &{ }. When such references are found, OpenIDM replaces them with the appropriate property value, defined in the boot.properties file.

Using Separate OpenIDM Environments

The following example defines two separate OpenIDM environments - a development environment and a production environment. You can specify the environment at startup time and, depending on the environment, the database URL is set accordingly.

The environments are defined by adding the following lines to the conf/boot.properties file:

PROD.location=production
DEV.location=development

The database URL is then specified as follows in the repo.orientdb.json file:

{
    "dbUrl" : "plocal:./db/&{&{environment}.location}-openidm",
    ...
}   

The effective database URL is determined by setting the OPENIDM_OPTS environment variable when you start OpenIDM. To use the production environment, start OpenIDM as follows:

$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Denvironment=PROD"
$ ./startup.sh

To use the development environment, start OpenIDM as follows:

$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Denvironment=DEV"
$ ./startup.sh

6.5.1. Using Property Value Substitution With System Properties

You can use property value substitution in conjunction with the system properties, to modify the configuration according to the system on which the OpenIDM instance runs.

Custom Audit Log Location

The following example modifies the audit.json file so that the log file is written to the user's directory. The user.home property is a default Java System property:

{
    "logTo" : [
        {
            "logType" : "csv",
            "location" : "&{user.home}/audit"
        }
    ]
}  

You can define nested properties (that is, a property definition within another property definition) and you can combine system properties and boot properties.

Defining Different Ports in the Configuration

The following example uses the user.country property, a default Java system property. The example defines specific LDAP ports, depending on the country (identified by the country code) in the boot.properties file. The value of the LDAP port (set in the provisioner.openicf-ldap.json file) depends on the value of the user.country system property.

The port numbers are defined in the boot.properties file as follows:

openidm.NO.ldap.port=2389
openidm.EN.ldap.port=3389
openidm.US.ldap.port=1389

The following excerpt of the provisioner.openicf-ldap.json file shows how the value of the LDAP port is eventually determined, based on the system property:

"configurationProperties" :
   {
      "credentials" : "Passw0rd",
      "port" : "&{openidm.&{user.country}.ldap.port}",
      "principal" : "cn=Directory Manager",
      "baseContexts" :
         [
            "dc=example,dc=com"
         ],
      "host" : "localhost"
   }

6.5.2. Limitations of Property Value Substitution

Note the following limitations when you use property value substitution:

  • You cannot reference complex objects or properties with syntaxes other than string. Property values are resolved from the boot.properties file or from the system properties and the value of these properties is always in string format.

    Property substitution of boolean values is currently only supported in stringified format, that is, resulting in "true" or "false".

  • Substitution of encrypted property values is not supported.

6.6. Adding Custom Endpoints

You can customize OpenIDM to meet the specific requirements of your deployment by adding your own RESTful endpoints. Endpoints are configured in files named conf/endpoint-name.json, where name generally describes the purpose of the endpoint.

A sample custom endpoint configuration is provided in the openidm/samples/customendpoint directory. The use of this sample is described in "Custom Endpoint Example". Custom endpoints in OpenIDM can be written either in JavaScript or Groovy. The sample includes three files:

conf/endpoint-echo.json

Provides the configuration for the endpoint.

script/echo.js

Supports an endpoint script written in JavaScript.

script/echo.groovy

Supports an endpoint script written in Groovy.

Endpoint configuration files follow a defined structure. They may cite scripts written in JavaScript or Groovy.

The endpoint configuration file may include a context property that specifies the route to the endpoint.

The cited scripts include defined request and context global variables.

Note

This section uses the term context in two different ways.

In an endpoint configuration file, the context property specifies the route to an endpoint. For more information, see "The Components of a Custom Endpoint Configuration File".

In scripts, including the scripts found in the samples/customendpoint/script directory, context is a variable, described in more detail in "The Components of a Custom Endpoint Script File".

Warning

If you create a custom endpoint, we recommend that you set up read requests that do not impact the state of the resource, either on the client or the server.

OpenIDM READ endpoints are safe from Cross Site Request Forgery (CSRF) exploits, as they are inherently read-only. That is consistent with the Guidelines for Implementation of REST, from the US National Security Agency, as "... CSRF protections need only be applied to endpoints that will modify information in some way."

6.6.1. The Components of a Custom Endpoint Configuration File

The sample custom endpoint configuration file (/path/to/openidm/samples/customendpoint/conf/endpoint-echo.json) depicts a typical endpoint, the contents of which are shown here:

{
  "file" : "echo.groovy",
  "type" : "groovy",
  "_file" : "echo.js",
  "_type" : "text/javascript"
}    

The following list describes each property in typical custom endpoint configuration files:

type

string, required

Specifies the type of script to be executed. The following types are supported: text/javascript and groovy.

file or source

Either a path to a script file, or an actual script, inline.

The script files associated with this sample, echo.js and echo.groovy, support requests using all ForgeRock RESTful CRUD operations: CREATE, READ, UPDATE, DELETE, PATCH, ACTION, and QUERY.

context

Requests are dispatched, routed, handled, and processed within a context.

You can also include a context property in an endpoint configuration file.

As the context is not included in the default endpoint-echo.json file, OpenIDM takes the name of the endpoint from the name of the file. In this case, the endpoint is endpoint/echo.

With a context, the endpoint configuration file includes the route to the endpoint. For an example, see the endpoint-linkedView.json file in the /path/to/openidm/conf/ directory. The code shown here identifies the route as endpoint/linkedView/*:

{
    "context": "endpoint/linkedView/*",
    "type" : "text/javascript",
    "source" : "require('linkedView').fetch(request.resourcePath);"
}

In this case, the endpoint/linkedView/* route matches the following patterns:

endpoint/linkedView/managed/user/bjensen
endpoint/linkedView/system/ldap/account/bjensen

However, it does not work with the following patterns:

endpoint/linkedView/
endpoint/linkedView

To specify that endpoint, you would need to either remove the context or include it as follows:

"context": "endpoint/linkedView"

6.6.2. The Components of a Custom Endpoint Script File

The custom endpoint script files in the samples/customendpoint/script directory can provide insight into ForgeRock RESTful CRUD operations: CREATE, READ, UPDATE, DELETE, PATCH, ACTION, and QUERY.

Examine the request options in each of these files. The request object represents the framework-level CREST request, as described in "REST API Reference".

Each CREST request is associated with a method, which may be create, read, update, delete, patch, action or query.

Each method is associated with different sets of properties. Details for each property are included after the excerpts from the echo.js and echo.groovy files.

As an example, look at the following create excerpt from the echo.js file:

if (request.method === "create") {
     return {
             method: "create",
             resourceName: request.resourcePath,
             newResourceId: request.newResourceId,
             parameters: request.additionalParameters,
             content: request.content,
             context: context.current
             };
}

For contrast, examine the following query excerpt from the echo.groovy file:

else if (request instanceof QueryRequest) {
    // query results must be returned as a list of maps
    return [
            [
                    method: "query",
                    resourceName: request.resourcePath,
                    pagedResultsCookie: request.pagedResultsCookie,
                    pagedResultsOffset: request.pagedResultsOffset,
                    pageSize: request.pageSize,
                    queryExpression: request.queryExpression,
                    queryId: request.queryId,
                    queryFilter: request.queryFilter.toString(),
                    parameters: request.additionalParameters,
                    context: context.toJsonValue().getObject()
            ]
    ]
}

Depending on the request method, the associated request object may include the following properties:

resourceName

The local identifier, without the endpoint/ prefix, such as echo.

newResourceId

An identifier associated with a new resource, associated with the create method.

revision

The revision level associated with the method used, relative to a newResourceId.

parameters

The sample code returns request parameters from an HTTP GET with ?param=x, as "parameters":{"param":"x"}.

content

Content based on the latest version of the object, using getObject.

A query request in both files includes additional parameters. For information about the query parameters, see "Constructing Queries". For information about the paging parameters, pagedResultsCookie, pagedResultsOffset, and pageSize, see "Paging and Counting Query Results".
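
As a hedged sketch, a script could exercise the paging parameters against a repo-backed resource as follows. The parameter names follow the REST interface; whether paging is supported depends on the resource you query:

// Sketch: issue a paged query from a script, then use the returned
// pagedResultsCookie (if any) to fetch the next page.
var firstPage = openidm.query("managed/user", {
    "_queryFilter" : "true",
    "_pageSize"    : 2
});

var nextPage = openidm.query("managed/user", {
    "_queryFilter"        : "true",
    "_pageSize"           : 2,
    "_pagedResultsCookie" : firstPage.pagedResultsCookie
});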

6.6.2.1. More on the context in a Custom Endpoint Script

The context property includes detail that varies depending on the context type:

security

Provides authentication / authorization data.

http

Provides data from the HTTP request.

router

Provides data on where the information is sent.

JavaScript and Groovy access these context structures in different ways. The term shown is the JavaScript access method; the definition includes the Groovy access method.

context.current

In Groovy, known as context.

The current context in which a script or a script-hook handles the request.

context.http

In Groovy, known as one of the following:

context.asContext(org.forgerock.json.resource.servlet.HttpContext.class)
context.getContext("http")

The HTTP context.

context.security

In Groovy, known as one of the following:

context.asContext(org.forgerock.json.resource.SecurityContext.class)
context.getContext("security")

The security context.
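
The following hedged JavaScript sketch, intended for use inside a custom endpoint script, reads a few fields through these accessors. The field names mirror the serialized output shown in "Custom Endpoint Example":

// Sketch: inspect the security and HTTP contexts from an endpoint script.
var info = {
    "user"   : context.security.authenticationId,
    "method" : context.http.method,
    "path"   : context.http.path
};
info   // the value of the last statement is returned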

6.6.2.2. Custom Endpoint Scripts and request Objects

With a custom endpoint, you can configure OpenIDM to accept different REST requests.

The endpoint configuration file specifies a script (either inline with the source property, or in a separate file identified with the file property). The script is invoked with a global request variable in its scope.

All processes within OpenIDM are initiated with a request. Requests can come either from the REST API (see "REST API Reference") or internally, from a script, using the router service (see "Router Service Reference"). Regardless of what initiates the process, the details of the request are represented in the same way - within an object named request.

Most request types include a complex object that stores the details required for that particular request. For example, when you start an action process over the REST interface, you might want to include certain detailed information for that action. You include this information as a JSON string in the POST body. The HTTP request header Content-type describes this string as application/json.

Consider the following REST request:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '{ "name": "bob"}' \
 "https://localhost:8443/openidm/endpoint/test?_action=myAction"

This request includes the string '{ "name": "bob"}' as the HTTP POST body. OpenIDM expects this to be a JSON string, and deserializes it into an object. The object is accessed using request.content.
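
A hedged sketch of an endpoint script that handles the action request above and echoes back the deserialized content might look as follows; the myAction name matches the example call:

// Sketch: handle the action request shown above in an endpoint script.
if (request.method === "action" && request.action === "myAction") {
    // request.content holds the deserialized POST body, e.g. { "name": "bob" }
    var reply = { "message" : "Hello, " + request.content.name };
    reply   // returned as the script result
}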

6.6.2.3. Custom Endpoint Scripts, Contexts, and Chains

Custom endpoints include contexts that may be wrapped in different layers, analogous to the way network packets can be wrapped at ascending network levels.

As an example, start with a request such as the following:

GET https://localhost:8443/openidm/endpoint/echo?_queryId=query-all-ids&_para=foo

A request at an endpoint starts with a root context, associated with a specific context ID, and the org.forgerock.json.resource.RootContext context.

The root context is wrapped in the security context that holds the authentication and authorization detail for the request. The associated class is org.forgerock.json.resource.SecurityContext, with an authenticationId user name such as openidm-admin, and associated roles such as openidm-authorized.

That security context is further wrapped by the HTTP context, with the target URI. The class is org.forgerock.json.resource.HttpContext, and it is associated with the normal parameters of a REST call, including a user agent, authorization token, and method.

The HTTP context is then further wrapped by one or more server/router context(s). That class is org.forgerock.json.resource.RouterContext, with an endpoint URI. You may see several layers of server and router contexts.
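
As a hedged illustration of these layers, an endpoint script can distinguish an external REST call (which carries an HTTP context) from an internal router call. The sketch below assumes that context.http is absent when no HTTP layer wraps the request:

// Sketch: detect whether this request arrived over HTTP or internally.
// Assumes context.http is undefined or null for purely internal requests.
var overHttp = (typeof context.http !== "undefined" && context.http !== null);
var origin = overHttp ? "external REST call" : "internal router call";
origin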

6.6.2.4. Additional Custom Endpoint Script Parameters

The query request method includes two additional parameters. You can review how this works in "Sample 2c - Synchronizing LDAP Group Membership" in the Samples Guide.

The final statement in the script is the return value. In the following example, there is no return keyword, and the value of the last statement (x) is returned:

var x = "Sample return";
functioncall();
x   

6.6.2.5. Set Up Exceptions in Scripts

When you create a custom script, you may need to build exception-handling logic. If you want to see meaningful messages in REST responses and in logs, there are language-specific ways of throwing errors which contain those details, as discussed in this section.

For a script written in JavaScript, you should comply with the following format:

throw {
    "code": 400, // any valid HTTP error code
    "message": "custom error message",
    "detail" : {
        "var": parameter1,
        "complexDetailObject" : [
            "detail1",
            "detail2"
        ]
    }
}

If you comply with this format, any exceptions will identify the noted HTTP error code, a standard error message such as Internal Server Error, a custom error message that can help you diagnose the error, and any additional detail that you think might be helpful.

For a script written in Groovy, you should comply with the following format:

import org.forgerock.json.resource.ResourceException
import org.forgerock.json.JsonValue

throw new ResourceException(404, "Your error message").setDetail(new JsonValue([
    "var": "parameter1",
    "complexDetailObject" : [
        "detail1",
        "detail2"
    ]
])) 

6.7. Custom Endpoint Example

The following example uses the sample provided in the openidm/samples/customendpoint directory, copied to the openidm/conf and openidm/script directories. The output from the query shows the request structure. The output has been cropped for legibility:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/endpoint/echo?_queryId=query-all-ids"
{
 "result" : [ {
     "method" : "query",
     ...
     "parameters" : { },
     "context" : {
       "parent" : {
         ...
               "parent" : {
                 "parent" : null,
                 "name" : "root",
                 "rootContext" : true,
                 "id" : "43576021-fe54-4468-8d10-09b14af2a36d"
               },
               "name" : "security",
               "authenticationId" : "openidm-admin",
               "authorization" : {
                 "id" : "openidm-admin",
                 "component" : "repo/internal/user",
                 "roles" : [ "openidm-admin", "openidm-authorized" ]
               },
               "rootContext" : false,
               "id" : "43576021-fe54-4468-8d10-09b14af2a36d"
             },
             "headers" : {
               "X-OpenIDM-Username" : [ "openidm-admin" ],
               "Host" : [ "localhost:8443" ],
               "Accept" : [ "*/*" ],
               "X-OpenIDM-Password" : [ "openidm-admin" ],
               "User-Agent" : [ "curl/7.19.7 (x86_64-redhat-linux-gnu)
                 libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18
                 libssh2/1.4.2" ]
             },
             "parameters" : {
               "_queryId" : [ "query-all-ids" ],
               "_prettyPrint" : [ "true" ]
             },
             "external" : true,
             "name" : "http",
             "method" : "GET",
             "path" : "https://localhost:8443/openidm/endpoint/echo",
             "rootContext" : false,
             "id" : "43576021-fe54-4468-8d10-09b14af2a36d"
           },
           "name" : "apiInfo",
           "apiVersion" : "2.3.1-SNAPSHOT",
           "apiName" : "org.forgerock.commons.json-resource-servlet",
           "rootContext" : false,
           "id" : "43576021-fe54-4468-8d10-09b14af2a36d"
         },
         "name" : "server",
         "rootContext" : false,
         "id" : "43576021-fe54-4468-8d10-09b14af2a36d"
       },
       "uriTemplateVariables" : { },
       "name" : "router",
       "matchedUri" : "endpoint/echo",
       "baseUri" : "endpoint/echo",
       "rootContext" : false,
       "id" : "43576021-fe54-4468-8d10-09b14af2a36d"
     }
 } ],
 ...
}  

You must protect access to any custom endpoints by configuring the appropriate authorization for those contexts. For more information, see "Authorization".

6.8. Setting the Script Configuration

The script configuration file (conf/script.json) enables you to modify the parameters that are used when compiling, debugging, and running JavaScript and Groovy scripts.

The default script.json file includes the following parameters:

properties

Any custom properties that should be provided to the script engine.

ECMAScript

Specifies JavaScript debug and compile options. JavaScript is an ECMAScript language.

  • javascript.recompile.minimumInterval - minimum time after which a script can be recompiled.

    The default value is 60000, or 60 seconds. This means that any changes made to scripts will not get picked up for up to 60 seconds. If you are developing scripts, reduce this parameter to around 100 (100 milliseconds).

Groovy

Specifies compilation and debugging options related to Groovy scripts. Many of these options are commented out in the default script configuration file. Remove the comments to set these properties:

  • groovy.warnings - the log level for Groovy scripts. Possible values are none, likely, possible, and paranoia.

  • groovy.source.encoding - the encoding format for Groovy scripts. Possible values are UTF-8 and US-ASCII.

  • groovy.target.directory - the directory to which compiled Groovy classes will be output. The default directory is install-dir/classes.

  • groovy.target.bytecode - the bytecode version that is used to compile Groovy scripts. The default version is 1.5.

  • groovy.classpath - the directory in which the compiler should look for compiled classes. The default classpath is install-dir/lib.

    To call an external library from a Groovy script, you must specify the complete path to the .jar file or files, as a value of this property. For example:

    "groovy.classpath" : "/&{launcher.install.location}/lib/http-builder-0.7.1.jar:
             /&{launcher.install.location}/lib/json-lib-2.3-jdk15.jar:
             /&{launcher.install.location}/lib/xml-resolver-1.2.jar:
             /&{launcher.install.location}/lib/commons-collections-3.2.1.jar",
  • groovy.output.verbose - specifies the verbosity of stack traces. Boolean, true or false.

  • groovy.output.debug - specifies whether debugging messages are output. Boolean, true or false.

  • groovy.errors.tolerance - sets the number of non-fatal errors that can occur before a compilation is aborted. The default is 10 errors.

  • groovy.script.extension - specifies the file extension for Groovy scripts. The default is .groovy.

  • groovy.script.base - defines the base class for Groovy scripts. By default any class extends groovy.lang.Script.

  • groovy.recompile - indicates whether scripts can be recompiled. Boolean, true or false, with default true.

  • groovy.recompile.minimumInterval - sets the minimum time between which Groovy scripts can be recompiled.

    The default value is 60000, or 60 seconds. This means that any changes made to scripts will not get picked up for up to 60 seconds. If you are developing scripts, reduce this parameter to around 100 (100 milliseconds).

  • groovy.target.indy - specifies whether a Groovy indy test can be used. Boolean, true or false, with default true.

  • groovy.disabled.global.ast.transformations - specifies a list of global AST (Abstract Syntax Tree) transformations that should be disabled.
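
A development-oriented Groovy section in script.json might therefore look like the following sketch. Only a few of the properties listed above are shown, uncommented, and the values are illustrative:

"Groovy" : {
    "groovy.warnings" : "none",
    "groovy.source.encoding" : "UTF-8",
    "groovy.recompile" : "true",
    "groovy.recompile.minimumInterval" : "100"
},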

sources

Specifies the locations in which OpenIDM expects to find JavaScript and Groovy scripts that are referenced in the configuration.

The following excerpt of the script.json file shows the default locations:

...
"sources" : {
    "default" : {
        "directory" : "&{launcher.install.location}/bin/defaults/script"
    },
    "install" : {
        "directory" : "&{launcher.install.location}"
    },
    "project" : {
        "directory" : "&{launcher.project.location}"
    },
    "project-script" : {
        "directory" : "&{launcher.project.location}/script"
    }
...

Note

The order in which locations are listed in the sources property is important. Scripts are loaded from the bottom up in this list, that is, scripts found in the last location on the list are loaded first.
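
For example, to have OpenIDM also load scripts from a shared directory, you could add an entry such as the following sketch to the sources property. The name and directory shown here are hypothetical:

"shared" : {
    "directory" : "/path/to/shared/script"
}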

6.9. Calling a Script From a Configuration File

You can call a script from within a configuration file by providing the script source, or by referencing a file that contains the script source. For example:

{
    "type" : "text/javascript",
    "source": string
} 

or

{
    "type" : "text/javascript",
    "file" : file location
} 
type

string, required

Specifies the type of script to be executed. Supported types include text/javascript and groovy.

source

string, required if file is not specified

Specifies the source code of the script to be executed.

file

string, required if source is not specified

Specifies the file containing the source code of the script to execute.

The following sample excerpts from configuration files indicate how scripts can be called.

The following example (included in the sync.json file) returns true if the employeeType is equal to external, otherwise returns false. This script can be useful during reconciliation to establish whether a target object should be included in the reconciliation process, or should be ignored:

"validTarget": {
    "type" : "text/javascript",
    "source": "target.employeeType == 'external'"
}  

The following example (included in the sync.json file) sets the __PASSWORD__ attribute to defaultpwd when OpenIDM creates a target object:

"onCreate" : {
    "type" : "text/javascript",
    "source": "target.__PASSWORD__ = 'defaultpwd'"
} 

The following example (included in the router.json file) shows a trigger to create Solaris home directories using a script. The script is located in the file, project-dir/script/createUnixHomeDir.js:

{
    "filters" : [ {
        "pattern" : "^system/solaris/account$",
        "methods" : [ "create" ],
        "onResponse" : {
            "type" : "text/javascript",
            "file" : "script/createUnixHomeDir.js"
        }
    } ]
} 

Often, script files are reused in different contexts. You can pass variables to your scripts to provide these contextual details at runtime. You pass variables to the scripts that are referenced in configuration files by declaring the variable name in the script reference.

The following example of a scheduled task configuration calls a script named triggerEmailNotification.js. The example sets the sender and recipient of the email in the schedule configuration, rather than in the script itself:

{
    "enabled" : true,
    "type" : "cron",
    "schedule" : "0 0/1 * * * ?",
    "invokeService" : "script",
    "invokeContext" : {
        "script": {
            "type" : "text/javascript",
            "file" : "script/triggerEmailNotification.js",
            "fromSender" : "admin@example.com",
            "toEmail" : "user@example.com"
        }
    }
} 

Tip

In general, you should namespace variables passed into scripts with the globals map. Passing variables in this way prevents collisions with the top-level reserved words for script maps, such as file, source, and type. The following example uses the globals map to namespace the variables passed in the previous example.

"script": {
    "type" : "text/javascript",
    "file" : "script/triggerEmailNotification.js",
    "globals" : {
        "fromSender" : "admin@example.com",
        "toEmail" : "user@example.com"
    }
} 

Script variables are not necessarily simple key:value pairs. A script variable can be any arbitrarily complex JSON object.
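
For example, the following sketch passes a nested object to the script through the globals map. The mailOptions variable and its contents are hypothetical; inside the script, the variable is available by name:

"script" : {
    "type" : "text/javascript",
    "file" : "script/triggerEmailNotification.js",
    "globals" : {
        "mailOptions" : {
            "fromSender" : "admin@example.com",
            "recipients" : [ "user@example.com", "audit@example.com" ],
            "retry" : {
                "attempts" : 3,
                "waitSeconds" : 30
            }
        }
    }
}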

Chapter 7. Accessing Data Objects

OpenIDM supports a variety of objects that can be addressed via a URL or URI. You can access data objects by using scripts (through the Resource API) or by using direct HTTP calls (through the REST API).

The following sections describe these two methods of accessing data objects, and provide information on constructing and calling data queries.

7.1. Accessing Data Objects By Using Scripts

OpenIDM's uniform programming model means that all objects are queried and manipulated in the same way, using the Resource API. The URL or URI that is used to identify the target object for an operation depends on the object type. For an explanation of object types, see "Data Models and Objects Reference". For more information about scripts and the objects available to scripts, see "Scripting Reference".

You can use the Resource API to obtain managed, system, configuration, and repository objects, as follows:

val = openidm.read("managed/organization/mysampleorg")
val = openidm.read("system/mysystem/account")
val = openidm.read("config/custom/mylookuptable")
val = openidm.read("repo/custom/mylookuptable")

For information about constructing an object ID, see "URI Scheme".

You can update entire objects with the update() function, as follows:

openidm.update("managed/organization/mysampleorg", object)
openidm.update("system/mysystem/account", object)
openidm.update("config/custom/mylookuptable", object)
openidm.update("repo/custom/mylookuptable", object)

You can apply a partial update to a managed or system object by using the patch() function:

openidm.patch("managed/organization/mysampleorg", rev, value)

The create(), delete(), and query() functions work the same way.
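
For example, the following JavaScript sketch illustrates the corresponding calls. The object content and resource names are hypothetical:

// Create a managed object; passing null lets OpenIDM generate the ID.
var created = openidm.create("managed/organization", null, { "name" : "mysampleorg" });

// Query managed users by using a predefined query.
var users = openidm.query("managed/user", { "_queryId" : "query-all-ids" });

// Delete the object, supplying its current revision.
openidm.delete("managed/organization/" + created._id, created._rev);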

7.2. Accessing Data Objects By Using the REST API

OpenIDM provides RESTful access to data objects via ForgeRock's Common REST API. To access objects over REST, you can use a browser-based REST client, such as the Simple REST Client for Chrome, or RESTClient for Firefox. Alternatively you can use the curl command-line utility.

For a comprehensive overview of the REST API, see "REST API Reference".

To obtain a managed object through the REST API, depending on your security settings and authentication configuration, perform an HTTP GET on the corresponding URL, for example https://localhost:8443/openidm/managed/organization/mysampleorg.

By default, the HTTP GET returns a JSON representation of the object.

In general, you can map any HTTP request to the corresponding openidm.method call. The following example shows how the parameters provided in an openidm.query request correspond with the key-value pairs that you would include in a similar HTTP GET request:

Reading an object using the Resource API:

openidm.query("managed/user", { "_queryId": "query-all-ids" }, ["userName","sn"])

Reading an object using the REST API:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids&_fields=userName,sn"

7.3. Defining and Calling Queries

OpenIDM supports an advanced query model that enables you to define queries, and to call them over the REST or Resource API. Three types of queries are supported, on both managed and system objects:

  • Common filter expressions

  • Parameterized, or predefined queries

  • Native query expressions

Each of these mechanisms is discussed in the following sections.

7.3.1. Common Filter Expressions

The ForgeRock REST API defines common filter expressions that enable you to form arbitrary queries using a number of supported filter operations. This query capability is the standard way to query data if no predefined query exists, and is supported for all managed and system objects.

Common filter expressions are useful in that they do not require knowledge of how the object is stored and do not require additions to the repository configuration.

Common filter expressions are called with the _queryFilter keyword. The following example uses a common filter expression to retrieve managed user objects whose user name is Smith:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 'https://localhost:8443/openidm/managed/user?_queryFilter=userName+eq+"smith"'

The filter is URL encoded in this example. The corresponding filter using the resource API would be:

openidm.query("managed/user", { "_queryFilter" : '/userName eq "smith"' });

Note that this JavaScript invocation is internal, so it is not subject to the URL-encoding requirements that apply to a GET request. Also, because JavaScript supports single quotes, it is not necessary to escape the double quotes in this example.

For a list of supported filter operations, see "Constructing Queries".

Note that using common filter expressions to retrieve values from arrays is currently not supported. If you need to search within an array, you should set up a predefined (parameterized) query in your repository configuration. For more information, see "Parameterized Queries".

7.3.2. Parameterized Queries

Managed objects in the supported OpenIDM repositories can be accessed using a parameterized query mechanism. Parameterized queries on repositories are defined in the repository configuration (repo.*.json) and are called by their _queryId.

Parameterized queries provide precise control over the query that is executed. Such control might be useful for tuning, or for performing database operations such as aggregation (which is not possible with a common filter expression).

Parameterized queries provide security and portability for the query call signature, regardless of the backend implementation. Queries that are exposed over the REST interface must be parameterized queries to guard against injection attacks and other misuse. Queries on the officially supported repositories have been reviewed and hardened against injection attacks.

For system objects, support for parameterized queries is restricted to _queryId=query-all-ids. There is currently no support for user-defined parameterized queries on system objects. Typically, parameterized queries on system objects are not called directly over the REST interface, but are issued from internal calls, such as correlation queries.

A typical query definition is as follows:

"query-all-ids" : "select _openidm_id from ${unquoted:_resource}"

To call this query, you would reference its ID, as follows:

?_queryId=query-all-ids

The following example calls query-all-ids over the REST interface:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "https://localhost:8443/openidm/managed/user?_queryId=query-all-ids"

7.3.3. Native Query Expressions

Native query expressions are supported for all managed objects and system objects, and can be called directly, rather than being defined in the repository configuration.

Native queries are intended specifically for internal callers, such as custom scripts, and should be used only in situations where the common filter or parameterized query facilities are insufficient. For example, native queries are useful if the query needs to be generated dynamically.

The query expression is specific to the target resource. For repositories, queries use the native language of the underlying data store. For system objects that are backed by OpenICF connectors, queries use the applicable query language of the system resource.

Native queries on the repository are made using the _queryExpression keyword. For example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "https://localhost:8443/openidm/managed/user?_queryExpression=select+from+managed_user"

Unless you have specifically enabled native queries over REST, the previous command returns a 403 access denied error message. Native queries are not portable and do not guard against injection attacks. Such query expressions should therefore not be used or made accessible over the REST interface or over HTTP in production environments. They should be used only via the internal Resource API. If you want to enable native queries over REST for development, see "Protect Sensitive REST Interface URLs".
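
The following JavaScript sketch shows such an internal call, with an expression that is built dynamically. The table name is illustrative and the expression syntax depends on your repository:

// Build the native query expression at runtime, then issue it
// through the internal Resource API.
var tableName = "managed_user";
var result = openidm.query("managed/user", {
    "_queryExpression" : "select from " + tableName
});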

Alternatively, if you really need to expose native queries over HTTP, in a selective manner, you can design a custom endpoint to wrap such access.

7.3.4. Constructing Queries

The openidm.query function enables you to query OpenIDM managed and system objects. The query syntax is openidm.query(id, params), where id specifies the object on which the query should be performed, and params provides the parameters that are passed to the query, either _queryFilter or _queryId. For example:

var params = {
    '_queryFilter' : 'givenName co "' + sourceCriteria + '" or ' + 'sn co "' + sourceCriteria + '"'
};
var results = openidm.query("system/ScriptedSQL/account", params)

Over the REST interface, the query filter is specified as _queryFilter=filter, for example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=userName+eq+"Smith"'

Note the use of double-quotes around the search term: Smith. In _queryFilter expressions, string values must use double-quotes. Numeric and boolean expressions should not use quotes.
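
For example, in a script, string and numeric comparisons would look like the following sketch. The field names are taken from the examples in this section:

// String values are double-quoted inside the filter.
var byName = openidm.query("managed/user", { "_queryFilter" : 'userName eq "Smith"' });

// Numeric values are not quoted.
var byNumber = openidm.query("managed/user", { "_queryFilter" : 'employeeNumber eq 5000' });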

When called over REST, you must URL encode the filter expression. The following examples show the filter expressions using the resource API and the REST API, but do not show the URL encoding, to make them easier to read.

Note that, for generic mappings, any fields that are included in the query filter (for example userName in the previous query), must be explicitly defined as searchable, if you have set the global searchableDefault to false. For more information, see "Improving Search Performance for Generic Mappings".

The filter expression is constructed from the building blocks shown in this section. In these expressions the simplest json-pointer is a field of the JSON resource, such as userName or id. A JSON pointer can, however, point to nested elements.

Note

You can also use the negation operator (!) to help construct a query. For example, a _queryFilter=!(userName+eq+"jdoe") query would return every userName except for jdoe.

You can set up query filters with one of the following types of expressions.

7.3.4.1. Comparison Expressions

7.3.4.1.1. Querying Objects That Equal the Given Value

This is the associated JSON comparison expression: json-pointer eq json-value.

Review the following example:

"_queryFilter" : '/givenName eq "Dan"'

The following REST call returns the user name and given name of all managed users whose first name (givenName) is "Dan":

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=givenName+eq+"Dan"&_fields=userName,givenName'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 3,
  "result": [
    {
      "givenName": "Dan",
      "userName": "dlangdon"
    },
    {
      "givenName": "Dan",
      "userName": "dcope"
    },
    {
      "givenName": "Dan",
      "userName": "dlanoway"
    }
  ]
}
7.3.4.1.2. Querying Objects That Contain the Given Value

This is the associated JSON comparison expression: json-pointer co json-value.

Review the following example:

"_queryFilter" : '/givenName co "Da"'

The following REST call returns the user name and given name of all managed users whose first name (givenName) contains "Da":

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=givenName+co+"Da"&_fields=userName,givenName'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 10,
  "result": [
    {
      "givenName": "Dave",
      "userName": "djensen"
    },
    {
      "givenName": "David",
      "userName": "dakers"
    },
    {
      "givenName": "Dan",
      "userName": "dlangdon"
    },
    {
      "givenName": "Dan",
      "userName": "dcope"
    },
    {
      "givenName": "Dan",
      "userName": "dlanoway"
    },
    {
      "givenName": "Daniel",
      "userName": "dsmith"
    },
...
  ]
}
7.3.4.1.3. Querying Objects That Start With the Given Value

This is the associated JSON comparison expression: json-pointer sw json-value.

Review the following example:

"_queryFilter" : '/sn sw "Jen"'

The following REST call returns the user names of all managed users whose last name (sn) starts with "Jen":

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=sn+sw+"Jen"&_fields=userName'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 4,
  "result": [
    {
      "userName": "bjensen"
    },
    {
      "userName": "djensen"
    },
    {
      "userName": "cjenkins"
    },
    {
      "userName": "mjennings"
    }
  ]
}
7.3.4.1.4. Querying Objects That Are Less Than the Given Value

This is the associated JSON comparison expression: json-pointer lt json-value.

Review the following example:

"_queryFilter" : '/employeeNumber lt 5000'

The following REST call returns the user names of all managed users whose employeeNumber is lower than 5000:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=employeeNumber+lt+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 4999,
  "result": [
    {
      "employeeNumber": 4907,
      "userName": "jnorris"
    },
    {
      "employeeNumber": 4905,
      "userName": "afrancis"
    },
    {
      "employeeNumber": 3095,
      "userName": "twhite"
    },
    {
      "employeeNumber": 3921,
      "userName": "abasson"
    },
    {
      "employeeNumber": 2892,
      "userName": "dcarter"
    }
...
  ]
}
7.3.4.1.5. Querying Objects That Are Less Than or Equal to the Given Value

This is the associated JSON comparison expression: json-pointer le json-value.

Review the following example:

"_queryFilter" : '/employeeNumber le 5000'

The following REST call returns the user names of all managed users whose employeeNumber is 5000 or less:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=employeeNumber+le+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 5000,
  "result": [
    {
      "employeeNumber": 4907,
      "userName": "jnorris"
    },
    {
      "employeeNumber": 4905,
      "userName": "afrancis"
    },
    {
      "employeeNumber": 3095,
      "userName": "twhite"
    },
    {
      "employeeNumber": 3921,
      "userName": "abasson"
    },
    {
      "employeeNumber": 2892,
      "userName": "dcarter"
    }
...
  ]
}
7.3.4.1.6. Querying Objects That Are Greater Than the Given Value

This is the associated JSON comparison expression: json-pointer gt json-value.

Review the following example:

"_queryFilter" : '/employeeNumber gt 5000'

The following REST call returns the user names of all managed users whose employeeNumber is higher than 5000:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=employeeNumber+gt+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 1458,
  "result": [
    {
      "employeeNumber": 5003,
      "userName": "agilder"
    },
    {
      "employeeNumber": 5011,
      "userName": "bsmith"
    },
    {
      "employeeNumber": 5034,
      "userName": "bjensen"
    },
    {
      "employeeNumber": 5027,
      "userName": "cclarke"
    },
    {
      "employeeNumber": 5033,
      "userName": "scarter"
    }
...
  ]
}
7.3.4.1.7. Querying Objects That Are Greater Than or Equal to the Given Value

This is the associated JSON comparison expression: json-pointer ge json-value.

Review the following example:

"_queryFilter" : '/employeeNumber ge 5000'

The following REST call returns the user names of all managed users whose employeeNumber is 5000 or greater:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=employeeNumber+ge+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 1459,
  "result": [
    {
      "employeeNumber": 5000,
      "userName": "agilder"
    },
    {
      "employeeNumber": 5011,
      "userName": "bsmith"
    },
    {
      "employeeNumber": 5034,
      "userName": "bjensen"
    },
    {
      "employeeNumber": 5027,
      "userName": "cclarke"
    },
    {
      "employeeNumber": 5033,
      "userName": "scarter"
    }
...
  ]
}

7.3.4.2. Presence Expressions

The following examples show how you can build filters using presence expressions.

A presence expression, json-pointer pr, evaluates to true for any object in which the specified json-pointer is present and contains a non-null value. It evaluates to false for elements that are present with a null value, and for elements that are missing. Review the following example:

"_queryFilter" : '/mail pr'

The following REST call returns the mail addresses for all managed users who have a mail property in their entry:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=mail+pr&_fields=mail'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 2,
  "result": [
    {
      "mail": "jdoe@exampleAD.com"
    },
    {
      "mail": "bjensen@example.com"
    }
  ]
}

The presence filter is not currently supported for system objects. To query for presence on a system object, specify any attribute that exists for all entries, such as the uid on an LDAP system, and use the starts with (sw) filter, with an empty value. For example, the following query returns the uid of all users in an LDAP system:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/system/ldap/account?_queryFilter=uid+sw+""&_fields=uid'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 2,
  "result": [
    {
      "uid": "jdoe"
    },
    {
      "uid": "bjensen"
    }
  ]
}

7.3.4.3. Literal Expressions

A literal expression is a boolean:

  • true matches any object in the resource.

  • false matches no object in the resource.

For example, you can list the _id of all managed objects as follows:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user?_queryFilter=true&_fields=_id'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 2,
  "result": [
    {
      "_id": "d2e29d5f-0d74-4d04-bcfe-b1daf508ad7c"
    },
    {
      "_id": "709fed03-897b-4ff0-8a59-6faaa34e3af6"
    }
  ]
}
    

Note

Literal expressions (true and false) can be used only in queries on managed objects. Queries on system objects cannot use literal expressions. To replicate the behavior of a _queryFilter=true query on a system resource, you can use the sw filter, with a value of "". For example, the following query returns all user accounts in an LDAP system:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/system/ldap/account?_queryFilter=sn+sw+""'

7.3.4.4. Complex Expressions

You can combine expressions using the boolean operators and, or, and ! (not). The following example queries managed user objects located in London, with last name Jensen:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/managed/user/?_queryFilter=city+eq+"London"+and+sn+eq+"Jensen"&_fields=userName,givenName,sn'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 3,
  "result": [
    {
      "sn": "Jensen",
      "givenName": "Clive",
      "userName": "cjensen"
    },
    {
      "sn": "Jensen",
      "givenName": "Dave",
      "userName": "djensen"
    },
    {
      "sn": "Jensen",
      "givenName": "Margaret",
      "userName": "mjensen"
    }
  ]
}

7.3.5. Paging and Counting Query Results

The common filter query mechanism supports paged query results for managed objects, and for some system objects, depending on the system resource.

Predefined queries must be configured to support paging, in the repository configuration. For example:

"query-all-ids" : "select _openidm_id from ${unquoted:_resource} SKIP ${unquoted:_pagedResultsOffset}
        LIMIT ${unquoted:_pageSize}",

The query implementation includes a configurable count policy that can be set per query. Currently, counting results is supported only for predefined queries, not for filtered queries.

The count policy can be one of the following:

  • NONE - to disable counting entirely for that query.

  • EXACT - to return the precise number of query results. Note that this has a negative impact on query performance.

  • ESTIMATE - to return a best estimate of the number of query results in the shortest possible time. This number generally correlates with the number of records in the index.

If no count policy is specified, the policy is assumed to be NONE. This prevents the overhead of counting results, unless a result count is specifically required.

The following query returns the first three records in the managed user repository:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user?_queryId=query-all-ids&_pageSize=3"
{
  "result": [
    {
      "_id": "scarter",
      "_rev": "1"
    },
    {
      "_id": "bjensen",
      "_rev": "1"
    },
    {
      "_id": "asmith",
      "_rev": "1"
    }
  ],
  "resultCount": 3,
  "pagedResultsCookie": "3",
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}   

Notice that no counting is done in this query, so the value returned in the totalPagedResults and remainingPagedResults fields is -1.

To specify that either an EXACT or ESTIMATE result count be applied, add the _totalPagedResultsPolicy parameter to the query.

The following query is identical to the previous query but includes a count of the total results in the result set.

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user?_queryId=query-all-ids&_pageSize=3&_totalPagedResultsPolicy=EXACT"
{
  "result": [
    {
      "_id": "scarter",
      "_rev": "1"
    },
    {
      "_id": "bjensen",
      "_rev": "1"
    },
    {
      "_id": "asmith",
      "_rev": "1"
    }
  ],
  "resultCount": 3,
  "pagedResultsCookie": "3",
  "totalPagedResultsPolicy": "EXACT",
  "totalPagedResults": 4,
  "remainingPagedResults": -1
}   

Note that the totalPagedResultsPolicy is EXACT for this query. To return an exact result count, a corresponding count query must be defined in the repository configuration. The following excerpt of the default repo.orientdb.json file shows the predefined query-all-ids query, and its corresponding count query:

"query-all-ids" : "select _openidm_id, @version from ${unquoted:_resource}
      SKIP ${unquoted:_pagedResultsOffset} LIMIT ${unquoted:_pageSize}",
"query-all-ids-count" : "select count(_openidm_id) AS total from ${unquoted:_resource}",

The following paging parameters are supported:

_pagedResultsCookie

Opaque cookie used by the server to keep track of the position in the search results. The format of the cookie is a string value.

The server provides the cookie value on the first request. You should then supply the cookie value in subsequent requests until the server returns a null cookie, meaning that the final page of results has been returned.

Paged results are enabled only if the _pageSize is a non-zero integer.

_pagedResultsOffset

Specifies the number of records in the result set that should be skipped before the first result is returned. The value of _pagedResultsOffset is an integer. When the value of _pagedResultsOffset is greater than or equal to 1, the server returns pages, starting after the specified index.

This request assumes that the _pageSize is set, and not equal to zero.

For example, if the result set includes 10 records, the _pageSize is 2, and the _pagedResultsOffset is 6, the server skips the first 6 records, then returns 2 records, 7 and 8. The _pagedResultsCookie value would then be 8 (the index of the last returned record) and the _remainingPagedResults value would be 2, the last two records (9 and 10) that have not yet been returned.

If the offset points to a page beyond the last of the search results, the result set returned is empty.

Note that the totalPagedResults and remainingPagedResults fields are not supported for all queries. Where they are not supported, their returned value is always -1.

_pageSize

An optional parameter indicating that query results should be returned in pages of the specified size. For all paged result requests other than the initial request, a cookie should be provided with the query request.

The default behavior is not to return paged query results. If set, this parameter should be an integer value, greater than zero.
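
For example, the following JavaScript sketch pages through all managed users, three records at a time, by supplying the cookie returned with each page. This is a sketch that assumes the paging support described above:

var params = {
    "_queryId" : "query-all-ids",
    "_pageSize" : 3
};
var page;
do {
    page = openidm.query("managed/user", params);
    for (var i = 0; i < page.result.length; i++) {
        // Process each record in the current page.
        logger.info("Retrieved user {}", page.result[i]._id);
    }
    // A null cookie indicates that the final page has been returned.
    params._pagedResultsCookie = page.pagedResultsCookie;
} while (page.pagedResultsCookie !== null);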

7.3.6. Sorting Query Results

For common filter query expressions, you can sort the results of a query using the _sortKeys parameter. This parameter takes a comma-separated list as a value and orders the way in which the JSON result is returned, based on this list.

The _sortKeys parameter is not supported for predefined queries.

The following query returns all users with the givenName Dan, and sorts the results alphabetically, according to surname (sn):

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'https://localhost:8443/openidm/system/ldap/account?_queryFilter=givenName+eq+"Dan"&_fields=givenName,sn&_sortKeys=sn'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 3,
  "result": [
    {
      "sn": "Cope",
      "givenName": "Dan"
    },
    {
      "sn": "Langdon",
      "givenName": "Dan"
    },
    {
      "sn": "Lanoway",
      "givenName": "Dan"
    }
  ]
}   
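
The equivalent query using the Resource API would look something like the following sketch:

var sorted = openidm.query("system/ldap/account", {
    "_queryFilter" : 'givenName eq "Dan"',
    "_sortKeys" : "sn"
});

To reverse the sort order for a given key, prefix that key with a minus sign (-), for example, _sortKeys=-sn.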

Chapter 8. Managing Users, Groups, Roles and Relationships

OpenIDM provides a default schema for typical managed object types, such as users and roles, but does not control the structure of objects that you want to store in the OpenIDM repository. You can modify or extend the schema for the default object types, and you can set up a new managed object type for any item that can be collected in a data set. For example, with the right schema, you can set up any device associated with the Internet of Things (IoT).

Managed objects and their properties are defined in your project's conf/managed.json file.

This chapter describes how to work with the default managed object types and how to create new object types as required by your deployment. For more information about the OpenIDM object model, see "Data Models and Objects Reference".

8.1. Creating and Modifying Managed Object Types

If the managed object types provided in the default configuration are not sufficient for your deployment, you can create any number of new managed object types.

The easiest way to create a new managed object type is to use the Admin UI, as follows:

  1. Navigate to the Admin UI URL (https://localhost:8443/admin) then select Configure > Managed Objects > New Managed Object.

  2. Enter a name for the new managed object and, optionally, an icon that will be displayed for that object type in the UI.

    Click Save.

  3. Select the Scripts tab and specify any scripts that should be applied on various events associated with that object type, for example, when an object of that type is created, updated or deleted.

  4. Specify the schema for the object type, that is, the properties that make up the object, and any policies or restrictions that must be applied to the property values.

Click the JSON button on the Schema tab to display the properties in JSON format. You can also create a new managed object type by adding its configuration, in JSON, to your project's conf/managed.json file. The following excerpt of the managed.json file shows the configuration of a "Phone" object that was created through the UI:

{
    "name": "Phone",
    "schema": {
        "$schema": "http://forgerock.org/json-schema#",
        "type": "object",
        "properties": {
            "brand": {
                "description": "The supplier of the mobile phone",
                "title": "Brand",
                "viewable": true,
                "searchable": true,
                "userEditable": false,
                "policies": [],
                "returnByDefault": false,
                "minLength": "",
                "pattern": "",
                "isVirtual": false,
                "type": "string"
            },
            "assetNumber": {
                "description": "The asset tag number of the mobile device",
                "title": "Asset Number",
                "viewable": true,
                "searchable": true,
                "userEditable": false,
                "policies": [],
                "returnByDefault": false,
                "minLength": "",
                "pattern": "",
                "isVirtual": false,
                "type": "string"
            },
            "model": {
                "description": "The model number of the mobile device, such as 6 plus, Galaxy S4",
                "title": "Model",
                "viewable": true,
                "searchable": false,
                "userEditable": false,
                "policies": [],
                "returnByDefault": false,
                "minLength": "",
                "pattern": "",
                "isVirtual": false,
                "type": "string"
            }
        },
        "required": [],
        "order": [
            "brand",
            "assetNumber",
            "model"
        ]
    }
}

You can add any arbitrary properties to the schema of a new managed object type. A property definition typically includes the following fields:

  • name - the name of the property

  • title - the name of the property, in human-readable language, used to display the property in the UI

  • description - a description of the property

  • viewable - specifies whether this property is viewable in the object's profile in the UI. Boolean, true or false (true by default).

  • searchable - specifies whether this property can be searched in the UI. A searchable property is visible within the Managed Object data grid in the Self-Service UI. Note that for a property to be searchable in the UI, it must be indexed in the repository configuration. For information on indexing properties in a repository, see "Using Explicit or Generic Object Mapping With a JDBC Repository".

    Boolean, true or false (false by default).

  • userEditable - specifies whether users can edit the property value in the UI. This property applies in the context of the self-service UI, where users are able to edit certain properties of their own accounts. Boolean, true or false (false by default).

  • minLength - the minimum number of characters that the value of this property must have.

  • pattern - any specific pattern to which the value of the property must adhere. For example, a property whose value is a date might require a specific date format.

  • policies - any policy validation that must be applied to the property. For more information on managed object policies, see "Configuring the Default Policy for Managed Objects".

  • required - specifies whether or not the property must be supplied when an object of this type is created. Boolean, true or false.

  • type - the data type for the property value; can be String, Array, Boolean, Integer, Number, Object, or Resource Collection.

  • isVirtual - specifies whether the property takes a static value, or whether its value is calculated "on the fly" as the result of a script. Boolean, true or false.

  • returnByDefault - for non-core attributes (virtual attributes and relationship fields), specifies whether the property will be returned in the results of a query on an object of this type if it is not explicitly requested. Virtual attributes and relationship fields are not returned by default. Boolean, true or false.
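
Putting these fields together, a minimal property definition might look like the following sketch. The serialNumber property shown here is hypothetical:

"serialNumber" : {
    "title" : "Serial Number",
    "description" : "The serial number of the device",
    "viewable" : true,
    "searchable" : true,
    "userEditable" : false,
    "type" : "string"
}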

8.2. Working with Managed Users

User objects that are stored in OpenIDM's repository are referred to as managed users. For a JDBC repository, OpenIDM stores managed users in the managedobjects table. A second table, managedobjectproperties, serves as the index table. For an OrientDB repository, managed users are stored in the managed_user table.

OpenIDM provides RESTful access to managed users, at the context path /openidm/managed/user. For more information, see "Getting Started With the OpenIDM REST Interface" in the Installation Guide.

8.3. Working With Managed Groups

OpenIDM provides support for a managed group object. For a JDBC repository, OpenIDM stores managed groups with all other managed objects, in the managedobjects table, and uses the managedobjectproperties for indexing. For an OrientDB repository, managed groups are stored in the managed_group table.

The managed group object is not provided by default. To use managed groups, add an object similar to the following to your conf/managed.json file:

{
   "name" : "group"
},  

With this addition, OpenIDM provides RESTful access to managed groups, at the context path /openidm/managed/group.
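
For example, the following JavaScript sketch creates a managed group and reads it back. The group name is hypothetical:

// Create a managed group with a client-assigned ID.
var group = openidm.create("managed/group", "engineering", { "name" : "engineering" });

// Read the group back from the repository.
var check = openidm.read("managed/group/engineering");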

For an example of a deployment that uses managed groups, see "Sample 2d - Synchronizing LDAP Groups" in the Samples Guide.

8.4. Working With Managed Roles

OpenIDM supports two types of roles:

  • Provisioning roles - used to specify how objects are provisioned to an external system.

  • Authorization roles - used to specify the authorization rights of a managed object internally, within OpenIDM.

Provisioning roles are always created as managed role objects, at the context path openidm/managed/role/role-name. Provisioning roles are assigned to managed user objects as values of the object's roles property.

Authorization roles can be created either as managed role objects (at the context path openidm/managed/role/role-name) or as internal role objects (at the context path openidm/repo/internal/role/role-name). Authorization roles are assigned to managed user objects as values of the object's authzRoles property.

Both provisioning roles and authorization roles use the relationships mechanism to link the role object, and the managed object to which the role applies. For more information about relationships between objects, see "Managing Relationships Between Objects".

This section describes how to create and use managed roles, either managed provisioning roles, or managed authorization roles. For more information about authorization roles, and how OpenIDM controls authorization to its own endpoints, see "Authorization".

Managed roles are defined like any other managed object, and are assigned to managed users by using the relationships mechanism.

A managed role can be assigned directly, as a static value of the user's roles or authzRoles attribute, or indirectly, through a script or a rule that assigns the role value. For example, a user might acquire an indirect role such as sales-role, if that user is in the sales organization.

A managed user's roles and authzRoles attributes take an array of references as a value, where the references point to the managed role objects. For example, if user bjensen has been assigned two provisioning roles (employee and supervisor), the value of bjensen's roles attribute would look something like the following:

"roles": [
    {
      "_ref": "managed/role/employee",
      "_refProperties": {
        "_id": "c090818d-57fd-435c-b1b1-bb23f47eaf09",
        "_rev": "1"
      }
    },
    {
      "_ref": "managed/role/supervisor",
      "_refProperties": {
        "_id": "4961912a-e2df-411a-8c0f-8e63b62dbef6",
        "_rev": "1"
      }
    }
  ]

Note that the _ref property points to the managed role object that has been assigned to the managed user object.

The following sections describe how to create, read, update, and delete managed role objects, and how to assign roles to users. For information about how roles are used to provision users to external systems, see "Working With Role Assignments". For a sample that demonstrates the basic CRUD operations on roles, see "Roles Samples - Demonstrating the OpenIDM Roles Implementation" in the Samples Guide.

8.4.1. Creating, Listing, Assigning, and Deleting Roles

Managed role objects are stored in the repository and are accessible at the context path /openidm/managed/role. This section describes how to manipulate managed roles over the REST interface, and by using the Admin UI.

8.4.1.1. Creating a Managed Role

The easiest way to create a new managed role is by using the Admin UI. Select Manage > Role and click New Role on the Role List page. Enter a name and description for the new role and click Create.

Select the Managed Assignments tab to add assignments to the role. This assumes that you have already created the required assignments that should be associated with the role. For more information, see "Working With Role Assignments".

To create a new managed role over REST, send a PUT or POST request to the /openidm/managed/role context path. The following example creates a new managed role named employee:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header "If-None-Match: *" \
 --request PUT \
 --data '{
     "name" : "employee",
     "description" : "Role assigned to workers on the company payroll"
 }' \
 "https://localhost:8443/openidm/managed/role/employee"
{
  "_id": "employee",
  "_rev": "1",
  "name": "employee",
  "description": "Role assigned to workers on the company payroll",
  "assignments": []
}

At this stage, the employee role has no corresponding assignments. Assignments provide the provisioning logic for the external system. They are created and maintained as separate managed objects, and are referenced within role definitions. For more information about assignments, see "Working With Role Assignments".

8.4.1.2. Listing Existing Roles

You can display a list of all configured managed roles over REST or by using the Admin UI.

To list the managed roles in the Admin UI, select Manage > Role.

To list the managed roles over REST, query the openidm/managed/role endpoint. The following example shows the employee role that you created in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/role?_queryFilter=true"
{
  "result": [
    {
      "_id": "employee",
      "_rev": "1",
      "name": "employee",
      "description": "Role assigned to workers on the company payroll",
      "assignments": []
    }
  ],
  "resultCount": 1,
  "pagedResultsCookie": null,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}

8.4.1.3. Assigning a Managed Role to a User

Roles are assigned to users through the relationship mechanism. Relationships are essentially references from one managed object to another, in this case from a managed user object to a managed role object. For more information about relationships, see "Managing Relationships Between Objects".

You can assign a role to a managed user in two ways:

  • Update the value of the user object's roles property (if the role is a provisioning role) or authzRoles property (if the role is an authorization role).

  • Update the value of the role object's members property to reference the user object.

Both of these actions can be achieved by using the Admin UI, or over REST.

Using the Admin UI
  • Select Manage > User and click on the user to whom you want to assign the role.

    Select the Provisioning Roles tab, select the role from the dropdown list, click Add Role, and click Save.

Over the REST interface

Use one of the following methods to assign a role to a user object:

  • Update the user object to refer to the role object.

    The following sample command assigns the employee role to user scarter:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --header "If-Match: *" \
     --request PATCH \
     --data '[
       {
          "operation": "replace",
          "field": "/roles/-",
          "value": [ {"_ref" : "managed/role/employee"} ]
       }
     ]' \
     "https://localhost:8443/openidm/managed/user/scarter"
    {
      "_id": "scarter",
      "_rev": "2",
      "mail": "scarter@example.com",
      "givenName": "Steven",
      "sn": "Carter",
      "description": "Created By XML1",
      "userName": "scarter@example.com",
      "telephoneNumber": "1234567",
      "accountStatus": "active",
      "effectiveRoles": [
        {
          "_ref": "managed/role/employee",
          "_refProperties": {
            "_id": "026536cf-dcb7-4224-960b-3bdb259a4f0c",
            "_rev": "1"
          }
        }
      ],
      "effectiveAssignments": [],
      "roles": [
        {
          "_ref": "managed/role/employee",
          "_refProperties": {
            "_id": "026536cf-dcb7-4224-960b-3bdb259a4f0c",
            "_rev": "1"
          }
        }
      ]
    }

    Note that scarter's roles and effectiveRoles attributes have been updated with a reference to the new role. For more information about effective roles and effective assignments, see "Effective Roles and Effective Assignments".

  • Update the role object to refer to the user object.

    The following sample command makes scarter a member of the employee role:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --header "If-Match: *" \
     --request PATCH \
     --data '[
       {
          "operation": "add",
          "field": "/members/-",
          "value": [ {"_ref" : "managed/user/scarter"} ]
       }
     ]' \
     "https://localhost:8443/openidm/managed/role/employee"
    {
      "_id": "employee",
      "_rev": "3",
      "name": "employee",
      "description": "Role assigned to workers on the company payroll"
    }

    Note that the members attribute of a role is not returned by default in the output. To show all members of a role, you must specifically request the relationship properties in your query. The following sample command lists the members of the employee role (currently only scarter):

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request GET \
     "https://localhost:8443/openidm/managed/role/employee?_fields=*_ref"
    {
      "_id": "employee",
      "_rev": "3",
      "members": [
        {
          "_ref": "managed/user/scarter",
          "_refProperties": {
            "_id": "44f7062a-62b5-4d8a-8ada-cea1881bc68a",
            "_rev": "5"
          }
        }
      ],
      "assignments": []
    }

8.4.1.4. Querying the Roles Assigned to a User

The easiest way to check what roles are assigned to a managed user object is to look at that object in the Admin UI. Select Manage > User and click on the user whose role or roles you want to see.

To obtain a list of roles assigned to a user, over the REST interface, you can query the user's roles property. The following sample query shows that bjensen has one assigned role, managed/role/employee:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user/bjensen/roles?_queryFilter=true"
{
  "result": [
    {
      "_ref": "managed/role/employee",
      "_refProperties": {
        "_id": "daf12ce0-b059-4c07-b364-c9e3b1d2255f",
        "_rev": "5"
      }
    }
  ],
  "resultCount": 1,
  "pagedResultsCookie": null,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}

8.4.1.5. Deleting a User's Managed Roles

Exactly like assigning roles, you can remove a user's managed roles in two ways:

  • Update the value of the user object's roles property (if the role is a provisioning role) or authzRoles property (if the role is an authorization role).

  • Update the value of the role object's members property to remove the reference to that user object.

Both of these actions can be achieved by using the Admin UI, or over REST.

Using the Admin UI

Use one of the following methods to remove a user's managed roles:

  • Select Manage > User and click on the user whose role or roles you want to remove.

    Select the Provisioning Roles tab, click the X icon next to the role that you want to remove, and click Save.

  • Select Manage > Role and click on the role whose members you want to remove.

    Click the Users tab, select the users whose membership you want to remove and click Remove Users.

Over the REST interface

Use one of the following methods to remove a role from a user object:

  • Update the user object to remove the reference to the role object.

    The following sample command removes the employee role from user scarter:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --request PATCH \
     --data '[
       {
          "operation": "remove",
          "field": "/roles/0"
       }
     ]' \
     "https://localhost:8443/openidm/managed/user/scarter"
    {
      "_id": "scarter",
      "_rev": "6",
      "mail": "scarter@example.com",
      "givenName": "Steven",
      "sn": "Carter",
      "userName": "scarter@example.com",
      "telephoneNumber": "1234567",
      "accountStatus": "active",
      "effectiveRoles": [],
      "effectiveAssignments": [],
      "roles": []
    }

    Note that this command removes the role at index 0 of scarter's roles array. If scarter has more than one provisioning role, check the index of the role that you want to remove before you patch the user object, or remove the role by updating the role object's members property instead.

  • Update the role object to remove the reference to the user object.

    The following sample command removes scarter's membership from the employee role:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --request PATCH \
     --data '[
       {
          "operation": "replace",
          "field": "/members",
          "value": []
       }
     ]' \
     "https://localhost:8443/openidm/managed/role/employee"
    {
      "_id": "employee",
      "_rev": "3",
      "name": "employee",
      "description": "Role assigned to workers on the company payroll"
    }

8.4.1.6. Deleting a Role Definition

You can delete a managed provisioning or authorization role by using the Admin UI, or over the REST interface.

To delete a role by using the Admin UI, select Manage > Role, select the role you want to remove, and click Delete.

To delete a managed role over the REST interface, simply delete that managed object. The following command deletes the employee role created in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "https://localhost:8443/openidm/managed/role/employee"
{
  "_id": "employee",
  "_rev": "5",
  "name": "employee",
  "description": "Role assigned to workers on the company payroll",
  "assignments": []
}

Note

You cannot delete a role if it is currently assigned to one or more managed users. If you attempt to delete a role that is assigned to a user (either over the REST interface, or by using the Admin UI), OpenIDM returns an error. The following command indicates an attempt to remove the employee role while it is still assigned to user scarter:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "https://localhost:8443/openidm/managed/role/employee"
{
    "code":409,
    "reason":"Conflict",
    "message":"Cannot delete a role that is currently assigned"
 }

8.4.2. Working With Role Assignments

Authorization roles control access to OpenIDM itself. Provisioning roles define rules for how attribute values are updated on external systems. These rules are configured through assignments that are attached to a provisioning role definition. The purpose of an assignment is to provision an attribute or set of attributes, based on an object's role membership.

The synchronization mapping configuration between two resources (defined in the sync.json file) provides the basic account provisioning logic (how an account is mapped from a source to a target system). Role assignments provide additional provisioning logic that is not covered in the basic mapping configuration. The attributes and values that are updated by using assignments might include group membership, access to specific external resources, and so on. A group of assignments can collectively represent a role.

Assignment objects are created, updated and deleted like any other managed object, and are attached to a role by using the relationships mechanism, in much the same way as a role is assigned to a user. Assignment objects are stored in the repository and are accessible at the context path /openidm/managed/assignment.

This section describes how to manipulate managed assignments over the REST interface, and by using the Admin UI. When you have created an assignment, and attached it to a role definition, all user objects that reference that role definition will, as a result, reference the corresponding assignment in their effectiveAssignments attribute.

8.4.2.1. Creating an Assignment Object

The easiest way to create a new managed assignment is by using the Admin UI, as follows:

  1. Select Manage > Assignment and click New Assignment on the Assignment List page.

  2. Enter a name and description for the new assignment, and select the mapping to which the assignment should apply. The mapping indicates the target resource, that is, the resource on which the attributes specified in the assignment will be adjusted.

  3. Click Add Assignment.

  4. Select the Attributes tab and select the attribute or attributes whose values will be adjusted by this assignment. In the text field, specify what the value of the attribute should be, when this assignment is applied.

  5. Select the assignment operation from the dropdown list:

    • Merge With Target - the attribute value will be added to any existing values for that attribute. This operation first merges the value of the source object attribute with the existing target attribute, then adds the value(s) from the assignment. If duplicate values are found (for attributes that take a list as a value), each value is included only once in the resulting target. The mergeWithTarget assignment operation is used only with complex attribute values like arrays and objects, and does not work with strings or numbers.

    • Remove From Target - the attribute value will be removed from the existing value or values for that attribute.

    • Replace Target - the attribute value will overwrite any existing values for that attribute. The value from the assignment becomes the authoritative source for the attribute.

    • No Operation - the assignment will not affect the attribute value on the target system.

    Select the unassignment operation from the dropdown list. Currently, only Remove From Target is supported, which means that the attribute value is removed from the system object when the user is no longer a member of the role, or when the assignment itself is removed from the role definition.

  6. Optionally, click the Events tab to specify any scriptable events associated with this assignment.

    The assignment and unassignment operations described in the previous step operate at the attribute level. That is, you specify what should happen with each attribute affected by the assignment when the assignment is applied to a user, or removed from a user.

    The scriptable On assignment and On unassignment events operate at the assignment level, rather than the attribute level. You define scripts here to apply additional logic or operations that should be performed when a user (or other object) receives or loses an entire assignment. This logic can be anything that is not restricted to an operation on a single attribute.

To create a new managed assignment over REST, send a PUT or POST request to the /openidm/managed/assignment context path.

The following example creates a new managed assignment named employee. The JSON payload in this example shows the following:

  • The assignment is applied for the mapping managedUser_systemLdapAccounts, so attributes will be updated on the external LDAP system specified in this mapping.

  • The name of the attribute on the external system whose value will be set is employeeType and its value will be set to Employee.

  • When the assignment is applied during a sync operation, the attribute value Employee will be added to any existing values for that attribute. When the assignment is removed (if the role is deleted, or if the managed user is no longer a member of that role), the attribute value Employee will be removed from the values of that attribute.

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header "If-None-Match: *" \
 --request PUT \
 --data '{
   "name" : "employee",
   "description": "Assignment for employees.",
   "mapping" : "managedUser_systemLdapAccounts",
   "attributes": [
       {
           "name": "employeeType",
           "value": "Employee",
           "assignmentOperation" : "mergeWithTarget",
           "unassignmentOperation" : "removeFromTarget"
       }
   ]
 }' \
 "https://localhost:8443/openidm/managed/assignment/employee"
{
  "_id": "employee",
  "_rev": "1",
  "name": "employee",
  "description": "Assignment for employees.",
  "mapping": "managedUser_systemLdapAccounts",
  "attributes": [
    {
      "name": "employeeType",
      "value": "Employee",
      "assignmentOperation": "mergeWithTarget",
      "unassignmentOperation": "removeFromTarget"
    }
  ]
}

8.4.2.2. Adding an Assignment to a Role

When you have created a managed role object, and a managed assignment object, you reference the assignment from the role, in much the same way as a user object references a role.

You can update a role definition to include one or more assignments, either by using the Admin UI, or over the REST interface.

Using the Admin UI

Select Manage > Role and click on the role to which you want to add an assignment.

Select the Managed Assignments tab, then select the assignment that you want to add to the role and click Save.

Over the REST interface

Update the role definition to include a reference to the assignment in the assignments property of the role. The following sample command adds the employee assignment to the employee role that was created in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
   {
       "operation" : "add",
       "field" : "/assignments/-",
       "value" : { "_ref": "managed/assignment/employee" }
   }
 ]' \
 "https://localhost:8443/openidm/managed/role/employee"
{
  "_id": "employee",
  "_rev": "2",
  "name": "employee",
  "description": "Role assigned to workers on the company payroll",
  "assignments": [
    {
      "_ref": "managed/assignment/employee",
      "_refProperties": {
        "_id": "e72544a7-7aa6-4c5f-baf5-eec4781f710d",
        "_rev": "1"
      }
    }
  ]
}

To remove an assignment from a role definition, remove the reference to the assignment object from the role's assignments property.
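
The following sketch shows how you might remove the employee assignment from the employee role over REST, by sending a PATCH request with a remove operation (the array index 0 assumes that this assignment is the first, and only, entry in the role's assignments array):

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
   {
       "operation" : "remove",
       "field" : "/assignments/0"
   }
 ]' \
 "https://localhost:8443/openidm/managed/role/employee"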

8.4.2.3. Deleting a Managed Assignment

You can delete a managed assignment object by using the Admin UI, or over the REST interface.

To delete an assignment by using the Admin UI, select Manage > Assignment, select the assignment you want to remove, and click Delete.

To delete a managed assignment over the REST interface, simply delete that managed object. The following command deletes the employee assignment created in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "https://localhost:8443/openidm/managed/assignment/employee"
{
  "_id": "employee",
  "_rev": "1",
  "name": "employee",
  "description": "Assignment for employees.",
  "mapping": "managedUser_systemLdapAccounts",
  "attributes": [
    {
      "name": "employeeType",
      "value": "Employee",
      "assignmentOperation": "mergeWithTarget",
      "unassignmentOperation": "removeFromTarget"
    }
  ]
}

Note

You can delete an assignment, even if it is referenced by a managed role. When the assignment is removed, any users to whom the corresponding roles were assigned will no longer have that assignment in their list of effectiveAssignments. For more information about effective roles and effective assignments, see "Effective Roles and Effective Assignments".

8.4.3. Effective Roles and Effective Assignments

Effective roles and effective assignments are virtual properties of a managed user object. Their values are calculated on the fly by the openidm/bin/defaults/script/roles/effectiveRoles.js and openidm/bin/defaults/script/roles/effectiveAssignments.js scripts. These scripts are triggered every time a role definition is changed, an assignment is added or changed, or when a user is added to or removed from a role's list of members.

The following excerpt of a managed.json file shows how these two virtual properties are constructed for each managed user object:

{
    "name" : "effectiveRoles",
    "type" : "virtual",
    "onRetrieve" : {
        "type" : "text/javascript",
        "file" : "roles/effectiveRoles.js",
        "rolesPropName" : "roles"
    }
},
{
     "name" : "effectiveAssignments",
     "type" : "virtual",
     "onRetrieve" : {
          "type" : "text/javascript",
          "file" : "roles/effectiveAssignments.js",
          "effectiveRolesPropName" : "effectiveRoles"
     }
}  

When a role references an assignment, and a user object references the role, that user object automatically references the assignment in its list of effective assignments.

Do not change the default effectiveRoles.js and effectiveAssignments.js scripts. If you need to change the logic that calculates effectiveRoles and effectiveAssignments, create your own custom script and include a reference to it in your project's conf/managed.json file. For more information about using custom scripts, see "Scripting Reference".
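
For example, the following sketch of a managed.json excerpt references a hypothetical custom script, roles/myEffectiveRoles.js, in place of the default script (the file name is illustrative, and the script is assumed to be in a location that OpenIDM's script loader searches, such as your project's script/roles directory):

{
    "name" : "effectiveRoles",
    "type" : "virtual",
    "onRetrieve" : {
        "type" : "text/javascript",
        "file" : "roles/myEffectiveRoles.js",
        "rolesPropName" : "roles"
    }
},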

The effectiveRoles attribute lists the specific role definitions that are referenced in the user object's roles attribute. By default, the effective roles script supports only direct role assignments.

To set up a dynamic role assignment, you need a custom script that overrides the default effectiveRoles.js script. For more information, see "Adding Support for Dynamic Assignments".

The synchronization engine reads the calculated value of the effectiveAssignments attribute when it processes the managed user object. The target system is updated according to the configured assignmentOperation for each assignment.

By default, the effectiveRoles.js script uses the roles attribute of a user entry to derive the direct roles assigned to the user. The effectiveAssignments.js script uses the virtual effectiveRoles attribute from the user object to calculate that user's effective assignments.

When a role is assigned to a user entry, OpenIDM calculates the effectiveRoles and effectiveAssignments for that user from the definition of the role. The previous set of examples showed the creation of a role employee that referenced an assignment employee and was assigned to bjensen's user entry. Querying that user entry would show the following effective roles and effective assignments:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin"  \
 --request GET \
 "https://localhost:8443/openidm/managed/user/bjensen&_fields=userName,roles,effectiveRoles,effectiveAssignments"
{
  "result": [
    {
      "_rev": "4",
      "userName": "bjensen",
      "roles": [
        {
          "_ref": "managed/role/employee",
          "_refProperties": {
            "_id": "daf12ce0-b059-4c07-b364-c9e3b1d2255f",
            "_rev": "5"
          }
        }
      ],
      "effectiveRoles": [
        {
          "_ref": "managed/role/employee",
          "_refProperties": {
            "_id": "daf12ce0-b059-4c07-b364-c9e3b1d2255f",
            "_rev": "5"
          }
        }
      ],
      "effectiveAssignments": [
        {
          "name": "employee",
          "description": "Assignment for employees.",
          "mapping": "managedUser_systemLdapAccounts",
          "attributes": [
            {
              "name": "employeeType",
              "value": "employee",
              "assignmentOperation": "mergeWithTarget",
              "unassignmentOperation": "removeFromTarget"
            }
          ],
          "_id": "employee",
          "_rev": "2"
        }
      ]
    }
  ],
  "resultCount": 1,
  "pagedResultsCookie": null,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}

In this example, synchronizing the managed/user repository with the external LDAP system defined in the mapping should populate user bjensen's employeeType attribute in LDAP with the value Employee.

8.4.4. Adding Support for Dynamic Assignments

Although support for dynamic role assignments is not included in the default configuration, you can add such support with a custom script, as follows:

  1. Create a roles directory in your project's script directory and copy the default effective roles script to that new directory:

    $ mkdir project-dir/script/roles/
    $ cp /path/to/openidm/bin/defaults/script/roles/effectiveRoles.js \
     project-dir/script/roles/

    The new script will override the default effective roles script.

  2. Modify the effective roles script to include the references to the roles that you want to assign dynamically.

    For example, the following addition to the effectiveRoles.js script assigns the roles dynamic-role1 and dynamic-role2 to all active users (managed user objects whose accountStatus value is active). This example assumes that you have already created the managed roles, dynamic-role1 and dynamic-role2, and their corresponding assignments:

    // This is the location to expand to dynamic roles,
    // project role script return values can then be added via
    // effectiveRoles = effectiveRoles.concat(dynamicRolesArray);
    
    if (object.accountStatus === 'active') {
        effectiveRoles = effectiveRoles.concat([
          {"_ref": "managed/role/dynamic-role1"},
          {"_ref": "managed/role/dynamic-role2"}
        ]);
    }

  3. (Optional) To apply changed dynamic assignment rules to existing users, run a reconciliation operation on those users.

If you make any of the following changes to dynamic role assignments, you must perform a manual reconciliation of all affected users before the changes take effect:

  • You create a new dynamic role definition.

  • You change the definition of an existing dynamic role.

  • You change a dynamic assignment rule.

Alternatively, you can modify or synchronize a user entry, in which case, all dynamic role assignments are reassessed automatically.
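
For example, the following sketch launches a reconciliation run over REST, assuming the mapping name managedUser_systemLdapAccounts that is used elsewhere in this chapter:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"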

8.4.5. Managed Role Object Script Hooks

Like any other object, a managed role object has script hooks that enable you to configure role behavior. The default role object definition in conf/managed.json includes the following script hooks:

{
    "name" : "role",
    "onDelete" : {
        "type" : "text/javascript",
        "file" : "roles/onDelete-roles.js"
    },
    "onSync" : {
        "type" : "text/javascript",
        "source" : "require('roles/onSync-roles').syncUsersOfRoles(resourceName, oldObject, newObject, ['members']);"
    },
...

When a role object is deleted, the onDelete script hook calls the bin/defaults/script/roles/onDelete-roles.js script.

When a role object is synchronized, the onSync hook causes a synchronization operation on all managed objects that reference the role.

8.5. Managing Relationships Between Objects

OpenIDM enables you to define relationships between two managed objects. Managed roles are implemented using relationship objects, but you can create a variety of relationship objects, as required by your deployment.

8.5.1. Defining a Relationship Type

Relationship objects are defined in your project's managed object configuration file (conf/managed.json). By default, OpenIDM provides a relationship object named manager, which enables you to configure a management relationship between two managed user objects. The manager relationship object is a good example for understanding how relationship objects work.

The default manager relationship object is configured as follows:

"manager" : {
    "type" : "relationship",
    "returnByDefault" : false,
    "description" : "",
    "title" : "Manager",
    "viewable" : true,
    "searchable" : false,
    "properties" : {
        "_ref" : { "type" : "string" },
        "_refProperties": {
            "type": "object",
            "properties": {
                "_id": { "type": "string" }
            }
    }
},

All relationship objects have the following configurable properties:

type (string)

The object type. Must be relationship for a relationship object.

returnByDefault (boolean, true, false)

Specifies whether the relationship object should be returned in the result of a read or search query on the managed object that has the relationship, if it is not explicitly requested. By default, relationship objects are not returned, unless they are explicitly requested.

description (string, optional)

An optional string that provides additional information about the relationship object.

title (string)

Used by the UI to refer to the relationship object.

viewable (boolean, true, false)

Specifies whether the relationship object is visible as a field in the UI. The default value is true.

searchable (boolean, true, false)

Specifies whether values of the relationship object can be searched in the UI. For example, if you set this property to true on the manager relationship object, a user will be able to search for managed user entries using the manager field as a filter.

_ref (JSON object)

Specifies how the relationship between two managed objects is referenced.

In the relationship object definition, the value of this property is { "type" : "string" }. In a managed user entry, the value of the _ref property is the reference to the other resource. The _ref property is described in more detail in "Establishing a Relationship Between Two Objects".

_refProperties (JSON object)

Specifies any required properties from the relationship object that should be included in the managed object. The _refProperties field includes a unique ID (_id) and the revision (_rev) of the object. _refProperties can also contain arbitrary fields to support metadata within the relationship.
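
For example, the following sketch shows a manager reference in which _refProperties carries a hypothetical startDate metadata field (startDate is illustrative, not a built-in property):

"manager" : {
    "_ref" : "managed/user/bjensen",
    "_refProperties" : {
        "_id" : "e15779ad-be54-4a1c-b643-133dd9bb2e99",
        "_rev" : "1",
        "startDate" : "2016-04-01"
    }
}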

8.5.2. Establishing a Relationship Between Two Objects

When you have defined a relationship type (such as the manager relationship, described in the previous section), you can reference that relationship from a managed user object by using the _ref property.

For example, imagine that you are creating a new user, psmith, and that psmith's manager will be bjensen. You would add psmith's user entry, and reference bjensen's entry with the _ref property, as follows:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "If-None-Match: *" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
    "sn":"Smith",
    "userName":"psmith",
    "givenName":"Patricia",
    "displayName":"Patti Smith",
    "description" : "psmith - new user",
    "mail" : "psmith@example.com",
    "phoneNumber" : "0831245986",
    "password" : "Passw0rd",
    "manager" : {"_ref" : "managed/user/bjensen"}
  }' \
"https://localhost:8443/openidm/managed/user/psmith" 
{
  "_id": "psmith",
  "_rev": "1",
  "sn": "Smith",
  "userName": "psmith",
  "givenName": "Patricia",
  "displayName": "Patti Smith",
  "description": "psmith - new user",
  "mail": "psmith@example.com",
  "phoneNumber": "0831245986",
  "accountStatus": "active",
  "effectiveRoles": null,
  "effectiveAssignments": [],
  "roles": []
}

Note that the relationship information is not returned by default in the command-line output.

Any change to a relationship object triggers a synchronization operation on any other managed objects that are referenced by the relationship object. For example, OpenIDM maintains referential integrity by deleting the relationship reference, if the object referred to by that relationship is deleted. In our example, if bjensen's user entry is deleted, the corresponding reference in psmith's manager property is removed.

8.5.3. Validating Relationships Between Objects

Optionally, you can specify that a relationship between two objects must be validated when the relationship is created. For example, you can indicate that a user object cannot reference a role object, if that object does not exist.

When you create a new relationship type, validation is disabled by default, because validation entails a query to the referenced object that can be expensive if it is not required. To configure validation of a referenced relationship, set "validate": true in the object configuration (in managed.json). The managed.json files provided with OpenIDM enable validation for the following relationships:

  • For user objects ‒ roles, managers, and reports

  • For role objects ‒ members and assignments

  • For assignment objects ‒ roles

The following configuration of the manager relationship object enables validation, and prevents a user object from referencing a manager that has not already been created:

"manager" : {
    "type" : "relationship",
    ...
    "validate" : true,

8.5.4. Working With Bi-Directional Relationships

In some cases, it is useful to define a relationship between two objects in both directions. For example, a relationship between a user and his manager might indicate a reverse relationship between the manager and her direct report. Reverse relationships are particularly useful in querying. For example, you might want to query jdoe's user object to discover who his manager is, or query bjensen's user object to discover all the users who report to bjensen.

A reverse relationship is declared in the managed object configuration (conf/managed.json). Consider the following sample excerpt of the default managed object configuration:

"roles" : {
    "description" : "",
    "title" : "Provisioning Roles",
    ...
    "type" : "array",
    "items" : {
        "type" : "relationship",
        "validate": false,
        "reverseRelationship" : true,
        "reversePropertyName" : "members",
    ...

The roles object is a relationship object. So, you can refer to a managed user's roles by referencing the role object definition. However, the roles object is also a reverse relationship object ("reverseRelationship" : true) which means that you can list all user objects that reference that role object. In other words, you can list all members of the role. The members property is therefore the reversePropertyName.
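
For example, the following sketch lists the members of the employee role by requesting the reverse relationship property explicitly (this assumes the employee role exists and has members; like other relationship properties, members is not returned by default):

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/role/employee?_fields=members"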

8.5.5. Viewing Relationships Over REST

By default, information about relationships is not returned as the result of a GET request on a managed object. You must explicitly include the relationship property in the request, for example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user/psmith?_fields=manager"
{
  "_id": "psmith",
  "_rev": "1",
  "manager": {
    "_ref": "managed/user/bjensen",
    "_refProperties": {
      "_id": "e15779ad-be54-4a1c-b643-133dd9bb2e99",
      "_rev": "1"
    }
  }
}

To obtain more information about the referenced object (psmith's manager, in this case), you can include additional fields from the referenced object in the query, using the syntax object/property (for a simple string value) or object/*/property (for an array of values).

The following example returns the email address and contact number for psmith's manager:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user/psmith?_fields=manager/mail,manager/phoneNumber"
{
  "_id": "psmith",
  "_rev": "1",
  "phoneNumber": "1234567",
  "manager": {
    "_ref": "managed/user/bjensen",
    "_refProperties": {
      "_id": "e15779ad-be54-4a1c-b643-133dd9bb2e99",
      "_rev": "1"
    },
    "mail": "bjensen@example.com",
    "phoneNumber": "1234567"
  }
}

You can query all the relationships associated with a managed object by querying the reference (*_ref) property of the object. For example, the following query shows all the objects that are referenced by psmith's entry:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user/psmith?_fields=*_ref"
{
  "_id": "psmith",
  "_rev": "1",
  "roles": [],
  "authzRoles": [
    {
      "_ref": "repo/internal/role/openidm-authorized",
      "_refProperties": {
        "_id": "8e7b2c97-dfa8-4eec-a95b-b40b710d443d",
        "_rev": "1"
      }
    }
  ],
  "manager": {
    "_ref": "managed/user/bjensen",
    "_refProperties": {
      "_id": "3a246327-a972-4576-b6a6-7126df780029",
      "_rev": "1"
    }
  }
}

8.6. Running Scripts on Managed Objects

OpenIDM provides a number of hooks that enable you to manipulate managed objects using scripts. These scripts can be triggered during various stages of the lifecycle of the managed object, and are defined in the managed objects configuration file (managed.json).

The scripts can be triggered when a managed object is created (onCreate), updated (onUpdate), retrieved (onRetrieve), deleted (onDelete), validated (onValidate), or stored in the repository (onStore). A script can also be triggered when a change to a managed object triggers an implicit synchronization operation (onSync).

In addition, OpenIDM supports the use of post-action scripts for managed objects, including after the creation of an object is complete (postCreate), after the update of an object is complete (postUpdate), and after the deletion of an object (postDelete).
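
For example, the following sketch shows how an onCreate hook might be declared for an object type in managed.json, using a hypothetical inline script that sets a default description when an object is created (the script logic is illustrative, not part of the default configuration):

"onCreate" : {
    "type" : "text/javascript",
    "source" : "if (!object.description) { object.description = 'Created by OpenIDM'; }"
},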

The following sample extract of a managed.json file runs a script to calculate the effective assignments of a managed object, whenever that object is retrieved from the repository:

"effectiveAssignments" : {
    "type" : "array",
    "title" : "Effective Assignments",
    "viewable" : false,
    "returnByDefault" : true,
    "isVirtual" : true,
    "onRetrieve" : {
        "type" : "text/javascript",
        "file" : "roles/effectiveAssignments.js",
        "effectiveRolesPropName" : "effectiveRoles"
    },
    "items" : {
        "type" : "object"
    }
},

8.7. Encoding Attribute Values

OpenIDM supports two methods of encoding attribute values for managed objects - reversible encryption and the use of salted hashing algorithms. Attribute values that might be encoded include passwords, authentication questions, credit card numbers, and social security numbers. If passwords are already encoded on the external resource, they are generally excluded from the synchronization process. For more information, see "Managing Passwords".

You configure attribute value encoding, per schema property, in the managed object configuration (in your project's conf/managed.json file). The following sections show how to use reversible encryption and salted hash algorithms to encode attribute values.

8.7.1. Encoding Attribute Values With Reversible Encryption

The following excerpt of a managed.json file shows a managed object configuration that encrypts and decrypts the password attribute using the default symmetric key:

{
    "objects" : [
        {
            "name" : "user",
            ...
            "schema" : {
                ...
                "properties" : {
                    ...
                    "password" : {
                        "title" : "Password",
                        ...
                        "encryption" : {
                            "key" : "openidm-sym-default"
                        },
                        "scope" : "private",
         ...
        }
    ]
} 

Tip

To configure encryption of properties by using the Admin UI:

  1. Select Configure > Managed Objects, and click on the object type whose property values you want to encrypt (for example User).

  2. On the Properties tab, select the property whose value should be encrypted and select the Encrypt checkbox.

For information about encrypting attribute values from the command-line, see "Using the encrypt Subcommand".

8.7.2. Encoding Attribute Values by Using Salted Hash Algorithms

To encode attribute values with salted hash algorithms, add the secureHash property to the attribute definition, and specify the algorithm that should be used to hash the value. OpenIDM supports the following hash algorithms:

MD5
SHA-1
SHA-256
SHA-384
SHA-512

The following excerpt of a managed.json file shows a managed object configuration that hashes the values of the password attribute using the SHA-1 algorithm:

{
    "objects" : [
        {
            "name" : "user",
            ...
            "schema" : {
                ...
                "properties" : {
                    ...
                    "password" : {
                        "title" : "Password",
                        ...
                        "secureHash" : {
                            "algorithm" : "SHA-1"
                        },
                        "scope" : "private",
         ...
        }
    ]
} 

Tip

To configure hashing of properties by using the Admin UI:

  1. Select Configure > Managed Objects, and click on the object type whose property values you want to hash (for example User).

  2. On the Properties tab, select the property whose value must be hashed and select the Hash checkbox.

  3. Select the algorithm that should be used to hash the property value.

    OpenIDM supports the following hash algorithms:

    MD5
    SHA-1
    SHA-256
    SHA-384
    SHA-512

For information about hashing attribute values from the command-line, see "Using the secureHash Subcommand".

8.8. Restricting HTTP Access to Sensitive Data

You can protect specific sensitive managed data by marking the corresponding properties as private. Private data, whether it is encrypted or not, is not accessible over the REST interface. Properties that are marked as private are removed from an object when that object is retrieved over REST.

To mark a property as private, set its scope to private in the conf/managed.json file.

The following extract of the managed.json file shows how HTTP access is prevented on the password and securityAnswer properties:

{
    "objects": [
        {
            "name": "user",
            "schema": {
                "id" : "http://jsonschema.net",
                "title" : "User",
                ...
                "properties": {
                ...
                    {
                        "name": "securityAnswer",
                        "encryption": {
                            "key": "openidm-sym-default"
                        },
                        "scope" : "private"
                    },
                    {
                        "name": "password",
                        "encryption": {
                            "key": "openidm-sym-default"
                        }'
                        "scope" : "private"
                    }
         },
         ...
      }
   ]
}

Tip

To configure private properties by using the Admin UI:

  1. Select Configure > Managed Objects, and click on the object type whose property values you want to make private (for example User).

  2. On the Properties tab, select the property that must be private and select the Private checkbox.

A potential caveat with private properties is that they are removed if an object is updated by using an HTTP PUT request. A PUT request replaces the entire object in the repository. Because properties that are marked as private are ignored in HTTP requests, these properties are effectively removed from the object when the update is done. To work around this limitation, do not use PUT requests if you have configured private properties. Instead, use a PATCH request to update only those properties that need to be changed.

For example, to update the givenName of user jdoe, you could run the following command:

$ curl \
--cacert self-signed.crt \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-Type: application/json" \
--request POST \
--data '[
   {
   "operation":"replace",
   "field":"/givenName",
   "value":"Jon"
   }
]' \
"https://localhost:8443/openidm/managed/user?_action=patch&_queryId=for-userName&uid=jdoe"

Note

The filtering of private data applies only to direct HTTP read and query calls on managed objects. No automatic filtering is done for internal callers, which can choose what data to expose.

Chapter 9. Using Policies to Validate Data

OpenIDM provides an extensible policy service that enables you to apply specific validation requirements to various components and properties. This chapter describes the policy service, and provides instructions on configuring policies for managed objects.

The policy service provides a REST interface for reading policy requirements and validating the properties of components against configured policies. Objects and properties are validated automatically when they are created, updated, or patched. Policies are generally applied to user passwords, but can also be applied to any managed or system object, and to internal user objects.

The policy service enables you to accomplish the following tasks:

  • Read the configured policy requirements of a specific component.

  • Read the configured policy requirements of all components.

  • Validate a component object against the configured policies.

  • Validate the properties of a component against the configured policies.

The OpenIDM router service limits policy application to managed, system, and internal user objects. To apply policies to additional objects, such as the audit service, you must modify your project's conf/router.json file. For more information about the router service, see "Router Service Reference".

A default policy applies to all managed objects. You can configure this default policy to suit your requirements, or you can extend the policy service by supplying your own scripted policies.

9.1. Configuring the Default Policy for Managed Objects

Policies applied to managed objects are configured in two files:

  • A policy script file (openidm/bin/defaults/script/policy.js) that defines each policy and specifies how policy validation is performed. For more information, see "Understanding the Policy Script File".

  • A managed object policy configuration element, defined in your project's conf/managed.json file, that specifies which policies are applicable to each managed resource. For more information, see "Understanding the Policy Configuration Element".

    Note

    The configuration for determining which policies apply to resources other than managed objects is defined in your project's conf/policy.json file. The default policy.json file includes policies that are applied to internal user objects, but you can extend the configuration in this file to apply policies to system objects.

9.1.1. Understanding the Policy Script File

The policy script file (openidm/bin/defaults/script/policy.js) separates policy configuration into two parts:

  • A policy configuration object, which defines each element of the policy. For more information, see "Policy Configuration Objects".

  • A policy implementation function, which describes the requirements that are enforced by that policy.

Together, the configuration object and the implementation function determine whether an object is valid in terms of the applied policy. The following excerpt of a policy script file configures a policy that specifies that the value of a property must contain a certain number of capital letters:

...
{   "policyId" : "at-least-X-capitals",
    "policyExec" : "atLeastXCapitalLetters",
    "clientValidation": true,
    "validateOnlyIfPresent":true,
    "policyRequirements" : ["AT_LEAST_X_CAPITAL_LETTERS"]
},
...

policyFunctions.atLeastXCapitalLetters = function(fullObject, value, params, property) {
  var isRequired = _.find(this.failedPolicyRequirements, function (fpr) {
      return fpr.policyRequirement === "REQUIRED";
    }),
    isNonEmptyString = (typeof(value) === "string" && value.length),
    valuePassesRegexp = (function (v) {
      var test = isNonEmptyString ? v.match(/[(A-Z)]/g) : null;
      return test !== null && test.length >= params.numCaps;
    }(value));

  if ((isRequired || isNonEmptyString) && !valuePassesRegexp) {
    return [ { "policyRequirement" : "AT_LEAST_X_CAPITAL_LETTERS", "params" : {"numCaps": params.numCaps} } ];
  }

  return [];
}     
...

To enforce user passwords that contain at least one capital letter, the policyId from the preceding example is applied to the appropriate resource (managed/user/*). The required number of capital letters is defined in the policy configuration element of the managed object configuration file (see "Understanding the Policy Configuration Element").

9.1.1.1. Policy Configuration Objects

Each element of the policy is defined in a policy configuration object. The structure of a policy configuration object is as follows:

{
    "policyId" : "minimum-length",
    "policyExec" : "propertyMinLength",
    "clientValidation": true,
    "validateOnlyIfPresent": true,
    "policyRequirements" : ["MIN_LENGTH"]
}

  • policyId - a unique ID that enables the policy to be referenced by component objects.

  • policyExec - the name of the function that contains the policy implementation. For more information, see "Policy Implementation Functions".

  • clientValidation - indicates whether the policy decision can be made on the client. When "clientValidation": true, the source code for the policy decision function is returned when the client requests the requirements for a property.

  • validateOnlyIfPresent - indicates that the policy should be validated only if the property is present on the object.

  • policyRequirements - an array containing the policy requirement ID of each requirement that is associated with the policy. Typically, a policy will validate only one requirement, but it can validate more than one.

9.1.1.2. Policy Implementation Functions

Each policy ID has a corresponding policy implementation function that performs the validation. Implementation functions take the following form:

function <name>(fullObject, value, params, propName) {	
	<implementation_logic>
}

  • fullObject is the full resource object that is supplied with the request.

  • value is the value of the property that is being validated.

  • params refers to the params array that is specified in the property's policy configuration.

  • propName is the name of the property that is being validated.

The following example shows the implementation function for the required policy:

function required(fullObject, value, params, propName) {
    if (value === undefined) {
        return [ { "policyRequirement" : "REQUIRED" } ];
    }
    return [];
}      

9.1.2. Understanding the Policy Configuration Element

The configuration of a managed object property (in the managed.json file) can include a policies element that specifies how policy validation should be applied to that property. The following excerpt of the default managed.json file shows how policy validation is applied to the password and _id properties of a managed/user object:

{
    "objects" : [
        {
            "name" : "user",
            ...
            "schema" : {
                "id" : "http://jsonschema.net",
                ...
                "properties" : {
                    "_id" : {
                        "type" : "string",
                        "viewable" : false,
                        "searchable" : false,
                        "userEditable" : false,
                        "policies" : [
                            {
                                "policyId" : "cannot-contain-characters",
                                "params" : {
                                    "forbiddenChars" : ["/"]
                                }
                            }
                        ]
                    },
                    "password" : {
                        "type" : "string",
                        "viewable" : false,
                        "searchable" : false,
                        "minLength" : 8,
                        "userEditable" : true,
                        "policies" : [
                            {
                                "policyId" : "at-least-X-capitals",
                                "params" : {
                                    "numCaps" : 1
                                }
                            },
                            {
                                "policyId" : "at-least-X-numbers",
                                "params" : {
                                    "numNums" : 1
                                }
                            },
                            {
                                "policyId" : "cannot-contain-others",
                                "params" : {
                                    "disallowedFields" : [
                                        "userName",
                                        "givenName",
                                        "sn"
                                    ]
                                }
                            },
                            {
                                "policyId" : "re-auth-required",
                                "params" : {
                                    "exceptRoles" : [
                                        "system",
                                        "openidm-admin",
                                        "openidm-reg",
                                        "openidm-cert"
                                    ]
                                }
                            }
                        ]
                    },

Note that the policy for the _id property references the cannot-contain-characters policy, which is defined in the policy.js file. The policy for the password property references the at-least-X-capitals, at-least-X-numbers, cannot-contain-others, and re-auth-required policies, also defined in the policy.js file. The parameters that are passed to these policy functions (the number of capitals required, and so forth) are specified in the same element.

9.1.3. Configuring Policy Validation in the UI

The Admin UI provides rudimentary support for applying policy validation to managed object properties. To configure policy validation for a managed object type, update the configuration of the object type in the UI. For example, to specify validation policies for specific properties of managed user objects, select Configure > Managed Objects, then click on the User object. Scroll down to the bottom of the Managed Object configuration, then update or add a validation policy. The Policy field here refers to a function defined in the policy script file. For more information, see "Understanding the Policy Script File". You cannot define additional policy functions by using the UI.

9.2. Extending the Policy Service

You can extend the policy service by adding custom scripted policies, and by adding policies that are applied only under certain conditions.

9.2.1. Adding Custom Scripted Policies

If your deployment requires additional validation functionality that is not supplied by the default policies, you can add your own policy scripts to your project's script directory, and reference them from your project's conf/policy.json file.

Do not modify the default policy script file (openidm/bin/defaults/script/policy.js) as doing so might result in interoperability issues in a future release. To reference additional policy scripts, set the additionalFiles property in your project's conf/policy.json file.

The following example creates a custom policy that rejects properties with null values. The policy is defined in a script named mypolicy.js:

var policy = {   "policyId" : "notNull",
       "policyExec" : "notNull",  
       "policyRequirements" : ["NOT_NULL"]
}

addPolicy(policy);

function notNull(fullObject, value, params, property) {
   if (value == null) {
      var requireNotNull = [
        {"policyRequirement": "NOT_NULL"}
      ];
      return requireNotNull;
   }
   return [];
}  

The mypolicy.js policy is referenced in the policy.json configuration file as follows:

{
    "type" : "text/javascript",
    "file" : "bin/defaults/script/policy.js",
    "additionalFiles" : ["script/mypolicy.js"],
    "resources" : [
        {
...
   

9.2.2. Adding Conditional Policy Definitions

You can extend the policy service to support policies that are applied only under specific conditions. To apply a conditional policy to managed objects, add the policy to your project's managed.json file. To apply a conditional policy to other objects, add it to your project's policy.json file.

The following excerpt of a managed.json file shows a sample conditional policy configuration for the "password" property of managed user objects. The policy indicates that sys-admin users have a more lenient password policy than regular employees:

{
    "objects" : [
        {
            "name" : "user",
            ...
                "properties" : {
                ...
                    "password" : {
                        "title" : "Password",
                        "type" : "string",
                        ...
                        "conditionalPolicies" : [
                            {
                                "condition" : {
                                    "type" : "text/javascript",
                                    "source" : "(fullObject.org === 'sys-admin')"
                                },
                                "dependencies" : [ "org" ],
                                "policies" : [
                                    {
                                        "policyId" : "max-age",
                                        "params" : {
                                            "maxDays" : ["90"]
                                        }
                                    }
                                ]
                            },
                            {
                                "condition" : {
                                    "type" : "text/javascript",
                                    "source" : "(fullObject.org === 'employees')"
                                },
                                "dependencies" : [ "org" ],
                                "policies" : [
                                    {
                                        "policyId" : "max-age",
                                        "params" : {
                                            "maxDays" : ["30"]
                                        }
                                    }
                                ]
                            }
                        ],
                        "fallbackPolicies" : [
                            {
                                "policyId" : "max-age",
                                "params" : {
                                    "maxDays" : ["7"]
                                }
                            }
                        ]
            }

To understand how a conditional policy is defined, examine the components of this sample policy.

There are two distinct scripted conditions (defined in the condition elements). The first condition asserts that the user object is a member of the sys-admin org. If that assertion is true, the max-age policy is applied to the password attribute of the user object, and the maximum number of days that a password may remain unchanged is set to 90.

The second condition asserts that the user object is a member of the employees org. If that assertion is true, the max-age policy is applied to the password attribute of the user object, and the maximum number of days that a password may remain unchanged is set to 30.

In the event that neither condition is met (the user object is not a member of the sys-admin org or the employees org), an optional fallback policy can be applied. In this example, the fallback policy also references the max-age policy and specifies that for such users, their password must be changed after 7 days.

The dependencies field prevents the condition scripts from being run at all, if the user object does not include an org attribute.

Note

This example assumes that a custom max-age policy validation function has been defined, as described in "Adding Custom Scripted Policies".

9.3. Disabling Policy Enforcement

Policy enforcement is the automatic validation of data when it is created, updated, or patched. In certain situations you might want to disable policy enforcement temporarily. You might, for example, want to import existing data that does not meet the validation requirements with the intention of cleaning up this data at a later stage.

You can disable policy enforcement by setting openidm.policy.enforcement.enabled to false in your project's conf/boot/boot.properties file. This setting disables policy enforcement in the back-end only, and has no impact on direct policy validation calls to the Policy Service (which the UI makes to validate input fields). So, with policy enforcement disabled, data added directly over REST is not subject to validation, but data added with the UI is still subject to validation.
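
For example, a minimal sketch of the relevant line in conf/boot/boot.properties:

openidm.policy.enforcement.enabled=false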

You should not disable policy enforcement permanently in a production environment.

9.4. Managing Policies Over REST

You can manage the policy service over the REST interface, by calling the REST endpoint https://localhost:8443/openidm/policy, as shown in the following examples.

9.4.1. Listing the Defined Policies

The following REST call displays a list of all the policies defined in policy.json (policies for objects other than managed objects). The policy objects are returned in JSON format, with one object for each defined policy ID:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/policy"
{
  "_id": "",
  "resources": [
    {
      "resource": "repo/internal/user/*",
      "properties": [
        {
          "name": "_id",
          "policies": [
            {
              "policyId": "cannot-contain-characters",
              "params": {
                "forbiddenChars": [
                  "/"
                ]
              },
              "policyFunction": "\nfunction (fullObject, value, params, property)
...

To display the policies that apply to a specific resource, include the resource name in the URL. For example, the following REST call displays the policies that apply to managed users:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/policy/managed/user/*"
{
  "_id": "*",
  "resource": "managed/user/*",
  "properties": [
    {
      "name": "_id",
      "conditionalPolicies": null,
      "fallbackPolicies": null,
      "policyRequirements": [
        "CANNOT_CONTAIN_CHARACTERS"
      ],
      "policies": [
        {
          "policyId": "cannot-contain-characters",
          "params": {
            "forbiddenChars": [
              "/"
            ]
...

9.4.2. Validating Objects and Properties Over REST

To verify that an object adheres to the requirements of all applied policies, include the validateObject action in the request.

The following example verifies that a new managed user object is acceptable, in terms of the policy requirements:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
  "sn":"Jones",
  "givenName":"Bob",
  "_id":"bjones",
  "telephoneNumber":"0827878921",
  "passPhrase":null,
  "mail":"bjones@example.com",
  "accountStatus":"active",
  "userName":"bjones@example.com",
  "password":"123"
 }' \
 "https://localhost:8443/openidm/policy/managed/user/bjones?_action=validateObject"
{
  "result": false,
  "failedPolicyRequirements": [
    {
      "policyRequirements": [
        {
          "policyRequirement": "MIN_LENGTH",
          "params": {
            "minLength": 8
          }
        }
      ],
      "property": "password"
    },
    {
      "policyRequirements": [
        {
          "policyRequirement": "AT_LEAST_X_CAPITAL_LETTERS",
          "params": {
            "numCaps": 1
          }
        }
      ],
      "property": "password"
    }
  ]
}

The result (false) indicates that the object is not valid. The unfulfilled policy requirements are provided as part of the response - in this case, the user password does not meet the validation requirements.

Use the validateProperty action to verify that a specific property adheres to the requirements of a policy.

The following example checks whether Barbara Jensen's new password (12345) is acceptable:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{ "password" : "12345" }' \
 "https://localhost:8443/openidm/policy/managed/user/bjensen?_action=validateProperty"
{
  "result": false,
  "failedPolicyRequirements": [
    {
      "policyRequirements": [
        {
          "policyRequirement": "MIN_LENGTH",
          "params": {
            "minLength": 8
          }
        }
      ],
      "property": "password"
    },
    {
      "policyRequirements": [
        {
          "policyRequirement": "AT_LEAST_X_CAPITAL_LETTERS",
          "params": {
            "numCaps": 1
          }
        }
      ],
      "property": "password"
    }
  ]
}

The result (false) indicates that the password is not valid. The unfulfilled policy requirements are provided as part of the response - in this case, the minimum length and the minimum number of capital letters.

Validating a property that does fulfil the policy requirements returns a true result, for example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{ "password" : "1NewPassword" }' \
 "https://localhost:8443/openidm/policy/managed/user/bjensen?_action=validateProperty"
{
  "result": true,
  "failedPolicyRequirements": []
}

Chapter 10. Configuring Server Logs

In this chapter, you will learn about server logging, that is, the messages that OpenIDM logs related to server activity.

Server logging is separate from auditing. Auditing logs activity on the OpenIDM system, such as access and synchronization. For information about audit logging, see "Using Audit Logs". To configure server logging, edit the logging.properties file in your project-dir/conf directory.

10.1. Log Message Files

The default configuration writes log messages in simple format to openidm/logs/openidm*.log files, rotating files when the size reaches 5 MB, and retaining up to 5 files. Also by default, OpenIDM writes all system and custom log messages to the files.

You can modify these limits in the following properties in the logging.properties file for your project:

# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit = 5242880

# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count = 5
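
To watch messages as they are written, you can follow the current log file from the command line, for example (the file name openidm0.log assumes the default naming pattern):

$ tail -f /path/to/openidm/logs/openidm0.log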

10.2. Specifying the Logging Level

By default, OpenIDM logs messages at the INFO level. This logging level is specified with the following global property in conf/logging.properties:

.level=INFO

You can specify separate logging levels for individual server features, which override the global logging level. Set the log level, per package, to one of the following:

SEVERE (highest value)
WARNING
INFO
CONFIG
FINE
FINER
FINEST (lowest value)

For example, the following setting decreases the messages logged by the embedded PostgreSQL database:

# reduce the logging of embedded postgres since it is very verbose
ru.yandex.qatools.embed.postgresql.level = SEVERE

Set the log level to OFF to disable logging completely (see "Disabling Logs"), or to ALL to capture all possible log messages.

If you use logger functions in your JavaScript scripts, set the log level for the scripts as follows:

org.forgerock.openidm.script.javascript.JavaScript.level=level

You can override the log level settings, per script, with the following setting:

org.forgerock.openidm.script.javascript.JavaScript.script-name.level
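
For example, the following sketch sets the log level for a hypothetical script named mypolicy.js only (the script name is illustrative):

org.forgerock.openidm.script.javascript.JavaScript.mypolicy.js.level=FINE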

For more information about using logger functions in scripts, see "Logging Functions".

Important

It is strongly recommended that you do not log messages at the FINE or FINEST levels in a production environment. Although these levels are useful for debugging issues in a test environment, they can result in accidental exposure of sensitive data. For example, a password change patch request can expose the updated password in the Jetty logs.

10.3. Disabling Logs

You can also disable logs entirely. For example, before starting OpenIDM, you can disable ConsoleHandler logging in your project's conf/logging.properties file.

Set java.util.logging.ConsoleHandler.level = OFF, and comment out the other references to ConsoleHandler, as shown in the following excerpt:

   # ConsoleHandler: A simple handler for writing formatted records to System.err
   #handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
   handlers=java.util.logging.FileHandler
   ...
   # --- ConsoleHandler ---
   # Default: java.util.logging.ConsoleHandler.level = INFO
   java.util.logging.ConsoleHandler.level = OFF
   #java.util.logging.ConsoleHandler.formatter = ...
   #java.util.logging.ConsoleHandler.filter=...

Chapter 11. Connecting to External Resources

This chapter describes how to connect to external resources such as LDAP, Active Directory, flat files, and others. Configurations shown here are simplified to show essential aspects. Not all resources support all OpenIDM operations; however, the resources shown here support most of the CRUD operations, and also reconciliation and LiveSync.

In OpenIDM, resources are external systems, databases, directory servers, and other sources of identity data that are managed and audited by the identity management system. To connect to resources, OpenIDM loads the Identity Connector Framework, OpenICF. OpenICF aims to avoid the need to install agents to access resources, instead using the resources' native protocols. For example, OpenICF connects to database resources using the database's Java connection libraries or JDBC driver. It connects to directory servers over LDAP. It connects to UNIX systems by using ssh.

11.1. About OpenIDM and OpenICF

OpenICF provides a common interface to allow identity services access to the resources that contain user information. OpenIDM loads the OpenICF API as one of its OSGi modules. OpenICF uses connectors to separate the OpenIDM implementation from the dependencies of the resource to which OpenIDM is connecting. A specific connector is required for each remote resource. Connectors can run either locally or remotely.

Local connectors are loaded by OpenICF as regular bundles in the OSGi container. Remote connectors must be executed on a remote connector server. Most connectors can be run locally. However, a remote connector server is required when a connector needs access libraries that cannot be included as part of the OpenIDM process. If a resource, such as Microsoft Active Directory, does not provide a connection library that can be included inside the Java Virtual Machine, OpenICF can use the native .dll with a remote .NET connector server. In other words, OpenICF connects to Active Directory through a remote connector server that is implemented as a .NET service.

Connections to remote connector servers are configured in a single connector info provider configuration file, located in your project's conf/ directory.

Connectors themselves are configured through provisioner files. One provisioner file must exist for each connector. Provisioner files are named provisioner.openicf-name where name corresponds to the name of the connector, and are also located in the conf/ directory.

A number of sample connector configurations are available in the openidm/samples/provisioners directory. To use these connectors, edit the configuration files as required, and copy them to your project's conf/ directory.

The following figure shows how OpenIDM connects to resources by using connectors and remote connector servers. The figure shows one local connector (LDAP) and two remote connectors (Scripted SQL and PowerShell). In this example, the remote Scripted SQL connector uses a remote Java connector server. The remote PowerShell connector always requires a remote .NET connector server.

How OpenIDM Uses the OpenICF Framework and Connectors
OpenICF architecture

Tip

Connectors that use the .NET framework must run remotely. Java connectors can be run locally or remotely. You might run a Java connector remotely for security reasons (firewall constraints), for geographical reasons, or if the JVM version that is required by the connector conflicts with the JVM version that is required by OpenIDM.

11.2. Accessing Remote Connectors

When you configure a remote connector, you use the connector info provider service to connect through a remote connector server. The connector info provider service configuration is stored in the file project-dir/conf/provisioner.openicf.connectorinfoprovider.json. A sample configuration file is provided in the openidm/samples/provisioners/ directory. To use this sample configuration, edit the file as required, and copy it to your project's conf/ directory.

The sample connector info provider configuration is as follows:

{
   "remoteConnectorServers" :
      [
         {
            "name" : "dotnet",
            "host" : "127.0.0.1",
            "port" : 8759,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
         }
      ]
}

You can configure the following remote connector server properties:

name

string, required

The name of the remote connector server object. This name is used to identify the remote connector server in the list of connector reference objects.

host

string, required

The remote host to connect to.

port

integer, optional

The remote port to connect to. The default remote port is 8759.

heartbeatInterval

integer, optional

The interval, in seconds, at which heartbeat packets are transmitted. If the connector server is unreachable based on this heartbeat interval, all services that use the connector server are made unavailable until the connector server can be reached again. The default interval is 60 seconds.
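For example, the following sketch, based on the sample configuration shown earlier, lowers the heartbeat interval for the dotnet connector server to 30 seconds:

{
   "remoteConnectorServers" :
      [
         {
            "name" : "dotnet",
            "host" : "127.0.0.1",
            "port" : 8759,
            "heartbeatInterval" : 30,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
         }
      ]
}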

useSSL

boolean, optional

Specifies whether to connect to the connector server over SSL. The default value is false.

timeout

integer, optional

Specifies the timeout (in milliseconds) to use for the connection. The default value is 0, which means that there is no timeout.

protocol

string

Version 1.5.0.0 of the OpenICF framework supports a new communication protocol with remote connector servers. This protocol is enabled by default, and its value is websocket in the default configuration.

For compatibility reasons, you might want to enable the legacy protocol for specific remote connectors. For example, if you deploy the connector server on a Java 5 or 6 JVM, you must use the old protocol. In this case, remove the protocol property from the connector server configuration.

For the .NET connector server, the service with the new protocol listens on port 8759 and the service with the legacy protocol listens on port 8760 by default.

For the Java connector server, the service listens on port 8759 by default, for both the new and legacy protocols. The new protocol runs by default. To run the service with the legacy protocol, change the main class that is executed in the ConnectorServer.sh or ConnectorServer.bat file:

  • The class that starts the websocket protocol is MAIN_CLASS=org.forgerock.openicf.framework.server.Main.

  • The class that starts the legacy protocol is MAIN_CLASS=org.identityconnectors.framework.server.Main.

To change the port on which the Java connector server listens, change the connectorserver.port property in the openicf/conf/ConnectorServer.properties file.

Caution

Currently, the new, default protocol has specific known issues. You should therefore run the 1.5 .NET Connector Server in legacy mode, with the old protocol, as described in "Running the .NET Connector Server in Legacy Mode".

key

string, required

The secret key, or password, to use to authenticate to the remote connector server.

To run a connector remotely, you must copy the connector .jar itself to the openicf/bundles directory on the remote machine.

11.3. Configuring Connectors

Connectors are configured through the OpenICF provisioner service. Each connector configuration is stored in a file in your project's conf/ directory, and is accessible over REST at the openidm/config endpoint. Configuration files are named project-dir/conf/provisioner.openicf-name where name corresponds to the name of the connector. A number of sample connector configurations are available in the openidm/samples/provisioners directory. To use these connector configurations, edit the configuration files as required, and copy them to your project's conf directory.

If you are creating your own connector configuration files, do not include additional dash characters ( - ) in the connector name, as this might cause problems with the OSGi parser. For example, the name provisioner.openicf-hrdb.json is fine. The name provisioner.openicf-hr-db.json is not.

The following example shows a connector configuration for an XML file resource:

{
 "name"                      : "xml",
 "connectorRef"              : connector-ref-object,
 "producerBufferSize"        : integer,
 "connectorPoolingSupported" : boolean, true/false,
 "poolConfigOption"          : pool-config-option-object,
 "operationTimeout"          : operation-timeout-object,
 "configurationProperties"   : configuration-properties-object,
 "syncFailureHandler"        : sync-failure-handler-object,
 "resultsHandlerConfig"      : results-handler-config-object,
 "objectTypes"               : object-types-object,
 "operationOptions"          : operation-options-object
}

The name property specifies the name of the system to which you are connecting. This name must be alphanumeric.

11.3.1. Setting the Connector Reference Properties

The following example shows a connector reference object:

{
  "bundleName"       : "org.forgerock.openicf.connectors.xml-connector",
  "bundleVersion"    : "1.1.0.2",
  "connectorName"    : "org.forgerock.openicf.connectors.xml.XMLConnector",
  "connectorHostRef" : "host"
}
bundleName

string, required

The ConnectorBundle-Name of the OpenICF connector.

bundleVersion

string, required

The ConnectorBundle-Version of the OpenICF connector. The value can be a single version (such as 1.4.0.0) or a range of versions, which enables you to support multiple connector versions in a single project.

You can specify a range of versions as follows:

  • [1.1.0.0,1.4.0.0] indicates that all connector versions from 1.1 to 1.4, inclusive, are supported.

  • [1.1.0.0,1.4.0.0) indicates that all connector versions from 1.1 to 1.4, including 1.1 but excluding 1.4, are supported.

  • (1.1.0.0,1.4.0.0] indicates that all connector versions from 1.1 to 1.4, excluding 1.1 but including 1.4, are supported.

  • (1.1.0.0,1.4.0.0) indicates that all connector versions from 1.1 to 1.4, exclusive, are supported.

When a range of versions is specified, OpenIDM uses the latest connector that is available within that range. If your project requires a specific connector version, you must explicitly state the version in your connector configuration file, or constrain the range to address only the version that you need.

connectorName

string, required

The connector implementation class name.

connectorHostRef

string, optional

If the connector runs remotely, the value of this field must match the name field of the RemoteConnectorServers object in the connector server configuration file (provisioner.openicf.connectorinfoprovider.json). For example:

...
    "remoteConnectorServers" :
        [
            {
                "name" : "dotnet",
...

If the connector runs locally, the value of this field can be one of the following:

  • If the connector .jar is installed in openidm/connectors/, the value must be "#LOCAL". This is currently the default and recommended location.

  • If the connector .jar is installed in openidm/bundle/ (not recommended), the value must be "osgi:service/org.forgerock.openicf.framework.api.osgi.ConnectorManager".

11.3.2. Setting the Pool Configuration

The poolConfigOption specifies the pool configuration for poolable connectors only (connectors that have "connectorPoolingSupported" : true). Non-poolable connectors ignore this parameter.

The following example shows a pool configuration option object for a poolable connector:

{
  "maxObjects"                 : 10,
  "maxIdle"                    : 10,
  "maxWait"                    : 150000,
  "minEvictableIdleTimeMillis" : 120000,
  "minIdle"                    : 1
}
maxObjects

The maximum number of idle and active instances of the connector.

maxIdle

The maximum number of idle instances of the connector.

maxWait

The maximum time, in milliseconds, that the pool waits for an object before timing out. A value of 0 means that there is no timeout.

minEvictableIdleTimeMillis

The maximum time, in milliseconds, that an object can be idle before it is removed. A value of 0 means that there is no idle timeout.

minIdle

The minimum number of idle instances of the connector.

11.3.3. Setting the Operation Timeouts

The operation timeout property enables you to configure timeout values per operation type. By default, no timeout is configured for any operation type. A sample configuration follows:

{
  "CREATE"              : -1,
  "TEST"                : -1,
  "AUTHENTICATE"        : -1,
  "SEARCH"              : -1,
  "VALIDATE"            : -1,
  "GET"                 : -1,
  "UPDATE"              : -1,
  "DELETE"              : -1,
  "SCRIPT_ON_CONNECTOR" : -1,
  "SCRIPT_ON_RESOURCE"  : -1,
  "SYNC"                : -1,
  "SCHEMA"              : -1
}
operation-name

Timeout in milliseconds

A value of -1 disables the timeout.
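For example, to allow search operations at most 10 seconds while leaving the other operations without a timeout, you might set only the SEARCH value (a sketch; omitted operations keep the default, which is no timeout):

{
  "SEARCH" : 10000,
  ...
}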

11.3.4. Setting the Connection Configuration

The configurationProperties object specifies the configuration for the connection between the connector and the resource, and is therefore resource specific.

The following example shows a configuration properties object for the default XML sample resource connector:

"configurationProperties" : {
    "xsdIcfFilePath" : "&{launcher.project.location}/data/resource-schema-1.xsd",
    "xsdFilePath" : "&{launcher.project.location}/data/resource-schema-extension.xsd",
    "xmlFilePath" : "&{launcher.project.location}/data/xmlConnectorData.xml"
}
property

Individual properties depend on the type of connector.

11.3.5. Setting the Synchronization Failure Configuration

The syncFailureHandler object specifies what should happen if a LiveSync operation reports a failure for an operation. The following example shows a synchronization failure configuration:

{
    "maxRetries" : 5,
    "postRetryAction" : "logged-ignore"
}  
maxRetries

positive integer or -1, required

The number of attempts that OpenIDM should make to process a failed modification. A value of zero indicates that failed modifications should not be reattempted. In this case, the post retry action is executed immediately when a LiveSync operation fails. A value of -1 (or omitting the maxRetries property, or the entire syncFailureHandler object) indicates that failed modifications should be retried an infinite number of times. In this case, no post retry action is executed.

postRetryAction

string, required

The action that should be taken if the synchronization operation fails after the specified number of attempts. The post retry action can be one of the following:

  • logged-ignore indicates that OpenIDM should ignore the failed modification, and log its occurrence.

  • dead-letter-queue indicates that OpenIDM should save the details of the failed modification in a table in the repository (accessible over REST at repo/synchronisation/deadLetterQueue/provisioner-name).

  • script specifies a custom script that should be executed when the maximum number of retries has been reached.
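For example, the following configuration retries a failed modification three times, then writes the details of the failure to the dead letter queue:

{
    "maxRetries" : 3,
    "postRetryAction" : "dead-letter-queue"
}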

For more information, see "Configuring the LiveSync Retry Policy".

11.3.6. Configuring How Results Are Handled

The resultsHandlerConfig object specifies how OpenICF returns results. These configuration properties depend on the connector type and on the interfaces that are implemented by that connector type. For information about the interfaces that each connector supports, see the OpenICF Connector Configuration Reference.

The following example shows a results handler configuration object:

{
    "enableNormalizingResultsHandler" : true,
    "enableFilteredResultsHandler" : false,
    "enableCaseInsensitiveFilter" : false,
    "enableAttributesToGetSearchResultsHandler" : false
}  
enableNormalizingResultsHandler

boolean

If the connector implements the attribute normalizer interface, you can enable this interface by setting this configuration property to true. If the connector does not implement the attribute normalizer interface, the value of this property has no effect.

enableFilteredResultsHandler

boolean

If the connector uses the filtering and search capabilities of the remote connected system, you can set this property to false. If the connector does not use the remote system's filtering and search capabilities (for example, the CSV file connector), you must set this property to true, otherwise the connector performs an additional, case-sensitive search, which can cause problems.

enableCaseInsensitiveFilter

boolean

By default, the filtered results handler (described previously) is case-sensitive. If the filtered results handler is enabled, you can use this property to enable case-insensitive filtering. If you do not enable case-insensitive filtering, a search will not return results unless the case matches exactly. For example, a search for lastName = "Jensen" will not match a stored user with lastName : jensen.

enableAttributesToGetSearchResultsHandler

boolean

By default, OpenIDM determines which attributes should be retrieved in a search. If the enableAttributesToGetSearchResultsHandler property is set to true, the OpenICF framework removes all attributes from the READ/QUERY response, except for those that are specifically requested. For performance reasons, you should set this property to false for local connectors and to true for remote connectors.
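The following sketch shows a results handler configuration for a local connector that does not delegate filtering to the remote system (such as the CSV file connector), with case-insensitive filtering enabled:

{
    "enableNormalizingResultsHandler" : false,
    "enableFilteredResultsHandler" : true,
    "enableCaseInsensitiveFilter" : true,
    "enableAttributesToGetSearchResultsHandler" : false
}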

11.3.7. Specifying the Supported Object Types

The object-types configuration specifies the objects (user, group, and so on) that are supported by the connector. The property names set here define the objectType that is used in the URI. For example:

system/systemName/objectType
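For example, for a connector configuration named ldap that defines an account object type, the endpoint is:

system/ldap/account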

This configuration is based on the JSON Schema with the extensions described in the following section.

Attribute names that start or end with __ are regarded as special attributes. These attributes are specific to the resource type and are used by OpenICF for particular purposes, such as __NAME__, used as the naming attribute for objects on a resource.

The following excerpt shows the configuration of an account object type:

{
  "account" :
  {
    "$schema" : "http://json-schema.org/draft-03/schema",
    "id" : "__ACCOUNT__",
    "type" : "object",
    "nativeType" : "__ACCOUNT__",
    "properties" :
    {
      "name" :
      {
        "type" : "string",
        "nativeName" : "__NAME__",
        "nativeType" : "JAVA_TYPE_PRIMITIVE_LONG",
        "flags" :
        [
          "NOT_CREATABLE",
          "NOT_UPDATEABLE",
          "NOT_READABLE",
          "NOT_RETURNED_BY_DEFAULT"
        ]
      },
      "groups" :
      {
        "type" : "array",
        "items" :
        {
          "type" : "string",
          "nativeType" : "string"
        },
        "nativeName" : "__GROUPS__",
        "nativeType" : "string",
        "flags" :
        [
          "NOT_RETURNED_BY_DEFAULT"
        ]
      },
      "givenName" :
      {
        "type" : "string",
        "nativeName" : "givenName",
        "nativeType" : "string"
      }
    }
  }
}

OpenICF supports an __ALL__ object type that ensures that objects of every type are included in a synchronization operation. The primary purpose of this object type is to prevent synchronization errors when multiple changes affect more than one object type.

For example, imagine a deployment synchronizing two external systems. On system A, the administrator creates a user, jdoe, then adds the user to a group, engineers. When these changes are synchronized to system B, if the __GROUPS__ object type is synchronized first, the synchronization will fail, because the group contains a user that does not yet exist on system B. Synchronizing the __ALL__ object type ensures that user jdoe is created on the external system before he is added to the group engineers.

The __ALL__ object type is assumed by default - you do not need to declare it in your provisioner configuration file. If it is not declared, the object type is named __ALL__. If you want to map a different name for this object type, declare it in your provisioner configuration. The following excerpt from a sample provisioner configuration uses the name allobjects:

"objectTypes": {
    "allobjects": {
        "$schema": "http://json-schema.org/draft-03/schema",
        "id": "__ALL__",
        "type": "object",
        "nativeType": "__ALL__"
    },
...

A LiveSync operation invoked with no object type assumes an object type of __ALL__. For example, the following call invokes a LiveSync operation on all defined object types in an LDAP system:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/system/ldap?_action=liveSync"

Note

Using the __ALL__ object type requires a mechanism to ensure the order in which synchronization changes are processed. Servers that use the cn=changelog mechanism to order sync changes (such as OpenDJ, Oracle DSEE, and the legacy Sun Directory Server) cannot use the __ALL__ object type by default, and must be forced to use time stamps to order their sync changes. For these LDAP server types, set useTimestampsForSync to true in the provisioner configuration.

LDAP servers that use timestamps by default (such as Active Directory GCs and OpenLDAP) can use the __ALL__ object type without any additional configuration. Active Directory and Active Directory LDS, which use Update Sequence Numbers, can also use the __ALL__ object type without additional configuration.
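For example, the following provisioner configuration excerpt (a sketch) forces such a server to order its sync changes by time stamp:

"configurationProperties" : {
    ...
    "useTimestampsForSync" : true
},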

11.3.7.1. Extending the Object Type Configuration

nativeType

string, optional

The native OpenICF object type.

The list of supported native object types is dependent on the resource, or on the connector. For example, an LDAP connector might have object types such as __ACCOUNT__ and __GROUP__.

11.3.7.2. Extending the Property Type Configuration

nativeType

string, optional

The native OpenICF attribute type.

The following native types are supported:

JAVA_TYPE_BIGDECIMAL
JAVA_TYPE_BIGINTEGER
JAVA_TYPE_BYTE
JAVA_TYPE_BYTE_ARRAY
JAVA_TYPE_CHAR
JAVA_TYPE_CHARACTER
JAVA_TYPE_DATE
JAVA_TYPE_DOUBLE
JAVA_TYPE_FILE
JAVA_TYPE_FLOAT
JAVA_TYPE_GUARDEDBYTEARRAY
JAVA_TYPE_GUARDEDSTRING
JAVA_TYPE_INT
JAVA_TYPE_INTEGER
JAVA_TYPE_LONG
JAVA_TYPE_OBJECT
JAVA_TYPE_PRIMITIVE_BOOLEAN
JAVA_TYPE_PRIMITIVE_BYTE
JAVA_TYPE_PRIMITIVE_DOUBLE
JAVA_TYPE_PRIMITIVE_FLOAT
JAVA_TYPE_PRIMITIVE_LONG
JAVA_TYPE_STRING

Note

The JAVA_TYPE_DATE property is deprecated. Functionality may be removed in a future release. This property-level extension is an alias for string. Any dates assigned to this extension should be formatted per ISO 8601.

nativeName

string, optional

The native OpenICF attribute name.

flags

string, optional

The native OpenICF attribute flags. OpenICF supports the following attribute flags:

  • MULTIVALUED - specifies that the property can be multivalued. This flag sets the type of the attribute as follows:

    "type" : "array"

    If the attribute type is array, an additional items field specifies the supported type for the objects in the array. For example:

    "groups" :
        {
            "type" : "array",
            "items" :
            {
              "type" : "string",
              "nativeType" : "string"
            },
        ....
  • NOT_CREATABLE, NOT_READABLE, NOT_RETURNED_BY_DEFAULT, NOT_UPDATEABLE

    In some cases, the connector might not support manipulating an attribute because the attribute can only be changed directly on the remote system. For example, if the name attribute of an account can only be created by Active Directory, and never changed by OpenIDM, you would add NOT_CREATABLE and NOT_UPDATEABLE to the provisioner configuration for that attribute.

    Certain attributes such as LDAP groups or other calculated attributes might be expensive to read. You might want to avoid returning these attributes in a default read of the object, unless they are explicitly requested. In this case, you would add the NOT_RETURNED_BY_DEFAULT flag to the provisioner configuration for that attribute.

  • REQUIRED - specifies that the property is required in create operations. This flag sets the required property of an attribute as follows:

    "required" : true

Note

Do not use the dash character ( - ) in property names, like last-name. Dashes in names make JavaScript syntax more complex. If you cannot avoid the dash, write source['last-name'] instead of source.last-name in your JavaScript scripts.

11.3.7.3. OpenICF Special Attributes

OpenICF includes a number of special attributes that all begin and end with __ (for example, __NAME__ and __UID__). These special attributes are essentially functional aliases for specific attributes or object types. They enable a connector developer to create a contract regarding how a property can be referenced, regardless of the application that is using the connector. In this way, the connector can map specific object information between an arbitrary application and the resource, without knowing how that information is referenced in the application.

The special attributes are used extensively in the generic LDAP connector, which can be used with OpenDJ, Active Directory, OpenLDAP, and other LDAP directories. Each of these directories might use a different attribute name to represent the same type of information. For example, Active Directory uses unicodePwd and OpenDJ uses userPassword to represent the same thing, a user's password. The LDAP connector uses the special OpenICF __PASSWORD__ attribute to abstract that difference.

For a list of the special attributes, see the corresponding Javadoc.

11.3.8. Configuring the Operation Options

The operationOptions object enables you to deny specific operations on a resource. For example, you can use this configuration object to deny CREATE and DELETE operations on a read-only resource to avoid OpenIDM accidentally updating the resource during a synchronization operation.

The following example defines the options for the "SYNC" operation:

"operationOptions" : {
  {
    "SYNC" :
    {
      "denied" : true,
      "onDeny" : "DO_NOTHING",
      "objectFeatures" :
      {
        "__ACCOUNT__" :
        {
          "denied" : true,
          "onDeny" : "THROW_EXCEPTION",
          "operationOptionInfo" :
          {
            "$schema" : "http://json-schema.org/draft-03/schema",
            "id" : "FIX_ME",
            "type" : "object",
            "properties" :
            {
              "_OperationOption-float" :
              {
                 "type" : "number",
                 "nativeType" : "JAVA_TYPE_PRIMITIVE_FLOAT"
              }
            }
          }
        },
        "__GROUP__" :
        {
          "denied" : false,
          "onDeny" : "DO_NOTHING"
        }
      }
    }
  }
...

The OpenICF framework supports the following operations: AUTHENTICATE, CREATE, DELETE, GET, SCHEMA, SCRIPT_ON_CONNECTOR, SCRIPT_ON_RESOURCE, SEARCH, SYNC, TEST, UPDATE, and VALIDATE.

The operationOptions object has the following configurable properties:

denied

boolean, optional

This property prevents operation execution if the value is true.

onDeny

string, optional

If denied is true, the service uses this value to determine its response. Default value: DO_NOTHING.

  • DO_NOTHING: when the operation is attempted, the service does nothing.

  • THROW_EXCEPTION: when the operation is attempted, the service throws a ForbiddenException.
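For example, the following sketch denies create and delete operations on a read-only resource, using only the properties described above:

"operationOptions" : {
    "CREATE" :
    {
        "denied" : true,
        "onDeny" : "THROW_EXCEPTION"
    },
    "DELETE" :
    {
        "denied" : true,
        "onDeny" : "THROW_EXCEPTION"
    }
}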

11.4. Installing and Configuring Remote Connector Servers

Connectors that use the .NET framework must run remotely. Java connectors can run locally or remotely. Connectors that run remotely require a connector server to enable OpenIDM to access the connector.

For a list of supported versions, and compatibility between versions, see "Supported Connectors, Connector Servers, and Plugins" in the Release Notes.

This section describes the steps to install a .NET connector server and a remote Java Connector Server.

11.4.1. Installing and Configuring a .NET Connector Server

A .NET connector server is useful when an application is written in Java, but a connector bundle is written using C#. Because a Java application (for example, a J2EE application) cannot load C# classes, you must deploy the C# bundles under a .NET connector server. The Java application can communicate with the C# connector server over the network, and the C# connector server acts as a proxy to provide access to the C# bundles that are deployed within the C# connector server, to any authenticated application.

By default, the connector server outputs log messages to a file named connectorserver.log, in the C:\path\to\openicf directory. To change the location of the log file, set the initializeData parameter in the configuration file before you install the connector server. For example, the following excerpt sets the log file to C:\openicf\logs\connectorserver.log:

<add name="file"
   type="System.Diagnostics.TextWriterTraceListener"
   initializeData="C:\openicf\logs\connectorserver.log"
   traceOutputOptions="DateTime">
     <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information"/>
</add>

Important

Version 1.5 of the .NET connector server includes a new communication protocol that is enabled by default. Currently the new protocol has specific known stability issues. You should therefore run the 1.5 .NET connector server in legacy mode, with the old protocol, as described in "Running the .NET Connector Server in Legacy Mode".

Installing the .NET Connector Server
  1. Download the OpenICF .NET Connector Server from the ForgeRock BackStage site.

    The .NET connector server is distributed in two formats. The .msi file is a wizard that installs the Connector Server as a Windows Service. The .zip file is simply a bundle of all the files required to run the Connector Server.

    • If you do not want to run the Connector Server as a Windows service, download and extract the .zip file, then move on to "Configuring the .NET Connector Server".

    • If you have deployed the .zip file and then decide to run the Connector Server as a service, install the service manually with the following command:

      .\ConnectorServerService.exe /install /serviceName service-name

      Then proceed to "Configuring the .NET Connector Server".

    • To install the Connector Server as a Windows service automatically, follow the remaining steps in this section.

  2. Execute the openicf-zip--dotnet.msi installation file and complete the wizard.

    You must run the wizard as a user who has permissions to start and stop a Windows service, otherwise the service will not start.

    When you choose the Setup Type, select Typical unless you require backward compatibility with the 1.4.0.0 connector server. If you need backward compatibility, select Custom, and install the Legacy Connector Service.

    When the wizard has completed, the Connector Server is installed as a Windows Service.

  3. Open the Microsoft Services Console and make sure that the Connector Server is listed there.

    The name of the service is OpenICF Connector Server, by default.

    .Net Connector Server as Windows Service
Running the .NET Connector Server in Legacy Mode
  1. If you are installing the .NET Connector Server from the .msi distribution, select Custom for the Setup Type, and install the Legacy Connector Service.

  2. If you are installing the .NET Connector Server from the .zip distribution, launch the Connector Server by running the ConnectorServer.exe command, and not the ConnectorServerService.exe command.

  3. Adjust the port parameter in your OpenIDM remote connector server configuration file. In legacy mode, the connector server listens on port 8760 by default.

  4. Remove the "protocol" : "websocket", line from your OpenIDM remote connector server configuration file, to specify that the connector server should use the legacy protocol.

  5. In the commands shown in "Configuring the .NET Connector Server", replace ConnectorServerService.exe with ConnectorServer.exe.

Configuring the .NET Connector Server

After you have installed the .NET Connector Server, as described in the previous section, follow these steps to configure the Connector Server:

  1. Make sure that the Connector Server is not currently running. If it is running, use the Microsoft Services Console to stop it.

  2. At the command prompt, change to the directory where the Connector Server was installed:

    c:\> cd "c:\Program Files (x86)\ForgeRock\OpenICF"
  3. Run the ConnectorServerService /setkey command to set a secret key for the Connector Server. The key can be any string value. This example sets the secret key to Passw0rd:

    ConnectorServerService /setkey Passw0rd
    Key has been successfully updated.

    This key is used by clients connecting to the Connector Server. The key that you set here must also be set in the OpenIDM connector info provider configuration file (conf/provisioner.openicf.connectorinfoprovider.json). For more information, see "Configuring OpenIDM to Connect to the .NET Connector Server".

  4. Edit the Connector Server configuration.

    The Connector Server configuration is saved in a file named ConnectorServerService.exe.Config (in the directory in which the Connector Server is installed).

    Check and edit this file, as necessary, to reflect your installation. Specifically, verify that the baseAddress reflects the host and port on which the connector server is installed:

    <system.serviceModel>
      <services>
        <service name="Org.ForgeRock.OpenICF.Framework.Service.WcfServiceLibrary.WcfWebsocket">
          <host>
            <baseAddresses>
              <add baseAddress="http://0.0.0.0:8759/openicf" />
            </baseAddresses>
          </host>
        </service>
      </services>
    </system.serviceModel>

    The baseAddress specifies the host and port on which the Connector Server listens, and is set to http://0.0.0.0:8759/openicf by default. If you set a host value other than the default 0.0.0.0, connections from all IP addresses other than the one specified are denied.

    If Windows firewall is enabled, you must create an inbound port rule to open the TCP port for the connector server (8759 by default). If you do not open the TCP port, OpenIDM will be unable to contact the Connector Server. For more information, see the Microsoft documentation on creating an inbound port rule.
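    A rule similar to the following hypothetical netsh command opens the default port; the rule name is illustrative:

      netsh advfirewall firewall add rule name="OpenICF Connector Server" ^
      dir=in action=allow protocol=TCP localport=8759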

  5. Optionally, configure the Connector Server to use SSL:

    1. Use an existing CA certificate, or use the makecert utility to create an exportable self-signed Root CA Certificate:

      c:\"Program Files (x86)"\"Windows Kits"\8.1\bin\x64\makecert.exe ^
      -pe -r -sky signature -cy authority -a sha1 -n "CN=Dev Certification Authority" ^
      -ss Root -sr LocalMachine -sk RootCA signroot.cer
    2. Create an exportable server authentication certificate:

      c:\"Program Files (x86)"\"Windows Kits"\8.1\bin\x64\makecert.exe ^
      -pe -sky exchange -cy end -n "CN=localhost" -b 01/01/2015 -e 01/01/2050 -eku 1.3.6.1.5.5.7.3.1 ^
      -ir LocalMachine -is Root -ic signroot.cer -ss My -sr localMachine -sk server ^
      -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 server.cer
    3. Retrieve and set the certificate thumbprint:

      c:\Program Files (x86)\ForgeRock\OpenICF>ConnectorServerService.exe /setCertificate
      Select certificate you want to use:
      Index  Issued To         Thumbprint
      -----  ---------         -------------------------
        0)   localhost         4D01BE385BF079DD4B9C5A416E7B535904855E0A
      
      Certificate Thumbprint has been successfully updated to 4D01BE385BF079DD4B9C5A416E7B535904855E0A.
    4. Bind the certificate to the Connector Server port. For example:

      netsh http add sslcert ipport=0.0.0.0:8759 ^
      certhash=4D01BE385BF079DD4B9C5A416E7B535904855E0A ^
      appid={bca0631d-cab1-48c8-bd2a-eb049d7d3c55}
    5. Allow the service to run as a non-administrative user by reserving the URL:

      netsh http add urlacl url=https://+:8759/ user=EVERYONE
    6. Change the Connector Server configuration to use HTTPS and not HTTP:

      <add baseAddress="https://0.0.0.0:8759/openicf" />
  6. Check the trace settings, in the same Connector Server configuration file, under the system.diagnostics item:

    <system.diagnostics>
      <trace autoflush="true" indentsize="4">
        <listeners>
          <remove name="Default" />
          <add name="console" />
          <add name="file" />
        </listeners>
      </trace>
      <sources>
        <source name="ConnectorServer" switchName="switch1">
          <listeners>
            <remove name="Default" />
            <add name="file" />
          </listeners>
        </source>
      </sources>
      <switches>
        <add name="switch1" value="Information" />
      </switches>
      <sharedListeners>
        <add name="console" type="System.Diagnostics.ConsoleTraceListener" />
        <add name="file" type="System.Diagnostics.TextWriterTraceListener"
                initializeData="logs\ConnectorServerService.log"
                traceOutputOptions="DateTime">
            <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information" />
        </add>
      </sharedListeners>
    </system.diagnostics>

    The Connector Server uses the standard .NET trace mechanism. For more information about tracing options, see Microsoft's .NET documentation for System.Diagnostics.

    The default trace settings are a good starting point. For less tracing, set the EventTypeFilter's initializeData to Warning or Error. For very verbose logging, set the value to Verbose or All. The logging level has a direct effect on the performance of the Connector Server, so take care when setting this level.

Starting the .NET Connector Server

Start the .NET Connector Server in one of the following ways:

  1. Start the server as a Windows service, by using the Microsoft Services Console.

    Locate the connector server service (OpenICF Connector Server), and click Start the service or Restart the service.

    The service is executed with the credentials of the "run as" user (System, by default).

  2. Start the server as a Windows service, by using the command line.

    In the Windows Command Prompt, run the following command:

    net start ConnectorServerService

    To stop the service in this manner, run the following command:

    net stop ConnectorServerService
  3. Start the server without using Windows services.

    In the Windows Command Prompt, change to the directory where the Connector Server was installed. The default location is c:\Program Files (x86)\ForgeRock\OpenICF.

    Start the server with the following command:

    ConnectorServerService.exe /run

    Note that this command starts the Connector Server with the credentials of the current user. It does not start the server as a Windows service.

Configuring OpenIDM to Connect to the .NET Connector Server

The connector info provider service configures one or more remote connector servers to which OpenIDM can connect. The connector info provider configuration is stored in a file named project-dir/conf/provisioner.openicf.connectorinfoprovider.json. A sample connector info provider configuration file is located in openidm/samples/provisioners/.

To configure OpenIDM to use the remote .NET connector server, follow these steps:

  1. Start OpenIDM, if it is not already running.

  2. Copy the sample connector info provider configuration file to your project's conf/ directory:

    $ cd /path/to/openidm
    $ cp samples/provisioners/provisioner.openicf.connectorinfoprovider.json project-dir/conf/
  3. Edit the connector info provider configuration, specifying the details of the remote connector server:

    "remoteConnectorServers" : [
        {
            "name" : "dotnet",
            "host" : "192.0.2.0",
            "port" : 8759,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
        }

    Configurable properties are as follows:

    name

    Specifies the name of the connection to the .NET connector server. The name can be any string. This name is referenced in the connectorHostRef property of the connector configuration file (provisioner.openicf-ad.json).

    host

    Specifies the IP address of the host on which the Connector Server is installed.

    port

    Specifies the port on which the Connector Server listens. This property matches the connectorserver.port property in the ConnectorServerService.exe.config file.

    For more information, see "Configuring the .NET Connector Server".

    useSSL

    Specifies whether the connection to the Connector Server should be secured. This property matches the "connectorserver.usessl" property in the ConnectorServerService.exe.config file.

    timeout

    Specifies the length of time, in seconds, that OpenIDM should attempt to connect to the Connector Server before abandoning the attempt. To disable the timeout, set the value of this property to 0.

    protocol

    Version 1.5.0.0 of the OpenICF framework supports a new communication protocol with remote connector servers. This protocol is enabled by default, and its value is websocket in the default configuration.

    Currently, the new, default protocol has specific known issues. You should therefore run the 1.5 .NET Connector Server in legacy mode, with the old protocol, as described in "Running the .NET Connector Server in Legacy Mode".

    key

    Specifies the connector server key. This property matches the key property in the ConnectorServerService.exe.config file. For more information, see "Configuring the .NET Connector Server".

    The string value that you enter here is encrypted as soon as the file is saved.

11.4.2. Installing and Configuring a Remote Java Connector Server

In certain situations, it might be necessary to set up a remote Java Connector Server. This section provides instructions for setting up a remote Java Connector Server on Unix/Linux and Windows.

Installing a Remote Java Connector Server for Unix/Linux
  1. Download the OpenICF Java Connector Server from the ForgeRock Backstage site.

  2. Change to the appropriate directory and unpack the zip file. The following command unzips the file in the current directory:

    $ unzip openicf-zip-1.5.0.0.zip
  3. Change to the openicf directory:

    $ cd /path/to/openicf
  4. The Java Connector Server uses a key property to authenticate the connection. The default key value is changeit. To change the value of the secret key, run a command similar to the following. This example sets the key value to Passw0rd:

    $ cd /path/to/openicf
    $ bin/ConnectorServer.sh /setkey Passw0rd
    Key has been successfully updated.
  5. Review the ConnectorServer.properties file in the /path/to/openicf/conf directory, and make any required changes. By default, the configuration file has the following properties:

    connectorserver.port=8759
    connectorserver.libDir=lib
    connectorserver.usessl=false
    connectorserver.bundleDir=bundles
    connectorserver.loggerClass=org.forgerock.openicf.common.logging.slf4j.SLF4JLog
    connectorserver.key=xOS4IeeE6eb/AhMbhxZEC37PgtE\=

    The connectorserver.usessl parameter indicates whether client connections to the connector server should be over SSL. This property is set to false by default.

    To secure connections to the connector server, set this property to true, and set the following Java system properties when you start the connector server:

    java -Djavax.net.ssl.keyStore=mySrvKeystore -Djavax.net.ssl.keyStorePassword=Passw0rd
  6. Start the Java Connector Server:

    $ bin/ConnectorServer.sh /run

    The connector server is now running, and listening on port 8759, by default.

    Log files are available in the /path/to/openicf/logs directory.

    $ ls logs/
    Connector.log  ConnectorServer.log  ConnectorServerTrace.log
  7. If required, stop the Java Connector Server by pressing CTRL-C.

Installing a Remote Java Connector Server for Windows
  1. Download the OpenICF Java Connector Server from the ForgeRock Backstage site.

  2. Change to the appropriate directory and unpack the zip file.

  3. In a Command Prompt window, change to the openicf directory:

    C:\>cd C:\path\to\openicf\bin
  4. If required, secure the communication between OpenIDM and the Java Connector Server. The Java Connector Server uses a key property to authenticate the connection. The default key value is changeit.

    To change the value of the secret key, use the bin\ConnectorServer.bat /setkey command. The following example sets the key to Passw0rd:

    c:\path\to\openicf>bin\ConnectorServer.bat /setkey Passw0rd
    lib\framework\connector-framework.jar;lib\framework\connector-framework-internal
    .jar;lib\framework\groovy-all.jar;lib\framework\icfl-over-slf4j.jar;lib\framework
    \slf4j-api.jar;lib\framework\logback-core.jar;lib\framework\logback-classic.jar
  5. Review the ConnectorServer.properties file in the path\to\openicf\conf directory, and make any required changes. By default, the configuration file has the following properties:

    connectorserver.port=8759
    connectorserver.libDir=lib
    connectorserver.usessl=false
    connectorserver.bundleDir=bundles
    connectorserver.loggerClass=org.forgerock.openicf.common.logging.slf4j.SLF4JLog
    connectorserver.key=xOS4IeeE6eb/AhMbhxZEC37PgtE\=
  6. You can either run the Java Connector Server as a Windows service, or start and stop it from the command-line.

    • To install the Java Connector Server as a Windows service, run the following command:

      c:\path\to\openicf>bin\ConnectorServer.bat /install

      If you install the connector server as a Windows service you can use the Microsoft Services Console to start, stop and restart the service. The Java Connector Service is named OpenICFConnectorServerJava.

      To uninstall the Java Connector Server as a Windows service, run the following command:

      c:\path\to\openicf>bin\ConnectorServer.bat /uninstall
  7. To start the Java Connector Server from the command line, enter the following command:

    c:\path\to\openicf>bin\ConnectorServer.bat /run

    The connector server is now running, and listening on port 8759, by default.

    Log files are available in the \path\to\openicf\logs directory.

  8. If required, stop the Java Connector Server by pressing CTRL-C.

11.5. Connectors Supported With OpenIDM 4

OpenIDM 4 provides several connectors by default, in the /path/to/openidm/connectors directory. The supported connectors that are not bundled with OpenIDM, and a number of additional connectors, can be downloaded from the OpenICF community site.

This section describes the connectors that are supported for use with OpenIDM 4, and provides instructions for installing and configuring these connectors. For instructions on building connector configurations interactively, see "Creating Default Connector Configurations".

11.5.1. Generic LDAP Connector

The generic LDAP connector is based on JNDI, and can be used to connect to any LDAPv3-compliant directory server, such as OpenDJ, Active Directory, SunDS, Oracle Directory Server Enterprise Edition, IBM Security Directory Server, and OpenLDAP.

OpenICF does provide a legacy Active Directory connector, but you should use the generic LDAP connector in Active Directory deployments, unless your deployment has specific requirements that prevent you from doing so. Using the generic LDAP connector avoids the need to install a remote connector server in the overall deployment. In addition, the generic LDAP connector has significant performance advantages over the Active Directory connector.

OpenIDM 4 bundles version 1.4.1.0 of the LDAP connector. Three sample LDAP connector configurations are provided in the /path/to/openidm/samples/provisioners/ directory:

  • provisioner.openicf-opendjldap.json provides a sample LDAP connector configuration for an OpenDJ directory server.

  • provisioner.openicf-adldap.json provides a sample LDAP connector configuration for an Active Directory server.

  • provisioner.openicf-adldsldap.json provides a sample LDAP connector configuration for an Active Directory Lightweight Directory Services (AD LDS) server.

You should be able to adapt one of these sample configurations for any LDAPv3-compliant server.

The connectorRef configuration property provides information about the LDAP connector bundle, and is the same in all three sample LDAP connector configurations:

{
    "connectorRef": {
        "connectorHostRef": "#LOCAL",
        "connectorName": "org.identityconnectors.ldap.LdapConnector",
        "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
        "bundleVersion": "[1.4.0.0,2.0.0.0)"
    }
}

The connectorHostRef property is optional if you use the connector .jar provided in openidm/connectors and use a local connector server.

The following excerpt shows the configuration properties in the sample LDAP connector for OpenDJ. These properties are described in detail later in this section. For additional information on the properties that affect synchronization, see "Controlling What the LDAP Connector Synchronizes":

"configurationProperties" : {
    "host" : "localhost",
    "port" : 1389,
    "ssl" : false,
    "startTLS" : false,
    "principal" : "cn=Directory Manager",
    "credentials" : "password",
    "baseContexts" : [
        "dc=example,dc=com"
    ],
    "baseContextsToSynchronize" : [
        "dc=example,dc=com"
    ],
    "accountSearchFilter" : null,
    "accountSynchronizationFilter" : null,
    "groupSearchFilter" : null,
    "groupSynchronizationFilter" : null,
    "passwordAttributeToSynchronize" : null,
    "synchronizePasswords" : false,
    "removeLogEntryObjectClassFromFilter" : true,
    "modifiersNamesToFilterOut" : [ ],
    "passwordDecryptionKey" : null,
    "changeLogBlockSize" : 100,
    "attributesToSynchronize" : [ ],
    "changeNumberAttribute" : "changeNumber",
    "passwordDecryptionInitializationVector" : null,
    "filterWithOrInsteadOfAnd" : false,
    "objectClassesToSynchronize" : [
        "inetOrgPerson"
    ],
    "vlvSortAttribute" : "uid",
    "passwordAttribute" : "userPassword",
    "useBlocks" : false,
    "maintainPosixGroupMembership" : false,
    "failover" : [ ],
    "readSchema" : true,
    "accountObjectClasses" : [
        "top",
        "person",
        "organizationalPerson",
        "inetOrgPerson"
    ],
    "accountUserNameAttributes" : [
        "uid"
    ],
    "groupMemberAttribute" : "uniqueMember",
    "passwordHashAlgorithm" : null,
    "usePagedResultControl" : true,
    "blockSize" : 100,
    "uidAttribute" : "dn",
    "maintainLdapGroupMembership" : false,
    "respectResourcePasswordPolicyChangeAfterReset" : false
},
host

The host name or IP address of the server on which the LDAP instance is running.

port

The port on which the LDAP server listens for LDAP requests. The sample configuration specifies a default port of 1389.

ssl

If true, the specified port listens for LDAPS connections.

If you use the LDAP connector over SSL, set the ssl property to true, and the port to 636 in the connector configuration file. You must also specify the path to a truststore in your project's conf/system.properties file. A truststore is provided by default at openidm/security/truststore. Add the following line to the system.properties file, substituting the path to your own truststore if you do not want to use the default:

# Set the truststore
javax.net.ssl.trustStore=/path/to/openidm/security/truststore
startTLS

Specifies whether to use the startTLS operation to initiate a TLS/SSL session. To use startTLS, set "startTLS":true, and "ssl":false. Your connection should use the insecure LDAP port (typically 389 or 1389 for an OpenDJ server).
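For example, the following configuration properties excerpt (a sketch) initiates a TLS session on the standard insecure port of an OpenDJ server:

"configurationProperties" : {
    "port" : 1389,
    "ssl" : false,
    "startTLS" : true,
    ...
},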

principal

The bind DN that is used to connect to the LDAP server.

credentials

The password of the principal that is used to connect to the LDAP server.

baseContexts

One or more starting points in the LDAP tree that will be used when searching the tree. Searches are performed when discovering users from the LDAP server or when looking for the groups of which a user is a member. During reconciliation operations, OpenIDM searches through the base contexts listed in this property for changes. (See also "Controlling What the LDAP Connector Synchronizes").

baseContextsToSynchronize

One or more starting points in the LDAP tree that will be used to determine if a change should be synchronized. During LiveSync operations, OpenIDM searches through the base contexts listed in this property for changes. If no value is specified here, the values listed in the baseContexts property are used. (See also "Controlling What the LDAP Connector Synchronizes").

accountSynchronizationFilter

Used during synchronization actions to filter out LDAP accounts. (See also "Controlling What the LDAP Connector Synchronizes").

accountObjectClasses

This property lists all the object classes that represent an account. If this property has multiple values, an OR filter is used to determine the affected entries. For example, if the value of this property is ["organizationalPerson", "inetOrgPerson"], any entry with the object class organizationalPerson OR the object class inetOrgPerson is considered as an account entry. The value of this property must not include the top object class.

accountSearchFilter

Search filter that user accounts must match. (See also "Controlling What the LDAP Connector Synchronizes").

accountUserNameAttributes

Attributes holding the account's user name. Used during authentication to find the LDAP entry matching the user name.

attributesToSynchronize

List of attributes used during object synchronization. OpenIDM ignores change log updates that do not include any of the specified attributes. If empty, OpenIDM considers all changes. (See also "Controlling What the LDAP Connector Synchronizes").

blockSize

Block size for simple paged results and VLV index searches, reflecting the maximum number of entries retrieved at any one time.

changeLogBlockSize

Block size used when fetching change log entries.

changeNumberAttribute

Change log attribute containing the last change number.

failover

LDAP URLs specifying alternative LDAP servers to connect to if OpenIDM cannot connect to the primary LDAP server specified in the host and port properties.
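For example (the host names and ports are illustrative):

"failover" : [
    "ldap://ldap2.example.com:1389",
    "ldap://ldap3.example.com:1389"
],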

filterWithOrInsteadOfAnd

In most cases, the filter to fetch change log entries is AND-based. If this property is set, the filter ORs the required change numbers instead.

groupMemberAttribute

LDAP attribute holding members for non-POSIX static groups.

groupSearchFilter

Search filter that group entries must match.

maintainLdapGroupMembership

If true, OpenIDM modifies group membership when entries are renamed or deleted.

In the sample LDAP connector configuration file provided with OpenIDM, this property is set to false. This means that LDAP group membership is not modified when entries are renamed or deleted in OpenIDM. To ensure that entries are removed from LDAP groups when the entries are deleted, set this property to true or enable referential integrity on the LDAP server. For information about configuring referential integrity in OpenDJ, see Configuring Referential Integrity in the OpenDJ Administration Guide.

maintainPosixGroupMembership

If true, OpenIDM modifies POSIX group membership when entries are renamed or deleted.

modifiersNamesToFilterOut

Use this property to avoid loops caused by changes made to managed user objects being synchronized. For more information, see "Controlling What the LDAP Connector Synchronizes".

objectClassesToSynchronize

OpenIDM synchronizes only entries that have these object classes. See also "Controlling What the LDAP Connector Synchronizes".

passwordAttribute

Attribute to which OpenIDM writes the predefined __PASSWORD__ attribute.

passwordAttributeToSynchronize

OpenIDM synchronizes password values on this attribute.

passwordDecryptionInitializationVector

This is a legacy attribute, and its value should remain set to null. To configure password synchronization between an LDAP server and OpenIDM, use one of the password synchronization plugins, described in "Synchronizing Passwords Between OpenIDM and an LDAP Server".

passwordDecryptionKey

This is a legacy attribute, and its value should remain set to null. To configure password synchronization between an LDAP server and OpenIDM, use one of the password synchronization plugins, described in "Synchronizing Passwords Between OpenIDM and an LDAP Server".

passwordHashAlgorithm

Hash password values with the specified algorithm, if the LDAP server stores them in clear text.

The hash algorithm can be one of the following:

  • NONE - Clear text

  • WIN-AD - Used for password changes to Active Directory

  • SHA - Secure Hash Algorithm

  • SHA-1 - A 160-bit hash algorithm that resembles the MD5 algorithm

  • SSHA - Salted SHA

  • MD5 - A 128-bit message-digest algorithm

  • SMD5 - Salted MD5

readSchema

If true, read the schema from the LDAP server.

This property is used only during the connector setup, to generate the object types.

If this property is false, the LDAP connector provides a basic default schema that can manage LDAP users and groups. The default schema maps inetOrgPerson to the OpenICF __ACCOUNT__ property, and groupOfUniqueNames to the OpenICF __GROUP__ property. The following LDAP object classes are also included in the default schema:

organization
organizationalUnit
person
organizationalPerson
account
groupOfNames

removeLogEntryObjectClassFromFilter

If true, the filter to fetch change log entries does not contain the changeLogEntry object class, and OpenIDM expects no entries with other object types in the change log. The default setting is true.

respectResourcePasswordPolicyChangeAfterReset

If true, bind with the Password Expired and Password Policy controls, and throw PasswordExpiredException and other exceptions appropriately.

synchronizePasswords

This is a legacy attribute, and its value should remain set to false. To configure password synchronization between an LDAP server and OpenIDM, use one of the password synchronization plugins, described in "Synchronizing Passwords Between OpenIDM and an LDAP Server".

uidAttribute

Specifies the LDAP attribute that should be used as the immutable ID (_UID_) for the entry. For an OpenDJ resource, you should use the entryUUID. You can use the DN as the UID attribute but note that this is not immutable.

useBlocks

If useBlocks is false, no pagination is used. If useBlocks is true, the connector uses block-based LDAP controls, either the simple paged results control, or the virtual list view control, depending on the setting of the usePagedResultControl property.

usePagedResultControl

Taken into account only if useBlocks is true. If usePagedResultControl is false, the connector uses the virtual list view (VLV) control, if it is available. If usePagedResultControl is true, the connector uses the simple paged results control for search operations.

useTimestampsForSync

If true, use timestamps for LiveSync operations, instead of the change log.

By default, the LDAP connector has a change log strategy for LDAP servers that support a change log (such as OpenDJ and Oracle Directory Server Enterprise Edition). If the LDAP server does not support a change log, or if the change log is disabled, LiveSync for create and modify operations can still occur, based on the timestamps of modifications.

vlvSortAttribute

Attribute used as the sort key for virtual list view.
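
For example, the following configurationProperties excerpt combines the paging-related properties described above. The values shown are illustrative only, and should be adapted to your directory server:

"configurationProperties" : {
    ...
    "useBlocks" : true,
    "usePagedResultControl" : true,
    "blockSize" : 100,
    "vlvSortAttribute" : "uid",
    ...
},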

11.5.1.1. Controlling What the LDAP Connector Synchronizes

To control the set of LDAP entries that are affected by reconciliation and automatic synchronization operations, set the following properties in the provisioner configuration. Automatic synchronization operations include LiveSync (synchronization of changes from the LDAP server to OpenIDM) and implicit sync (synchronization from the OpenIDM repository to the LDAP server).

baseContexts

The starting points in the LDAP tree that are used when searching the directory tree, for example, dc=example,dc=com. These base contexts must include the set of users and the set of groups that must be searched during reconciliation operations.

baseContextsToSynchronize

The starting points in the LDAP tree that are used to determine if a change should be synchronized. This property is used only for automatic synchronization operations. Only entries that fall under these base contexts are considered during synchronization operations.

accountSearchFilter

Only user accounts that match this filter are searched, and therefore affected by reconciliation and synchronization operations. If you do not set this property, all accounts within the base contexts specified previously are searched.

accountSynchronizationFilter

This property is used during reconciliation and automatic synchronization operations, and filters out any LDAP accounts that you specifically want to exclude from these operations.

objectClassesToSynchronize

During automatic synchronization operations, only the object classes listed here are considered for changes. OpenIDM ignores change log updates (or changes to managed objects) which do not have any of the object classes listed here. If this property is not set, OpenIDM considers changes to all attributes specified in the mapping.

attributesToSynchronize

During automatic synchronization operations, only the attributes listed here are considered for changes. Objects that include these attributes are synchronized. Objects that do not include these attributes are ignored. If this property is not set, OpenIDM considers changes to all attributes specified in the mapping. Automatic synchronization includes LiveSync and implicit synchronization operations. For more information, see "Types of Synchronization".

This attribute works only with LDAP servers that log changes in a change log, not with servers (such as Active Directory) that use other mechanisms to track changes.

modifiersNamesToFilterOut

This property enables you to define a list of DNs. During synchronization operations, the connector ignores changes made by these DNs.

When a managed user object is updated, and that change is synchronized to the LDAP server, the change made on the LDAP server is recorded in the change log. A LiveSync operation picks up the change, and attempts to replay the change on the managed user object, effectively resulting in a loop of updates.

To avoid this situation, you can specify a unique user in your LDAP directory that will be used only for the LDAP connector. The unique user must be something other than cn=directory manager, for example cn=openidmuser. You can then include that user DN as the value of modifiersNamesToFilterOut. When a change is made through the LDAP connector, and that change is recorded in the change log, the modifier's name (cn=openidmuser) is flagged, and OpenIDM does not attempt to replay the change back to the managed user repository. In effect, you are indicating that OpenIDM should not synchronize changes back to the managed user repository if those changes originated from the managed user repository, thus preventing the update loop.

This attribute works only with LDAP servers that log changes in a change log, not with servers (such as Active Directory) that use other mechanisms to track changes.
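
Taken together, a configurationProperties excerpt that restricts what is synchronized might look like the following sketch. The DNs, filter, and attribute names shown are placeholder values for a hypothetical deployment:

"configurationProperties" : {
    ...
    "baseContexts" : ["dc=example,dc=com"],
    "baseContextsToSynchronize" : ["ou=People,dc=example,dc=com"],
    "accountSearchFilter" : "(uid=*)",
    "objectClassesToSynchronize" : ["inetOrgPerson"],
    "attributesToSynchronize" : ["uid", "cn", "sn", "mail"],
    "modifiersNamesToFilterOut" : ["cn=openidmuser,dc=example,dc=com"],
    ...
},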

11.5.1.2. Using the Generic LDAP Connector With Active Directory

The LDAP connector provides new functionality for managing Active Directory users and groups. Among other changes, the new connector can handle the following operational attributes to manage Active Directory accounts:

  • __ENABLE__ - uses the userAccountControl attribute to get or set the account status of an object.

    The LDAP connector reads the userAccountControl to determine if an account is enabled or disabled. The connector modifies the value of the userAccountControl attribute if OpenIDM changes the value of __ENABLE__.

  • __ACCOUNT_EXPIRES__ - gets or sets the accountExpires attribute of an Active Directory object.

  • __LOCK_OUT__ - uses the msDS-User-Account-Control-Computed system attribute to check if a user account has been locked.

    If OpenIDM sets the __LOCK_OUT__ to FALSE, the LDAP connector sets the Active Directory lockoutTime to 0 to unlock the account.

    If OpenIDM sets the __LOCK_OUT__ to TRUE, the LDAP connector ignores the change and logs a message.

  • __PASSWORD_EXPIRED__ - uses the msDS-User-Account-Control-Computed system attribute to check if a user password has expired.

    To force password expiration (to force a user to change their password when they next log in), pwdLastSet must be set to 0. The LDAP connector sets pwdLastSet to 0, if OpenIDM sets __PASSWORD_EXPIRED__ to TRUE.

    To remove password expiration, pwdLastSet must be set to 0 and then -1. This sets the value of pwdLastSet to the current time. The LDAP connector sets pwdLastSet to -1 if OpenIDM sets __PASSWORD_EXPIRED__ to FALSE.

Note

You must update your provisioner configuration to be able to use these new operational attributes. You can use this sample provisioner configuration as a guide.
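
As an illustration of how these operational attributes can be used, the following sketch forces password expiration on an existing account by replacing the value of __PASSWORD_EXPIRED__ over REST. In this hypothetical example, account-id is a placeholder for the _id of the target account; depending on your OpenIDM version, you might need to send a PUT request with the full object instead of a PATCH:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request PATCH \
 --data '[
  { "operation" : "replace", "field" : "/__PASSWORD_EXPIRED__", "value" : true }
 ]' \
 "https://localhost:8443/openidm/system/ad/account/account-id"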

11.5.1.2.1. Managing Active Directory Users With the LDAP Connector

If you create or update users in Active Directory, and those user entries include passwords, you must use the LDAP connector over SSL. You cannot create or update an Active Directory user password in clear text. To use the connector over SSL, set "ssl" : true in the provisioner configuration and set the path to your truststore in your project's conf/system.properties file. For example, add the following line to that file:

# Set the truststore
javax.net.ssl.trustStore=/path/to/openidm/security/truststore

The following command adds an Active Directory user. The output shows the operational attributes described in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '{
 "dn": "CN=Brian Smith,CN=Users,DC=example,DC=com",
 "cn": "Brian Smith",
 "sAMAccountName": "bsmith",
 "userPrincipalName": "bsmith@example.com",
 "userAccountControl": "512",
 "givenName": "Brian",
 "mail": "bsmith@example.com",
 "__PASSWORD__": "Passw0rd"
 }' \
 https://localhost:8443/openidm/system/ad/account?_action=create
{
  "_id": "<GUID=cb2f8cbc032f474c94c896e69db2feb3>",
  "mobile": null,
  "postalCode": null,
  "st": null,
  "employeeType": null,
  "objectGUID": "<GUID=cb2f8cbc032f474c94c896e69db2feb3>",
  "cn": "Brian Smith",
  "department": null,
  "l": null,
  "description": null,
  "info": null,
  "manager": null,
  "sAMAccountName": "bsmith",
  "sn": null,
  "whenChanged": "20151217131254.0Z",
  "userPrincipalName": "bsmith@example.com",
  "userAccountControl": "512",
  "__ENABLE__": true,
  "displayName": null,
  "givenName": "Brian",
  "middleName": null,
  "facsimileTelephoneNumber": null,
  "lastLogon": "0",
  "countryCode": "0",
  "employeeID": null,
  "co": null,
  "physicalDeliveryOfficeName": null,
  "pwdLastSet": "2015-12-17T13:12:54Z",
  "streetAddress": null,
  "homePhone": null,
  "__PASSWORD_NOTREQD__": false,
  "telephoneNumber": null,
  "dn": "CN=Brian Smith,CN=Users,DC=example,DC=com",
  "title": null,
  "mail": "bsmith@example.com",
  "postOfficeBox": null,
  "__SMARTCARD_REQUIRED__": false,
  "uSNChanged": "86144",
  "__PASSWORD_EXPIRED__": false,
  "initials": null,
  "__LOCK_OUT__": false,
  "company": null,
  "employeeNumber": null,
  "accountExpires": "0",
  "c": null,
  "whenCreated": "20151217131254.0Z",
  "uSNCreated": "86142",
  "division": null,
  "groups": [],
  "__DONT_EXPIRE_PASSWORD__": false,
  "otherHomePhone": []
}

Note that the command sets userAccountControl to 512, which indicates an enabled account. The value of userAccountControl determines the account policy. The following list describes the common values for userAccountControl; an example of updating this value follows the list.

512

Enabled account.

514

Disabled account.

544

Enabled account, password not required.

546

Disabled account, password not required.

66048

Enabled account, password does not expire.

66050

Disabled account, password does not expire.

66080

Enabled account, password does not expire and is not required.

66082

Disabled account, password does not expire and is not required.

262656

Enabled account, smartcard required.

262658

Disabled account, smartcard required.

262688

Enabled account, smartcard required, password not required.

262690

Disabled account, smartcard required, password not required.

328192

Enabled account, smartcard required, password does not expire.

328194

Disabled account, smartcard required, password does not expire.

328224

Enabled account, smartcard required, password does not expire and is not required.

328226

Disabled account, smartcard required, password does not expire and is not required.
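
For example, to disable the account created in the previous section, you could set its userAccountControl to 514. The following sketch uses a PATCH request; account-id is a placeholder for the _id returned when the account was created, and depending on your OpenIDM version, you might need to update the full object with a PUT request instead:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request PATCH \
 --data '[
  { "operation" : "replace", "field" : "/userAccountControl", "value" : "514" }
 ]' \
 "https://localhost:8443/openidm/system/ad/account/account-id"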

11.5.1.2.2. Managing Active Directory Groups With the LDAP Connector

The following command creates a basic Active Directory group with the LDAP connector:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '{
 "dn": "CN=Employees,DC=example,DC=com"
 }' \
 https://localhost:8443/openidm/system/ad/group?_action=create
{
  "_id": "<GUID=240da4e959d81547ad8629f5b2b5114d>"
}

The LDAP connector exposes two special attributes to handle Active Directory group scope and type: __GROUP_SCOPE__ and __GROUP_TYPE__.

The __GROUP_SCOPE__ attribute is defined in the provisioner configuration as follows:

...
    "__GROUP_SCOPE__" : {
        "type" : "string",
        "nativeName" : "__GROUP_SCOPE__",
        "nativeType" : "string"
    },

The value of the __GROUP_SCOPE__ attribute can be global, domain, or universal. If no group scope is set when the group is created, the scope is global by default. For more information about the different group scopes, see the corresponding Microsoft documentation.

The __GROUP_TYPE__ attribute is defined in the provisioner configuration as follows:

...
    "__GROUP_TYPE__" : {
        "type" : "string",
        "nativeName" : "__GROUP_TYPE__",
        "nativeType" : "string"
    },

The value of the __GROUP_TYPE__ attribute can be security or distribution. If no group type is set when the group is created, the type is security by default. For more information about the different group types, see the corresponding Microsoft documentation.

The following example creates a new distribution group, with universal scope:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '{
 "dn": "CN=NewGroup,DC=example,DC=com",
 "__GROUP_SCOPE__": "universal",
 "__GROUP_TYPE__": "distribution"
 }' \
 https://localhost:8443/openidm/system/ad/group?_action=create
{
  "_id": "<GUID=f189df8a276f91478ad5055b1580cbcb>"
}

11.5.1.2.3. Handling Active Directory Dates

Most dates in Active Directory are represented as the number of 100-nanosecond intervals since January 1, 1601 (UTC). For example:

pwdLastSet: 130698687542272930

OpenIDM generally represents dates as an ISO 8601-compliant string with yyyy-MM-dd'T'HH:mm:ssZ format. For example:

2015-03-02T20:17:48Z

The generic LDAP connector therefore converts any dates from Active Directory to ISO 8601 format, for fields such as pwdLastSet, accountExpires, lockoutTime, and lastLogon.
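
The conversion itself is simple arithmetic: divide the Active Directory value by 10,000,000 to obtain seconds since 1601, then subtract 11,644,473,600 (the number of seconds between 1601 and the UNIX epoch in 1970). The following terminal session illustrates the conversion for the pwdLastSet value shown previously (GNU/Linux):

$ echo $(( 130698687542272930 / 10000000 - 11644473600 ))
1425395154
$ date -u -d @1425395154 +%Y-%m-%dT%H:%M:%SZ
2015-03-03T15:05:54Z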

11.5.2. Active Directory Connector

Unlike most other connectors, the Active Directory connector is written not in Java, but in C# for the .NET platform. OpenICF connects to Active Directory over ADSI, the native connection protocol for Active Directory. The connector therefore requires a connector server that has access to the ADSI .dll files.

In general, the generic LDAP connector is preferable to the Active Directory connector for the following reasons:

  • There is no requirement for a .NET connector server, so the deployment is simpler and less intrusive

  • Better performance is observed with the LDAP connector

  • The LDAP connector is easier to configure

If your Active Directory environment is simple and deals only with regular attributes, use the LDAP connector or the scripted PowerShell connector. In complex Active Directory environments, however, the LDAP connector has some limitations that might make it unsuitable for your deployment.

Before you configure the Active Directory Connector, make sure that the .NET Connector Server is installed, configured and started, and that OpenIDM has been configured to use the Connector Server. For more information, see "Installing and Configuring a .NET Connector Server".

Setting Up the Active Directory Connector
  1. Download the Active Directory Connector from ForgeRock's download page.

  2. Extract the contents of the AD Connector zip file into the directory in which you installed the Connector Server (by default c:\Program Files (x86)\Identity Connectors\Connector Server).

    Note that the files, specifically the connector itself (ActiveDirectory.Connector.dll), must be placed directly under the path\to\Identity Connectors\Connector Server directory, and not in a subdirectory.

    Note

    If the account that is used to install the Active Directory connector is different from the account under which the Connector Server runs, you must give the Connector Server runtime account the rights to access the Active Directory connector log files.

  3. A sample Active Directory Connector configuration file is provided in openidm/samples/provisioners/provisioner.openicf-ad.json. On the OpenIDM host, copy the sample Active Directory connector configuration file to your project's conf/ directory:

    $ cd /path/to/openidm
    $ cp samples/provisioners/provisioner.openicf-ad.json project-dir/conf/
  4. Edit the Active Directory connector configuration to match your Active Directory deployment.

    Specifically, check and edit the configurationProperties that define the connection details to the Active Directory server.

    Also, check that the bundleVersion of the connector matches the version of the ActiveDirectory.Connector.dll in the Connector Server directory. The bundle version can be a range that includes the version of the connector bundle. To check the .dll version:

    • Right click on the ActiveDirectory.Connector.dll file and select Properties.

    • Select the Details tab and note the Product Version.

      Figure shows the bundle version of the Active Directory Connector

    The following configuration extract shows sample values for the connectorRef and configurationProperties:

    ...
    "connectorRef" :
       {
          "connectorHostRef" : "dotnet",
          "connectorName" : "Org.IdentityConnectors.ActiveDirectory.ActiveDirectoryConnector",
          "bundleName" : "ActiveDirectory.Connector",
          "bundleVersion" : "[1.4.0.0,2.0.0.0)"
       },
    ...
    "configurationProperties" :
       {
          "DirectoryAdminName" : "EXAMPLE\\Administrator",
          "DirectoryAdminPassword" : "Passw0rd",
          "ObjectClass" : "User",
          "Container" : "dc=example,dc=com",
          "CreateHomeDirectory" : true,
          "LDAPHostName" : "192.0.2.0",
          "SearchChildDomains" : false,
          "DomainName" : "example",
          "SyncGlobalCatalogServer" : null,
          "SyncDomainController" : null,
          "SearchContext" : ""
       }, 

    The main configurable properties are as follows:

    connectorHostRef

    Must point to an existing connector info provider configuration in project-dir/conf/provisioner.openicf.connectorinfoprovider.json. The connectorHostRef property is required because the Active Directory connector must be installed on a .NET connector server, which is always remote, relative to OpenIDM.

    DirectoryAdminName and DirectoryAdminPassword

    Specify the credentials of an administrator account in Active Directory that the connector uses to bind to the server.

    The DirectoryAdminName can be specified as a bind DN, or in the format DomainName\\samaccountname.

    SearchChildDomains

    Specifies if a Global Catalog (GC) should be used. This parameter is used in search and query operations. A Global Catalog is a read-only, partial copy of the entire forest, and is never used for create, update or delete operations.

    Boolean, false by default.

    LDAPHostName

    Specifies a particular Domain Controller (DC) or Global Catalog (GC), using its hostname. This parameter is used for query, create, update, and delete operations.

    If SearchChildDomains is set to true, this specific GC will be used for search and query operations. If the LDAPHostName is null (as it is by default), the connector will allow the ADSI libraries to pick up a valid DC or GC each time it needs to perform a query, create, update, or delete operation.

    SyncGlobalCatalogServer

    Specifies a Global Catalog server name for sync operations. This property is used in combination with the SearchChildDomains property.

    If a value for SyncGlobalCatalogServer is set (that is, the value is not null) and SearchChildDomains is set to true, this GC server is used for sync operations. If no value for SyncGlobalCatalogServer is set and SearchChildDomains is set to true, the connector allows the ADSI libraries to pick up a valid GC.

    SyncDomainController

    Specifies a particular DC server for sync operations. If no DC is specified, the connector picks up the first available DC and retains this DC in future sync operations.

    The updated configuration is applied immediately.

  5. Check that the connector has been configured correctly by running the following command in the OSGi console:

    scr list

    This command returns all of the installed modules. The OpenICF provisioner module should be active, as follows:

    [32] [active] org.forgerock.openidm.provisioner.openicf.connectorinfoprovider

    The number of the module may differ. Make a note of the module number, as it is referenced in the commands that follow.

  6. Review the contents of the connector by running the following command in the OSGi console (substituting the module number returned in the previous step):

    scr info 32
    ID: 32
    Name: org.forgerock.openidm.provisioner.openicf.connectorinfoprovider
    Bundle: org.forgerock.openidm.provisioner-openicf (82)
    State: active
    Default State: enabled
    Activation: immediate
    Configuration Policy: optional
    Activate Method: activate (declared in the descriptor)
    Deactivate Method: deactivate (declared in the descriptor)
    Modified Method: -
    Services: org.forgerock.openidm.provisioner.openicf.ConnectorInfoProvider
              org.forgerock.openidm.metadata.MetaDataProvider
              org.forgerock.openidm.provisioner.ConnectorConfigurationHelper
    Service Type: service
    Reference: osgiConnectorEventPublisher
        Satisfied: satisfied
    ...
    component.name = org.forgerock.openidm.provisioner.openicf.connectorinfoprovider
        felix.fileinstall.filename = file:/openidm/conf/provisioner.openicf.connectorinfoprovider.json
        jsonconfig = {
        "connectorsLocation" : "connectors",
        "remoteConnectorServers" : [
            {
                "name" : "dotnet",
                "host" : "192.0.2.0",
                "port" : 8759,
                "useSSL" : false,
                "timeout" : 0,
                "key" : {
                    "$crypto" : {
                        "value" : {
                            "iv" : "3XpjsLV1YNP034Rt/6BZgg==",
                            "data" : "8JXxpoRJjYGFkRVHvTwGTA==",
                            "cipher" : "AES/CBC/PKCS5Padding",
                            "key" : "openidm-sym-default"
                        },
                        "type" : "x-simple-encryption"
                    }
                }
            }
        ]
    }
    ...
  7. The connector is now configured. To verify the configuration, perform a RESTful GET request on the remote system URL, for example:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request GET \
     "https://localhost:8443/openidm/system/ActiveDirectory/account?_queryId=query-all-ids"

    This request should return the user accounts in the Active Directory server.

  8. (Optional) To configure reconciliation or LiveSync between OpenIDM and Active Directory, create a synchronization configuration file (sync.json) in your project's conf/ directory.

    The synchronization configuration file defines the attribute mappings and policies that are used during reconciliation.

    The following is a simple example of a sync.json file for Active Directory:

    {
        "mappings" : [
            {
                "name" : "systemADAccounts_managedUser",
                "source" : "system/ActiveDirectory/account",
                "target" : "managed/user",
                "properties" : [
                    { "source" : "cn", "target" : "displayName" },
                    { "source" : "description", "target" : "description" },
                    { "source" : "givenName", "target" : "givenName" },
                    { "source" : "mail", "target" : "email" },
                    { "source" : "sn", "target" : "familyName" },
                    { "source" : "sAMAccountName", "target" : "userName" }
                ],
                "policies" : [
                    { "situation" : "CONFIRMED", "action" : "UPDATE" },
                    { "situation" : "FOUND", "action" : "UPDATE" },
                    { "situation" : "ABSENT", "action" : "CREATE" },
                    { "situation" : "AMBIGUOUS", "action" : "EXCEPTION" },
                    { "situation" : "MISSING", "action" : "UNLINK" },
                    { "situation" : "SOURCE_MISSING", "action" : "DELETE" },
                    { "situation" : "UNQUALIFIED", "action" : "DELETE" },
                    { "situation" : "UNASSIGNED", "action" : "DELETE" }
                ]
            }
        ]
    }
  9. To test the synchronization, run a reconciliation operation as follows:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request POST \
     "https://localhost:8443/openidm/recon?_action=recon&mapping=systemADAccounts_managedUser"

    If reconciliation is successful, the command returns a reconciliation run ID, similar to the following:

    {"_id":"0629d920-e29f-4650-889f-4423632481ad","state":"ACTIVE"}
  10. Query the internal repository, using either a curl command, or the OpenIDM Admin UI, to make sure that the users in your Active Directory server were provisioned into the repository.
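
    For example, the following request returns the IDs of all managed users in the repository. It assumes the default query-all-ids query, which is enabled in a standard OpenIDM installation:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request GET \
     "https://localhost:8443/openidm/managed/user?_queryId=query-all-ids"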

11.5.2.1. Using PowerShell Scripts With the Active Directory Connector

The Active Directory connector supports PowerShell scripting. The following example shows a simple PowerShell script that is referenced in the connector configuration and can be called over the REST interface.

Note

External script execution is disabled on system endpoints by default. For testing purposes, you can enable script execution over REST on system endpoints, by adding the script action to the system object in the access.js file. For example:

$ more /path/to/openidm/script/access.js
...
{
    "pattern"   : "system/ActiveDirectory",
    "roles"     : "openidm-admin",
    "methods" : "action",
    "actions"   : "script"
},     

Be aware that exposing scripts to clients in this way implies a security risk in production environments. If you need to expose a script for direct external invocation, it might be better to write a custom authorization function to constrain the script ID that is permitted. Alternatively, do not expose the script action for external invocation, and instead expose a custom endpoint that can make only the desired script calls. For more information about using custom endpoints, see "Adding Custom Endpoints".

The following PowerShell script creates a new MS SQL user with a username that is specified when the script is called. The script sets the user's password to Passw0rd and, optionally, gives the user a role. Save this script as project-dir/script/createUser.ps1:

if ($loginName -ne $NULL) {
  [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
  $sqlSrv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') ('WIN-C2MSQ8G1TCA')

  $login = New-Object -TypeName ('Microsoft.SqlServer.Management.Smo.Login') ($sqlSrv, $loginName)
  $login.LoginType = 'SqlLogin'
  $login.PasswordExpirationEnabled = $false
  $login.Create('Passw0rd')
  # The next two lines are optional; they give the new login a server role
  $login.AddToRole('sysadmin')
  $login.Alter()
 } else {
  $Error_Message = [string]"Required variable 'loginName' is missing!"
  Write-Error $Error_Message
  throw $Error_Message
 }

Now edit the Active Directory connector configuration to reference the script. Add the following section to the connector configuration file (project-dir/conf/provisioner.openicf-ad.json):

 "systemActions" : [
     {
         "scriptId" : "ConnectorScriptName",
         "actions" : [
             {
                 "systemType" : ".*ActiveDirectoryConnector",
                 "actionType" : "Shell",
                 "actionSource" : "@echo off \r\n echo %loginName%\r\n"
             },
             {
                 "systemType" : ".*ActiveDirectoryConnector",
                 "actionType" : "PowerShell",
                 "actionFile" : "script/createUser.ps1"
             }
         ]
     }
 ]   

To call the PowerShell script over the REST interface, use the following request, specifying the loginName as input:

$ curl \
  --cacert self-signed.crt \
  --header "X-OpenIDM-Username: openidm-admin" \
  --header "X-OpenIDM-Password: openidm-admin" \
  --request POST \
  "https://localhost:8443/openidm/system/ActiveDirectory/?_action=script&scriptId=ConnectorScriptName&scriptExecuteMode=resource&loginName=myUser"

11.5.3. CSV File Connector

The CSV file connector is useful when importing users, either for initial provisioning or for ongoing updates. When used continuously in production, a CSV file serves as a change log, often containing only user records that have changed.

A sample CSV file connector configuration is provided in openidm/samples/provisioners/provisioner.openicf-csv.json.

The following example shows an excerpt of the provisioner configuration. The connectorHostRef property is optional and must be provided only if the connector runs remotely.

{
  "connectorRef": {
    "connectorHostRef": "#LOCAL",
    "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
    "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
    "bundleVersion": "1.5.0.0"
  }
}

The following excerpt shows the required configuration properties:

"configurationProperties" : {
    "csvFile" : "&{launcher.project.location}/data/hr.csv",
    "headerName" : "username",
    "headerUid" : "uid"
},

csvFile

The path to the CSV file that is the data source for this connector.

headerName

The CSV header that maps to the username for each row.

Default: username

headerUid

The CSV header that maps to the uid for each row.

Default: uid

The CSV file connector also supports the following optional configuration properties:

encoding

Default: utf-8

headerPassword

The CSV header that maps to the password for each row. Use this property when password-based authentication is required.

fieldDelimiter

The character in the CSV file that is used to separate field values.

Default: ,

quoteCharacter

The character in the CSV file that is used to encapsulate strings.

Default: "

newlineString

The character string in the CSV file that is used to terminate each line.

Default: \n

syncFileRetentionCount

The number of historical copies of the CSV file to retain when performing synchronization operations.

Default: 3
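
Putting these together, a configurationProperties section that overrides several of the optional settings might look like the following sketch. The values shown are illustrative only:

"configurationProperties" : {
    "csvFile" : "&{launcher.project.location}/data/hr.csv",
    "headerName" : "username",
    "headerUid" : "uid",
    "headerPassword" : "password",
    "fieldDelimiter" : ";",
    "syncFileRetentionCount" : 5
},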

11.5.4. Scripted SQL Connector

The Scripted SQL Connector uses customizable Groovy scripts to interact with the database. This connector is not bundled with OpenIDM 4 but can be downloaded from the OpenICF Connectors page.

The Scripted SQL connector uses one script for each of the following actions on the external database:

  • Create

  • Delete

  • Search

  • Sync

  • Test

  • Update

Example Groovy scripts are provided in the openidm/samples/sample3/tools/ directory.

The scripted SQL connector runs with autocommit mode enabled by default. As soon as a statement is executed that modifies a table, the update is stored on disk and the change cannot be rolled back. This setting applies to all database actions (search, create, delete, test, sync, and update). You can disable autocommit in the connector configuration file (conf/provisioner.openicf-scriptedsql.json) by adding the autoCommit property and setting it to false, for example:

"configurationProperties" : {
    "host" : "localhost",
    "port" : "3306",
    ...
    "database" : "HRDB",
    "autoCommit" : false,
    "reloadScriptOnExecution" : true,
    "createScriptFileName" : "&{launcher.project.location}/tools/CreateScript.groovy",
    ...

If you require a traditional transaction with a manual commit for a specific script, you can disable autocommit mode in the script or scripts for each action that requires a manual commit. For more information on disabling autocommit, see the corresponding MySQL documentation.
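
The following is a minimal sketch of such a manual transaction, assuming (as in the sample scripts) that the connector makes a java.sql.Connection available to the action script as the connection binding. The table and column names are hypothetical:

// Disable autocommit for this script, then commit or roll back explicitly.
connection.setAutoCommit(false)
try {
    def stmt = connection.prepareStatement(
        "UPDATE Users SET email = ? WHERE uid = ?")
    stmt.setString(1, "bjensen@example.com")
    stmt.setString(2, "bjensen")
    stmt.executeUpdate()
    stmt.close()
    // Make the change permanent only if everything succeeded.
    connection.commit()
} catch (Exception e) {
    // Undo the partial work on any failure.
    connection.rollback()
    throw e
} finally {
    // Restore the connector's default behavior.
    connection.setAutoCommit(true)
}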

11.5.5. Database Table Connector

The Database Table connector enables provisioning to a single table in a JDBC database. A sample connector configuration for the Database Table connector is provided in samples/provisioners/provisioner.openicf-contractordb.json. The corresponding data definition language file is provided in samples/provisioners/provisioner.openicf-contractordb.sql.

The following excerpt shows the settings for the connector configuration properties in the sample Database Table connector:

"configurationProperties" :
    {
       "quoting" : "",
       "host" : "localhost",
       "port" : "3306",
       "user" : "root",
       "password" : "",
       "database" : "contractordb",
       "table" : "people",
       "keyColumn" : "UNIQUE_ID",
       "passwordColumn" : "",
       "jdbcDriver" : "com.mysql.jdbc.Driver",
       "jdbcUrlTemplate" : "jdbc:mysql://%h:%p/%d",
       "enableEmptyString" : false,
       "rethrowAllSQLExceptions" : true,
       "nativeTimestamps" : true,
       "allNative" : false,
       "validConnectionQuery" : null,
       "changeLogColumn" : "CHANGE_TIMESTEMP",
       "datasource" : "",
       "jndiProperties" : null
    },

The mandatory configurable properties are as follows:

database

The JDBC database that contains the table to which you are provisioning.

table

The name of the table in the JDBC database that contains the user accounts.

keyColumn

The column value that is used as the unique identifier for rows in the table.

For a description of all configurable properties for this connector, see the OpenICF Connector Configuration Reference.

11.5.6. Groovy Connector Toolkit

OpenICF provides a generic Groovy Connector Toolkit that enables you to run a Groovy script for any OpenICF operation, such as search, update, create, and others, on any external resource.

The Groovy Connector Toolkit is not a complete connector in the traditional sense. Rather, it is a framework within which you must write your own Groovy scripts to address the requirements of your implementation. Specific scripts are provided with the samples cited below, and demonstrate how the Groovy Connector Toolkit can be used. These scripts cannot be used as is in your deployment, but are a good starting point on which to base your customization.

The Groovy Connector Toolkit is bundled with OpenIDM 4, in the JAR openidm/connectors/groovy-connector-1.4.2.0.jar.

Sample implementations are provided in "Samples That Use the Groovy Connector Toolkit to Create Scripted Connectors" in the Samples Guide.

11.5.7. PowerShell Connector Toolkit

The PowerShell Connector Toolkit is not a complete connector in the traditional sense. Rather, it is a framework within which you must write your own PowerShell scripts to address the requirements of your Microsoft Windows ecosystem. You can use the PowerShell Connector Toolkit to create connectors that can provision any Microsoft system, including, but not limited to, Active Directory, MS SQL, MS Exchange, Sharepoint, Azure, and Office365. Essentially, any task that can be performed with PowerShell can be executed through connectors based on this toolkit.

Connectors created with the PowerShell Connector Toolkit run on the .NET platform and require the installation of a .NET connector server on the Windows system. To install the .NET connector, follow the instructions in "Installing and Configuring a .NET Connector Server". These connectors also require PowerShell V2.

The PowerShell Connector Toolkit is not bundled with OpenIDM, but is available, with a subscription, from ForgeRock Backstage. To install the connector, download the archive (mspowershell-connector-1.4.2.0.zip) and extract the MsPowerShell.Connector.dll to the same directory where the Connector Server (ConnectorServerService.exe) is located.

OpenIDM includes Active Directory sample scripts for the PowerShell connector that will enable you to get started with this toolkit. For more information, see "Samples That Use the PowerShell Connector Toolkit to Create Scripted Connectors" in the Samples Guide.

11.5.8. Salesforce Connector

The Enterprise build of OpenIDM includes a Salesforce connector, along with a sample connector configuration. The Salesforce connector enables provisioning, reconciliation, and synchronization between Salesforce and the OpenIDM repository.

To use this connector, you need a Salesforce account, and a Connected App that has OAuth enabled, which will allow you to retrieve the required consumer key and consumer secret.

For additional instructions, and a sample Salesforce configuration, see "Salesforce Sample - Salesforce With the Salesforce Connector" in the Samples Guide.

11.5.9. Google Apps Connector

The Enterprise build of OpenIDM includes a Google Apps connector, along with a sample connector configuration. The Google Apps Connector enables you to interact with Google's web applications.

To use this connector, you need a Google Apps account.

If you have OpenIDM Enterprise, you can view a sample Google Apps connector configuration file in samples/provisioners/provisioner.openicf-google.json.

The following is an excerpt of the provisioner configuration file. The default location of the connector .jar is openidm/connectors, so the value of the connectorHostRef property must be "#LOCAL":

{
    "connectorHostRef": "#LOCAL",
    "connectorName": "org.forgerock.openicf.connectors.googleapps.GoogleAppsConnector",
    "bundleName": "org.forgerock.openicf.connectors.googleapps-connector",
    "bundleVersion": "[1.4.0.0,2.0.0.0)"
}, 

The following excerpt shows the required configuration properties:

"configurationProperties": {
    "domain": "",
    "clientId": "",
    "clientSecret": null,
    "refreshToken": null
}, 

These configuration properties are fairly straightforward:

domain

Set to the domain name for OAuth 2-based authorization.

clientId

A client identifier, as issued by the OAuth 2 authorization server. For more information, see the following section of RFC 6749: Client Identifier.

clientSecret

Sometimes also known as the client password. OAuth 2 authorization servers can support the use of clientId and clientSecret credentials, as noted in the following section of RFC 6749: Client Password.

refreshToken

A client can use an OAuth 2 refresh token to continue accessing resources. For more information, see the following section of RFC 6749: Refresh Tokens.
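
For illustration, a populated configurationProperties section might look like the following sketch. All of the values shown are placeholders; substitute your own Google Apps domain and the OAuth 2 credentials issued for your deployment:

"configurationProperties": {
    "domain": "example.com",
    "clientId": "1234567890.apps.googleusercontent.com",
    "clientSecret": "aBcDeFgHiJkLmNoPqRsT",
    "refreshToken": "1/AbCdEfGhIjKlMnOpQrStUvWxYz0123456789"
},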

For a sample Google Apps configuration that includes OAuth 2-based entries for configurationProperties, see "Google Sample - Connecting to Google With the Google Apps Connector" in the Samples Guide.

11.5.10. XML File Connector

OpenIDM includes a simple XML file connector. This connector is useful only in a demonstration context and should not be used for general provisioning of XML data stores. It is used in this document to demonstrate provisioning to a remote data store. In real deployments, if you need to connect to a custom XML data file, you should create your own scripted connector by using the Groovy connector toolkit.

A sample XML connector configuration is provided in path/to/openidm/samples/provisioners/provisioner.openicf-xml.json. The following excerpt of the provisioner configuration shows the main configurable properties:

{
    "connectorRef": {
        "connectorHostRef": "#LOCAL",
        "bundleName": "org.forgerock.openicf.connectors.xml-connector",
        "bundleVersion": "1.1.0.2",
        "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector"
    }
}

The connectorHostRef is optional if the connector server is local.

The configuration properties for the XML file connector set the relative path to the file containing the identity data, and also the paths to the required XML schemas:

{
    "configurationProperties": {
        "xsdIcfFilePath" : "&{launcher.project.location}/data/resource-schema-1.xsd",
        "xsdFilePath" : "&{launcher.project.location}/data/resource-schema-extension.xsd",
        "xmlFilePath" : "&{launcher.project.location}/data/xmlConnectorData.xml"
    }
}

&{launcher.project.location} refers to the project directory of your OpenIDM instance, for example, path/to/openidm/samples/sample1. Note that relative paths such as these work only if your connector server runs locally. For remote connector servers, you must specify the absolute path to the schema and data files.

xsdIcfFilePath

References the XSD file defining schema common to all XML file resources. Do not change the schema defined in this file.

xsdFilePath

References custom schema defining attributes specific to your project.

xmlFilePath

References the XML file that contains account entries.

Using the XML Connector to Reconcile Users in a Remote XML Data Store

This example demonstrates reconciliation of users stored in an XML file on a remote machine. The remote Java Connector Server enables OpenIDM to synchronize the internal OpenIDM repository with the remote XML repository.

The example assumes that a remote Java Connector Server is installed on a host named remote-host. For instructions on setting up the remote Java Connector Server, see "Installing a Remote Java Connector Server for Unix/Linux" or "Installing a Remote Java Connector Server for Windows".

Configuring the Remote Connector Server for the XML Connector Example

This example uses the XML data that is provided in the basic XML reconciliation sample (Sample 1). The XML connector runs as a remote connector, that is, on the remote host on which the Java Connector Server is installed. Before you start, copy the data and the XML connector over to the remote machine.

  1. Shut down the remote connector server, if it is running. In the connector server terminal window, type q:

    q
    INFO: Stopped listener bound to [0.0.0.0:8759]
    Dec 08, 2015 10:43:26 PM INFO  o.f.o.f.server.ConnectorServer: Server is
     shutting down org.forgerock.openicf.framework.server.ConnectorServer@75a07f17
  2. Copy the XML data from Sample 1 to an accessible location on the machine that hosts the remote Java Connector Server. For example:

    $ cd path/to/openidm
    $ scp -r samples/sample1/data testuser@remote-host:/home/testuser/xml-sample
    testuser@remote-host's password:
    resource-schema-1.xsd                    100% 4083     4.0KB/s   00:00
    resource-schema-extension.xsd            100% 1351     1.3KB/s   00:00
    xmlConnectorData.xml                     100% 1648     1.6KB/s   00:00
  3. Copy the XML connector .jar from the OpenIDM installation to the openicf/bundles directory on the remote host:

    $ cd path/to/openidm
    $ scp connectors/xml-connector-1.1.0.2.jar testuser@remote-host:/path/to/openicf/bundles
    testuser@remote-host's password:
    xml-connector-1.1.0.2.jar                100% 4379KB   4.3MB/s   00:00
  4. Restart the remote connector server so that it picks up the new connector:

    $ cd /path/to/openicf
    $ bin/ConnectorServer.sh /run
    ...
     Dec 08, 2015 10:46:03 PM INFO  o.i.f.i.a.l.LocalConnectorInfoManagerImpl:
     Add ConnectorInfo ConnectorKey( bundleName=org.forgerock.openicf.connectors.xml-connector
     bundleVersion=1.1.0.2 connectorName=org.forgerock.openicf.connectors.xml.XMLConnector )
     to Local Connector Info Manager from file:/home/testuser/openicf/bundles/xml-connector-1.1.0.2.jar 

    The connector server logs are noisy by default. You should, however, notice the addition of the XML connector.

Configuring OpenIDM for the XML Connector Example

This example uses the configuration of Sample 1, which is effectively your OpenIDM project location. Any configuration changes that you make must therefore be made in the conf directory of Sample 1:

  1. Copy the remote connector configuration file (provisioner.openicf.connectorinfoprovider.json) from the provisioner samples directory to the configuration directory of your OpenIDM project:

    $ cd path/to/openidm/samples/
    $ cp provisioners/provisioner.openicf.connectorinfoprovider.json sample1/conf
  2. Edit the remote connector configuration file (provisioner.openicf.connectorinfoprovider.json) to match your network setup.

    The following example indicates that the remote Java connector server is running on the host remote-host, listening on the default port, and configured with a secret key of Passw0rd:

    {
        "remoteConnectorServers" : [
            {
                "name" : "xml",
                "host" : "remote-host",
                "port" : 8759,
                "useSSL" : false,
                "timeout" : 0,
                "protocol" : "websocket",
                "key" : "Passw0rd"
            }
        ]
    }
  3. Edit the XML connector configuration file (provisioner.openicf-xml.json) in the sample1/conf directory as follows:

    {
        "name" : "xmlfile",
        "connectorRef" : {
            "connectorHostRef" : "xml",
            "bundleName" : "org.forgerock.openicf.connectors.xml-connector",
            "bundleVersion" : "1.1.0.2",
            "connectorName" : "org.forgerock.openicf.connectors.xml.XMLConnector"
        },
        "configurationProperties" : {
            "xsdIcfFilePath" : "/home/testuser/xml-sample/data/resource-schema-1.xsd",
            "xsdFilePath" : "/home/testuser/xml-sample/data/resource-schema-extension.xsd",
            "xmlFilePath" : "/home/testuser/xml-sample/data/xmlConnectorData.xml"
        }
    }
    • The connectorHostRef property indicates which remote connector server to use, and refers to the name property defined in the provisioner.openicf.connectorinfoprovider.json file.

    • The bundleVersion : 1.1.0.2 must be exactly the same as the version of the XML connector that you are using. If you specify a range here, the XML connector version must be included in this range.

    • The configurationProperties must specify the absolute path to the data files that you copied to the server on which the Java Connector Server is running.

  4. Start OpenIDM with the configuration for Sample 1:

    $ ./startup.sh -p samples/sample1/
  5. Use the Admin UI to verify that OpenIDM can reach the remote connector server and that the XML connector is active:

    Log in to the Admin UI (https://localhost:8443/openidm/admin) and select Configure > Connectors.

    The XML connector should be available, and active.

    Figure shows active XML Connector in Admin UI

    Click on the XML connector to view its configuration.

  6. To test that the connector has been configured correctly, run a reconciliation operation as follows:

    1. Select Configure > Mappings and click the systemXmlfileAccounts_managedUser mapping.

    2. Click Reconcile Now.

    If the reconciliation is successful, the two users from the XML file should have been added to the managed user repository.

    To check this, select Manage > User.


11.6. Creating Default Connector Configurations

You have three ways to create provisioner files:

  • Start with the sample provisioner files in the /path/to/openidm/samples/provisioners directory. For more information, see "Connectors Supported With OpenIDM 4".

  • Set up connectors with the help of the Admin UI. To start this process, navigate to https://localhost:8443/admin and log in to OpenIDM. Continue with "Adding New Connectors from the Admin UI".

  • Use the service that OpenIDM exposes through the REST interface to create basic connector configuration files, or use the cli.sh or cli.bat scripts to generate a basic connector configuration. To see how this works continue with "Adding New Connectors from the Command Line".

11.6.1. Adding New Connectors from the Admin UI

You can include several different connectors in an OpenIDM configuration. In the Admin UI, select Configure > Connector. Try some of the different connector types in the screen that appears. Observe as the Admin UI changes the configuration options to match the requirements of the connector type.

Figure shows adding connectors from the Admin UI

The list of connectors shown in the Admin UI does not include all supported connectors. For information and examples of how each supported connector is configured, see "Connectors Supported With OpenIDM 4".

When you have filled in all required text boxes, the Admin UI allows you to validate the connector configuration.

If you want to configure a different connector through the Admin UI, you could copy the provisioner file from the /path/to/openidm/samples/provisioners directory. However, additional configuration may be required, as described in "Connectors Supported With OpenIDM 4".

Alternatively, some connectors are included with the configuration of a specific sample. For example, if you want to build a ScriptedSQL connector, read "Sample 3 - Using the Custom Scripted Connector Bundler to Build a ScriptedSQL Connector" in the Samples Guide.

11.6.2. Adding New Connectors from the Command Line

This section describes how to create connector configurations over the REST interface. For instructions on how to create connector configurations from the command line, see "Using the configureconnector Subcommand".

You create a new connector configuration file in three stages:

  1. List the available connectors.

  2. Generate the core configuration.

  3. Connect to the target system and generate the final configuration.

List the available connectors by using the following command:

$ curl \
--cacert self-signed.crt \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request POST \
"https://localhost:8443/openidm/system?_action=availableConnectors"

Available connectors are installed in openidm/connectors. OpenIDM 4 bundles the following connectors:

  • CSV File Connector

  • Database Table Connector

  • Scripted Groovy Connector Toolkit, which includes the following sample implementations:

    • Scripted SQL Connector

    • Scripted CREST Connector

    • Scripted REST Connector

  • LDAP Connector

  • XML Connector

  • GoogleApps Connector (OpenIDM Enterprise only)

  • Salesforce Connector (OpenIDM Enterprise only)

The preceding command therefore returns the following output:

{
  "connectorRef": [
    {
      "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector",
      "displayName": "XML Connector",
      "bundleName": "org.forgerock.openicf.connectors.xml-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.1.0.2"
    },
    {
      "connectorName": "org.identityconnectors.ldap.LdapConnector",
      "displayName": "LDAP Connector",
      "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.4.1.0"
    },
    {
      "connectorName": "org.forgerock.openicf.connectors.scriptedsql.ScriptedSQLConnector",
      "displayName": "Scripted SQL Connector",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.4.2.0"
    },
    {
      "connectorName": "org.forgerock.openicf.connectors.scriptedrest.ScriptedRESTConnector",
      "displayName": "Scripted REST Connector",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.4.2.0"
    },
    {
      "connectorName": "org.forgerock.openicf.connectors.scriptedcrest.ScriptedCRESTConnector",
      "displayName": "Scripted CREST Connector",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.4.2.0"
    },
    {
      "connectorName": "org.forgerock.openicf.connectors.groovy.ScriptedPoolableConnector",
      "displayName": "Scripted Poolable Groovy Connector",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.4.2.0"
    },
    {
      "connectorName": "org.forgerock.openicf.connectors.groovy.ScriptedConnector",
      "displayName": "Scripted Groovy Connector",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.4.2.0"
    },
    {
      "connectorName": "org.identityconnectors.databasetable.DatabaseTableConnector",
      "displayName": "Database Table Connector",
      "bundleName": "org.forgerock.openicf.connectors.databasetable-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.1.0.1"
    },
    {
      "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
      "displayName": "CSV File Connector",
      "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
      "systemType": "provisioner.openicf",
      "bundleVersion": "1.5.0.0"
    }
  ]
}

To generate the core configuration, choose one of the available connectors by copying one of the JSON objects from the generated list into the body of the REST command, as shown in the following command for the XML connector:

$ curl \
--cacert self-signed.crt \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-Type: application/json" \
--request POST \
--data '{"connectorRef":
    {"connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector",
    "displayName": "XML Connector",
    "bundleName": "org.forgerock.openicf.connectors.xml-connector",
    "bundleVersion": "1.1.0.2"}
 }' \
 "https://localhost:8443/openidm/system?_action=createCoreConfig"

This command returns a core connector configuration, similar to the following:

{
    "poolConfigOption": {
    "minIdle": 1,
    "minEvictableIdleTimeMillis": 120000,
    "maxWait": 150000,
    "maxIdle": 10,
    "maxObjects": 10
  },
    "resultsHandlerConfig": {
    "enableAttributesToGetSearchResultsHandler": true,
    "enableFilteredResultsHandler": true,
    "enableNormalizingResultsHandler": true
  },
  "operationTimeout": {
    "SCHEMA": -1,
    "SYNC": -1,
    "VALIDATE": -1,
    "SEARCH": -1,
    "AUTHENTICATE": -1,
    "CREATE": -1,
    "UPDATE": -1,
    "DELETE": -1,
    "TEST": -1,
    "SCRIPT_ON_CONNECTOR": -1,
    "SCRIPT_ON_RESOURCE": -1,
    "GET": -1,
    "RESOLVEUSERNAME": -1
  },
  "configurationProperties": {
    "xsdIcfFilePath": null,
    "xsdFilePath": null,
    "createFileIfNotExists": false,
    "xmlFilePath": null
  },
  "connectorRef": {
    "bundleVersion": "1.1.0.2",
    "bundleName": "org.forgerock.openicf.connectors.xml-connector",
    "displayName": "XML Connector",
    "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector"
  }
}

The configuration that is returned is not yet functional. Notice that it does not contain the required system-specific configurationProperties, such as the host name and port, or the xmlFilePath for the XML file-based connector. In addition, the configuration does not include the complete list of objectTypes and operationOptions.

To generate the final configuration, add values for the configurationProperties to the core configuration, and use the updated configuration as the body for the next command:

$ curl \
--cacert self-signed.crt \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-Type: application/json" \
--request POST \
--data '{
  "configurationProperties":
    {
      "xsdIcfFilePath" : "samples/sample1/data/resource-schema-1.xsd",
      "xsdFilePath" : "samples/sample1/data/resource-schema-extension.xsd",
      "xmlFilePath" : "samples/sample1/data/xmlConnectorData.xml",
      "createFileIfNotExists": false
    },
    "operationTimeout": {
      "SCHEMA": -1,
      "SYNC": -1,
      "VALIDATE": -1,
      "SEARCH": -1,
      "AUTHENTICATE": -1,
      "CREATE": -1,
      "UPDATE": -1,
      "DELETE": -1,
      "TEST": -1,
      "SCRIPT_ON_CONNECTOR": -1,
      "SCRIPT_ON_RESOURCE": -1,
      "GET": -1,
      "RESOLVEUSERNAME": -1
    },
    "resultsHandlerConfig": {
      "enableAttributesToGetSearchResultsHandler": true,
      "enableFilteredResultsHandler": true,
      "enableNormalizingResultsHandler": true
    },
    "poolConfigOption": {
      "minIdle": 1,
      "minEvictableIdleTimeMillis": 120000,
      "maxWait": 150000,
      "maxIdle": 10,
      "maxObjects": 10
    },
    "connectorRef": {
      "bundleVersion": "1.1.0.2",
      "bundleName": "org.forgerock.openicf.connectors.xml-connector",
      "displayName": "XML Connector",
      "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector"
    }
  }' \
"https://localhost:8443/openidm/system?_action=createFullConfig"

Note

Notice the single quotes around the argument to the --data option in the preceding command. For most UNIX shells, single quotes prevent the shell from interpreting the enclosed string, including any new lines it contains. You can therefore pass the --data '...' argument on a single line, or spread it across multiple lines as shown.

OpenIDM attempts to read the schema, if available, from the external resource in order to generate output. OpenIDM then iterates through schema objects and attributes, creating JSON representations for objectTypes and operationOptions for supported objects and operations.

The output includes the basic --data input, along with operationOptions and objectTypes.

Because OpenIDM produces a full property set for all attributes and all object types in the schema from the external resource, the resulting configuration can be large. For an LDAP server, OpenIDM can generate a configuration containing several tens of thousands of lines, for example. You might therefore want to reduce the schema to a minimum on the external resource before you run the createFullConfig command.

When you have the complete connector configuration, save that configuration in a file named provisioner.openicf-name.json (where name corresponds to the name of the connector) and place it in the conf directory of your project. For more information, see "Configuring Connectors".
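If you prefer not to copy the file into place manually, you can generally submit the same configuration over the REST interface at the config endpoint. The following sketch assumes a connector named xml and a configuration saved as provisioner.openicf-xml.json in the current directory:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data @provisioner.openicf-xml.json \
 "https://localhost:8443/openidm/config/provisioner.openicf/xml"

Both approaches result in the same configuration object; the file-based approach is generally easier to keep under version control with the rest of your project.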

11.7. Checking the Status of External Systems Over REST

After a connection has been configured, external systems are accessible over the REST interface at the URL https://localhost:8443/openidm/system/connector-name. Aside from accessing the data objects within the external systems, you can test the availability of the systems themselves.

To list the external systems that are connected to an OpenIDM instance, use the test action on the URL https://localhost:8443/openidm/system/. The following example shows the status of an external LDAP system that has been configured as a connector:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/system?_action=test"
[
  {
    "ok": true,
    "displayName": "LDAP Connector",
    "connectorRef": {
      "bundleVersion": "[1.4.0.0,2.0.0.0)",
      "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
      "connectorName": "org.identityconnectors.ldap.LdapConnector"
    },
    "objectTypes": [
      "__ALL__",
      "group",
      "account"
    ],
    "config": "config/provisioner.openicf/ldap",
    "enabled": true,
    "name": "ldap"
  }
]

The status of the system is provided by the ok parameter. If the connection is available, the value of this parameter is true.
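Because the test action returns JSON, you can pipe its output through a command-line JSON processor for lightweight availability monitoring. The following sketch assumes that the jq utility is installed, and that a connector named ldap is configured, as in the preceding example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --silent \
 "https://localhost:8443/openidm/system?_action=test" | jq -r '.[] | "\(.name): \(.ok)"'
ldap: true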

To obtain the status for a single system, include the name of the connector in the URL, for example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/system/ldap?_action=test"
{
  "ok": true,
  "displayName": "LDAP Connector",
  "connectorRef": {
    "bundleVersion": "[1.4.0.0,2.0.0.0)",
    "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
    "connectorName": "org.identityconnectors.ldap.LdapConnector"
  },
  "objectTypes": [
    "__ALL__",
    "group",
    "account"
  ],
  "config": "config/provisioner.openicf/ldap",
  "enabled": true,
  "name": "ldap"
}

If there is a problem with the connection, the ok parameter returns false, with an indication of the error. In the following example, the LDAP server named ldap, running on localhost:1389, is down:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/system/ldap?_action=test"
{
  "ok": false,
  "error": "localhost:1389",
  "displayName": "LDAP Connector",
  "connectorRef": {
    "bundleVersion": "[1.4.0.0,2.0.0.0)",
    "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
    "connectorName": "org.identityconnectors.ldap.LdapConnector"
  },
  "objectTypes": [
    "__ALL__",
    "group",
    "account"
  ],
  "config": "config/provisioner.openicf/ldap",
  "enabled": true,
  "name": "ldap"
}

To test the validity of a connector configuration, use the testConfig action and include the configuration in the command. For example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --data '{
    "name" : "xmlfile",
    "connectorRef" : {
        "bundleName" : "org.forgerock.openicf.connectors.xml-connector",
        "bundleVersion" : "1.1.0.2",
        "connectorName" : "org.forgerock.openicf.connectors.xml.XMLConnector"
    },
    "producerBufferSize" : 100,
    "connectorPoolingSupported" : true,
    "poolConfigOption" : {
        "maxObjects" : 10,
        "maxIdle" : 10,
        "maxWait" : 150000,
        "minEvictableIdleTimeMillis" : 120000,
        "minIdle" : 1
    },
    "operationTimeout" : {
        "CREATE" : -1,
        "TEST" : -1,
        "AUTHENTICATE" : -1,
        "SEARCH" : -1,
        "VALIDATE" : -1,
        "GET" : -1,
        "UPDATE" : -1,
        "DELETE" : -1,
        "SCRIPT_ON_CONNECTOR" : -1,
        "SCRIPT_ON_RESOURCE" : -1,
        "SYNC" : -1,
        "SCHEMA" : -1
    },
    "configurationProperties" : {
        "xsdIcfFilePath" : "samples/sample1/data/resource-schema-1.xsd",
        "xsdFilePath" : "samples/sample1/data/resource-schema-extension.xsd",
        "xmlFilePath" : "samples/sample1/data/xmlConnectorData.xml"
    },
    "syncFailureHandler" : {
        "maxRetries" : 5,
        "postRetryAction" : "logged-ignore"
    },
    "objectTypes" : {
        "account" : {
            "$schema" : "http://json-schema.org/draft-03/schema",
            "id" : "__ACCOUNT__",
            "type" : "object",
            "nativeType" : "__ACCOUNT__",
            "properties" : {
                "description" : {
                    "type" : "string",
                    "nativeName" : "__DESCRIPTION__",
                    "nativeType" : "string"
                },
                "firstname" : {
                    "type" : "string",
                    "nativeName" : "firstname",
                    "nativeType" : "string"
                },
                "email" : {
                    "type" : "string",
                    "nativeName" : "email",
                    "nativeType" : "string"
                },
                "_id" : {
                    "type" : "string",
                    "nativeName" : "__UID__"
                },
                "password" : {
                    "type" : "string",
                    "nativeName" : "password",
                    "nativeType" : "string"
                },
                "name" : {
                    "type" : "string",
                    "required" : true,
                    "nativeName" : "__NAME__",
                    "nativeType" : "string"
                },
                "lastname" : {
                    "type" : "string",
                    "required" : true,
                    "nativeName" : "lastname",
                    "nativeType" : "string"
                },
                "mobileTelephoneNumber" : {
                    "type" : "string",
                    "required" : true,
                    "nativeName" : "mobileTelephoneNumber",
                    "nativeType" : "string"
                },
                "securityQuestion" : {
                    "type" : "string",
                    "required" : true,
                    "nativeName" : "securityQuestion",
                    "nativeType" : "string"
                },
                "securityAnswer" : {
                    "type" : "string",
                    "required" : true,
                    "nativeName" : "securityAnswer",
                    "nativeType" : "string"
                },
                "roles" : {
                    "type" : "string",
                    "required" : false,
                    "nativeName" : "roles",
                    "nativeType" : "string"
                }
            }
        }
    },
    "operationOptions" : { }
}' \
 --request POST \
 "https://localhost:8443/openidm/system?_action=testConfig"

If the configuration is valid, the command returns "ok": true, for example:

{
   "ok": true,
   "name": "xmlfile"
}

If the configuration is not valid, the command returns an error, indicating the problem with the configuration. For example, the following result is returned when the LDAP connector configuration is missing a required property (in this case, the baseContexts to synchronize):

{
  "error": "org.identityconnectors.framework.common.exceptions.ConfigurationException:
           The list of base contexts cannot be empty",
  "name": "OpenDJ",
  "ok": false
} 

The testConfig action requires a running OpenIDM instance, as it uses the REST API, but does not require an active connector instance for the connector whose configuration you want to test.

11.8. Adding Attributes to Connector Configurations

You can add the attributes of your choice to a connector configuration file. Specifically, if you want to apply "Extending the Property Type Configuration" to one of the objectTypes, such as account, use the format shown under "Specifying the Supported Object Types".

You can configure connectors to enable provisioning of arbitrary property level extensions (such as image files) to system resources. For example, if you want to set up image files such as account avatars, open the appropriate provisioner file. Look for an account section similar to:

"account" : {
    "$schema" : "http://json-schema.org/draft-03/schema",
    "id" : "__ACCOUNT__",
    "type" : "object",
    "nativeType" : "__ACCOUNT__",
    "properties" : {...  

Under properties, add one of the following code blocks. The first block works for a single photo encoded as a base64 string. The second block would address multiple photos encoded in the same way:

"attributeByteArray" : {
    "type" : "string",
    "nativeName" : "attributeByteArray",
    "nativeType" : "JAVA_TYPE_BYTE_ARRAY"
},  
"attributeByteArrayMultivalue": {
    "type": "array",
    "items": {
        "type": "string",
        "nativeType": "JAVA_TYPE_BYTE_ARRAY"
    },
    "nativeName": "attributeByteArrayMultivalue"
}, 
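The value of such an attribute must be supplied as a base64-encoded string. As an illustrative sketch (the file name avatar.png is hypothetical), you could prepare the value in a GNU/Linux shell as follows, and then include the resulting single-line string as the value of attributeByteArray in the create or update request for the account:

$ base64 avatar.png | tr -d '\n'

The tr command strips the line breaks that some base64 implementations insert into long encoded strings.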

Chapter 12. Synchronizing Data Between Resources

One of the core services of OpenIDM is synchronizing identity data between different resources. In this chapter, you will learn about the different types of synchronization, and how to configure OpenIDM's flexible synchronization mechanism.

12.1. Types of Synchronization

Synchronization happens either when OpenIDM receives a change directly, or when OpenIDM discovers a change on an external resource. An external resource can be any system that holds identity data, such as Active Directory, OpenDJ, a CSV file, a JDBC database, and others. OpenIDM connects to external resources by using OpenICF connectors. For more information, see "Connecting to External Resources".

For direct changes to managed objects, OpenIDM immediately synchronizes those changes to all mappings configured to use those objects as their source. A direct change can originate not only as a write request through the REST interface, but also as an update resulting from reconciliation with another resource.

  • OpenIDM discovers and synchronizes changes from external resources by using reconciliation and liveSync.

  • OpenIDM synchronizes changes made to its internal repository with external resources by using implicit synchronization.

Reconciliation

In identity management, reconciliation is the bidirectional synchronization of objects between different data stores. Traditionally, reconciliation applies mainly to user objects, but OpenIDM can reconcile any objects, such as groups, roles, and devices.

In any reconciliation operation, there is a source system (the system that contains the changes) and a target system (the system to which the changes will be propagated). The source and target system are defined in a mapping. OpenIDM can be either the source or the target in a mapping. You can configure multiple mappings for one OpenIDM instance, depending on the external resources to which OpenIDM connects.

To perform reconciliation, OpenIDM analyzes both the source system and the target system, to discover the differences that it must reconcile. Reconciliation can therefore be a heavyweight process. When working with large data sets, finding all changes can be more work than processing the changes.

Reconciliation is, however, thorough. It recognizes system error conditions and catches changes that might be missed by liveSync. Reconciliation therefore serves as the basis for compliance and reporting functionality.

LiveSync

LiveSync captures the changes that occur on a remote system, then pushes those changes to OpenIDM. OpenIDM uses the defined mappings to replay the changes where they are required: in the OpenIDM repository, on another remote system, or both. Unlike reconciliation, liveSync uses a polling system, and is intended to react quickly to changes as they happen.

To perform this polling, liveSync relies on a change detection mechanism on the external resource to determine which objects have changed. The change detection mechanism is specific to the external resource, and can be a time stamp, a sequence number, a change vector, or any other method of recording changes that have occurred on the system. For example, OpenDJ implements a change log that provides OpenIDM with a list of objects that have changed since the last request. Active Directory implements a change sequence number, and certain databases might have a lastChange attribute.

Implicit synchronization

Implicit synchronization automatically pushes changes that are made in the OpenIDM internal repository to external systems.

Note that implicit synchronization only pushes changes out to the external data sources. To synchronize a complete data set, you must start with a reconciliation operation.

OpenIDM uses mappings, configured in your project's conf/sync.json file, to determine which data to synchronize, and how that data must be synchronized. You can schedule reconciliation operations, and the frequency with which OpenIDM polls for liveSync changes, as described in "Scheduling Tasks and Events".

OpenIDM logs reconciliation and synchronization operations in the audit logs by default. For information about querying the reconciliation and synchronization logs, see "Querying Audit Logs Over REST".

12.2. Defining Your Data Mapping Model

In general, identity management software implements one of the following data models:

  • A meta-directory data model, where all data are mirrored in a central repository.

    The meta-directory model offers fast access at the risk of getting outdated data.

  • A virtual data model, where only a minimum set of attributes are stored centrally, and most are loaded on demand from the external resources in which they are stored.

    The virtual model guarantees fresh data, but pays for that guarantee in terms of performance.

OpenIDM leaves the data model choice up to you. You determine the right trade-offs for a particular deployment. OpenIDM does not hard-code any particular schema or set of attributes stored in the repository. Instead, you define how external system objects map onto managed objects, and OpenIDM dynamically updates the repository to store the managed object attributes that you configure.

You can, for example, choose to follow the data model defined in the Simple Cloud Identity Management (SCIM) specification. The following object represents a SCIM user:

{
    "userName": "james1",
    "familyName": "Berg",
    "givenName": "James",
    "email": [
        "james1@example.com"
    ],
    "description": "Created by OpenIDM REST.",
    "password": "asdfkj23",
    "displayName": "James Berg",
    "phoneNumber": "12345",
    "employeeNumber": "12345",
    "userType": "Contractor",
    "title": "Vice President",
    "active": true
}

Note

Avoid using the dash character ( - ) in property names, like last-name, as dashes in names make JavaScript syntax more complex. If you cannot avoid the dash, then write source['last-name'] instead of source.last-name in your JavaScript.
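For example, with an illustrative last-name property:

var lastName = source['last-name'];    // Bracket notation handles the dash.
var broken   = source.last-name;       // Parsed as (source.last) minus (name).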

12.3. Configuring Synchronization Between Two Resources

This section describes the high-level steps required to set up synchronization between two resources. A basic synchronization configuration involves the following steps:

  1. Set up the connector configuration.

    Connector configurations are defined in conf/provisioner.*.json files. One provisioner file must be defined for each external resource to which you are connecting.

  2. Configure a synchronization mapping.

    Mappings are defined in the conf/sync.json file. There is only one sync.json file per OpenIDM instance, but multiple mappings can be defined in that file.

  3. Configure any scripts that are required to check source and target objects, and to manipulate attributes.

  4. In addition to these configuration elements, OpenIDM stores a links table in its repository. The links table maintains a record of relationships established between source and target objects.

12.3.1. Setting Up the Connector Configuration

Connector configuration files map external resource objects to OpenIDM objects, and are described in detail in "Connecting to External Resources". Connector configuration files are stored in the conf/ directory of your project, and are named provisioner.resource-name.json, where resource-name reflects the connector technology and the external resource, for example, openicf-xml.

You can create and modify connector configurations through the Admin UI or directly in the configuration files, as described in the following sections.

12.3.1.1. Setting up and Modifying Connector Configurations in the Admin UI

The easiest way to set up and modify connector configurations is to use the Admin UI.

To add or modify a connector configuration in the Admin UI:

  1. Log in to the UI (https://localhost:8443/admin) as an administrative user. The default administrative username and password are both openidm-admin.

  2. Select Configure > Connectors.

  3. Click on the connector that you want to modify (if there is an existing connector configuration) or click New Connector to set up a new connector configuration.

12.3.1.2. Editing Connector Configuration Files

A number of sample provisioner files are provided in /path/to/openidm/samples/provisioners. To modify connector configuration files directly, edit one of the sample provisioner files that corresponds to the resource to which you are connecting.

The following excerpt of an example LDAP connector configuration shows the name for the connector and two attributes of an account object type. In the attribute mapping definitions, the attribute name is mapped from the nativeName (the attribute name used on the external resource) to the attribute name that is used in OpenIDM. The sn attribute in LDAP is mapped to lastName in OpenIDM. The homePhone attribute is defined as an array, because it can have multiple values:

{
    "name": "MyLDAP",
    "objectTypes": {
        "account": {
            "lastName": {
                "type": "string",
                "required": true,
                "nativeName": "sn",
                "nativeType": "string"
            },
            "homePhone": {
                "type": "array",
                "items": {
                    "type": "string",
                    "nativeType": "string"
                },
                "nativeName": "homePhone",
                "nativeType": "string"
            }
        }
    }
}

For OpenIDM to access external resource objects and attributes, the object and its attributes must match the connector configuration. Note that the connector file only maps external resource objects to OpenIDM objects. To construct attributes and to manipulate their values, you use the synchronization mappings file, described in the following section.

12.3.2. Configuring the Synchronization Mapping

A synchronization mapping specifies a relationship between properties in two data stores. A typical mapping, between an external LDAP directory and an internal Managed User data store, is:

"source": "lastName",
"target": "sn"

In this case, the lastName source attribute is mapped to the sn (surname) attribute on the target.

The synchronization mappings file (conf/sync.json) represents the core configuration for OpenIDM synchronization.

The sync.json file defines attribute mappings from a source to a target. A data store can be either a source, or a target, or both. To configure bidirectional synchronization, you must define a separate mapping for each data flow. To synchronize records from an LDAP server to the repository and also from the repository to the LDAP server, you would define two separate mappings.

You can also identify and add mappings in the Admin UI. To do so, navigate to https://localhost:8443/admin, and click Configure > Mappings.

You can update a mapping while the server is running. Make sure, however, that you do not update a mapping if a reconciliation is in progress for that mapping.

The easiest way to set up synchronization mappings is by using the Admin UI. The Admin UI serves as a front end to the OpenIDM configuration files, so the changes you make to mappings in the Admin UI are written to your project's conf/sync.json file.

12.3.2.1. Specifying Resource Mappings in sync.json

Objects in external resources are specified in a mapping as system/name/object-type, where name is the name used in the connector configuration file, and object-type is the object defined in the connector configuration file list of object types. Objects in OpenIDM's internal repository are specified in the mapping as managed/object-type, where object-type is defined in the managed objects configuration file (conf/managed.json).

External resources, and OpenIDM managed objects, can be the source or the target in a mapping. The mapping name, by convention, is set to a string of the form source_target, as shown in the following example:

{
    "mappings": [
        {
            "name": "systemLdapAccounts_managedUser",
            "source": "system/ldap/account",
            "target": "managed/user",
            "properties": [
                {
                    "source": "lastName",
                    "target": "sn"
                },
                {
                    "source": "telephoneNumber",
                    "target": "telephoneNumber"
                },
                {
                    "target": "phoneExtension",
                    "default": "0047"
                },
                {
                    "source": "email",
                    "target": "mail",
                    "comment": "Set mail if non-empty.",
                    "condition": {
                        "type": "text/javascript",
                        "source": "(object.email != null)"
                    }
                },
                {
                    "source": "",
                    "target": "displayName",
                    "transform": {
                        "type": "text/javascript",
                        "source": "source.lastName +', ' + source.firstName;"
                    }
                },
                {
                    "source" : "uid",
                    "target" : "userName",
                    "condition" : "/linkQualifier eq \"user\""
                }
            ]
        }
    ]
}    

In this example, the source is the external resource (ldap), and the target is OpenIDM's repository, specifically the managed user objects. The properties defined in the mapping reflect attribute names that are defined in the OpenIDM configuration. The source attribute uid is defined in the ldap connector configuration file, rather than on the external resource itself.

To synchronize objects from the repository to the ldap resource, define a mapping with source managed/user and target system/ldap/account. In this case, the name for the mapping would be managedUser_systemLdapAccounts.
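A minimal sketch of such a reverse mapping follows. The single property definition mirrors the lastName/sn pairing from the earlier examples; you would add the remaining properties to suit your own schema:

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "target": "system/ldap/account",
            "properties": [
                {
                    "source": "sn",
                    "target": "lastName"
                }
            ]
        }
    ]
}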

12.3.2.2. Creating Attributes in a Mapping

You can create attributes on the target resource as part of the mapping. In the preceding example, a phoneExtension attribute with a default value of 0047 is created on the target.

Use the default property to specify a value to assign to the target property. When OpenIDM determines the value of the target property, any associated conditions are evaluated first, followed by the transform script, if present. The default value is applied (for update and create actions) if the source property and the transform script yield a null value. The default value overrides the target value, if one exists.
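The following illustrative property definition (the jobCode attribute and its values are hypothetical) combines a transform with a default to show the evaluation order. The default applies only when the transform yields null:

{
    "source": "jobCode",
    "target": "employeeType",
    "transform": {
        "type": "text/javascript",
        "source": "source === 'C' ? 'Contractor' : null;"
    },
    "default": "Employee"
}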

To set up attributes with default values in the Admin UI:

  1. Select Configure > Mappings, and click on the Mapping you want to edit.

  2. Click on the Target Property that you want to create (phoneExtension in the previous example), select the Default Values tab, and enter a default value for that property mapping.

12.3.2.3. Transforming Attributes in a Mapping

Use a mapping configuration to define attribute transformations that occur during synchronization. In the following excerpt of the sample mapping, the value of the displayName attribute (on the target) is set using a combination of the lastName and firstName attribute values from the source:

{
    "source": "",
    "target": "displayName",
    "transform": {
        "type": "text/javascript",
        "source": "source.lastName +', ' + source.firstName;"
    }
},   

For transformations, the source property is optional. However, a source object is only available when you specify the source property. Therefore, in order to use source.lastName and source.firstName to calculate the displayName, the example specifies "source" : "".

If you do not specify a source attribute, the entire object is regarded as the source, and you must include the attribute name in the transformation script. For example, to transform the source email address to lower case, your script would be source.mail.toLowerCase();. If you do specify a source attribute, just that attribute is regarded as the source. In this case, the transformation script would be source.toLowerCase();.
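The following sketch shows the second form in context, assuming a mail attribute that exists on both the source and the target:

{
    "source": "mail",
    "target": "mail",
    "transform": {
        "type": "text/javascript",
        "source": "source.toLowerCase();"
    }
},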

To set up a transformation script in the Admin UI, select Configure > Mappings, and select the Mapping. Select the line with the target attribute whose value you want to set. On the Transformation Script tab, select Javascript for the transformation type, and enter the transformation as an Inline Script.

12.3.2.5. Using Conditions in a Mapping

By default, OpenIDM synchronizes all attributes in a mapping. To facilitate more complex relationships between source and target objects, define specific conditions under which OpenIDM maps certain attributes. OpenIDM supports two types of mapping conditions:

  • Scriptable conditions, in which an attribute is mapped only if the defined script evaluates to true

  • Link qualifier conditions, used to distinguish the properties that are to be set only for the identified link qualifier. For more information, see "Adding Link Qualifiers to a Mapping".

To set up mapping conditions in the Admin UI, select Configure > Mappings. Click the mapping for which you want to configure conditions. On the Properties tab, click on the attribute that you want to map, then select the Conditional Updates tab.

Configure the filtered condition on the Condition Filter tab, or a scriptable condition on the Script tab.

12.3.2.5.1. Using Scriptable Conditions

Scriptable conditions create mapping logic based on the result of the condition script. If the script does not return true, OpenIDM does not manipulate the target attribute during a synchronization operation.

In the following excerpt, the value of the target mail attribute is set to the value of the source email attribute only if the source attribute is not empty:

{
    "target": "mail",
    "comment": "Set mail if non-empty.",
    "source": "email",
    "condition": {
        "type": "text/javascript",
        "source": "(object.email != null)"
    }
...   

Only the source object is in the condition script's scope, so the object.email in this example refers to the email property of the source object.

Note

Add comments to your mapping file to indicate how the attributes are mapped. This example includes a comment property but you can use any property whose name is meaningful to you, as long as that property name is not used elsewhere in the server. OpenIDM simply ignores unknown property names in JSON configuration files.

12.3.2.6. Filtering Synchronized Objects

By default, OpenIDM synchronizes all objects that match those defined in the connector configuration for the resource. Many connectors allow you to limit the scope of objects that the connector accesses. For example, the LDAP connector allows you to specify base DNs and LDAP filters so that you do not need to access every entry in the directory. You can also filter the source or target objects that are included in a synchronization operation. To apply these filters, use the validSource, validTarget, or sourceCondition properties in your mapping:

validSource

A script that determines if a source object is valid to be mapped. The script yields a boolean value: true indicates that the source object is valid; false can be used to defer mapping until some condition is met. In the root scope, the source object is provided in the "source" property. If the script is not specified, then all source objects are considered valid:

{
    "validSource": {
        "type": "text/javascript",
        "source": "source.ldapPassword != null"
    }
}
validTarget

A script used during reconciliation's second phase that determines if a target object is valid to be mapped. The script yields a boolean value: true indicates that the target object is valid; false indicates that the target object should not be included in reconciliation. In the root scope, the target object is provided in the "target" property. If the script is not specified, then all target objects are considered valid for mapping:

{
    "validTarget": {
        "type": "text/javascript",
        "source": "target.employeeType == 'internal'"
    }
}
sourceCondition

The sourceCondition element defines an additional filter that must be met for a source object's inclusion in a mapping.

This condition works like a validSource script. Its value can be either a queryFilter string, or a script configuration. sourceCondition is used principally to specify that a mapping applies only to a particular role or entitlement.

The following sourceCondition restricts synchronization to those user objects whose account status is active:

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "sourceCondition": "/source/accountStatus eq \"active\"",
        ...
        }
    ]
}
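Because a sourceCondition works like a validSource script, with the source object in its scope, the scripted form of the same restriction might look like the following sketch:

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "sourceCondition": {
                "type": "text/javascript",
                "source": "source.accountStatus === 'active';"
            },
        ...
        }
    ]
}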

During synchronization, your scripts and filters have access to a source object and a target object. Examples already shown in this section use source.attributeName to retrieve attributes from the source objects. Your scripts can also write to target attributes using target.attributeName syntax:

{
    "onUpdate": {
        "type": "text/javascript",
        "source": "if (source.email != null) {target.mail = source.email;}"
    }
}

In addition, the sourceCondition filter has the linkQualifier variable in its scope.

For more information about scripting, see "Scripting Reference".

12.3.2.7. Preventing Accidental Deletion of a Target System

If a source resource is empty, the default behavior is to exit without failure and to log a warning similar to the following:

2015-06-05 10:41:18:918 WARN Cannot reconcile from an empty data
    source, unless allowEmptySourceSet is true.

The reconciliation summary is also logged in the reconciliation audit log.

This behavior prevents reconciliation operations from accidentally deleting everything in a target resource. In the event that a source system is unavailable but erroneously reports its status as up, the absence of source objects should not result in objects being removed on the target resource.

When you do want reconciliations of an empty source resource to proceed, override the default behavior by setting the "allowEmptySourceSet" property to true in the mapping. For example:

{
    "mappings" : [
        {
        "name" : "systemXmlfileAccounts_managedUser",
        "source" : "system/xmlfile/account",
        "allowEmptySourceSet" : true,
        ...

Note that with allowEmptySourceSet set to true, reconciling an empty source wipes out the corresponding objects in the target resource.

12.3.3. Constructing and Manipulating Attributes With Scripts

OpenIDM provides a number of script hooks to construct and manipulate attributes. These scripts can be triggered during various stages of the synchronization process, and are defined as part of the mapping, in the sync.json file.

The scripts can be triggered when a managed or system object is created (onCreate), updated (onUpdate), or deleted (onDelete). Scripts can also be triggered when a link is created (onLink) or removed (onUnlink).

In the default synchronization mapping, changes are always written to target objects, not to source objects. However, you can explicitly include a call to an action that should be taken on the source object within the script.

Note

The onUpdate script is always called for an UPDATE situation, even if the synchronization process determines that there is no difference between the source and target objects, and that the target object will not be updated.

If, subsequent to the onUpdate script running, the synchronization process determines that the target value to set is the same as its existing value, the change is prevented from synchronizing to the target.

The following sample extract of a sync.json file derives a DN for an LDAP entry when the entry is created in the internal repository:

{
    "onCreate": {
        "type": "text/javascript",
        "source":
            "target.dn = 'uid=' + source.uid + ',ou=people,dc=example,dc=com'"
    }
}

12.3.4. Advanced Use of Scripts in Mappings

"Constructing and Manipulating Attributes With Scripts" shows how to manipulate attributes with scripts when objects are created and updated. You might want to trigger scripts in response to other synchronization actions. For example, you might not want OpenIDM to delete a managed user directly when an external account record is deleted, but instead unlink the objects and deactivate the user in another resource. (Alternatively, you might delete the object in OpenIDM but nevertheless execute a script.) The following example shows a more advanced mapping configuration that exposes the script hooks available during synchronization.

{
    "mappings": [
        {
            "name": "systemLdapAccount_managedUser",
            "source": "system/ldap/account",
            "target": "managed/user",
            "validSource": {
                "type": "text/javascript",
                "file": "script/isValid.js"
            },
            "correlationQuery" : {
                "type" : "text/javascript",
                "source" : "var map = {'_queryFilter': 'uid eq \"' +
                     source.userName + '\"'}; map;"
            },
            "properties": [
                {
                    "source": "uid",
                    "transform": {
                        "type": "text/javascript",
                        "source": "source.toLowerCase()"
                    },
                    "target": "userName"
                },
                {
                    "source": "",
                    "transform": {
                        "type": "text/javascript",
                        "source": "if (source.myGivenName)
                            {source.myGivenName;} else {source.givenName;}"
                    },
                    "target": "givenName"
                },
                {
                    "source": "",
                    "transform": {
                        "type": "text/javascript",
                        "source": "if (source.mySn)
                            {source.mySn;} else {source.sn;}"
                    },
                    "target": "familyName"
                },
                {
                    "source": "cn",
                    "target": "fullname"
                },
                {
                    "comment": "Multi-valued in LDAP, single-valued in AD.
                        Retrieve first non-empty value.",
                    "source": "title",
                    "transform": {
                        "type": "text/javascript",
                        "file": "script/getFirstNonEmpty.js"
                    },
                    "target": "title"
                },
                {
                    "condition": {
                        "type": "text/javascript",
                        "source": "var clearObj = openidm.decrypt(object);
                            ((clearObj.password != null) &&
                            (clearObj.ldapPassword != clearObj.password))"
                    },
                    "transform": {
                        "type": "text/javascript",
                        "source": "source.password"
                    },
                    "target": "__PASSWORD__"
                }
            ],
            "onCreate": {
                "type": "text/javascript",
                "source": "target.ldapPassword = null;
                    target.adPassword = null;
                    target.password = null;
                    target.ldapStatus = 'New Account'"
            },
            "onUpdate": {
                "type": "text/javascript",
                "source": "target.ldapStatus = 'OLD'"
            },
            "onUnlink": {
                "type": "text/javascript",
                "file": "script/triggerAdDisable.js"
            },
            "policies": [
                {
                    "situation": "CONFIRMED",
                    "action": "UPDATE"
                },
                {
                    "situation": "FOUND",
                    "action": "UPDATE"
                },
                {
                    "situation": "ABSENT",
                    "action": "CREATE"
                },
                {
                    "situation": "AMBIGUOUS",
                    "action": "EXCEPTION"
                },
                {
                    "situation": "MISSING",
                    "action": "EXCEPTION"
                },
                {
                    "situation": "UNQUALIFIED",
                    "action": "UNLINK"
                },
                {
                    "situation": "UNASSIGNED",
                    "action": "EXCEPTION"
                }
            ]
        }
    ]
}

The following list shows the properties that you can use as hooks in mapping configurations to call scripts:

Triggered by Situation

onCreate, onUpdate, onDelete, onLink, onUnlink

Object Filter

validSource, validTarget

Correlating Objects

correlationQuery

Triggered on Reconciliation

result

Scripts Inside Properties

condition, transform

Your scripts can get data from any connected system at any time by using the openidm.read(id) function, where id is the identifier of the object to read.

The following example reads a managed user object from the repository:

repoUser = openidm.read("managed/user/ddoe");

The following example reads an account from an external LDAP resource:

externalAccount = openidm.read("system/ldap/account/uid=ddoe,ou=People,dc=example,dc=com");

Note that the query targets a DN rather than a UID as it did in the previous example. The attribute that is used for the _id is defined in the connector configuration file and, in this example, is set to "uidAttribute" : "dn". Although it is possible to use a DN (or any unique attribute) for the _id, as a best practice, you should use an attribute that is both unique and immutable.
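Because a read can target an object that does not exist, a defensive pattern in a mapping script is to test the result before using it. A minimal sketch, with the illustrative user ID ddoe:

var repoUser = openidm.read("managed/user/ddoe");
if (repoUser) {
    // Proceed only when the read returned an object.
    logger.info("Read user {}", repoUser.userName);
}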

12.4. Managing Reconciliation Over REST

Reconciliation is the bidirectional synchronization of objects between two data stores. You can trigger, cancel, and monitor reconciliation operations over REST, using the REST endpoint https://localhost:8443/openidm/recon.

12.4.1. Triggering a Reconciliation Run

The following example triggers a reconciliation operation based on the systemLdapAccounts_managedUser mapping. The mapping is defined in the file conf/sync.json:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser"

By default, a reconciliation run ID is returned immediately when the reconciliation operation is initiated. Clients can make subsequent calls to the reconciliation service, using this reconciliation run ID to query its state and to call operations on it.

The reconciliation run initiated previously would return something similar to the following:

{"_id":"9f4260b6-553d-492d-aaa5-ae3c63bd90f0-14","state":"ACTIVE"}

To complete the reconciliation operation before the reconciliation run ID is returned, set the waitForCompletion property to true when the reconciliation is initiated:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser&waitForCompletion=true"

12.4.2. Obtaining the Details of a Reconciliation Run

Display the details of a specific reconciliation run over REST by including the reconciliation run ID in the URL. The following call shows the details of the reconciliation run initiated in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/recon/0890ad62-4738-4a3f-8b8e-f3c83bbf212e"
{
  "ended": "2014-03-06T07:00:32.094Z",
  "_id": "7a07c100-4f11-4d7e-bf8e-fa4594f99d58",
  "mapping": "systemLdapAccounts_managedUser",
  "state": "SUCCESS",
  "stage": "COMPLETED_SUCCESS",
  "stageDescription": "reconciliation completed.",
  "progress": {
     "links": {
       "created": 0,
       "existing": {
         "total": "1",
         "processed": 1
       }
     },
     "target": {
       "created": 0,
       "existing": {
         "total": "3",
         "processed": 3
       }
     },
     "source": {
       "existing": {
         "total": "1",
         "processed": 1
       }
     }
  },
  "situationSummary": {
     "UNASSIGNED": 2,
     "TARGET_IGNORED": 0,
     "SOURCE_IGNORED": 0,
     "MISSING": 0,
     "FOUND": 0,
     "AMBIGUOUS": 0,
     "UNQUALIFIED": 0,
     "CONFIRMED": 1,
     "SOURCE_MISSING": 0,
     "ABSENT": 0
  },
  "started": "2014-03-06T07:00:31.907Z"
}

12.4.3. Canceling a Reconciliation Run

Cancel a reconciliation run by sending a REST call with the cancel action, specifying the reconciliation run ID. The following call cancels the reconciliation run initiated in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon/0890ad62-4738-4a3f-8b8e-f3c83bbf212e?_action=cancel"

The output for a reconciliation cancellation request is similar to the following:

{
     "status":"SUCCESS",
     "action":"cancel",
     "_id":"0890ad62-4738-4a3f-8b8e-f3c83bbf212e"
}

If the reconciliation run is waiting for completion before its ID is returned, obtain the reconciliation run ID from the list of active reconciliations, as described in the following section.

12.4.4. Listing Reconciliation Runs

Display a list of reconciliation processes that have completed, and those that are in progress, by running a RESTful GET on "https://localhost:8443/openidm/recon". The following example displays all reconciliation runs:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/recon"

The output is similar to the following, with one item for each reconciliation run:

{
   "reconciliations": [
     {
       "ended": "2014-03-06T06:14:11.845Z",
       "_id": "4286510e-986a-4521-bfa4-8cd1e039a7f5",
       "mapping": "systemLdapAccounts_managedUser",
       "state": "SUCCESS",
       "stage": "COMPLETED_SUCCESS",
       "stageDescription": "reconciliation completed.",
       "progress": {
         "links": {
           "created": 1,
           "existing": {
           "total": "0",
           "processed": 0
         }
       },
       "target": {
         "created": 1,
         "existing": {
           "total": "2",
           "processed": 2
         }
       },
       "source": {
         "existing": {
           "total": "1",
           "processed": 1
         }
       }
     },
     "situationSummary": {
       "UNASSIGNED": 2,
       "TARGET_IGNORED": 0,
       "SOURCE_IGNORED": 0,
       "MISSING": 0,
       "FOUND": 0,
       "AMBIGUOUS": 0,
       "UNQUALIFIED": 0,
       "CONFIRMED": 0,
       "SOURCE_MISSING": 0,
       "ABSENT": 1
     },
     "started": "2014-03-06T06:14:04.722Z"
   },
 ]
}

Each reconciliation run has the following properties:

_id

The ID of the reconciliation run.

mapping

The name of the mapping, defined in the conf/sync.json file.

state

The high level state of the reconciliation run. Values can be as follows:

  • ACTIVE

    The reconciliation run is in progress.

  • CANCELED

    The reconciliation run was successfully canceled.

  • FAILED

    The reconciliation run was terminated because of failure.

  • SUCCESS

    The reconciliation run completed successfully.

stage

The current stage of the reconciliation run's progress. Values can be as follows:

  • ACTIVE_INITIALIZED

    The initial stage, when a reconciliation run is first created.

  • ACTIVE_QUERY_ENTRIES

    Querying the source, target and possibly link sets to reconcile.

  • ACTIVE_RECONCILING_SOURCE

    Reconciling the set of IDs retrieved from the mapping source.

  • ACTIVE_RECONCILING_TARGET

    Reconciling any remaining entries from the set of IDs retrieved from the mapping target, that were not matched or processed during the source phase.

  • ACTIVE_LINK_CLEANUP

    Checking whether any links are now unused and should be cleaned up.

  • ACTIVE_PROCESSING_RESULTS

    Post-processing of reconciliation results.

  • ACTIVE_CANCELING

    Attempting to abort a reconciliation run in progress.

  • COMPLETED_SUCCESS

    Successfully completed processing the reconciliation run.

  • COMPLETED_CANCELED

    Completed processing because the reconciliation run was aborted.

  • COMPLETED_FAILED

    Completed processing because of a failure.

stageDescription

A description of the stages described previously.

progress

The progress object has the following structure (annotated here with comments):

"progress":{
  "source":{             // Progress on set of existing entries in the mapping source
    "existing":{
      "processed":1001,
        "total":"1001"   // Total number of entries in source set, if known, "?" otherwise
    }
  },
  "target":{             // Progress on set of existing entries in the mapping target
    "existing":{
      "processed":1001,
      "total":"1001"     // Total number of entries in target set, if known, "?" otherwise
    },
    "created":0          // New entries that were created
  },
  "links":{              // Progress on set of existing links between source and target
    "existing":{
      "processed":1001,
      "total":"1001"     // Total number of existing links, if known, "?" otherwise
    },
  "created":0            // Denotes new links that were created
  }
},     

12.4.5. Triggering LiveSync Over REST

Because you can trigger liveSync operations over REST (or by using the resource API), you can use an external scheduler to trigger liveSync operations, rather than using the OpenIDM scheduling mechanism.

There are two ways to trigger liveSync over REST:

  • Use the _action=liveSync parameter directly on the resource. This is the recommended method. The following example calls liveSync on the user accounts in an external LDAP system:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request POST \
     "https://localhost:8443/openidm/system/ldap/account?_action=liveSync"
  • Target the system endpoint and supply a source parameter to identify the object that should be synchronized. This method matches the scheduler configuration and can therefore be used to test schedules before they are implemented.

    The following example calls the same liveSync operation as the previous example:

    $ curl \
     --cacert self-signed.crt \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request POST \
     "https://localhost:8443/openidm/system?_action=liveSync&source=system/ldap/account"

A successful liveSync operation returns the following response:

{
    "_rev": "4",
    "_id": "SYSTEMLDAPACCOUNT",
    "connectorData": {
        "nativeType": "integer",
        "syncToken": 1
    }
}

Do not run two identical liveSync operations simultaneously. Rather, ensure that the first operation has completed before launching a second, similar operation.

To troubleshoot a liveSync operation that has not succeeded, include an optional parameter (detailedFailure) to return additional information. For example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/system/ldap/account?_action=liveSync&detailedFailure=true"

Note

The first time liveSync is called, it does not have a synchronization token in the database to establish which changes have already been processed. The default liveSync behavior is to locate the last existing entry in the change log, and to store that entry in the database as the current starting position from which changes should be applied. This behavior prevents liveSync from processing changes that might already have been processed during an initial data load. Subsequent liveSync operations will pick up and process any new changes.

Typically, in setting up liveSync on a new system, you would load the data initially (by using reconciliation, for example) and then enable liveSync, starting from that base point.

12.5. Restricting Reconciliation By Using Queries

Every reconciliation operation performs a query on the source and on the target resource, to determine which records should be reconciled. The default source and target queries are query-all-ids, which means that all records in both the source and the target are considered candidates for that reconciliation operation.

You can restrict reconciliation to specific entries by defining explicit source or target queries in the mapping configuration.

To restrict reconciliation to only those records whose employeeType on the source resource is Permanent, you might specify a source query as follows:

"mappings" : [
     {
         "name" : "managedUser_systemLdapAccounts",
         "source" : "managed/user",
         "target" : "system/ldap/account",
         "sourceQuery" : {
            "_queryFilter" : "employeeType eq \"Permanent\""
         },
...

The format of the query can be any query type that is supported by the resource, and can include additional parameters, if applicable. OpenIDM 4 supports the following query types.

For queries on managed objects:

  • _queryId for arbitrary predefined, parameterized queries

  • _queryFilter for arbitrary filters, in common filter notation

  • _queryExpression for client-supplied queries, in native query format

For queries on system objects:

  • _queryId=query-all-ids (the only supported predefined query)

  • _queryFilter for arbitrary filters, in common filter notation

The source and target queries send the query, by default, to the resource that is defined for that source or target. You can override the resource to which the query is sent by specifying a resourceName in the query. For example, to query a specific endpoint instead of the source resource, you might modify the preceding source query as follows:

"mappings" : [
    {
        "name" : "managedUser_systemLdapAccounts",
        "source" : "managed/user",
        "target" : "system/ldap/account",
        "sourceQuery" : {
            "resourceName" : "endpoint/scriptedQuery"
            "_queryFilter" : "employeeType eq \"Permanent\""
        },
...

To override a source or target query that is defined in the mapping, you can specify the query when you call the reconciliation operation. If you wanted to reconcile all employee entries, and not just the permanent employees, you would run the reconciliation operation as follows:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{"sourceQuery": {"_queryId" : "query-all-ids"}}' \
 "https://localhost:8443/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"

By default, a reconciliation operation runs both the source and target phase. To avoid queries on the target resource, set runTargetPhase to false in the mapping configuration (conf/sync.json file). To prevent the target resource from being queried during the reconciliation operation configured in the previous example, amend the mapping configuration as follows:

{
    "mappings" : [
        {
            "name" : "systemLdapAccounts_managedUser",
            "source" : "system/ldap/account",
            "target" : "managed/user",
            "sourceQuery" : {
                "_queryFilter" : "employeeType eq \"Permanent\""
            },
            "runTargetPhase" : false,
   ...  

You can also restrict reconciliation by using queries through the Admin UI. Select Configure > Mappings, click the mapping you want to edit, and then select Association > Reconciliation Query Filters. You can then specify the desired source and target queries.

12.5.1. Improving Reconciliation Performance

Reconciliation is designed to be highly performant out of the box. You can, however, improve performance further, depending on your OpenIDM deployment. This section describes two ways of boosting the reconciliation performance.

12.5.1.1. Improving Reconciliation Query Performance

Reconciliation operations are processed in two phases: a source phase and a target phase. In most reconciliation configurations, source and target queries make a read call to every record on the source and target systems to determine candidates for reconciliation. On slow source or target systems, these frequent calls can incur a substantial performance cost.

To improve query performance in these situations, you can preload the entire result set into memory on the source or target system, or on both systems. Subsequent read queries on known IDs are made against the data in memory, rather than the data on the remote system. For this optimization to be effective, the entire result set must fit into the available memory on the system for which it is enabled.

The optimization works by defining a sourceQuery or targetQuery in the synchronization mapping that returns not just the ID, but the complete object.

The following example query loads the full result set into memory during the source phase of the reconciliation. The example uses a common filter expression, called with the _queryFilter keyword. The query returns the complete object for all entries that include a uid (uid sw ""):

"mappings" : [
    {
        "name" : "systemLdapAccounts_managedUser",
        "source" : "system/ldap/account",
        "target" : "managed/user",
        "sourceQuery" : {
            "_queryFilter" : "uid sw \"\""
        },
    ...

OpenIDM tries to detect what data has been returned. The autodetection mechanism assumes that a result set that includes three or more fields per object (apart from the _id and _rev fields) contains the complete object.

You can explicitly state whether a query is configured to return complete objects by setting the value of sourceQueryFullEntry or targetQueryFullEntry in the mapping. The setting of these properties overrides the autodetection mechanism.

Setting these properties to false indicates that the returned object is not the complete object. This might be required if a query returns more than three fields of an object, but not the complete object. Without this setting, the autodetect logic would assume that the complete object was being returned. OpenIDM uses only the IDs from this query result. If the complete object is required, the object is queried on demand.
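Conversely, if a query returns several fields per object but not the complete object, you can state that explicitly. The following sketch illustrates this; the _fields parameter is shown only for illustration and assumes the query supports it:

"mappings" : [
    {
        "name" : "systemLdapAccounts_managedUser",
        "source" : "system/ldap/account",
        "target" : "managed/user",
        "sourceQueryFullEntry" : false,
        "sourceQuery" : {
            "_queryFilter" : "uid sw \"\"",
            "_fields" : "uid,mail,employeeType"
        },
    ...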

Setting these properties to true indicates that the complete object is returned. This setting is typically required only for very small objects, for which the number of returned fields does not reach the threshold required for the auto-detection mechanism to assume that it is a full object. In this case, the query result includes all the details required to pre-load the full object.

The following excerpt indicates that the full objects are returned and that OpenIDM should not autodetect the result set:

"mappings" : [
    {
        "name" : "systemLdapAccounts_managedUser",
        "source" : "system/ldap/account",
        "target" : "managed/user",
        "sourceQueryFullEntry" : true,
        "sourceQuery" : {
            "_queryFilter" : "uid sw \"\""
        },
    ...

12.5.1.2. Improving Role-Based Provisioning Performance With an onRecon Script

OpenIDM provides an onRecon script that runs once, at the beginning of each reconciliation. This script can perform any setup or initialization operations that are appropriate for the reconciliation run.

In addition, OpenIDM provides a reconContext that is added to a request's context chain when reconciliation runs. The reconContext can store pre-loaded data that can be used by other OpenIDM components (such as the managed object service) to increase performance.

The default onRecon script (openidm/bin/defaults/script/roles/onRecon.groovy) loads the reconContext with all the roles and assignments that are required for the current mapping. The effectiveAssignments script checks the reconContext first. If a reconContext is present, the script uses that reconContext to populate the array of effectiveAssignments. This prevents a read operation to managed/role or managed/assignment every time reconciliation runs, and greatly improves the overall performance for role-based provisioning.

You can customize the onRecon, effectiveRoles, and effectiveAssignments scripts to provide additional business logic during reconciliation. If you customize these scripts, copy the default scripts from openidm/bin/defaults/script into your project's script directory, and make the changes there.

12.5.2. Configuring Reconciliation Paging

"Improving Reconciliation Query Performance" describes how to improve reconciliation performance by loading all entries into memory to avoid making individual requests to the external system for every ID. However, this optimization depends on the entire result set fitting into the available memory on the system for which it is enabled. For particularly large data sets (for example, data sets of hundreds of millions of users), having the entire data set in memory might not be feasible.

To alleviate this constraint, OpenIDM supports reconciliation paging, which breaks down extremely large data sets into chunks. It also lets you specify the number of entries that should be reconciled in each chunk or page.

Reconciliation paging is disabled by default, and can be enabled per mapping (in the sync.json file). To configure reconciliation paging, set the reconSourceQueryPaging property to true and set the reconSourceQueryPageSize in the synchronization mapping, for example:

{
    "mappings" : [
        {
            "name" : "systemLdapAccounts_managedUser",
            "source" : "system/ldap/account",
            "target" : "managed/user",
            "reconSourceQueryPaging" : true,
            "reconSourceQueryPageSize" : 100,
            ...
        }

The value of reconSourceQueryPageSize must be a positive integer, and specifies the number of entries that will be processed in each page. If reconciliation paging is enabled but no page size is set, a default page size of 1000 is used.

12.6. Restricting Reconciliation to a Specific ID

You can specify an ID to restrict reconciliation to a specific record in much the same way as you restrict reconciliation by using queries.

To restrict reconciliation to a specific ID, use the reconById action, instead of the recon action when you call the reconciliation operation. Specify the ID with the ids parameter. Reconciling more than one ID with the reconById action is not currently supported.

The following example is based on the data from Sample 2b, which maps an LDAP server with the OpenIDM repository. The example reconciles only the user bjensen, using the managedUser_systemLdapAccounts mapping to update the user account in LDAP with the data from the OpenIDM repository. The _id for bjensen in this example is b3c2f414-e7b3-46aa-8ce6-f4ab1e89288c. The example assumes that implicit synchronization has been disabled and that a reconciliation operation is required to copy changes made in the repository to the LDAP system:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon?_action=reconById&mapping=managedUser_systemLdapAccounts&ids=b3c2f414-e7b3-46aa-8ce6-f4ab1e89288c"

Reconciliation by ID takes the default reconciliation options that are specified in the mapping, so the source and target queries, and the source and target phases described in the previous section, apply equally to reconciliation by ID.

12.7. Configuring the LiveSync Retry Policy

You can specify what OpenIDM should do when a liveSync operation reports a failure. Configure the liveSync retry policy to specify the number of times a failed modification should be reattempted, and what should happen if the modification is still unsuccessful after the specified number of attempts. If no retry policy is configured, OpenIDM reattempts the change an infinite number of times, until the change is successful. This behavior can increase data consistency in the case of transient failures (for example, when the connection to the database is temporarily lost). However, in situations where the cause of the failure is permanent (for example, if the change does not meet certain policy requirements), the change will never succeed, regardless of the number of attempts. In this case, the infinite retry behavior can effectively block subsequent liveSync operations from starting.

Generally, a scheduled reconciliation operation will eventually force consistency. However, to prevent repeated retries that block liveSync, restrict the number of times OpenIDM reattempts the same modification. You can then specify what OpenIDM does with failed liveSync changes. The failed modification can be stored in a dead letter queue, discarded, or reapplied. Alternatively, an administrator can be notified of the failure by email or by some other means. This behavior can be scripted. The default configuration in the samples provided with OpenIDM is to retry a failed modification five times, and then to log and ignore the failure.

The liveSync retry policy is configured in the connector configuration file (provisioner.openicf-*.json). The sample connector configuration files have a retry policy defined as follows:

"syncFailureHandler" : {
  "maxRetries" : 5,
  "postRetryAction" : "logged-ignore"
}, 

The maxRetries field specifies the number of attempts that OpenIDM should make to process the failed modification. The value of this property must be a positive integer, or -1. A value of zero indicates that failed modifications should not be reattempted. In this case, the post-retry action is executed immediately when a liveSync operation fails. A value of -1 (or omitting the maxRetries property, or the entire syncFailureHandler from the configuration) indicates that failed modifications should be retried an infinite number of times. In this case, no post retry action is executed.
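For example, the following sketch disables retries entirely, so that a failed change is sent straight to the post-retry action (here, the dead letter queue):

"syncFailureHandler" : {
  "maxRetries" : 0,
  "postRetryAction" : "dead-letter-queue"
},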

The default retry policy relies on the scheduler, or whatever invokes liveSync. Therefore, if retries are enabled and a liveSync modification fails, OpenIDM will retry the modification the next time that liveSync is invoked.

The postRetryAction field indicates what OpenIDM should do if the maximum number of retries has been reached (or if maxRetries has been set to zero). The post-retry action can be one of the following:

  • logged-ignore indicates that OpenIDM should ignore the failed modification, and log its occurrence.

  • dead-letter-queue indicates that OpenIDM should save the details of the failed modification in a table in the repository (accessible over REST at repo/synchronisation/deadLetterQueue/provisioner-name).

  • script specifies a custom script that should be executed when the maximum number of retries has been reached. For information about using custom scripts in the configuration, see "Scripting Reference".

    In addition to the regular objects described in "Scripting Reference", the following objects are available in the script scope:

    syncFailure

    Provides details about the failed record. The structure of the syncFailure object is as follows:

    "syncFailure" :
      {
        "token" : the ID of the token,
        "systemIdentifier" : a string identifier that matches the "name" property in
                             provisioner.openicf.json,
        "objectType" : the object type being synced, one of the keys in the
                       "objectTypes" property in provisioner.openicf.json,
        "uid" : the UID of the object (for example uid=joe,ou=People,dc=example,dc=com),
        "failedRecord", the record that failed to synchronize
      },    

    To access these fields, include syncFailure.fieldname in your script.

    failureCause

    Provides the exception that caused the original liveSync failure.

    failureHandlers

    OpenIDM currently provides two synchronization failure handlers out of the box:

    • loggedIgnore indicates that the failure should be logged, after which no further action should be taken.

    • deadLetterQueue indicates that the failed record should be written to a specific table in the repository, where further action can be taken.

    To invoke one of the internal failure handlers from your script, use a call similar to the following (shown here for JavaScript):

    failureHandlers.deadLetterQueue.invoke(syncFailure, failureCause);

    Two sample scripts are provided in /path/to/openidm/samples/syncfailure/script, one that logs failures, and one that sends them to the dead letter queue in the repository.
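
    For example, a custom postRetryAction script might log some context and then delegate to one of the built-in handlers. The following is a minimal JavaScript sketch, not one of the shipped samples:

    // Log the failed object and the underlying cause.
    logger.warn("liveSync failed for {} ({} object on system {})",
        syncFailure.uid, syncFailure.objectType, syncFailure.systemIdentifier);
    logger.warn("Cause: {}", failureCause);

    // Delegate to the built-in dead letter queue handler so that the failed
    // record can be inspected, and possibly replayed, later.
    failureHandlers.deadLetterQueue.invoke(syncFailure, failureCause);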

The following sample provisioner configuration file extract shows a liveSync retry policy that specifies a maximum of four retries before the failed modification is sent to the dead letter queue:

...
"connectorName" : "org.identityconnectors.ldap.LdapConnector"
    },

    "syncFailureHandler" : {
        "maxRetries" : 4,
        "postRetryAction" : dead-letter-queue
    },
    "poolConfigOption" : {
... 

In the case of a failed modification, a message similar to the following is output to the log file:

INFO: sync retries = 1/4, retrying

OpenIDM reattempts the modification the specified number of times. If the modification is still unsuccessful, a message similar to the following is logged:

INFO: sync retries = 4/4, retries exhausted
Jul 19, 2013 11:59:30 AM
    org.forgerock.openidm.provisioner.openicf.syncfailure.DeadLetterQueueHandler invoke
INFO: uid=jdoe,ou=people,dc=example,dc=com saved to dead letter queue

The log message indicates the entry for which the modification failed (uid=jdoe, in this example).

You can view the failed modification in the dead letter queue, over the REST interface, as follows:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/repo/synchronisation/deadLetterQueue/ldap?_queryId=query-all-ids"
{
    "query-time-ms": 2,
    "result":
        [
            {
                "_id": "4",
                "_rev": "0"
            }
        ],
    "conversion-time-ms": 0
}

To view the details of a specific failed modification, include its ID in the URL:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/repo/synchronisation/deadLetterQueue/ldap/4"
{
  "objectType": "account",
  "systemIdentifier": "ldap",
  "failureCause": "org.forgerock.openidm.sync.SynchronizationException:
            org.forgerock.openidm.objset.ConflictException:
            org.forgerock.openidm.sync.SynchronizationException:
            org.forgerock.openidm.script.ScriptException:
            ReferenceError: \"bad\" is not defined.
            (PropertyMapping/mappings/0/properties/3/condition#1)",
  "token": 4,
  "failedRecord": "complete record, in xml format"
  "uid": "uid=jdoe,ou=people,dc=example,dc=com",
  "_rev": "0",
  "_id": "4"
}   

12.8. Disabling Automatic Synchronization Operations

By default, all mappings are automatically synchronized. A change to a managed object is automatically synchronized to all resources for which the managed object is configured as a source. Similarly, if liveSync is enabled for a system, changes to an object on that system are automatically propagated to the managed object repository.

To prevent automatic synchronization for a specific mapping, set the enableSync property of that mapping to false. In the following example, implicit synchronization is disabled. This means that changes to objects in the internal repository are not automatically propagated to the LDAP directory. To propagate changes to the LDAP directory, reconciliation must be launched manually:

{
    "mappings" : [
        {
            "name" : "managedUser_systemLdapAccounts",
            "source" : "managed/user",
            "target" : "system/ldap/account",
            "enableSync" : false,
             ....
}  
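With this configuration in place, you can launch the reconciliation manually, for example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"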

If enableSync is set to false for a system to managed user mapping (for example "systemLdapAccounts_managedUser"), liveSync is disabled for that mapping.

12.9. Configuring Synchronization Failure Compensation

When implicit synchronization is used to push a large number of changes from the managed object repository to several external repositories, the process can take some time. Problems such as lost connections might happen, resulting in the changes being only partially synchronized.

For example, if a Human Resources manager adds a group of new employees in one database, a partial synchronization might mean that some of those employees do not have access to their email or other systems.

You can configure implicit synchronization to revert a reconciliation operation if it is not completely successful. This is known as failure compensation. An example of such a configuration is shown in "Sample 5b - Failure Compensation With Multiple Resources" in the Samples Guide. That sample demonstrates how OpenIDM compensates when synchronization to an external resource fails.

Failure compensation works by using the optional onSync hook, which can be specified in the conf/managed.json file. The onSync hook can be used to provide failure compensation as follows:

...
"onDelete" : {
    "type" : "text/javascript",
    "file" : "ui/onDelete-user-cleanup.js"
    },
"onSync" : {
    "type" : "text/javascript",
    "file" : "compensate.js"
    },
"properties" : [
    ... 

The onSync hook references a script (compensate.js) that is located in the /path/to/openidm/bin/defaults/script directory.

When a managed object is changed, an implicit synchronization operation attempts to synchronize the change (and any other pending changes) with any external data store(s) for which a mapping is configured. Note that implicit synchronization is enabled by default. To disable implicit synchronization, see "Disabling Automatic Synchronization Operations".

The implicit synchronization process proceeds with each mapping, in the order in which the mappings are specified in sync.json.

The compensate.js script is designed to avoid partial synchronization. If synchronization is successful for all configured mappings, OpenIDM exits from the script.

If an implicit synchronization operation fails for a particular resource, the onSync hook invokes the compensate.js script. This script attempts to revert the original change by performing another update to the managed object. This change, in turn, triggers another implicit synchronization operation to all external resources for which mappings are configured.

If the synchronization operation fails again, the compensate.js script is triggered a second time. This time, however, the script recognizes that the change was originally called as a result of a compensation and aborts. OpenIDM logs warning messages related to the sync action (notifyCreate, notifyUpdate, notifyDelete), along with the error that caused the sync failure.

If failure compensation is not configured, any issues with connections to an external resource can result in out-of-sync data stores, as discussed in the earlier Human Resources example.

With the compensate.js script, any such errors will result in each data store using the information it had before implicit synchronization started. OpenIDM stores that information, temporarily, in the oldObject variable.

In the Human Resources example, managers would see that the new employees do not appear in their database. OpenIDM administrators can then check the log files for errors, address them, and trigger implicit synchronization again with a new REST call.
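
Because implicit synchronization is triggered by changes to managed objects, a REST call that updates the affected object starts a fresh synchronization attempt. The following sketch patches a hypothetical managed user; the object ID and the patched field are illustrative:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header 'If-Match: "*"' \
 --request PATCH \
 --data '[{"operation" : "replace", "field" : "/description", "value" : "Resubmitted for synchronization"}]' \
 "https://localhost:8443/openidm/managed/user/9dce06d4-2fc1-4830-a92b-bd35c2f6bcbb"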

12.10. Synchronization Situations and Actions

During synchronization, OpenIDM categorizes objects according to their situation. Situations are characterized according to the following criteria:

  • Does the object exist on a source or target system?

  • Has OpenIDM registered a link between the source object and the target object?

  • Is the object considered valid, as assessed by the validSource and validTarget scripts?

OpenIDM then takes a specific action, depending on the situation.

You can define actions for particular situations in the policies section of a synchronization mapping, as shown in the following excerpt from the sync.json file of Sample 2b:

{
    "policies": [
        {
            "situation": "CONFIRMED",
            "action": "UPDATE"
        },
        {
            "situation": "FOUND",
            "action": "LINK"
        },
        {
            "situation": "ABSENT",
            "action": "CREATE"
        },
        {
            "situation": "AMBIGUOUS",
            "action": "IGNORE"
        },
        {
            "situation": "MISSING",
            "action": "IGNORE"
        },
        {
            "situation": "SOURCE_MISSING",
            "action": "DELETE"
        },
        {
            "situation": "UNQUALIFIED",
            "action": "IGNORE"
        },
        {
            "situation": "UNASSIGNED",
            "action": "IGNORE"
        }
    ]
}

If you do not define a policy for a particular situation, OpenIDM takes the default action for the situation. The default actions for each situation are listed in "Synchronization Situations".

The following sections describe the possible situations and their default corresponding actions. You can also view these situations and actions in the Admin UI by selecting Configure > Mappings. Click on a Mapping, then update the Policies on the Behaviors tab.

12.10.1. Synchronization Situations

OpenIDM performs reconciliation in two phases:

  1. Source reconciliation, where OpenIDM accounts for source objects and associated links based on the configured mapping.

  2. Target reconciliation, where OpenIDM iterates over the target objects that were not processed in the first phase.

During source reconciliation, OpenIDM builds three lists, assigning values to the objects to reconcile:

  1. All valid objects from the source.

    OpenIDM assigns valid source objects qualifies=1. Invalid objects, including those that were not found in the source system and those that were filtered out by the script specified in the validSource property, are assigned qualifies=0.

  2. All records from the appropriate links table.

    Objects that have a corresponding link in the links table of the repository are assigned link=1. Objects that do not have a corresponding link are assigned link=0.

  3. All valid objects on the target system.

    Objects that are found in the target system are assigned target=1. Objects that are not found in the target system are assigned target=0.

Based on the values assigned to objects during source reconciliation, OpenIDM assigns situations, listed here with default and appropriate alternative actions:

Situations detected during reconciliation and change events:
CONFIRMED (qualifies=1, link=1, target=1)

The source object qualifies for a target object, and is linked to an existing target object.

Default action: UPDATE the target object.

Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC

FOUND (qualifies=1, link=0, target=1)

The source object qualifies for a target object and is not linked to an existing target object. There is a single target object that correlates with this source object, according to the logic in the correlation.

Default action: UPDATE the target object.

Other valid actions: EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC

FOUND_ALREADY_LINKED (qualifies=1, link=1, target=1)

The source object qualifies for a target object and is not linked to an existing target object. There is a single target object that correlates with this source object, according to the logic in the correlation, but that target object is already linked to a different source object.

Default action: throw an EXCEPTION.

Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC

ABSENT (qualifies=1, link=0, target=0)

The source object qualifies for a target object, is not linked to an existing target object, and no correlated target object is found.

Default action: CREATE a target object.

Other valid actions: EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC

UNQUALIFIED (qualifies=0, link=0 or 1, target=1 or >1)

The source object is unqualified (by the "validSource" script). One or more target objects are found through the correlation logic.

Default action: DELETE the target object or objects.

Other valid actions: EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC

Situations detected during reconciliation and source object changes:
AMBIGUOUS (qualifies=1, link=0, target>1)

The source object qualifies for a target object, is not linked to an existing target object, but there is more than one correlated target object (that is, more than one possible match on the target system).

Default action: throw an EXCEPTION.

Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC

MISSING (qualifies=1, link=1, target=0)

The source object qualifies for a target object, and is linked to a target object, but the target object is missing.

Default action: throw an EXCEPTION.

Other valid actions: CREATE, UNLINK, IGNORE, REPORT, NOREPORT, ASYNC

Note

When a target object is deleted, the link from the target to the corresponding source object is not deleted automatically. This lets OpenIDM detect and report items that might have been removed without permission or might need review. If you need to remove the corresponding link when a target object is deleted, define a back-mapping so that OpenIDM can identify the deleted object as a source object, and remove the link.

SOURCE_IGNORED (qualifies=0, link=0, target=0)

The source object is unqualified (by the validSource script), no link is found, and no correlated target exists.

Default action: IGNORE the source object.

Other valid actions: EXCEPTION, REPORT, NOREPORT, ASYNC

Situations detected only during source object changes:
TARGET_IGNORED (qualifies=0, link=0 or 1, target=1)

The source object is unqualified (by the validSource script). One or more target objects are found through the correlation logic.

This situation differs from the UNQUALIFIED situation, based on the status of the link and the target. If there is a link, the target is not valid. If there is no link and exactly one target, that target is not valid.

Default action: IGNORE the target object until the next full reconciliation operation.

Other valid actions: DELETE, UNLINK, EXCEPTION, REPORT, NOREPORT, ASYNC

LINK_ONLY (qualifies=n/a, link=1, target=0)

The source may or may not be qualified. A link is found, but no target object is found.

Default action: throw an EXCEPTION.

Other valid actions: UNLINK, IGNORE, REPORT, NOREPORT, ASYNC

ALL_GONE (qualifies=n/a, link=0, cannot-correlate)

The source object has been removed. No link is found. Correlation is not possible, for one of the following reasons:

  • No previous source value can be found.

  • There is no correlation logic used.

  • A previous value was found, and correlation logic exists, but no corresponding target was found.

Default action: IGNORE the source object.

Other valid actions: EXCEPTION, REPORT, NOREPORT, ASYNC

During target reconciliation, OpenIDM assigns the following values as it iterates through the target objects that were not accounted for during the source reconciliation:

  1. Valid objects from the target.

    OpenIDM assigns valid target objects qualifies=1. Invalid objects, including those that are filtered out by the script specified in the validTarget property, are assigned qualifies=0.

  2. All records from the appropriate links table.

    Objects that have a corresponding link in the links table of the repository are assigned link=1. Objects that do not have a corresponding link are assigned link=0.

  3. All valid objects on the source system.

    Objects that are found in the source system are assigned source=1. Objects that are not found in the source system are assigned source=0.

Based on the values that are assigned to objects during the target reconciliation phase, OpenIDM assigns situations, listed here with their default actions:

Situations detected only during reconciliation:
TARGET_IGNORED (qualifies=0)

During target reconciliation, the target becomes unqualified by the validTarget script.

Default action: IGNORE the target object.

Other valid actions: DELETE, UNLINK, REPORT, NOREPORT, ASYNC

UNASSIGNED (qualifies=1, link=0)

A valid target object exists but does not have a link.

Default action: throw an EXCEPTION.

Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC

CONFIRMED (qualifies=1, link=1, source=1)

The target object qualifies, and a link to a source object exists.

Default action: UPDATE the target object.

Other valid actions: IGNORE, REPORT, NOREPORT

Situations detected during reconciliation and change events:
UNQUALIFIED (qualifies=0, link=1, source=1, but source does not qualify)

The target object is unqualified (by the validTarget script). There is a link to an existing source object, which is also unqualified.

Default action: DELETE the target object.

Other valid actions: UNLINK, EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC

SOURCE_MISSING (qualifies=1, link=1, source=0)

The target object qualifies and a link is found, but the source object is missing.

Default action: throw an EXCEPTION.

Other valid actions: DELETE, UNLINK, IGNORE, REPORT, NOREPORT, ASYNC

The following sections walk you through how OpenIDM assigns situations during source and target reconciliation.

12.10.2. Source Reconciliation

OpenIDM starts reconciliation and liveSync by reading a list of objects from the resource. For reconciliation, the list includes all objects that are available through the connector. For liveSync, the list contains only changed objects. OpenIDM can filter objects from the list by using the script specified in the validSource property, or the query specified in the sourceCondition property.

OpenIDM then iterates the list, checking each entry against the validSource and sourceCondition filters, and classifying objects according to their situations as described in "Synchronization Situations". OpenIDM uses the list of links for the current mapping to classify objects. Finally, OpenIDM executes the action that is configured for each situation.

The following table shows how OpenIDM assigns the appropriate situation during source reconciliation, depending on whether a valid source exists (Source Qualifies), whether a link exists in the repository (Link Exists), and the number of target objects found, based either on links or on the results of the correlation.

Resolving Source Reconciliation Situations

Source Qualifies?   Link Exists?   Target Objects Found [a]   Situation
No                  No             1                          SOURCE_MISSING
No                  No             > 1                        UNQUALIFIED
No                  Yes            0                          UNQUALIFIED
No                  Yes            1                          TARGET_IGNORED
No                  Yes            > 1                        UNQUALIFIED
Yes                 No             0                          ABSENT
Yes                 No             1                          FOUND
Yes                 No [b]         1                          FOUND_ALREADY_LINKED
Yes                 No             > 1                        AMBIGUOUS
Yes                 Yes            0                          MISSING
Yes                 Yes            1                          CONFIRMED

[a] If no link exists for the source object, then OpenIDM executes correlation logic. If no previous object is available, OpenIDM cannot correlate.

[b] A link exists from the target object but it is not for this specific source object.


12.10.3. Target Reconciliation

During source reconciliation, OpenIDM cannot detect situations where no source object exists, such as the UNASSIGNED situation. When no source object exists, OpenIDM detects the situation during the second reconciliation phase, target reconciliation. During target reconciliation, OpenIDM iterates all target objects that do not have a representation on the source, checking each object against the validTarget filter, determining the appropriate situation and executing the action configured for the situation.

The following table shows how OpenIDM assigns the appropriate situation during target reconciliation, depending on whether a valid target exists (Target Qualifies), whether a link with an appropriate type exists in the repository (Link Exists), whether a source object exists (Source Exists), and whether the source object qualifies (Source Qualifies). Not all situations assigned during source reconciliation are assigned during target reconciliation.

Resolving Target Reconciliation Situations

Target Qualifies?   Link Exists?   Source Exists?   Source Qualifies?   Situation
No                  N/A            N/A              N/A                 TARGET_IGNORED
Yes                 No             No               N/A                 UNASSIGNED
Yes                 Yes            Yes              Yes                 CONFIRMED
Yes                 Yes            Yes              No                  UNQUALIFIED
Yes                 Yes            No               N/A                 SOURCE_MISSING

12.10.4. Situations Specific to Implicit Synchronization and LiveSync

Certain situations occur only during implicit synchronization (when OpenIDM pushes changes made in the repository out to external systems) and liveSync (when OpenIDM polls external system change logs for changes and updates the repository).

The following table shows the situations that pertain only to implicit sync and liveSync, when records are deleted from the source or target resource.

Resolving Implicit Sync and LiveSync Delete Situations

Source Qualifies?   Link Exists?   Target Objects Found [a]   Situation
N/A                 Yes            0                          LINK_ONLY
N/A                 No             0                          ALL_GONE
Yes                 No             > 1                        AMBIGUOUS
No                  No             > 1                        UNQUALIFIED

[a] If no link exists for the source object, OpenIDM executes any included correlation logic. If a link exists, correlation does not apply.


12.10.5. Synchronization Actions

When a situation has been assigned to an object, OpenIDM takes the actions configured in the mapping. If no action is configured, OpenIDM takes the default action for the situation. OpenIDM supports the following actions:

CREATE

Create and link a target object.

UPDATE

Link and update a target object.

DELETE

Delete and unlink the target object.

LINK

Link the correlated target object.

UNLINK

Unlink the linked target object.

EXCEPTION

Flag the link situation as an exception.

Do not use this action for liveSync mappings.

IGNORE

Do not change the link or target object state.

REPORT

Do not perform any action but report what would happen if the default action were performed.

NOREPORT

Do not perform any action or generate any report.

ASYNC

An asynchronous process has been started, so do not perform any action or generate any report.

12.10.6. Launching a Script As an Action

In addition to the static synchronization actions described in the previous section, you can provide a script that is run in specific synchronization situations. The script can be either JavaScript or Groovy, and can be provided inline (with the "source" property) or referenced from a file (with the "file" property).

The following excerpt of a sample sync.json file specifies that an inline script should be invoked when a synchronization operation assesses an entry as ABSENT in the target system. The script checks whether the employeeType property of the corresponding source entry is contractor. If so, the entry is ignored. Otherwise, the entry is created on the target system:

{
    "situation" : "ABSENT",
    "action" : {
        "type" : "text/javascript",
        "globals" : { },
        "source" : "if (source.employeeType === "contractor") {action='IGNORE'}
                   else {action='CREATE'};action;"
    },
}  

The variables available to a script that is called as an action are source, target, linkQualifier, and recon (where recon.actionParam contains information about the current reconciliation operation). For more information about the variables available to scripts, see "Variables Available to Scripts".

The result obtained from evaluating this script must be a string whose value is one of the synchronization actions listed in "Synchronization Actions". This resulting action will be shown in the reconciliation log.
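The same action logic can also be kept in a separate file and referenced from the mapping. The following sketch is equivalent to the inline example above; the script file name is hypothetical:

{
    "situation" : "ABSENT",
    "action" : {
        "type" : "text/javascript",
        "file" : "script/ignoreContractors.js"
    }
}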

To launch a script as a synchronization action in the Admin UI:

  1. Select Configure > Mappings.

  2. Select the mapping that you want to change.

  3. On the Behaviors tab, click the pencil icon next to the situation whose action you want to change.

  4. On the Perform this Action tab, click Script, then enter the script that corresponds to the action.

12.10.7. Launching a Workflow As an Action

OpenIDM provides a default script (triggerWorkflowFromSync.js) that launches a predefined workflow when a synchronization operation assesses a particular situation. The mechanism for triggering this script is the same as for any other script. The script is provided in the openidm/bin/defaults/script/workflow directory. If you customize the script, copy it to the script directory of your project to ensure that your customizations are preserved during an upgrade.

The parameters for the workflow are passed as properties of the action parameter.

The following extract of a sample sync.json file specifies that, when a synchronization operation assesses an entry as ABSENT, the workflow named managedUserApproval is invoked:

{
    "situation" : "ABSENT",
    "action" : {
        "workflowName" : "managedUserApproval",
        "type" : "text/javascript",
        "file" : "workflow/triggerWorkflowFromSync.js"
    }
}  

To launch a workflow as a synchronization action in the Admin UI:

  1. Select Configure > Mappings.

  2. Select the mapping that you want to change.

  3. On the Behaviors tab, click the pencil icon next to the situation whose action you want to change.

  4. On the Perform this Action tab, click Workflow, then enter the details of the workflow you want to launch.

12.11. Asynchronous Reconciliation

Reconciliation can work in tandem with workflows to provide additional business logic to the reconciliation process. You can define scripts to determine the action that should be taken for a particular reconciliation situation. A reconciliation process can launch a workflow after it has assessed a situation, and then perform the reconciliation or some other action.

For example, you might want a reconciliation process to assess new user accounts that need to be created on a target resource. However, new user account creation might require some kind of approval from a manager before the accounts are actually created. The initial reconciliation process can assess the accounts that need to be created, launch a workflow to request management approval for those accounts, and then relaunch the reconciliation process to create the accounts, after the management approval has been received.

In this scenario, the defined script returns IGNORE for new accounts and the reconciliation engine does not continue processing the given object. The script then initiates an asynchronous process which calls back and completes the reconciliation process at a later stage.

A sample configuration for this scenario is available in openidm/samples/sample9, and described in "Workflow Sample - Demonstrating Asynchronous Reconciling Using a Workflow" in the Samples Guide.

Configuring asynchronous reconciliation using a workflow involves the following steps:

  1. Create the workflow definition file (.xml or .bar file) and place it in the openidm/workflow directory. For more information about creating workflows, see "Integrating Business Processes and Workflows".

  2. Modify the conf/sync.json file for the situation or situations that should call the workflow. Reference the workflow name in the configuration for that situation.

    For example, the following sync.json extract calls the managedUserApproval workflow if the situation is assessed as ABSENT:

    {
        "situation" : "ABSENT",
        "action" : {
            "workflowName" : "managedUserApproval",
            "type" : "text/javascript",
            "file" : "workflow/triggerWorkflowFromSync.js"
        }
    },  
  3. In the sample configuration, the workflow calls a second, explicit reconciliation process as a final step. This reconciliation process is called on the sync context path, with the performAction action (openidm.action('sync', 'performAction', params)).

You can also use this kind of explicit reconciliation to perform a specific action on a source or target record, regardless of the assessed situation.

You can call such an operation over the REST interface, specifying the source, and/or target IDs, the mapping, and the action to be taken. The action can be any one of the supported reconciliation actions: CREATE, UPDATE, DELETE, LINK, UNLINK, EXCEPTION, REPORT, NOREPORT, ASYNC, IGNORE.

The following sample command calls the DELETE action on user bjensen, whose _id in the LDAP directory is uid=bjensen,ou=People,dc=example,dc=com. The user is deleted in the target resource, in this case, the OpenIDM repository.

Note that the _id must be URL-encoded in the REST call:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/sync?_action=performAction&sourceId=uid%3Dbjensen%2Cou%3DPeople%2Cdc%3Dexample%2Cdc%3Dcom&mapping=
 systemLdapAccounts_ManagedUser&action=DELETE"
{}

The following example creates a link between a managed object and its corresponding system object. Such a call is useful in the context of manual data association, when correlation logic has linked an incorrect object, or when OpenIDM has been unable to determine the correct target object.

In this example, there are two separate target accounts (scarter.user and scarter.admin) that should be mapped to the managed object. This call creates a link to the user account and specifies a link qualifier that indicates the type of link that will be created:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/sync?_action=performAction&action=LINK
   &sourceId=4b39f74d-92c1-4346-9322-d86cb2d828a8&targetId=scarter.user
   &mapping=managedUser_systemXmlfileAccounts&linkQualifier=user"
{}

For more information about mapping to multiple accounts, see "Correlating Multiple Target Objects".

12.12. Configuring Case Sensitivity For Data Stores

By default, OpenIDM is case-sensitive, which means that case is taken into account when IDs are compared during reconciliation. For data stores that are case-insensitive, such as OpenDJ, IDs and links that are created by reconciliation may be stored with a different case from the way they are stored in the OpenIDM repository. This can cause problems during a reconciliation operation, because the links for these IDs might not match.

For such data stores, you can configure OpenIDM to ignore case during reconciliation operations. With case-sensitivity turned off in OpenIDM, comparisons are done without regard to case.

To specify case-insensitive data stores, set the sourceIdsCaseSensitive or targetIdsCaseSensitive property to false in the mapping for those links. For example, if the LDAP data store is case-insensitive, set the mapping from the LDAP store to the managed user repository as follows:

"mappings" : [
{
"name" : "systemLdapAccounts_managedUser",
"source" : "system/ldap/account",
"sourceIdsCaseSensitive" : false,
"target" : "managed/user",
"properties" : [
...

If a mapping inherits links by using the links property, you do not need to set case-sensitivity, because the mapping uses the setting of the referred links.

Be aware that, even if you configure OpenIDM to be case-insensitive when comparing links, the OpenICF provisioner is not necessarily case-insensitive when it requests data. For example, if a user entry is stored with the ID testuser and you make a request for https://localhost:8443/openidm/managed/TESTuser, most provisioners will filter out the match because of the difference in case, and will indicate that the record is not found. To prevent the provisioner from performing this secondary filtering, set the enableFilteredResultsHandler property to false in the provisioner configuration. For example:

"resultsHandlerConfig" :
{
    "enableFilteredResultsHandler":false,
}, 

Caution

Do not disable the filtered results handler for the CSV file connector. The CSV file connector does not perform filtering, so if you disable the filtered results handler for this connector, the full CSV file is returned for every request.

12.13. Optimizing Reconciliation Performance

By default, reconciliation is configured to function optimally, with regard to performance. Some of these optimizations might, however, be unsuitable for your environment. The following sections describe the default optimizations and how they can be configured.

12.13.1. Correlating Empty Target Sets

To optimize performance, reconciliation does not correlate source objects to target objects if the set of target objects is empty when the correlation is started. This considerably speeds up the process the first time reconciliation is run. You can change this behavior for a specific mapping by adding the correlateEmptyTargetSet property to the mapping definition and setting it to true. For example:

{
    "mappings": [
        {
            "name"                     : "systemMyLDAPAccounts_managedUser",
            "source"                   : "system/MyLDAP/account",
            "target"                   : "managed/user",
            "correlateEmptyTargetSet"  : true
        }
    ]
}

Be aware that this setting will have a performance impact on the reconciliation process.

12.13.3. Parallel Reconciliation Threads

By default, reconciliation is multithreaded; numerous threads are dedicated to the same reconciliation run. Multithreading generally improves reconciliation performance. The default number of threads for a single reconciliation run is 10 (plus the main reconciliation thread). Under normal circumstances, you should not need to change this number; however, the default might not be appropriate in the following situations:

  • The hardware has many cores and supports more concurrent threads. As a rule of thumb for performance tuning, start with setting the thread number to two times the number of cores.

  • The source or target is an external system with high latency or slow response times. Threads may then spend considerable time waiting for a response from the external system. Increasing the available threads enables the system to prepare or continue with additional objects.

To change the number of threads, set the taskThreads property in the conf/sync.json file, for example:

    "mappings" : [
        {
            "name" : "systemXmlfileAccounts_managedUser",
            "source" : "system/xmlfile/account",
            "target" : "managed/user",
            "taskThreads" : 20
            ...
         }
    ]
}

A zero value runs reconciliation as a serialized process, on the main reconciliation thread.

12.14. Correlating Existing Target Objects

When OpenIDM creates an object through synchronization, it creates a link between the source and target objects. OpenIDM then uses the link to determine the object's synchronization situation during later synchronization operations. For a list of synchronization situations, see "Synchronization Situations".

Initial, full synchronization operations can involve correlating many objects on both source and target systems. You can use correlation to return matching record IDs with either a correlation query or a correlation script.

With a correlation query, you can set up a query definition (_queryId, _queryFilter, _queryExpression), possibly with the help of a linkQualifier. OpenIDM executes that query to search through a target repository for record IDs.

With a correlation script, you return a list of target record IDs. This script makes use of the source object, and possibly the value of a linkQualifier, to find those matching record IDs. No restriction is imposed on how the script finds these ID values, but be aware that such scripts can be relatively complex.

To configure correlation queries and correlation scripts from the Admin UI, select Configure > Mappings, and select the mapping that you want to change. On the Association tab, expand Association Rules, then select Correlation Queries or Correlation Script from the list.

See the following sections for guidance on writing a correlation query and a correlation script, in the UI and in the sync.json configuration file.

12.14.1. Configuring Correlation Queries

OpenIDM processes a correlation query by constructing a query map. The content of the query is generated dynamically, using values from the source object. For each source object, a new query is sent to the target system, using (possibly transformed) values from the source object for its execution.

Correlation queries are defined as part of the mapping objects that are configured in the conf/sync.json file. They are run against target resources, either managed or system objects, depending on the mapping. Correlation queries on system objects access the connector, which executes the query on the external resource.

The preferred syntax for a correlation query is a filtered query, using the _queryFilter keyword. Filtered queries should work in the same way on any backend, whereas other query types are generally specific to the targeted backend. Predefined queries (using _queryId) and native queries (using _queryExpression) can also be used for correlation queries. Note, however, that system objects do not support predefined queries, other than query-all-ids, which serves no purpose in a correlation query.

To configure a correlation query, define a script whose source returns a query that uses the _queryFilter, _queryId, or _queryExpression keyword. For example:

  • For a _queryId, the value is the named query. The query map must also include any named parameters that the query expects.

    {'_queryId' : 'for-userName', 'uid' : source.name}
  • For a _queryFilter, the value is the abstract filter string:

    { "_queryFilter" : "uid eq \"" + source.userName + "\"" }
  • For a _queryExpression, the value is the system-specific query expression, such as raw SQL.

    {'_queryExpression': 'select * from managed_user where givenName = \"' + source.firstname + '\"' }

A sample correlation query definition inside a mapping object follows:

{
    "mappings" : [
        {
            "name" : "managedUser_systemHrdb",
            "source" : "managed/user",
            "target" : "system/scriptedsql/account",
            "links" : "systemHrdb_managedUser",
            "correlationQuery" : {
                "type" : "text/javascript",
                "source" : "var qry = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; qry"
            },

12.14.1.1. Using Filtered Queries to Correlate Objects

For filtered queries, the script that is defined or referenced in the correlationQuery property must return an object with the following elements:

  • The element that is being compared on the target object, for example, uid.

    The element on the target object is not necessarily a single attribute. Your query filter can be simple or complex; valid query filters range from a single operator to an entire boolean expression tree.

    If the target object is a system object, this attribute must be referred to by its OpenIDM name rather than its OpenICF nativeName. For example, given the following provisioner configuration excerpt, the name to use in the correlation query would be uid and not __NAME__:

    "uid" : {
        "type" : "string",
        "nativeName" : "__NAME__",
        "required" : true,
        "nativeType" : "string"
    }
    ...    
  • The value to search for in the query.

    This value is generally based on one or more values from the source object. However, it does not have to match the value of a single source object property. You can define how your script uses the values from the source object to find a matching record in the target system.

    You might use a transformation of a source object property, such as toUpperCase(). You can concatenate that output with other strings or properties. You can also use this value to call an external REST endpoint, and redirect the response to the final "value" portion of the query.

The following query finds objects on the target whose uid is the same as the userName of a source object:

"correlationQuery" : {
     "type" : "text/javascript",
     "source" : "var qry = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; qry"
},   


The query can return zero or more objects. The situation that OpenIDM assigns to the source object depends on the number of target objects that are returned. For more information, see "Synchronization Situations".

12.14.1.2. Using Predefined Queries to Correlate Objects

If you configure correlation queries with predefined queries, they must be defined in the database table configuration file for the repository, either conf/repo.jdbc.json or conf/repo.orientdb.json. In addition, these predefined queries must also be referenced in the mapping file: conf/sync.json.

The following example shows a query defined in the OrientDB repository configuration (conf/repo.orientdb.json) that can be used as the basis for a correlation query:

"for-userName" : "SELECT * FROM ${unquoted:_resource} WHERE userName = ${uid}
      SKIP ${unquoted:_pagedResultsOffset} LIMIT ${unquoted:_pageSize}"

By default, a ${value} token replacement is assumed to be a quoted string. If the value is not a quoted string, use the unquoted: prefix, as shown above.

You would call this query in the mapping (sync.json) file as follows:

{
    "correlationQuery": {
      "type": "text/javascript",
      "source":
        "var qry = {'_queryId' : 'for-userName', 'uid' : source.name}; qry;"
    }
  }  

In this correlation query, the _queryId property value (for-userName) matches the name of the query specified in openidm/conf/repo.orientdb.json. The source.name value replaces ${uid} in the query. OpenIDM replaces ${unquoted:_resource} in the query with the name of the table that holds managed objects.

12.14.1.3. Using the Expression Builder to Create Correlation Queries

OpenIDM 4 provides a declarative correlation option, the expression builder, that makes it easier to configure correlation queries.

The easiest way to use the expression builder to create a correlation query is through the Admin UI:

  1. Select Configure > Mappings and select the mapping for which you want to configure a correlation query.

  2. On the Association tab, expand the Association Rules item and select Correlation Queries from the list.

  3. Click Add Correlation query.

  4. In the Correlation Query window, select a link qualifier. (For more information, see "Using Link Qualifier Conditions").

  5. Now create your query expression.

    The following example builds a correlation query for a mapping from system/ldap/accounts to managed/user objects. The query essentially states that, for a match to exist between the source (LDAP) object and the target (managed) object, both the givenName and the telephoneNumber of those objects must match.
  6. Click Submit to exit the Correlation Query pop-up window.

    When you have created all the correlation queries that you need, click Save.

The correlation query created in the previous steps displays as follows in the mapping configuration (sync.json):

"correlationQuery" : {
     "linkQualifier" : "user",
     "expressionTree" : {
         "all" : [
             "givenName",
             "telephoneNumber"
         ]
     },
     "mapping" : "systemLdapAccounts_managedUser",
     "type" : "text/javascript",
     "file" : "ui/correlateTreeToQueryFilter.js"
},   

You can find the logic in the expression builder in the following script: openidm/bin/defaults/script/ui/correlateTreeToQueryFilter.js. This script converts the expression into the required query filter.
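
An "all" expression tree such as the one above is converted into an "and" query filter across the listed properties. Conceptually, for a source user with given name Barbara and telephone number 1234567890 (hypothetical values), the generated filter would resemble the following:

givenName eq "Barbara" and telephoneNumber eq "1234567890"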

12.14.2. Correlating Multiple Target Objects

To correlate a single source entry with multiple target entries, you indicate how the source entry should be linked to the target entries, by providing correlation logic appropriate for each link qualifier, typically in the sync.json file in your project-dir/conf directory.

When complete, you will have created a separate correlation query for each mapping from a single source object to a potential target object, differentiating each query by its link qualifier.

When correlating multiple target objects, your sync.json file includes two or more link qualifiers for the mapping, as in the following excerpt:

{
  "mappings" : [
    {
      "name" : "managedUser_systemLdapAccounts",
      "source" : "managed/user",
      "target" : "system/ldap/account",
      "linkQualifiers" : [
        "role1",
        "role2"
      ],
...   

These mappings work for users who belong to one or both roles. For example, an insurance agent may also be a customer of the same insurance company. The discussion that follows is generic; for a specific use case, see "The Multi-Account Linking Sample" in the Samples Guide.

You can then set up a correlation query for each link qualifier, as follows. Each query uses an expressionTree to set the conditions for a match between the source and target objects. In this example, the condition is a match on the dn (distinguished name).

"correlationQuery" : [
  {
    "linkQualifier" : "role1",
    "expressionTree" : {
      "all" : [
        "dn"
      ]
    },
    "mapping" : "managedUser_systemLdapAccounts",
    "type" : "text/javascript",
    "file" : "ui/correlateTreeToQueryFilter.js"
  },
  {
    "linkQualifier" : "role2",
    "expressionTree" : {
      "all" : [
        "dn"
      ]
    },
    "mapping" : "managedUser_systemLdapAccounts",
    "type" : "text/javascript",
    "file" : "ui/correlateTreeToQueryFilter.js"
  }
], 

You can also build correlation queries through the Admin UI. For more information, see "Using the Expression Builder to Create Correlation Queries".

You will need a validSource script that meets two requirements:

  • It must determine whether a user has one or more roles.

  • It must ensure that OpenIDM examines the source only for the specified role.

"validSource" : {
    "type" : "text/javascript",
    "globals" : { },
    "source" : "var res = false;\nvar i=0;\n\nwhile
        (!res && i < source.effectiveRoles.length) {\n
        var roleId = source.effectiveRoles[i];\n
            if (roleId != null && roleId.indexOf(\"/\") != -1) {\n
                var roleInfo = openidm.read(roleId);\n
                    res = (((roleInfo.properties.name === 'RoleName1')\n
                        &&(linkQualifier ==='role1'))\n
                        || ((roleInfo.properties.name === 'RoleName2')\n
                        &&(linkQualifier ==='role2')));\n
                    }\n
                i++;\n}\n\nres"
        }
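Unescaped, the source property of that script corresponds to the following JavaScript logic:

var res = false;
var i = 0;

// Loop over the user's effective roles until a match is found.
while (!res && i < source.effectiveRoles.length) {
    var roleId = source.effectiveRoles[i];
    if (roleId != null && roleId.indexOf("/") != -1) {
        // Read the role object and check whether its name corresponds
        // to the link qualifier currently being evaluated.
        var roleInfo = openidm.read(roleId);
        res = (((roleInfo.properties.name === 'RoleName1')
            && (linkQualifier === 'role1'))
            || ((roleInfo.properties.name === 'RoleName2')
            && (linkQualifier === 'role2')));
    }
    i++;
}

res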

12.14.3. Correlation Scripts

An alternative to correlation queries is a correlation script. You can configure a correlation script as part of a mapping in the sync.json file.

In the following example excerpt, the correlateScript.js script is used to return IDs from the target repository:

{
    "mappings" : [
        {
            "name" : "managedUser_systemLdapAccounts",
            "source" : "managed/user",
            "target" : "system/ldap/account",
            "correlationScript" : {
                "type" : "text/javascript",
                "file" : "script/correlateScript.js"
            },
            ...
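A correlation script typically builds a query, based on values in the source object (and, if you use link qualifiers, on the linkQualifier variable), that is run against the target resource to find the matching entry. The following sketch is illustrative only: the attribute names are hypothetical, and it assumes that the script returns a query with a _queryFilter, as in the multi-account linking sample:

(function () {
    var query;
    switch (linkQualifier) {
        case "role1":
            // Hypothetical: correlate on the user's mail attribute
            query = { "_queryFilter": 'mail eq "' + source.mail + '"' };
            break;
        case "role2":
            // Hypothetical: correlate on an employee number
            query = { "_queryFilter": 'employeeNumber eq "' + source.employeeNumber + '"' };
            break;
    }
    return query;
}());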

To configure a correlation script in the Admin UI, follow these steps:

  1. Select Configure > Mappings and select the mapping for which you want to configure the correlation script.

  2. On the Association tab, expand the Association Rules item and select Correlation Script from the list.

    Admin UI mapping screen showing correlation script

  3. Select a script type (either JavaScript or Groovy) and either enter the script source in the Inline Script box, or specify the path to a file that contains the script.

    To create a correlation script, use the details from the source object to find the matching record in the target system. If you are using link qualifiers to match a single source record to multiple target records, you must also use the value of the linkQualifier variable within your correlation script to find the target ID that applies for that qualifier.

  4. Click Save to save the script as part of the mapping.

12.15. Scheduling Synchronization

You can schedule synchronization operations, such as liveSync and reconciliation, using cron-like syntax.

This section describes scheduling specifically for reconciliation and liveSync. You can use OpenIDM's scheduler service to schedule any other event by supplying a link to a script file, in which that event is defined. For information about scheduling other events, see "Scheduling Tasks and Events".

12.15.1. Configuring Scheduled Synchronization

Each scheduled reconciliation and liveSync task requires a schedule configuration file in your project's conf directory. By convention, schedule configuration files are named schedule-schedule-name.json, where schedule-name is a logical name for the scheduled synchronization operation, such as reconcile_systemXmlAccounts_managedUser.

Schedule configuration files have the following format:

{
 "enabled"       : true,
 "persisted"     : false,
 "type"          : "cron",
 "startTime"     : "(optional) time",
 "endTime"       : "(optional) time",
 "schedule"      : "cron expression",
 "misfirePolicy" : "optional, string",
 "timeZone"      : "(optional) time zone",
 "invokeService" : "service identifier",
 "invokeContext" : "service specific context info"
}  

These properties are specific to the scheduler service, and are explained in "Scheduling Tasks and Events".

To schedule a reconciliation or liveSync task, set the invokeService property to either "sync" (for reconciliation) or "provisioner" (for liveSync).

The value of the invokeContext property depends on the type of scheduled event. For reconciliation, the properties are set as follows:

{
    "invokeService": "sync",
    "invokeContext": {
        "action": "reconcile",
        "mapping": "systemLdapAccount_managedUser"
    }
}  

The mapping is either referenced by its name in the conf/sync.json file, or defined inline by using the mapping property, as shown in the example in "Specifying the Mapping as Part of the Schedule".

For liveSync, the properties are set as follows:

{
    "invokeService": "provisioner",
    "invokeContext": {
        "action": "liveSync",
        "source": "system/OpenDJ/__ACCOUNT__"
    }
}

The source property follows OpenIDM's convention for a pointer to an external resource object and takes the form system/resource-name/object-type.

12.15.2. Specifying the Mapping as Part of the Schedule

Mappings for synchronization operations are usually stored in your project's sync.json file. You can, however, provide the mapping for a scheduled synchronization operation by including it as part of the invokeContext of the schedule configuration, as shown in the following example:

{
    "enabled": true,
    "type": "cron",
    "schedule": "0 08 16 * * ?",
    "invokeService": "sync",
    "invokeContext": {
        "action": "reconcile",
        "mapping": {
            "name": "CSV_XML",
            "source": "system/Ldap/account",
            "target": "managed/user",
            "properties": [
                {
                    "source": "firstname",
                    "target": "firstname"
                },
                ...
            ],
            "policies": [...]
        }
    }
}  

Chapter 13. Scheduling Tasks and Events

The OpenIDM scheduler enables you to schedule reconciliation and synchronization tasks, trigger scripts, collect and run reports, trigger workflows, and perform custom logging.

OpenIDM supports cron-like syntax to schedule events and tasks, based on expressions supported by the Quartz Scheduler (bundled with OpenIDM).

If you use configuration files to schedule tasks and events, you must place the schedule files in the openidm/conf directory. By convention, OpenIDM uses file names of the form schedule-schedule-name.json, where schedule-name is a logical name for the scheduled operation, for example, schedule-reconcile_systemXmlAccounts_managedUser.json. There are several example schedule configuration files in the openidm/samples/schedules directory.

You can configure OpenIDM to pick up changes to scheduled tasks and events dynamically, during initialization and also at runtime. For more information, see "Changing the Default Configuration".

In addition to the fine-grained scheduling facility, you can perform a scheduled batch scan for a specified date in OpenIDM data, and then automatically run a task when this date is reached. For more information, see "Scanning Data to Trigger Tasks".

13.1. Scheduler Configuration

Schedules are configured through JSON objects. The schedule configuration involves three files:

  • The boot.properties file, where you can enable persistent schedules.

  • The scheduler.json file, which configures the overall scheduler service.

  • One schedule-schedule-name.json file for each configured schedule.

In the boot properties configuration file (project-dir/conf/boot/boot.properties), the instance type is standalone and persistent schedules are enabled by default:

# valid instance types for node include standalone, clustered-first, and clustered-additional
openidm.instance.type=standalone

# enables the execution of persistent schedulers
openidm.scheduler.execute.persistent.schedules=true

The scheduler service configuration file (project-dir/conf/scheduler.json) governs the configuration for a specific scheduler instance, and has the following format:

{
    "threadPool" : {
        "threadCount" : "10"
    },
    "scheduler" : {
        "executePersistentSchedules" : "&{openidm.scheduler.execute.persistent.schedules}"
    }
} 

The properties in the scheduler.json file relate to the configuration of the Quartz Scheduler:

  • threadCount specifies the maximum number of threads that are available for running scheduled tasks concurrently.

  • executePersistentSchedules allows you to disable persistent schedules for a specific node. If this parameter is set to false, the Scheduler Service will support the management of persistent schedules (CRUD operations) but it will not run any persistent schedules. The value of this property can be a string or boolean and is true by default.

    Note that changing the value of the openidm.scheduler.execute.persistent.schedules property in the boot.properties file changes the scheduler that manages scheduled tasks on that node. Because the persistent and in-memory schedulers are managed separately, a situation can arise where two separate schedules have the same schedule name.

  • advancedProperties (optional) enables you to configure additional properties for the Quartz Scheduler.
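As an illustration, the following scheduler.json sketch sets one standard Quartz property. The property name is a documented Quartz setting, but the placement of advancedProperties shown here is an assumption; see the reference below for the authoritative format:

{
    "threadPool" : {
        "threadCount" : "10"
    },
    "scheduler" : {
        "executePersistentSchedules" : "&{openidm.scheduler.execute.persistent.schedules}"
    },
    "advancedProperties" : {
        "org.quartz.scheduler.instanceName" : "OpenIDMScheduler"
    }
}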

Note

In clustered environments, the scheduler service obtains an instanceID, and checkin and timeout settings from the cluster management service (defined in the project-dir/conf/cluster.json file).

For details of all the configurable properties for the Quartz Scheduler, see the Quartz Scheduler Configuration Reference.

Each schedule configuration file (project-dir/conf/schedule-schedule-name.json) has the following format:

{
 "enabled"             : true,
 "persisted"           : false,
 "concurrentExecution" : false,
 "type"                : "cron",
 "startTime"           : "(optional) time",
 "endTime"             : "(optional) time",
 "schedule"            : "cron expression",
 "misfirePolicy"       : "optional, string",
 "timeZone"            : "(optional) time zone",
 "invokeService"       : "service identifier",
 "invokeContext"       : "service specific context info",
 "invokeLogLevel"      : "(optional) level"
}

The schedule configuration properties are defined as follows:

enabled

Set to true to enable the schedule. When this property is false, OpenIDM considers the schedule configuration dormant, and does not allow it to be triggered or launched.

If you want to retain a schedule configuration, but do not want it used, set enabled to false for task and event schedulers, instead of changing the configuration or cron expressions.

persisted (optional)

Specifies whether the schedule state should be persisted or stored in RAM. Boolean (true or false), false by default.

In a clustered environment, this property must be set to true to have the schedule fire only once across the cluster. For more information, see "Configuring Persistent Schedules".

concurrentExecution

Specifies whether multiple instances of the same schedule can run concurrently. Boolean (true or false); false by default, which prevents a new scheduled task from being launched before the previously launched instance of the same task has completed. For example, under normal circumstances you would want a LiveSync operation to complete before the same operation is launched again. To enable multiple instances of the same schedule to run concurrently, set this parameter to true. The behavior of missed scheduled tasks is governed by the misfirePolicy.

type

Currently OpenIDM supports only cron.

startTime (optional)

Used to start the schedule at some time in the future. If this parameter is omitted, empty, or set to a time in the past, the task or event is scheduled to start immediately.

Use ISO 8601 format to specify times and dates (YYYY-MM-DDThh:mm:ss). For example, "startTime" : "2016-05-20T08:00:00".

endTime (optional)

Used to specify a time in the future at which the schedule should end, in the same ISO 8601 format.

schedule

Takes cron expression syntax. For more information, see the CronTrigger Tutorial and Lesson 6: CronTrigger.
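A Quartz cron expression has six required fields (seconds, minutes, hours, day of month, month, and day of week) and an optional seventh field for the year. For example, the following expression, used in "Schedule Examples" later in this chapter, fires at zero and thirty minutes past every hour:

"schedule" : "0 0/30 * * * ?"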

misfirePolicy

For persistent schedules, this optional parameter specifies the behavior if the scheduled task is missed, for some reason. Possible values are as follows:

  • fireAndProceed. The first run of a missed schedule is immediately launched when the server is back online. Subsequent runs are discarded. After this, the normal schedule is resumed.

  • doNothing. All missed schedules are discarded and the normal schedule is resumed when the server is back online.

timeZone (optional)

If not set, OpenIDM uses the system time zone.

invokeService

Defines the type of scheduled event or action. The value of this parameter can be one of the following:

  • sync for reconciliation

  • provisioner for LiveSync

  • script to call some other scheduled operation defined in a script

invokeContext

Specifies contextual information, depending on the type of scheduled event (the value of the invokeService parameter).

The following example invokes reconciliation:

{
    "invokeService": "sync",
    "invokeContext": {
        "action": "reconcile",
        "mapping": "systemLdapAccount_managedUser"
    }
}

For a scheduled reconciliation task, you can define the mapping in one of two ways:

  • Reference a mapping by its name in sync.json, as shown in the previous example. The mapping must exist in your project's conf/sync.json file.

  • Add the mapping definition inline by using the mapping property, as shown in "Specifying the Mapping as Part of the Schedule".

The following example invokes a LiveSync operation:

{
    "invokeService": "provisioner",
    "invokeContext": {
        "action": "liveSync",
        "source": "system/OpenDJ/__ACCOUNT__"
    }
}

For scheduled LiveSync tasks, the source property follows OpenIDM's convention for a pointer to an external resource object and takes the form system/resource-name/object-type.

The following example invokes a script, which prints the string Hello World to the OpenIDM log (/openidm/logs/openidm0.log.X).

{
    "invokeService": "script",
    "invokeContext": {
        "script": {
            "type": "text/javascript",
            "source": "console.log('Hello World');"
        }
    }
}

Note that these are sample configurations only. Your own schedule configuration will differ according to your specific requirements.

invokeLogLevel (optional)

Specifies the level at which the invocation will be logged. Particularly for schedules that run very frequently, such as LiveSync, the scheduled task can generate significant output to the log file, and you should adjust the log level accordingly. The default schedule log level is info. The value can be set to any one of the SLF4J log levels:

  • trace

  • debug

  • info

  • warn

  • error

  • fatal

13.2. Configuring Persistent Schedules

By default, scheduling information, such as schedule state and details of the schedule run, is stored in RAM. This means that such information is lost when OpenIDM is restarted. The schedule configuration itself (defined in your project's conf/schedule-schedule-name.json file) is not lost when OpenIDM is shut down, and normal scheduling continues when the server is restarted. However, no record remains of schedule runs that were missed while the server was unavailable.

You can configure schedules to be persistent, which means that the scheduling information is stored in the internal repository rather than in RAM. With persistent schedules, scheduling information is retained when OpenIDM is shut down. Any previously scheduled jobs can be rescheduled automatically when OpenIDM is restarted.

Persistent schedules also enable you to manage scheduling across a cluster (multiple OpenIDM instances). When scheduling is persistent, a particular schedule will be launched only once across the cluster, rather than once on every OpenIDM instance. For example, if your deployment includes a cluster of OpenIDM nodes for high availability, you can use persistent scheduling to start a reconciliation operation on only one node in the cluster, instead of starting several competing reconciliation operations on each node.

To configure persistent schedules, set persisted to true in the schedule configuration file (schedule-schedule-name.json).

If OpenIDM is down when a scheduled task is set to occur, one or more runs of that schedule might be missed. To specify what action should be taken when schedules are missed, set the misfirePolicy in the schedule configuration file. The misfirePolicy determines what OpenIDM should do about missed scheduled tasks. Possible values are as follows:

  • fireAndProceed. The first run of a missed schedule is launched immediately when the server is back online. Subsequent missed runs are discarded. After this, the normal schedule is resumed.

  • doNothing. All missed schedules are discarded and the normal schedule is resumed when the server is back online.

13.3. Schedule Examples

The following example shows a schedule for reconciliation that is not enabled. When the schedule is enabled ("enabled" : true), reconciliation runs every 30 minutes, starting on the hour:

{
    "enabled": false,
    "persisted": false,
    "type": "cron",
    "schedule": "0 0/30 * * * ?",
    "invokeService": "sync",
    "invokeContext": {
        "action": "reconcile",
        "mapping": "systemLdapAccounts_managedUser"
    }
}

The following example shows a LiveSync schedule that is enabled and that runs every 15 seconds, starting at the beginning of the minute. The schedule is persisted, that is, stored in the internal repository rather than in memory. If one or more LiveSync runs are missed because OpenIDM is unavailable, the first run of the LiveSync operation is launched when the server is back online. Subsequent missed runs are discarded, and the normal schedule is then resumed:

{
    "enabled": true,
    "persisted": true,
    "misfirePolicy" : "fireAndProceed",
    "type": "cron",
    "schedule": "0/15 * * * * ?",
    "invokeService": "provisioner",
    "invokeContext": {
        "action": "liveSync",
        "source": "system/ldap/account"
    }
}

13.4. Managing Schedules Over REST

OpenIDM exposes the scheduler service under the /openidm/scheduler context path. The following examples show how schedules can be created, read, updated, and deleted, over REST, by using the scheduler service. The examples also show how to pause and resume scheduled tasks, when an OpenIDM instance is placed in maintenance mode. For information about placing OpenIDM in maintenance mode, see "Placing an OpenIDM Instance in Maintenance Mode" in the Installation Guide.

Note

When you configure schedules in this way, changes made to the schedules are not pushed back into the configuration service. Managing schedules by using the /openidm/scheduler context path essentially bypasses the configuration service and sends the request directly to the scheduler.

If you need to perform an operation on a schedule that was created by using the configuration service (by placing a schedule file in the conf/ directory), you must direct your request to the /openidm/config endpoint, and not to the /openidm/scheduler endpoint.
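For example, to read a schedule that was created from a file named conf/schedule-reconcile_systemXmlAccounts_managedUser.json, you would address the corresponding configuration object as follows (an illustrative request; the configuration object name is derived from the schedule file name):

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/config/schedule/reconcile_systemXmlAccounts_managedUser"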

13.4.1. Creating a Schedule

You can create a schedule with a PUT request, which allows you to specify the ID of the schedule, or with a POST request, in which case the server assigns an ID automatically.

The following example uses a PUT request to create a schedule that fires a script (script/testlog.js) every second. The schedule configuration is as described in "Scheduler Configuration":

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
    "enabled":true,
    "type":"cron",
    "schedule":"0/1 * * * * ?",
    "persisted":true,
    "misfirePolicy":"fireAndProceed",
    "invokeService":"script",
    "invokeContext": {
        "script": {
            "type":"text/javascript",
            "file":"script/testlog.js"
        }
    }
 }' \
 "https://localhost:8443/openidm/scheduler/testlog-schedule"
{
  "type": "cron",
  "invokeService": "script",
  "persisted": true,
  "_id": "testlog-schedule",
  "schedule": "0/1 * * * * ?",
  "misfirePolicy": "fireAndProceed",
  "enabled": true,
  "invokeContext": {
    "script": {
      "file": "script/testlog.js",
      "type": "text/javascript"
    }
  }
}

The following example uses a POST request to create an identical schedule to the one created in the previous example, but with a server-assigned ID:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
    "enabled":true,
    "type":"cron",
    "schedule":"0/1 * * * * ?",
    "persisted":true,
    "misfirePolicy":"fireAndProceed",
    "invokeService":"script",
    "invokeContext": {
        "script": {
            "type":"text/javascript",
            "file":"script/testlog.js"
        }
    }
 }' \
 "https://localhost:8443/openidm/scheduler?_action=create"
{
  "type": "cron",
  "invokeService": "script",
  "persisted": true,
  "_id": "d6d1b256-7e46-486e-af88-169b4b1ad57a",
  "schedule": "0/1 * * * * ?",
  "misfirePolicy": "fireAndProceed",
  "enabled": true,
  "invokeContext": {
    "script": {
      "file": "script/testlog.js",
      "type": "text/javascript"
    }
  }
}

The output includes the _id of the schedule, in this case "_id": "d6d1b256-7e46-486e-af88-169b4b1ad57a".

13.4.2. Obtaining the Details of a Schedule

The following example displays the details of the schedule created in the previous section. Specify the schedule ID in the URL:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/scheduler/d6d1b256-7e46-486e-af88-169b4b1ad57a"
{
  "_id": "d6d1b256-7e46-486e-af88-169b4b1ad57a",
  "schedule": "0/1 * * * * ?",
  "misfirePolicy": "fireAndProceed",
  "startTime": null,
  "invokeContext": {
    "script": {
      "file": "script/testlog.js",
      "type": "text/javascript"
    }
  },
  "enabled": true,
  "concurrentExecution": false,
  "persisted": true,
  "timeZone": null,
  "type": "cron",
  "invokeService": "org.forgerock.openidm.script",
  "endTime": null,
  "invokeLogLevel": "info"
}

13.4.3. Updating a Schedule

To update a schedule definition, use a PUT request and update all properties of the object. Note that PATCH requests are currently supported only for managed and system objects.

The following example disables the schedule created in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
    "enabled":false,
    "type":"cron",
    "schedule":"0/1 * * * * ?",
    "persisted":true,
    "misfirePolicy":"fireAndProceed",
    "invokeService":"script",
    "invokeContext": {
        "script": {
            "type":"text/javascript",
            "file":"script/testlog.js"
        }
    }
 }' \
 "https://localhost:8443/openidm/scheduler/d6d1b256-7e46-486e-af88-169b4b1ad57a"
   null

13.4.4. Listing Configured Schedules

To display a list of all configured schedules, query the openidm/scheduler context path as shown in the following example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/scheduler?_queryId=query-all-ids"
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "resultCount": 2,
  "result": [
    {
      "_id": "d6d1b256-7e46-486e-af88-169b4b1ad57a"
    },
    {
      "_id": "recon"
    }
  ]
}

13.4.5. Deleting a Schedule

To delete a configured schedule, send a DELETE request to the schedule ID. For example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "https://localhost:8443/openidm/scheduler/d6d1b256-7e46-486e-af88-169b4b1ad57a"
null

13.4.6. Obtaining a List of Running Scheduled Tasks

The following command returns a list of tasks that are currently executing. This list enables you to decide whether to wait for specific tasks to complete before you place an OpenIDM instance in maintenance mode.

Note that this list is accurate only at the moment the request was issued. The list can change at any time after the response is received.

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/scheduler?_action=listCurrentlyExecutingJobs"
[
    {
        "concurrentExecution": false,
        "enabled": true,
        "endTime": null,
        "invokeContext": {
            "script": {
                "file": "script/testlog.js",
                "type": "text/javascript"
            }
        },
        "invokeLogLevel": "info",
        "invokeService": "org.forgerock.openidm.script",
        "misfirePolicy": "doNothing",
        "persisted": false,
        "schedule": "0/10 * * * * ?",
        "startTime": null,
        "timeZone": null,
        "type": "cron"
    }
]

13.4.7. Pausing Scheduled Tasks

In preparation for placing an OpenIDM instance into maintenance mode, you can temporarily suspend all scheduled tasks. This action does not cancel or interrupt tasks that are already in progress; it simply prevents any scheduled tasks from being invoked during the suspension period.

The following command suspends all scheduled tasks and returns true if the tasks could be suspended successfully.

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/scheduler?_action=pauseJobs"
{
    "success": true
}

13.4.8. Resuming All Running Scheduled Tasks

When an update has been completed, and your instance is no longer in maintenance mode, you can resume scheduled tasks to start them up again. Any tasks that were missed during the downtime will follow their configured misfire policy to determine whether they should be reinvoked.

The following command resumes all scheduled tasks and returns true if the tasks could be resumed successfully.

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/scheduler?_action=resumeJobs"
{
    "success": true
}

13.5. Scanning Data to Trigger Tasks

In addition to the fine-grained scheduling facility described previously, OpenIDM provides a task scanning mechanism. The task scanner enables you to perform a batch scan on a specified property in OpenIDM, at a scheduled interval, and then to launch a task when the value of that property matches a specified value.

When the task scanner identifies a condition that should trigger the task, it can invoke a script created specifically to handle the task.

For example, the task scanner can scan all managed/user objects for a "sunset date" and can invoke a script that launches a "sunset task" on the user object when this date is reached.

13.5.1. Configuring the Task Scanner

The task scanner is essentially a scheduled task that queries a set of managed users. It is configured in the same way as a regular scheduled task, in a schedule configuration file named schedule-task-name.json, with the invokeService parameter set to taskscanner. The invokeContext parameter defines the details of the scan, and the task that should be launched when the specified condition is triggered.

The following example defines a scheduled scanning task that triggers a sunset script. The schedule configuration file is provided in openidm/samples/taskscanner/conf/schedule-taskscan_sunset.json. To use this sample file, copy it to the openidm/conf directory.

{
    "enabled" : true,
    "type" : "cron",
    "schedule" : "0 0 * * * ?",
    "concurrentExecution" : false,
    "invokeService" : "taskscanner",
    "invokeContext" : {
        "waitForCompletion" : false,
        "maxRecords" : 2000,
        "numberOfThreads" : 5,
        "scan" : {
            "_queryId" : "scan-tasks",
            "object" : "managed/user",
            "property" : "sunset/date",
            "condition" : {
                "before" : "${Time.now}"
            },
            "taskState" : {
                "started" : "sunset/task-started",
                "completed" : "sunset/task-completed"
            },
            "recovery" : {
                "timeout" : "10m"
            }
        },
        "task" : {
            "script" : {
                "type" : "text/javascript",
                "file" : "script/sunset.js"
            }
        }
    }
}

The schedule configuration calls a script (script/sunset.js). To test the sample, copy this script file from openidm/samples/taskscanner/script/sunset.js to the openidm/script directory.

The invokeContext parameter takes the following properties:

waitForCompletion (optional)

This property specifies whether the task should be performed synchronously. Tasks are performed asynchronously by default (with waitForCompletion set to false). A task ID (such as {"_id":"354ec41f-c781-4b61-85ac-93c28c180e46"}) is returned immediately. If this property is set to true, tasks are performed synchronously and the ID is not returned until all tasks have completed.

maxRecords (optional)

The maximum number of records that can be processed. This property is not set by default, so the number of records is unlimited. If a maximum number of records is specified, that number is spread evenly over the number of threads. For example, with maxRecords set to 2000 and numberOfThreads set to 5, as in the previous example, each thread processes up to 400 records.

numberOfThreads (optional)

By default, the task scanner runs in a multi-threaded manner, that is, numerous threads are dedicated to the same scanning task run. Multi-threading generally improves the performance of the task scanner. The default number of threads for a single scanning task is ten. To change this default, set the numberOfThreads property.

scan

Defines the details of the scan. The following properties are defined:

_queryId

Specifies the predefined query that is performed to identify the entries for which this task should be run.

The query that is referenced here must be defined in the database table configuration file (conf/repo.orientdb.json or conf/repo.jdbc.json). A sample query for a scanned task (scan-tasks) is defined in the JDBC repository configuration file as follows:

"scan-tasks" : "SELECT fullobject FROM ${_dbSchema}.${_mainTable}
 obj INNER JOIN ${_dbSchema}.${_propTable}
 prop ON obj.id = prop.${_mainTable}_id
 LEFT OUTER JOIN ${_dbSchema}.${_propTable}
 complete ON obj.id = complete.${_mainTable}_id
 AND complete.propkey=${taskState.completed}
 INNER JOIN ${_dbSchema}.objecttypes objtype
 ON objtype.id = obj.objecttypes_id
 WHERE ( prop.propkey=${property} AND prop.propvalue < ${condition.before}
 AND objtype.objecttype = ${_resource} )
 AND ( complete.propvalue is NULL )",

Note that this query identifies records for which the value of the specified property is smaller than the condition. The sample query supports only time-based conditions, with the time specified in ISO 8601 format (Zulu time). You can write any query to target the records that you require.

object

Defines the managed object type against which the query should be performed, as defined in the managed.json file.

property

Defines the property of the managed object against which the query is performed. In the previous example, "property" : "sunset/date" is a JSON pointer to a nested attribute of the object, that is, a date field within a sunset object.

If you are using a JDBC repository, with a generic mapping, you must explicitly set this property as searchable so that it can be queried by the task scanner. For more information, see "Using Generic Mappings".
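Concretely, the scanner reads a nested attribute on each queried object, as in the following illustrative managed user excerpt (the user name and date are hypothetical):

{
    "userName" : "bjensen",
    "sunset" : {
        "date" : "2016-05-20T00:00:00Z"
    }
}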

condition (optional)

Indicates the conditions that must be matched for the defined property.

In the previous example, the scanner scans for users whose sunset/date is prior to the current timestamp (at the time the script is run).

You can use these fields to define any condition. For example, if you wanted to limit the scanned objects to a specified location, say, London, you could formulate a query to compare against object locations and then set the condition to be:

"condition" : {
    "location" : "London"
},     

For time-based conditions, the condition property supports macro syntax, based on the Time.now object (which fetches the current time). You can specify any date/time in relation to the current time, using the + or - operator, and a duration modifier. For example: ${Time.now + 1d} would return all user objects whose sunset/date is the following day (current time plus one day). You must include space characters around the operator (+ or -). The duration modifier supports the following unit specifiers:

s - second
m - minute
h - hour
d - day
M - month
y - year

taskState

Indicates the names of the fields in which the start message and the completed message are stored. These fields are used to track the status of the task.

started specifies the field that stores the timestamp for when the task begins.
completed specifies the field that stores the timestamp for when the task completes its operation. The completed field is present as soon as the task has started, but its value is null until the task has completed.

recovery (optional)

Specifies a configurable timeout, after which the task scanner process ends. For clustered OpenIDM instances, there might be more than one task scanner running at a time. A task cannot be launched by two task scanners at the same time. When one task scanner "claims" a task, it indicates that the task has been started. That task is then unavailable to be claimed by another task scanner and remains unavailable until the end of the task is indicated. In the event that the first task scanner does not complete the task by the specified timeout, for whatever reason, a second task scanner can pick up the task.

task

Provides details of the task that is performed. Usually, the task is invoked by a script, whose details are defined in the script property:

  • type ‒ the type of script, either JavaScript or Groovy.

  • file ‒ the path to the script file. The script file takes at least two objects (in addition to the default objects that are provided to all OpenIDM scripts):

    • input ‒ the individual object that is retrieved from the query (in the example, this is the individual user object).

    • objectID ‒ a string that contains the full identifier of the object. The objectID is useful for performing updates with the script as it allows you to target the object directly. For example: openidm.update(objectID, input['_rev'], input);.

    A sample script file is provided in openidm/samples/taskscanner/script/sunset.js. To use this sample file, copy it to your project's script/ directory. The sample script marks all user objects that match the specified conditions as inactive. You can use this sample script to trigger a specific workflow, or any other task associated with the sunset process.
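The following sketch suggests the general shape of such a task script. It is a minimal illustration, not the actual sample script, and it assumes that the managed user schema includes an accountStatus property:

// Illustrative task script, invoked by the task scanner for each matching object.
// 'input' (the object retrieved by the query) and 'objectID' (its full identifier)
// are provided by the task scanner, as described above.
input['accountStatus'] = 'inactive';
openidm.update(objectID, input['_rev'], input);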

For more information about using scripts in OpenIDM, see "Scripting Reference".

13.5.2. Managing Scanning Tasks Over REST

You can trigger, cancel, and monitor scanning tasks over the REST interface, using the REST endpoint https://localhost:8443/openidm/taskscanner.

13.5.2.1. Triggering a Scanning Task

The following REST command runs a task named "taskscan_sunset". The task itself is defined in a file named conf/schedule-taskscan_sunset.json:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/taskscanner?_action=execute&name=schedule/taskscan_sunset"

By default, a scanning task ID is returned immediately when the task is initiated. Clients can make subsequent calls to the task scanner service, using this task ID to query its state and to call operations on it.

For example, the scanning task initiated previously would return something similar to the following, as soon as it was initiated:

{"_id":"edfaf59c-aad1-442a-adf6-3620b24f8385"}

To have the scanning task complete before the ID is returned, set the waitForCompletion property to true in the task definition file (schedule-taskscan_sunset.json). You can also set the property directly over the REST interface when the task is initiated. For example:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/taskscanner?_action=execute&name=schedule/taskscan_sunset&waitForCompletion=true"

13.5.2.2. Canceling a Scanning Task

You can cancel a scanning task by sending a REST call with the cancel action, specifying the task ID. For example, the following call cancels the scanning task initiated in the previous section:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/taskscanner/edfaf59c-aad1-442a-adf6-3620b24f8385?_action=cancel"
{
    "_id":"edfaf59c-aad1-442a-adf6-3620b24f8385",
    "action":"cancel",
    "status":"SUCCESS"
}

13.5.2.3. Listing Scanning Tasks

You can display a list of scanning tasks that have completed, and those that are in progress, by running a RESTful GET on the openidm/taskscanner context. The following example displays all scanning tasks:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/taskscanner"
{
  "tasks": [
    {
      "ended": 1352455546182,
      "started": 1352455546149,
      "progress": {
        "failures": 0,
        "successes": 2400,
        "total": 2400,
        "processed": 2400,
        "state": "COMPLETED"
      },
      "_id": "edfaf59c-aad1-442a-adf6-3620b24f8385"
    }
  ]
}

Each scanning task has the following properties:

ended

The time at which the scanning task ended.

started

The time at which the scanning task started.

progress

The progress of the scanning task, summarized in the following fields:

failures - the number of records that could not be processed
successes - the number of records processed successfully
total - the total number of records
processed - the number of records processed
state - the overall state of the task: INITIALIZED, ACTIVE, COMPLETED, CANCELLED, or ERROR

_id

The ID of the scanning task.

The number of processed tasks whose details are retained is governed by the openidm.taskscanner.maxcompletedruns property in the conf/system.properties file. By default, the last one hundred completed tasks are retained.
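For example, to retain the details of the last 200 completed tasks, you could set the following in conf/system.properties (the value shown is illustrative):

openidm.taskscanner.maxcompletedruns=200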

Chapter 14. Managing Passwords

OpenIDM provides password management features that help you enforce password policies, limit the number of passwords users must remember, and let users reset and change their passwords.

14.1. Enforcing Password Policy

A password policy is a set of rules defining what sequence of characters constitutes an acceptable password. Acceptable passwords generally are too complex for users or automated programs to generate or guess.

Password policies set requirements for password length, character sets that passwords must contain, dictionary words and other values that passwords must not contain. Password policies also require that users not reuse old passwords, and that users change their passwords on a regular basis.

OpenIDM enforces password policy rules as part of the general policy service. For more information about the policy service, see "Using Policies to Validate Data". The default password policy applies the following rules to passwords as they are created and updated:

  • A password property is required for any user object.

  • The value of a password cannot be empty.

  • The password must include at least one capital letter.

  • The password must include at least one number.

  • The minimum length of a password is 8 characters.

  • The password cannot contain the user name, given name, or family name.

You can remove these validation requirements, or include additional requirements, by configuring the policy for passwords. For more information, see "Configuring the Default Policy for Managed Objects".

The password validation mechanism can apply in many situations.

Password change and password reset

Password change involves changing a user or account password in accordance with password policy. Password reset involves setting a new user or account password on behalf of a user.

By default, OpenIDM controls password values as they are provisioned.

To change the default administrative user password, openidm-admin, see "Replace Default Security Settings".

Password recovery

Password recovery involves recovering a password or setting a new password when the password has been forgotten.

OpenIDM provides a self-service end user interface for password changes, password recovery, and password reset. For more information, see "Configuring User Self-Service".

Password comparisons with dictionary words

You can add dictionary lookups to prevent use of password values that match dictionary words.

Password history

You can add checks to prevent reuse of previous password values. For more information, see "Creating a Password History Policy".

Password expiration

You can configure OpenIDM to call a workflow that enables users to change passwords that are about to expire, or to reset passwords that have already expired.

14.1.1. Creating a Password History Policy

To create a password history policy, you need to include customized scripts as described in "Storing Multiple Passwords For Managed Users" in the Samples Guide. Copy these scripts to your project-dir/script directory.

You also need to modify the following configuration files:

  • Modify the sync.json file to include connections to the custom onCreate-onUpdate-sync.js script:

    "onCreate" : {
        "type" : "text/javascript",
        "file" : "script/onCreate-onUpdate-sync.js"
    },
    "onUpdate" : {
        "type" : "text/javascript",
        "file" : "script/onCreate-onUpdate-sync.js"
    }

    If you have existing onCreate and onUpdate code blocks, you may need to consolidate options, either in the applicable script file or in a source entry.

  • Modify the router.json file to include code blocks for the managed/user object and associated policy. These policies apply to the arbitrary ldapPassword parameter which you will also add to the managed.json file:

    {
      "pattern" : "managed/user.*",
      "onRequest" : {
        "type" : "text/javascript",
        "file" : "script/set-additional-passwords.js",
        "additionalPasswordFields" : [
          "ldapPassword"
        ]
      },
      "methods" : [
        "create",
        "update"
      ]
    },
    {
      "pattern" : "policy/managed/user.*",
      "onRequest" : {
        "type" : "text/javascript",
        "file" : "script/set-additional-passwords.js",
        "additionalPasswordFields" : [
          "ldapPassword"
        ]
      },
      "methods" : [
        "action"
      ]
    }
  • In the policy.json file, include the pwpolicy.js file from your project's script/ subdirectory, as additionalFiles:

    "type" : "text/javascript",
    "file" : "policy.js",
    "additionalFiles": [ "script/pwpolicy.js" ]
  • Now make the following changes to your project's managed.json file.

    • Find the "name" : "user", object code block, normally near the start of the file. Include the following code blocks for the onValidate, onCreate, and onUpdate scripts. The value for the storedFields and historyFields should match the additionalPasswordFields that you included in the router.json file.

      You may vary the value of historySize, depending on the number of recent passwords you want to record in the history for each user. A historySize of 2 means that users who change their passwords can't use their previous two passwords.

      "name" : "user",
      "onValidate" : {
          "type" : "groovy",
          "file" : "script/storeFields.groovy",
          "storedFields" : [
              "ldapPassword"
          ]
      },
      "onCreate" : {
          "type" : "text/javascript",
          "file" : "script/onCreate-user-custom.js",
          "historyFields" : [
              "ldapPassword"
          ],
          "historySize" : 2
      },
      "onUpdate" : {
          "type" : "text/javascript",
          "file" : "script/onUpdate-user-custom.js",
          "historyFields" : [
              "ldapPassword"
          ],
          "historySize" : 2
      }
    • In the same file, under properties, add the following code block for ldapPassword:

      "ldapPassword" : {
          "title" : "Password",
          "type" : "string",
          "viewable" : false,
          "searchable" : false,
          "minLength" : 8,
          "userEditable" : true,
          "secureHash" : {
              "algorithm" : "SHA-256"
          },
          "policies" : [
              {
                  "policyId" : "at-least-X-capitals",
                  "params" : {
                      "numCaps" : 2
                  }
              },
              {
                  "policyId" : "at-least-X-numbers",
                  "params" : {
                      "numNums" : 1
                  }
              },
              {
                  "policyId" : "cannot-contain-others",
                  "params" : {
                      "disallowedFields" : [
                          "userName",
                          "givenName",
                          "sn"
                      ]
                  }
              },
              {
                  "policyId" : "re-auth-required",
                  "params" : {
                      "exceptRoles" : [
                          "system",
                          "openidm-admin",
                          "openidm-reg",
                          "openidm-cert"
                      ]
                  }
              },
              {
                  "policyId" : "is-new",
                  "params" : {
                      "historyLength" : 2
                  }
              }
          ]
      }
    • Add the following fieldHistory code block, which maps field names to a list of historical values for the field.

      "fieldHistory" : {
          "title" : "Field History",
          "type" : "object",
          "viewable" : false,
          "searchable" : false,
          "minLength" : 8,
          "userEditable" : true,
          "scope" : "private"
      },

After your next reconciliation, the password policies that you just set up in OpenIDM should apply.

14.2. Storing Separate Passwords Per Linked Resource

OpenIDM supports storing multiple passwords in a managed user entry, to enable synchronization of different passwords on different external resources.

To store multiple passwords, you must extend the managed user schema to include additional properties for each target resource. You can set separate policies on each of these new properties, to ensure that the stored passwords adhere to the password policies of the specific external resources.

The following addition to a sample managed.json configuration shows an ldapPassword property that has been added to managed user objects. This property will be mapped to the password property on an LDAP system:

"ldapPassword" : {
    "title" : "Password",
    "type" : "string",
    "viewable" : false,
    "searchable" : false,
    "minLength" : 8,
    "userEditable" : true,
    "scope" : "private",
    "secureHash" : {
        "algorithm" : "SHA-256"
    },
    "policies" : [
        {
            "policyId" : "at-least-X-capitals",
            "params" : {
                "numCaps" : 2
            }
        },
        {
            "policyId" : "at-least-X-numbers",
            "params" : {
                "numNums" : 1
            }
        },
        {
            "policyId" : "cannot-contain-others",
            "params" : {
                "disallowedFields" : [
                    "userName",
                    "givenName",
                    "sn"
                ]
            }
        },
        {
            "policyId" : "re-auth-required",
            "params" : {
                "exceptRoles" : [
                    "system",
                    "openidm-admin",
                    "openidm-reg",
                    "openidm-cert"
                ]
            }
        },
        {
            "policyId" : "is-new",
            "params" : {
                "historyLength" : 2
            }
        }
    ]
},

This property definition shows that the value of ldapPassword will be hashed with the SHA-256 algorithm, and sets the policies that apply to values of this property.

To use this custom managed object property and its policies to update passwords on an external resource, you must make the corresponding configuration and script changes in your deployment. For a detailed sample that implements multiple passwords, see "Storing Multiple Passwords For Managed Users" in the Samples Guide. That sample can also help you set up password history policies.

14.3. Generating Random Passwords

There are many situations when you might want to generate a random password for one or more user objects.

OpenIDM provides a way to customize your user creation logic to include a randomly generated password that complies with the default password policy. This functionality is included in the default crypto script, bin/defaults/script/crypto.js, but is not invoked by default. For an example of how this functionality might be used, see the openidm/bin/defaults/script/ui/onCreate-user-set-default-fields.js script. The following section of that file (commented out by default) ensures that users created by using the Admin UI, or directly over the REST interface, have a randomly generated password added to their entry:

if (!object.password) {

    // generate random password that aligns with policy requirements
    object.password = require("crypto").generateRandomString([
        { "rule": "UPPERCASE", "minimum": 1 },
        { "rule": "LOWERCASE", "minimum": 1 },
        { "rule": "INTEGERS", "minimum": 1 },
        { "rule": "SPECIAL", "minimum": 1 }
    ], 16);

} 

Uncomment this section to invoke the random password generation when users are created. Note that changes made to scripts take effect after the time set in the recompile.minimumInterval, described in "Setting the Script Configuration".

The generated password can be encrypted, or hashed, in accordance with the managed user schema, defined in conf/managed.json. For more information, see "Encoding Attribute Values".

You can use this random string generation in a number of situations. Any script handler that is implemented in JavaScript can call the generateRandomString function.
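For example, a script handler could generate a shorter random string for a one-time code, using the same rule format shown above (an illustrative call, not part of the default configuration):

// Any JavaScript script handler can call generateRandomString.
var code = require("crypto").generateRandomString([
    { "rule": "UPPERCASE", "minimum": 2 },
    { "rule": "INTEGERS", "minimum": 2 }
], 12);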

14.4. Synchronizing Passwords Between OpenIDM and an LDAP Server

Password synchronization ensures uniform password changes across the resources that store the password. After password synchronization, a user can authenticate with the same password on each resource. No centralized directory or authentication server is required for performing authentication. Password synchronization reduces the number of passwords users need to remember, so they can use fewer, stronger passwords.

OpenIDM can propagate passwords to the resources that store a user's password. In addition, OpenIDM provides two plugins to intercept and synchronize passwords that are changed natively in OpenDJ and Active Directory.

When you use the password synchronization plugins, set up password policy enforcement on OpenDJ or Active Directory rather than on OpenIDM. Alternatively, ensure that all password policies that are enforced are identical to prevent password updates on one resource from being rejected by OpenIDM or by another resource.

The password synchronization plugins intercept password changes on the resource before the passwords are stored in encrypted form. The plugins then send intercepted password values to OpenIDM over an encrypted channel.

If the OpenIDM instance is unavailable when a password is changed in either OpenDJ or Active Directory, the respective password plugin intercepts the change, encrypts the password, and stores the encrypted password in a JSON file. The plugin then checks whether the OpenIDM instance is available, at a predefined interval. When OpenIDM becomes available, the plugin performs a PATCH on the managed user record, to replace the password with the encrypted password stored in the JSON file.

To be able to synchronize passwords, both password synchronization plugins require that the corresponding managed user object exist in the OpenIDM repository.

The following sections describe how to use the password synchronization plugin for OpenDJ, and the corresponding plugin for Active Directory.

14.4.1. Synchronizing Passwords With OpenDJ

Password synchronization with OpenDJ requires communication over the secure LDAP protocol (LDAPS). If you have not set up OpenDJ for LDAPS, do this before you start, as described in the OpenDJ Administration Guide.

OpenIDM must be installed and running before you continue with the procedures in this section.

14.4.1.1. Establishing Secure Communication Between OpenIDM and OpenDJ

There are two possible modes of communication between OpenIDM and the OpenDJ password synchronization plugin:

  • SSL Authentication. In this case, you must import the OpenIDM certificate into OpenDJ's truststore (either the self-signed certificate that is generated the first time OpenIDM is started, or a CA-signed certificate).

    For more information, see "To Import OpenIDM's Certificate into the OpenDJ Truststore".

  • Mutual SSL Authentication. In this case, you must import the OpenIDM certificate into OpenDJ's truststore, as described in "To Import OpenIDM's Certificate into the OpenDJ Truststore", and import the OpenDJ certificate into OpenIDM's truststore, as described in "To Import OpenDJ's Certificate into the OpenIDM Truststore". You must also add the OpenDJ certificate DN as a value of the allowedAuthenticationIdPatterns property in your project's conf/authentication.json file. Mutual SSL authentication is the default configuration of the password synchronization plugin, and the one described in this procedure.

To Import OpenIDM's Certificate into the OpenDJ Truststore

You must export the certificate from OpenIDM's keystore into OpenDJ's truststore so that the OpenDJ agent can make SSL requests to the OpenIDM endpoints.

OpenIDM generates a self-signed certificate the first time it starts up. This procedure uses the self-signed certificate to get the password synchronization plugin up and running. In a production environment, you should use a certificate that has been signed by a Certificate Authority.

  1. Export OpenIDM's generated self-signed certificate to a file, as follows:

    $ cd /path/to/openidm/security
    $ keytool \
     -export \
     -alias openidm-localhost \
     -file openidm-localhost.crt \
     -keystore keystore.jceks \
     -storetype jceks
    Enter keystore password: changeit
    Certificate stored in file <openidm-localhost.crt>

    The default OpenIDM keystore password is changeit.

  2. Import the self-signed certificate into OpenDJ's truststore:

    $ cd /path/to/opendj/config
    $ keytool \
     -importcert \
     -alias openidm-localhost \
     -keystore truststore \
     -storepass `cat keystore.pin` \
     -file /path/to/openidm/security/openidm-localhost.crt
    Owner: CN=localhost, O=OpenIDM Self-Signed Certificate, OU=None, L=None, ST=None, C=None
    Issuer: CN=localhost, O=OpenIDM Self-Signed Certificate, OU=None, L=None, ST=None, C=None
    Serial number: 15413e24ed3
    Valid from: Tue Mar 15 10:27:59 SAST 2016 until: Tue Apr 14 10:27:59 SAST 2026
    Certificate fingerprints:
    	 MD5:  78:81:DE:C0:5D:86:3E:DE:E0:67:C2:2E:9D:48:A0:0E
    	 SHA1: 29:14:FE:30:E7:D8:13:0F:A5:DD:DD:38:B5:D0:98:BA:E8:5B:96:59
    	 SHA256: F8:F2:F6:56:EF:DC:93:C0:98:36:95:...7D:F4:0D:F8:DC:22:7F:D1:CF:F5:FA:75:62:7A:69
    	 Signature algorithm name: SHA512withRSA
    	 Version: 3
    Trust this certificate? [no]:  yes
    Certificate was added to keystore
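
    You can optionally verify the import by listing the new alias (a quick check; this step is not required by the plugin):

    $ keytool \
     -list \
     -alias openidm-localhost \
     -keystore truststore \
     -storepass `cat keystore.pin`
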
To Import OpenDJ's Certificate into the OpenIDM Truststore

For mutual authentication, you must import OpenDJ's certificate into the OpenIDM truststore.

OpenDJ generates a self-signed certificate when you set up communication over LDAPS. This procedure uses the self-signed certificate to get the password synchronization plugin up and running. In a production environment, you should use a certificate that has been signed by a Certificate Authority.

  1. Export OpenDJ's generated self-signed certificate to a file, as follows:

    $ cd /path/to/opendj/config
    $ keytool \
     -export \
     -alias server-cert \
     -file server-cert.crt \
     -keystore keystore \
     -storepass `cat keystore.pin`
    Certificate stored in file <server-cert.crt>
  2. Import the OpenDJ self-signed certificate into OpenIDM's truststore:

    $ cd /path/to/openidm/security
    $ keytool \
     -importcert \
     -alias server-cert \
     -keystore truststore \
     -storepass changeit \
     -file /path/to/opendj/config/server-cert.crt
    Owner: CN=localhost, O=OpenDJ RSA Self-Signed Certificate
    Issuer: CN=localhost, O=OpenDJ RSA Self-Signed Certificate
    Serial number: 41cefe38
    Valid from: Thu Apr 14 10:17:39 SAST 2016 until: Wed Apr 09 10:17:39 SAST 2036
    Certificate fingerprints:
    	 MD5:  0D:BC:44:B3:C4:98:90:45:97:4A:8D:92:84:2B:FC:60
    	 SHA1: 35:10:B8:34:DE:38:59:AA:D6:DD:B3:44:C2:14:90:BA:BE:5C:E9:8C
    	 SHA256: 43:66:F7:81:3C:0D:30:26:E2:E2:09:...9F:5E:27:DC:F8:2D:42:79:DC:80:69:73:44:12:87
    	 Signature algorithm name: SHA1withRSA
    	 Version: 3
    Trust this certificate? [no]: yes
    Certificate was added to keystore
  3. Add the certificate DN as a value of the allowedAuthenticationIdPatterns property for the CLIENT_CERT authentication module, in your project's conf/authentication.json file.

    For example, if you are using the OpenDJ self-signed certificate, add the DN "CN=localhost, O=OpenDJ RSA Self-Signed Certificate, OU=None, L=None, ST=None, C=None", as shown in the following excerpt:

    $ more /path/to/openidm/project-dir/conf/authentication.json
    ...
    {
         "name" : "CLIENT_CERT",
         "properties" : {
             "queryOnResource" : "security/truststore",
             "defaultUserRoles" : [
                 "openidm-cert"
             ],
             "allowedAuthenticationIdPatterns" : [
                 "CN=localhost, O=OpenDJ RSA Self-Signed Certificate, OU=None, L=None, ST=None, C=None"
             ]
         },
         "enabled" : true
    }
    ...

14.4.1.2. Installing the OpenDJ Password Synchronization Plugin

The following steps install the password synchronization plugin on an OpenDJ directory server that is running on the same host as OpenIDM (localhost). If you are running OpenDJ on a different host, use the fully qualified domain name instead of localhost.

  1. Download the OpenDJ password synchronization plugin (OpenIDM Agents - OpenDJ 1.1.1) from the ForgeRock BackStage site.

  2. Extract the contents of the opendj-accountchange-handler-1.1.1.zip file to the directory where OpenDJ is installed:

    $ unzip ~/Downloads/opendj-accountchange-handler-1.1.1.zip -d /path/to/opendj/
    Archive:  opendj-accountchange-handler-1.1.1.zip
       creating: opendj/
       ...
  3. Restart OpenDJ to load the additional schema from the password synchronization plugin:

    $ cd /path/to/opendj/bin
    $ ./stop-ds --restart
    Stopping Server...
    ...
    [14/Apr/2016:13:19:11 +0200] category=EXTENSIONS severity=NOTICE
     msgID=org.opends.messages.extension.571 msg=Loaded extension from file
     '/path/to/opendj/lib/extensions/openidm-account-change-handler.jar' (build 1.1.1, revision 1)
    ...
    [14/Apr/2016:13:19:43 +0200] category=CORE severity=NOTICE msgID=org.opends.messages.core.139
    ... The Directory Server has started successfully
  4. Configure the password synchronization plugin, if required.

    The plugin configuration is specified in the openidm-pwsync-plugin-config.ldif file, which should have been extracted to /path/to/opendj/config when you extracted the plugin. Use a text editor to update the configuration.

    $ cd /path/to/opendj/config
    $ more openidm-pwsync-plugin-config.ldif
    dn: cn=OpenIDM Notification Handler,cn=Account Status Notification Handlers,cn=config
    objectClass: top
    objectClass: ds-cfg-account-status-notification-handler
    objectClass: ds-cfg-openidm-account-status-notification-handler
    cn: OpenIDM Notification Handler
    ...

    You can configure the following elements of the plugin:

    ds-cfg-enabled

    Specifies whether the plugin is enabled.

    Default value: true

    ds-cfg-attribute

    The attribute in OpenIDM that stores user passwords. This property is used to construct the patch request on the OpenIDM managed user object.

    Default value: password

    ds-task-id

    The query-id for the patch-by-query request. This query must be defined in the repository configuration.

    Default value: for-userName

    ds-cfg-attribute-type

    Specifies zero or more attribute types that the plugin will send along with the password change. If no attribute types are specified, only the DN and the new password will be synchronized to OpenIDM.

    Default values: entryUUID and uid

    ds-cfg-log-file

    The log file location where the changed passwords are written when the plugin cannot contact OpenIDM. The default location is the logs directory of the server instance, in the file named pwsync. Passwords in this file will be encrypted.

    Default value: logs/pwsync

    Note that this setting has no effect if ds-cfg-update-interval is set to 0 seconds.

    ds-cfg-update-interval

    The interval, in seconds, at which password changes are propagated to OpenIDM. If this value is 0, updates are made synchronously in the foreground, and no encrypted passwords are stored in the ds-cfg-log-file.

    Default value: 0 seconds

    ds-cfg-referrals-url

    The endpoint at which the plugin should find OpenIDM managed user accounts.

    Default value: https://localhost:8444/openidm/managed/user

    ds-cfg-ssl-cert-nickname

    The alias of the client certificate in the OpenDJ keystore. If LDAPS is configured during the GUI setup of OpenDJ, the default client key alias is server-cert.

    Default value: server-cert

    ds-cfg-realm

    The alias of the private key that should be used by OpenIDM to decrypt the session key.

    Default value: openidm-localhost

    ds-certificate-subject-dn

    The certificate subject DN of the OpenIDM private key. The default configuration assumes that you are using the self-signed certificate that is generated when OpenIDM first starts.

    Default value: CN=localhost, O=OpenIDM Self-Signed Certificate, OU=None, L=None, ST=None, C=None

    ds-cfg-key-manager-provider

    The OpenDJ key manager provider. The key manager provider specified here must be enabled.

    Default value: cn=JKS,cn=Key Manager Providers,cn=config

    ds-cfg-trust-manager-provider

    The OpenDJ trust manager provider. The trust manager provider specified here must be enabled.

    Default value: cn=JKS,cn=Trust Manager Providers,cn=config

    ds-openidm-httpuser

    An OpenIDM administrative username that the plugin will use to make REST calls to OpenIDM.

    Default value: openidm-admin

    ds-openidm-httppasswd

    The password of the OpenIDM administrative user specified by the previous property.

    Default value: openidm-admin
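
    For example, to queue password changes and propagate them to OpenIDM every 60 seconds, rather than synchronously, you might set the following values in the LDIF file before adding it to the configuration (an illustrative sketch; all other values keep their defaults):

    ds-cfg-update-interval: 60 seconds
    ds-cfg-log-file: logs/pwsync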

  5. When you have updated the plugin configuration to fit your deployment, add it to OpenDJ's configuration:

    $ cd /path/to/opendj/bin
    $ ./ldapmodify \
     --port 1389 \
     --hostname `hostname` \
     --bindDN "cn=Directory Manager" \
     --bindPassword "password" \
     --defaultAdd \
     --filename ../config/openidm-pwsync-plugin-config.ldif
    
    Processing ADD request for cn=OpenIDM Notification Handler,cn=Account Status
        Notification Handlers,cn=config
    ADD operation successful for DN cn=OpenIDM Notification Handler,cn=Account Status
        Notification Handlers,cn=config
  6. Restart OpenDJ for the new configuration to take effect:

    $ ./stop-ds --restart
    Stopping Server...
    ...
    [14/Apr/2016:13:25:50 +0200] category=EXTENSIONS severity=NOTICE
     msgID=org.opends.messages.extension.571 msg=Loaded extension from file
     '/path/to/opendj/lib/extensions/openidm-account-change-handler.jar' (build 1.1.1, revision 1)
    ...
    [14/Apr/2016:13:26:27 +0200] category=CORE severity=NOTICE msgID=org.opends.messages.core.139
     msg=The Directory Server has sent an alert notification generated by
     class org.opends.server.core.DirectoryServer (alert type org.opends.server.DirectoryServerStarted,
     alert ID org.opends.messages.core-135): The Directory Server has started successfully
  7. Adjust your OpenDJ password policy configuration to use the password synchronization plugin.

    The following command adjusts the default password policy:

    $ cd /path/to/opendj/bin
    $ ./dsconfig \
     set-password-policy-prop \
     --port 4444 \
     --hostname `hostname` \
     --bindDN "cn=Directory Manager" \
     --bindPassword password \
     --policy-name "Default Password Policy" \
     --set account-status-notification-handler:"OpenIDM Notification Handler" \
     --trustStorePath ../config/admin-truststore \
     --no-prompt
    Apr 14, 2016 1:28:32 PM org.forgerock.i18n.slf4j.LocalizedLogger info
    INFO: Loaded extension from file
     '/path/to/opendj/lib/extensions/openidm-account-change-handler.jar' (build 1.1.1, revision 1)    

Password synchronization should now be configured and working. To test that the setup has been successful, change a user password in OpenDJ. That password should be synchronized to the corresponding OpenIDM managed user account, and you should be able to query the user's own entry in OpenIDM using the new password.
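
For example, assuming a managed user whose OpenDJ password was just changed, a request similar to the following should succeed (the user name bjensen and password Tir3dOfF0rgett1ng are illustrative):

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: bjensen" \
 --header "X-OpenIDM-Password: Tir3dOfF0rgett1ng" \
 --request GET \
 "https://localhost:8443/openidm/managed/user?_queryId=for-userName&uid=bjensen"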

14.4.2. Synchronizing Passwords With Active Directory

Use the Active Directory password synchronization plugin to synchronize passwords between OpenIDM and Active Directory (on systems running at least Microsoft Windows Server 2003).

Install the plugin on Active Directory domain controllers (DCs) to intercept password changes, and send the password values to OpenIDM over an encrypted channel. You must have Administrator privileges to install the plugin. In a clustered Active Directory environment, you must install the plugin on all DCs.

14.4.2.1. Configuring OpenIDM for Password Synchronization With Active Directory

To support password synchronization with Active Directory, you must make the following configuration changes to your managed user schema (in your project's conf/managed.json file):

  • Add a new property, named userPassword, to the user object schema. This new property corresponds to the userPassword attribute in an Active Directory user entry.

    The following excerpt shows the required addition to the managed.json file:

    {
        "objects" : [
            {
                "name" : "user",
                ...
                "schema" : {
                    ...
                    "properties" : {
                        "password" : {
                            ...
                            "encryption" : {
                                "key" : "openidm-sym-default"
                            },
                            "scope" : "private"
                        },
                        "userPassword" : {
                            "description" : "",
                            "title" : "",
                            "viewable" : true,
                            "searchable" : false,
                            "userEditable" : false,
                            "policies" : [ ],
                            "returnByDefault" : false,
                            "minLength" : "",
                            "pattern" : "",
                            "type" : "string",
                            "encryption" : {
                                "key" : "openidm-sym-default"
                            },
                            "scope" : "private"
                        },
                        ...
                    },
                    "order" : [
                        "_id",
                        "userName",
                        "password",
                        ...
                        "userPassword"
                    ]
                }
            },
            ...
        ]
    }
  • Add an onUpdate script to the managed user object that checks whether the values of the two password properties (password and userPassword) match, and sets them to the same value if they do not.

    The following excerpt shows the required addition to the managed.json file:

    {
        "objects" : [
            {
                "name" : "user",
                ...
                "onUpdate" : {
                      "type" : "text/javascript",
                      "source" : "if (newObject.userPassword !== oldObject.userPassword) { newObject.password = newObject.userPassword; }"
                },
           ...
        ]
    }     

14.4.2.2. Installing the Active Directory Password Synchronization Plugin

The following steps install the password synchronization plugin on an Active Directory server:

  1. Download the Active Directory password synchronization plugin from the ForgeRock BackStage site.

    • Double-click the setup file to launch the installation wizard.

    • Alternatively, from a command line, start the installation wizard with the idm-setup.exe command. If you want to save the settings in a configuration file, you can use the /saveinf switch as follows.

      C:\Path\To > idm-setup.exe /saveinf=C:\temp\adsync.inf

    • If you have a configuration file with installation parameters, you can install the password plugin in silent mode as follows:

      C:\Path\To > idm-setup.exe /verysilent /loadinf=C:\temp\adsync.inf

  2. Provide the following information during the installation. You must accept the license agreement to proceed.

    OpenIDM Connection information
    • OpenIDM URL. Enter the URL where OpenIDM is deployed, including the query that targets each user account. For example:

      https://localhost:8444/openidm/managed/user?_action=patch&_queryId=for-userName&uid=${samaccountname}

      This query requires a mapping from sAMAccountName to userName in your project's mapping file (conf/sync.json). For example:

      {
          "mappings" : [
              {
                  "name" : "systemAdAccounts_managedUser",
                  "source" : "system/ad/account",
                  "target" : "managed/user",
                  "properties" : [
                      ...
                      {
                          "source" : "sAMAccountName",
                          "target" : "userName"
                      },
                      ...
                  ]
              }
          ]
      }

      The password synchronization plugin assumes that the Active Directory user attribute is sAMAccountName. The default attribute will work in most deployments. If you cannot use the sAMAccountName attribute to identify the Active Directory user, set the following registry keys on your Active Directory server, specifying an alternative attribute. These examples use the employeeId attribute instead of sAMAccountName:

      • userAttribute = employeeId

      • userSearchFilter = (&(objectClass=user)(employeeId=%s))

      • idmURL = https://localhost:8444/openidm/managed/user?_action=patch&_queryId=for-userName&uid=${employeeId}

      For information about creating registry keys, see Configure a Registry Item in the Windows documentation.
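
      For example, you might add the userAttribute value with the reg command, run as Administrator on the Active Directory server (a sketch; the key path matches the plugin's registry location described later in this procedure):

      C:\> reg add "HKLM\SOFTWARE\ForgeRock\OpenIDM\PasswordSync" /v userAttribute /t REG_SZ /d employeeId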

    • OpenIDM User Password attribute. The password attribute for the managed/user object, such as password.

      If the password attribute does not exist in the managed/user object on OpenIDM, the password sync service will return an error when it attempts to replay a password update that has been made in Active Directory. If your managed user objects do not include passwords, you can add an onCreate script to the Active Directory > Managed Users mapping that sets an empty password when managed user accounts are created. The following excerpt of a sync.json file shows such a script in the mapping:

      "mappings" : [
       {
         "name" : "systemAdAccounts_managedUser",
         "source" : "system/ad/account",
         "target" : "managed/user",
         "properties" : [
           {
             "source" : "sAMAccountName",
             "target" : "userName"
           }
         ],
         "onCreate" : {
           "type" : "text/javascript",
           "source" : "target.password=''"
         },
      ...

      The onCreate script creates an empty password in the managed/user object, so that the password attribute exists and can be patched.

    OpenIDM Authentication Parameters

    Provide the following information:

    • User name. Enter the name of an administrative user that can authenticate to OpenIDM, for example, openidm-admin.

    • Password. Enter the password of the user that authenticates to OpenIDM, for example, openidm-admin.

    • Select authentication type. Select the type of authentication that Active Directory will use to authenticate to OpenIDM.

      For plain HTTP authentication, select OpenIDM Header. For SSL mutual authentication, select Certificate.

    Certificate authentication settings

    If you selected Certificate as the authentication type on the previous screen, specify the details of the certificate that will be used for authentication.

    • Select Certificate file. Browse to select the certificate file that Active Directory will use to authenticate to OpenIDM. The certificate file must be configured with an appropriate encoding, cryptographic hash function, and digital signature. The plugin can read a public or a private key from a PKCS12 archive file.

      For production purposes, you should use a certificate that has been issued by a Certificate Authority. For testing purposes, you can generate a self-signed certificate. Whichever certificate you use, you must import that certificate into OpenIDM's trust store.

      To generate a self-signed certificate for Active Directory, follow these steps:

      1. On the Active Directory host, generate a private key, which will be used to generate a self-signed certificate with the alias ad-pwd-plugin-localhost:

        > keytool.exe ^
         -genkey ^
         -alias ad-pwd-plugin-localhost ^
         -keyalg rsa ^
         -dname "CN=localhost, O=AD-pwd-plugin Self-Signed Certificate" ^
         -keystore keystore.jceks ^
         -storetype JCEKS
        Enter keystore password: changeit
        Re-enter new password: changeit
        Enter key password for <ad-pwd-plugin-localhost>
              <RETURN if same as keystore password>
      2. Now use the private key, stored in the keystore.jceks file, to generate the self-signed certificate:

        > keytool.exe ^
         -selfcert ^
         -alias ad-pwd-plugin-localhost ^
         -validity 365 ^
         -keystore keystore.jceks ^
         -storetype JCEKS ^
         -storepass changeit
      3. Export the certificate. In this case, the keytool command exports the certificate in a PKCS12 archive file format, used to store a private key with a certificate:

        > keytool.exe ^
         -importkeystore ^
         -srckeystore keystore.jceks ^
         -srcstoretype jceks ^
         -srcstorepass changeit ^
         -srckeypass changeit ^
         -srcalias ad-pwd-plugin-localhost ^
         -destkeystore ad-pwd-plugin-localhost.p12 ^
         -deststoretype PKCS12 ^
         -deststorepass changeit ^
         -destkeypass changeit ^
         -destalias ad-pwd-plugin-localhost ^
         -noprompt
      4. The PKCS12 archive file is named ad-pwd-plugin-localhost.p12. Import the contents of the keystore contained in this file to the system that hosts OpenIDM. To do so, import the PKCS12 file into the OpenIDM truststore file, named truststore, in the /path/to/openidm/security directory.

        On the machine that is running OpenIDM, enter the following command:

        $ keytool \
         -importkeystore \
         -srckeystore /path/to/ad-pwd-plugin-localhost.p12 \
         -srcstoretype PKCS12 \
         -destkeystore truststore \
         -deststoretype JKS
    • Password to open the archive file with the private key and certificate. Specify the keystore password (changeit, in the previous example).

    Password Encryption settings

    Provide the details of the certificate that will be used to encrypt password values.

    • Archive file with certificate. Browse to select the archive file that will be used for password encryption. That file is normally set up in PKCS12 format.

      For evaluation purposes, you can use a self-signed certificate, as described earlier. For production purposes, you should use a certificate that has been issued by a Certificate Authority.

      Whichever certificate you use, the certificate must be imported into OpenIDM's keystore, so that OpenIDM can locate the key with which to decrypt the data. To import the certificate into OpenIDM's keystore, keystore.jceks, run the following command on the OpenIDM host (UNIX):

      $ keytool \
       -importkeystore \
       -srckeystore /path/to/ad-pwd-plugin-localhost.p12 \
       -srcstoretype PKCS12 \
       -destkeystore /path/to/openidm/security/keystore.jceks \
       -deststoretype jceks
    • Private key alias. Specify the alias for the certificate, such as ad-pwd-plugin-localhost.

    • Password to open certificate file. Specify the password to access the PFX keystore file, such as changeit, from the previous example.

    • Select encryption standard. Specify the encryption standard that will be used when encrypting the password value (AES-128, AES-192, or AES-256).

    Data storage

    Provide the details for the storage of encrypted passwords in the event that OpenIDM is not available when a password modification is made.

    • Select a secure directory in which the JSON files that contain encrypted passwords are queued. The server should prevent all access to this folder, except by the Password Sync service. The path name cannot include spaces.

    • Directory poll interval (seconds). Enter the number of seconds between calls to check whether OpenIDM is available, for example, 60, to poll OpenIDM every minute.

    Log storage

    Provide the details of the messages that should be logged by the plugin.

    • Select the location to which messages should be logged. The path name cannot include spaces.

    • Select logging level. Select the severity of messages that should be logged: one of error, info, warning, fatal, or debug.

    Select Destination Location

    Setup installs the plugin in the location you select, by default C:\Program Files\OpenIDM Password Sync.

  3. After running the installation wizard, restart the computer.

  4. If you need to change any settings after installation, access the settings using the Registry Editor under HKEY_LOCAL_MACHINE > SOFTWARE > ForgeRock > OpenIDM > PasswordSync.

    If you have configured SSL access, make sure authType is set to idm.

  5. If you chose to authenticate over plain HTTP in the previous step, your setup is now complete.

    If you chose mutual SSL authentication, complete the following step.

    1. The Password Sync Service uses Windows certificate stores to verify OpenIDM's identity. The certificate that OpenIDM uses must therefore be added to the list of trusted certificates on the Windows machine.

      For production purposes, you should use a certificate that has been issued by a certificate authority. For test purposes, you can use the self-signed certificate that is generated by OpenIDM on first startup.

      To add the OpenIDM certificate to the list of trusted certificates, use the Microsoft Management Console.

      1. Select Start and type mmc in the Search field.

      2. In the Console window, select File > Add/Remove Snap-in.

      3. From the left hand column, select Certificates and click Add.

      4. Select My user account, and click Finish.

      5. Repeat the previous two steps for Service account and Computer account.

        For Service account, select Local computer, then select OpenIDM Password Sync Service.

        For Computer account, select Local computer.

      6. Click Finish when you have added the three certificate snap-ins.

      7. Still in the Microsoft Management Console, expand Certificates - Current User > Personal and select Certificates.

      8. Select Action > All Tasks > Import to open the Certificate Import Wizard.

      9. Browse for the OpenIDM certificate (openidm-localhost.crt by default, if you use OpenIDM's self-signed certificate).

      10. Enter the Password for the certificate (changeit by default, if you use OpenIDM's self-signed certificate).

      11. Accept the default for the Certificate Store.

      12. Click Finish to complete the import.

      13. Repeat the previous six steps to import the certificate for:

        Certificates - Current User > Trusted Root Certification Authorities
        Certificates - Service > OpenIDM Password Sync\Personal
        Certificates - Service > OpenIDM Password Sync\Trusted Root Certification Authorities
        Certificates > Local Computer > Personal
        Certificates > Local Computer > Trusted Root Certification Authorities
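
To verify the setup, change a user password in Active Directory, and then authenticate to OpenIDM as the corresponding managed user with the new password (a sketch; the user name jdoe and the password are illustrative, and the endpoints you can reach depend on your access configuration):

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: jdoe" \
 --header "X-OpenIDM-Password: NewPassw0rd" \
 --request GET \
 "https://localhost:8443/openidm/info/login"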

Chapter 15. Managing Authentication, Authorization and Role-Based Access Control

OpenIDM provides a flexible authentication and authorization mechanism, based on REST interface URLs and on managed roles. This chapter describes how to configure the supported authentication modules, and how roles are used to support authentication, authorization, and access control.

15.1. OpenIDM Authentication

OpenIDM does not allow access to the REST interface without authentication. User self-registration requires anonymous access. For this purpose, OpenIDM includes an anonymous user, with the password anonymous. For more information, see "Internal Users".

OpenIDM supports an enhanced authentication mechanism over the REST interface that is compatible with AJAX-based client applications. Although OpenIDM understands the Authorization header of the HTTP basic authentication contract, it deliberately does not implement the full contract. In other words, it does not cause the browser's built-in mechanism to prompt for a username and password. OpenIDM does, however, understand utilities such as curl that send the username and password in the Authorization header.

In general, the HTTP basic authentication mechanism does not work well with client-side web applications, or with applications that need to render their own login screens. Because the browser stores and sends the username and password with each request, HTTP basic authentication also has significant security vulnerabilities. OpenIDM therefore supports sending the username and password via the Authorization header, and returns a token for subsequent access.

This document uses the OpenIDM authentication headers in all REST examples, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 ...

For more information about the OpenIDM authentication mechanism, see "Use Message Level Security".

15.1.1. Authenticating OpenIDM Users

OpenIDM stores two types of users in its repository: internal users and managed users. The way in which both of these user types are authenticated is defined in your project's conf/authentication.json file.

15.1.1.1. Internal Users

OpenIDM creates two internal users by default: anonymous and openidm-admin. These internal user accounts are separated from other user accounts to protect them from any reconciliation or synchronization processes.

OpenIDM stores internal users and their role membership in a table in the repository. The two default internal users have the following functions:

anonymous

This user enables anonymous access to OpenIDM, for users who do not have their own accounts. The anonymous user has limited rights within OpenIDM. By default, the anonymous user has the openidm-reg role, and can be used to allow self-registration. For more information about self-registration, see "The End User and Commons User Self-Service".

openidm-admin

This user serves as the top-level administrator. After installation, the openidm-admin user has full access, and provides a fallback mechanism in the event that other users are locked out of their accounts. Do not use openidm-admin for regular tasks. Under normal circumstances, the openidm-admin account does not represent a regular user, so audit log records for this account do not represent the actions of any real person.

The default password for the openidm-admin user (also openidm-admin) is not encrypted, and is not secure. In production environments, you must change this password to a more secure one, as described in the following section. When you change the password, the new value is encoded with a salted hash algorithm.

15.1.1.1.1. Managing Internal Users Over REST

Like any other user in the repository, you can manage internal users over the REST interface.

To list the internal users over REST, query the repo endpoint as follows:

$ curl \
   --cacert self-signed.crt \
   --header "X-OpenIDM-Username: openidm-admin" \
   --header "X-OpenIDM-Password: openidm-admin" \
   --request GET  \
   "https://localhost:8443/openidm/repo/internal/user?_queryId=query-all-ids"
{
  "result": [
    {
      "_id": "openidm-admin",
      "_rev": "1"
    },
    {
      "_id": "anonymous",
      "_rev": "1"
    }
  ],
  "resultCount": 2,
  "pagedResultsCookie": null,
  "totalPagedResultsPolicy": "NONE",
  "totalPagedResults": -1,
  "remainingPagedResults": -1
}

To query the details of an internal user, include the user's ID in the request, for example:

$ curl \
   --cacert self-signed.crt \
   --header "X-OpenIDM-Username: openidm-admin" \
   --header "X-OpenIDM-Password: openidm-admin" \
   --request GET  \
   "https://localhost:8443/openidm/repo/internal/user/openidm-admin"
{
  "_id": "openidm-admin",
  "_rev": "1",
  "roles": [
    {
      "_ref": "repo/internal/role/openidm-admin"
    },
    {
      "_ref": "repo/internal/role/openidm-authorized"
    }
  ],
  "userName": "openidm-admin",
  "password": "openidm-admin"
}

To change the password of the default administrative user, send a PUT request to the user object. The following example changes the password of the openidm-admin user to Passw0rd:

$ curl \
 --cacert self-signed.crt \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request PUT \
 --data '{
     "_id": "openidm-admin",
     "roles": [
         {
             "_ref": "repo/internal/role/openidm-admin"
         },
         {
             "_ref": "repo/internal/role/openidm-authorized"
         }
     ],
     "userName": "openidm-admin",
     "password": "Passw0rd"
 }' \
 "https://localhost:8443/openidm/repo/internal/user/openidm-admin"

15.1.1.2. Managed Users

External users that are managed by OpenIDM are known as managed users.

The table in which managed users are stored depends on the type of repository. For JDBC repositories, OpenIDM stores managed users in the managed objects table, named managedobjects, and indexes those objects in a table named managedobjectproperties.

For an OrientDB repository, managed objects are stored in the table managed_user.

OpenIDM provides RESTful access to managed users, at the context path /openidm/managed/user. For more information, see "Managing Users Over REST".
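
For example, the following request (reusing the query-all-ids query shown earlier for internal users) lists the IDs of all managed users:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/managed/user?_queryId=query-all-ids"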

15.1.1.3. Authenticating Internal and Managed Users

By default, both managed and internal users authenticate with the attributes username and password. However, you can explicitly define the properties that constitute usernames, passwords, or roles, with the propertyMapping object in the conf/authentication.json file. The following excerpt of the authentication.json file shows the default property mapping object:

...
    "propertyMapping" : {
        "authenticationId" : "username",
        "userCredential" : "password",
        "userRoles" : "roles"
    },
 ...

If you change the attribute names that are used for authentication, you must adjust the following authentication queries (defined in the repository configuration file, openidm/conf/repo.repo-type.json).

Two queries are defined by default.

credential-internaluser-query

This query uses the username attribute for login, for internal users. For example, the following credential-internaluser-query is defined in the default repository configuration file for a MySQL repository.

"credential-internaluser-query" : "SELECT objectid, pwd, roles FROM
        ${_dbSchema}.${_table} WHERE objectid = ${username}",

credential-query

This query uses the username attribute for login, for managed users. For example, the following credential-query is defined in the default repository configuration file for a MySQL repository.

"credential-query" : "SELECT * FROM ${_dbSchema}.${_table} WHERE
        objectid = ${username} and accountStatus = 'active'",

The query that is used for a particular resource is specified by the queryId property in the authentication.json file. The following sample excerpt of that file shows that the credential-query is used when validating managed user credentials.

{
    "queryId" : "credential-query",
    "queryOnResource" : "managed/user",
...
}  

15.1.2. Supported Authentication and Session Modules

The authentication configuration is defined in conf/authentication.json. This file configures the methods by which a user request is authenticated. It includes both session and authentication module configuration.

You may review and configure supported modules in the Admin UI. To do so, log into https://localhost:8443/admin, and select Configure > System Preferences > Authentication.

15.1.2.1. Supported Session Module

At this time, OpenIDM includes one supported session module. The JSON Web Token session module configuration specifies keystore information, and details about the session lifespan. The default JWT_SESSION configuration is as follows:

   "name" : "JWT_SESSION",
   "properties" : {
       "keyAlias" : "openidm-localhost",
       "privateKeyPassword" : "&{openidm.keystore.password}",
       "keystoreType" : "&{openidm.keystore.type}",
       "keystoreFile" : "&{openidm.keystore.location}",
       "keystorePassword" : "&{openidm.keystore.password}",
       "maxTokenLifeMinutes" : "120",
       "tokenIdleTimeMinutes" : "30",
       "sessionOnly" : true
   }

For more information about the JWT_SESSION module, see the following Javadoc page: Class JwtSessionModule.

15.1.2.2. Supported Authentication Modules

OpenIDM evaluates modules in the order shown in the authentication.json file for your project. When OpenIDM finds a module to authenticate a user, it does not evaluate subsequent modules.

You can also configure the order of authentication modules in the Admin UI. After logging in, click Configure > System Preferences > Authentication. The following figure illustrates how you might include the IWA module in the Admin UI.

OpenIDM Authentication Modules, with IWA

Prioritize authentication modules that query OpenIDM resources ahead of modules that query external resources. Evaluating modules that query external resources first can lead to problems for internal users such as openidm-admin.
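
For example, an authModules array that follows this advice might order the modules as follows (an abbreviated sketch; the module properties are elided):

"authModules" : [
    { "name" : "STATIC_USER", ... },
    { "name" : "MANAGED_USER", ... },
    { "name" : "INTERNAL_USER", ... },
    { "name" : "PASSTHROUGH", ... }
]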

STATIC_USER

STATIC_USER authentication provides an anonymous authentication mechanism that bypasses any database lookups if the headers in a request indicate that the user is anonymous. The following sample REST call uses STATIC_USER authentication in the self-registration process:

$ curl \
 --header "X-OpenIDM-Password: anonymous" \
 --header "X-OpenIDM-Username: anonymous" \
 --header "Content-Type: application/json" \
 --data '{
       "userName":"steve",
       "givenName":"Steve",
       "sn":"Carter",
       "telephoneNumber":"0828290289",
       "mail":"scarter@example.com",
       "password":"Passw0rd"
       }' \
 --request POST \
 "https://localhost:8443/openidm/managed/user/?_action=create"

Note that this is not the same as an anonymous request that is issued without headers.

Authenticating with the STATIC_USER module avoids the performance cost of reading the database for self-registration, certain UI requests, and other actions that can be performed anonymously. Authenticating the anonymous user with the STATIC_USER module is identical to authenticating the anonymous user with the INTERNAL_USER module, except that the database is not accessed.

A sample STATIC_USER authentication configuration follows:

{
    "name" : "STATIC_USER",
    "enabled" : true,
    "properties" : {
        "propertyMapping" : "{}",
        "queryOnResource" : "repo/internal/user",
        "username" : "anonymous",
        "password" : "anonymous",
        "defaultUserRoles" : [
            "openidm-reg"
        ],
        "augmentSecurityContext" : null
    }
}       
TRUSTED_ATTRIBUTE

The TRUSTED_ATTRIBUTE authentication module allows you to configure OpenIDM to trust the HttpServletRequest attribute of your choice. You can configure it by adding the TRUSTED_ATTRIBUTE module to your authentication.json file, as shown in the following code block:

...
{
    "name" : "TRUSTED_ATTRIBUTE",
    "properties" : {
        "queryOnResource" : "managed/user",
        "propertyMapping" : {
            "authenticationId" : "username",
            "userRoles" : "authzRoles"
        },
        "defaultUserRoles" : [ ],
        "authenticationIdAttribute" : "X-ForgeRock-AuthenticationId",
        "augmentSecurityContext" : {
            "type" : "text/javascript",
            "file" : "auth/populateRolesFromRelationship.js"
        }
    },
    "enabled" : true
}
...

TRUSTED_ATTRIBUTE authentication takes the authentication ID from the configured HttpServletRequest attribute (X-ForgeRock-AuthenticationId in this example), matches that ID against the username of a managed/user object, and assigns the user the roles defined in that user's authzRoles property.

To see how you can configure this with OpenIDM, see "The Trusted Servlet Filter Sample" in the Samples Guide.

MANAGED_USER

MANAGED_USER authentication queries the repository, specifically the managed/user objects, and allows authentication if the credentials match. The default configuration uses the username and password of the managed user to authenticate, as shown in the following sample configuration.

{
    "name" : "MANAGED_USER",
    "enabled" : true,
    "properties" : {
        "queryId" : "credential-query",
        "queryOnResource" : "managed/user",
        "propertyMapping" : {
            "authenticationId" : "username",
            "userCredential" : "password",
            "userRoles" : "roles"
        },
        "defaultUserRoles" : [ ]
    }
},    
INTERNAL_USER

INTERNAL_USER authentication queries the repository, specifically the repo/internal/user objects, and allows authentication if the credentials match. The default configuration uses the username and password of the internal user to authenticate, as shown in the following sample configuration.

{
    "name" : "INTERNAL_USER",
    "enabled" : true,
    "properties" : {
        "queryId" : "credential-internaluser-query",
        "queryOnResource" : "repo/internal/user",
        "propertyMapping" : {
            "authenticationId" : "username",
            "userCredential" : "password",
            "userRoles" : "roles"
        },
        "defaultUserRoles" : [ ]
    }
},    
CLIENT_CERT

The client certificate module, CLIENT_CERT, provides authentication by validating a client certificate, transmitted via an HTTP request. The module compares the subject DN of the certificate in the request with the subject DNs of the certificates in the OpenIDM truststore.

A sample CLIENT_CERT authentication configuration follows:

{
    "name" : "CLIENT_CERT",
    "enabled" : true,
    "properties" : {
        "queryOnResource" : "security/truststore",
        "defaultUserRoles" : [ "openidm-cert" ],
        "allowedAuthenticationIdPatterns" : [ ]
    }
},

The "allowedAuthenticationIdPatterns" filter enables you to specify an array of usernames or username patterns that will be accepted for authentication. If this property is empty, any username can authenticate.

For detailed options, see "Configuring the CLIENT_CERT Authentication Module".

The modules that follow point to external systems. In the authentication.json file, you should generally include these modules after any modules that query internal OpenIDM resources.

PASSTHROUGH

PASSTHROUGH authentication queries an external system, such as an LDAP server, and allows authentication if the provided credentials match those in the external system. For a sample configuration that authenticates against user objects in the system endpoint system/ldap/account, and for more information on pass-through authentication, see "Configuring Pass-Through Authentication".

OPENAM_SESSION

The OPENAM_SESSION module enables you to protect an OpenIDM deployment with ForgeRock's OpenAM Access Management product. For an example of how you might use the OPENAM_SESSION module, see "Full Stack Sample - Using OpenIDM in the ForgeRock Identity Platform" in the Samples Guide.

For detailed options, see "OPENAM_SESSION Module Configuration Options".

IWA

The IWA module supports Integrated Windows Authentication, typically used together with an LDAP connector for an Active Directory server. For an example of how you can set that up with a Kerberos server, see "Kerberos Configuration Example".

15.1.3. Configuring Pass-Through Authentication

OpenIDM 4 supports a pass-through authentication mechanism. With pass-through authentication, the credentials included with the REST request are validated against those stored in a remote system, such as an LDAP server.

The following excerpt of an authentication.json file shows a pass-through authentication configuration for an LDAP system.

"authModules" : [
    {
       "name" : "PASSTHROUGH",
       "enabled" : true,
       "properties" : {
          "augmentSecurityContext": {
             "type" : "text/javascript",
             "file" : "auth/populateAsManagedUser.js"
          },
          "queryOnResource" : "system/ldap/account",
          "propertyMapping" : {
             "authenticationId" : "uid",
             "groupMembership" : "memberOf"
          },
          "groupRoleMapping" : {
             "openidm-admin" : ["cn=admins"]
          },
          "managedUserLink" : "systemLdapAccounts_managedUser",
          "defaultUserRoles" : [
             "openidm-authorized"
          ]
       }
    },
    ...
 ] 

For more information on authentication module properties, see "Authentication and Session Module Configuration Details".

The OpenIDM samples, described in "Overview of the OpenIDM Samples" in the Samples Guide, include several examples of pass-through authentication configuration. Samples 2, 2b, 2c, and 2d use an external LDAP system for authentication. Sample 3 authenticates against a SQL database. Sample 6 authenticates against an Active Directory server. The scriptedrest2dj sample uses a scripted REST connector to authenticate against an OpenDJ server.

15.1.4. Kerberos Configuration Example

This section assumes that you have an active Kerberos server acting as a Key Distribution Center (KDC). If you are running Active Directory in your deployment, that service includes a Kerberos KDC by default.

To take advantage of a Kerberos KDC, you need to do two things: first, include at least the IWA module (and possibly the PASSTHROUGH module) in the authentication.json file; second, modify the system.properties file to point to a JAAS configuration file, as described at the end of this section.

For the IWA (Integrated Windows Authentication) module, this section assumes that you have configured an LDAP connector for an Active Directory server. To confirm, check for the following mapping source in the sync.json configuration file:

system/ad/account

You could then include the following code block towards the end of the authentication.json file. Include appropriate values for the kerberosRealm and kerberosServerName. For a list of definitions, see "Kerberos Definitions".

"authModules" : [
    ...
    {
        "name" : "IWA",
        "properties": {
            "servicePrincipal" : "",
            "keytabFileName" : "security/name.HTTP.keytab",
            "kerberosRealm" : "",
            "kerberosServerName" : "",
            "queryOnResource" : "system/ad/account",
            "propertyMapping" : {
                "authenticationId" : "sAMAccountName",
                "groupMembership" : "memberOf"
            },
            "groupRoleMapping" : {
                "openidm-admin": [ ]
            },
            "groupComparisonMethod": "ldap",
            "defaultUserRoles" : [
                "openidm-authorized"
            ],
            "augmentSecurityContext" : {
                "type" : "text/javascript",
                "file" : "auth/populateAsManagedUser.js"
            }
        },
        "enabled" : true
    }
]

To grant different roles to users who are authenticated through the IWA module, list those roles in the groupRoleMapping property.
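
For example, the following mapping (the group DNs are illustrative) grants the openidm-admin role to members of an Active Directory administrators group, and the openidm-authorized role to ordinary users:

"groupRoleMapping" : {
    "openidm-admin" : [ "CN=Domain Admins,CN=Users,DC=example,DC=com" ],
    "openidm-authorized" : [ "CN=Domain Users,CN=Users,DC=example,DC=com" ]
}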

You could pair the IWA module with the PASSTHROUGH module. When paired, a failure in the IWA module allows users to revert to forms-based authentication.

You could add the PASSTHROUGH module, based on the model shown in "Configuring Pass-Through Authentication".

Once you have included at least the IWA module, edit the system.properties file. Include the following entry to point to a JAAS configuration file, substituting a different file name for gssapi_jaas.conf if desired:

java.security.auth.login.config=/path/to/openidm/conf/gssapi_jaas.conf

In the gssapi_jaas.conf file, include the following information related to the LDAP connector:

org.identityconnectors.ldap.LdapConnector {
    com.sun.security.auth.module.Krb5LoginModule required client=TRUE
    principal="bjensen@EXAMPLE.COM" useKeyTab=true keyTab="/path/to/bjensen.keytab";
};

15.1.4.1. Kerberos Definitions

The Windows Desktop authentication module uses Kerberos. The user presents a Kerberos token to the ForgeRock product, through the Simple and Protected GSS-API Negotiation Mechanism (SPNEGO) protocol. The Windows Desktop authentication module enables desktop single sign on such that a user who has already authenticated with a Kerberos Key Distribution Center can authenticate without having to provide the login information again. Users might need to set up Integrated Windows Authentication in Internet Explorer to benefit from single sign on when logged on to a Windows desktop.

The Kerberos attributes shown here may correspond to an ssoadm attribute for OpenAM or a JSON attribute for OpenIDM.

Service Principal

Specify the Kerberos principal for authentication in the following format:

HTTP/host.domain@dc-domain-name

Here, host and domain correspond to the host and domain names of the installed ForgeRock product, and dc-domain-name is the domain name of the Windows Kerberos domain controller server. The dc-domain-name can differ from the domain name for the installed ForgeRock product.
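
For example, if OpenIDM runs on openidm.example.com and the Windows domain controller's Kerberos domain is AD.EXAMPLE.COM, the service principal would be (host and domain names illustrative):

HTTP/openidm.example.com@AD.EXAMPLE.COM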

You set up the account on the Windows domain controller, creating a computer account for the installed ForgeRock product and associating the new account with a service provider name.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-principal-name

JSON attribute: servicePrincipal

Keytab File Name

Specify the full path of the keytab file for the Service Principal. You generate the keytab file using the Windows ktpass utility.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-keytab-file

JSON attribute: keytabFileName

Kerberos Realm

Specify the Kerberos Key Distribution Center realm. For the Windows Kerberos service this is the domain controller server domain name.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-kerberos-realm

JSON attribute: kerberosRealm

Kerberos Server Name

Specify the fully qualified domain name of the Kerberos Key Distribution Center server, such as that of the domain controller server.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-kdc

JSON attribute: kerberosServerName

Return Principal with Domain Name

When enabled, OpenAM automatically returns the Kerberos principal with the domain controller's domain name during authentication.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-returnRealm

JSON attribute: returnRealm

Authentication Level

Sets the authentication level used to indicate the level of security associated with the module. The value can range from 0 to any positive integer.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-auth-level

JSON attribute: authLevel

Search for the user in the realm

Validates the user against the configured data stores. If the user from the Kerberos token is not found, authentication will fail. If an authentication chain is set, the user will be able to authenticate through another module.

ssoadm attribute: iplanet-am-auth-windowsdesktopsso-lookupUserInRealm

JSON attribute: lookupUserInRealm

Note

For Windows 7 and later, you will need to disable the "Enable Integrated Windows Authentication" option in Internet Explorer. In addition, you will need to add and activate the DisableNTLMPreAuth key in the Windows Registry. For detailed instructions, see the Microsoft KB article "You cannot post data to a non-NTLM-authenticated Web site".

15.1.5. Configuring the CLIENT_CERT Authentication Module

The CLIENT_CERT authentication module compares the subject DN of the client certificate with the subject DNs of the certificates in the OpenIDM truststore.

The following procedure demonstrates the process with a generated self-signed certificate for the CLIENT_CERT module. If you have a *.pem file signed by a certificate authority, substitute accordingly.

In this procedure, you will verify the certificate over port 8444 as defined in your project's conf/boot/boot.properties file:

openidm.auth.clientauthonlyports=8443,8444

Demonstrating the CLIENT_CERT Module
  1. Generate the self-signed certificate with the following command:

    $ openssl \
     req \
     -x509 \
     -newkey rsa:1024 \
     -keyout key.pem \
     -out cert.pem \
     -days 3650 \
     -nodes 
  2. Respond to the questions when prompted.

    Country Name (2 letter code) [XX]:
    State or Province Name (full name) []:
    Locality Name (eg, city) [Default City]:
    Organization Name (eg, company) [Default Company Ltd]:ForgeRock
    Organizational Unit Name (eg, section) []:
    Common Name (eg, your name or your server's hostname) []:localhost
    Email Address []:

    In this case, the Organization Name corresponds to the O (for organization) of ForgeRock, and the Common Name corresponds to the cn of localhost. You will use this information in a couple of steps.

  3. Import the certificate cert.pem file into the OpenIDM truststore:

    $ keytool \
     -importcert \
     -keystore /path/to/openidm/security/truststore \
     -storetype JKS \
     -storepass changeit \
     -file cert.pem \
     -trustcacerts \
     -noprompt \
     -alias client-cert-example
     Certificate was added to keystore
  4. Open the authentication.json file in the project-dir/conf directory. Scroll to the code block with CLIENT_CERT and include the information from when you generated the self-signed certificate:

    ...
    {
       "name" : "CLIENT_CERT",
       "properties" : {
          "queryOnResource" : "security/truststore",
          "defaultUserRoles" : [
             "openidm-cert"
          ],
          "allowedAuthenticationIdPatterns" : [
             "cn=localhost, O=ForgeRock"
          ]
       },
       "enabled" : true
    }
    ... 
  5. Start OpenIDM:

    $ cd /path/to/openidm
    $ ./startup.sh -p project-dir
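
With OpenIDM running, you can test certificate-based authentication by sending a request on the client-auth port, using the generated key and certificate (a sketch; the endpoints such a request can reach depend on the access rules granted to the openidm-cert role):

$ curl \
 --insecure \
 --cert cert.pem \
 --key key.pem \
 --request GET \
 "https://localhost:8444/openidm/info/ping"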