Guide to configuring and integrating ForgeRock® Identity Management software into identity management solutions. This software offers flexible services for automating management of the identity life cycle.

Preface

ForgeRock Identity Platform™ is the only offering for access management, identity management, user-managed access, directory services, and an identity gateway, designed and built as a single, unified platform.

The platform includes the following components that extend what is available in open source projects to provide fully featured, enterprise-ready software:

  • ForgeRock Access Management (AM)

  • ForgeRock Identity Management (IDM)

  • ForgeRock Directory Services (DS)

  • ForgeRock Identity Gateway (IG)

1. About This Guide

In this guide you will learn how to integrate ForgeRock Identity Management (IDM) software as part of a complete identity management solution.

This guide is written for systems integrators building solutions based on ForgeRock Identity Management services. This guide describes the product functionality, and shows you how to set up and configure IDM software as part of your overall identity management solution.

2. Formatting Conventions

Most examples in the documentation are created in GNU/Linux or Mac OS X operating environments. If distinctions are necessary between operating environments, examples are labeled with the operating environment name in parentheses. To avoid repetition, file system directory names are often given only in UNIX format, as in /path/to/server, even if the text applies to C:\path\to\server as well.

Absolute path names usually begin with the placeholder /path/to/. This path might translate to /opt/, C:\Program Files\, or somewhere else on your system.

Command-line and terminal sessions are formatted as follows:

$ echo $JAVA_HOME
/path/to/jdk

Command output is sometimes reformatted for narrower, more readable display, even though the formatting parameters are not shown in the command.

Program listings are formatted as follows:

class Test {
    public static void main(String [] args)  {
        System.out.println("This is a program listing.");
    }
}

3. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

4. Using the ForgeRock.org Site

The ForgeRock.org site has links to source code for ForgeRock open source software, as well as links to the ForgeRock forums and technical blogs.

If you are a ForgeRock customer, raise a support ticket instead of using the forums. ForgeRock support professionals will get in touch to help you.

Chapter 1. Architectural Overview

This chapter introduces the IDM architecture, and describes component modules and services.

In this chapter you will learn:

  • How IDM uses the OSGi framework as a basis for its modular architecture

  • How the infrastructure modules provide the features required for IDM's core services

  • What those core services are and how they fit into the overall architecture

  • How IDM provides access to the resources it manages

1.1. Modular Framework

IDM implements infrastructure modules that run in an OSGi framework. It exposes core services through RESTful APIs to client applications.

The following figure provides an overview of the architecture. Specific components are described in more detail in subsequent sections of this chapter.

Figure 1.1. Modular Architecture
IDM architecture

The IDM framework is based on OSGi:

OSGi

OSGi is a module system and service platform for the Java programming language that implements a complete and dynamic component model. For a good introduction to OSGi, see the OSGi site. IDM runs in Apache Felix, an implementation of the OSGi Framework and Service Platform.

Servlet

The Servlet layer provides RESTful HTTP access to the managed objects and services. IDM embeds the Jetty Servlet Container, which can be configured for either HTTP or HTTPS access.

1.2. Infrastructure Modules

The infrastructure modules provide the underlying features needed for core services:

BPMN 2.0 Workflow Engine

The embedded workflow and business process engine is based on Activiti and the Business Process Model and Notation (BPMN) 2.0 standard.

For more information, see Chapter 21, "Integrating Business Processes and Workflows".

Task Scanner

The task-scanning mechanism performs a batch scan for a specified property, on a scheduled interval. The task scanner executes a task when the value of that property matches a specified value.

For more information, see Section 17.8, "Scanning Data to Trigger Tasks".

Scheduler

The scheduler provides a cron-like scheduling component implemented using the Quartz library. Use the scheduler, for example, to enable regular synchronizations and reconciliations.
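
As an illustration only (the file name and mapping name here are hypothetical), a nightly reconciliation could be configured in a schedule file such as conf/schedule-nightly-recon.json:

{
    "enabled" : true,
    "type" : "cron",
    "schedule" : "0 0 0 * * ?",
    "persisted" : true,
    "invokeService" : "sync",
    "invokeContext" : {
        "action" : "reconcile",
        "mapping" : "systemCsvAccounts_managedUser"
    }
}

The schedule field uses Quartz cron syntax; this sketch would run the reconciliation at midnight every day.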

For more information, see Chapter 17, "Scheduling Tasks and Events".

Script Engine

The script engine is a pluggable module that provides the triggers and plugin points for IDM. JavaScript and Groovy are supported.

Policy Service

An extensible policy service applies validation requirements to objects and properties, when they are created or updated.

For more information, see Chapter 12, "Using Policies to Validate Data".

Audit Logging

Auditing logs all relevant system activity to the configured log stores. This includes the data from reconciliation as a basis for reporting, as well as detailed activity logs to capture operations on the internal (managed) and external (system) objects.

For more information, see Chapter 22, "Setting Up Audit Logging".

Repository

The repository provides a common abstraction for a pluggable persistence layer. IDM supports reconciliation and synchronization with several major external data stores in production, including relational databases, LDAP servers, and even flat CSV and XML files.

The repository API uses a JSON-based object model with RESTful principles consistent with the other IDM services. To facilitate testing, IDM includes an embedded instance of ForgeRock Directory Services (DS). In production, you must use a supported JDBC repository, as described in Chapter 2, "Selecting a Repository" in the Installation Guide.

1.3. Core Services

The core services are the heart of the resource-oriented unified object model and architecture:

Object Model

Artifacts handled by IDM are Java object representations of the JavaScript object model as defined by JSON. The object model supports interoperability and potential integration with many applications, services, and programming languages.

IDM can serialize and deserialize these structures to and from JSON as required. IDM also exposes a set of triggers and functions that you can define, in either JavaScript or Groovy, which can natively read and modify these JSON-based object model structures.

Managed Objects

A managed object is an object that represents the identity-related data managed by IDM. Managed objects are configurable, JSON-based data structures that IDM stores in its pluggable repository. The default managed object configuration includes users and roles, but you can define any kind of managed object, for example, groups or devices.

You can access managed objects over the REST interface with a query similar to the following:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/..."
System Objects

System objects are pluggable representations of objects on external systems. For example, a user entry that is stored in an external LDAP directory is represented as a system object in IDM.

System objects follow the same RESTful resource-based design principles as managed objects. They can be accessed over the REST interface with a query similar to the following:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/system/..."

There is a default implementation for the OpenICF framework that allows any connector object to be represented as a system object.

Mappings

Mappings define policies between source and target objects and their attributes during synchronization and reconciliation. Mappings can also define triggers for validation, customization, filtering, and transformation of source and target objects.

For more information, see Chapter 15, "Synchronizing Data Between Resources".

Synchronization and Reconciliation

Reconciliation enables on-demand and scheduled resource comparisons between the managed object repository and the source or target systems. Comparisons can result in different actions, depending on the mappings defined between the systems.

Synchronization enables creating, updating, and deleting resources from a source to a target system, either on demand or according to a schedule.

For more information, see Chapter 15, "Synchronizing Data Between Resources".
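
For example, a reconciliation run can be launched over REST with a call similar to the following (the mapping name depends on your configuration and is shown here only as an example):

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/recon?_action=recon&mapping=systemCsvAccounts_managedUser"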

1.4. Secure Commons REST Commands

Representational State Transfer (REST) is a software architecture style for exposing resources, using the technologies and protocols of the World Wide Web. For more information on the ForgeRock REST API, see Appendix D, "REST API Reference".

REST interfaces are commonly tested with a curl command. Many of these commands are used in this document. They work with the standard ports associated with Java EE communications, 8080 and 8443.

To run curl over the secure port, 8443, you must include either the --insecure option, or follow the instructions shown in Section 20.2.2, "Restricting REST Access to the HTTPS Port". You can use those instructions with the self-signed certificate generated when IDM starts, or with a *.crt file provided by a certificate authority.
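
For example, with the default self-signed certificate, a call over the secure port looks similar to the following:

$ curl \
 --insecure \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/info/ping"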

1.5. Access Layer

The access layer provides the user interfaces and public APIs for accessing and managing the repository and its functions:

RESTful Interfaces

IDM provides REST APIs for CRUD operations, for invoking synchronization and reconciliation, and to access several other services.

For more information, see Appendix D, "REST API Reference".

User Interfaces

User interfaces provide access to most of the functionality available over the REST API.

Chapter 2. Starting and Stopping the Server

This chapter covers the scripts provided for starting and stopping IDM, and describes how to verify the health of a system, that is, that all requirements are met for a successful system startup.

2.1. To Start and Stop the Server

By default, you start and stop IDM in interactive mode.

To start the server interactively, open a terminal or command window, change to the openidm directory, and run the startup script:

  • startup.sh (UNIX)

  • startup.bat (Windows)

The startup script starts the server, and opens an OSGi console with a -> prompt where you can issue console commands.

To stop the server interactively in the OSGi console, run the shutdown command:

-> shutdown

You can also start IDM as a background process on UNIX and Linux systems. Follow these steps, preferably before you start IDM for the first time:

  1. If you have already started the server, shut it down and remove the Felix cache files under openidm/felix-cache/:

    -> shutdown
    ...
    $ rm -rf felix-cache/*
  2. Start the server in the background. The nohup command ensures that the process survives a logout, and the 2>&1& redirects standard output and standard error to the noted console.out file:

    $ nohup ./startup.sh > logs/console.out 2>&1&
    [1] 2343
    

To stop the server running as a background process, use the shutdown.sh script:

$ ./shutdown.sh
./shutdown.sh
Stopping OpenIDM (2343)

Note

Although installations on OS X systems are not supported in production, you might want to run IDM on OS X in a demo or test environment. To run IDM in the background on an OS X system, take the following additional steps:

  • Remove the org.apache.felix.shell.tui-*.jar bundle from the openidm/bundle directory.

  • Disable ConsoleHandler logging, as described in Section 13.3, "Disabling Logs".

2.2. Specifying the Startup Configuration

By default, IDM starts with the configuration, script, and binary files in the openidm/conf, openidm/script, and openidm/bin directories. You can launch IDM with a different set of configuration, script, and binary files for test purposes, to manage different projects, or to run one of the included samples.

The startup.sh script enables you to specify the following elements of a running instance:

-p | --project-location {/path/to/project/directory}

The project location specifies the directory that contains the configuration and script files that IDM will use.

All configuration objects and any artifacts that are not in the bundled defaults (such as custom scripts) must be included in the project location. These objects include all files otherwise included in the openidm/conf and openidm/script directories.

For example, the following command starts the server with the configuration of the sync-with-csv sample (located in /path/to/openidm/samples/sync-with-csv):

$ ./startup.sh -p /path/to/openidm/samples/sync-with-csv

If you do not provide an absolute path, the project location path is relative to the system property, user.dir. IDM sets launcher.project.location to that relative directory path. Alternatively, if you start the server without the -p option, IDM sets launcher.project.location to /path/to/openidm.

Note

In this documentation, "your project" refers to the value of launcher.project.location.

-w |--working-location {/path/to/working/directory}

The working location specifies the directory to which IDM writes its database cache, audit logs, and Felix cache. The working location includes everything that is in the default db/, audit/, and felix-cache/ subdirectories.

The following command specifies that IDM writes its database cache and audit data to /Users/admin/openidm/storage:

$ ./startup.sh -w /Users/admin/openidm/storage

If you do not provide an absolute path, the path is relative to the system property, user.dir. If you do not specify a working location, IDM writes this data to the openidm/db, openidm/felix-cache and openidm/audit directories.

Note that this property does not affect the location of the IDM system logs. To change the location of these logs, edit the conf/logging.properties file.
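
For example, to write the system logs to a different directory, you could adjust the file handler pattern in conf/logging.properties along the following lines (the target path here is only an illustration, and the exact entries in your file may differ):

java.util.logging.FileHandler.pattern = /path/to/logs/openidm%u.log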

You can also change the location of the Felix cache, by editing the conf/config.properties file, or by starting the server with the -s option, described later in this section.

-c | --config {/path/to/config/file}

A customizable startup configuration file (named launcher.json) enables you to specify how the OSGi Framework is started.

Unless you are working with a highly customized deployment, you should not modify the default framework configuration. This option is therefore described in more detail in Chapter 28, "Advanced Configuration".

-P {property=value}

Any properties passed to the startup script with the -P option are used when the server loads the launcher.json startup configuration file.

Options specified here have the lowest order of precedence when the configuration is loaded. If the same property is defined in any other configuration source, the value specified here is ignored.

-s | --storage {/path/to/storage/directory}

Specifies the OSGi storage location of the cached configuration files.

You can use this option to redirect output if you are installing on a read-only filesystem volume. For more information, see Appendix A, "Installing on a Read-Only Volume" in the Installation Guide. This option is also useful when you are testing different configurations. Sometimes when you start the server with two different sample configurations, one after the other, the cached configurations are merged and cause problems. Specifying a storage location creates a separate felix-cache directory in that location, and the cached configuration files remain completely separate.
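
For example, the following command starts the server with a sample configuration while keeping its cached configuration in a separate storage location (the storage path here is arbitrary):

$ ./startup.sh -p /path/to/openidm/samples/sync-with-csv -s /path/to/storage/sync-with-csv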

By default, configuration properties are evaluated in the following order:

  1. Operating system environment variables

  2. system.properties, including system (-D) options passed through the OPENIDM_OPTS variable

  3. boot.properties

  4. config.properties

  5. launcher.json

If a property is defined in two configuration sources, the source that appears higher up in this list is used. For example, if a property is defined in a local OPENIDM_PORT_HTTP environment variable, that takes precedence over the openidm.port.http variable defined in the boot.properties file. For more information, see Section 2.2.1, "Property Substitution in the Startup Configuration".

2.2.1. Property Substitution in the Startup Configuration

You can set up property substitution in two ways:

  • You can use property substitution in any .json configuration file with the install, working and project locations described previously. You can substitute the following properties:

    install.location
    install.url
    working.location
    working.url
    project.location
    project.url

    Property substitution takes the following syntax:

    &{launcher.property}
  • You can also set up property substitution with environment variables. For example, the default boot.properties file contains the following property that sets the default HTTP port when you start IDM:

    openidm.port.http=8080

    If you run the following command in the bash shell, you would override this property when IDM starts:

    $ export OPENIDM_PORT_HTTP=8888

    After you run this export command and run the ./startup.sh script, you can access IDM at http://localhost:8888.

Note, however, that property substitution does not work for connector reference properties. For example, the following configuration would not be valid:

"connectorRef" : {
     "connectorName" : "&{connectorName}",
     "bundleName" : "org.forgerock.openicf.connectors.ldap-connector",
     "bundleVersion" : "&{LDAP.BundleVersion}"
     ...

The "connectorName" must be the precise string from the connector configuration. If you need to specify multiple connector version numbers, use a range of versions, for example:

"connectorRef" : {
     "connectorName" : "org.identityconnectors.ldap.LdapConnector",
     "bundleName" : "org.forgerock.openicf.connectors.ldap-connector",
     "bundleVersion" : "[1.4.0.0,2.0.0.0)",
     ...

2.3. Monitoring Server Health

Because IDM is highly modular and configurable, it is often difficult to assess whether a system has started up successfully, or whether the system is ready and stable after dynamic configuration changes have been made.

The health check service allows you to monitor the status of internal resources.

To monitor the status of external resources such as LDAP servers and external databases, use the commands described in Section 14.4, "Checking the Status of External Systems Over REST".

2.3.1. Basic Health Checks

The health check service reports on the state of the server and outputs this state to the OSGi console and to the log files. The server can be in one of the following states:

  • STARTING - the server is starting up

  • ACTIVE_READY - all of the specified requirements have been met to consider the server ready

  • ACTIVE_NOT_READY - one or more of the specified requirements have not been met and the server is not considered ready

  • STOPPING - the server is shutting down

You can verify the current server state with the following REST call:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/info/ping"
{
  "_id" : "",
  "state" : "ACTIVE_READY",
  "shortDesc" : "OpenIDM ready"
}

The information is provided by the following script: openidm/bin/defaults/script/info/ping.js.

2.3.2. Obtaining Session Information

You can get more information about the current IDM session, beyond basic health checks, with the following REST call:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
"http://localhost:8080/openidm/info/login" 
{
  "_id" : "",
  "class" : "org.forgerock.services.context.SecurityContext",
  "name" : "security",
  "authenticationId" : "openidm-admin",
  "authorization" : {
    "id" : "openidm-admin",
    "component" : "repo/internal/user",
    "roles" : [ "openidm-admin", "openidm-authorized" ],
    "ipAddress" : "127.0.0.1"
  },
  "parent" : {
    "class" : "org.forgerock.caf.authentication.framework.MessageContextImpl",
    "name" : "jaspi",
    "parent" : {
      "class" : "org.forgerock.services.context.TransactionIdContext",
      "id" : "2b4ab479-3918-4138-b018-1a8fa01bc67c-288",
      "name" : "transactionId",
      "transactionId" : {
        "value" : "2b4ab479-3918-4138-b018-1a8fa01bc67c-288",
        "subTransactionIdCounter" : 0
      },
      "parent" : {
        "class" : "org.forgerock.services.context.ClientContext",
        "name" : "client",
        "remoteUser" : null,
        "remoteAddress" : "127.0.0.1",
        "remoteHost" : "127.0.0.1",
        "remotePort" : 56534,
        "certificates" : "",
...

The information is provided by the following script: openidm/bin/defaults/script/info/login.js.

2.3.3. Monitoring Tuning and Health Parameters

You can extend monitoring beyond what you can check on the openidm/info/ping and openidm/info/login endpoints. Specifically, you can get more detailed information about the state of the following:

  • Operating system, on the openidm/health/os endpoint

  • Memory, on the openidm/health/memory endpoint

  • JDBC connection pooling, on the openidm/health/jdbc endpoint

  • Reconciliation, on the openidm/health/recon endpoint

For information on controlling access to these endpoints, see Section 19.3.2, "Understanding the Access Configuration Script (access.js)".

2.3.3.1. Operating System Health Check

With the following REST call, you can get basic information about the host operating system:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/os"
{
    "_id" : "",
    "_rev" : "",
    "availableProcessors" : 1,
    "systemLoadAverage" : 0.06,
    "operatingSystemArchitecture" : "amd64",
    "operatingSystemName" : "Linux",
    "operatingSystemVersion" : "2.6.32-504.30.3.el6.x86_64"
}

From the output, you can see that this particular system has one 64-bit CPU, a load average of 6 percent, and runs Linux with the indicated kernel version (operatingSystemVersion).

2.3.3.2. Memory Health Check

With the following REST call, you can get basic information about overall JVM memory use:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/memory"
{
    "_id" : "",
    "_rev" : "",
    "objectPendingFinalization" : 0,
    "heapMemoryUsage" : {
        "init" : 1073741824,
        "used" : 88538392,
        "committed" : 1037959168,
        "max" : 1037959168
    },
    "nonHeapMemoryUsage" : {
        "init" : 24313856,
        "used" : 69255024,
        "committed" : 69664768,
        "max" : 224395264
    }
}

The output includes information on JVM heap and non-heap memory, in bytes. Briefly:

  • JVM heap memory is used to store Java objects.

  • JVM non-heap memory is used by Java to store loaded classes and related metadata.

2.3.3.3. JDBC Health Check

Running a health check on the JDBC repository is supported only if you are using the BoneCP connection pool. This is not the default connection pool, so you must make the following changes to your configuration before running this command:

  • In your project's conf/datasource.jdbc-default.json file, change the connectionPool parameter as follows:

    "connectionPool" : {
        "type" : "bonecp"
    }
  • In your project's conf/boot/boot.properties file, enable the statistics MBean for the BoneCP connection pool:

    openidm.bonecp.statistics.enabled=true

For a BoneCP connection pool, the following REST call returns basic information about the status of the JDBC repository:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/jdbc"
{
  "_id": "",
  "_rev": "",
  "com.jolbox.bonecp:type=BoneCP-4ffa60bd-5dfc-400f-850e-439c7aa27094": {
    "connectionWaitTimeAvg": 0.012701142857142857,
    "statementExecuteTimeAvg": 0.8084880967741935,
    "statementPrepareTimeAvg": 1.6652538867562894,
    "totalLeasedConnections": 0,
    "totalFreeConnections": 7,
    "totalCreatedConnections": 7,
    "cacheHits": 0,
    "cacheMiss": 0,
    "statementsCached": 0,
    "statementsPrepared": 31,
    "connectionsRequested": 28,
    "cumulativeConnectionWaitTime": 0,
    "cumulativeStatementExecutionTime": 25,
    "cumulativeStatementPrepareTime": 18,
    "cacheHitRatio": 0,
    "statementsExecuted": 31
  }
}

These BoneCP metrics cover connection pool usage and statement timing, for example, the average connection wait time and the numbers of leased, free, and created connections.

2.3.3.4. Reconciliation Health Check

With the following REST call, you can get basic information about the system demands related to reconciliation:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/recon"
{
    "_id" : "",
    "_rev" : "",
    "activeThreads" : 1,
    "corePoolSize" : 10,
    "largestPoolSize" : 1,
    "maximumPoolSize" : 10,
    "currentPoolSize" : 1
}

From the output, you can review the number of active threads used by the reconciliation, as well as the available thread pool.

2.3.4. Customizing Health Check Scripts

You can extend or override the default health check information by creating your own script file and a corresponding configuration file. Health check configuration files must be named info-name.json and placed in your project's conf/ directory. The name generally refers to the purpose of the script. Custom health check script files can be located anywhere, although a best practice is to place them under a script/info directory in your project, for example, under openidm/samples/sync-with-ldap/script/info/.

The following sample script (named customping.js) extends the default ping service:

/*global healthinfo */

if (request.method !== "read") {
     throw "Unsupported operation on ping info service: " + request.method;
}
(function () {

    healthinfo.sampleprop="Example customization";
    return healthinfo;

}());

To use this script, you would create a custom configuration file with the following content:

{
    "infocontext" : "ping",
    "type" : "text/javascript",
    "file" : "script/info/customping.js"
}

A health check configuration file must include the following parameters:

infocontext

Specifies the relative name of the info endpoint under the info context. The information can be accessed over REST at this endpoint. For example, setting infocontext to mycontext/myendpoint makes the information accessible over REST at http://localhost:8080/openidm/info/mycontext/myendpoint (see the example configuration after this list).

type

Specifies the type of the information source. Can be either JavaScript ("type" : "text/javascript") or Groovy ("type" : "groovy").

file

Specifies the path to the JavaScript or Groovy file, if you do not provide a source parameter.

source

Specifies the actual JavaScript or Groovy script, if you have not provided a file parameter.
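
Putting these parameters together, a hypothetical configuration file named info-mycontext.json with the following content would expose a custom script at the openidm/info/mycontext/myendpoint endpoint:

{
    "infocontext" : "mycontext/myendpoint",
    "type" : "text/javascript",
    "file" : "script/info/myendpoint.js"
}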

Health check scripts have access to the following variables:

request

The request details, including the method called and any parameters passed.

healthinfo

The current health status of the system.

language

The user's preferred language, based on the Accept-Language header included in the request. If the request does not include an Accept-Language header, this variable contains the language set in conf/ui-configuration.json.

openidm

Access to the JSON resource API.
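
As a sketch of how these variables can be used together (the property name and query shown here are purely illustrative), a custom info script might add a count of managed users to the health information:

/*global healthinfo, openidm */

if (request.method !== "read") {
     throw "Unsupported operation on custom info service: " + request.method;
}
(function () {

    // Illustrative only: query the IDs of all managed users and report the count.
    var users = openidm.query("managed/user", { "_queryId" : "query-all-ids" });
    healthinfo.managedUserCount = users.result.length;
    return healthinfo;

}());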

2.3.5. Verifying the State of Health Check Service Modules

The configurable health check service verifies the status of the modules and services required for an operational system. During system startup, IDM checks that these modules and services are available and reports on any requirements that have not been met. If dynamic configuration changes are made, IDM rechecks that the required modules and services are functioning, to allow ongoing monitoring of system operation.

Example 2.1. Examples of Required Modules

IDM checks all required modules. Examples of those modules are shown here:

     "org.forgerock.openicf.framework.connector-framework"
     "org.forgerock.openicf.framework.connector-framework-internal"
     "org.forgerock.openicf.framework.connector-framework-osgi"
     "org.forgerock.openidm.audit"
     "org.forgerock.openidm.core"
     "org.forgerock.openidm.enhanced-config"
     "org.forgerock.openidm.external-email"
     ...
     "org.forgerock.openidm.system"
     "org.forgerock.openidm.ui"
     "org.forgerock.openidm.util"
     "org.forgerock.commons.org.forgerock.json.resource"
     "org.forgerock.commons.org.forgerock.util"
     "org.forgerock.openidm.security-jetty"
     "org.forgerock.openidm.jetty-fragment"
     "org.forgerock.openidm.quartz-fragment"
     "org.ops4j.pax.web.pax-web-extender-whiteboard"
     "org.forgerock.openidm.scheduler"
     "org.ops4j.pax.web.pax-web-jetty-bundle"
     "org.forgerock.openidm.repo-jdbc"
     "org.forgerock.openidm.repo-opendj"
     "org.forgerock.openidm.config"
     "org.forgerock.openidm.crypto"

Example 2.2. Examples of Required Services

IDM checks all required services. Examples of those services are shown here:

     "org.forgerock.openidm.config"
     "org.forgerock.openidm.provisioner"
     "org.forgerock.openidm.provisioner.openicf.connectorinfoprovider"
     "org.forgerock.openidm.external.rest"
     "org.forgerock.openidm.audit"
     "org.forgerock.openidm.policy"
     "org.forgerock.openidm.managed"
     "org.forgerock.openidm.script"
     "org.forgerock.openidm.crypto"
     "org.forgerock.openidm.recon"
     "org.forgerock.openidm.info"
     "org.forgerock.openidm.router"
     "org.forgerock.openidm.scheduler"
     "org.forgerock.openidm.scope"
     "org.forgerock.openidm.taskscanner"

You can replace the list of required modules and services, or add to it, by adding the following lines to your project's conf/boot/boot.properties file. Bundles and services are specified as a list of symbolic names, separated by commas:

  • openidm.healthservice.reqbundles - overrides the default required bundles.

  • openidm.healthservice.reqservices - overrides the default required services.

  • openidm.healthservice.additionalreqbundles - specifies required bundles (in addition to the default list).

  • openidm.healthservice.additionalreqservices - specifies required services (in addition to the default list).
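
For example (the bundle and service names here are hypothetical), you could add lines like the following to your project's conf/boot/boot.properties file:

openidm.healthservice.additionalreqbundles=com.example.custom-bundle
openidm.healthservice.additionalreqservices=com.example.custom.service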

By default, the server is given 15 seconds to start up all the required bundles and services before system readiness is assessed. Note that this is not the total start time, but the time required to complete the service startup after the framework has started. You can change this default by setting the value of the servicestartmax property (in milliseconds) in your project's conf/boot/boot.properties file. This example sets the startup time to five seconds:

openidm.healthservice.servicestartmax=5000

2.4. Displaying Information About Installed Modules

On a running instance, you can list the installed modules and their states by running the following command in the OSGi console (the output varies, depending on your configuration):

-> scr list 
  
 BundleId Component Name Default State
    Component Id State      PIDs (Factory PID)
 [   5]   org.forgerock.openidm.config.enhanced.starter  enabled
    [   1] [active      ] org.forgerock.openidm.config.enhanced.starter
 [   5]   org.forgerock.openidm.config.manage  enabled
    [   0] [active      ] org.forgerock.openidm.config.manage
 [  10]   org.forgerock.openidm.datasource.jdbc  enabled
 [  10]   org.forgerock.openidm.repo.jdbc  enabled
 [  11]   org.forgerock.openidm.repo.opendj  enabled
    [  35] [active      ] org.forgerock.openidm.repo.opendj
 [  16]   org.forgerock.openidm.cluster  enabled
    [  18] [active      ] org.forgerock.openidm.cluster
 [  17]   org.forgerock.openidm.http.context  enabled
    [   2] [active      ] org.forgerock.openidm.http.context
 [ 123]   org.forgerock.openidm.api-servlet  enabled
    [   5] [active      ] org.forgerock.openidm.api-servlet
 [ 123]   org.forgerock.openidm.error-servlet  enabled
    [   3] [active      ] org.forgerock.openidm.error-servlet
 [ 123]   org.forgerock.openidm.router.servlet  enabled
    [   4] [active      ] org.forgerock.openidm.router.servlet
 [ 124]   org.forgerock.openidm.audit  enabled
    [  24] [active      ] org.forgerock.openidm.audit
 [ 124]   org.forgerock.openidm.audit.filter  enabled
    [   6] [active      ] org.forgerock.openidm.audit.filter
->

To display additional information about a particular module or service, run the following command, substituting the Component Id from the preceding list:

-> scr info Id

The following example displays additional information about the router service:

-> scr info 4
*** Bundle: org.forgerock.openidm.api-servlet (123)
Component Description:
  Name: org.forgerock.openidm.router.servlet
  Implementation Class: org.forgerock.openidm.servlet.internal.ServletConnectionFactory
  Default State: enabled
  Activation: immediate
  Configuration Policy: ignore
  Activate Method: activate
  Deactivate Method: deactivate
  Modified Method: -
  Configuration Pid: [org.forgerock.openidm.router.servlet]
  Services:
    org.forgerock.json.resource.ConnectionFactory
    org.forgerock.openidm.router.RouterFilterRegistration
  Service Scope: singleton
  Reference: requestHandler
    Interface Name: org.forgerock.json.resource.RequestHandler
    Target Filter: (org.forgerock.openidm.router=*)
    Cardinality: 1..1
    Policy: static
    Policy option: reluctant
    Reference Scope: bundle
...
->

2.5. Starting in Debug Mode

To debug custom libraries, you can start the server with the option to use the Java Platform Debugger Architecture (JPDA):

  • Start IDM with the jpda option:

    $ cd /path/to/openidm
    $ ./startup.sh jpda
    Executing ./startup.sh...
    Using OPENIDM_HOME:   /path/to/openidm
    Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Djava.compiler=NONE -Xnoagent -Xdebug
     -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n
    Using LOGGING_CONFIG:
       -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
    Listening for transport dt_socket at address: 5005
    Using boot properties at /path/to/openidm/conf/boot/boot.properties
    -> OpenIDM version "5.5.0" (revision: xxxx)
    OpenIDM ready

    The relevant JPDA options are outlined in the startup script (startup.sh).

  • In your IDE, attach a Java debugger to the JVM via socket, on port 5005.

Caution

This interface is internal and subject to change. If you depend on this interface, contact ForgeRock support.

2.6. Running As a Service on Linux Systems

IDM provides a script that generates an initialization script to run as a service on Linux systems. You can start the script as the root user, or configure it to start during the boot process.

When IDM runs as a service, logs are written to the installation directory.

To run IDM as a service, take the following steps:

  1. If you have not yet installed IDM, follow the procedure described in Chapter 1, "Preparing to Install and Run Servers" in the Installation Guide.

  2. Run the RC script:

    $ cd /path/to/openidm/bin
    $ ./create-openidm-rc.sh
  3. As a user with administrative privileges, copy the openidm script to the /etc/init.d directory:

    $ sudo cp openidm /etc/init.d/
  4. If you run Linux with SELinux enabled, change the file context of the newly copied script with the following command:

    $ sudo restorecon /etc/init.d/openidm

    You can verify the change to SELinux contexts with the ls -Z /etc/init.d command. For consistency, change the user context to match other scripts in the same directory with the sudo chcon -u system_u /etc/init.d/openidm command.

  5. Run the appropriate commands to add IDM to the list of RC services:

    • On Red Hat-based systems, run the following commands:

      $ sudo chkconfig --add openidm
      $ sudo chkconfig openidm on
    • On Debian/Ubuntu systems, run the following command:

      $ sudo update-rc.d openidm defaults
      Adding system startup for /etc/init.d/openidm ...
      /etc/rc0.d/K20openidm -> ../init.d/openidm
      /etc/rc1.d/K20openidm -> ../init.d/openidm
      /etc/rc6.d/K20openidm -> ../init.d/openidm
      /etc/rc2.d/S20openidm -> ../init.d/openidm
      /etc/rc3.d/S20openidm -> ../init.d/openidm
      /etc/rc4.d/S20openidm -> ../init.d/openidm
      /etc/rc5.d/S20openidm -> ../init.d/openidm

      Note the output, as Debian/Ubuntu adds start and kill scripts to appropriate runlevels.

      When you run the command, you may get the following warning message: update-rc.d: warning: /etc/init.d/openidm missing LSB information. You can safely ignore that message.

  6. As an administrative user, start the IDM service:

    $ sudo /etc/init.d/openidm start

    Alternatively, reboot the system to start the service automatically.

  7. (Optional) The following commands stop and restart the service:

    $ sudo /etc/init.d/openidm stop
    $ sudo /etc/init.d/openidm restart

If you have set up a deployment in a custom directory, such as /path/to/openidm/production, you can modify the /etc/init.d/openidm script.

Open the openidm script in a text editor and navigate to the START_CMD line.

At the end of the command, you should see the following line:

org.forgerock.commons.launcher.Main -c bin/launcher.json > logs/server.out 2>&1 &"

Include the path to the production directory. In this case, you would add -p production as shown:

org.forgerock.commons.launcher.Main -c bin/launcher.json -p production > logs/server.out 2>&1 &"

Save the openidm script file in the /etc/init.d directory. The sudo /etc/init.d/openidm start command should now start the server with the files in your production subdirectory.

Chapter 3. Command-Line Interface

This chapter describes the basic command-line interface (CLI). The CLI includes a number of utilities for managing an IDM instance.

All of the utilities are subcommands of the cli.sh (UNIX) or cli.bat (Windows) scripts. To use the utilities, you can either run them as subcommands, or launch the cli script first, and then run the utility. For example, to run the encrypt utility on a UNIX system:

$ cd /path/to/openidm 
$ ./cli.sh 
Using boot properties at /path/to/openidm/conf/boot/boot.properties
openidm# encrypt ....

or

$ cd /path/to/openidm
$ ./cli.sh encrypt ... 

By default, the command-line utilities run with the properties defined in your project's conf/boot/boot.properties file.

If you run the cli.sh command by itself, it opens an IDM-specific shell prompt:

openidm#

The startup and shutdown scripts are not discussed in this chapter. For information about these scripts, see Chapter 2, "Starting and Stopping the Server".

The following sections describe the subcommands and their use. Examples assume that you are running the commands on a UNIX system. For Windows systems, use cli.bat instead of cli.sh.

For a list of subcommands available from the openidm# prompt, run the cli.sh help command. The help and exit options shown below are self-explanatory. The other subcommands are explained in the subsections that follow:

local:keytool               Export or import a SecretKeyEntry.
                            The Java Keytool does not allow for exporting or
                            importing SecretKeyEntries.
local:encrypt               Encrypt the input string.
local:secureHash            Hash the input string.
local:validate              Validates all json configuration files in the
                            configuration (default: /conf) folder.
basic:help                  Displays available commands.
basic:exit                  Exit from the console.
remote:update               Update the system with the provided update file.
remote:configureconnector   Generate connector configuration.
remote:configexport         Exports all configurations.
remote:configimport         Imports the configuration set from local file/directory.

The following options are common to the configexport, configimport, and configureconnector subcommands:

-u or --user USER[:PASSWORD]

Allows you to specify the server user and password. Specifying a username is mandatory. If you do not specify a username, the following error is output to the OSGi console: Remote operation failed: Unauthorized. If you do not specify a password, you are prompted for one. This option is used by all three subcommands.

--url URL

The URL of the REST service. The default URL is http://localhost:8080/openidm/. This can be used to import configuration files from a remote running IDM instance. This option is used by all three subcommands.

-P or --port PORT

The port number associated with the REST service. If specified, this option overrides any port number specified with the --url option. The default port is 8080. This option is used by all three subcommands.
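
For example, the following command (with a hypothetical remote host name) exports the configuration of a remote IDM instance rather than the local one:

$ ./cli.sh configexport --user openidm-admin:openidm-admin --url http://remote-host:8080/openidm/ /tmp/remote-conf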

3.1. Using the configexport Subcommand

The configexport subcommand exports all configuration objects to a specified location, enabling you to reuse a system configuration in another environment. For example, you can test a configuration in a development environment, then export it and import it into a production environment. This subcommand also enables you to inspect the active configuration of an IDM instance.

IDM must be running when you execute this command.

Usage is as follows:

$ ./cli.sh configexport --user username:password export-location

For example:

$ ./cli.sh configexport --user openidm-admin:openidm-admin /tmp/conf

On Windows systems, the export-location must be provided in quotation marks, for example:

C:\openidm\cli.bat configexport --user openidm-admin:openidm-admin "C:\temp\openidm"

Configuration objects are exported as .json files to the specified directory. The command creates the directory if needed. Configuration files that are present in this directory are renamed as backup files, with a timestamp, for example, audit.json.2014-02-19T12-00-28.bkp, and are not overwritten. The following configuration objects are exported:

  • The internal repository table configuration (repo.opendj.json or repo.jdbc.json) and the datasource connection configuration, for JDBC repositories (datasource.jdbc-default.json)

  • The script configuration (script.json)

  • The log configuration (audit.json)

  • The authentication configuration (authentication.json)

  • The cluster configuration (cluster.json)

  • The configuration of a connected SMTP email server (external.email.json)

  • Custom configuration information (info-name.json)

  • The managed object configuration (managed.json)

  • The connector configuration (provisioner.openicf-*.json)

  • The router service configuration (router.json)

  • The scheduler service configuration (scheduler.json)

  • Any configured schedules (schedule-*.json)

  • Standard knowledge-based authentication questions (selfservice.kba.json)

  • The synchronization mapping configuration (sync.json)

  • If workflows are defined, the configuration of the workflow engine (workflow.json) and the workflow access configuration (process-access.json)

  • Any configuration files related to the user interface (ui-*.json)

  • The configuration of any custom endpoints (endpoint-*.json)

  • The configuration of servlet filters (servletfilter-*.json)

  • The policy configuration (policy.json)

3.2. Using the configimport Subcommand

The configimport subcommand imports configuration objects from the specified directory, enabling you to reuse a system configuration from another environment. For example, you can test a configuration in a development environment, then export it and import it into a production environment.

The command updates the existing configuration from the import-location over the REST interface. By default, if configuration objects are present in the import-location and not in the existing configuration, these objects are added. If configuration objects are present in the existing location but not in the import-location, these objects are left untouched in the existing configuration.

The subcommand takes the following options:

-r, --replaceall, --replaceAll

Replaces the entire list of configuration files with the files in the specified import location.

Note that this option wipes out the existing configuration and replaces it with the configuration in the import-location. Objects in the existing configuration that are not present in the import-location are deleted.

--retries (integer)

This option specifies the number of times the command should attempt to update the configuration if the server is not ready.

Default value : 10

--retryDelay (integer)

This option specifies the delay (in milliseconds) between configuration update retries if the server is not ready.

Default value : 500

Usage is as follows:

$ ./cli.sh configimport --user username:password [--replaceAll] [--retries integer] [--retryDelay integer] import-location

For example:

$ ./cli.sh configimport --user openidm-admin:openidm-admin --retries 5 --retryDelay 250 --replaceAll /tmp/conf

On Windows systems, the import-location must be provided in quotation marks, for example:

C:\openidm\cli.bat configimport --user openidm-admin:openidm-admin --replaceAll "C:\temp\openidm"

Configuration objects are imported as .json files from the specified directory to the conf directory. The configuration objects that are imported are the same as those for the export command, described in the previous section.

3.3. Using the configureconnector Subcommand

The configureconnector subcommand generates a configuration for an OpenICF connector.

Usage is as follows:

$ ./cli.sh configureconnector --user username:password --name connector-name

Select the type of connector that you want to configure. The following example configures a new CSV connector:

$ ./cli.sh configureconnector --user openidm-admin:openidm-admin --name myCsvConnector
 Executing ./cli.sh...
Starting shell in /path/to/openidm
Oct 03, 2017 1:40:39 PM org.forgerock.openidm.core.FilePropertyAccessor loadProps
INFO: Using properties at /root/openidm/conf/boot/boot.properties
0. Salesforce Connector version 5.5.0
1. SSH Connector version 1.4.2.0
2. Scim Connector version 1.4.0.0
3. Marketo Connector version 1.4.3.0
4. LDAP Connector version 1.4.6.0
5. Kerberos Connector version 1.4.3.0
6. Scripted SQL Connector version 1.4.4.0
7. Scripted REST Connector version 1.4.4.0
8. Scripted CREST Connector version 1.4.4.0
9. Scripted Poolable Groovy Connector version 1.4.4.0
10. Scripted Groovy Connector version 1.4.4.0
11. GoogleApps Connector version 1.4.2.0
12. Database Table Connector version 1.1.1.0
13. CSV File Connector version 1.5.2.0
14. Adobe Marketing Cloud Connector version 1.5.0.0
15. Exit
Select [0..15]: 13
Edit the configuration file and run the command again. The configuration was saved to
/path/to/openidm/temp/provisioner.openicf-myCsvConnector.json

The basic configuration is saved in a file named /openidm/temp/provisioner.openicf-connector-name.json. Edit at least the configurationProperties parameter in this file to complete the connector configuration. For example, for a CSV connector:

"configurationProperties" : {
    "headerPassword" : "password",
    "csvFile" : "&{launcher.project.location}/data/csvConnectorData.csv",
    "newlineString" : "\n",
    "headerUid" : "uid",
    "quoteCharacter" : "\"",
    "fieldDelimiter" : ",",
    "syncFileRetentionCount" : 3
},

For more information about the connector configuration properties, see Section 14.2, "Configuring Connectors".

When you have modified the file, run the configureconnector command again so that IDM can pick up the new connector configuration:

$ ./cli.sh configureconnector --user openidm-admin:openidm-admin --name myCsvConnector
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Configuration was found and read from: /path/to/openidm/temp/provisioner.openicf-myCsvConnector.json

You can now copy the new provisioner.openicf-myCsvConnector.json file to your project's conf/ subdirectory.
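
For example, if your project is the sync-with-csv sample, the copy command might look like the following:

$ cp /path/to/openidm/temp/provisioner.openicf-myCsvConnector.json /path/to/openidm/samples/sync-with-csv/conf/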

You can also configure connectors over the REST interface, or through the Admin UI. For more information, see Section 14.2, "Configuring Connectors".

3.4. Using the encrypt Subcommand

The encrypt subcommand encrypts an input string, or JSON object, provided at the command line. This subcommand can be used to encrypt passwords, or other sensitive data, to be stored in the repository. The encrypted value is output to standard output and provides details of the cryptography key that is used to encrypt the data.

Usage is as follows:

$ ./cli.sh encrypt [-j] string

If you do not enter the string as part of the command, the command prompts for the string to be encrypted. If you enter the string as part of the command, any special characters, for example quotation marks, must be escaped.

The -j option indicates that the string to be encrypted is a JSON object, and validates the object. If the object is malformed JSON and you use the -j option, the command throws an error. It is easier to input JSON objects in interactive mode. If you input the JSON object on the command-line, the object must be surrounded by quotes and any special characters, including curly braces, must be escaped. The rules for escaping these characters are fairly complex. For more information, see section 4.8.2 of the OSGi draft specification. For example:

$ ./cli.sh encrypt -j '\{\"password\":\"myPassw0rd\"\}'

The following example encrypts a normal string value:

$ ./cli.sh encrypt mypassword
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "type" : "x-simple-encryption",
    "value" : {
      "cipher" : "AES/CBC/PKCS5Padding",
      "salt" : "0pRncNLTJ6ZySHfV4DEtgA==",
      "data" : "pIrCCkLPhBt0rbGXiZBHkw==",
      "iv" : "l1Hau6nf2zizQSib8kkW0g==",
      "key" : "openidm-sym-default",
      "mac" : "SoqfhpvhBVuIkux8mztpeQ=="
    }
  }
}
------END ENCRYPTED VALUE------

The following example prompts for a JSON object to be encrypted:

$ ./cli.sh encrypt -j
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Enter the Json value

> Press ctrl-D to finish input
Start data input:
{"password":"myPassw0rd"}
^D        
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "type" : "x-simple-encryption",
    "value" : {
      "cipher" : "AES/CBC/PKCS5Padding",
      "salt" : "vdz6bUztiT6QsExNrZQAEA==",
      "data" : "RgMLRbX0guxF80nwrtaZkkoFFGqSQdNWF7Ve0zS+N1I=",
      "iv" : "R9w1TcWfbd9FPmOjfvMhZQ==",
      "key" : "openidm-sym-default",
      "mac" : "9pXtSKAt9+dO3Mu0NlrJsQ=="
    }
  }
}
------END ENCRYPTED VALUE------

3.5. Using the secureHash Subcommand

The secureHash subcommand hashes an input string, or JSON object, using the specified hash algorithm. This subcommand can be used to hash password values, or other sensitive data, to be stored in the repository. The hashed value is output to standard output and provides details of the algorithm that was used to hash the data.

Usage is as follows:

$ ./cli.sh secureHash --algorithm algorithm [-j] string

The -a or --algorithm option specifies the hash algorithm to use. The following algorithms are supported: MD5, SHA-1, SHA-256, SHA-384, and SHA-512. If you do not specify a hash algorithm, SHA-256 is used.

If you do not enter the string as part of the command, the command prompts for the string to be hashed. If you enter the string as part of the command, any special characters, for example quotation marks, must be escaped.

The -j option indicates that the string to be hashed is a JSON object, and validates the object. If the object is malformed JSON and you use the -j option, the command throws an error. It is easier to input JSON objects in interactive mode. If you input the JSON object on the command-line, the object must be surrounded by quotes and any special characters, including curly braces, must be escaped. The rules for escaping these characters are fairly complex. For more information, see section 4.8.2 of the OSGi draft specification. For example:

$ ./cli.sh secureHash --algorithm SHA-1 '\{\"password\":\"myPassw0rd\"\}'

The following example hashes a password value (mypassword) using the SHA-1 algorithm:

$ ./cli.sh secureHash --algorithm SHA-1 mypassword
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "T9yf3dL7oepWvUPbC8kb4hEmKJ7g5Zd43ndORYQox3GiWAGU"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------

The following example prompts for a JSON object to be hashed:

$ ./cli.sh secureHash --algorithm SHA-1 -j
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Enter the Json value

> Press ctrl-D to finish input
Start data input:
{"password":"myPassw0rd"}
^D
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "PBsmFJZEVNHuYPZJwaF5oX0LtamUA2tikFCiQEfgIsqa/VHK"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------

3.6. Using the keytool Subcommand

The keytool subcommand exports or imports secret key values.

The Java keytool command enables you to export and import public keys and certificates, but not secret or symmetric keys. The IDM keytool subcommand provides this functionality.

Usage is as follows:

$ ./cli.sh keytool [--export | --import] alias

For example, to export the default IDM symmetric key, run the following command:

$ ./cli.sh keytool --export openidm-sym-default
   Using boot properties at /openidm/conf/boot/boot.properties
Use KeyStore from: /openidm/security/keystore.jceks
Please enter the password: 
[OK] Secret key entry with algorithm AES
AES:606d80ae316be58e94439f91ad8ce1c0  

The default keystore password is changeit. For security reasons, you must change this password in a production environment. For information about changing the keystore password, see Procedure 20.2, "Change the Default Keystore Password".

To import a new secret key named my-new-key, run the following command:

$ ./cli.sh keytool --import my-new-key   
Using boot properties at /openidm/conf/boot/boot.properties
Use KeyStore from: /openidm/security/keystore.jceks
Please enter the password: 
Enter the key: 
AES:606d80ae316be58e94439f91ad8ce1c0 

If a secret key with that name already exists, IDM returns the following error:

"KeyStore contains a key with this alias"

3.7. Using the validate Subcommand

The validate subcommand validates all .json configuration files in your project's conf/ directory.

Usage is as follows:

$ ./cli.sh validate
Executing ./cli.sh
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
...................................................................
[Validating] Load JSON configuration files from:
[Validating] 	/path/to/openidm/conf
[Validating] audit.json .................................. SUCCESS
[Validating] authentication.json ......................... SUCCESS
    ...
[Validating] sync.json ................................... SUCCESS
[Validating] ui-configuration.json ....................... SUCCESS
[Validating] ui-countries.json ........................... SUCCESS
[Validating] workflow.json ............................... SUCCESS
  

3.8. Using the update Subcommand

The update subcommand supports updates for patches and migrations. For an example of this process, see Chapter 4, "Updating Servers" in the Installation Guide.

Chapter 4. Using the Browser-Based UI

IDM provides a customizable, browser-based user interface. The functionality is subdivided into Administrative and Self-Service User Interfaces.

If you are configuring or administering IDM, navigate to the Administrative User Interface (Admin UI). If IDM is installed on the local system, you can get to the Admin UI at the following URL: https://localhost:8443/admin. In the Admin UI, you can configure connectors, customize managed objects, set up attribute mappings, manage accounts, and more.

The Self-Service User Interface (Self-Service UI) provides role-based access to tasks based on BPMN2 workflows, and allows users to manage certain aspects of their own accounts, including configurable self-service registration. When IDM starts, you can access the Self-Service UI at https://localhost:8443/.

Warning

The default password for the administrative user, openidm-admin, is openidm-admin. To protect your deployment in production, change this password.

All users, including openidm-admin, can change their password through the Self-Service UI. After you have logged in, click Change Password.

4.1. Configuring the Server from the Admin UI

The Admin UI provides a graphical interface for most aspects of the IDM configuration.

Use the Quick Start cards and the Configure and Manage drop-down menus to configure the server.

In the following sections, you will examine the default Admin UI dashboard, and learn how to set up custom Admin UI dashboards.

Caution

If your browser uses an AdBlock extension, it might inadvertently block some UI functionality, particularly if your configuration includes strings such as ad. For example, a connection to an Active Directory server might be configured at the endpoint system/ad. To avoid problems related to blocked UI functionality, either remove the AdBlock extension, or set up a suitable white list to ensure that none of the targeted endpoints are blocked.

4.1.1. Default Admin UI Dashboard

When you log into the Admin UI, the first screen you should see is the "Administration" Dashboard.

Figure 4.1. Administration Dashboard
Administrative User Interface: Main Screen

The Admin UI includes a fixed top menu bar. As you navigate around the Admin UI, you should see the same menu bar throughout. Click Dashboards > Administration to return to that screen.

You can also access a System Monitoring dashboard from the same menu.

The widgets on these two dashboards cover the functionality described in the following sections.

Under the Administration dashboard, you'll see the following items:

  • Quick Start cards support one-click access to common administrative tasks, and are described in detail in the following section.

  • Resources include an abbreviated list of configured connectors, mappings, and managed objects.

Under the System Monitoring dashboard, you'll see the following items:

  • Audit Events include information on audit data, segregated by date. For more information on these events, see Chapter 22, "Setting Up Audit Logging".

  • System Health includes data on current CPU and memory usage.

  • Last Reconciliation includes data from the most recent reconciliation between data stores. After you run a reconciliation, you should see data similar to:

    Administrative User Interface: Reconciliation Statistics

The Quick Start cards allow quick access to the labeled configuration options.

4.1.2. Creating and Modifying Dashboards

To create a new dashboard, click Dashboards > New Dashboard. You're prompted for a dashboard name, and whether to set it as the default. You can then add widgets.

Alternatively, you can start with an existing dashboard. In the upper-right corner of the UI, next to the Add Widgets button, click the vertical ellipsis. In the menu that appears, you can take the following actions on the current dashboard:

  • Rename

  • Duplicate

  • Set as Default

  • Delete

To add a widget to a dashboard, click Add Widget and select the widget type.

To modify the position of a widget in a dashboard, click and drag the move icon for that widget.

If you add a new Quick Start widget, click the vertical ellipsis in the upper-right corner of the widget, and select Settings. In the pop-up menu that appears, you can configure an Admin UI sub-widget to embed in the Quick Start widget.

Click Add a Link. You can then enter a name, a destination URL, and an icon for the widget.

If you are linking to a specific page in the Admin UI, the destination URL can be just the part of the address that follows the main Admin UI URL, https://localhost:8443/admin.

For example, if you want to create a quick start link to the Audit configuration tab, at https://localhost:8443/admin/#settings/audit/, you could enter #settings/audit in the destination URL text box.

IDM writes the changes you make to the ui-dashboard.json file for your project.

For example, if you add a Last Reconciliation and Embed Web Page widget to a new dashboard named Test, you'll see the following excerpt in your ui-dashboard.json file:

{
    "name" : "Test",
    "isDefault" : false,
    "widgets" : [
        {
            "type" : "frame",
            "size" : "large",
            "frameUrl" : "http://example.com",
            "height" : "100px",
            "title" : "Example.com"
        },
        {
            "type" : "lastRecon",
            "size" : "large",
            "barchart" : "true"
        },
        {
            "type" : "quickStart",
            "size" : "large",
            "cards" : [
                {
                    "name" : "Audit",
                    "icon" : "fa-align-justify",
                    "href" : "#settings/audit"
                }
            ]
        }
    ]
}

For more information on each property, see the following table:

Table 4.1. Admin UI Widget Properties in ui-dashboard.json
Property | Options | Description
name | User entry | Dashboard name
isDefault | true or false | Default dashboard; can set one default
widgets | Different options for type | Code blocks that define a widget
type | lifeCycleMemoryHeap, lifeCycleMemoryNonHeap, systemHealthFull, cpuUsage, lastRecon, resourceList, quickStart, frame, userRelationship | Widget name
size | x-small, small, medium, or large | Width of the widget, based on a 12-column grid system, where x-small=4, small=6, medium=8, and large=12; for more information, see Bootstrap CSS
height | Height, in units such as cm, mm, px, and in | Height; applies only to the Embed Web Page widget
frameUrl | URL | Web page to embed; applies only to the Embed Web Page widget
title | User entry | Label shown in the UI; applies only to the Embed Web Page widget
barchart | true or false | Reconciliation bar chart; applies only to the Last Reconciliation widget

When complete, you can select the name of the new dashboard under the Dashboards menu.

You can modify the options for each dashboard and widget. Select the vertical ellipsis in the upper right corner of the object, and make desired choices from the pop-up menu.

The following table includes a list of available widgets.

Table 4.2. Available Admin UI Widgets
Name | Description
Audit Events | Graphical display of audit events; also see Section 22.11, "Viewing Audit Events in the Admin UI"
Cluster Node Status | Lists connected instances of IDM; also see Section 23.4, "Managing Nodes Through the Admin UI"
CPU Usage | Also part of the System Health widget
Daily Social Logins | Graphical display of logins via social identity providers; for related information, see Chapter 11, "Configuring Social Identity Providers"
Daily Social Registration | Graphical display of registrations via social identity providers; for related information, see Chapter 11, "Configuring Social Identity Providers"
Embed Web Page | Supports embedding of external content
Identity Relationships | Graphical display of relationships between identities; also see Section 10.6, "Viewing Relationships in Graph Form"
Last Reconciliation | Shows statistics from the most recent reconciliation; shown in the System Monitoring dashboard; also see Section 15.7.4, "Obtaining the Details of a Reconciliation"
Managed Objects Relationship Diagram | Graphical diagram with connections between managed object properties; also see Section 10.8, "Viewing the Relationship Configuration in the UI"
Memory Usage (JVM Heap) | Graphs available JVM heap memory; also see Section 2.3.3.2, "Memory Health Check"
Memory Usage (JVM NonHeap) | Graphs available JVM non-heap memory; also see Section 2.3.3.2, "Memory Health Check"
Quick Start | Links to common tasks; shown in the Administration dashboard
Resources | Connectors, mappings, managed objects; shown in the Administration dashboard
Social Registration (year) | Graphical display of registrations over the past year; for related information, see Chapter 11, "Configuring Social Identity Providers"
System Health | Shown in the System Monitoring dashboard; includes CPU Usage, Memory Usage (JVM Heap), and Memory Usage (JVM NonHeap)

4.2. Managing Accounts

Only administrative users (with the role openidm-admin) can add, modify, and delete accounts from the Admin UI. Regular users can modify certain aspects of their own accounts from the Self-Service UI.

4.2.1. Account Configuration

In the Admin UI, you can manage most details associated with an account, as shown in the following screenshot.

Figure 4.2. Account, UI Configuration
Account Configuration Options in the UI

You can configure the following elements of a user account:

Details

The Details tab includes basic identifying data for each user, with two special entries:

Status

By default, accounts are shown as active. To suspend an account, such as for a user who has taken a leave of absence, set that user's status to inactive.

Manager

You can assign a manager from the existing list of managed users.

Password

As an administrator, you can create new passwords for users in the managed user repository.

Provisioning Roles

Used to specify how objects are provisioned to an external system. For more information, see Section 9.4, "Working With Managed Roles".

Authorization Roles

Used to specify the authorization rights of a managed user within IDM. For more information, see Section 9.4, "Working With Managed Roles".

Direct Reports

Users who are listed as managers of others have entries under the Direct Reports tab, as shown in the following illustration:

You can display direct reports in a chart

Linked Systems

Used to display account information reconciled from external systems.

4.2.2. Procedures for Managing Accounts

With the following procedures, you can add, update, and deactivate accounts for managed objects such as users.

The managed object does not have to be a user. It can be a role, a group, or even a physical item such as an IoT device. The basic process for adding, modifying, deactivating, and deleting other objects is the same as it is with accounts. However, the details may vary; for example, many IoT devices do not have telephone numbers.

Procedure 4.1. To Add a User Account
  1. Log in to the Admin UI at https://localhost:8443/admin.

  2. Click Manage > User.

  3. Click New User.

  4. Complete the fields on the New User page.

    Most of these fields are self-explanatory. Be aware that the user interface is subject to policy validation, as described in Chapter 12, "Using Policies to Validate Data". So, for example, the email address must be a valid email address, and the password must comply with the password validation settings that appear if you enter an invalid password.

In a similar way, you can create accounts for other managed objects.

You can review new managed object settings in the managed.json file of your project-dir/conf directory.
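
If you prefer to script account creation, you can create the same managed user over REST. The following command is a minimal sketch, assuming the default openidm-admin credentials, a server listening on port 8080, and illustrative attribute values; the request is subject to the same policy validation as the UI form:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "userName": "bjensen",
   "givenName": "Barbara",
   "sn": "Jensen",
   "mail": "bjensen@example.com",
   "password": "Passw0rd"
 }' \
 "http://localhost:8080/openidm/managed/user?_action=create"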

The following procedures show how to update a user account, delete a user account, and view an account in external resources:

Procedure 4.2. To Update a User Account
  1. Log in to the Admin UI at https://localhost:8443/admin as an administrative user.

  2. Click Manage > User.

  3. Click the Username of the user that you want to update.

  4. On the profile page for the user, modify the fields you want to change and click Update.

    The user account is updated in the repository.

Procedure 4.3. To Delete a User Account
  1. Log in to the Admin UI at https://localhost:8443/admin as an administrative user.

  2. Click Manage > User.

  3. Select the checkbox next to the desired Username.

  4. Click the Delete Selected button.

  5. Click OK to confirm the deletion.

    The user is deleted from the internal repository.
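
You can also delete a managed user over REST. The following command is a sketch only: <user-id> is a hypothetical placeholder for the user's _id, and the If-Match header tells IDM to delete the object regardless of its revision:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "If-Match: *" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/user/<user-id>"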

Procedure 4.4. To View an Account in External Resources

The Admin UI displays the details of the account in the repository (managed/user). When a mapping has been configured between the repository and one or more external resources, you can view details of that account in any external system to which it is linked. Because this view is read-only, you cannot update a user record in a linked system from within the Admin UI.

By default, implicit synchronization is enabled for mappings from the managed/user repository to any external resource. This means that when you update a managed object, any mappings defined in the sync.json file that have the managed object as the source are automatically executed to update the target system. You can see these changes in the Linked Systems section of a user's profile.
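
For example, implicit synchronization applies to any mapping in sync.json whose source is managed/user. The following excerpt is a sketch only; the mapping name and target shown here come from the LDAP samples and may differ in your deployment:

{
    "mappings" : [
        {
            "name" : "managedUser_systemLdapAccounts",
            "source" : "managed/user",
            "target" : "system/ldap/account"
        }
    ]
}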

To view a user's linked accounts:

  1. Log in to the Admin UI at https://localhost:8443/admin.

  2. Click Manage > User, select a Username, and then select the Linked Systems tab.

  3. The Linked Systems panel indicates the external mapped resource or resources.

  4. Select the resource in which you want to view the account, from the Linked Resource list.

    The user record in the linked resource is displayed.

4.3. Customizing the Admin UI

You can customize the Admin UI for your specific deployment. When you install IDM, you will find the default Admin UI configuration files in the following directory: openidm/ui/admin/default.

In most cases, we recommend that you copy this directory to openidm/ui/admin/extension with commands such as:

$ cd /path/to/openidm/ui/admin
$ cp -r default/. extension

You can then set up custom files in the extension/ subdirectory.

The Admin UI templates in the openidm/ui/admin/default/templates directory might help you get started.

If you want to customize workflows in the UI, see Section 19.3.5, "Managing User Access to Workflows".

4.3.1. Customizing the Admin UI, by Functionality

You may want to customize parts of the Admin UI. Assuming you have set up an openidm/ui/admin/extension directory, as described in Section 4.3, "Customizing the Admin UI", you will find a series of subdirectories there. The following table is intended to help you find the right file(s) to customize:

Table 4.3. File Functionality by Admin UI Directory
Subdirectory | Description
config/ | Top-level configuration directory of JavaScript files. Customizable subdirectories include errorhandlers/ with HTTP error messages and messages/ with info and error messages. For the actual messages, see the translation.json file in the locales/en/ subdirectory.
css/ and libs/ | If you use a different Bootstrap theme, you can replace the files in these and related subdirectories. For more information, see Section 4.4.1, "UI Themes and Bootstrap".
fonts/ | The font files in this directory are based on the Font Awesome CSS toolkit described in Section 4.4, "Changing the UI Theme".
images/ and img/ | IDM uses the image files in these directories; you can replace them with your own.
locales/ | Includes the associated translation.json file, by default in the en/ subdirectory.
org/ | Source files for the Admin UI
partials/ | Includes partial components of HTML pages in the Admin UI, for assignments, authentication, connectors, dashboards, email, basic forms, login buttons, and so on.
templates/ | The files in the templates/ subdirectory are in actual use. For an example of how you can customize such files in the Admin UI, see Section 5.7, "Customizing the Self-Service UI".

To see an example of how this works, review Section 5.7, "Customizing the Self-Service UI". It includes examples of how you can customize parts of the Self-Service UI. You can use the same technique to customize parts of the Admin UI.

Tip

The above table is not a complete list. To see a visual representation of customizable Admin UI files, from the Linux command line, run the following commands:

$ cd /path/to/openidm/ui/admin/extension
$ tree

4.4. Changing the UI Theme

You can customize the theme of the user interface. The default UI uses the Bootstrap framework and the Font Awesome CSS toolkit. You can download and customize the UI with the Bootstrap themes of your choice.

Note

If you use Brand Icons from the Font Awesome CSS Toolkit, be aware of the following statement:

All brand icons are trademarks of their respective owners. The use of these trademarks does not indicate endorsement of the trademark holder by ForgeRock, nor vice versa.

4.4.1. UI Themes and Bootstrap

You can configure a few features of the UI in the ui-themeconfig.json file in your project's conf/ subdirectory. However, to change most theme-related features of the UI, you must copy target files to the appropriate extension subdirectory, and then modify them as discussed in Section 4.3, "Customizing the Admin UI".

The default configuration files for the Admin and Self-Service UIs are identical for theme configuration.

By default, the UI reads the stylesheets and images from the respective openidm/ui/function/default directories, where function is admin or selfservice. Do not modify the files in these directories; your changes may be overwritten the next time you update or even patch your system.

To customize your UI, first set up matching subdirectories for your system (openidm/ui/admin/extension and openidm/ui/selfservice/extension). For example, assume you want to customize colors, logos, and so on.

You can set up a new theme, primarily through custom Bootstrap CSS files, in appropriate extension/ subdirectories, such as openidm/ui/selfservice/extension/libs and openidm/ui/selfservice/extension/css.

You may also need to update the "stylesheets" listing in the ui-themeconfig.json file for your project, in the project-dir/conf directory.

...
"stylesheets" : ["css/bootstrap-3.3.5-custom.css", "css/structure.css", "css/theme.css"],
...

You can find these stylesheets in the /css subdirectory.

  • bootstrap-3.3.5-custom.css: Includes custom settings that you can get from various Bootstrap configuration sites, such as the Bootstrap Customize and Download website.

    You may find the ForgeRock version of this in the config.json file in the ui/selfservice/default/css/common/structure/ directory.

  • structure.css: Supports configuration of structural elements of the UI.

  • theme.css: Includes customizable options for UI themes such as colors, buttons, and navigation bars.

If you want to set up custom versions of these files, copy them to the extension/css subdirectories.

4.4.3. Changing the Language of the UI

Currently, the UI is provided only in US English. You can translate the UI and specify that your own locale is used. The following example shows how to translate the UI into French:

  1. Assuming you set up custom extension subdirectories, as described in Section 4.3, "Customizing the Admin UI", you can copy the default (en) locale to a new (fr) subdirectory as follows:

    $ cd /path/to/openidm/ui/selfservice/extension/locales
    $ cp -R en fr

    The new locale (fr) now contains the default translation.json file:

    $ ls fr/
    translation.json
  2. Translate the values of the properties in the fr/translation.json file. Do not translate the property names. For example:

    ...
    "UserMessages" : {
       "changedPassword" : "Mot de passe a été modifié",
       "profileUpdateFailed" : "Problème lors de la mise à jour du profil",
       "profileUpdateSuccessful" : "Profil a été mis à jour",
       "userNameUpdated" : "Nom d'utilisateur a été modifié",
    .... 
  3. Change the UI configuration to use the new locale by setting the value of the lang property in the project-dir/conf/ui-configuration.json file, as follows:

    "lang" : "fr",
  4. Refresh your browser window to apply the change.

You can also change the labels for accounts in the UI. To do so, navigate to the Schema Properties for the managed object to be changed.

To change the labels for user accounts, navigate to the Admin UI. Click Configure > Managed Objects > User, and scroll down to Schema.

Under Schema Properties, select a property and modify the Readable Title. For example, you can modify the Readable Title for userName to a label in another language, such as Nom d'utilisateur.

4.4.4. Creating a Project-Specific UI Theme

You can create specific UI themes for different projects and then point a particular UI instance to use a defined theme on startup. To create a complete custom theme, follow these steps:

  1. Shut down the IDM instance, if it is running. In the OSGi console, type:

    shutdown
    ->
  2. Copy the entire default Self-Service UI theme to an accessible location. For example:

    $ cd /path/to/openidm/ui/selfservice
    $ cp -r default /path/to/openidm/new-project-theme
  3. If desired, repeat the process with the Admin UI; just remember to copy files to a different directory:

    $ cd /path/to/openidm/ui/admin
    $ cp -r default /path/to/openidm/admin-project-theme
  4. In the copied theme, modify the required elements, as described in the previous sections. Note that nothing is copied to the extension folder in this case - changes are made in the copied theme.

  5. In the conf/ui.context-selfservice.json file, modify the values of defaultDir and extensionDir so that they point to the directory containing your new-project-theme (see the example after this procedure). The default file is as follows:

    {
        "enabled" : true,
        "urlContextRoot" : "/",
        "defaultDir" : "&{launcher.install.location}/ui/selfservice/default",
        "extensionDir" : "&{launcher.install.location}/ui/selfservice/extension",
        "responseHeaders" : {
            "X-Frame-Options" : "DENY"
        }
    }
  6. If you want to repeat the process for the Admin UI, make parallel changes to the project-dir/conf/ui.context-admin.json file.

  7. Restart the server:

    $ cd /path/to/openidm
    $ ./startup.sh
  8. Relaunch the UI in your browser. The UI is displayed with the new custom theme.
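
For example, for step 5, the modified conf/ui.context-selfservice.json file might look like the following sketch. It assumes that &{launcher.install.location} resolves to /path/to/openidm and that you copied the theme to /path/to/openidm/new-project-theme in step 2; following this procedure, both defaultDir and extensionDir point at that copy:

{
    "enabled" : true,
    "urlContextRoot" : "/",
    "defaultDir" : "&{launcher.install.location}/new-project-theme",
    "extensionDir" : "&{launcher.install.location}/new-project-theme",
    "responseHeaders" : {
        "X-Frame-Options" : "DENY"
    }
}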

4.4.5. Custom Response Headers

You can specify custom response headers for your UI by using the responseHeaders property in UI context configuration files such as conf/ui.context-selfservice.json. For example, the X-Frame-Options header is a security measure used to prevent a web page from being embedded within the frame of another page. For more information about response headers, see the MDN page on HTTP Headers.

Since the responseHeaders property is specified in the configuration file for each UI context, you can set different custom headers depending on the needs of that part of IDM. For example, you may want to have different security headers included for the Admin UI than you do for the Self-Service UI.
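
For example, to add a header alongside the default X-Frame-Options setting, you might edit the responseHeaders block in conf/ui.context-selfservice.json as follows. This is a sketch only; the Cache-Control header is an arbitrary illustration rather than an IDM default:

"responseHeaders" : {
    "X-Frame-Options" : "DENY",
    "Cache-Control" : "no-store"
}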

4.5. Resetting User Passwords

When working with end users, administrators frequently have to reset their passwords. You can do so directly, through the Admin UI. Alternatively, you can configure an external system for that purpose.

4.5.1. Resetting a User Password Through the Admin UI

From the Admin UI, you can reset the passwords of accounts in the internal Managed User datastore. If you haven't already done so, start by configuring the outbound email service, as described in Chapter 24, "Configuring Outbound Email". Then take the following steps in the Admin UI:

  1. Select Manage > User. Choose a specific user from the list that appears.

  2. Select the Password tab for that user; you should see a Reset Password option.

When you select Reset Password, IDM by default generates a random 16-character password with at least one of each of the following types of characters:

  • Uppercase letters: A-Z

  • Lowercase letters: a-z

  • Integers: 0-9

  • Special characters: : ; < = > ? @

The configured outgoing email service is used to send that password to the specified user. For example, user mike might receive an email message with the following subject line:

Your password has been reset by an administrator

along with the following message:

mike's new password is: <generated_password>

If desired, you can configure that message (along with password complexity) by modifying the following code block in your project's managed.json file:

"actions" : {
    "resetPassword": {
        "type": "text/javascript",
        "source": "require('ui/resetPassword').sendMail(object, subject, message, passwordRules, passwordLength);",
        "globals": {
            "subject": "Your password has been reset by an administrator",
            "message": "<html><body><p>{{object.userName}}'s new password is: {{password}}</p></body></html>",
            "passwordRules": [
                { "rule": "UPPERCASE", "minimum": 1 },
                { "rule": "LOWERCASE", "minimum": 1 },
                { "rule": "INTEGERS", "minimum": 1 },
                { "rule": "SPECIAL", "minimum": 1 }
            ],
        "passwordLength": 16
    }
}

4.5.2. Using an External System for Password Reset

By default, the Password Reset mechanism is handled within IDM. You can reroute Password Reset in the event that a user has forgotten their password, by specifying an external URL to which Password Reset requests are sent. Note that this URL applies to the Password Reset link on the login page only, not to the security data change facility that is available after a user has logged in.

To set an external URL to handle Password Reset, set the passwordResetLink parameter in the UI configuration file (conf/ui-configuration.json) file. The following example sets the passwordResetLink to https://accounts.example.com/account/reset-password:

"passwordResetLink" : "https://accounts.example.com/account/reset-password"

The passwordResetLink parameter takes either an empty string as a value (which indicates that no external link is used) or a full URL to the external system that handles Password Reset requests.

Note

External Password Reset and security questions for internal Password Reset are mutually exclusive. Therefore, if you set a value for the passwordResetLink parameter, users will not be prompted with any security questions, regardless of the setting of the securityQuestions parameter.

4.6. Providing a Logout URL to External Applications

By default, a UI session is invalidated when a user clicks on the Log out link. In certain situations your external applications might require a distinct logout URL to which users can be routed, to terminate their UI session.

The logout URL is #logout, appended to the UI URL, for example, https://localhost:8443/#logout/.

The logout URL effectively performs the same action as clicking on the Log out link of the UI.
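
For example, an external application might route users to that URL with a plain link. The following HTML is a sketch only, assuming the Self-Service UI runs at the default URL:

<a href="https://localhost:8443/#logout/">Log out of IDM</a>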

4.7. Changing the UI Path

By default, the Self-Service UI is registered at the root context and is accessible at the URL https://localhost:8443. To specify a different URL, edit the project-dir/conf/ui.context-selfservice.json file, setting the urlContextRoot property to the new URL.

For example, to change the URL of the Self-Service UI to https://localhost:8443/exampleui, edit the file as follows:

"urlContextRoot" : "/exampleui",

Alternatively, to change the Self-Service UI URL in the Admin UI, follow these steps:

  1. Log in to the Admin UI.

  2. Select Configure > System Preferences, and select the Self-Service UI tab.

  3. Specify the new context route in the Relative URL field.

4.8. API Explorer

IDM includes an API Explorer, an implementation of the OpenAPI Initiative Specification, also known as Swagger.

To access the API Explorer, log into the Admin UI, select the question mark in the upper right corner, and choose API Explorer from the drop-down menu.

Note

If the API Explorer does not appear, you may need to enable it in your project's conf/boot/boot.properties file, specifically with the openidm.apidescriptor.enabled property. For more information, see Section 20.2.13, "Disabling the API Explorer".

The API Explorer covers most of the endpoints provided with a default IDM installation.

Each endpoint lists supported HTTP methods such as POST and GET. When custom actions are available, the API Explorer lists them as HTTP Method /path/to/endpoint?_action=something.

To see how this works, navigate to the User endpoint, select List Operations, and choose the GET option associated with the /managed/user#_query_id_query-all endpoint.

API Explorer: Querying All Users

In this case, the defaults are set, and all you need to do is select the Try it out! button. The output you see includes:

  • The REST call, in the form of the curl command.

  • The request URL, which specifies the endpoint and associated parameters.

  • The response body, which contains the data that you requested.

  • The HTTP response code; if everything works, this should be 200.

  • Response headers.

API Explorer: REST Output

If you're familiar with the sample described in Chapter 4, "Two Way Synchronization Between LDAP and IDM" in the Samples Guide, you might recognize the output as users in the managed user repository, after reconciliation.

Tip

If you see a 401 Access Denied code in the response body, your session may have timed out, and you'll have to log into the Admin UI again.

For details on common ForgeRock REST parameters, see Section D.1, "About ForgeRock Common REST".

You'll see examples of REST calls throughout this documentation set. You can try these calls with the API Explorer.

You can also generate an OpenAPI-compliant descriptor of the REST API to provide API reference documentation specific to your deployment. The following command saves the API descriptor of the managed/user endpoint to a file named my-openidm-api.json:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 --output "my-openidm-api.json" \
 "http://localhost:8080/openidm/managed/user?_api"

For information about publishing reference documentation using the API descriptor, see Procedure D.1, "To Publish OpenAPI Documentation".

4.9. Disabling the UI

The UI is packaged as a separate bundle that can be disabled in the configuration before server startup. To disable the registration of the UI servlet, edit the project-dir/conf/ui.context-selfservice.json file, setting the enabled property to false:

"enabled" : false,

Chapter 5. Configuring User Self-Service

This chapter describes the features of IDM user self-service.

ForgeRock Identity Management allows you to configure three features of user self-service: user self-registration, password reset, and forgotten username. This chapter also describes the Self-Service UI from the point of view of an end user, and describes what you can do to customize the Self-Service UI.

5.1. Tokens and User Self-Service

Many processes within user self-service involve multiple stages, such as user self-registration, password reset, and forgotten username. As the user transitions from one stage to another, IDM uses JSON web tokens (JWTs) to represent the current state of the process. As each stage is completed, IDM returns a new token. Each request that follows includes that latest token.

For example, users who use these features to recover their usernames and passwords get two tokens in the following scenario:

  • The user goes through the forgotten username process and receives a JWT with a lifetime (300 seconds by default) that allows the user to proceed to the next step in the process.

  • With the username in hand, the user can then start the password reset process. The user receives a second JWT, with the token lifetime configured for that process.

Note

The default IDM JWT token is encrypted and stateless. If you need a stateful token, for example to enable a longer session for Section 5.3, "User Password Reset", edit the selfservice-reset.json file to change the snapshotToken type to uuid.
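
For example, the change amounts to editing the snapshotToken block in selfservice-reset.json, as in the following sketch. Any other properties that your generated file includes in this block can remain as they are:

"snapshotToken" : {
    "type" : "uuid"
},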

5.2. User Self-Registration

To configure user self-registration from the Admin UI, select Configure > User Registration and select Enable User Registration in the page that appears.

You'll see a pop-up window that specifies User Registration Settings, including the following:

  • Identity Resource, typically managed/user

  • Identity Email Field, typically mail or email

  • Success URL for the Self-Service UI; users who successfully log in are redirected to that URL. By default, the success URL is http://localhost:8080/#dashboard/.

  • Preferences, which set up default marketing preferences for new users. New users can change these preferences during registration, or from the Self-Service UI.

  • Advanced Options, Snapshot Token, typically a JSON Web Token (JWT).

  • Advanced Options, Token Lifetime, with a default of 300 seconds

You can also add these settings to the following configuration file: selfservice-registration.json. When you modify these settings in the Admin UI, IDM creates the file for you. Alternatively, you can use a template version of this file located in the openidm/samples/example-configurations/self-service directory.

Once active, you'll see three tabs under User Registration in the Admin UI: Registration Form, Social, and Options. These tabs are described in the sections that follow.

For audit activity data related to user self-registration, see Section 22.10.2, "Querying the Activity Audit Log".

5.2.1. Configuring the User Self-Registration Form

Under the Registration Form tab, IDM lists the attributes that users see in the user registration form. You can change the order of attributes in this form, and add available attributes from the managed.json file for your project. Select the arrow in the drop-down text box to see the list of available attributes.

If desired, you can further customize that screen, as described in Procedure 5.1, "Customizing the User Registration Page".

You can also configure user self-registration via configuration files, as described in the following table:

Table 5.1. User Self-Registration Configuration Files
File Name | Description
external.email.json | To enable email validation, you'll need this file, available in the following directory: openidm/samples/example-configurations/conf. Alternatively, from the Admin UI, you can select Configure > Email Settings. For more information, see Chapter 24, "Configuring Outbound Email".
managed.json | Available and required entries for user self-registration. You can edit UI labels by changing the desired title.
policy.json | Rules for user IDs and passwords.
selfservice.kba.json | For KBA, includes minimumAnswersToDefine and minimumAnswersToVerify.
selfservice-registration.json | Include desired user self-registration properties in the registrationProperties code block, along with reCAPTCHA keys in a captcha code block.
ui-configuration.json | Includes booleans to enable user self-registration, password reset, and forgotten username.
consent.json | Specifies whether Privacy & Consent is configured; for more information, see Section 5.2.3.5, "Configuring Privacy & Consent".
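
For reference, the booleans in ui-configuration.json typically look like the following excerpt. The property names shown here (selfRegistration, passwordReset, forgotUsername) are assumptions based on a default configuration; verify them against the file generated for your project:

"configuration" : {
    "selfRegistration" : true,
    "passwordReset" : true,
    "forgotUsername" : true
}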

5.2.2. User Registration: Social

Before you can activate Social Registration under the User Registration, Social tab, you'll need to configure registration with social identity providers. To review the process, see Chapter 11, "Configuring Social Identity Providers".

When you've configured one or more social identity providers, you can activate the Social Registration option. Then under the Social tab, you'll see a list of property mappings as defined in the selfservice.propertymap.json file.

Social properties as mapped in selfservice.propertymap.json

One or more properties in the Source column come from a social identity provider. When a user registers with their social identity account, that information is reconciled to the matching property in IDM. For example, the email property from a social identity provider is normally reconciled to the mail property in IDM.

You can also find property mappings in the sync.json for your project. For details of these synchronization mappings, see Section 15.3.2, "Mapping Source Objects to Target Objects".

5.2.3. Configuring User Self-Registration Steps

Under the Options tab, you can configure several steps, as described in the following sections:

5.2.3.1. Configuring Google ReCaptcha

Google reCAPTCHA helps prevent bots from registering users or resetting passwords on your system. For Google documentation on this feature, see Google reCAPTCHA. IDM works with Google reCAPTCHA v2.

To use Google reCAPTCHA, you will need a Google account and your domain name (RFC 2606-compliant URLs such as localhost and example.com are acceptable for test purposes). Google then provides a Site key and a Secret key that you can include in the self-service function configuration.

For example, you can add reCAPTCHA keys into the appropriate configuration file:

{
    "name" : "captcha",
    "recaptchaSiteKey" : "< Insert Site Key Here >",
    "recaptchaSecretKey" : "< Insert Secret Key Here >",
    "recaptchaUri" : "https://www.google.com/recaptcha/api/siteverify"
},

You may also add the reCAPTCHA keys through the UI, for user self-registration, password reset, and forgotten username functionality.

5.2.3.2. Configuring Self-Service Email Validation / Username

When a user requests a new account, a password reset, or a reminder of their username, you can configure IDM to confirm the request by sending an email message to that user.

Before you can configure email validation, you must first configure an outgoing email service. To do so, select Configure > Email Settings. For more information, read Chapter 24, "Configuring Outbound Email".

Then, to activate email validation, configure the self-service feature of your choice: select Configure > User Registration, Password Reset, or Forgotten Username, and enable the feature. Under the Options tab, enable the Email Validation option. Alternatively, edit the applicable configuration file:

  • User Self-Registration: selfservice-registration.json

  • Password Reset: selfservice-reset.json

  • Forgotten Username: selfservice-username.json

Then you can configure the options shown in the following table.

Table 5.2. Configuring Validation Emails
Admin UI | JSON file property | Description
Email From | from | Email from address
Mime Type | mimeType | Describes the document transmission format, such as text/html
Verification Link Token | verificationLinkToken | Link that includes a registration or password reset token; does not apply to forgotten username
Username Token | usernameToken | Link that includes a username token; applies only to forgotten username
Email Verification Link | verificationLink | The link sent to the user includes this URL and the verification link token
Email Subject | subject | Subject of the transmitted email; may include subjectTranslations with ISO 8601 locales
Email Message | messageTranslations | Messages in different languages with ISO 8601 locales
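
Put together, the email validation stage in one of these files might resemble the following excerpt. This is a sketch only: the stage name and values (addresses, subject lines, message text, and the verification link) are illustrative placeholders, and your generated file may contain additional properties:

{
    "name" : "emailValidation",
    "from" : "admin@example.com",
    "mimeType" : "text/html",
    "subject" : "Register new account",
    "subjectTranslations" : {
        "en" : "Register new account"
    },
    "messageTranslations" : {
        "en" : "<h3>This is your registration email.</h3><h4><a href=\"%link%\">Verify your email address</a></h4>"
    },
    "verificationLinkToken" : "%link%",
    "verificationLink" : "https://localhost:8443/#register/"
},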

For the email message that informs the user of the new account, see Section 24.3, "Configuring Notification Emails".

5.2.3.3. Configuring Self-Service Questions (KBA)

IDM uses Knowledge-based Authentication (KBA) to help users prove their identities. With KBA, users can choose questions configured in the selfservice.kba.json file.

When enabled, the user is prompted to enter answers to pre-configured or custom security questions, during the self-registration process. These questions are used to help verify an identity when a user requests a password reset.

The template version of the selfservice.kba.json file is straightforward. It includes minimumAnswersToDefine, which requires a user to define at least that many KBA questions and answers, along with minimumAnswersToVerify, which requires a user to answer at least that many of those questions (in this case, one) when asking for a password reset.

{
     "kbaPropertyName" : "kbaInfo",
     "minimumAnswersToDefine": 2,
     "minimumAnswersToVerify": 1,
     "questions" : {
         "1" : {
             "en" : "What's your favorite color?",
             "en_GB" : "What is your favorite colour?",
             "fr" : "Quelle est votre couleur préférée?"
         },
         "2" : {
             "en" : "Who was your first employer?"
         }
     }
}

Warning

Once you deploy these IDM self-service features, you should never remove or change existing security questions, as users may have included those questions during the user self-registration process.

You may change or add the questions of your choice, in JSON format. If you're configuring user self-registration, you can also edit these questions through the Admin UI. In fact, the Admin UI allows you to localize these questions in different languages.

In the Admin UI, select Configure > User Registration. Enable User Registration, and select Options > KBA Stage. In the Configure Security Questions window that appears, you can add, edit, or delete these questions from the Admin UI:

Figure 5.1. Configuring KBA via the Admin UI
Modifying default KBA questions

Any change you make to KBA questions under User Registration also applies to Password Reset. To confirm, select Configure > Password Reset. Enable Password Reset, and edit the KBA Stage step. You'll see the same questions there.

In addition, individual users can configure their own questions and answers, in two ways:

  • During the user self-registration process

  • From the Self-Service UI, in the user's My Account section, under Sign-in & Security > Security Questions

Figure 5.2. Modifying KBA Questions in the Self-Service UI
Users can modify their own KBA questions

Note

When the self-service KBA modules hash answers, they first convert the answers to lowercase. If you intend to pre-populate KBA answers with a mapping, the openidm.hash function, or the secureHash mechanism, you must provide the KBA string in lowercase so that it matches the stored value of the answer.

5.2.3.4. Adding Terms & Conditions

Many organizations add Terms & Conditions for users who register through their IDM systems. When you edit this option, you can add the Terms & Conditions of your choice.

When you enter Terms & Conditions, include a locale such as en or fr. The default version number is 1.0. The following excerpt from the selfservice-registration.json file illustrates the format:

{
     "name" : "termsAndConditions",
     "termsTranslations" : {
         "en" : "Some fake terms",
         "fr" : "More fake terms"
     },
     "version" : "1.0"
},

You can also modify your Terms & Conditions from the Admin UI. Select Configure > User Registration. Activate User Registration if needed, select the Options tab, and activate Terms & Conditions. You'll see a pop-up with the same information shown in the selfservice-registration.json file.

Note

In your Terms & Conditions, use text and/or basic HTML. Test your terms and conditions, especially if you include JavaScript. You may need to include links to appropriate libraries.

If you change your Terms & Conditions, you can change the version number manually.

Once configured, you can find out when new users accepted the Terms & Conditions, along with the version number, in the audit activity log. For more information, see Section 22.10.2, "Querying the Activity Audit Log". Here is a sample excerpt from the activity.audit.json file in the /path/to/openidm/audit directory:

"termsAccepted" : {
  "iso8601date" : "2017-07-26T21:28:49Z",
  "termsVersion" : "1.0"
},

5.2.3.6. Disabling Email Validation for User Registration

If you disable email validation only for user registration, you should perform one of the following actions:

  • Disable validation for mail in the managed user schema. Select Configure > Managed Objects > User > Properties > Mail, and disable the Required option.

  • Configure the User Registration template to support user email entries. To do so, use Procedure 5.1, "Customizing the User Registration Page", and substitute mail for employeeNum.

Without these changes, users who try to register accounts will see a Forbidden Request Error.

5.3. User Password Reset

To configure the user password reset feature from the Admin UI, select Configure > Password Reset and select Enable Password Reset in the page that appears. Under the Options tab, you'll see several optional steps, including reCAPTCHA, User Query Form, Email Validation, KBA Stage, and Password Reset Form, as shown in the following figure:

Figure 5.3. Self-Service UI - Password Reset Sequence
Self-Service UI - Password Reset Sequence

You can also configure password reset via configuration files, as described in the following table:

Table 5.3. User Password Reset Configuration Files
File Name | Description
external.email.json | If you enable email validation, you'll need this file, available in the following directory: openidm/samples/example-configurations/conf
policy.json | Requirements for user IDs and passwords
selfservice.kba.json | For KBA, includes minimumAnswersToDefine and minimumAnswersToVerify
selfservice-reset.json | Includes validQueryFields along with identity fields for user ID (identityIdField), email (identityEmailField), and username (identityUsernameField), and the password field for the Password Reset Form (identityPasswordField)
ui-configuration.json | Includes booleans to enable user self-registration, password reset, and forgotten username

You can configure several validation stages, including reCAPTCHA, Email Validation, and the KBA Stage, as described earlier in this chapter.

You can also configure user queries and a password reset form, as described in the following sections:

5.3.1. Configuring User Query

You can configure the User Query (Lookup) Form for password reset and forgotten username functionality.

Valid Query Fields in a User Lookup Form

If you modify the fields that a user is allowed to query, you may need to modify the HTML templates that appear to users who request such functionality, in the corresponding userQuery-initial.html file. For more information, see Procedure 5.1, "Customizing the User Registration Page".

If you've set up custom extension subdirectories, as described in Section 5.7.1, "Customizing a Self-Service UI Template", you can find this file in the following directory: selfservice/extension/templates/user/process.

As shown in the relevant figure, you can change the following fields (a configuration sketch follows this list):

  • Valid Query Fields

    Property names that you can use to help users find their usernames or verify their identity, such as userName, mail, or givenName.

  • Identity ID Field

    Property name associated with the User ID, typically _id.

  • Identity Email Field

    Property name associated with the user email field, typically something like mail or email.

  • Identity Service URL

    The path associated with the identity data store, such as managed/user.
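
The corresponding userQuery stage in selfservice-reset.json (or selfservice-username.json) might resemble the following sketch. It assumes the default managed user property names; the identityServiceUrl property shown here is an assumption about how the identity data store path is configured, so verify it against your generated file:

{
    "name" : "userQuery",
    "validQueryFields" : [
        "userName",
        "mail",
        "givenName",
        "sn"
    ],
    "identityIdField" : "_id",
    "identityEmailField" : "mail",
    "identityUsernameField" : "userName",
    "identityServiceUrl" : "managed/user"
},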

5.3.2. Configuring the Password Reset Form

In the Admin UI, when configuring password reset, select the Password Reset Form. In the pop-up that appears, you'll see a Password Field. You can specify a relevant password property such as password, pwd, or userPassword. Be sure the property you select matches the canonical form for user passwords in the relevant datastore.

Alternatively, you can also change the identityPasswordField property in the selfservice-reset.json file.
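
For example, to use the default managed user password property, the relevant line in selfservice-reset.json would be similar to the following sketch; your property name may differ:

"identityPasswordField" : "password",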

5.4. Forgotten Username

You can set up IDM to allow users to recover forgotten usernames. You can require that users enter email addresses, or first and last names. Depending on your choices, IDM then either displays that username on the screen, emails it to the user, or both.

To configure the forgotten username feature from the Admin UI, select Configure > Forgotten Username and select Enable Forgotten Username Retrieval in the page that appears. Under the Options tab, you'll see several steps, including reCAPTCHA, User Query Form, Email Username, and Display Username.

You can also configure the forgotten username feature via configuration files, as described in the following table:

Table 5.4. Display Username Configuration Files
File Name | Description
external.email.json | If you enable email validation, you'll need this file, available in the following directory: openidm/samples/example-configurations/conf
selfservice-username.json | Includes validQueryFields along with identity fields for user ID (identityIdField), email (identityEmailField), and username (identityUsernameField), and the password field for the Password Reset Form (identityPasswordField)
ui-configuration.json | Includes booleans to enable user self-registration, password reset, and forgotten username

If you enable the Display Username option, IDM displays the username in the browser to users who successfully complete the other enabled validation stages, such as reCAPTCHA and the User Query Form.

5.5. Accommodating the End User

When these features are enabled, you will see three links on the self-service login page at http://localhost:8080: Reset your password, Register, and Forgot Username?.

5.5.1. Verifying Self-Registration in the Self-Service UI

After configuring user self-registration in Section 5.2, "User Self-Registration", you can test the result from the end user's point of view. Navigate to the Self-Service UI at http://localhost:8080, and select Register. You should see a single-page Register Your Account screen with configured text boxes and required security questions. If configured, you'll also see marketing preferences such as "Send me news and updates", along with the following text with a link to any configured Terms of Service:

By creating an account, you agree to the Terms of Service

Tip

To modify the Terms of Service, see Section 5.2.3.4, "Adding Terms & Conditions".

If you've activated the reCAPTCHA option as described in Section 5.2.3.1, "Configuring Google ReCaptcha", you'll need to satisfy the requirements before you can select the SAVE button.

If you've activated the Privacy & Consent option, you'll see a template Privacy Notice pop-up, which you'll have to accept before IDM creates the new account. To activate and configure a privacy notice, see Section 5.2.3.5, "Configuring Privacy & Consent".

Once the new user is created, you should be able to verify the account in two ways:

  • Log into the Admin UI, and select Manage > User. You should see that new user in the list.

  • Log into the Self-Service UI as the new user.

5.5.2. Verifying Password Reset in the Self-Service UI

After configuring password reset in Section 5.3, "User Password Reset", you can test the result from the end user's point of view. Navigate to the Self-Service UI at http://localhost:8080, and select Reset your password.

You should see a Reset Your Password page with pre-configured queries. After providing an answer, IDM should send a password reset link to the email associated with the target user account.

5.5.3. Verifying Access to a Forgotten Username in the Self-Service UI

After configuring forgotten username retrieval in Section 5.4, "Forgotten Username", you can test the result from the end user's point of view. Navigate to the Self-Service UI at http://localhost:8080, and select Forgot Username?.

You should see a Retrieve Your Username page with pre-configured queries. After providing an answer, IDM should either display your username in the local browser, or send that username to the associated email address.

5.6. Working With the Self-Service UI

For all users, the Self-Service UI includes Dashboard and My Account links in the top menu bar. To access the Self-Service UI, navigate to https://localhost:8443/.

5.6.1. The Self-Service UI Dashboard

The Dashboard includes a list of tasks assigned to the user who has logged in, tasks assigned to the relevant group, processes available to be invoked, and current notifications for that user, along with Quick Start cards for that user's profile and password.

Figure 5.4. The Self-Service UI Dashboard
Self-Service User Interface

For more information on the My Account link, including implications for data protection and privacy, see Section 5.8.3, "Privacy: My Account Information in the Self-Service UI".

5.7. Customizing the Self-Service UI

You can customize the Self-Service UI. When you install IDM, you will find the default Self-Service UI configuration files in the following directory: openidm/ui/selfservice/default.

In most cases, we recommend that you copy this directory to openidm/ui/selfservice/extension with commands such as:

$ cd /path/to/openidm/ui/selfservice
$ cp -r default/. extension

You can then set up custom files in the extension/ subdirectory.

The openidm/ui/selfservice/default/templates directory includes Self-Service UI templates that might help you get started.

If you want to customize workflows in the UI, see Section 19.3.5, "Managing User Access to Workflows".

If you want to customize tabs under the My Account tab in the Self-Service UI, review the files described in Section 5.8.3, "Privacy: My Account Information in the Self-Service UI".

5.7.1. Customizing a Self-Service UI Template

You may want to customize information included in the Self-Service UI.

These procedures do not address actual data store requirements. If you add text boxes in the UI, it is your responsibility to set up associated properties in your repositories.

To do so, you should copy existing default template files in the openidm/ui/selfservice/default subdirectory to associated extension/ subdirectories.

To simplify the process, you can copy some or all of the content from the openidm/ui/selfservice/default/templates to the openidm/ui/selfservice/extension/templates directory.

You can use a similar process to modify what is shown in the Self-Service UI.

5.7.1.1. Customizing User Self-Service Screens

In the following procedure, you will customize the screen that users see during the user registration process. You can use a similar process to customize what a user sees during the password reset and forgotten username processes.

For user self-service features, you can customize options in three files. Navigate to the extension/templates/user/process subdirectory, and examine the following files:

  • User Registration: registration/userDetails-initial.html

  • Password Reset: reset/userQuery-initial.html

  • Forgotten Username: username/userQuery-initial.html

The following procedure demonstrates the process for user registration.

Procedure 5.1. Customizing the User Registration Page
  1. When you configure user self-service, as described in Chapter 5, "Configuring User Self-Service", anonymous users who choose to register will see a screen similar to:

    The Default Self-Registration Screen
  2. The screen you see is from the following file: userDetails-initial.html, in the selfservice/extension/templates/user/process/registration subdirectory. Open that file in a text editor.

  3. Assume that you want new users to enter an employee ID number when they register.

    Create a new form-group code block for that number. For this procedure, the code block appears after the block for Last Name (or surname) sn:

    <div class="form-group">
         <label class="sr-only" for="input-employeeNum">{{t 'common.user.employeeNum'}}</label>
         <input type="text" placeholder="{{t 'common.user.employeeNum'}}" id="input-employeeNum" name="user.employeeNum" class="form-control input-lg" />
    </div>
  4. Edit the relevant translation.json file. As this is the customized file for the Self-Service UI, you will find it in the selfservice/extension/locales/en directory that you set up in Section 5.7, "Customizing the Self-Service UI".

    You need to find the right place to enter text associated with the employeeNum property. Look for the other properties in the userDetails-initial.html file.

    The following excerpt illustrates the employeeNum property as added to the translation.json file.

    ...
    "givenName" : "First Name",
    "sn" : "Last Name",
    "employeeNum" : "Employee ID Number",
    ...
  5. The next time an anonymous user tries to create an account, that user should see a screen similar to:

    A Customized Self-Registration Screen

Tip

Changes to Self-Service UI Templates aren't enough; you must also change the corresponding backend entries for the managed object resource(s). For more information, see Section 9.1, "Creating and Modifying Managed Object Types".

If you've added an entry to a UI and have not added corresponding backend configuration files, IDM won't save that information.

Alternatively, if you've deleted an entry that's required in your managed object schema, the resulting registration will fail policy requirements. For more information, see Chapter 12, "Using Policies to Validate Data".

5.7.2. Customizing the Self-Service UI, by Functionality

You may want to customize additional parts of the Self-Service UI. Assuming you have set up an openidm/ui/selfservice/extension directory, as described in Section 5.7, "Customizing the Self-Service UI", you will find a series of subdirectories there. The following table is intended to help you find the right file(s) to customize:

Table 5.5. File Functionality by Self-Service Directory
Subdirectory | Description
config/ | Top-level configuration directory of JavaScript files. Customizable subdirectories include errorhandlers/ with HTTP error messages and messages/ with info and error messages. For the actual messages, see the translation.json file in the locales/en/ subdirectory.
css/ and libs/ | If you use a different Bootstrap theme, you can replace the files in these and related subdirectories. For more information, see Section 4.4.1, "UI Themes and Bootstrap".
fonts/ | The font files in this directory are based on the Font Awesome CSS toolkit described in Section 4.4, "Changing the UI Theme".
images/ and img/ | IDM uses the image files in these directories; you can replace them with your own.
locales/ | Includes the associated translation.json file, by default in the en/ subdirectory.
org/ | Source files for the Self-Service UI
partials/ | Includes partial components of HTML pages in the Self-Service UI, for assignments, authentication, connectors, dashboards, email, basic forms, login buttons, and so on.
templates/ | The files in the templates/ subdirectory are in actual use. For an example of how you can customize such files in the Self-Service UI, see Section 5.7, "Customizing the Self-Service UI".

Tip

The above table is not a complete list. To see a visual representation of customizable Self-Service UI files, from the Linux command line, run the following commands:

$ cd /path/to/openidm/ui/selfservice/extension
$ tree

5.7.2.1. Customizing the Landing Page

One place where you can customize the Self-Service UI is the default landing page for users. By default, users who log into the Self-Service UI are taken to the Dashboard, described in Section 5.6.1, "The Self-Service UI Dashboard".

To change the landing page to the My Account screen, make changes to the following files in the config/routes subdirectory (a sketch of the result follows this list):

  • SelfServiceRoutesConfig.js

    Delete the following line:

    obj.landingPage = obj.dashboard;
  • UserRoutesConfig.js

    Add the following line, just before return obj;

    obj.landingPage = obj.profile;
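
The following sketch shows what the end of the modified UserRoutesConfig.js might look like. The surrounding module structure and route definitions are placeholders for whatever already exists in your file; only the obj.landingPage line comes from this procedure:

define([
    // ... existing dependencies ...
], function () {
    var obj = {
        // ... existing route definitions, including the profile route ...
    };

    // Added by this procedure: make the My Account (profile) view the landing page
    obj.landingPage = obj.profile;

    return obj;
});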

The next time you log into the self-service UI, IDM takes you to the My Account screen.

5.8. Setting Up User-Managed Access (UMA), Trusted Devices, and Privacy

In the following sections, you'll refer to AM documentation to set up User-Managed Access (UMA), Trusted Devices, and Privacy for your end-users. These options require IDM working with AM. For a working implementation of both products, see Chapter 27, "Integrating IDM With the ForgeRock Identity Platform" in the Samples Guide.

Tip

If you want to configure both UMA and Trusted Devices, configure these features in the following order, as described in the sections that follow:

  1. Set up UMA

  2. Use AM to configure UMA-based resources

  3. Configure Trusted Devices

5.8.1. User Managed Access in IDM

When you integrate IDM with ForgeRock Access Management (AM), you can take advantage of AM's support for User-Managed Access (UMA) workflows. AM and IDM use a common installation of ForgeRock Directory Services (DS) to store user data.

For instructions on how to set up this integration, see Chapter 27, "Integrating IDM With the ForgeRock Identity Platform" in the Samples Guide. Once you've set up integration through that sample, you can configure AM to work with UMA. For more information, see the AM User-Managed Access (UMA) Guide. From that guide, you need to know how to:

  • Set up AM as an authorization server.

  • Register resource sets and client agents in AM.

  • Help users manage access to their protected resources through AM.

Note

IDM provides a read-only view of UMA-based sharing, and only for the resource owner. If you've used AM to share a UMA resource with another user, that user can view that resource only in AM. Refer to the AM User-Managed Access (UMA) Guide for more information.

After your users have shared UMA resources from the AM Self-Service UI, they can view what they've done and shared in the IDM Self-Service UI, in the My Account section, under the Sharing and Activity tabs.

5.8.2. Configuring Trusted Devices on IDM

You can configure Trusted Devices through AM, using the following sections of the AM Authentication and Single Sign-On Guide: Configuring Authentication Chains and Device ID (Match) Authentication Module. You can use the techniques described in these sections to set up different authentication chains for administrators and regular users.

You can create an AM authentication chain with the following modules and criteria:

Table 5.6. AM Authentication Chain Modules

  Module               Criteria
  DataStore            Requisite
  Device Id (Match)    Sufficient
  Device Id (Save)     Required

The following figure displays an example of a suitable authentication chain, configured in the AM console:

Figure 5.5. A Trusted Devices Authentication Chain (AM)
AM Authentication Chain Configuration

When trusted devices are enabled, users see the screen shown in Figure 5.9, "Adding a Trusted Device", the first time they log in from a new browser on a new system. For more information, see Section 5.8.3.4, "Trusted Devices".

Note

In default configurations, trusted devices are not saved for the AM amadmin account. However, you can set up different AM administrative users as described in the following section of the AM Setup and Maintenance Guide: Delegating Realm Administration Privileges.

You can set up different authentication chains for regular and administrative users, as described in the AM Authentication and Single Sign-On Guide.

5.8.3. Privacy: My Account Information in the Self-Service UI

End users can find account details in the My Account section of the Self-Service UI. The information provided depends on what has been configured in the Admin UI, and potentially through AM as discussed in Chapter 27, "Integrating IDM With the ForgeRock Identity Platform" in the Samples Guide.

Some information configured in IDM, such as Personal Info, can be found through appropriate REST calls and audit logs for that particular user. However, some of the information in this section, such as Trusted Devices and UMA-based sharing, may be available from ForgeRock Directory Services (DS) or ForgeRock Access Management (AM).

Figure 5.6. My Account Information in the Self-Service UI
Defaults to Personal Information

The names shown in the left-hand column are known as "tabs", reflecting the property used in the ui-profile.json file for your project. When you select a tab such as Personal Info in the Self-Service UI, you'll see related information about your account. For more information on each tab, see the following sections:

5.8.3.1. Personal Info

The Personal Info tab allows users to manage their information in a centralized location. The last change made to the user account is reflected in the UTC timestamp. You'll see a new timestamp for any change made by a user or an administrator on that account.

For end users, the Personal Info tab includes at least the following information: Username, First Name, Last Name, and Email Address. In the Self-Service UI, users can:

  • Correct errors in their default required information.

  • Add, delete, or modify information in all other fields.

  • Review the last time someone made a change to their account.

Each user can modify this information as needed, as long as "userEditable" : true for the property in the managed.json file, as described in Section 9.1, "Creating and Modifying Managed Object Types".

5.8.3.2. Sign-In & Security

On this tab, end users can change their passwords. They can also add, delete, or modify security questions, and link or unlink supported social identity accounts. For more information, see Section 5.2.3.3, "Configuring Self-Service Questions (KBA)" and Chapter 11, "Configuring Social Identity Providers".

Figure 5.7. Sign-in & Security Options
Supports management of passwords, security questions, connected social identity accounts

5.8.3.3. Preferences

The Preferences tab allows end users to modify marketing preferences, as defined in the managed.json file and on the Managed Object User property Preferences tab. For more information, see Section 15.3.4.1, "Configuring End User Preferences".

Figure 5.8. Marketing Preferences in the Self-Service UI
Allows opt-in / opt-out control with third party providers

As shown in the figure, end users can toggle marketing preferences. When IDM includes a connector to a marketing database, these preferences are sent to that database. This can help administrators use IDM to target marketing campaigns and identify potential leads.

5.8.3.4. Trusted Devices

A trusted device uses AM's Device ID (Match) and Device ID (Save) authentication modules, as described in the AM Authentication and Single Sign-On Guide. When such modules are configured per Section 5.8.2, "Configuring Trusted Devices on IDM", end users get the opportunity to add such devices the first time they log in from a new location.

Figure 5.9. Adding a Trusted Device
Trusted Device information is stored with the user's DS record

During the login process, when an end user selects Log In, that user is prompted for a Trusted Device Name.

When added, users see such devices under the noted tab, as shown here:

Figure 5.10. Trusted Devices in the Self-Service UI
A list of trusted devices

A trusted device entry is paired with a specific browser on a specific system. The next time the same end user logs in from the same browser and system, in the same location, that user should not be prompted to enter a trusted device again.

End users may remove their trusted devices from the tab, as shown. Any changes made here are synchronized to the AM Self-Service UI dashboard.

5.8.3.5. Authorized Apps

The Authorized Apps tab is specific to end users as OAuth 2 clients, and reflects the corresponding section of the AM Self-Service dashboard, as described in the following section of the AM OAuth 2.0 Guide: User Consent Management.

Figure 5.11. Authorized Apps in the Self-Service UI
A list of authorized applications

Any changes made here are synchronized to the AM Self-Service UI dashboard.

5.8.3.6. Privacy & Consent

This section assumes that as an administrator, you've followed the instructions in Section 5.2.3.5, "Configuring Privacy & Consent" to enable Privacy & Consent.

End users who see a Privacy & Consent tab have control of personal data that may be shared with an external database, such as one that might contain marketing leads.

The managed object record for end users who consent to sharing such data includes a consentedMappings object, shown in REST output and in the audit activity log:

"consentedMappings" : [ {
   "mapping" : "managedUser_systemLdapAccounts",
   "consentDate" : "2017-08-25T18:13:08.358Z"
}

The profile fields shown in the figure, if authorized by the specific end user, may be shared with the external database.

Figure 5.12. Privacy & Consent in the Self-Service UI
Watch the Profile Fields, as they may be shared

This tab supports the right to restrict processing of user personal data.

5.8.3.7. Sharing and Activity

The Sharing and Activity tabs provide a read-only view of personal information that end users may have shared with others. If you as an administrator configured UMA as described in Section 5.8.1, "User Managed Access in IDM", any sharing and activity changes made by end users in the AM Self-Service UI are also shown in the IDM Self-Service UI.

If end users want to share their resources with others, they'll have to use the AM Self-Service UI.

Figure 5.13. Sharing and Activity in the Self-Service UI
A read-only view of the activity associated with a shared resource

Note how the activity includes a timestamp, which informs end users of the last time their resources were shared (or unshared).

5.8.3.8. Account Controls

The Account Controls tab allows end users to download their account data (in JSON format), and to delete their accounts from IDM.

Important

When end users delete their accounts, the change is recorded in external repositories upon the next reconciliation. It is then up to the administrator of the external repository to ensure user information is purged from the external system.

Figure 5.14. IDM Delete Your Account Screen
IDM supports users who want to delete their own accounts

To modify the message associated with the Delete Your Account option, follow the instructions shown in Section 5.7, "Customizing the Self-Service UI", find the translation.json file, search for the deleteAccount code block, and edit the text as desired.

The options shown in this tab can help meet requirements related to data portability, as well as the right to be forgotten.

5.8.3.9. Notifications Related to My Account in the Self-Service UI

When end users change their passwords and/or preferences, they receive a notification in the Self-Service UI. For administrators, you can change these notification messages in the onUpdateUser.js file, in the /path/to/openidm/bin/defaults/script/ui directory.

5.8.3.10. Configuring Additional Tabs for My Account

You'll find a list of available tabs in the ui-profile.json file, in your project's conf/ subdirectory. For example, this excerpt sets up the Personal Info tab:

{
   "name" : "personalInfoTab",
   "view" : "org/forgerock/openidm/ui/user/profile/personalInfo/PersonalInfoTab"
},

If you want to configure additional tabs for the My Account section of the Self-Service UI, focus on the following (a sketch of a new tab entry follows this list):

  • The ui-profile.json file in your project's conf/ subdirectory.

  • JavaScript files in the following directory, which configure details shown in each tab: /path/to/openidm/ui/selfservice/default/org/forgerock/openidm/ui/user/profile

  • The Self-Service translation.json file, ideally in the following directory: openidm/ui/selfservice/extension/locales. In that file, you'll add lines to the following code blocks:

    • templates: You'll see examples of existing templates in this code block.

    • common.user.profileMenu: You'll see examples of existing tabs, with Font Awesome icons and UI text titles.
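
For example, a hypothetical additional tab entry in ui-profile.json might look like the following. The tab name and view path are illustrative only; you must also provide the corresponding JavaScript view file and translation entries:

{
   "name" : "employmentTab",
   "view" : "org/forgerock/openidm/ui/user/profile/employment/EmploymentTab"
},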

Chapter 6. Managing the Repository

IDM stores managed objects, internal users, and configuration objects in a repository. By default, the server uses an embedded ForgeRock Directory Services (DS) instance as its repository. In production, you must replace DS with a supported JDBC repository, as described in Chapter 2, "Selecting a Repository" in the Installation Guide.

This chapter describes the repository configuration, the use of mappings in the repository, and how to configure a connection to the repository over SSL. It also describes how to interact with the repository over the REST interface.

6.1. Understanding the Repository Configuration Files

IDM provides configuration files for supported JDBC repositories and for the embedded DS repository. These configuration files are located in the /path/to/openidm/db/database/conf directory. For JDBC repositories, the configuration is defined in two files:

  • datasource.jdbc-default.json, which specifies the connection details to the repository.

  • repo.jdbc.json, which specifies the mapping between IDM resources and the tables in the repository, and includes a number of predefined queries.

For a DS repository, the repo.opendj.json file specifies the resource mapping.

Copy the configuration files for your specific database type to your project's conf/ directory.
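
For example, for a MySQL repository, a typical copy follows the db/database/conf pattern described above (project-dir represents your project directory):

$ cd /path/to/openidm
$ cp db/mysql/conf/datasource.jdbc-default.json project-dir/conf/
$ cp db/mysql/conf/repo.jdbc.json project-dir/conf/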

6.1.1. Understanding the JDBC Connection Configuration File

The default database connection configuration file for a MySQL database follows:

{
    "driverClass" : "com.mysql.jdbc.Driver",
    "jdbcUrl" : "jdbc:mysql://&{openidm.repo.host}:&{openidm.repo.port}/openidm?allowMultiQueries=true&characterEncoding=utf8",
    "databaseName" : "openidm",
    "username" : "openidm",
    "password" : "openidm",
    "connectionTimeout" : 30000,
    "connectionPool" : {
        "type" : "hikari",
        "minimumIdle" : 20,
        "maximumPoolSize" : 50
    }
}

The configuration file includes the following properties:

driverClass, jndiName, or jtaName

Depending on the mechanism you use to acquire the data source, set one of these properties:

  • "driverClass" : string

    To use the JDBC driver manager to acquire a data source, set this property, as well as "jdbcUrl", "username", and "password". The driver class must be the fully qualified class name of the database driver to use for your database.

    Using the JDBC driver manager to acquire a data source is the most likely option, and the only one supported "out of the box". The remaining options in the sample repository configuration file assume that you are using a JDBC driver manager.

    Example: "driverClass" : "com.mysql.jdbc.Driver"

  • "jndiName" : string

    If you use JNDI to acquire the data source, set this property to the JNDI name of the data source.

    This option might be relevant if you want to run IDM inside your own web container.

    Example: "jndiName" : "jdbc/my-datasource"

  • "jtaName" : string

    If you use an OSGi service to acquire the data source, set this property to a stringified version of the OsgiName.

    This option would only be relevant in a highly customized deployment, for example, if you wanted to develop your own connection pool.

    Example: "jtaName" : "osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/openidm)"

jdbcUrl

The connection URL to the JDBC database. The URL should include all of the parameters required by your database. For example, to specify the encoding in MySQL, use 'characterEncoding=utf8'.

Specify the values for openidm.repo.host and openidm.repo.port in one of the following ways:

  • Set the values in your project's conf/system.properties or conf/boot/boot.properties file, for example:

    openidm.repo.host = localhost
    openidm.repo.port = 3306
  • Set the properties in the OPENIDM_OPTS environment variable and export that variable before startup. You must include the JVM memory options when you set this variable. For example:

    $ export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dopenidm.repo.host=localhost -Dopenidm.repo.port=3306"
    $ ./startup.sh
    Executing ./startup.sh...
    Using OPENIDM_HOME:   /path/to/openidm
    Using PROJECT_HOME:   /path/to/openidm
    Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Dopenidm.repo.host=localhost -Dopenidm.repo.port=3306
    Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
    Using boot properties at /path/to/openidm/conf/boot/boot.properties
    -> OpenIDM version "5.5.0"
    OpenIDM ready

databaseName

The name of the database to which IDM connects. By default, this is openidm.

username

The username with which to access the JDBC database.

password

The password with which to access the JDBC database. IDM automatically encrypts clear string passwords. To replace an existing encrypted value, replace the whole crypto-object value, including the brackets, with a string of the new password.

connectionTimeout

The period of time, in milliseconds, after which IDM should consider an attempted connection to the database to have failed. The default period is 30000 milliseconds (30 seconds).

connectionPool

Database connection pooling configuration. The default connection pool library is Hikari ("type" : "hikari").

IDM uses the default Hikari configuration, except for the following parameters. You might need to adjust these parameters, according to your database workload:

  • minimumIdle

    This property controls the minimum number of idle connections that Hikari maintains in the connection pool. If the number of idle connections drops below this value, Hikari attempts to add additional connections.

By default, Hikari runs as a fixed-size connection pool, that is, this property is not set. The connection configuration files provided with IDM set the minimum number of idle connections to 20.

  • maximumPoolSize

    This property controls the maximum number of connections to the database, including idle connections and connections that are being used.

    By default, Hikari sets the maximum number of connections to 10. The connection configuration files provided with IDM set the maximum number of connections to 50.

For information about the Hikari configuration parameters, see the Hikari Project Page.

You can also use the BoneCP connection pool library. To use BoneCP, change the configuration as follows:

"connectionPool" : {
        "type" : "bonecp"
}

IDM uses the default BoneCP configuration, except for the following parameters. You might need to adjust these parameters, according to your database workload (a combined example follows this list):

  • partitionCount

    The partition count determines the lock segmentation in the connection pool. Each incoming connection request acquires a connection from a pool that has thread-affinity. Threads are dispatched to the appropriate lock by using a value of threadId % partitionCount. A partition count that is greater than 1 protects the connection pool with more than a single lock, thereby reducing lock contention.

    By default, BoneCP creates a single partition. The JDBC Connection Configuration Files provided with IDM set the partition count to 4.

  • maxConnectionsPerPartition

    The maximum number of connections to create per partition. The maximum number of database connections is equal to partitionCount * maxConnectionsPerPartition. BoneCP does not create all these connections at once, but starts off with the minConnectionsPerPartition and gradually increases connections as required.

    By default, BoneCP creates a maximum of 20 connections per partition. The JDBC Connection Configuration Files provided with IDM set the maximum connections per partition to 25.

  • minConnectionsPerPartition

    The number of connections to start off with, per partition. The minimum number of database connections is equal to partitionCount * minConnectionsPerPartition.

    By default, BoneCP starts with a minimum of 1 connection per partition. The JDBC Connection Configuration Files provided with IDM set the minimum connections per partition to 5.
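
Putting these parameters together, a tuned BoneCP connection pool in datasource.jdbc-default.json might look like the following sketch. The values match the defaults described above, and the property names mirror the BoneCP parameter names; check the connection configuration files provided with IDM for the exact names used in your version:

"connectionPool" : {
    "type" : "bonecp",
    "partitionCount" : 4,
    "maxConnectionsPerPartition" : 25,
    "minConnectionsPerPartition" : 5
}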

For more information about the BoneCP configuration parameters, see http://www.jolbox.com/configuration.html.

6.1.2. Understanding the JDBC Database Table Configuration

An excerpt of a MySQL database table configuration file follows:

{
     "dbType" : "MYSQL",
     "useDataSource" : "default",
     "maxBatchSize" : 100,
     "maxTxRetry" : 5,
     "queries" : {...},
     "commands" : {...},
     "resourceMapping" : {...}
 }

The configuration file includes the following properties:

dbType : string, optional

The type of database. The database type might affect the queries used and other optimizations. Supported database types include MYSQL, SQLSERVER, ORACLE, MS SQL, and DB2.

useDataSource : string, optional

This option refers to the connection details that are defined in the configuration file, described previously. The default configuration file is named datasource.jdbc-default.json. This is the file that is used by default (and the value of "useDataSource" is therefore "default"). You might want to specify a different connection configuration file, instead of overwriting the details in the default file. In this case, name your connection configuration file datasource.jdbc-name.json and set the value of "useDataSource" to the name you have used.

maxBatchSize

The maximum number of SQL statements that will be batched together. This parameter allows you to optimize the time taken to execute multiple queries. Certain databases do not support batching, or limit how many statements can be batched. A value of 1 disables batching.

maxTxRetry

The maximum number of times that a specific transaction should be attempted before that transaction is aborted.

queries

Predefined queries that can be referenced from the configuration. For more information about predefined queries, see Section 8.3.2, "Parameterized Queries". The queries are divided between those for genericTables and those for explicitTables.

The following sample extract from the default MySQL configuration file shows two credential queries, one for a generic mapping, and one for an explicit mapping. Note that the lines have been broken here for legibility only. In a real configuration file, the query would be all on one line:

 "queries" : {
     "genericTables" : {
         "credential-query" : "SELECT fullobject FROM ${_dbSchema}.${_mainTable}
           obj INNER JOIN ${_dbSchema}.${_propTable} prop ON
           obj.id = prop.${_mainTable}_id INNER JOIN ${_dbSchema}.objecttypes
           objtype ON objtype.id = obj.objecttypes_id WHERE prop.propkey='/userName'
           AND prop.propvalue = ${username} AND objtype.objecttype = ${_resource}",
         ...
     "explicitTables" : {
         "credential-query" : "SELECT * FROM ${_dbSchema}.${_table}
           WHERE objectid = ${username} and accountStatus = 'active'",
         ...
     }
 }    

Options supported for query parameters include the following (a script sketch follows this list):

  • A default string parameter, for example:

    openidm.query("managed/user", { "_queryId": "for-userName", "uid": "jdoe" });

    For more information about the query function, see Section E.1.6, "openidm.query(resourceName, params, fields)".

  • A list parameter (${list:propName}).

    Use this parameter to specify a set of indeterminate size as part of your query. For example:

    WHERE targetObjectId IN (${list:filteredIds})
  • An integer parameter (${int:propName}).

    Use this parameter to query non-string values in the database. This is particularly useful with explicit tables.
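
For example, a script might pass a list value to a hypothetical predefined query that uses ${list:filteredIds} in its WHERE clause. The query ID and property name here are illustrative only:

var result = openidm.query("managed/user", {
    "_queryId" : "find-by-target-ids",         // hypothetical predefined query
    "filteredIds" : ["id-1", "id-2", "id-3"]   // bound to ${list:filteredIds} in the query
});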

commands

Specific commands configured to manage the database over the REST interface. Currently, the following default commands are included in the configuration:

  • purge-by-recon-expired

  • purge-by-recon-number-of

  • delete-mapping-links

  • delete-target-ids-for-recon

These commands assist with removing stale reconciliation audit information from the repository, and preventing the repository from growing too large. The commands work by executing a query filter, then performing the specified operation on each result set. Currently the only supported operation is DELETE, which removes all entries that match the filter. For more information about repository commands, see Section 6.4.1, "Running Queries and Commands on the Repository".

resourceMapping

Defines the mapping between IDM resource URIs (for example, managed/user) and JDBC tables. The structure of the resource mapping is as follows:

 "resourceMapping" : {
     "default" : {
         "mainTable" : "genericobjects",
         "propertiesTable" : "genericobjectproperties",
         "searchableDefault" : true
     },
     "genericMapping" : {...},
     "explicitMapping" : {...}
 }    

The default mapping object represents a default generic table in which any resource that does not have a more specific mapping is stored.

The generic and explicit mapping objects are described in the following section.

6.1.3. Understanding the DS Repository Configuration

An excerpt of a DS repository configuration file follows:

{
     "embedded" : true,
     "adminPort" : port number
     "queries" : {...},
     "commands" : {...},
     "rest2LdapOptions": {...},
     "indices": {...},
     "schemaProviders": {...},
     "resourceMapping" : {...}
 }

The configuration file includes the following properties:

embedded : boolean

Specifies an embedded or external DS instance. Currently only the embedded DS instance is supported.

queries

Predefined queries that can be referenced from the configuration. For a DS repository, all predefined queries are really filtered queries (using the _queryFilter parameter), for example:

"query-all-ids": {
    "_queryFilter": "true",
    "_fields": "_id,_rev"
},

The queries are divided between those for generic mappings and those for explicit mappings, but the queries themselves are the same for both mapping types.

commands

Specific commands configured to manage the repository over the REST interface. Currently, only two commands are included by default:

  • delete-mapping-links

  • delete-target-ids-for-recon

Both of these commands assist with removing stale reconciliation audit information from the repository, and preventing the repository from growing too large. For more information about repository commands, see Section 6.4.1, "Running Queries and Commands on the Repository".

rest2LdapOptions

Specifies the configuration for accessing the LDAP data stored in DS. For more information, see Gateway REST2LDAP Configuration File in the DS Reference.

indices

For generic mappings, sets up an LDAP index on an object whose properties are specified in the schemaProvider property. For more information, see Section 6.2.2.1.1, "Improving Generic Mapping Search Performance (DS)".

schemaProviders

For generic mappings, lists the objects whose properties should be indexed. For more information, see Section 6.2.2.1.1, "Improving Generic Mapping Search Performance (DS)".

resourceMapping

Defines the mapping between IDM resource URIs (for example, managed/user) and the DS directory tree. The structure of the resource mapping object is as follows:

{
    ...
    "resourceMapping" : {
        "defaultMapping": {
            "resource": "default",
            "dnTemplate": "ou=generic,dc=openidm,dc=forgerock,dc=com"
        },
        "explicitMapping" : {...},
        "genericMapping" : {...}
    }
}

The default mapping object represents a default generic organizational unit (ou) in which any resource that does not have a more specific mapping is stored.

The generic and explicit mapping objects are described in Section 6.2, "Using Generic and Explicit Object Mappings".

6.2. Using Generic and Explicit Object Mappings

There are two ways to map IDM objects to the tables in a JDBC database or to organizational units in DS:

  • Generic mapping, which allows you to store arbitrary objects without special configuration or administration.

  • Explicit mapping, which maps specific objects and properties to tables and columns in the JDBC database or to organizational units in DS.

These two mapping strategies are discussed in the following sections, for JDBC repositories and for DS repositories.

6.2.1. Generic and Explicit Mappings With a JDBC Repository

6.2.1.1. Using Generic Mappings With a JDBC Repository

Generic mapping speeds up development, and can make system maintenance more flexible by providing a stable database structure. However, generic mapping can have a performance impact and does not take full advantage of the database facilities (such as validation within the database and flexible indexing). In addition, queries can be more difficult to set up.

In a generic table, the entire object content is stored in a single large-character field named fullobject in the mainTable for the object. To search on specific fields, queries refer to the corresponding properties table for that object. The disadvantage of generic objects is that, because every property you might like to filter by is stored in a separate table, you must join to that table each time you need to filter by anything.

The following diagram shows a pared down database structure for the default generic table, when using a MySQL repository. The diagram indicates the relationship between the main table and the corresponding properties table for each object.

Figure 6.1. Generic Tables Entity Relationship Diagram
Generic tables entity relationship diagram

These separate tables can make the query syntax particularly complex. For example, a simple query to return user entries based on a user name would need to be implemented as follows:

SELECT fullobject FROM ${_dbSchema}.${_mainTable} obj INNER JOIN ${_dbSchema}.${_propTable} prop
     ON obj.id = prop.${_mainTable}_id INNER JOIN ${_dbSchema}.objecttypes objtype
     ON objtype.id = obj.objecttypes_id WHERE prop.propkey='/userName' AND prop.propvalue = ${uid}
     AND objtype.objecttype = ${_resource}

The query can be broken down as follows:

  1. Select the full object from the main table:

    SELECT fullobject FROM ${_dbSchema}.${_mainTable} obj
  2. Join to the properties table and locate the object with the corresponding ID:

    INNER JOIN ${_dbSchema}.${_propTable} prop  ON obj.id = prop.${_mainTable}_id
  3. Join to the object types table to restrict returned entries to objects of a specific type. For example, you might want to restrict returned entries to managed/user objects, or managed/role objects:

    INNER JOIN ${_dbSchema}.objecttypes objtype ON objtype.id = obj.objecttypes_id
  4. Filter records by the userName property, where the userName is equal to the specified uid and the object type is the specified type (in this case, managed/user objects):

    WHERE prop.propkey='/userName'
     AND prop.propvalue = ${uid}
     AND objtype.objecttype = ${_resource}

    The value of the uid field is provided as part of the query call, for example:

    openidm.query("managed/user", { "_queryId": "for-userName", "uid": "jdoe" });

Tables for user-definable objects use a generic mapping by default.

The following sample generic mapping object illustrates how managed/ objects are stored in a generic table:

"genericMapping" : {
       "managed/*" : {
           "mainTable" : "managedobjects",
           "propertiesTable" : "managedobjectproperties",
           "searchableDefault" : true,
           "properties" : {
               "/picture" : {
                   "searchable" : false
               }
           }
       }
   },
mainTable (string, mandatory)

Indicates the main table in which data is stored for this resource.

The complete object is stored in the fullobject column of this table. The table includes an objecttypes foreign key that is used to distinguish the different objects stored within the table. In addition, the revision of each stored object is tracked, in the rev column of the table, enabling multiversion concurrency control (MVCC). For more information, see Section B.1.6.3, "Manipulating Managed Objects Programmatically".

propertiesTable (string, mandatory)

Indicates the properties table, used for searches.

The contents of the properties table is a defined subset of the properties, copied from the character large object (CLOB) that is stored in the fullobject column of the main table. The properties are stored in a one-to-many style separate table. The set of properties stored here is determined by the properties that are defined as searchable.

The stored set of searchable properties makes these values available as discrete rows that can be accessed with SQL queries, specifically, with WHERE clauses. It is not otherwise possible to query specific properties of the full object.

The properties table includes the following columns:

  • ${_mainTable}_id corresponds to the id of the full object in the main table, for example, managedobjects_id or genericobjects_id.

  • propkey is the name of the searchable property, stored in JSON pointer format (for example /mail).

  • proptype is the data type of the property, for example java.lang.String. The property type is obtained from the Class associated with the value.

  • propvalue is the value of the property, extracted from the full object that is stored in the main table.

    Regardless of the property data type, this value is stored as a string, so queries against it should treat it as such.

searchableDefault (boolean, optional)

Specifies whether all properties of the resource should be searchable by default. Properties that are searchable are stored and indexed. You can override the default for individual properties in the properties element of the mapping. The preceding example indicates that all properties are searchable, with the exception of the picture property.

For large, complex objects, having all properties searchable implies a substantial performance impact. In such a case, a separate insert statement is made in the properties table for each element in the object, every time the object is updated. Also, because these are indexed fields, the recreation of these properties incurs a cost in the maintenance of the index. You should therefore enable searchable only for those properties that must be used as part of a WHERE clause in a query.

properties

Lists any individual properties for which the searchable default should be overridden.

Note that if an object was originally created with a subset of searchable properties, changing this subset (by adding a new searchable property in the configuration, for example) will not cause the existing values to be updated in the properties table for that object. To add the new property to the properties table for that object, you must update or recreate the object.

6.2.1.2. Improving Generic Mapping Search Performance (JDBC)

All properties in a generic mapping are searchable by default. In other words, the value of the searchableDefault property is true unless you explicitly set it to false. Although there are no individual indexes in a generic mapping, you can improve search performance by setting only those properties that you need to search as searchable. Properties that are searchable are created within the corresponding properties table. The properties table exists only for searches or look-ups, and has a composite index, based on the resource, then the property name.

The sample JDBC repository configuration files (db/database/conf/repo.jdbc.json) restrict searches to specific properties by setting the searchableDefault to false for managed/user mappings. You must explicitly set searchable to true for each property that should be searched. The following sample extract from repo.jdbc.json indicates searches restricted to the userName property:

"genericMapping" : {
     "managed/user" : {
         "mainTable" : "manageduserobjects",
         "propertiesTable" : "manageduserobjectproperties",
         "searchableDefault" : false,
         "properties" : {
             "/userName" : {
             "searchable" : true
             }
         }
     }
 }, 

With this configuration, IDM creates entries in the properties table only for userName properties of managed user objects.

If the global searchableDefault is set to false, properties that do not have a searchable attribute explicitly set to true are not written in the properties table.

6.2.1.3. Using Explicit Mappings With a JDBC Repository

Explicit mapping is more difficult to set up and maintain, but can take complete advantage of the native database facilities.

An explicit table offers better performance and simpler queries. There is less work in the reading and writing of data, since the data is all in a single row of a single table. In addition, it is easier to create different types of indexes that apply to only specific fields in an explicit table. The disadvantage of explicit tables is the additional work required in creating the table in the schema. Also, because rows in a table are inherently more simple, it is more difficult to deal with complex objects. Any non-simple key:value pair in an object associated with an explicit table is converted to a JSON string and stored in the cell in that format. This makes the value difficult to use, from the perspective of a query attempting to search within it.

Note that it is possible to have a generic mapping configuration for most managed objects, and to have an explicit mapping that overrides the default generic mapping in certain cases. The sample configuration provided in /path/to/openidm/db/mysql/conf/repo.jdbc-mysql-explicit-managed-user.json has a generic mapping for managed objects, but an explicit mapping for managed user objects.

IDM uses explicit mapping for internal system tables, such as the tables used for auditing.

Depending on the types of usage your system is supporting, you might find that an explicit mapping performs better than a generic mapping. Operations such as sorting and searching (such as those performed in the default UI) tend to be faster with explicitly-mapped objects, for example.

The following sample explicit mapping object illustrates how internal/user objects are stored in an explicit table:

"explicitMapping" : {
     "internal/user" : {
         "table" : "internaluser",
         "objectToColumn" : {
             "_id" : "objectid",
             "_rev" : "rev",
             "password" : "pwd",
             "roles" : "roles"
         }
     },
     ...
 }   
<resource-uri> (string, mandatory)

Indicates the URI for the resources to which this mapping applies, for example, internal/user.

table (string, mandatory)

The name of the database table in which the object (in this case internal users) is stored.

objectToColumn (string, mandatory)

The way in which specific managed object properties are mapped to columns in the table.

The mapping can be a simple one to one mapping, for example "userName": "userName", or a more complex JSON map or list. When a column is mapped to a JSON map or list, the syntax is as shown in the following examples:

"messageDetail" : { "column" : "messagedetail", "type" : "JSON_MAP" }

or

"roles": { "column" : "roles", "type" : "JSON_LIST" }

Caution

Support for data types in columns is restricted to String (VARCHAR in the case of MySQL). If you use a different data type, such as DATE or TIMESTAMP, your database must attempt to convert from String to the other data type. This conversion is not guaranteed to work.

If the conversion does work, the format might not be the same when it is read from the database as it was when it was saved. For example, your database might parse a date in the format 12/12/2012 and return the date in the format 2012-12-12 when the property is read.

6.2.2. Generic and Explicit Mappings With a DS Repository

For both generic and explicit mappings, IDM maps object types using a dnTemplate property. The dnTemplate is effectively a pointer to where the object is stored in DS. For example, the following excerpt of the default repo.opendj.json file shows how configuration objects are stored under the DN ou=config,dc=openidm,dc=forgerock,dc=com:

"config": {
    "dnTemplate": "ou=config,dc=openidm,dc=forgerock,dc=com"
},

6.2.2.1. Using Generic Mappings With a DS Repository

By default, IDM uses a generic mapping for all objects except internal users and roles, links, and clustered reconciliation target IDs. Note that clustered reconciliation is not currently supported with a DS repository.

With a generic mapping, all the properties of an object are stored as a single JSON blob in the fr-idm-json attribute. To create a new generic mapping, you need only specify the dnTemplate, that is, where the object will be stored in the directory tree.

You can specify a wildcard mapping, that stores all nested URIs under a particular branch of the directory tree, for example:

"managed/*": {
    "dnTemplate": "ou=managed,dc=openidm,dc=forgerock,dc=com"
},

With this mapping, all objects under managed/, such as managed/user and managed/device, will be stored in the branch ou=managed,dc=openidm,dc=forgerock,dc=com. You do not have to specify separate mappings for each of these objects. The mapping creates a new ou for each object. So, for example, managed/user objects will be stored under the DN ou=user,ou=managed,dc=openidm,dc=forgerock,dc=com and managed/device objects will be stored under the DN ou=device,ou=managed,dc=openidm,dc=forgerock,dc=com.

6.2.2.1.1. Improving Generic Mapping Search Performance (DS)

By default, all generic objects are instances of the fr-idm-generic-obj object class and their properties are stored as a single JSON blob in the fr-idm-json attribute. The fr-idm-json attribute is indexed by default, which results in all attributes of a generic object being indexed. JDBC repositories behave in a similar way, with all generic objects being searchable by default.

To optimize search performance on specific generic resources, you can set up your own schema providers and indices as described in this section. For a detailed explanation of how indexes improve LDAP search performance, see Indexing Attribute Values in the DS Administration Guide.

For managed user objects, the following properties are indexed by default:

  • userName

  • givenName

  • sn

  • mail

  • accountStatus

These indexes are configured as follows in the repo.opendj.json file:

"indices" : {
    ...
    "fr-idm-managed-user-json" : {
      "type" : [ "EQUALITY" ]
    },
    ...
  },
  "schemaProviders" : {
    "Managed User Json" : {
      "matchingRuleName" : "caseIgnoreJsonQueryMatchManagedUser",
      "matchingRuleOid" : "1.3.6.1.4.1.36733.2.3.4.1",
      "caseSensitiveStrings" : false,
      "fields" : [ "userName", "givenName", "sn", "mail", "accountStatus" ]
    },
    ...
  },
     

The indexed properties are listed in the array of fields for that managed object. To index additional managed user properties, you can simply add the property names to this array of fields.
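
For example, to also index the managed user's telephoneNumber property (used here only as an illustration), extend the fields array as follows:

"fields" : [ "userName", "givenName", "sn", "mail", "accountStatus", "telephoneNumber" ]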

To set up indexes on generic objects other than the managed user object, you must do the following:

  • Add the object to the schema file (/path/to/openidm/db/opendj/schema/openidm.ldif).

    You can use the managed user object schema as an example:

    ###
    # Managed User
    ###
    attributeTypes: ( 1.3.6.1.4.1.36733.2.3.1.13
        NAME 'fr-idm-managed-user-json'
        SYNTAX 1.3.6.1.4.1.36733.2.1.3.1
        EQUALITY caseIgnoreJsonQueryMatchManagedUser
        ORDERING caseIgnoreOrderingMatch
        SINGLE-VALUE X-ORIGIN 'OpenIDM OpenDJRepoService')
    objectClasses: ( 1.3.6.1.4.1.36733.2.3.2.6
        NAME 'fr-idm-managed-user'
        SUP top STRUCTURAL
        MUST ( fr-idm-managed-user-json )
        X-ORIGIN 'OpenIDM OpenDJRepoService' )
  • Add the object to the indices property in the conf/repo.opendj.json file.

    The following example sets up an equality index for a managed devices object:

    "indices" : {
        ...
        "fr-idm-managed-devices-json" : {
          "type" : [ "EQUALITY" ]
        },
        ...
    },
  • Add the object to the schemaProviders property in the conf/repo.opendj.json file and list the properties that should be indexed.

    The following example sets up indexes for the deviceName, brand, and assetNumber properties of the managed device object:

    "schemaProviders" : {
        "Managed Device Json" : {
          "matchingRuleName" : "caseIgnoreJsonQueryMatchManagedDevice",
          "matchingRuleOid" : "1.3.6.1.4.1.36733.2.....",
          "caseSensitiveStrings" : false,
          "fields" : [ "deviceName", "brand", "assetNumber" ]
        },

For more information about indexing JSON attributes, see Configuring an Index for a JSON Attribute in the DS Administration Guide.

Note

The OIDs shown in this section are reserved for ForgeRock internal use. If you set up additional objects and attributes, or if you change the default schema, you must specify your own OIDs here.

6.2.2.2. Using Explicit Mappings With a DS Repository

The default configuration uses generic mappings for all objects except internal users and roles, links, and clustered reconciliation target IDs. To use an explicit mapping for managed user objects, follow these steps:

  1. Stop IDM if it is running.

  2. Copy the repo.opendj-explicit-managed-user.json file to your project's conf directory, and rename that file repo.opendj.json:

    $ cd /path/to/openidm
    $ cp db/opendj/conf/repo.opendj-explicit-managed-user.json project-dir/conf/
    $ mv project-dir/conf/repo.opendj-explicit-managed-user.json project-dir/conf/repo.opendj.json
  3. Update the DS schema in the openidm.ldif file, commenting out the fr-idm-managed-user-json object class and attribute:

    $ cd /path/to/openidm
    $ more db/opendj/schema/openidm.ldif
    ###
    # Managed User
    ###
    # attributeTypes: ( 1.3.6.1.4.1.36733.2.3.1.13 NAME 'fr-idm-managed-user-json'
    #    SYNTAX 1.3.6.1.4.1.36733.2.1.3.1 EQUALITY caseIgnoreJsonQueryMatchManagedUser ...
    # objectClasses: ( 1.3.6.1.4.1.36733.2.3.2.6 NAME 'fr-idm-managed-user' SUP top STRUCTURAL
    #    MUST ( fr-idm-managed-user-json )
    #    X-ORIGIN 'OpenIDM OpenDJRepoService' )
  4. Restart IDM.

IDM uses the DS REST to LDAP gateway to map JSON objects to LDAP objects stored in the directory. To create additional explicit mappings, you must specify the LDAP objectClasses to which the object is mapped, and how each property maps to its corresponding LDAP attributes. Specify at least the property type and the corresponding ldapAttribute.

The following excerpt of the explicit managed user object mapping provides an example:

"managed/user" : {
    "dnTemplate": "ou=user,ou=managed,dc=openidm,dc=forgerock,dc=com",
    "objectClasses": [ "person", "organizationalPerson", "inetOrgPerson", "fr-idm-managed-user-explicit" ],
    "properties": {
        "_id": {
            "type": "simple", "ldapAttribute": "uid", "isRequired": true, "writability": "createOnly"
        },
        "userName": {
            "type": "simple", "ldapAttribute": "cn"
        },
        "password": {
            "type": "json", "ldapAttribute": "fr-idm-password"
        },
        "accountStatus": {
            "type": "simple", "ldapAttribute": "fr-idm-accountStatus"
        },
        "roles": {
            "type": "json", "ldapAttribute": "fr-idm-role", "isMultiValued": true
        },
        "effectiveRoles": {
            "type": "json", "ldapAttribute": "fr-idm-effectiveRole", "isMultiValued": true
        },
        "effectiveAssignments": {
            "type": "json", "ldapAttribute": "fr-idm-effectiveAssignment", "isMultiValued": true
        },
        ...
    }
},

You do not need to map the _rev (revision) property of an object as this property is implicit in all objects and maps to the DS etag operational attribute.

For more information about the REST to LDAP property mappings, see Mapping Configuration File in the DS Reference.

Important

DS currently has a default index entry limit of 4000. Therefore, you cannot query more than 4000 records unless you create a Virtual List View (VLV) index. A VLV index is designed to help DS respond to client applications that need to browse through a long list of objects.

You cannot create a VLV index on a JSON attribute. For generic mappings, IDM avoids this restriction by using client-side sorting and searching. However, for explicit mappings you must create a VLV index for any filtered or sorted results, such as results displayed in a UI grid. To configure a VLV index, use the dsconfig command described in Configuring a Virtual List View Index in the DS Administration Guide.
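
The following dsconfig sketch illustrates the general form of such a command. The connection details, backend name, index name, base DN, filter, and sort order shown here are assumptions that you must adapt to your deployment; see the DS documentation for the authoritative syntax:

$ dsconfig create-backend-vlv-index \
 --hostname localhost \
 --port 4444 \
 --bindDN "cn=Directory Manager" \
 --bindPassword password \
 --backend-name userRoot \
 --index-name managed-user-by-username \
 --set base-dn:ou=user,ou=managed,dc=openidm,dc=forgerock,dc=com \
 --set filter:"(objectClass=fr-idm-managed-user-explicit)" \
 --set scope:single-level \
 --set sort-order:"+cn" \
 --trustAll \
 --no-prompt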

6.3. Configuring SSL with a JDBC Repository

To configure SSL with a JDBC repository, import the CA certificate file for the server into the IDM truststore. The examples in this section assume a certificate file named ca-cert.pem. If you have an existing CA or self-signed certificate file, substitute the certificate name accordingly.

To import the CA certificate file into the IDM truststore, use the keytool command native to the Java environment, typically located in the /path/to/jre-version/bin directory. On some UNIX-based systems, /usr/bin/keytool may link to that command.

Procedure 6.1. Preparing IDM for SSL with a JDBC Repository
  1. Import the ca-cert.pem certificate into the IDM truststore file with the following command:

    $ keytool \
     -importcert \
     -trustcacerts \
     -file ca-cert.pem \
     -alias "DB cert" \
     -keystore /path/to/openidm/security/truststore

    You are prompted for a keystore password. You must use the same password that is shown in your project's conf/boot/boot.properties file. The default truststore password is:

    openidm.truststore.password=changeit

    After entering a keystore password, you are prompted with the following question. Assuming you have included an appropriate ca-cert.pem file, enter yes.

    Trust this certificate? [no]: 
  2. Open the repository connection configuration file, datasource.jdbc-default.json and locate the jdbcUrl property.

    Append 8&useSSL=true to the end of that URL.

    The value of the jdbcUrl property depends on your JDBC repository. The following example shows a MySQL repository, configured for SSL:

    "jdbcUrl" : "jdbc:mysql://&{openidm.repo.host}:&{openidm.repo.port}/openidm?allowMultiQueries=true&characterEncoding=utf8&useSSL=true"
  3. Open your project's conf/config.properties file. Find the org.osgi.framework.bootdelegation property. Make sure that property includes a reference to the javax.net.ssl option. If you started with the default version of config.properties that line should now read as follows:

    org.osgi.framework.bootdelegation=sun.*,com.sun.*,apple.*,com.apple.*,javax.net.ssl
  4. Open your project's conf/system.properties file. Add the following line to that file. If appropriate, substitute the path to your own truststore:

    # Set the truststore
    javax.net.ssl.trustStore=&{launcher.install.location}/security/truststore

    Even if you are setting up this instance of IDM as part of a cluster, you must still configure this initial truststore. After this instance joins a cluster, the SSL keys in this particular truststore are replaced. For more information on clustering, see Chapter 23, "Clustering, Failover, and Availability".

6.4. Interacting With the Repository Over REST

The IDM repository is accessible over the REST interface, at the openidm/repo endpoint.

In general, you must ensure that external calls to the openidm/repo endpoint are protected. Native queries and free-form command actions on this endpoint are disallowed by default because the endpoint is vulnerable to injection attacks. For more information, see Section 6.4.1, "Running Queries and Commands on the Repository".

6.4.1. Running Queries and Commands on the Repository

Free-form commands and native queries on the repository are disallowed by default and should remain so in production to reduce the risk of injection attacks.

Common filter expressions, called with the _queryFilter keyword, enable you to form arbitrary queries on the repository, using a number of supported filter operations. For more information on these filter operations, see Section 8.3.4, "Constructing Queries". Parameterized or predefined queries and commands (using the _queryId and _commandId keywords) can be authorized on the repository for external calls if necessary. For more information, see Section 8.3.2, "Parameterized Queries".
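
For example, assuming the default admin credentials and port, the following REST call uses a query filter to return the IDs of all managed users directly from the repository:

$ curl --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/repo/managed/user?_queryFilter=true&_fields=_id"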

Running commands on the repository is supported primarily from scripts. Certain scripts that interact with the repository are provided by default, for example, the scripts that enable you to purge the repository of reconciliation audit records.

You can define your own commands, and specify them in the database table configuration file (either repo.opendj.json or repo.jdbc.json). In the following simple example, a command is called to clear out UI notification entries from the repository, for specific users.

The command is defined in the repository configuration file, as follows:

"commands" : {
"delete-notifications-by-id" : "DELETE FROM ui_notification WHERE receiverId = ${username}"
...
}, 

The command can be called from a script, as follows:

openidm.action("repo/ui/notification", "command", {},
{ "commandId" : "delete-notifications-by-id", "userName" : "scarter"});

Exercise caution when allowing commands to be run on the repository over the REST interface, as there is an attached risk to the underlying data.

Chapter 7. Configuring the Server

This chapter describes how IDM loads and stores its configuration, how the configuration can be changed, and specific configuration recommendations in a production environment.

The configuration is defined in a combination of .properties files, container configuration files, and dynamic configuration objects. Most of the configuration files are stored in your project's conf/ directory. Note that you might see files with a .patch extension in the conf/ and db/repo/conf/ directories. These files specify differences relative to the last released version of IDM and are used by the update mechanism. They do not affect your current configuration.

When the same configuration object is declared in more than one location, the configuration is loaded with the following order of precedence:

  1. System properties passed in on startup through the OPENIDM_OPTS environment variable

  2. Properties declared in the project-dir/conf/system.properties file

  3. Properties declared in the project-dir/conf/boot/boot.properties file

  4. Properties set explicitly in the various project-dir/conf/*.json files

Properties that are set using the first three options are not stored in the repository. You can therefore use these mechanisms to set different configurations for multiple nodes participating in a cluster.

You can access configuration properties in scripts using identityServer.getProperty(). For more information, see Section E.3.8, "The identityServer Variable".
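
For example, a minimal script sketch that reads one of the properties shown earlier in this guide (assuming the property has been set through one of the mechanisms above):

// Read a property set in boot.properties, system.properties, or OPENIDM_OPTS
var repoPort = identityServer.getProperty("openidm.repo.port");
logger.info("Repository port: {}", repoPort);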

To set the configuration in the OPENIDM_OPTS environment variable, export that variable before startup. The following example starts IDM with a different keystore and truststore:

$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m \
 -Dopenidm.keystore.location=/path/to/keystore.jceks -Dopenidm.truststore.location=/path/to/truststore"
$ ./startup.sh
Executing ./startup.sh...
Using OPENIDM_HOME:   /path/to/openidm
Using PROJECT_HOME:   /path/to/openidm
Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Dopenidm.keystore.location=/path/to/keystore.jceks
                      -Dopenidm.truststore.location=/path/to/truststore
Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-> OpenIDM version "5.5.0"
OpenIDM ready

Configuration properties that are explicitly set in project-dir/conf/*.json files are stored in the repository. You can manage these configuration objects by using the REST interface or by using the JSON files themselves. Most aspects of the configuration can also be managed by using the Admin UI, as described in Section 4.1, "Configuring the Server from the Admin UI".

7.1. Configuration Objects

IDM exposes internal configuration objects in JSON format. Configuration elements can be either single instance or multiple instance for an IDM installation.

7.1.1. Single Instance Configuration Objects

Single instance configuration objects correspond to services that have at most one instance per installation. JSON file views of these configuration objects are named object-name.json.

The following list describes the single instance configuration objects:

  • The audit configuration specifies how audit events are logged.

  • The authentication configuration controls REST access.

  • The cluster configuration defines how an IDM instance can be configured in a cluster.

  • The endpoint configuration controls any custom REST endpoints.

  • The info configuration points to script files for the customizable information service.

  • The managed configuration defines managed objects and their schemas.

  • The policy configuration defines the policy validation service.

  • The process access configuration defines access to configured workflows.

  • The repo.repo-type configuration, such as repo.opendj or repo.jdbc, configures the IDM repository.

  • The router configuration specifies filters to apply for specific operations.

  • The script configuration defines the parameters that are used when compiling, debugging, and running JavaScript and Groovy scripts.

  • The sync configuration defines the mappings that IDM uses when it synchronizes and reconciles managed objects.

  • The ui configuration defines the configurable aspects of the default user interfaces.

  • The workflow configuration defines the configuration of the workflow engine.

IDM stores managed objects in the repository, and exposes them under /openidm/managed. System objects on external resources are exposed under /openidm/system.

7.1.2. Multiple Instance Configuration Objects

Multiple instance configuration objects correspond to services that can have many instances per installation. Multiple instance configuration objects are named objectname/instancename, for example, provisioner.openicf/csv.

JSON file views of these configuration objects are named objectname-instancename.json, for example, provisioner.openicf-csv.json.

IDM provides the following multiple instance configuration objects:

  • Multiple schedule configurations can run reconciliations and other tasks on different schedules.

  • Multiple provisioner.openicf configurations correspond to connected resources.

  • Multiple servletfilter configurations can be used for different servlet filters such as the Cross Origin and GZip filters.

7.2. Changing the Default Configuration

When you change configuration objects, take the following points into account:

  • IDM's authoritative configuration source is its repository. While JSON files provide a view of the configuration objects, they do not represent the authoritative source.

    Unless you have disabled file writes, as described in Section 7.4.2, "Disabling Automatic Configuration Updates", IDM updates the JSON files after you make configuration changes over REST. You can also edit the JSON files directly.

  • IDM recognizes changes to JSON files when it is running. The server must be running when you delete configuration objects, even if you do so by editing the JSON files.

  • Avoid editing configuration objects directly in the repository. Instead, edit the configuration over the REST API, or in the configuration JSON files, to ensure consistent behavior and to ensure that operations are logged.

  • By default, IDM stores its configuration in the repository. If you remove an IDM instance and do not specifically drop the repository, the configuration remains in effect for a new instance that uses that repository. For testing or evaluation purposes, you can disable this persistent configuration in the conf/system.properties file by uncommenting the following line:

    # openidm.config.repo.enabled=false

    Disabling persistent configuration means that IDM stores its configuration in memory only. Do not disable persistent configuration in a production environment.

7.3. Changing the Default REST Context

By default, IDM objects are accessible over REST at the context path /openidm/* where * indicates the remainder of the context path, for example /openidm/managed/user. You can change the default REST context (/openidm) by setting the openidm.servlet.alias property in your project's conf/boot/boot.properties file.

The following change to the boot.properties file sets the REST context to /example:

openidm.servlet.alias=/example

After this change, objects are accessible at the /example context path, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/example/managed/user?_queryId=query-all-ids"
{
  "result": [
    {
      "_id": "bjensen",
      "_rev": "0000000042b1dcd2"
    },
    {
      "_id": "scarter",
      "_rev": "000000009b54de8a"
    }
  ],
  ...
}

To ensure that the UI works with the new REST context, also change the commonConstants.context property in the following files:

/path/to/openidm/ui/selfservice/default/org/forgerock/openidm/ui/common/util/Constants.js
/path/to/openidm/ui/admin/default/org/forgerock/openidm/ui/common/util/Constants.js

For example:

$ more /path/to/openidm/ui/selfservice/default/org/forgerock/openidm/ui/common/util/Constants.js
...
define(["org/forgerock/commons/ui/common/util/Constants"], function (commonConstants) {
    commonConstants.context = "example";
...

Note that changing the REST context impacts the API Explorer, described in Section 4.8, "API Explorer". If you want to use the API Explorer with the new REST context, change the url property in the following file:

/path/to/openidm/ui/api/default/index.html

For example:

$ more /path/to/openidm/ui/api/default/index.html
...
    } else {
        // default Swagger JSON URL
        url = "/example/?_api";
    }
 ...

7.4. Configuring the Server for Production

Out of the box, IDM is configured to make it easy to install and evaluate. Specific configuration changes are required before you deploy IDM in a production environment.

7.4.1. Configuring a Production Repository

By default, IDM comes with an internal ForgeRock Directory Services (DS) instance for use as its repository, which makes it easy to get started. The embedded DS instance is not supported as a repository in production, however, so you must switch to a supported JDBC database before you move to production.

For more information, see Chapter 2, "Selecting a Repository" in the Installation Guide.

7.4.2. Disabling Automatic Configuration Updates

By default, IDM polls the JSON files in the conf directory periodically for any changes to the configuration. In a production system, it is recommended that you disable automatic polling for updates to prevent untested configuration changes from disrupting your identity service.

To disable automatic polling for configuration changes, edit the conf/system.properties file for your project, and uncomment the following line:

# openidm.fileinstall.enabled=false

This setting also disables the file-based configuration view, which means that IDM reads its configuration only from the repository.

Before you disable automatic polling, you must have started the server at least once to ensure that the configuration has been loaded into the repository. Be aware that, while automatic polling is enabled, IDM immediately picks up changes to scripts that are called from a JSON configuration file.

When your configuration is complete, you can disable writes to configuration files. To do so, add the following line to the conf/config.properties file for your project:

felix.fileinstall.enableConfigSave=false

7.4.3. Communicating Through a Proxy Server

To set up IDM to communicate through a proxy server, use JVM parameters that identify the proxy host and port.

If you've configured IDM behind a proxy server, include JVM properties from the following table, in the IDM startup script:

Table 7.1. JVM Proxy Properties

JVM Property        Example Values                   Description
-Dhttps.proxyHost   proxy.example.com, 192.168.0.1   Hostname or IP address of the proxy server
-Dhttps.proxyPort   8443, 9443                       Port number on which the proxy server listens

If an insecure port is acceptable, you can also use the -Dhttp.proxyHost and -Dhttp.proxyPort options. You can add these JVM proxy properties to the value of OPENIDM_OPTS in your startup script (startup.sh or startup.bat):

# Only set OPENIDM_OPTS if not already set
[ -z "$OPENIDM_OPTS" ] && OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8443"

7.5. Configuring the Server Over REST

IDM exposes configuration objects under the /openidm/config context path.

To list the configuration on the local host, perform a GET request on http://localhost:8080/openidm/config.

The following REST call includes excerpts of the default configuration for an IDM instance started with the sync-with-csv sample:

$ curl \
 --request GET \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 http://localhost:8080/openidm/config
{
  "_id": "",
  "configurations": [
    {
      "_id": "router",
      "pid": "router",
      "factoryPid": null
    },
    {
      "_id": "info/login",
      "pid": "info.f01fc3ed-5871-408d-a5f0-bef00ccc4c8f",
      "factoryPid": "info"
    },
    {
      "_id": "provisioner.openicf/csv",
      "pid": "provisioner.openicf.9009f4a1-ea47-4227-94e6-69c345864ba7",
      "factoryPid": "provisioner.openicf"
    },
    {
      "_id": "endpoint/usernotifications",
      "pid": "endpoint.e2751afc-d169-4a23-a88e-7211d340bccb",
      "factoryPid": "endpoint"
    },
    ...
  ]
}

Single instance configuration objects are located under openidm/config/object-name. The following example shows the audit configuration of the sync-with-csv sample. The output has been cropped for legibility:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/config/audit"
{
  "_id": "audit",
  "auditServiceConfig": {
    "handlerForQueries": "json",
    "availableAuditEventHandlers": [
      "org.forgerock.audit.handlers.csv.CsvAuditEventHandler",
      "org.forgerock.audit.handlers.elasticsearch.ElasticsearchAuditEventHandler",
      "org.forgerock.audit.handlers.jms.JmsAuditEventHandler",
      "org.forgerock.audit.handlers.json.JsonAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RouterAuditEventHandler",
      "org.forgerock.audit.handlers.splunk.SplunkAuditEventHandler",
      "org.forgerock.audit.handlers.syslog.SyslogAuditEventHandler"
    ],
    "filterPolicies": {
      "value": {
        "excludeIf": [
          "/access/http/request/headers/Authorization",
          "/access/http/request/headers/X-OpenIDM-Password",
          "/access/http/request/cookies/session-jwt",
          "/access/http/response/headers/Authorization",
          "/access/http/response/headers/X-OpenIDM-Password"
        ],
        "includeIf": []
      }
    }
  },
  "eventHandlers": [
    {
      "class": "org.forgerock.audit.handlers.json.JsonAuditEventHandler",
      "config": {
        "name": "json",
        "logDirectory": "&{launcher.working.location}/audit",
        "buffering": {
          "maxSize": 100000,
          "writeInterval": "100 millis"
        },
        "topics": [
          "access",
          "activity",
          "recon",
          "sync",
          "authentication",
          "config"
        ]
      }
    },
    ...
}  

Multiple instance configuration objects are found under openidm/config/object-name/instance-name.

The following example shows the configuration for the CSV connector shown in the sync-with-csv sample. The output has been cropped for legibility:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/config/provisioner.openicf/csv"
{
  "_id": "provisioner.openicf/csv",
  "name": "csvfile",
  "connectorRef": {
    "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
    "bundleVersion": "[1.5.1.4,1.6.0.0)",
    "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector"
  },
  "poolConfigOption": {
    "maxObjects": 10,
    "maxIdle": 10,
    "maxWait": 150000,
    "minEvictableIdleTimeMillis": 120000,
    "minIdle": 1
  },
  "operationTimeout": {
    "CREATE": -1,
    "VALIDATE": -1,
    "TEST": -1,
    "SCRIPT_ON_CONNECTOR": -1,
    "SCHEMA": -1,
    "DELETE": -1,
    "UPDATE": -1,
    "SYNC": -1,
    "AUTHENTICATE": -1,
    "GET": -1,
    "SCRIPT_ON_RESOURCE": -1,
    "SEARCH": -1
  },
  "configurationProperties": {
    "csvFile": "&{launcher.project.location}/data/csvConnectorData.csv"
  },
  ...
}

You can change the configuration over REST by using an HTTP PUT or HTTP PATCH request to modify the required configuration object.

The following example uses a PUT request to modify the configuration of the scheduler service, increasing the maximum number of threads that are available for the concurrent execution of scheduled tasks:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
    "threadPool": {
        "threadCount": "20"
    },
    "scheduler": {
        "executePersistentSchedules": "&{openidm.scheduler.execute.persistent.schedules}"
    }
}' \
 "http://localhost:8080/openidm/config/scheduler"
{
  "_id" : "scheduler",
  "threadPool": {
    "threadCount": "20"
  },
  "scheduler": {
    "executePersistentSchedules": "true"
  }
}

The following example uses a PATCH request to reset the thread count to its original value:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
    {
      "operation" : "replace",
      "field" : "/threadPool/threadCount",
      "value" : "10"
    }
 ]' \
 "http://localhost:8080/openidm/config/scheduler"
{
  "_id": "scheduler",
  "threadPool": {
    "threadCount": "10"
  },
  "scheduler": {
    "executePersistentSchedules": "true"
  }
}

Note

Multi-version concurrency control (MVCC) is not supported for configuration objects, so you do not need to specify a revision during updates to the configuration, and no revision is returned in the output.

For more information about using the REST API to update objects, see Appendix D, "REST API Reference".

7.6. Using Property Value Substitution in the Configuration

In an environment where you have more than one IDM instance, you might require a configuration that is similar, but not identical, across the different instances. IDM supports variable replacement in its configuration, which means that you can modify the effective configuration according to the requirements of a specific environment or instance.

Property substitution enables you to achieve the following:

  • Define a configuration that is specific to a single instance, for example, setting the location of the keystore on a particular host.

  • Define a configuration whose parameters vary between different environments, for example, the URLs and passwords for test, development, and production environments.

  • Disable certain capabilities on specific nodes. For example, you might want to disable the workflow engine on specific instances.

When IDM starts up, it combines the system configuration, which might contain specific environment variables, with the defined configuration properties. This combination makes up the effective configuration for that instance. By varying the environment properties, you can change specific configuration items that vary between instances or environments.

Property references are contained within the construct &{ }. When such references are found, IDM replaces them with the appropriate property value, defined in the boot.properties file.

For properties that would usually be encrypted, such as passwords, IDM does not encrypt the property reference. You can therefore reference an obfuscated property value as shown in the following example:

Specify the reference in the configuration file:

{
...
"password" : "&{openidm.repo.password}",
...
}

Provide the encrypted or obfuscated property value in the boot.properties file:

openidm.repo.password=OBF:1jmv1usdf1t3b1vuz1sfgsb1t2v1ufs1jkn

7.6.1. Using Property Value Substitution With System Properties

You can use property value substitution in conjunction with system properties to modify the configuration according to the system on which the instance runs.

Example 7.1. Custom Audit Log Location

The following example modifies the audit.json file so that the JSON audit logs are written to the user's home directory. The user.home property is a default Java system property:

"eventHandlers" : [
    {
        "class" : "org.forgerock.audit.handlers.json.JsonAuditEventHandler",
        "config" : {
            "name" : "json",
            "logDirectory" : "&{user.home}/audit",
            ...
        }
    },
...

You can define nested properties (that is, a property definition within another property definition), and you can combine system properties and boot properties.

Example 7.2. Defining Different Ports in the Configuration

The following example uses the user.country property, a default Java system property. The example defines specific LDAP ports, depending on the country (identified by the country code) in the boot.properties file. The value of the LDAP port (set in the provisioner.openicf-ldap.json file) depends on the value of the user.country system property.

The port numbers are defined in the boot.properties file as follows:

openidm.NO.ldap.port=2389
openidm.EN.ldap.port=3389
openidm.US.ldap.port=1389

The following excerpt of the provisioner.openicf-ldap.json file shows how the value of the LDAP port is eventually determined, based on the system property:

"configurationProperties" :
   {
      "credentials" : "Passw0rd",
      "port" : "&{openidm.&{user.country}.ldap.port}",
      "principal" : "cn=Directory Manager",
      "baseContexts" :
         [
            "dc=example,dc=com"
         ],
      "host" : "localhost"
   }

7.6.2. Limitations of Property Value Substitution

Note the following limitations when you use property value substitution:

  • You cannot reference complex objects or properties with types other than string. Property values are resolved from the boot.properties file or from the system properties, and the values of these properties are always strings.

    Property substitution of boolean values is currently supported only in stringified format, that is, resulting in "true" or "false", as shown in the example that follows.
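    For example, assuming a hypothetical boot property openidm.feature.enabled=false defined in boot.properties, a configuration file could reference the property as follows. The substituted value is the string "false", not the boolean value false:

    {
        "enabled" : "&{openidm.feature.enabled}"
    }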

7.7. Setting the Script Configuration

The script configuration file (conf/script.json) enables you to modify the parameters that are used when compiling, debugging, and running JavaScript and Groovy scripts.

The default script.json file includes the following parameters:

properties

Any custom properties that should be provided to the script engine.

ECMAScript

Specifies JavaScript debug and compile options. JavaScript is an ECMAScript language.

  • javascript.recompile.minimumInterval - minimum time after which a script can be recompiled.

    The default value is 60000 milliseconds (60 seconds), which means that any changes made to scripts can take up to 60 seconds to be picked up. If you are developing scripts, reduce this parameter to around 100 milliseconds.

    If you set the javascript.recompile.minimumInterval to -1, or remove this property from the script.json file, IDM does not poll JavaScript files to check for changes.

Groovy

Specifies compilation and debugging options related to Groovy scripts. Many of these options are commented out in the default script configuration file. Remove the comments to set these properties:

  • groovy.warnings - the log level for Groovy scripts. Possible values are none, likely, possible, and paranoia.

  • groovy.source.encoding - the encoding format for Groovy scripts. Possible values are UTF-8 and US-ASCII.

  • groovy.target.directory - the directory to which compiled Groovy classes will be output. The default directory is install-dir/classes.

  • groovy.target.bytecode - the bytecode version that is used to compile Groovy scripts. The default version is 1.5.

  • groovy.classpath - the directory in which the compiler should look for compiled classes. The default classpath is install-dir/lib.

    To call an external library from a Groovy script, you must specify the complete path to the .jar file or files, as a value of this property. For example:

    "groovy.classpath" : "/&{launcher.install.location}/lib/http-builder-0.7.1.jar:
             /&{launcher.install.location}/lib/json-lib-2.3-jdk15.jar:
             /&{launcher.install.location}/lib/xml-resolver-1.2.jar:
             /&{launcher.install.location}/lib/commons-collections-3.2.1.jar",

    Note

    If you're deploying on Microsoft Windows, use a semicolon (;) instead of a colon to separate directories in the groovy.classpath.

  • groovy.output.verbose - specifies the verbosity of stack traces. Boolean, true or false.

  • groovy.output.debug - specifies whether debugging messages are output. Boolean, true or false.

  • groovy.errors.tolerance - sets the number of non-fatal errors that can occur before a compilation is aborted. The default is 10 errors.

  • groovy.script.extension - specifies the file extension for Groovy scripts. The default is .groovy.

  • groovy.script.base - defines the base class for Groovy scripts. By default, each script extends groovy.lang.Script.

  • groovy.recompile - indicates whether scripts can be recompiled. Boolean, true or false, with default true.

  • groovy.recompile.minimumInterval - sets the minimum time after which a Groovy script can be recompiled.

    The default value is 60000 milliseconds (60 seconds), which means that any changes made to scripts can take up to 60 seconds to be picked up. If you are developing scripts, reduce this parameter to around 100 milliseconds.

  • groovy.target.indy - specifies whether a Groovy indy test can be used. Boolean, true or false, with default true.

  • groovy.disabled.global.ast.transformations - specifies a list of disabled Abstract Syntax Transformations (ASTs).

sources

Specifies the locations in which IDM expects to find JavaScript and Groovy scripts that are referenced in the configuration.

The following excerpt of the script.json file shows the default locations:

...
"sources" : {
    "default" : {
        "directory" : "&{launcher.install.location}/bin/defaults/script"
    },
    "install" : {
        "directory" : "&{launcher.install.location}"
    },
    "project" : {
        "directory" : "&{launcher.project.location}"
    },
    "project-script" : {
        "directory" : "&{launcher.project.location}/script"
    }
...

Note

The order in which locations are listed in the sources property is important. Scripts are loaded from the bottom up in this list, that is, scripts found in the last location on the list are loaded first.

Note

By default, debug information (such as file name and line number) is excluded from JavaScript exceptions. To troubleshoot script exceptions, you can include debug information by changing the following setting to true in your project's conf/boot/boot.properties file:

javascript.exception.debug.info=false

Including debug information in a production environment is not recommended.

7.8. Calling a Script From a Configuration File

You can call a script from within a configuration file by providing the script source, or by referencing a file that contains the script source. For example:

{
    "type" : "text/javascript",
    "source": string
} 

or

{
    "type" : "text/javascript",
    "file" : file location
} 
type

string, required

Specifies the type of script to be executed. Supported types include text/javascript and groovy.

source

string, required if file is not specified

Specifies the source code of the script to be executed.

file

string, required if source is not specified

Specifies the file containing the source code of the script to execute.

The following sample excerpts from configuration files indicate how scripts can be called.

The following example (included in the sync.json file) returns true if the employeeType is external, and false otherwise. This script can be useful during reconciliation to establish whether a target object should be included in the reconciliation process, or should be ignored:

"validTarget": {
    "type" : "text/javascript",
    "source": "target.employeeType == 'external'"
}  

The following example (included in the sync.json file) sets the __PASSWORD__ attribute to defaultpwd when IDM creates a target object:

"onCreate" : {
    "type" : "text/javascript",
    "source": "target.__PASSWORD__ = 'defaultpwd'"
} 

The following example (included in the router.json file) shows a trigger to create Solaris home directories using a script. The script is located in the file, project-dir/script/createUnixHomeDir.js:

{
    "filters" : [ {
        "pattern" : "^system/solaris/account$",
        "methods" : [ "create" ],
        "onResponse" : {
            "type" : "text/javascript",
            "file" : "script/createUnixHomeDir.js"
        }
    } ]
} 

Often, script files are reused in different contexts. You can pass variables to your scripts to provide these contextual details at runtime. You pass variables to the scripts that are referenced in configuration files by declaring the variable name in the script reference.

The following example of a scheduled task configuration calls a script named triggerEmailNotification.js. The example sets the sender and recipient of the email in the schedule configuration, rather than in the script itself:

{
    "enabled" : true,
    "type" : "cron",
    "schedule" : "0 0/1 * * * ?",
    "persisted" : true,
    "invokeService" : "script",
    "invokeContext" : {
        "script": {
            "type" : "text/javascript",
            "file" : "script/triggerEmailNotification.js",
            "fromSender" : "admin@example.com",
            "toEmail" : "user@example.com"
        }
    }
} 

Tip

In general, you should namespace variables passed into scripts with the globals map. Passing variables in this way prevents collisions with the top-level reserved words for script maps, such as file, source, and type. The following example uses the globals map to namespace the variables passed in the previous example.

"script": {
    "type" : "text/javascript",
    "file" : "script/triggerEmailNotification.js",
    "globals" : {
        "fromSender" : "admin@example.com",
        "toEmail" : "user@example.com"
    }
} 

Script variables are not necessarily simple key:value pairs. A script variable can be any arbitrarily complex JSON object.
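For example, the globals map in the previous schedule configuration could pass a structured object instead of individual strings. The property names in this sketch are illustrative:

"script": {
    "type" : "text/javascript",
    "file" : "script/triggerEmailNotification.js",
    "globals" : {
        "mailParams" : {
            "fromSender" : "admin@example.com",
            "toEmail" : [ "user@example.com", "manager@example.com" ],
            "retry" : { "maxAttempts" : 3, "intervalSeconds" : 60 }
        }
    }
}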

Chapter 8. Accessing Data Objects

IDM supports a variety of objects that can be addressed via a URL or URI. You can access data objects by using scripts (through the Resource API) or by using direct HTTP calls (through the REST API).

The following sections describe these two methods of accessing data objects, and provide information on constructing and calling data queries.

8.1. Accessing Data Objects By Using Scripts

IDM's uniform programming model means that all objects are queried and manipulated in the same way, using the Resource API. The URL or URI that is used to identify the target object for an operation depends on the object type. For an explanation of object types, see Appendix B, "Data Models and Objects Reference". For more information about scripts and the objects available to scripts, see Appendix E, "Scripting Reference".

You can use the Resource API to obtain managed, system, configuration, and repository objects, as follows:

val = openidm.read("managed/organization/mysampleorg")
val = openidm.read("system/mysystem/account")
val = openidm.read("config/custom/mylookuptable")
val = openidm.read("repo/custom/mylookuptable")

For information about constructing an object ID, see Section D.3, "URI Scheme".

You can update entire objects with the update() function, as follows:

openidm.update("managed/organization/mysampleorg", rev, object)
openidm.update("system/mysystem/account", rev, object)

You can apply a partial update to a managed or system object by using the patch() function:

openidm.patch("managed/organization/mysampleorg", rev, value)

The create(), delete(), and query() functions work the same way.
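For example, the following calls show the same pattern for creating, deleting, and querying objects. This is a minimal sketch; the resource paths and filter are illustrative:

openidm.create("managed/organization", null, { "name" : "mysampleorg" })
openidm.delete("managed/organization/mysampleorg", rev)
openidm.query("managed/organization", { "_queryFilter" : '/name sw "my"' })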

8.2. Accessing Data Objects By Using the REST API

IDM provides RESTful access to data objects through the ForgeRock Common REST API. To access objects over REST, you can use a browser-based REST client, such as the Simple REST Client for Chrome, or RESTClient for Firefox. Alternatively you can use the curl command-line utility.

For a comprehensive overview of the REST API, see Appendix D, "REST API Reference".

To obtain a managed object through the REST API, depending on your security settings and authentication configuration, perform an HTTP GET on the corresponding URL, for example http://localhost:8080/openidm/managed/organization/mysampleorg.

By default, the HTTP GET returns a JSON representation of the object.

In general, you can map any HTTP request to the corresponding openidm.method call. The following example shows how the parameters provided in an openidm.query request correspond with the key-value pairs that you would include in a similar HTTP GET request:

Reading an object using the Resource API:

openidm.query("managed/user", { "_queryId": "query-all" }, ["userName","sn"])

Reading an object using the REST API:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all&_fields=userName,sn"

8.3. Defining and Calling Queries

An advanced query model enables you to define queries and to call them over the REST or Resource API. Three types of queries are supported, on both managed and system objects:

  • Common filter expressions

  • Parameterized, or predefined queries

  • Native query expressions

Each of these mechanisms is discussed in the following sections.

8.3.1. Common Filter Expressions

The ForgeRock REST API defines common filter expressions that enable you to form arbitrary queries using a number of supported filter operations. This query capability is the standard way to query data if no predefined query exists, and is supported for all managed and system objects.

Common filter expressions are useful in that they do not require knowledge of how the object is stored and do not require additions to the repository configuration.

Common filter expressions are called with the _queryFilter keyword. The following example uses a common filter expression to retrieve managed user objects whose user name is Smith:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"smith"'

The filter is URL encoded in this example. The corresponding filter using the resource API would be:

openidm.query("managed/user", { "_queryFilter" : '/userName eq "smith"' });

Note that this JavaScript invocation is internal, so it is not subject to the URL-encoding requirements of a GET request. Also, because JavaScript supports single quotes, it is not necessary to escape the double quotes in this example.

For a list of supported filter operations, see Section 8.3.4, "Constructing Queries".

Note that using common filter expressions to retrieve values from arrays is currently not supported. If you need to search within an array, set up a predefined (parameterized) query in your repository configuration. For more information, see Section 8.3.2, "Parameterized Queries".

8.3.2. Parameterized Queries

Managed objects in the supported repositories can be accessed using a parameterized query mechanism. Parameterized queries on repositories are defined in the repository configuration (repo.*.json) and are called by their _queryId.

Parameterized queries provide precise control over the query that is executed. Such control might be useful for tuning, or for performing database operations such as aggregation (which is not possible with a common filter expression).

Parameterized queries provide security and portability for the query call signature, regardless of the backend implementation. Queries that are exposed over the REST interface must be parameterized queries to guard against injection attacks and other misuse. Queries on the officially supported repositories have been reviewed and hardened against injection attacks.

For system objects, support for parameterized queries is restricted to _queryId=query-all-ids. There is currently no support for user-defined parameterized queries on system objects. Typically, parameterized queries on system objects are not called directly over the REST interface, but are issued from internal calls, such as correlation queries.
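For example, a correlation query defined in a sync.json mapping is run internally during reconciliation rather than over REST. The following sketch, based on common sample configurations, references a predefined query by ID; the query ID and parameter names are illustrative:

"correlationQuery" : {
    "type" : "text/javascript",
    "source" : "var query = {'_queryId' : 'for-userName', 'uid' : source.userName}; query;"
}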

A typical query definition is as follows:

"query-all-ids" : "select _openidm_id from ${unquoted:_resource}"

To call this query, you would reference its ID, as follows:

?_queryId=query-all-ids

The following example calls query-all-ids over the REST interface:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids"

8.3.3. Native Query Expressions

Native query expressions are supported for all managed objects and system objects, and can be called directly, rather than being defined in the repository configuration.

Native queries are intended specifically for internal callers, such as custom scripts, and should be used only in situations where the common filter or parameterized query facilities are insufficient. For example, native queries are useful if the query needs to be generated dynamically.

The query expression is specific to the target resource. For repositories, queries use the native language of the underlying data store. For system objects that are backed by OpenICF connectors, queries use the applicable query language of the system resource.

Important

Native query expressions are not supported with the default DS repository.

Native queries on the repository are made using the _queryExpression keyword. For example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/managed/user?_queryExpression=select+from+managed_user"

Unless you have specifically enabled native queries over REST, the previous command returns a 403 access denied error message. Native queries are not portable and do not guard against injection attacks. Such query expressions should therefore not be used or made accessible over the REST interface or over HTTP in production environments. They should be used only via the internal Resource API. If you want to enable native queries over REST for development, see Section 20.2.7, "Protecting Sensitive REST Interface URLs".

Alternatively, if you really need to expose native queries over HTTP, in a selective manner, you can design a custom endpoint to wrap such access.
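For example, a custom endpoint script, registered through a conf/endpoint-*.json file, could run a fixed, internally defined native query and expose only its result. The following is a minimal sketch; the script file name, resource path, and SQL statement are illustrative and assume a JDBC repository:

// script/nativeQueryWrapper.js
// Runs a fixed native query through the internal Resource API and returns the result.
(function () {
    return openidm.query("managed/user",
        { "_queryExpression" : "select * from managed_user" });
}());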

8.3.4. Constructing Queries

The openidm.query function enables you to query managed and system objects. The query syntax is openidm.query(id, params), where id specifies the object on which the query should be performed and params provides the parameters that are passed to the query, either _queryFilter or _queryId. For example:

var params = {
    '_queryFilter' : 'givenName co "' + sourceCriteria + '" or ' + 'sn co "' + sourceCriteria + '"'
};
var results = openidm.query("system/ScriptedSQL/account", params)

Over the REST interface, the query filter is specified as _queryFilter=filter, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"Smith"'

Note the use of double quotes around the search term, Smith. In _queryFilter expressions, string values must be enclosed in double quotes. Numeric and boolean values must not be quoted.

When you call a query over REST, you must URL-encode the filter expression. The following examples show the filter expressions using the resource API and the REST API but, for legibility, do not show the URL encoding.

Note that, for generic mappings, any fields that are included in the query filter (for example userName in the previous query) must be explicitly defined as searchable if you have set the global searchableDefault to false. For more information, see Section 6.2.1.2, "Improving Generic Mapping Search Performance (JDBC)".

The filter expression is constructed from the building blocks shown in this section. In these expressions, the simplest json-pointer is a field of the JSON resource, such as userName or id. A JSON pointer can, however, point to nested elements.
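For example, assuming a managed object schema with a nested address object (an illustrative field), a filter can reference the nested element directly:

"_queryFilter" : '/address/city eq "London"'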

Note

You can also use the negation operator (!) in query construction. For example, a _queryFilter=!(userName+eq+"jdoe") query would return every userName except for jdoe.
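For example, the following resource API call is a minimal sketch that uses the negation operator:

openidm.query("managed/user", { "_queryFilter" : '!(userName eq "jdoe")' });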

You can set up query filters with the following expression types:

8.3.4.1. Comparison Expressions

Note

Certain system endpoints also support EndsWith and ContainsAllValues queries. However, such queries are not supported for managed objects and have not been tested with all supported OpenICF connectors.

8.3.4.1.1. Querying Objects That Equal a Specified Value

This is the associated JSON comparison expression: json-pointer eq json-value.

Consider the following example:

"_queryFilter" : '/givenName eq "Dan"'

The following REST call returns the user name and given name of all managed users whose first name (givenName) is "Dan":

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=givenName+eq+"Dan"&_fields=userName,givenName'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 3,
  "result": [
    {
      "givenName": "Dan",
      "userName": "dlangdon"
    },
    {
      "givenName": "Dan",
      "userName": "dcope"
    },
    {
      "givenName": "Dan",
      "userName": "dlanoway"
    }
  ]
}
8.3.4.1.2. Querying Objects That Contain a Specified Value

This is the associated JSON comparison expression: json-pointer co json-value.

Consider the following example:

"_queryFilter" : '/givenName co "Da"'

The following REST call returns the user name and given name of all managed users whose first name (givenName) contains "Da":

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=givenName+co+"Da"&_fields=userName,givenName'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 10,
  "result": [
    {
      "givenName": "Dave",
      "userName": "djensen"
    },
    {
      "givenName": "David",
      "userName": "dakers"
    },
    {
      "givenName": "Dan",
      "userName": "dlangdon"
    },
    {
      "givenName": "Dan",
      "userName": "dcope"
    },
    {
      "givenName": "Dan",
      "userName": "dlanoway"
    },
    {
      "givenName": "Daniel",
      "userName": "dsmith"
    },
...
}
8.3.4.1.3. Querying Objects That Start With a Specified Value

This is the associated JSON comparison expression: json-pointer sw json-value.

Consider the following example:

"_queryFilter" : '/sn sw "Jen"'

The following REST call returns the user names of all managed users whose last name (sn) starts with "Jen":

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=sn+sw+"Jen"&_fields=userName'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 4,
  "result": [
    {
      "userName": "bjensen"
    },
    {
      "userName": "djensen"
    },
    {
      "userName": "cjenkins"
    },
    {
      "userName": "mjennings"
    }
  ]
}
8.3.4.1.4. Querying Objects That Are Less Than a Specified Value

This is the associated JSON comparison expression: json-pointer lt json-value.

Consider the following example:

"_queryFilter" : '/employeeNumber lt 5000'

The following REST call returns the user names of all managed users whose employeeNumber is lower than 5000:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+lt+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 4999,
  "result": [
    {
      "employeeNumber": 4907,
      "userName": "jnorris"
    },
    {
      "employeeNumber": 4905,
      "userName": "afrancis"
    },
    {
      "employeeNumber": 3095,
      "userName": "twhite"
    },
    {
      "employeeNumber": 3921,
      "userName": "abasson"
    },
    {
      "employeeNumber": 2892,
      "userName": "dcarter"
    }
...
  ]
}
8.3.4.1.5. Querying Objects That Are Less Than or Equal to a Specified Value

This is the associated JSON comparison expression: json-pointer le json-value.

Consider the following example:

"_queryFilter" : '/employeeNumber le 5000'

The following REST call returns the user names of all managed users whose employeeNumber is 5000 or less:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+le+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 5000,
  "result": [
    {
      "employeeNumber": 4907,
      "userName": "jnorris"
    },
    {
      "employeeNumber": 4905,
      "userName": "afrancis"
    },
    {
      "employeeNumber": 3095,
      "userName": "twhite"
    },
    {
      "employeeNumber": 3921,
      "userName": "abasson"
    },
    {
      "employeeNumber": 2892,
      "userName": "dcarter"
    }
...
  ]
}
8.3.4.1.6. Querying Objects That Are Greater Than a Specified Value

This is the associated JSON comparison expression: json-pointer gt json-value

Consider the following example:

"_queryFilter" : '/employeeNumber gt 5000'

The following REST call returns the user names of all managed users whose employeeNumber is higher than 5000:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+gt+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 1458,
  "result": [
    {
      "employeeNumber": 5003,
      "userName": "agilder"
    },
    {
      "employeeNumber": 5011,
      "userName": "bsmith"
    },
    {
      "employeeNumber": 5034,
      "userName": "bjensen"
    },
    {
      "employeeNumber": 5027,
      "userName": "cclarke"
    },
    {
      "employeeNumber": 5033,
      "userName": "scarter"
    }
...
  ]
}
8.3.4.1.7. Querying Objects That Are Greater Than or Equal to a Specified Value

This is the associated JSON comparison expression: json-pointer ge json-value.

Consider the following example:

"_queryFilter" : '/employeeNumber ge 5000'

The following REST call returns the user names of all managed users whose employeeNumber is 5000 or greater:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+ge+5000&_fields=userName,employeeNumber'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 1457,
  "result": [
    {
      "employeeNumber": 5000,
      "userName": "agilder"
    },
    {
      "employeeNumber": 5011,
      "userName": "bsmith"
    },
    {
      "employeeNumber": 5034,
      "userName": "bjensen"
    },
    {
      "employeeNumber": 5027,
      "userName": "cclarke"
    },
    {
      "employeeNumber": 5033,
      "userName": "scarter"
    }
...
  ]
}

8.3.4.2. Presence Expressions

The following examples show how you can build filters using a presence expression, shown as pr. The presence expression is a filter that returns all records with a given attribute.

A presence expression filter, json-pointer pr, evaluates to true for any object in which the json-pointer is present and contains a non-null value. Consider the following expression:

"_queryFilter" : '/mail pr'

The following REST call uses that expression to return the mail addresses for all managed users with a mail property:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=mail+pr&_fields=mail'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 2,
  "result": [
    {
      "mail": "jdoe@exampleAD.com"
    },
    {
      "mail": "bjensen@example.com"
    }
  ]
}

You can also apply the presence filter on system objects. For example, the following query returns the uid of all users in an LDAP system who have the uid attribute in their entries:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/system/ldap/account?_queryFilter=uid+pr&_fields=uid'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 2,
  "result": [
    {
      "uid": "jdoe"
    },
    {
      "uid": "bjensen"
    }
  ]
}

8.3.4.3. Literal Expressions

A literal expression is a boolean:

  • true matches any object in the resource.

  • false matches no object in the resource.

For example, you can list the _id of all managed objects as follows:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=true&_fields=_id'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 2,
  "result": [
    {
      "_id": "d2e29d5f-0d74-4d04-bcfe-b1daf508ad7c"
    },
    {
      "_id": "709fed03-897b-4ff0-8a59-6faaa34e3af6"
    }
  ]
}
    

8.3.4.4. Complex Expressions

You can combine expressions using the boolean operators and, or, and ! (not). The following example queries managed user objects located in London, with last name Jensen:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user/?_queryFilter=city+eq+"London"+and+sn+eq+"Jensen"&_fields=userName,givenName,sn'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 3,
  "result": [
    {
      "sn": "Jensen",
      "givenName": "Clive",
      "userName": "cjensen"
    },
    {
      "sn": "Jensen",
      "givenName": "Dave",
      "userName": "djensen"
    },
    {
      "sn": "Jensen",
      "givenName": "Margaret",
      "userName": "mjensen"
    }
  ]
}

8.3.5. Paging Query Results

The common filter query mechanism supports paged query results for managed objects, and for some system objects, depending on the system resource. There are two ways to page objects in a query:

  • Using a cookie based on the value of a specified sort key.

  • Using an offset that specifies how many records should be skipped before the first result is returned.

These methods are implemented with the following query parameters; an example request follows the parameter descriptions:

_pagedResultsCookie

Opaque cookie used by the server to keep track of the position in the search results. The format of the cookie is a base-64 encoded version of the value of the unique sort key property.

You cannot page results without sorting them (using the _sortKeys parameter). If you do not specify a sort key, the _id of the record is used as the default sort key. At least one of the specified sort key properties must be a unique value property, such as _id.

Tip

For paged searches on generic mappings with the default DS repository, you should sort on the _id property, as this is the only property that is stored outside of the JSON blob. If you sort on something other than _id, the search will incur a performance hit because IDM effectively has to pull the entire result set, and then sort it.

The server provides the cookie value on the first request. You should then supply the cookie value in subsequent requests until the server returns a null cookie, meaning that the final page of results has been returned.

The _pagedResultsCookie parameter is supported only for filtered queries, that is, when used with the _queryFilter parameter. You cannot use the _pagedResultsCookie with a _queryExpression or a _queryId.

The _pagedResultsCookie and _pagedResultsOffset parameters are mutually exclusive, and cannot be used together.

Paged results are enabled only if the _pageSize is a non-zero integer.

_pagedResultsOffset

Specifies the number of records in the result set that should be skipped before the first result is returned. The value of _pagedResultsOffset is an integer. When the value is greater than or equal to 1, the server returns pages starting after the specified index.

This request assumes that the _pageSize is set, and not equal to zero.

For example, if the result set includes 10 records, the _pageSize is 2, and the _pagedResultsOffset is 6, the server skips the first 6 records, then returns records 7 and 8. The _remainingPagedResults value would be 2, because the last two records (9 and 10) have not yet been returned.

If the offset points to a page beyond the last of the search results, the result set returned is empty.

Note that the totalPagedResults and _remainingPagedResults parameters are not supported for all queries. Where they are not supported, their returned value is always -1.

_pageSize

An optional parameter indicating that query results should be returned in pages of the specified size. For all paged result requests other than the initial request, a cookie should be provided with the query request.

The default behavior is not to return paged query results. If set, this parameter should be an integer value, greater than zero.
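For example, the following request is a minimal sketch that returns the first page of two results, sorted by _id. You request subsequent pages by passing the returned pagedResultsCookie value in the _pagedResultsCookie parameter:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=true&_pageSize=2&_sortKeys=_id'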

8.3.6. Sorting Query Results

For common filter query expressions, you can sort the results of a query using the _sortKeys parameter. This parameter takes a comma-separated list as a value and orders the way in which the JSON result is returned, based on this list.

The _sortKeys parameter is not supported for predefined queries.

The following query returns all users with the givenName Dan, and sorts the results alphabetically, according to surname (sn):

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/system/ldap/account?_queryFilter=givenName+eq+"Dan"&_fields=givenName,sn&_sortKeys=sn'
{
  "remainingPagedResults": -1,
  "pagedResultsCookie": null,
  "resultCount": 3,
  "result": [
    {
      "sn": "Cope",
      "givenName": "Dan"
    },
    {
      "sn": "Langdon",
      "givenName": "Dan"
    },
    {
      "sn": "Lanoway",
      "givenName": "Dan"
    }
  ]
}   

8.3.7. Running Scripts on Query Results

For managed objects, IDM provides an onRetrieve script hook that enables you to manipulate properties when an object is retrieved as the result of a query. To use the onRetrieve trigger, the query must include the executeOnRetrieve parameter, which indicates that the query must return the complete object.

For example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 'http://localhost:8080/openidm/managed/user?_queryFilter=sn+eq+"Jensen"&executeOnRetrieve=true'

For performance reasons, executeOnRetrieve is false by default.
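For example, an onRetrieve script on the managed user object could compute a property each time the complete object is retrieved. The following is a minimal sketch, assuming the standard object binding available to managed object triggers; the derived property name is illustrative:

"onRetrieve" : {
    "type" : "text/javascript",
    "source" : "object.displayName = object.givenName + ' ' + object.sn;"
}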

Chapter 9. Managing Users, Groups, and Roles

IDM provides a default schema for typical managed object types, such as users and roles, but does not control the structure of objects that you store in the repository. You can modify or extend the schema for the default object types, and you can set up a new managed object type for any item that can be collected in a data set. For example, with the right schema, you can set up any device associated with the Internet of Things (IoT).

Managed objects and their properties are defined in your project's conf/managed.json file. Note that the schema defined in this file is not a comprehensive list of all the properties that can be stored in the managed object repository. If you use a generic object mapping, you can create a managed object with any arbitrary property, and that property will be stored in the repository. For more information about explicit and generic object mappings, see Section 6.2, "Using Generic and Explicit Object Mappings".

This chapter describes how to work with the default managed object types and how to create new object types as required by your deployment. For more information about the IDM object model, see Appendix B, "Data Models and Objects Reference".

9.1. Creating and Modifying Managed Object Types

If the managed object types provided in the default configuration are not sufficient for your deployment, you can create any number of new managed object types.

The easiest way to create a new managed object type is to use the Admin UI, as follows:

  1. Navigate to the Admin UI URL (https://localhost:8443/admin) then select Configure > Managed Objects > New Managed Object.

  2. Enter a name and readable title for the new managed object. The readable title controls how that object will be referred to in the UI. Optionally, specify an icon that will be displayed for that object type, and a description.

    Click Save.

  3. On the Properties tab, specify the schema for the object type, that is, the properties that make up the object.

  4. On the Scripts tab, specify any scripts that should be applied on various events associated with that object type, for example, when an object of that type is created, updated or deleted.

You can also create a new managed object type by adding its configuration, in JSON, to your project's conf/managed.json file. The following excerpt of the managed.json file shows the configuration of a "Phone" object that was created through the UI:

{
    "name": "Phone",
    "schema": {
        "$schema": "http://forgerock.org/json-schema#",
        "type": "object",
        "properties": {
            "brand": {
                "description": "The supplier of the mobile phone",
                "title": "Brand",
                "viewable": true,
                "searchable": true,
                "userEditable": false,
                "policies": [],
                "returnByDefault": false,
                "minLength": "",
                "pattern": "",
                "isVirtual": false,
                "type": [
                    "string",
                    "null"
                ]
            },
            "assetNumber": {
                "description": "The asset tag number of the mobile device",
                "title": "Asset Number",
                "viewable": true,
                "searchable": true,
                "userEditable": false,
                "policies": [],
                "returnByDefault": false,
                "minLength": "",
                "pattern": "",
                "isVirtual": false,
                "type": "string"
            },
            "model": {
                "description": "The model number of the mobile device, such as 6 plus, Galaxy S4",
                "title": "Model",
                "viewable": true,
                "searchable": false,
                "userEditable": false,
                "policies": [],
                "returnByDefault": false,
                "minLength": "",
                "pattern": "",
                "isVirtual": false,
                "type": "string"
            }
        },
        "required": [],
        "order": [
            "brand",
            "assetNumber",
            "model"
        ]
    }
}

You can add any arbitrary properties to the schema of a new managed object type. A property definition typically includes the following fields:

name

The name of the property.

title

The name of the property, in human-readable language, used to display the property in the UI.

description

A brief description of the property.

viewable

Specifies whether this property is viewable in the object's profile in the UI. Boolean, true or false (true by default).

searchable

Specifies whether this property can be searched in the UI. A searchable property is visible within the Managed Object data grid in the Self-Service UI. Note that for a property to be searchable in the UI, it must be indexed in the repository configuration. For information on indexing properties in a repository, see Section 6.2, "Using Generic and Explicit Object Mappings".

Boolean, true or false (false by default).

userEditable

Specifies whether users can edit the property value in the UI. This property applies in the context of the Self-Service UI, where users are able to edit certain properties of their own accounts. Boolean, true or false (false by default).

isProtected

Specifies whether reauthentication is required if the value of this property changes.

For certain properties, such as passwords, changing the value of the property should force an end user to reauthenticate. These properties are referred to as protected properties. Depending on how the user authenticates (which authentication module is used), the list of protected properties is added to the user's security context. For example, if a user logs in with the login and password of their managed user entry (MANAGED_USER authentication module), their security context will include this list of protected properties. The list of protected properties is not included in the security context if the user logs in with a module that does not support reauthentication (such as through a social identity provider).

minLength

The minimum number of characters that the value of this property must have.

pattern

Any specific pattern to which the value of the property must adhere. For example, a property whose value is a date might require a specific date format.

policies

Any policy validation that must be applied to the property. For more information on managed object policies, see Section 12.1, "Configuring the Default Policy for Managed Objects".

required

Specifies whether the property must be supplied when an object of this type is created. Boolean, true or false.

type

The data type for the property value; can be string, array, boolean, integer, number, object, Resource Collection, or null.

Note

If a property (such as telephoneNumber) might not exist for a particular user, you must include null as one of the property types. You can set a null property type in the Admin UI (Configure > Managed Objects > User > Schema, then select the property and set Nullable to true). You can also set a null property type directly in your managed.json file by setting "type" : [ "string", "null" ] for that property (where string can be any other valid property type). This information is validated by the policy.js script, as described in Section 12.1.3, "Validation of Managed Object Data Types".

If you're configuring a data type of array through the Admin UI, you're limited to two values.

isVirtual

Specifies whether the property takes a static value, or whether its value is calculated "on the fly" as the result of a script. Boolean, true or false.

returnByDefault

For non-core attributes (virtual attributes and relationship fields), specifies whether the property will be returned in the results of a query on an object of this type if it is not explicitly requested. Virtual attributes and relationship fields are not returned by default. Boolean, true or false. When the property is in an array within a relationship, always set to false.
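
If you prefer to review the resulting configuration over REST rather than opening conf/managed.json directly, you can read it from the openidm/config/managed endpoint. This is a read-only check; adjust the credentials, host, and port for your deployment:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/config/managed"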

9.2. Working with Managed Users

User objects that are stored in the repository are referred to as managed users. For a JDBC repository, IDM stores managed users in the managedobjects table. A second table, managedobjectproperties, serves as the index table.

IDM provides RESTful access to managed users, at the context path /openidm/managed/user. For more information, see Section 1.3, "Getting Started With the REST Interface" in the Installation Guide.

You can add, change, and delete managed users by using the Admin UI or over the REST interface. To use the Admin UI, select Manage > User. The user management pages in the UI are intuitive to use.

If you have many managed users, the User List page supports specialized filtering with the Advanced Filter option, which allows you to build many of the queries shown in Section 8.3, "Defining and Calling Queries".

The following examples show how to add, change and delete users over the REST interface. For a reference of all managed user endpoints and actions, see Section D.7.2, "Managing Users Over REST". You can also use the API Explorer as a reference to the managed object REST API. For more information, see Section 4.8, "API Explorer".

The following example retrieves the JSON representation of all managed users in the repository:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
"http://localhost:8080/openidm/managed/user?_queryId=query-all-ids"

The following two examples query all managed users for a user named scarter:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
 "http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+%22scarter%22"

In this second example, note the use of single quotes around the URL, to avoid conflicts with the double quotes around the user named scarter. Note also that the _queryFilter requires double quotes (or the URL-encoded equivalent, %22) around the search term:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"scarter"'

The following example retrieves the JSON representation of a managed user, specified by his ID, scarter:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/scarter"

The following example adds a user with a specific user ID, bjensen:

$ curl \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "If-None-Match: *" \
 --request PUT \
 --data '{
    "userName":"bjensen",
    "sn":"Jensen",
    "givenName":"Barbara",
    "mail": "bjensen@example.com",
    "telephoneNumber": "082082082",
    "password":"Passw0rd"
  }' \
"http://localhost:8080/openidm/managed/user/bjensen"

The following example adds the same user, but allows IDM to generate an ID. Creating objects with system-generated IDs is recommended in production environments:

$ curl \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '{
    "userName":"bjensen",
    "sn":"Jensen",
    "givenName":"Barbara",
    "mail": "bjensen@example.com",
    "telephoneNumber": "082082082",
    "password":"Passw0rd"
  }' \
"http://localhost:8080/openidm/managed/user?_action=create"

The following example checks whether user bjensen exists, then replaces her telephone number with the new data provided in the request body:

$ curl \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '[{
  "operation":"replace",
  "field":"/telephoneNumber",
  "value":"1234567"
  }]' \
  "http://localhost:8080/openidm/managed/user?_action=patch&_queryId=for-userName&uid=bjensen"

The following example deletes user bjensen:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/user/bjensen"

9.3. Working With Managed Groups

IDM provides support for a managed group object. For a JDBC repository, IDM stores managed groups with all other managed objects, in the managedobjects table, and uses the managedobjectproperties for indexing.

The managed group object is not provided by default. To use managed groups, add an object similar to the following to your conf/managed.json file:

{
   "name" : "group"
},  

With this addition, IDM provides RESTful access to managed groups, at the context path /openidm/managed/group.
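
For example, after adding the group object you can create and list groups over REST. Because no schema is defined for the group object in this minimal configuration, the name property in the following sketch is purely illustrative:

$ curl \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '{"name": "employees"}' \
 "http://localhost:8080/openidm/managed/group?_action=create"

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/group?_queryFilter=true"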

For an example of a deployment that uses managed groups, see Chapter 5, "Synchronizing LDAP Groups" in the Samples Guide.

9.4. Working With Managed Roles

IDM supports two types of roles:

  • Provisioning roles - used to specify how objects are provisioned to an external system.

  • Authorization roles - used to specify the authorization rights of a managed object internally, within IDM.

Provisioning roles are always created as managed roles, at the context path openidm/managed/role/role-name. Provisioning roles are granted to managed users as values of the user's roles property.

Authorization roles can be created either as managed roles (at the context path openidm/managed/role/role-name) or as internal roles (at the context path openidm/repo/internal/role/role-name). Authorization roles are granted to managed users as values of the user's authzRoles property.
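
For example, granting a managed authorization role over REST follows the same pattern as granting a provisioning role (shown later in this chapter), except that the grant is added to the user's authzRoles property. In the following sketch, the role ID is a hypothetical placeholder:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
    {
       "operation": "add",
       "field": "/authzRoles/-",
       "value": {"_ref" : "managed/role/ab12cd34-5678-90ef-ab12-cd3456789012"}
    }
 ]' \
 "http://localhost:8080/openidm/managed/user/scarter"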

Both provisioning roles and authorization roles use the relationships mechanism to link the role to the managed object to which it applies. For more information about relationships between objects, see Chapter 10, "Managing Relationships Between Objects".

This section describes how to create and use managed roles, either managed provisioning roles, or managed authorization roles. For more information about internal authorization roles, and how IDM controls authorization to its own endpoints, see Section 19.3, "Authorization".

Managed roles are defined like any other managed object, and are granted to users through the relationships mechanism.

A managed role can be granted manually, as a static value of the user's roles or authzRoles attribute, or dynamically, as a result of a condition or script. For example, a user might be granted a role such as sales-role dynamically, if that user is in the sales organization.

A managed user's roles and authzRoles attributes take an array of references as a value, where the references point to the managed roles. For example, if user bjensen has been granted two provisioning roles (employee and supervisor), the value of bjensen's roles attribute would look something like the following:

"roles": [
    {
      "_ref": "managed/role/employee",
      "_refProperties": {
        "_id": "c090818d-57fd-435c-b1b1-bb23f47eaf09",
        "_rev": "0000000050c62938",
      }
    },
    {
      "_ref": "managed/role/supervisor",
      "_refProperties": {
        "_id": "4961912a-e2df-411a-8c0f-8e63b62dbef6",
        "_rev": "00000000a92657c7",
      }
    }
  ]

Important

The _ref property points to the ID of the managed role that has been granted to the user. This particular example uses a client-assigned ID that is the same as the role name, to make the example easier to understand. All other examples in this chapter use system-assigned IDs. In production, you should use system-assigned IDs for role objects.

The following sections describe how to create, read, update, and delete managed roles, and how to grant roles to users. For information about how roles are used to provision users to external systems, see Section 9.4.8, "Working With Role Assignments".

9.4.1. Creating a Role

The easiest way to create a new role is by using the Admin UI. Select Manage > Role and click New Role on the Role List page. Enter a name and description for the new role and click Save.

Optionally, select Enable Condition to define a query filter that will allow this role to be granted to members dynamically. For more information, see Section 9.4.3.2, "Granting Roles Dynamically".

To create a managed role over REST, send a PUT or POST request to the /openidm/managed/role context path. The following example creates a managed role named employee:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
     "name" : "employee",
     "description" : "Role granted to workers on the company payroll"
 }' \
 "http://localhost:8080/openidm/managed/role?_action=create"
{
  "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a",
	 "_rev": "000000004cab60c8",
  "name": "employee",
  "description": "Role granted to workers on the company payroll"
}
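
Alternatively, you can create a role with a client-assigned ID by sending a PUT request, in the same way as for managed users. The ID employee in the following sketch is for illustration only; as noted earlier, system-assigned IDs are recommended in production:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header "If-None-Match: *" \
 --request PUT \
 --data '{
     "name" : "employee",
     "description" : "Role granted to workers on the company payroll"
 }' \
 "http://localhost:8080/openidm/managed/role/employee"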

At this stage, the employee role has no corresponding assignments. Assignments are what enable the provisioning of attributes to an external system. Assignments are created and maintained as separate managed objects, and are referred to within role definitions. For more information about assignments, see Section 9.4.8, "Working With Role Assignments".

9.4.2. Listing Existing Roles

You can display a list of all configured managed roles over REST or by using the Admin UI.

To list the managed roles in the Admin UI, select Manage > Role.

If you have many managed roles, the Role List page supports specialized filtering with the Advanced Filter option, which allows you to build many of the queries shown in Section 8.3, "Defining and Calling Queries".

To list the managed roles over REST, query the openidm/managed/role endpoint. The following example shows the employee role that you created in the previous section:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/role?_queryFilter=true"
{
  "result": [
    {
      "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a",
	     "_rev": "00000000dc6160c8",
      "name": "employee",
      "description": "Role granted to workers on the company payroll"
    }
  ],
...
}

9.4.3. Granting a Role to a User

Roles are granted to users through the relationship mechanism. Relationships are essentially references from one managed object to another, in this case from a user object to a role object. For more information about relationships, see Chapter 10, "Managing Relationships Between Objects".

Roles can be granted manually or dynamically.

To grant a role manually, you must do one of the following:

  • Update the value of the user's roles property (if the role is a provisioning role) or authzRoles property (if the role is an authorization role) to reference the role.

  • Update the value of the role's members property to reference the user.

Manual role grants are described further in Section 9.4.3.1, "Granting Roles Manually".

Dynamic role grants use the result of a condition or script to update a user's list of roles. Dynamic role grants are described in detail in Section 9.4.3.2, "Granting Roles Dynamically".

9.4.3.1. Granting Roles Manually

To grant a role to a user manually, use the Admin UI or the REST interface as follows:

Using the Admin UI

Use one of the following UI methods to grant a role to a user:

  • Update the user entry:

    1. Select Manage > User and click on the user to whom you want to grant the role.

    2. Select the Provisioning Roles tab and click Add Provisioning Roles.

    3. Select the role from the dropdown list and click Add.

  • Update the role entry:

    1. Select Manage > Role and click on the role that you want to grant.

    2. Select the Role Members tab and click Add Role Members.

    3. Select the user from the dropdown list and click Add.

Over the REST interface

Use one of the following methods to grant a role to a user over REST:

  • Update the user to refer to the role.

    The following sample command grants the employee role (with ID cedadaed-5774-4d65-b4a2-41d455ed524a) to user scarter:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --request PATCH \
     --data '[
        {
           "operation": "add",
           "field": "/roles/-",
           "value": {"_ref" : "managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a"}
        }
     ]' \
     "http://localhost:8080/openidm/managed/user/scarter"
    {
      "_id": "scarter",
      "_rev": "000000004121fb7e",
      "mail": "scarter@example.com",
      "givenName": "Steven",
      "sn": "Carter",
      "description": "Created By XML1",
      "userName": "scarter@example.com",
      "telephoneNumber": "1234567",
      "accountStatus": "active",
      "lastChanged" : {
        "date" : "2017-07-28T16:07:28.544Z"
      },
      "effectiveRoles": [
        {
          "_ref": "managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a"
        }
      ],
      "effectiveAssignments": []
    }

    Note that scarter's effectiveRoles attribute has been updated with a reference to the new role. For more information about effective roles and effective assignments, see Section 9.4.9, "Understanding Effective Roles and Effective Assignments".

    When you update a user's existing roles array, you must use the - special index to add the new value to the set. For more information, see Set semantic arrays in Section D.1.10.1, "Patch Operation: Add".

  • Update the role to refer to the user.

    The following sample command makes scarter a member of the employee role:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --request PATCH \
     --data '[
        {
           "operation": "add",
           "field": "/members/-",
           "value": {"_ref" : "managed/user/scarter"}
        }
     ]' \
     "http://localhost:8080/openidm/managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a"
    {
      "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a",
      "_rev": "0000000050c62938",
      "name": "employee",
      "description": "Role granted to workers on the company payroll"
    }

    Note that the members attribute of a role is not returned by default in the output. To show all members of a role, you must specifically request the relationship properties (*_ref) in your query. The following sample command lists the members of the employee role (currently only scarter):

    $ curl \
      --header "X-OpenIDM-Username: openidm-admin" \
      --header "X-OpenIDM-Password: openidm-admin" \
      --request GET \
      "http://localhost:8080/openidm/managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a?_fields=*_ref,name"
     {
      "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a",
    	 "_rev": "00000000dc6160c8",
      "name": "employee",
      "members": [
        {
          "_ref": "managed/user/scarter",
          "_refProperties": {
            "_id": "98d22d75-7090-47f8-9608-01ff92b447a4",
         	  "_rev": "000000004cab60c8"
          }
        }
      ],
      "authzMembers": [],
      "assignments": []
    }
  • You can replace an existing role grant with a new one by using the replace operation in your patch request.

    The following command replaces scarter's entire roles entry (that is, overwrites any existing roles) with a single entry, the reference to the employee role (ID cedadaed-5774-4d65-b4a2-41d455ed524a):

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --header "Content-Type: application/json" \
     --request PATCH \
     --data '[
       {
         "operation": "replace",
         "field":"/roles",
         "value":[
              {"_ref":"managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a"}
         ]
       }
     ]' \
     "http://localhost:8080/openidm/managed/user/scarter"

9.4.3.2. Granting Roles Dynamically

The previous section showed how to grant roles to a user manually, by listing a reference to the role as a value of the user's roles attribute. You can also grant a role dynamically by using one of the following methods:

  • Granting a role based on a condition, where that condition is expressed in a query filter in the role definition. If the condition is true for a particular member, that member is granted the role.

  • Using a custom script to define a more complex role granting strategy.

9.4.3.2.1. Granting Roles Based on a Condition

A role that is granted based on a defined condition is called a conditional role. To create a conditional role, include a query filter in the role definition.

To create a conditional role by using the Admin UI, select Condition on the role Details page, then define the query filter that will be used to assess the condition. In the following example, the role fr-employee will be granted only to those users who live in France (whose country property is set to FR):

Figure 9.1. Granting a Conditional Role, Based On a Query

To create a conditional role over REST, include the query filter as a value of the condition property in the role definition. The following command creates a role similar to the one created in the previous screen shot:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
    "name": "fr-employee",
    "description": "Role granted to employees resident in France",
    "condition": "/country eq \"FR\""
 }' \
 "http://localhost:8080/openidm/managed/role?_action=create"
 {
  "_id": "4b0a3e42-e5be-461b-a995-3e66c74551c1",
	 "_rev": "000000004cab60c8",
  "name": "fr-employee",
  "description": "Role granted to employees resident in France",
  "condition": "/country eq \"FR\""
}

Important

Properties that are used as the basis of a conditional role query must be configured as searchable and must be indexed in the repository configuration. A searchable property is visible within the Managed Object data grid in the Self-Service UI. For more information, see Section 9.1, "Creating and Modifying Managed Object Types".

When a conditional role is created or updated, IDM automatically assesses all managed users and recalculates the value of their roles property, if they qualify for that role. When a condition is removed from a role, that is, when the role becomes an unconditional role, all conditional grants are removed. Users who were granted the role based on the condition have that role removed from their roles property.

Caution

When a conditional role is defined in an existing data set, every user entry (including the mapped entries on remote systems) must be updated with the assignments implied by that conditional role. The time that it takes to create a new conditional role is impacted by the following items:

  • The number of managed users affected by the condition

  • The number of assignments related to the conditional role

  • The average time required to provision updates to all remote systems affected by those assignments

In a data set with a very large number of users, creating a new conditional role can therefore incur a significant performance cost at the time of creation. Ideally, you should set up your conditional roles at the beginning of your deployment to avoid performance issues later.

9.4.3.2.2. Granting Roles By Using Custom Scripts

The easiest way to grant roles dynamically is to use conditional roles, as described in Section 9.4.3.2.1, "Granting Roles Based on a Condition". If your deployment requires complex conditional logic that cannot be achieved with a query filter, you can create a custom script to grant the role, as follows:

  1. Create a roles directory in your project's script directory and copy the default effective roles script to that new directory:

    $ mkdir project-dir/script/roles/
    $ cp /path/to/openidm/bin/defaults/script/roles/effectiveRoles.js \
     project-dir/script/roles/

    The new script will override the default effective roles script.

  2. Modify the script to reference additional roles that have not been granted manually, or as the result of a conditional grant. The effective roles script calculates the grants that are in effect when the user is retrieved.

    For example, the following addition to the effectiveRoles.js script grants the roles dynamic-role1 and dynamic-role2 to all active users (managed user objects whose accountStatus value is active). This example assumes that you have already created the managed roles dynamic-role1 (with ID d2e29d5f-0d74-4d04-bcfe-b1daf508ad7c) and dynamic-role2 (with ID 709fed03-897b-4ff0-8a59-6faaa34e3af6), and their corresponding assignments:

    // This is the location to expand to dynamic roles,
    // project role script return values can then be added via
    // effectiveRoles = effectiveRoles.concat(dynamicRolesArray);
    
    if (object.accountStatus === 'active') {
        effectiveRoles = effectiveRoles.concat([
          {"_ref": "managed/role/d2e29d5f-0d74-4d04-bcfe-b1daf508ad7c"},
          {"_ref": "managed/role/709fed03-897b-4ff0-8a59-6faaa34e3af6"}
        ]);
    }

Note

For conditional roles, the user's roles property is updated if the user meets the condition. For custom scripted roles, the user's effectiveRoles property is calculated when the user is retrieved and includes the dynamic roles according to the custom script.

If you make any of the following changes to a scripted role grant, you must perform a manual reconciliation of all affected users before assignment changes will take effect on an external system:

  • If you create a new scripted role grant.

  • If you change the definition of an existing scripted role grant.

  • If you change any of the assignment rules for a role that is granted by a custom script.
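
For example, you can trigger a manual reconciliation of the affected mapping over REST. The mapping name in the following sketch (managedUser_systemLdapAccounts) is taken from the assignment examples later in this chapter; substitute the name of your own mapping:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"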

9.4.4. Using Temporal Constraints to Restrict Effective Roles

To restrict the period during which a role is effective, you can set a temporal constraint on the role itself, or on the role grant. A temporal constraint that is set on a role definition applies to all grants of that role. A temporal constraint that is set on a role grant enables you to specify the period that the role is valid per user.

For example, you might want a role definition such as contractors-2016 to apply to all contract employees only for the year 2016. Or you might want a contractors role to apply to an individual user only for the period of his contract of employment.

The following sections describe how to set temporal constraints on role definitions, and on individual role grants.

9.4.4.1. Adding a Temporal Constraint to a Role Definition

When you create a role, you can include a temporal constraint in the role definition that restricts the validity of the entire role, regardless of how that role is granted. Temporal constraints are expressed as a time interval in ISO 8601 date and time format. For more information on this format, see the ISO 8601 standard.

To restrict the period during which a role is valid by using the Admin UI, select Temporal Constraint on the role Details page, then select the timezone and start and end dates for the required period.

In the following example, the Contractor role is effective from January 1st, 2016 to January 1st, 2017:

Figure 9.2. Restricting a Role's Effectiveness to a Specified Time Period

The following example adds a similar contractor role, over the REST interface:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
     "name" : "contractor",
     "description" : "Role granted to contract workers for 2016",
     "temporalConstraints" : [
        {
            "duration" :  "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z"
        }
     ]
 }' \
 "http://localhost:8080/openidm/managed/role?_action=create"
{
  "_id": "071283a8-0237-40a2-a31e-ceaa4d93c93d",
	 "_rev": "000000004cab60c8",
  "name": "contractor",
  "description": "Role granted to contract workers for 2016",
  "temporalConstraints": [
    {
      "duration": "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z"
    }
  ]
}

The preceding example specifies the time zone as Coordinated Universal Time (UTC) by appending Z to the time. If no time zone information is provided, the time zone is assumed to be local time. To specify a different time zone, include an offset (from UTC) in the format ±hh:mm. For example, an interval of 2016-01-01T00:00:00.000+04:00/2017-01-01T00:00:00.000+04:00 specifies a time zone that is four hours ahead of UTC.
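
For example, the following sketch creates a similar role whose constraint is expressed with an offset of four hours ahead of UTC, using the format described above. The role name contractor-gst is purely illustrative:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
     "name" : "contractor-gst",
     "description" : "Role granted to contract workers for 2016, in a time zone four hours ahead of UTC",
     "temporalConstraints" : [
        {
            "duration" : "2016-01-01T00:00:00.000+04:00/2017-01-01T00:00:00.000+04:00"
        }
     ]
 }' \
 "http://localhost:8080/openidm/managed/role?_action=create"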

When the period defined by the constraint has ended, the role object remains in the repository but the effective roles script will not include the role in the list of effective roles for any user.

The following example assumes that user scarter has been granted a role contractor-april. A temporal constraint has been included in the contractor-april definition that specifies that the role should be applicable only during the month of April 2016. At the end of this period, a query on scarter's entry shows that his roles property still includes the contractor-april role (with ID 3eb67be6-205b-483d-b36d-562b43a04ff8), but his effectiveRoles property does not:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/scarter?_fields=_id,userName,roles,effectiveRoles"
{
  "_id": "scarter",
  "_rev": "00000000792afa08",
  "userName": "scarter@example.com",
  "roles": [
    {
      "_ref": "managed/role/3eb67be6-205b-483d-b36d-562b43a04ff8",
      "_refProperties": {
        "temporalConstraints": [],
        "_grantType": "",
        "_id": "257099f5-56e5-4ce0-8580-f0f4d4b93d93",
        "_rev": "000000001298f6a6"
      }
    }
  ],
  "effectiveRoles": []
}

The role is still in place but is no longer effective.

9.4.4.2. Adding a Temporal Constraint to a Role Grant

To restrict the validity of a role for individual users, you can apply a temporal constraint at the grant level, rather than as part of the role definition. In this case, the temporal constraint is taken into account per user, when the user's effective roles are calculated. Temporal constraints that are defined at the grant level can be different for each user who is a member of that role.

To restrict the period during which a role grant is valid by using the Admin UI, set a temporal constraint when you add the member to the role.

For example, to specify that bjensen be added to a Contractor role only for the period of her employment contract, select Manage > Role, click the Contractor role, and click Add Role Members. On the Add Role Members screen, select bjensen from the list, then enable the Temporal Constraint and specify the start and end date of her contract.

To apply a temporal constraint to a grant over the REST interface, include the constraint as one of the _refProperties of the relationship between the user and the role. The following example assumes a contractor role, with ID 9321fd67-30d1-4104-934d-cfd0a22e8182. The command adds user bjensen as a member of that role, with a temporal constraint that specifies that she be a member of the role only for one year, from January 1st, 2016 to January 1st, 2017:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
    {
     "operation": "add",
     "field": "/members/-",
     "value": {
      "_ref" : "managed/user/bjensen",
      "_refProperties": {
       "temporalConstraints": [{"duration": "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z"}]
      }
     }
    }
 ]' \
 "http://localhost:8080/openidm/managed/role/9321fd67-30d1-4104-934d-cfd0a22e8182"
{
  "_id": "9321fd67-30d1-4104-934d-cfd0a22e8182",
  "_rev": "0000000050c62938",
  "name": "contractor",
  "description": "Role for contract workers"
}

A query on bjensen's roles property shows that the temporal constraint has been applied to this grant:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/bjensen/roles?_queryFilter=true"
{
  "result": [
    {
      "_ref": "managed/role/9321fd67-30d1-4104-934d-cfd0a22e8182",
      "_refProperties": {
        "temporalConstraints": [
          {
            "duration": "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z"
          }
        ],
        "_id": "84f5342c-cebe-4f0b-96c9-0267bf68a095",
        "_rev": "000000001298f6a6"
      }
    }
  ],
...
}

9.4.5. Querying a User's Manual and Conditional Roles

The easiest way to check what roles have been granted to a user, either manually, or as the result of a condition, is to look at the user's entry in the Admin UI. Select Manage > User, click on the user whose roles you want to see, and select the Provisioning Roles tab.

If you have many managed roles, the Role List page supports specialized filtering with the Advanced Filter option, which allows you to build many of the queries shown in Section 8.3, "Defining and Calling Queries".

To obtain a similar list over the REST interface, you can query the user's roles property. The following sample query shows that scarter has been granted two roles: an employee role (with ID 6bf4701a-7579-43c4-8bb4-7fd6cac552a1) and an fr-employee role (with ID 00561df0-1e7d-4c8a-9c1e-3b1096116903):

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/scarter/roles?_queryFilter=true&_fields=_ref,_refProperties,name"
{
  "result": [
    {
      "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1",
      "_refProperties": {
        "temporalConstraints": [],
        "_grantType": "",
        "_id": "8417106e-c3ef-4f59-a482-4c92dbf00308",
        "_rev": "00000000792afa08"
      },
      "name": "employee"
    },
    {
      "_ref": "managed/role/00561df0-1e7d-4c8a-9c1e-3b1096116903",
      "_refProperties": {
        "_grantType": "conditional",
        "_id": "e59ce7c3-46ce-492a-ba01-be27af731435",
        "_rev": "000000004121fb7e"
      },
      "name": "fr-employee"
    }
  ],
 ...
}

Note that the fr-employee role has an additional reference property, _grantType. This property indicates how the role was granted to the user. If there is no _grantType, the role was granted manually.

Querying a user's roles in this way does not return any roles that would be in effect as a result of a custom script, or of any temporal constraint applied to the role. To return a complete list of all the roles in effect at a specific time, you need to query the user's effectiveRoles property, as follows:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/scarter?_fields=effectiveRoles"

9.4.6. Deleting a User's Roles

Roles that have been granted manually can be removed from a user's entry in two ways:

  • Update the value of the user's roles property (if the role is a provisioning role) or authzRoles property (if the role is an authorization role) to remove the reference to the role.

  • Update the value of the role's members property to remove the reference to that user.

Both of these actions can be achieved by using the Admin UI, or over REST.

Using the Admin UI

Use one of the following methods to remove a user's roles:

  • Select Manage > User and click on the user whose role or roles you want to remove.

    Select the Provisioning Roles tab, select the role that you want to remove, and click Remove Selected Provisioning Roles.

  • Select Manage > Role and click on the role whose members you want to remove.

    Select the Role Members tab, select the member or members that you want to remove, and click Remove Selected Role Members.

Over the REST interface

Use one of the following methods to remove a role grant from a user:

  • Delete the role from the user's roles property, including the reference ID (the ID of the relationship between the user and the role) in the delete request:

    The following sample command removes the employee role (with ID 6bf4701a-7579-43c4-8bb4-7fd6cac552a1) from user scarter:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request DELETE \
     "http://localhost:8080/openidm/managed/user/scarter/roles/8417106e-c3ef-4f59-a482-4c92dbf00308"
    {
      "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1",
      "_refProperties": {
        "temporalConstraints": [],
        "_grantType": "",
        "_id": "8417106e-c3ef-4f59-a482-4c92dbf00308",
        "_rev": "000000001298f6a6"
      }
    }
  • PATCH the user entry to remove the role from the array of roles, specifying the value of the role object in the JSON payload.

    Caution

    When you remove a role in this way, you must include the entire object in the value, as shown in the following example:

    $ curl \
     --header "Content-type: application/json" \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request PATCH \
     --data '[
        {
          "operation" : "remove",
          "field" : "/roles",
          "value" :     {
           "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1",
           "_refProperties": {
             "temporalConstraints": [],
             "_grantType": "",
             "_id": "8417106e-c3ef-4f59-a482-4c92dbf00308",
             "_rev": "000000001298f6a6"
           }
         }
        }
      ]' \
     "http://localhost:8080/openidm/managed/user/scarter"
    {
      "_id": "scarter",
      "_rev": "000000001298f6a6",
      "mail": "scarter@example.com",
      "givenName": "Steven",
      "sn": "Carter",
      "description": "Created By XML1",
      "userName": "scarter@example.com",
      "telephoneNumber": "1234567",
      "accountStatus": "active",
      "lastChanged" : {
        "date" : "2017-07-28T16:07:28.544Z"
      },
      "effectiveRoles": [],
      "effectiveAssignments": []
    }
  • Delete the user from the role's members property, including the reference ID (the ID of the relationship between the user and the role) in the delete request.

    The following example first queries the members of the employee role, to obtain the ID of the relationship, then removes bjensen's membership from that role:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request GET \
     "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1/members?_queryFilter=true"
    {
      "result": [
        {
          "_ref": "managed/user/bjensen",
          "_refProperties": {
            "temporalConstraints": [],
            "_grantType": "",
            "_id": "3c047f39-a9a3-4030-8d0c-bcd1fadb1d3d",
            "_rev": "00000000c7554e13"
          }
        }
      ],
    ...
    }
    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request DELETE \
     "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1/members/3c047f39-a9a3-4030-8d0c-bcd1fadb1d3d"
    {
      "_ref": "managed/user/bjensen",
      "_refProperties": {
        "temporalConstraints": [],
        "_grantType": "",
        "_id": "3c047f39-a9a3-4030-8d0c-bcd1fadb1d3d",
        "_rev": "00000000c7554e13"
      }
    }

Note

Roles that have been granted as the result of a condition can only be removed when the condition is changed or removed, or when the role itself is deleted.

9.4.7. Deleting a Role Definition

You can delete a managed provisioning or authorization role by using the Admin UI, or over the REST interface.

To delete a role by using the Admin UI, select Manage > Role, select the role you want to remove, and click Delete.

To delete a role over the REST interface, simply delete that managed object. The following command deletes the employee role created in the previous section:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1"
{
  "_id": "6bf4701a-7579-43c4-8bb4-7fd6cac552a1",
  "_rev": "000000004cab60c8",
  "name": "employee",
  "description": "Role granted to workers on the company payroll"
}

Note

You cannot delete a role if it is currently granted to one or more users. If you attempt to delete a role that is granted to a user (either over the REST interface or by using the Admin UI), IDM returns an error. The following command shows an attempt to delete the employee role while it is still granted to user scarter:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1"
{
    "code":409,
    "reason":"Conflict",
    "message":"Cannot delete a role that is currently granted"
 }

9.4.8. Working With Role Assignments

Authorization roles control access to IDM itself. Provisioning roles define rules for how attribute values are updated on external systems. These rules are configured through assignments that are attached to a provisioning role definition. The purpose of an assignment is to provision an attribute or set of attributes, based on an object's role membership.

The synchronization mapping configuration between two resources (defined in the sync.json file) provides the basic account provisioning logic (how an account is mapped from a source to a target system). Role assignments provide additional provisioning logic that is not covered in the basic mapping configuration. The attributes and values that are updated by using assignments might include group membership, access to specific external resources, and so on. A group of assignments can collectively represent a role.

Assignment objects are created, updated and deleted like any other managed object, and are attached to a role by using the relationships mechanism, in much the same way as a role is granted to a user. Assignments are stored in the repository and are accessible at the context path /openidm/managed/assignment.
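
For example, to list all existing assignments, query that context path:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/assignment?_queryFilter=true"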

This section describes how to manipulate managed assignments over the REST interface, and by using the Admin UI. When you have created an assignment, and attached it to a role definition, all user objects that reference that role definition will, as a result, reference the corresponding assignment in their effectiveAssignments attribute.

9.4.8.1. Creating an Assignment

The easiest way to create an assignment is by using the Admin UI, as follows:

  1. Select Manage > Assignment and click New Assignment on the Assignment List page.

  2. Enter a name and description for the new assignment, and select the mapping to which the assignment should apply. The mapping indicates the target resource, that is, the resource on which the attributes specified in the assignment will be adjusted.

  3. Click Add Assignment.

  4. Select the Attributes tab and select the attribute or attributes whose values will be adjusted by this assignment.

    • If a regular text field appears, specify what the value of the attribute should be, when this assignment is applied.

    • If an Item button appears, you can specify a managed object type, such as an object, relationship, or string.

    • If a Properties button appears, you can specify additional information such as an array of role references, as described in Section 9.4, "Working With Managed Roles".

  5. Select the assignment operation from the dropdown list:

    • Merge With Target - the attribute value will be added to any existing values for that attribute. This operation merges the existing value of the target object attribute with the value(s) from the assignment. If duplicate values are found (for attributes that take a list as a value), each value is included only once in the resulting target. This assignment operation is used only with complex attribute values like arrays and objects, and does not work with strings or numbers. (Property: mergeWithTarget.)

    • Replace Target - the attribute value will overwrite any existing values for that attribute. The value from the assignment becomes the authoritative source for the attribute. (Property: replaceTarget.)

    Select the unassignment operation from the dropdown list. You can set the unassignment operation to one of the following:

    • Remove From Target - the attribute value is removed from the system object when the user is no longer a member of the role, or when the assignment itself is removed from the role definition. (Property: removeFromTarget.)

    • No Operation - removing the assignment from the user's effectiveAssignments has no effect on the current state of the attribute in the system object. (Property: noOp.)

  6. Optionally, click the Events tab to specify any scriptable events associated with this assignment.

    The assignment and unassignment operations described in the previous step operate at the attribute level. That is, you specify what should happen with each attribute affected by the assignment when the assignment is applied to a user, or removed from a user.

    The scriptable On assignment and On unassignment events operate at the assignment level, rather than the attribute level. You define scripts here to apply additional logic or operations that should be performed when a user (or other object) receives or loses an entire assignment. This logic can be anything that is not restricted to an operation on a single attribute.

    For information about the variables available to these scripts, see Section E.3.6, "Variables Available to Role Assignment Scripts".

  7. Click the Roles tab to attach this assignment to an existing role definition.

To create a new assignment over REST, send a PUT or POST request to the /openidm/managed/assignment context path.

The following example creates a new managed assignment named employee. The JSON payload in this example shows the following:

  • The assignment is applied for the mapping managedUser_systemLdapAccounts, so attributes will be updated on the external LDAP system specified in this mapping.

  • The name of the attribute on the external system whose value will be set is employeeType and its value will be set to Employee.

  • When the assignment is applied during a sync operation, the attribute value Employee will be added to any existing values for that attribute. When the assignment is removed (if the role is deleted, or if the managed user is no longer a member of that role), the attribute value Employee will be removed from the values of that attribute.

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "name" : "employee",
   "description": "Assignment for employees.",
   "mapping" : "managedUser_systemLdapAccounts",
   "attributes": [
       {
           "name": "employeeType",
           "value": "Employee",
           "assignmentOperation" : "mergeWithTarget",
           "unassignmentOperation" : "removeFromTarget"
       }
   ]
 }' \
 "http://localhost:8080/openidm/managed/assignment?_action=create"
{
  "_id": "2fb3aa12-109f-431c-bdb7-e42213747700",
	 "_rev": "00000000dc6160c8",
  "name": "employee",
  "description": "Assignment for employees.",
  "mapping": "managedUser_systemLdapAccounts",
  "attributes": [
    {
      "name": "employeeType",
      "value": "Employee",
      "assignmentOperation": "mergeWithTarget",
      "unassignmentOperation": "removeFromTarget"
    }
  ]
}

Note that at this stage, the assignment is not linked to any role, so no user can make use of the assignment. You must add the assignment to a role, as described in the following section.

9.4.8.2. Adding an Assignment to a Role

When you have created a managed role, and a managed assignment, you reference the assignment from the role, in much the same way as a user references a role.

You can update a role definition to include one or more assignments, either by using the Admin UI, or over the REST interface.

Using the Admin UI
  1. Select Manage > Role and click on the role to which you want to add an assignment.

  2. Select the Managed Assignments tab and click Add Managed Assignments.

  3. Select the assignment that you want to add to the role and click Add.

Over the REST interface

Update the role definition to include a reference to the ID of the assignment in the assignments property of the role. The following sample command adds the employee assignment (with ID 2fb3aa12-109f-431c-bdb7-e42213747700) to an existing employee role (whose ID is 59a8cc01-bac3-4bae-8012-f639d002ad8c):

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
   {
       "operation" : "add",
       "field" : "/assignments/-",
       "value" : { "_ref": "managed/assignment/2fb3aa12-109f-431c-bdb7-e42213747700" }
   }
 ]' \
 "http://localhost:8080/openidm/managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c"
{
  "_id": "59a8cc01-bac3-4bae-8012-f639d002ad8c",
  "_rev": "00000000c7554e13",
  "name": "employee",
  "description": "Role granted to workers on the company payroll"
}

To check that the assignment was added successfully, you can query the assignments property of the role:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c/assignments?_queryFilter=true&_fields=_ref,_refProperties,name"

{
  "result": [
    {
      "_ref": "managed/assignment/2fb3aa12-109f-431c-bdb7-e42213747700",
      "_refProperties": {
        "_id": "686b328a-e2bd-4e48-be25-4a4e12f3b431",
        "_rev": "0000000050c62938"
      },
      "name": "employee"
    }
  ],
...
}

Note that the role's assignments property now references the assignment that you created in the previous step.

To remove an assignment from a role definition, remove the reference to the assignment from the role's assignments property.
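
For example, the following sketch removes the employee assignment from the employee role by deleting the relationship, in the same way as a role member is removed. It uses the relationship ID (686b328a-e2bd-4e48-be25-4a4e12f3b431) returned by the previous query:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c/assignments/686b328a-e2bd-4e48-be25-4a4e12f3b431"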

9.4.8.3. Deleting an Assignment

You can delete an assignment by using the Admin UI, or over the REST interface.

To delete an assignment by using the Admin UI, select Manage > Assignment, select the assignment you want to remove, and click Delete.

To delete an assignment over the REST interface, simply delete that object. The following command deletes the employee assignment created in the previous section:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/assignment/2fb3aa12-109f-431c-bdb7-e42213747700"
{
  "_id": "2fb3aa12-109f-431c-bdb7-e42213747700",
  "_rev": "000000004121fb7e",
  "name": "employee",
  "description": "Assignment for employees.",
  "mapping": "managedUser_systemLdapAccounts",
  "attributes": [
    {
      "name": "employeeType",
      "value": "Employee",
      "assignmentOperation": "mergeWithTarget",
      "unassignmentOperation": "removeFromTarget"
    }
  ]
}

Note

You can delete an assignment, even if it is referenced by a managed role. When the assignment is removed, any users to whom the corresponding roles were granted will no longer have that assignment in their list of effectiveAssignments. For more information about effective roles and effective assignments, see Section 9.4.9, "Understanding Effective Roles and Effective Assignments".

9.4.8.4. Synchronizing Roles and Assignments

If you have mapped roles and assignments to properties on a target system, and you are preloading the result set into memory, make sure that your targetQuery returns the mapped property. For example, if you have mapped a specific role to the ldapGroups property on the target system, the target query must include the ldapGroups property when it returns the object.

The following excerpt of a mapping indicates that the target query must return the _id of the object as well as its ldapGroups property:

"targetQuery": {
    "_queryFilter" : true,
    "_fields" : "_id,ldapGroups"
},  

For more information about preloading the result set for reconciliation operations, see Section 15.16.4, "Improving Reconciliation Query Performance".

9.4.9. Understanding Effective Roles and Effective Assignments

Effective roles and effective assignments are virtual properties of a user object. Their values are calculated on the fly by the openidm/bin/defaults/script/roles/effectiveRoles.js and openidm/bin/defaults/script/roles/effectiveAssignments.js scripts. These scripts are triggered when a managed user is retrieved.

The following excerpt of a managed.json file shows how these two virtual properties are constructed for each managed user object:

"effectiveRoles" : {
    "type" : "array",
    "title" : "Effective Roles",
    "viewable" : false,
    "returnByDefault" : true,
    "isVirtual" : true,
    "onRetrieve" : {
        "type" : "text/javascript",
         "source" : "require('roles/effectiveRoles').calculateEffectiveRoles(object, 'roles');"
    },
    "items" : {
        "type" : "object"
    }
},
"effectiveAssignments" : {
    "type" : "array",
    "title" : "Effective Assignments",
    "viewable" : false,
    "returnByDefault" : true,
    "isVirtual" : true,
    "onRetrieve" : {
        "type" : "text/javascript",
        "file" : "roles/effectiveAssignments.js",
        "effectiveRolesPropName" : "effectiveRoles"
    },
    "items" : {
        "type" : "object"
    }
},

When a role references an assignment, and a user references the role, that user automatically references the assignment in its list of effective assignments.

The effectiveRoles.js script uses the roles attribute of a user entry to calculate the grants (manual or conditional) that are currently in effect at the time of retrieval, based on temporal constraints or other custom scripted logic.

The effectiveAssignments.js script uses the virtual effectiveRoles attribute to calculate that user's effective assignments. The synchronization engine reads the calculated value of the effectiveAssignments attribute when it processes the user. The target system is updated according to the configured assignmentOperation for each assignment.

Do not change the default effectiveRoles.js and effectiveAssignments.js scripts. If you need to change the logic that calculates effectiveRoles and effectiveAssignments, create your own custom script and include a reference to it in your project's conf/managed.json file. For more information about using custom scripts, see Appendix E, "Scripting Reference".

When a user entry is retrieved, IDM calculates the effectiveRoles and effectiveAssignments for that user based on the current value of the user's roles property, and on any roles that might be granted dynamically through a custom script. The previous set of examples showed the creation of a role employee that referenced an assignment employee and was granted to user bjensen. Querying that user entry would show the following effective roles and effective assignments:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin"  \
 --request GET \
 "http://localhost:8080/openidm/managed/user/bjensen?_fields=userName,roles,effectiveRoles,effectiveAssignments"
{
  "_id": "bjensen",
  "_rev": "00000000dc6160c8",
  "userName": "bjensen@example.com",
  "roles": [
    {
      "_ref": "managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c",
      "_refProperties": {
        "temporalConstraints": [],
        "_grantType": "",
        "_id": "881f0b96-06e9-4af4-b86b-aba4ee15e4ef",
        "_rev": "00000000a92657c7"
      }
    }
  ],
  "effectiveRoles": [
    {
      "_ref": "managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c"
    }
  ],
  "effectiveAssignments": [
    {
      "name": "employee",
      "description": "Assignment for employees.",
      "mapping": "managedUser_systemLdapAccounts",
      "attributes": [
        {
          "name": "employeeType",
          "value": "Employee",
          "assignmentOperation": "mergeWithTarget",
          "unassignmentOperation": "removeFromTarget"
        }
      ],
      "_id": "4606245c-9412-4f1f-af0c-2b06852dedb8",
      "_rev": "00000000792afa08"
    }
  ]
}

In this example, synchronizing the managed/user repository with the external LDAP system defined in the mapping should populate user bjensen's employeeType attribute in LDAP with the value Employee.
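
For example, you could launch that synchronization by starting a reconciliation run over the mapping named in the effective assignment. The following sketch assumes that the managedUser_systemLdapAccounts mapping is configured in your project's sync.json:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"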

9.4.10. Managed Role Script Hooks

Like any other managed object, a role has script hooks that enable you to configure role behavior. The default role definition in conf/managed.json includes the following script hooks:

{
    "name" : "role",
    "onDelete" : {
        "type" : "text/javascript",
        "file" : "roles/onDelete-roles.js"
    },
    "onSync" : {
        "type" : "text/javascript",
        "source" : "require('roles/onSync-roles').syncUsersOfRoles(resourceName, oldObject, newObject, ['members']);"
    },
    "onCreate" : {
        "type" : "text/javascript",
        "source" : "require('roles/conditionalRoles').roleCreate(object);"
    },
    "onUpdate" : {
        "type" : "text/javascript",
        "source" : "require('roles/conditionalRoles').roleUpdate(oldObject, object);"
    },
    "postCreate" : {
        "type" : "text/javascript",
        "file" : "roles/postOperation-roles.js"
    },
    "postUpdate" : {
        "type" : "text/javascript",
        "file" : "roles/postOperation-roles.js"
    },
    "postDelete" : {
        "type" : "text/javascript",
        "file" : "roles/postOperation-roles.js"
    },
...

When a role is deleted, the onDelete script hook calls the bin/defaults/script/roles/onDelete-roles.js script.

When a role is synchronized, the onSync hook causes a synchronization operation on all managed objects that reference the role.

When a conditional role is created or updated, the onCreate and onUpdate script hooks force an update on all managed users affected by the conditional role.

Directly after a role is created, updated, or deleted, the postCreate, postUpdate, and postDelete hooks call the bin/defaults/script/roles/postOperation-roles.js script. Depending on when this script is called, it either creates or removes the scheduled jobs required to manage temporal constraints on roles.

9.5. Running Scripts on Managed Objects

IDM provides a number of hooks that enable you to manipulate managed objects using scripts. These scripts can be triggered during various stages of the lifecycle of the managed object, and are defined in the managed objects configuration file (managed.json).

The scripts can be triggered when a managed object is created (onCreate), updated (onUpdate), retrieved (onRetrieve), deleted (onDelete), validated (onValidate), or stored in the repository (onStore). A script can also be triggered when a change to a managed object triggers an implicit synchronization operation (onSync).

You can also use post-action scripts for managed objects, including after the creation of an object (postCreate), after the update of an object (postUpdate), and after the deletion of an object (postDelete).

The following sample extract of a managed.json file runs a script to calculate the effective assignments of a managed object, whenever that object is retrieved from the repository:

"effectiveAssignments" : {
    "type" : "array",
    "title" : "Effective Assignments",
    "viewable" : false,
    "returnByDefault" : true,
    "isVirtual" : true,
    "onRetrieve" : {
        "type" : "text/javascript",
        "file" : "roles/effectiveAssignments.js",
        "effectiveRolesPropName" : "effectiveRoles"
    },
    "items" : {
        "type" : "object"
    }
},
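
The previous excerpt shows an onRetrieve hook on a single property. Script hooks such as onCreate apply to the managed object as a whole and can modify the object before it is stored. The following minimal sketch is not part of the default configuration; the description text is purely illustrative:

"onCreate" : {
    "type" : "text/javascript",
    "source" : "if (!object.description) { object.description = 'Created by self-registration'; }"
},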

9.6. Encoding Attribute Values

There are two ways to encode attribute values for managed objects: reversible encryption and salted hash algorithms. Attribute values that might be encoded include passwords, authentication questions, credit card numbers, and social security numbers. If passwords are already encoded on the external resource, they are generally excluded from the synchronization process. For more information, see Chapter 18, "Managing Passwords".

You configure attribute value encoding, per schema property, in the managed object configuration (in your project's conf/managed.json file). The following sections show how to use reversible encryption and salted hash algorithms to encode attribute values.

9.6.1. Encoding Attribute Values With Reversible Encryption

The following excerpt of a managed.json file shows a managed object configuration that encrypts and decrypts the password attribute using the default symmetric key:

{
    "objects" : [
        {
            "name" : "user",
            ...
            "schema" : {
                ...
                "properties" : {
                    ...
                    "password" : {
                        "title" : "Password",
                        ...
                        "encryption" : {
                            "key" : "openidm-sym-default"
                        },
                        "scope" : "private",
         ...
        }
    ]
} 

Tip

To configure encryption of properties by using the Admin UI:

  1. Select Configure > Managed Objects, and click on the object type whose property values you want to encrypt (for example User).

  2. On the Properties tab, select the property whose value should be encrypted and select the Encrypt checkbox.

For information about encrypting attribute values from the command-line, see Section 3.4, "Using the encrypt Subcommand".

Important

Hashing is a one-way operation. Property values that are hashed cannot be "unhashed" in the way that encrypted values can be decrypted. Therefore, if you hash the value of any property, you cannot synchronize that property value to an external resource. For managed object properties with hashed values, you must either exclude those properties from the mapping or set a random default value if the external resource requires the property.
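
The following sketch illustrates the second approach in a sync.json property mapping. The target attribute name userPassword is an assumption about your external resource; the transform generates a random value instead of attempting to synchronize the hashed property:

{
    "target" : "userPassword",
    "transform" : {
        "type" : "text/javascript",
        "source" : "java.util.UUID.randomUUID().toString();"
    }
}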

9.6.2. Encoding Attribute Values by Using Salted Hash Algorithms

To encode attribute values with salted hash algorithms, add the secureHash property to the attribute definition, and specify the algorithm that should be used to hash the value.

The following excerpt of a managed.json file shows a managed object configuration that hashes the values of the password attribute using the SHA-1 algorithm:

{
    "objects" : [
        {
            "name" : "user",
            ...
            "schema" : {
                ...
                "properties" : {
                    ...
                    "password" : {
                        "title" : "Password",
                        ...
                        "secureHash" : {
                            "algorithm" : "SHA-1"
                        },
                        "scope" : "private",
         ...
        }
    ]
} 

Tip

To configure hashing of properties by using the Admin UI:

  1. Select Configure > Managed Objects, and click on the object type whose property values you want to hash (for example User).

  2. On the Properties tab, select the property whose value must be hashed and select the Hash checkbox.

  3. Select the algorithm that should be used to hash the property value.

    IDM supports the following hash algorithms:

    MD5
    SHA-1
    SHA-256
    SHA-384
    SHA-512

For information about hashing attribute values from the command-line, see Section 3.5, "Using the secureHash Subcommand".

Chapter 10. Managing Relationships Between Objects

IDM enables you to define relationships between two managed objects. Managed roles are implemented using relationship objects, but you can create a variety of relationship objects, as required by your deployment.

10.1. Defining a Relationship Type

Relationships are defined in your project's managed object configuration file (conf/managed.json). The default configuration includes a relationship named manager that enables you to configure a management relationship between two managed users. The manager relationship is a good example from which to understand how relationships work.

The default manager relationship is configured as follows:

"manager" : {
   "type" : "relationship",
   "returnByDefault" : false,
   "description" : "",
   "title" : "Manager",
   "viewable" : true,
   "searchable" : false,
   "properties" : {
       "_ref" : { "type" : "string" },
       "_refProperties": {
           "type": "object",
           "properties": {
               "_id": { "type": "string" }
           }
   }
},

All relationships have the following configurable properties:

type (string)

The object type. Must be relationship for a relationship object.

returnByDefault (boolean, true, false)

Specifies whether the relationship should be returned in the result of a read or search query on the managed object that has the relationship. If included in an array, always set this property to false. By default, relationships are not returned, unless explicitly requested.

description (string, optional)

An optional string that provides additional information about the relationship object.

title (string)

Used by the UI to refer to the relationship.

viewable (boolean, true, false)

Specifies whether the relationship is visible as a field in the UI. The default value is true.

searchable (boolean, true, false)

Specifies whether values of the relationship can be searched in the UI. For example, if you set this property to true for the manager relationship, a user will be able to search for managed user entries using the manager field as a filter.

_ref (JSON object)

Specifies how the relationship between two managed objects is referenced.

In the relationship definition, the value of this property is { "type" : "string" }. In a managed user entry, the value of the _ref property is the reference to the other resource. The _ref property is described in more detail in Section 10.2, "Establishing a Relationship Between Two Objects".

_refProperties (JSON object)

Specifies any required properties from the relationship that should be included in the managed object. The _refProperties field includes a unique ID (_id) and the revision (_rev) of the object. _refProperties can also contain arbitrary fields to support metadata within the relationship.

10.2. Establishing a Relationship Between Two Objects

When you have defined a relationship type (such as the manager relationship described in the previous section), you can reference that relationship from a managed user by using the _ref property.

For example, imagine that you are creating a new user, psmith, and that psmith's manager will be bjensen. You would add psmith's user entry, and reference bjensen's entry with the _ref property, as follows:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "If-None-Match: *" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
    "sn":"Smith",
    "userName":"psmith",
    "givenName":"Patricia",
    "displayName":"Patti Smith",
    "description" : "psmith - new user",
    "mail" : "psmith@example.com",
    "phoneNumber" : "0831245986",
    "password" : "Passw0rd",
    "manager" : {"_ref" : "managed/user/bjensen"}
  }' \
"http://localhost:8080/openidm/managed/user/psmith" 
{
  "_id": "psmith",
  "_rev": "0000000050c62938",
  "sn": "Smith",
  "userName": "psmith",
  "givenName": "Patricia",
  "displayName": "Patti Smith",
  "description": "psmith - new user",
  "mail": "psmith@example.com",
  "phoneNumber": "0831245986",
  "accountStatus": "active",
  "lastChanged" : {
    "date" : "2017-07-28T15:44:30.177Z"
  },
  "effectiveRoles": [],
  "effectiveAssignments": []
}

Note that the relationship information is not returned by default in the command-line output.

Any change to a relationship triggers a synchronization operation on any other managed objects that are referenced by the relationship. For example, IDM maintains referential integrity by deleting the relationship reference if the object referred to by that relationship is deleted. In our example, if bjensen's user entry is deleted, the corresponding reference in psmith's manager property is removed.
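
For example, the following request (a sketch that follows the earlier examples) deletes bjensen's entry. IDM then removes the corresponding reference from psmith's manager property:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "If-Match: *" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/user/bjensen"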

10.3. Validating Relationships Between Objects

Optionally, you can specify that a relationship between two objects must be validated when the relationship is created. For example, you can indicate that a user cannot reference a role if that role does not exist.

When you create a new relationship type, validation is disabled by default, because it entails an additional query to the referenced object that can be expensive if it is not required. To configure validation of a referenced relationship, set "validate": true in the object configuration (in managed.json). The managed.json files provided with the sample configurations enable validation for the following relationships:

  • For user objects ‒ roles, managers, and reports

  • For role objects ‒ members and assignments

  • For assignment objects ‒ roles

The following configuration of the manager relationship enables validation, and prevents a user from referencing a manager that has not already been created:

"manager" : {
   "type" : "relationship",
   ...
   "validate" : true,

10.4. Working With Bi-Directional Relationships

In many cases, it is useful to define a relationship between two objects in both directions. For example, a relationship between a user and that user's manager might indicate a reverse relationship between the manager and the direct report. Reverse relationships are particularly useful in querying. You might want to query jdoe's user entry to discover who his manager is, or query bjensen's user entry to discover all the users who report to bjensen.

A reverse relationship is declared in the managed object configuration (conf/managed.json). Consider the following sample excerpt of the default managed object configuration:

"reports" : {
   "description" : "",
   "title" : "Direct Reports",
   ...
   "type" : "array",
   "returnByDefault" : false,
   "items" : {
       "type" : "relationship",
       "reverseRelationship" : true,
       "reversePropertyName" : "manager",
       "validate" : true,
       }
   ...

The reports property is a relationship between users and managers, so you can refer to a managed user's direct reports by referencing that property. The reports property is also a reverse relationship ("reverseRelationship" : true), which means that you can list all users that reference a given report. In other words, you can list all users whose manager property is set to the currently queried user.
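
For example, the following request (a sketch that follows the earlier examples; the output depends on your data) lists bjensen's direct reports. Because returnByDefault is false, the reports field must be requested explicitly:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/bjensen?_fields=reports"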

That reverse relationship uses a resourceCollection of managed users, as shown here:

"resourceCollection" : [
   {
       "path" : "managed/user",
       "label" : "User",
       "query" : {
           "queryFilter" : "true",
           "fields" : [
               "userName",
               "givenName",
               "sn"
           ]
       }
   }
]

In this case, users are listed with the noted fields. You can configure these relationships from the Admin UI. For an example of the process, see Section 10.7, "Managing Relationships Through the Admin UI".

10.5. Viewing Relationships Over REST

By default, information about relationships is not returned as the result of a GET request on a managed object. You must explicitly include the relationship property in the request, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/psmith?_fields=manager"
{
  "_id": "psmith",
  "_rev": "000000001298f6a6",
  "manager": {
    "_ref": "managed/user/bjensen",
    "_refProperties": {
      "_id": "e15779ad-be54-4a1c-b643-133dd9bb2e99",
      "_rev": "000000004cab60c8"
    }
  }
}

To obtain more information about the referenced object (psmith's manager, in this case), you can include additional fields from the referenced object in the query, using the syntax object/property (for a simple string value) or object/*/property (for an array of values).

The following example returns the email address and contact number for psmith's manager:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/psmith?_fields=manager/mail,manager/phoneNumber"
{
  "_id": "psmith",
  "_rev": "000000001298f6a6",
  "phoneNumber": "1234567",
  "manager": {
    "_ref": "managed/user/bjensen",
    "_refProperties": {
      "_id": "e15779ad-be54-4a1c-b643-133dd9bb2e99",
      "_rev": "000000000cde398e"
    },
    "mail": "bjensen@example.com",
    "phoneNumber": "1234567"
  }
}

You can query all the relationships associated with a managed object by querying the reference (*_ref) property of the object. For example, the following query shows all the objects that are referenced by psmith's entry:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user/psmith?_fields=*_ref"
{
  "_id": "psmith",
  "_rev": "000000001298f6a6",
  "roles": [],
  "authzRoles": [
    {
      "_ref": "repo/internal/role/openidm-authorized",
      "_refProperties": {
        "_id": "8e7b2c97-dfa8-4eec-a95b-b40b710d443d",
        "_rev": "00000000c7554e13"
      }
    }
  ],
  "manager": {
    "_ref": "managed/user/bjensen",
    "_refProperties": {
      "_id": "3a246327-a972-4576-b6a6-7126df780029",
      "_rev": "00000000792afa08"
    }
  }
}

10.6. Viewing Relationships in Graph Form

The Identity Relationships widget gives a visual display of the relationships between objects.

This widget is not displayed on any dashboard by default. You can add it as follows:

  1. Log into the Admin UI.

  2. Select Dashboards, and choose the dashboard to which you want to add the widget.

    For more information about managing dashboards in the UI, see Section 4.1.2, "Creating and Modifying Dashboards".

  3. Select Add Widgets.

  4. In the Add Widgets window, select Identity Relationships, then click Settings.

  5. Choose the Widget Size (small, medium, or large).

  6. From the Chart Type list, select Collapsible Tree Layout or Radial Layout.

    The Collapsible Tree Layout looks something like this:

    Relationships graph showing manager and direct reports

    The Radial Layout looks something like this:

    Relationships graph showing manager and direct reports
  7. Select the object for which you want to display relationships, for example, User.

  8. Select the property or properties that will be used to search on that object, and that will be displayed in the widget, for example, userName and city. Click Add to add each new property.

  9. When you have configured the widget, you can click Preview for an idea of what the data represented by widget will look like, then click Save.

When you have added the Identity Relationships widget, select the user whose relationships you want to search.

The following graph shows all of imartinez's relationships. The graph shows imartinez's manager and her direct reports.

Relationships graph showing manager and direct reports

Select or deselect the Data Types on the left of the screen to control how much information is displayed.

Select and move the graph for a better view. Double-click on any user in the graph to view that user's profile.

10.7. Managing Relationships Through the Admin UI

This section describes how to set up relationships between managed objects by using the Admin UI. You can set up a relationship between any object types. The examples in this section demonstrate how to set up a relationship between users and devices, such as IoT devices.

For illustration purposes, these examples assume that you have started IDM and already have some managed users. If this is not the case, start the server with the sample configuration described in Chapter 2, "Synchronizing Data From a CSV File to IDM" in the Samples Guide, and run a reconciliation to populate the managed user repository.

In the following procedures, you will:

  • Create a new Device managed object type

  • Configure a relationship between the Device object type and the managed User object type

  • Demonstrate the relationship by assigning devices to users

Procedure 10.1. To Create a New Device Object Type

This procedure illustrates how to set up a new Device managed object type, adding properties to collect information such as model, manufacturer, and serial number for each device. In the next procedure, you will set up the relationship.

  1. Click Configure > Managed Objects > New Managed Object.

    Give the object an appropriate name and Readable Title. For this procedure, specify Device for both these fields.

    Enter a description for the object, select an icon that represents the object, and click Save.

    You should now see three tabs: Properties, Details, and Scripts. Select the Properties tab.

  2. Click Add a Property to set up the schema for the device.

    For each property, enter a Name and Label, select the data Type for the property, and specify whether that property is required for an object of this type.

    For the purposes of this example, include the properties shown in the following image: model, serialNumber, manufacturer, description, and category.

    Devices - All Properties

    When you save the properties for the new managed object type, IDM saves those entries in your project's conf/managed.json file.

  3. Now select Manage > Device > New Device and add a device as shown in the following image:

    Devices - All Properties
  4. Continue adding new devices to the Device object.

    When you have finished, select Manage > Device to view the complete list of Devices.

    The remaining procedures in this section assume that you have added devices similar to the following:

    List of Managed Devices
  5. (Optional) To change the order in which properties of the Device managed object are displayed, select Configure > Managed Objects > Device. Select the property that you want to move and drag it up or down the list.

    Alternatively, you can make the same changes to this schema (or any managed object schema) in your project's conf/managed.json file.

Procedure 10.2. To Configure the Relationship Between a Device and a User

To set up a relationship between the Device object type and the User object type, you must identify the specific property on each object that will form the basis of the relationship. For example, a device must have an owner and a user can own one or more devices. The property type for each of these must be relationship.

In this procedure, you will update the managed Device object type to add a new Relationship type property named owner. You will then link that property to a new property on the managed User object, named device. At the end of the procedure, the updated object types will look as follows:

Figure 10.1. Relationship Properties on User and Device Objects
Updating the managed user and managed device objects with relationship properties

  1. Create a new relationship property on the Device object:

    1. Select Configure > Managed Objects and select the Device object that you created previously.

    2. On the Properties tab, add a new property named owner. Select Relationship as the property Type. Select Required, as all device objects must have an owner:

      Creating a new device property on device objects

      Note

      You cannot change the Type of a property after it has been created. If you create the property with an incorrect Type, you must delete the property and recreate it.

  2. When you have saved the Owner property, select it to show the relationship on the Details tab:

    Details of a Relationship Property
  3. Click the + Related Resource item and select user as the Resource.

    This sets up a relationship between the new Device object and the managed User object.

    Under Display Properties, select all of the properties of the user object that should be visible when you display a user's devices in the UI. For example, you might want to see the user's name, email address and telephone number. Click Add to add each property to the list.

    Note that this list of Display Properties also specifies how you can search for user objects when you are assigning a device to a user.

    Click Show advanced options. Notice that the Query Filter field is set to true. This setting allows you to search on any of the Display Properties that you have selected, when you are assigning a device to a user.

    Click Save to continue.

    You now have a one-way relationship between a device and a user.

  4. Click the + Two-way Relationship item to configure the reverse relationship:

    1. Select Has Many to indicate that a single user can have more than one device.

    2. In the Reverse property name field, enter the new property name that will be created in the managed User object type. As shown in Figure 10.1, "Relationship Properties on User and Device Objects", that property is device in this example.

    3. Under Display Properties, select all of the properties of the device object that should be visible when you display a user in the UI. For example, you might want to see the model and serial number of each device. Click Add to add each property to the list.

    4. Click Show advanced options. Notice that the Query Filter field is set to true. This setting allows you to search on any of the Display Properties that you have selected, when you are assigning a device to a user.

    5. Select Validate relationship.

      This setting ensures that the relationship is valid when a device is assigned to a user. IDM verifies that both the user and device objects exist, and that the specific device has not already been assigned to another user.

    6. Click Save to continue.

  5. You should now have the following reverse relationship configured between User objects and Device objects:

    Reverse relationship configured between users and devices

    Select Configure > Managed Objects > User.

    Scroll down to the end of the Properties tab and notice that the device property was created automatically when you configured the relationship.
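
    The generated device property in your project's conf/managed.json should resemble the following sketch, which is modeled on the reports relationship shown in Section 10.4, "Working With Bi-Directional Relationships"; the exact configuration that the UI generates may differ:

    "device" : {
        "title" : "Device",
        "type" : "array",
        "returnByDefault" : false,
        "items" : {
            "type" : "relationship",
            "reverseRelationship" : true,
            "reversePropertyName" : "owner",
            "validate" : true
        }
    },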

Procedure 10.3. To Demonstrate the Relationship

This procedure demonstrates how devices can be assigned to users, based on the relationship configuration that you set up in the previous two procedures.

  1. Select Manage > User, click on a user entry and select the new Device tab.

  2. Click Add Device and click in the Device field to display the list of devices that you added in the previous procedure.

    Assigning a Device to a User
  3. Select two devices and click Add.

  4. On the Device tab, click the Show Chart icon at the top right.

    A graphical representation of the relationship between the user and her devices is displayed:

    Assigning a Device to a User
  5. You can also assign an owner to a device.

    Select Manage > Device, and select one of the devices that you did not assign in the previous step.

    Click Add Owner and search for the user to whom the device should be assigned.

  6. To demonstrate the relationship validation, try to assign a device that has already been assigned to a different user.

    The UI displays the error: Conflict with Existing Relationship.

10.8. Viewing the Relationship Configuration in the UI

The Managed Objects Relationship Diagram provides a visual display of the relationship configuration between managed objects. Unlike the Identity Relationships widget, described in Section 10.6, "Viewing Relationships in Graph Form", this widget does not show the actual relationship data, but rather shows the configured relationship types.

This widget is not displayed on any dashboard by default. You can add it as follows:

  1. Log into the Admin UI.

  2. Select Dashboards, and choose the dashboard to which you want to add the widget.

    For more information about managing dashboards in the UI, see Section 4.1.2, "Creating and Modifying Dashboards".

  3. Select Add Widgets.

  4. In the Add Widgets window, select Managed Objects Relationship Diagram.

    There are no configurable settings for this widget.

  5. The Preview button shows the current relationship configuration. The following image shows the relationship configuration for a basic IDM installation with no specific configuration:

    Managed Object Relationships Diagram showing relationship configuration for a base install

    The legend indicates which relationships are required, which are optional, and which are one to one or one to many. In the default relationship configuration shown in the previous image, you can see that a user can have one or more roles and a role can have one or more users. A manager can have one or more reports but a user can have only one manager. There are no mandatory relationships in this default configuration.

Chapter 11. Configuring Social Identity Providers

IDM provides a standards-based solution for social authentication requirements, based on the OAuth 2.0 and OpenID Connect 1.0 standards. These standards are closely related: OpenID Connect 1.0 is an authentication layer built on OAuth 2.0.

This chapter describes how to configure IDM to register and authenticate users with multiple social identity providers.

To configure different social identity providers, you'll take the same general steps:

  • Set up the provider. You'll need information such as a Client ID and Client Secret to set up an interface with IDM.

  • Configure the provider on IDM.

  • Set up User Registration. Activate Social Registration in the applicable Admin UI screen or configuration file.

  • After configuration is complete, test the result. For a common basic procedure, see Section 11.18, "Testing Social Identity Providers".

You can configure how IDM handles authentication using social identity providers by opening the Admin UI and selecting Configure > Authentication > Modules > Social Providers. The Social Providers authentication module is enabled by default. For more information, see Section 11.16, "Configuring the Social Providers Authentication Module".

To understand how data is transmitted between IDM and a social identity provider, read Section 11.1, "OpenID Connect Authorization Code Flow".

Note

For all social identity providers, set up an FQDN for IDM, along with information in a DNS server, or system hosts files. For test purposes, FQDNs that comply with RFC 2606, such as localhost and openidm.example.com, are acceptable.
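
For example, for testing on a single machine, an entry similar to the following in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) resolves the test FQDN locally; adjust the IP address and host name to your environment:

127.0.0.1    localhost openidm.example.com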

11.1. OpenID Connect Authorization Code Flow

The OpenID Connect Authorization Code Flow specifies how IDM (Relying Party) interacts with the OpenID Provider (Social ID Provider), based on the use of the OAuth 2.0 authorization grant. The following sequence diagram illustrates successful processing from the authorization request, through grant of the authorization code, access token, ID token, and provisioning from the social identity provider to IDM.

Figure 11.1. OpenID Connect Authorization Code Flow for Social ID Providers
OpenID Connect Authorization Code Flow for Social ID Providers

The following list describes details of each item in the authorization flow:

  1. A user navigates to the IDM Self-Service UI, and selects the Sign In link for the desired social identity provider.

  2. IDM prepares an authorization request.

  3. IDM sends the request to the Authorization Endpoint that you configured for the social identity provider, with a Client ID.

  4. The social identity provider requests end user authentication and consent.

  5. The end user transmits authentication and consent.

  6. The social identity provider sends a redirect message, with an authorization code, to the end user's browser. The redirect message goes to an oauthReturn endpoint, configured in ui.context-oauth.json in your project's conf/ directory.

    When you configure a social identity provider, you'll find the endpoint in the applicable configuration file with the following property: redirectUri.

  7. The browser transmits the redirect message, with the authorization code, to IDM.

  8. IDM records the authorization code, and sends it to the social identity provider Token Endpoint.

  9. The social identity provider token endpoint returns access and ID tokens.

  10. IDM validates the token, and sends it to the social identity provider User Info Endpoint.

  11. The social identity provider responds with information on the user's account, which IDM can provision as a new managed user.

You'll configure these credentials and endpoints, in some form, for each social identity provider.

11.2. Many Social Identity Providers, One Schema

Most social identity providers include common properties, such as name, email address, icon configuration, and location.

IDM includes two sets of property maps that translate information from a social identity provider to your managed user objects. These property maps are as follows:

  • The identityProviders.json file includes a propertyMap code block for each supported provider. This file maps properties from the provider to a generic managed user object. You should not customize this file.

  • The selfservice.propertymap.json file translates the generic managed user properties to the managed user schema that you have defined in managed.json. If you have customized the managed user schema, this is the file that you must change, to indicate how your custom schema maps to the generic managed user schema.

Examine the identityProviders.json file in the conf/ subdirectory for your project. The following excerpt represents the Facebook propertyMap code block from that file:

"propertyMap" : [
   {
      "source" : "id",
      "target" : "id"
   },
   {
      "source" : "name",
      "target" : "displayName"
   },
   {
      "source" : "first_name",
      "target" : "givenName"
   },
   {
      "source" : "last_name",
      "target" : "familyName"
   },
   {
      "source" : "email",
      "target" : "email"
   },
   {
      "source" : "email",
      "target" : "username"
   },
   {
      "source" : "locale",
      "target" : "locale"
   }
]

The source lists the Facebook property; the target lists the corresponding property for a generic managed user.

IDM then processes that information through the selfservice.propertymap.json file, where the source corresponds to the generic managed user and the target corresponds to your customized managed user schema (defined in your project's managed.json file).

{
   "properties" : [
      {
         "source" : "givenName",
         "target" : "givenName"
      },
      {
         "source" : "familyName",
         "target" : "sn"
      },
      {
         "source" : "email",
         "target" : "mail"
      },
      {
         "source" : "postalAddress",
         "target" : "postalAddress",
         "condition" : "/object/postalAddress  pr"
      },
      {
         "source" : "addressLocality",
         "target" : "city",
         "condition" : "/object/addressLocality  pr"
      },
      {
         "source" : "addressRegion",
         "target" : "stateProvince",
         "condition" : "/object/addressRegion  pr"
      },
      {
         "source" : "postalCode",
         "target" : "postalCode",
         "condition" : "/object/postalCode  pr"
      },
      {
         "source" : "country",
         "target" : "country",
         "condition" : "/object/country  pr"
      },
      {
         "source" : "phone",
         "target" : "telephoneNumber",
         "condition" : "/object/phone  pr"
      },
      {
         "source" : "username",
         "target" : "userName"
      }
   ]
}

Tip

To take additional information from a social identity provider, make sure the property is mapped through the identityProviders.json and selfservice.propertymap.json files.

Several of the property mappings include a pr presence expression, which is a filter that returns all records with the given attribute present. For more information, see Section 8.3.4.2, "Presence Expressions".
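
For example, to carry an additional provider property through to a custom managed user property, add a corresponding entry to your project's selfservice.propertymap.json. In the following sketch, the target property preferredLanguage is an assumption about your custom schema; the condition ensures that the mapping applies only when the generic locale property is present:

{
   "source" : "locale",
   "target" : "preferredLanguage",
   "condition" : "/object/locale  pr"
}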

11.3. Setting Up Google as a Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take the same four basic steps to configure Google as a social identity provider for IDM, as described in the following sections.

11.3.1. Setting Up Google

To set up Google as a social identity provider, navigate to the Google API Manager. You'll need a Google account. If you have Gmail, you already have a Google account. While you could use a personal Google account, it is best to use an organizational account to avoid problems if specific individuals leave your organization. When you set up a Google social identity provider, you'll need to perform the following tasks:

Plan ahead. It may take some time before the Google+ API that you configure for IDM is ready for use.

  • In the Google API Manager, select and enable the Google+ API. It is one of the Google "social" APIs.

  • Create a project for IDM.

  • Create OAuth client ID credentials. You'll need to configure an OAuth consent screen with at least a product name and email address.

  • When you set up a Web application for the client ID, you'll need to set up a web client with:

    • Authorized JavaScript origins

      The origin URL for IDM, typically a URL such as https://openidm.example.com:8443

    • Authorized redirect URIs

      The redirect URI after users are authenticated, typically, https://openidm.example.com:8443/oauthReturn/

  • In the list of credentials, you'll see a unique Client ID and Client secret. You'll need this information when you configure the Google social identity provider, as described in Section 11.3.2, "Configuring a Google Social Identity Provider".

For Google's procedure, see the Google Identity Platform documentation on Setting Up OAuth 2.0.

11.3.2. Configuring a Google Social Identity Provider

  1. To configure a Google social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Google social identity provider, and if needed, select the edit icon.

  3. Include the Google values for Client ID and Client Secret for your project, as described earlier in this section.

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.1, "Google Social Identity Provider Configuration Details".

When you enable a Google social identity provider in the Admin UI, IDM generates the identityProvider-google.json file in your project's conf/ subdirectory.

When you review that file, you should see the information that you configured in the Admin UI, as well as additional settings. The first part of the file includes the name of the provider, endpoints, as well as the values for clientId and clientSecret.

{
    "enabled" : true,
    "authorizationEndpoint" : "https://accounts.google.com/o/oauth2/v2/auth",
    "tokenEndpoint" : "https://www.googleapis.com/oauth2/v4/token",
    "userInfoEndpoint" : "https://www.googleapis.com/oauth2/v3/userinfo",
    "wellKnownEndpoint" : "https://accounts.google.com/.well-known/openid-configuration"
    "clientId" : "<someUUID>",
    "clientSecret" : {
        "$crypto" : {
            "type" : "x-simple-encryption",
            "value" : {
                "cipher" : "AES/CBC/PKCS5Padding",
                "salt" : "<hashValue>",
                "data" : "<encryptedValue>",
                "iv" : "<encryptedValue>",
                "key" : "openidm-sym-default",
                "mac" : "<hashValue>"
            }
        }
    },
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, scopes, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Google social identity, you can verify this by selecting Manage > Google, and then selecting a user.
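
You can also inspect these entries over REST. The following sketch assumes the managed/google endpoint that corresponds to Manage > Google in the Admin UI:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/google?_queryFilter=true&_fields=_id"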

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.1, "Google Social Identity Provider Configuration Details".

11.3.3. Configuring User Registration to Link to Google

Once you've configured the Google social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and under the Social tab, enable the option associated with Social Registration. For more information on user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable social registration, you're allowing users to register on IDM through all active social identity providers.

11.4. Setting Up LinkedIn as a Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take the same four basic steps to configure LinkedIn as a social identity provider for IDM, as described in the following sections.

11.4.1. Setting Up LinkedIn

To set up LinkedIn as a social identity provider, navigate to the LinkedIn Developers page for My Applications. You'll need a LinkedIn account. While you could use a personal LinkedIn account, it is best to use an organizational account to avoid problems if specific individuals leave your organization. When you set up a LinkedIn social identity provider, you'll need to perform the following tasks:

  • In the LinkedIn Developers page for My Applications, select Create Application.

  • You'll need to include the following information when creating an application:

    • Company Name

    • Application Name

    • Description

    • Application Logo

    • Application Use

    • Website URL

    • Business Email

    • Business Phone

  • When you see Authentication Keys for your LinkedIn application, save the Client ID and Client Secret.

  • Enable the following default application permissions:

    • r_basicprofile

    • r_emailaddress

  • When you set up a Web application for the client ID, you'll need to set up a web client with OAuth 2.0 Authorized Redirect URLs. For example, if your IDM FQDN is openidm.example.com, add the following URL:

    • http://openidm.example.com:8080/oauthReturn/

    You can ignore any LinkedIn URL boxes related to OAuth 1.0a.

For LinkedIn's procedure, see their documentation on Authenticating with OAuth 2.0.

11.4.2. Configuring a LinkedIn Social Identity Provider

  1. To configure a LinkedIn social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the LinkedIn social identity provider.

  3. Include the values that LinkedIn created for Client ID and Client Secret, as described in Section 11.4.1, "Setting Up LinkedIn".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.2, "LinkedIn Social Identity Provider Configuration Details".

When you enable a LinkedIn social identity provider, IDM generates the identityProvider-linkedIn.json file in your project's conf/ subdirectory.

When you review that file, you should see information beyond what you see in the Admin UI. The first part of the file includes the name of the provider, endpoints, as well as the values for clientId and clientSecret.

{
   "provider" : "linkedIn",
   "authorizationEndpoint" : "https://www.linkedin.com/oauth/v2/authorization",
   "tokenEndpoint" : "https://www.linkedin.com/oauth/v2/accessToken",
   "userInfoEndpoint" : "https://api.linkedin.com/v1/people/~:(id,formatted-name,first-name,last-name,email-address,location)?format=json"
   "provider" : "linkedIn",
   "clientId" : "<someUUID>",
   "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
   },
   "scope" : [
       "r_basicprofile",
       "r_emailaddress"
   ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a LinkedIn social identity, you can verify this by selecting Manage > LinkedIn, and then selecting a user.

If you need more information about the properties in this file, refer to the following appendix: Section I.2, "LinkedIn Social Identity Provider Configuration Details".

11.4.3. Configuring User Registration With LinkedIn

Once you've configured the LinkedIn social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration. Under the Social tab, enable the option associated with Social Registration. For more information about user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable social registration, you're allowing users to register on IDM through all active social identity providers.

11.5. Setting Up Facebook as a Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take the same four basic steps to configure Facebook as a social identity provider for IDM, as described in the following sections.

11.5.1. Setting Up Facebook

To set up Facebook as a social identity provider, navigate to the Facebook for Developers page. You'll need a Facebook account. While you could use a personal Facebook account, it is best to use an organizational account to avoid problems if specific individuals leave your organization. When you set up a Facebook social identity provider, you'll need to perform the following tasks:

  • In the Facebook for Developers page, select My Apps and Add a New App. For IDM, you'll create a Website application.

  • You'll need to include the following information when creating a Facebook website application:

    • Display Name

    • Contact Email

    • OpenIDM URL

  • When complete, you should see your App and a link to a Dashboard. Navigate to the Dashboard for your App.

  • Make a copy of the App ID and App Secret for when you configure the Facebook social identity provider in IDM.

  • In the settings for your App, you should see an entry for App Domains, such as example.com.

For Facebook's documentation on the subject, see Facebook Login for the Web with the JavaScript SDK.

11.5.2. Configuring a Facebook Social Identity Provider

  1. To configure a Facebook social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Facebook social identity provider.

  3. Include the values that Facebook created for App ID and App Secret, as described in Section 11.5.1, "Setting Up Facebook".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.3, "Facebook Social Identity Provider Configuration Details".

When you enable a Facebook social identity provider in the Admin UI, IDM generates the identityProvider-facebook.json file in your project's conf/ subdirectory.

This file includes the values that you configured through the Admin UI. While the labels in the UI specify App ID and App Secret, you'll see them as clientId and clientSecret, respectively, in the configuration file.

{
   "provider" : "facebook",
   "authorizationEndpoint" : "https://www.facebook.com/dialog/oauth",
   "tokenEndpoint" : "https://graph.facebook.com/v2.7/oauth/access_token",
   "userInfoEndpoint" : "https://graph.facebook.com/me?fields=id,name,picture,email,first_name,last_name,locale"
   "clientId" : "<someUUID>",
   "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
   },
   "scope" : [
       "email",
       "user_birthday"
   ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Facebook social identity, you can verify this by selecting Manage > Facebook, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.3, "Facebook Social Identity Provider Configuration Details".

11.5.3. Configuring User Registration to Link to Facebook

Once you've configured the Facebook social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and under the Social tab, enable the option associated with Social Registration. For more information about user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable social registration, you're allowing users to register on IDM through all active social identity providers.

11.6. Setting Up Amazon as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take the same four basic steps to configure Amazon as a social identity provider for IDM, as described in the following sections.

Note

Amazon as a social identity provider requires access over secure HTTP (HTTPS).

11.6.1. Setting Up Amazon

To set up Amazon as a social identity provider, navigate to the following Amazon page: Register Your Website With Login With Amazon. You'll need an Amazon account. You'll also need to register a security profile.

When you set up Amazon as a social identity provider, navigate to the Amazon Security Profile Management page. You'll need to enter the following:

  • Security Profile Name (The name of your app)

  • Security Profile Description

  • Consent Privacy Notice URL

  • Consent Logo Image (optional)

When complete and saved, you should see a list of security profiles with OAuth2 credentials. You should be able to find the Client ID and Client Secret from this screen.

However, you still need to configure the web settings for your new Security Profile. You can find a list of your existing Security Profiles on the Login with Amazon Developer Console page. You can access that page from the Amazon Developer Console dashboard by selecting Apps and Services > Login with Amazon. You can then Manage the Web Settings for that app.

In the Web Settings for your app, you'll need to set either of the following properties:

  • Allowed Origins

  • Allowed Return URLs

The Allowed Return URLs must match the Redirect URI that you configure for the Amazon provider in IDM, for example, https://openidm.example.com:8443/oauthReturn/.

11.6.2. Configuring an Amazon Social Identity Provider

  1. To configure an Amazon social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Amazon social identity provider.

    In the Amazon Provider pop-up that appears, the values for Redirect URI should match the values that you've entered for Allowed Return URLs in Section 11.6.1, "Setting Up Amazon".

  3. Include the values that Amazon created for Client ID and Client Secret, as described in Section 11.6.1, "Setting Up Amazon".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.4, "Amazon Social Identity Provider Configuration Details".

When you enable an Amazon social identity provider in the Admin UI, IDM generates the identityProvider-amazon.json file in your project's conf/ subdirectory.

When you review that file, you should see information beyond what you see in the Admin UI. The first part of the file includes the name of the provider, endpoints, as well as the values for clientId and clientSecret.

{
   "provider" : "amazon",
   "authorizationEndpoint" : "https://www.amazon.com/ap/oa",
   "tokenEndpoint" : "https://api.amazon.com/auth/o2/token",
   "userInfoEndpoint" : "https://api.amazon.com/user/profile"
   "enabled" : true,
   "clientId" : "<someUUID>",
   "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
   },
   "scope" : [
       "profile"
   ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with an Amazon social identity, you can verify this by selecting Manage > Amazon, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.4, "Amazon Social Identity Provider Configuration Details".

11.6.3. Configuring User Registration to Link to Amazon

Once you've configured the Amazon social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.7. Setting Up Microsoft as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take the same four basic steps to configure Microsoft as a social identity provider for IDM, as described in the following sections.

Note

Microsoft as a social identity provider requires access over secure HTTP (HTTPS). This example assumes that you've configured IDM on https://openidm.example.com:8443. Substitute your URL for openidm.example.com.

11.7.1. Setting Up Microsoft

For Microsoft documentation on how to set up a social identity provider, navigate to the following article: Sign-in Microsoft Account & Azure AD users in a single app . You'll need a Microsoft account.

To set up Microsoft as a social identity provider, navigate to the Microsoft Application Registration Portal, and select Go to App List. After logging in with your Microsoft account, you should see a list of existing Microsoft applications. Select Add an App, enter an Application Name, and create the app. Then complete the following steps:

  • You should see an Application ID.

  • To find your Application Secret, select Generate New Password. That password is your Application Secret.

    Tip

    Save your new password. This is the only time you'll see the Application Secret for your new app.

  • Select Add Platform. You'll choose a Web platform, enable Allow Implicit Flow and set up the following value for Redirect URI:

    • https://openidm.example.com:8443/oauthReturn/

If desired, you can also enter the following information:

  • Logo image

  • Terms of Service URL

  • Privacy Statement URL

The OAuth2 credentials for your new Microsoft App include an Application ID and Application Secret for your app.

11.7.2. Configuring a Microsoft Social Identity Provider

  1. To configure a Microsoft social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Microsoft social identity provider.

    In the Microsoft Provider pop-up that appears, the values for Redirect URI should match the values that you've entered for Allowed Return URLs in Section 11.7.1, "Setting Up Microsoft".

  3. Include the values that Microsoft created for Client ID and Client Secret, as described in Section 11.7.1, "Setting Up Microsoft".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.5, "Microsoft Social Identity Provider Configuration Details".

When you enable a Microsoft social identity provider in the Admin UI, IDM generates the identityProvider-microsoft.json file in your project's conf/ subdirectory.

It includes parts of the file that you may have configured through the Admin UI. While the labels in the UI specify Application ID and Application Secret, you'll see them as clientId and clientSecret, respectively, in the configuration file.

"provider" : "microsoft",
   "authorizationEndpoint" : "https://login.microsoftonline.com/common/oauth2/v2.0/authorize",
   "tokenEndpoint" : "https://login.microsoftonline.com/common/oauth2/v2.0/token",
   "userInfoEndpoint" : "https://graph.microsoft.com/v1.0/me"
   "clientId" : "<someUUID>",
   "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
   },
   "scope" : [
       "User.Read"
   ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Microsoft social identity, you can verify this by selecting Manage > Microsoft, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.5, "Microsoft Social Identity Provider Configuration Details".

11.7.3. Configuring User Registration to Link to Microsoft

Once you've configured the Microsoft social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.8. Setting Up WordPress as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure WordPress as a social identity provider for IDM:

11.8.1. Setting Up WordPress

To set up WordPress as a social identity provider, navigate to the following WordPress Developers page: Developer Resources . You'll need a WordPress account. You can then navigate to the WordPress My Applications page, where you can create a new web application, with the following information:

  • Name

  • Description

  • Website URL, which becomes your Application URL

  • Redirect URL(s); for IDM, normally http://openidm.example.com:8080/oauthReturn/

  • Type, which allows you to select Web clients

When complete and saved, you should see a list of OAuth Information for your new webapp. That information should include your Client ID and Client Secret.

11.8.2. Configuring a WordPress Social Identity Provider

  1. To configure a WordPress social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the WordPress social identity provider.

    In the WordPress Provider pop-up that appears, the values for Redirect URI should match the values that you've entered for Allowed Return URLs in Section 11.8.1, "Setting Up WordPress".

  3. Include the values that WordPress created for Client ID and Client Secret, as described in Section 11.8.1, "Setting Up WordPress".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.6, "WordPress Social Identity Provider Configuration Details".

When you enable a WordPress social identity provider in the Admin UI, IDM generates the identityProvider-wordpress.json file in your project's conf/ subdirectory.

When you review that file, you should see information beyond what you see in the Admin UI. The first part of the file includes the name of the provider, endpoints, as well as the values for clientId and clientSecret.

{
    "provider" : "wordpress",
    "authorizationEndpoint" : "https://public-api.wordpress.com/oauth2/authorize",
    "tokenEndpoint" : "https://public-api.wordpress.com/oauth2/token",
    "userInfoEndpoint" : "https://public-api.wordpress.com/rest/v1.1/me/",
    "enabled" : true,
    "clientId" : "<someUUID>",
    "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
    },
    "scope" : [
        "auth"
    ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a WordPress social identity, you can verify this by selecting Manage > WordPress, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.6, "WordPress Social Identity Provider Configuration Details".

11.8.3. Configuring User Registration to Link to WordPress

Once you've configured the WordPress social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.9. Setting Up WeChat as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure WeChat as a social identity provider for IDM:

These requirements assume that you have a WeChat developer account where you can create WeChat web application credentials. To verify access, you'll need the WeChat app on your mobile phone.

11.9.1. Setting Up WeChat

To set up WeChat as a social identity provider, you'll need to get the following information for your WeChat app. WeChat may use different names for some of these properties, as noted below.

  • Client ID (WeChat uses appid as of this writing.)

  • Client Secret (WeChat uses secret as of this writing.)

  • Scope

  • Authorization Endpoint URL

  • Token Endpoint URL

  • User Info Endpoint URL

  • Redirect URI, normally something like http://openidm.example.com/oauthReturn/

Note

WeChat supports URLs on one of the following ports: 80 or 443. For more information on how to configure IDM to use these ports, see Appendix A, "Ports Used".
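
If you need to change the ports that IDM listens on, they are typically defined in your project's conf/boot/boot.properties file. The following lines are only a sketch of that change (running on privileged ports such as 80 and 443 may require additional operating system configuration); see Appendix A, "Ports Used" for the authoritative procedure:

openidm.port.http=80
openidm.port.https=443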

11.9.2. Configuring a WeChat Social Identity Provider

  1. To configure a WeChat social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the WeChat social identity provider.

    In the WeChat Provider pop-up that appears, the values for Redirect URI should match the values that you've entered for Allowed Return URLs in Section 11.9.1, "Setting Up WeChat".

  3. Include the values that WeChat created for Client ID and Client Secret, as described in Section 11.9.1, "Setting Up WeChat".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.7, "WeChat Social Identity Provider Configuration Details".

When you enable a WeChat social identity provider in the Admin UI, IDM generates the identityProvider-wechat.json file in your project's conf/ subdirectory.

When you review that file, you should see information from what you configured in the Admin UI, and beyond. The first part of the file includes the name of the provider, endpoints, scopes, as well as the values for clientId and clientSecret.

{
    "provider" : "wechat",
    ...
    "clientId" : "<someUUID>",
    "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
    },
    "authorizationEndpoint" : "https://open.weixin.qq.com/connect/qrconnect",
    "tokenEndpoint" : "https://api.wechat.com/sns/oauth2/access_token",
    "refreshTokenEndpoint" : "https://api.wechat.com/sns/oauth2/refresh_token",
    "userInfoEndpoint" : "https://api.wechat.com/sns/userinfo",
    "redirectUri" : "http://openidm.example.com:8080/oauthReturn/",
    "scope" : [
        "snsapi_login"
    ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a WeChat social identity, you can verify this by selecting Manage > WeChat, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.7, "WeChat Social Identity Provider Configuration Details".

11.9.3. Configuring User Registration to Link to WeChat

Once you've configured the WeChat social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.10. Setting Up Instagram as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Instagram as a social identity provider for IDM:

11.10.1. Setting Up Instagram

To set up Instagram as a social identity provider, navigate to the following page: Instagram Developer Documentation . You'll need an Instagram account. You can then navigate to the Manage Clients page, where you can follow the Instagram process to create a new web application. As of this writing, you can do so on their Register a new Client ID page, where you'll need the following information:

  • Application Name

  • Description

  • Website URL for your app, such as http://openidm.example.com:8080

  • Redirect URL(s); for IDM: http://openidm.example.com:8080/oauthReturn/

When complete and saved, you should see a list of OAuth Information for your new webapp. That information should be your Client ID and Client Secret.

11.10.2. Configuring an Instagram Social Identity Provider

  1. To configure an Instagram social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Instagram social identity provider.

    In the Instagram Provider pop-up that appears, the values for Redirect URI should match the values that you've entered for Valid Redirect URIs in Section 11.10.1, "Setting Up Instagram".

  3. Include the values that Instagram created for Client ID and Client Secret, as described in Section 11.10.1, "Setting Up Instagram".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.8, "Instagram Social Identity Provider Configuration Details".

When you enable an Instagram social identity provider in the Admin UI, IDM generates the identityProvider-instagram.json file in your project's conf/ subdirectory.

When you review that file, you should see information from what you configured in the Admin UI, and beyond. The first part of the file includes the name of the provider, endpoints, scopes, as well as the values for clientId and clientSecret.

{
   "provider" : "instagram",
   ...
   "clientId" : "<Client_ID_Name>",
   "clientSecret" : {
      "$crypto" : {
          "type" : "x-simple-encryption",
          "value" : {
              "cipher" : "AES/CBC/PKCS5Padding",
              "salt" : "<hashValue>",
              "data" : "<encryptedValue>",
              "iv" : "<encryptedValue>",
              "key" : "openidm-sym-default",
              "mac" : "<hashValue>"
          }
      }
   },
   "authorizationEndpoint" : "https://api.instagram.com/oauth/authorize/",
   "tokenEndpoint" : "https://api.instagram.com/oauth/access_token",
   "userInfoEndpoint" : "https://api.instagram.com/v1/users/self/",
   "redirectUri" : "http://openidm.example.com:8080/oauthReturn/",
   "scope" : [
       "basic",
       "public_content"
   ],
...

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with an Instagram social identity, you can verify this by selecting Manage > Instagram, and then selecting a user.

If you need more information about the properties in this file, refer to the following appendix: Section I.8, "Instagram Social Identity Provider Configuration Details".

11.10.3. Configuring User Registration to Link to Instagram

Once you've configured the Instagram social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.11. Setting Up Vkontakte as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Vkontakte as a social identity provider for IDM:

Note

When you configure a Vkontakte app, look for an Application ID and a Secure Key. IDM uses this information as a clientId and clientSecret, respectively.

11.11.1. Setting Up Vkontakte

To set up Vkontakte as a social identity provider, navigate to the following Vkontakte page: Vkontakte Developers Page . You'll need a Vkontakte account. Find a My Apps link. You can then create an application with the following information:

  • Title (The name of your app)

  • Platform (Choose Website)

  • Site Address (The URL of your IDM deployment, such as http://openidm.example.com:8080/)

  • Base domain (Example: example.com)

  • Authorized Redirect URI (Example: http://openidm.example.com:8080/oauthReturn/)

If you leave and need to return to Vkontakte, navigate to https://vk.com/dev and select My Apps. You can then select Manage for any of the apps that you've created.

Navigate to the Settings for your app, where you'll find the Application ID and Secure Key for your app. You'll use that information as shown here:

  • Vkontakte Application ID = IDM Client ID

  • Vkontakte Secure Key = IDM Client Secret

11.11.2. Configuring a Vkontakte Social Identity Provider

  1. To configure a Vkontakte social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Vkontakte social identity provider.

    In the Vkontakte Provider pop-up that appears, the values for Redirect URI should match the values that you've entered for Authorized Redirect URI in Section 11.11.1, "Setting Up Vkontakte".

  3. Include the values that Vkontakte created for Client ID and Client Secret, as described in Section 11.11.1, "Setting Up Vkontakte".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.9, "Vkontakte Social Identity Provider Configuration Details".

When you enable a Vkontakte social identity provider in the Admin UI, IDM generates the identityProvider-vkontakte.json file in your project's conf/ subdirectory.

When you review that file, you should see information beyond what you see in the Admin UI. The first part of the file includes the name of the provider and endpoints, as well as the Application ID and Secure Key, which appear as clientId and clientSecret, respectively, in the configuration file.

{
    "provider" : "vkontakte",
    "configClass" : "org.forgerock.oauth.clients.vk.VKClientConfiguration",
    "basicAuth" : false,
    "clientId" : "<someUUID>",
    "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
    },
    "authorizationEndpoint" : "https://oauth.vk.com/authorize",
    "tokenEndpoint" : "https://oauth.vk.com/access_token",
    "userInfoEndpoint" : "https://api.vk.com/method/users.get",
    "redirectUri" : "http://openidm.example.com:8080/oauthReturn/",
    "scope" : [
        "email"
    ],
...

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Vkontakte social identity, you can verify this by selecting Manage > Vkontakte, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.9, "Vkontakte Social Identity Provider Configuration Details".

11.11.3. Configuring User Registration to Link to Vkontakte

Once you've configured the Vkontakte social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.12. Setting Up Salesforce as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Salesforce as a social identity provider for IDM:

Note

When you configure a Salesforce app, look for a Consumer Key and a Consumer Secret. IDM uses this information as a clientId and clientSecret, respectively.

For reference, read through the following Salesforce documentation: Connected Apps Overview.

11.12.1. Setting Up Salesforce

To set up Salesforce as a social identity provider, you will need a Salesforce developer account. Log in to the Salesforce Developers Page with your developer account credentials and create a new Connected App.

Note

These instructions were written with the Summer '17 Release of the Salesforce API. The menu items might differ slightly if you are working with a different version of the API.

Select Setup > Apps > App Manager > New Connected App. You will need to add the following information:

  • Connected App Name

  • API Name (defaults to the Connected App Name)

  • Contact Email

  • Activate the following option: Enable OAuth Settings

  • Callback URL (also known as the Redirect URI for other providers), for example https://localhost:8443/admin/oauth.html.

    The Callback URL must correspond to the URL that you use to log in to the IDM Admin UI.

  • Add the following OAuth scopes:

    • Access and Manage your data (api)

    • Access your basic information (id, profile, email, address, phone)

    • Perform requests on your behalf at any time (refresh_token, offline_access)

    • Provide access to your data via the Web (web)

After you have saved the Connected App, it might take a few minutes for the new app to appear under Apps > Connected Apps > Manage Connected Apps.

Select the new Connected App then locate the Consumer Key and Consumer Secret (under the API list). You'll use that information as shown here:

  • Salesforce Consumer Key = IDM Client ID

  • Salesforce Consumer Secret = IDM Client Secret

11.12.2. Configuring a Salesforce Social Identity Provider

  1. To configure a Salesforce social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Salesforce social identity provider.

    In the Salesforce Provider pop-up that appears, the values for Redirect URI should match the value that you've entered for Callback URL in Section 11.12.1, "Setting Up Salesforce".

  3. Include the values that Salesforce created for Consumer Key and Consumer Secret, as described in Section 11.12.1, "Setting Up Salesforce".

  4. Under regular and Advanced Options, include the options shown in the following appendix: Section I.10, "Salesforce Social Identity Provider Configuration Details".

When you enable a Salesforce social identity provider in the Admin UI, IDM generates the identityProvider-salesforce.json file in your project's conf/ subdirectory.

It includes parts of the file that you may have configured through the Admin UI. While the labels in the UI specify Consumer Key and Consumer Secret, you'll see them as clientId and clientSecret, respectively, in the configuration file.

{
    "provider" : "salesforce",
    "authorizationEndpoint" : "https://login.salesforce.com/services/oauth2/authorize",
    "tokenEndpoint" : "https://login.salesforce.com/services/oauth2/token",
    "userInfoEndpoint" : "https://login.salesforce.com/services/oauth2/userinfo",
    "clientId" : "<someUUID>",
    "clientSecret" : {
       "$crypto" : {
           "type" : "x-simple-encryption",
           "value" : {
               "cipher" : "AES/CBC/PKCS5Padding",
               "salt" : "<hashValue>",
               "data" : "<encryptedValue>",
               "iv" : "<encryptedValue>",
               "key" : "openidm-sym-default",
               "mac" : "<hashValue>"
           }
       }
    },
    "scope" : [
        "id",
        "api",
        "web"
    ],

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass; the location may vary.

The file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Salesforce social identity, you can verify this by selecting Manage > Salesforce, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.10, "Salesforce Social Identity Provider Configuration Details".

11.12.3. Configuring User Registration to Link to Salesforce

Once you've configured the Salesforce social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.13. Setting Up Yahoo as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Yahoo as a social identity provider for IDM:

11.13.1. Setting Up Yahoo

To set up Yahoo as a social identity provider, navigate to the following page: Yahoo OAuth 2.0 Guide . You'll need a Yahoo account. You can then navigate to the Create an App page, where you can follow the Yahoo process to create a new web application with the following information:

  • Application Name

  • Web Application

  • Callback Domain, such as openidm.example.com; required for IDM

  • API Permissions. For whichever APIs you select, choose Read/Write; IDM only reads Yahoo user information.

When complete and saved, you should see a Client ID and Client Secret for your new web app.

Note

Yahoo supports URLs using only HTTPS, only on port 443. For more information on how to configure IDM to use these ports, see Appendix A, "Ports Used".

11.13.2. Configuring Yahoo as a Social Identity Provider

  1. To configure Yahoo as a social identity provider, log in to the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Yahoo social identity provider.

    In the Yahoo Provider pop-up that appears, the values for Redirect URI should use the same Callback Domain as shown in Section 11.13.1, "Setting Up Yahoo".

  3. Include the values that Yahoo created for Client ID and Client Secret, as described in Section 11.13.1, "Setting Up Yahoo".

  4. Under regular and Advanced Options, if necessary, include the options shown in the following appendix: Section I.11, "Yahoo Social Identity Provider Configuration Details".

When you enable a Yahoo social identity provider in the Admin UI, IDM generates the identityProvider-yahoo.json file in your project's conf/ subdirectory.

When you review that file, you should see information beyond what you see in the Admin UI. The first part of the file includes the name of the provider, the scope, and UI settings related to the social identity provider icon (badge) and the sign-in button. For more information on the icon and button, see Section I.14, "Social Identity Provider Button and Badge Properties".

{
    "provider" : "yahoo",
    "scope" : [
        "openid",
        "sdpp-w"
    ],
    "uiConfig" : {
        "iconBackground" : "#7B0099",
        "iconClass" : "fa-yahoo",
        "iconFontColor" : "white",
        "buttonClass" : "fa-yahoo",
        "buttonDisplayName" : "Yahoo",
        "buttonCustomStyle" : "background-color: #7B0099; border-color: #7B0099; color:white;",
        "buttonCustomStyleHover" : "background-color: #7B0099; border-color: #7B0099; color:white;"
    },

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

The next part of the file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Yahoo social identity, you can verify this by selecting Manage > Yahoo, and then selecting a user.

Next, there's the part of the file that you may have configured through the Admin UI, plus additional information on the redirectUri, the configClass, and the authenticationIdKey:

    "authorizationEndpoint" : "https://api.login.yahoo.com/oauth2/request_auth",
    "tokenEndpoint" : "https://api.login.yahoo.com/oauth2/get_token",
    "wellKnownEndpoint" : "https://login.yahoo.com/.well-known/openid-configuration",
    "clientId" : "<Client_ID_Name>",
    "clientSecret" : {
      "$crypto" : {
          "type" : "x-simple-encryption",
          "value" : {
              "cipher" : "AES/CBC/PKCS5Padding",
              "salt" : "<hashValue>",
              "data" : "<encryptedValue>",
              "iv" : "<encryptedValue>",
              "key" : "openidm-sym-default",
              "mac" : "<hashValue>"
          }
      }
    },
    "authenticationIdKey" : "sub",
    "redirectUri" : "https://openidm.example.com/oauthReturn/",
    "basicAuth" : false,
    "configClass" : "org.forgerock.oauth.clients.oidc.OpenIDConnectClientConfiguration",
    "enabled" : true
}

If you need more information about the properties in this file, refer to the following appendix: Section I.11, "Yahoo Social Identity Provider Configuration Details".

11.13.3. Configuring User Registration to Link to Yahoo

Once you've configured the Yahoo social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.14. Setting Up Twitter as an IDM Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Twitter as a social identity provider for IDM:

11.14.1. Setting Up Twitter

To set up Twitter as a social identity provider, navigate to the following page: Single-user OAuth with Examples . You'll need a Twitter account. You can then navigate to the Twitter Application Management page, where you can select Create New App and enter at least the following information:

  • Name

  • Description

  • Website, such as http://openidm.example.com:8080

  • Callback URL (required for IDM), such as http://openidm.example.com:8080/oauthReturn/. Other providers refer to this as the Redirect URI.

When complete and saved, you should see a Consumer Key and Consumer Secret for your new web app.

Note

Twitter Apps use the OAuth 1.0a protocol. Fortunately, with IDM, you can use the same process used to configure OIDC and OAuth 2 social identity providers.

11.14.2. Configuring Twitter as a Social Identity Provider

  1. To configure Twitter as a social identity provider, log in to the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the Twitter social identity provider.

    In the Twitter Provider pop-up that appears, the values for Callback URL should use the same value shown in Section 11.14.1, "Setting Up Twitter".

  3. Include the values that Twitter created for Consumer Key and Consumer Secret, as described in Section 11.14.1, "Setting Up Twitter".

  4. Under regular and Advanced Options, if necessary, include the options shown in the following appendix: Section I.12, "Twitter Social Identity Provider Configuration Details".

When you enable a Twitter social identity provider in the Admin UI, IDM generates the identityProvider-twitter.json file in your project's conf/ subdirectory.

When you review that file, you should see information beyond what you see in the Admin UI. The first part of the file includes the name of the provider and endpoints, as well as the Consumer Key and Consumer Secret, which appear as clientId and clientSecret, respectively, in the configuration file.

{
    "provider" : "twitter",
    "requestTokenEndpoint" : "https://api.twitter.com/oauth/request_token",
    "authorizationEndpoint" : "https://api.twitter.com/oauth/authenticate",
    "tokenEndpoint" : "https://api.twitter.com/oauth/access_token",
    "userInfoEndpoint" : "https://api.twitter.com/1.1/account/verify_credentials.json",
    "clientId" : "<Client_ID_Name>",
    "clientSecret" : {
      "$crypto" : {
          "type" : "x-simple-encryption",
          "value" : {
              "cipher" : "AES/CBC/PKCS5Padding",
              "salt" : "<hashValue>",
              "data" : "<encryptedValue>",
              "iv" : "<encryptedValue>",
              "key" : "openidm-sym-default",
              "mac" : "<hashValue>"
          }
      }
    },

You should also see UI settings related to the social identity provider icon (badge) and the sign-in button, described in Section I.14, "Social Identity Provider Button and Badge Properties".

You'll see links related to the authenticationIdKey, redirectUri, and configClass.

The next part of the file includes schema information, which includes properties for each social identity account, as collected by IDM, as well as the order in which it appears in the Admin UI. When you've registered a user with a Twitter social identity, you can verify this by selecting Manage > Twitter, and then selecting a user.

Another part of the file includes a propertyMap, which maps user information entries between the source (social identity provider) and the target (IDM).

If you need more information about the properties in this file, refer to the following appendix: Section I.12, "Twitter Social Identity Provider Configuration Details".

11.14.3. Configuring User Registration to Link to Twitter

Once you've configured the Twitter social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and activate that feature. Under the Social tab that appears, enable Social Registration. For more information on IDM user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable Social Registration, you're allowing users to register on IDM through all active social identity providers.

11.15. Setting Up a Custom Social Identity Provider

As suggested in the introduction to this chapter, you'll need to take four basic steps to configure a custom social identity provider:

Note

These instructions require the social identity provider to be fully compliant with The OAuth 2.0 Authorization Framework or the OpenID Connect standards.

11.15.1. Preparing IDM

While IDM includes provisions to work with OpenID Connect 1.0 and OAuth 2.0 social identity providers, connections to providers other than those listed in this chapter are not supported.

To set up another social provider, first add a code block to the identityProviders.json file, such as:

{
   "name" : "custom",
   "type" : "OAUTH",
   "authorizationEndpoint" : "",
   "tokenEndpoint" : "",
   "userInfoEndpoint" : "",
   "clientId" : "",
   "clientSecret" : "",
   "scope" : [ ],
   "uiConfig" : {
        "iconBackground" : "",
        "iconClass" : "",
        "iconFontColor" : "",
        "buttonImage" : "",
        "buttonClass" : "",
        "buttonDisplayName" : "",
        "buttonCustomStyle" : "",
        "buttonCustomStyleHover" : ""
   },
   "authenticationId" : "id",
   "schema" : {
      "id" : "http://jsonschema.net",
      "viewable" : true,
      "type" : "object",
      "$schema" : "http://json-schema.org/draft-03/schema",
      "properties" : {
         "id" : {
            "title" : "ID",
            "viewable" : true,
            "type" : "string",
            "searchable" : true
         },
         "name" : {
            "title" : "Name",
            "viewable" : true,
            "type" : "string",
            "searchable" : true
         },
         "first_name" : {
            "title" : "First Name",
            "viewable" : true,
            "type" : "string",
            "searchable" : true
         },
         "last_name" : {
            "title" : "Last Name",
            "viewable" : true,
            "type" : "string",
            "searchable" : true
         },
         "email" : {
            "title" : "Email Address",
            "viewable" : true,
            "type" : "string",
            "searchable" : true
         },
         "locale" : {
            "title" : "Locale Code",
            "viewable" : true,
            "type" : "string",
            "searchable" : true
         }
      },
      "order" : [
         "id",
         "name",
         "first_name",
         "last_name",
         "email",
         "locale"
      ],
      "required" : [ ]
   },
   "propertyMap" : [
      {
         "source" : "id",
         "target" : "id"
      },
      {
         "source" : "name",
         "target" : "displayName"
      },
      {
         "source" : "first_name",
         "target" : "givenName"
      },
      {
         "source" : "last_name",
         "target" : "familyName"
      },
      {
         "source" : "email",
         "target" : "email"
      },
      {
         "source" : "email",
         "target" : "username"
      },
      {
         "source" : "locale",
         "target" : "locale"
      }
   ]
},

Modify this code block for your selected social provider. Some of these properties may appear under other names. For example, some providers specify an App ID that you'd include as a clientId.

In the propertyMap code block, you should substitute the properties from the selected social identity provider for various values of source. Make sure to trace the property mapping through selfservice.propertymap.json to the Managed User property shown in managed.json. For more information on this multi-step mapping, see Section 11.2, "Many Social Identity Providers, One Schema".
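
As an illustration of that chain, the following sketch shows the general source/target form used in selfservice.propertymap.json, assuming the common managed user properties mail and sn; consult the file in your own project for the authoritative entries:

{
    "properties" : [
        {
            "source" : "email",
            "target" : "mail"
        },
        {
            "source" : "familyName",
            "target" : "sn"
        }
    ]
}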

As shown in Section 11.1, "OpenID Connect Authorization Code Flow", user provisioning information goes through the User Info Endpoint. Some providers, such as LinkedIn and Facebook, may require a list of properties with the endpoint. Consult the documentation for your provider for details.

For more information on the uiConfig code block, see Section I.14, "Social Identity Provider Button and Badge Properties".

Both files, identityProviders.json and identityProvider-custom.json, should include the same information for the new custom identity provider. For property details, see Section I.13, "Custom Social Identity Provider Configuration Details".

Once you've included information from your selected social identity provider, proceed with the configuration process. You'll use the same basic steps described for other specified social providers.
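
If you prefer to manage the custom provider over REST rather than by editing files, you can push your configuration to the config endpoint, following the same pattern described in Section 11.17, "Managing Social Identity Providers Over REST". This sketch assumes that your provider block uses the name custom:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-type: application/json" \
--request PUT \
--data '{
 <Include content from your identityProvider-custom.json file>
}' \
http://localhost:8080/openidm/config/identityProvider/custom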

11.15.2. Setting Up a Custom Social Identity Provider

Every social identity provider should be able to provide the information you need to specify properties in the code block shown in Section 11.15.1, "Preparing IDM".

In general, you'll need an authorizationEndpoint, a tokenEndpoint and a userInfoEndpoint. To link to the custom provider, you'll also have to copy the clientId and clientSecret that you created with that provider. In some cases, you'll get this information in a slightly different format, such as an App ID and App Secret.

For the propertyMap, check the source properties. You may need to revise these properties to match those available from your custom provider.

For examples, refer to the specific social identity providers documented in this chapter.

11.15.3. Configuring a Custom Social Identity Provider

  1. To configure a custom social identity provider, log into the Admin UI and navigate to Configure > Social ID Providers.

  2. Enable the custom social identity provider. The name you see is based on the name property in the relevant code block in the identityProviders.json file.

  3. If you haven't already done so, include the values provided by your social identity provider for the properties shown. For more information, see the following appendix: Section I.13, "Custom Social Identity Provider Configuration Details".

11.15.4. Configuring User Registration to Link to a Custom Provider

Once you've configured a custom social identity provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and under the Social tab, enable the option associated with Social Registration. For more information about user self-service features, see Chapter 5, "Configuring User Self-Service".

When you enable social identity providers, you're allowing users to register on IDM through all active social identity providers.

11.16. Configuring the Social Providers Authentication Module

The SOCIAL_PROVIDERS authentication module incorporates the requirements from social identity providers who rely on either the OAuth2 or the OpenID Connect standards. The Social Providers authentication module is turned on by default. To configure or disable this module in the Admin UI, select Configure > Authentication, choose the Modules tab, then select Social Providers from the list of modules.

Authentication settings can be configured from the Admin UI, or by making changes directly in the authentication.json file for your project. IDM includes the following code block in the authentication.json file for your project:

{
   "name" : "SOCIAL_PROVIDERS",
   "properties" : {
       "defaultUserRoles" : [
           "openidm-authorized"
       ],
       "augmentSecurityContext" : {
           "type" : "text/javascript",
           "globals" : { },
           "file" : "auth/populateAsManagedUserFromRelationship.js"
       },
       "propertyMapping" : {
           "userRoles" : "authzRoles"
       }
   },
   "enabled" : true
}

For more information on these options, see Table H.3, "Common Module Properties".
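
You can also review the current authentication configuration, including the SOCIAL_PROVIDERS module, over REST. For example:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
http://localhost:8080/openidm/config/authentication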

11.17. Managing Social Identity Providers Over REST

You can identify the current status of configured social identity providers with the following REST call:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
http://localhost:8080/openidm/authentication

The output that you see includes JSON information from each configured social identity provider, as defined in the corresponding identityProvider-provider.json file in your project's conf/ subdirectory.

One key line from this output specifies whether the social identity provider is enabled:

"enabled" : true

If the SOCIAL_PROVIDERS authentication module is disabled, you'll see the following output from that REST call:

{
   "providers" : [ ]
}

For more information, see Section 11.16, "Configuring the Social Providers Authentication Module".

If the SOCIAL_PROVIDERS module is disabled, you can still review the standard configuration of each social provider (enabled or not) by running the same REST call on a different endpoint (do not forget the s at the end of identityProviders):

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
http://localhost:8080/openidm/identityProviders

Note

If you have not configured a social identity provider, you'll see the following output from the REST call on the openidm/identityProviders endpoint:

{
    "providers" : [ ]
}

You can still get information about the available configuration for social identity providers on a slightly different endpoint:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
http://localhost:8080/openidm/config/identityProviders

The config in the endpoint refers to the IDM configuration; identityProviders at the end of the endpoint matches the name of the identityProviders.json configuration file.

You can review information for a specific provider by including the name with the endpoint. For example, if you've configured LinkedIn as described in Section 11.4, "Setting Up LinkedIn as a Social Identity Provider", run the following command:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request GET \
http://localhost:8080/openidm/config/identityProvider/linkedIn

This command differs from the previous calls in subtle ways. The config in the endpoint points to configuration data. The identityProvider at the end of the endpoint is singular, which matches the corresponding configuration file, identityProvider-linkedIn.json. Also note that linkedIn includes a capital I in the middle of the word.

In a similar fashion, you can delete a specific provider:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request DELETE \
http://localhost:8080/openidm/config/identityProvider/linkedIn

If you have the information needed to set up a provider, such as the output from the previous two REST calls, you can use the following command to add a provider:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-type: application/json" \
 --request PUT \
--data '{
 <Include content from an identityProvider-linkedIn.json file>
}' \
http://localhost:8080/openidm/config/identityProvider/linkedIn

IDM incorporates the given information in a file named for the provider, in this case, identityProvider-linkedIn.json.

You can even disable a social identity provider with a PATCH REST call, as shown:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-type: application/json" \
--request PATCH \
--data '[
   {
      "operation":"replace",
      "field" : "enabled",
      "value" : false
   }
]' \
http://localhost:8080/openidm/config/identityProvider/linkedIn

You can reverse the process by substituting true for false in the previous PATCH REST call.

You can manage the social identity providers associated with individual users over REST, as described in Section 11.19, "Managing Links Between End User Accounts and Social Identity Providers".

11.18. Testing Social Identity Providers

In all cases, once configuration is complete, you should test the social identity provider. To do so, go through the steps in the following procedure:

  1. Navigate to the login screen for the self-service UI, https://openidm.example.com:8443.

  2. Select the Register link (after the "Don't have an account?" question) on the login page.

  3. You should see a link to sign in with your selected social identity provider. Select that link.

    Note

    If you do not see a link to sign in with any social identity provider, you probably did not enable the option associated with Social Registration. To make sure, access the Admin UI, and select Configure > User Registration.

    Warning

    If you see a redirect URI error from a social identity provider, check the configuration for your web application in the social identity provider developer console. There may be a mistake in the redirect URI or redirect URL.

  4. Follow the prompts from your social identity provider to log into your account.

    Note

    If there is a problem with the interface to the social identity provider, you might see a Register Your Account screen with information acquired from that provider.

  5. As Knowledge-based Authentication (KBA) is enabled by default, you'll need to add at least one security question and answer to proceed. For more information, see Section 5.2.3.3, "Configuring Self-Service Questions (KBA)".

    When the Social ID registration process is complete, you are redirected to the self-service login URL at https://openidm.example.com:8443.

  6. At the self-service login URL, you should now be able to use the sign in link for your social identity provider to log into IDM.

11.20. Scenarios When Registering With a Social ID

When users connect to IDM with a social identity provider, it could be the first time they're connecting to your system. They could already have a regular IDM account. They could already have registered with a different social identity provider. This section describes what happens during the self-registration process. The process varies depending on whether there's an existing account in the IDM managed user store.

Figure 11.2. When Registering Social Identity Providers on IDM

The following list describes each item in the flow shown in the preceding figure:

  1. From the IDM Self-Service UI, the user selects the Register link.

  2. The self-registration interface returns a Register Your Account page at http://localhost:8080/#register with a list of configured providers.

  3. The user then selects one configured social identity provider.

  4. IDM connects to the selected social identity provider.

  5. The social identity provider requests end user authentication.

  6. The end user authenticates with the social identity provider.

  7. The social identity provider prompts the user to accept sharing selected account information.

  8. The user accepts the conditions presented by the social identity provider.

  9. The social identity provider notifies IDM of the user registration request.

  10. IDM passes responsibility to the administrative interface.

  11. IDM uses the email address from the social identity provider, and compares it with email addresses of existing managed users.

  12. If the email address is found, IDM links the social identity information to that account (and skips to step 16).

  13. IDM returns to the self-registration (Self-Service) interface.

  14. The self-registration interface prompts the user for additional information, such as security questions, and reCAPTCHA, if configured per Section 5.2.3.1, "Configuring Google ReCaptcha".

  15. The user responds appropriately.

  16. The user is redirected to the Success URL.

11.21. Setting Up Users for Marketo Lead Generation

Lead generation can work for both social identity providers and users who have self-registered directly with IDM. You can reconcile selected registered users to a Marketo database for lead generation. To set up reconciliation to a Marketo database, you'll need the following files, available in the /path/to/openidm/samples/example-configurations/ directory:

provisioners/provisioner.openicf-marketo.json

This is the Marketo provisioner file, which you can copy directly to your project's conf/ subdirectory.

preferences/conf/sync.json

Use the content of this sync.json file. If you already have a sync.json file in your project's conf/ subdirectory, add the content to set up reconciliation between the managed user repository and Marketo.
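
For example, you might copy these files from the IDM installation directory as follows. The project path is a placeholder, and copy sync.json directly only if your project does not already have one; otherwise merge the content by hand, as described above:

$ cd /path/to/openidm
$ cp samples/example-configurations/provisioners/provisioner.openicf-marketo.json /path/to/your-project/conf/
$ cp samples/example-configurations/preferences/conf/sync.json /path/to/your-project/conf/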

Now you can configure the Marketo connector and set up registration to determine which users get reconciled to the Marketo database.

11.21.1. Configuring the Marketo Connector

Before going through the following procedure, make sure you have values at least for Client ID, Client Secret, and List Name for your Marketo service. For the Marketo procedure, see their Quick Start Guide for Marketo REST API.

  1. Open the Marketo connector. In the Admin UI, select Configure > Connectors, and select the connector.

  2. Include appropriate information for the fields shown in the following list:

    • Instance: Find this FQDN within the REST API endpoint associated with your Marketo web service. It may be a value such as some-number.mktorest.com.

    • Client ID: Find this in the details of your Marketo service LaunchPoint.

    • Client Secret: Find this in the details of your Marketo service LaunchPoint.

    • List Name: If you have created a Marketo Group Name, enter it here.

    • Lead Fields: Normally left blank; refer to Marketo documentation for options.

    • Partition Name: May be left blank; standard Marketo accounts do not support partitions.

    For more information on these fields, see Chapter 12, "Marketo Connector" in the Connector Reference.

    As a Marketo administrator, you should be able to find values for each of these fields for your Marketo custom service.

  3. When you've finished making changes, press Save.
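
To confirm that IDM can reach Marketo with these settings, you can test the connector over REST. The following sketch assumes that the connector is registered as system/marketo; the actual name comes from your provisioner file:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request POST \
"http://localhost:8080/openidm/system/marketo?_action=test"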

11.21.2. Setting Up Users for Marketo

In this section, you reconcile all qualifying IDM users, including those registered through a social identity provider, to the Marketo database.

Review the mapping. Navigate to Configure > Mappings, and select the managed/user to system/marketo/account mapping.

Under the Association tab, select Individual Record Validation. The Valid Source condition, Send me special offers and services, limits reconciliation to users who have accepted that condition.

Filtered Mapping From ForgeRock Managed User to Marketo
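
In sync.json, this kind of filter is typically expressed as a validSource script on the mapping. The following lines are only a sketch, assuming that the marketing preference is stored as preferences.marketing on the managed user; the condition generated for your deployment may differ:

"validSource" : {
    "type" : "text/javascript",
    "globals" : { },
    "source" : "object.preferences && object.preferences.marketing === true"
},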

When a user registers with IDM, they can choose to accept the noted condition. As a regular user, they can also select (or deselect) the noted condition in the self-service UI. To do so, log into IDM at http://localhost:8080/, and select Preferences.

For more information on how preferences work in a mapping, see Section 15.3.4, "Configuring Synchronization Filters With User Preferences".

11.21.3. Reviewing Marketo Leads

When you have reconciled data to the Marketo database, you should have a list of users who have accepted the rules associated with Individual Record Validation. You should find those users in the Marketo group list that you configured with your Marketo administrative account.

If any of your users deselect their applicable marketing preferences, as described in Section 11.21.2, "Setting Up Users for Marketo", IDM removes those accounts from the Marketo database upon the next reconciliation.
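
If you do not want to wait for a scheduled reconciliation, you can trigger one manually over REST. The mapping name below is an assumption; use the name defined in your sync.json file:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--request POST \
"http://localhost:8080/openidm/recon?_action=recon&mapping=managedUser_systemMarketoAccount"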

11.22. Social Identity Widgets

The Admin UI includes widgets that can help you measure the success of your social identity efforts. To add these widgets, take the following steps:

  1. Log into the Admin UI.

  2. Select Dashboards, and choose the dashboard to which you want to add the widget.

    For more information about managing dashboards in the UI, see Section 4.1.2, "Creating and Modifying Dashboards".

  3. Select Add Widgets. In the Add Widgets window, scroll down to the widget of your choice. To see activity related to social identities, you may be interested in the following widgets:

    • Daily Social Logins

    • Daily Social Registration

    • Social Registration (Year)

As an example, the following figure depicts daily social registrations, in pie chart form:

Figure 11.3. Daily Social Registrations on IDM
Pie chart view, three providers

Chapter 12. Using Policies to Validate Data

IDM provides an extensible policy service that enables you to apply specific validation requirements to various components and properties. This chapter describes the policy service, and provides instructions on configuring policies for managed objects.

The policy service provides a REST interface for reading policy requirements and validating the properties of components against configured policies. Objects and properties are validated automatically when they are created, updated, or patched. Policies are generally applied to user passwords, but can also be applied to any managed or system object, and to internal user objects.

The policy service enables you to accomplish the following tasks:

  • Read the configured policy requirements of a specific component.

  • Read the configured policy requirements of all components.

  • Validate a component object against the configured policies.

  • Validate the properties of a component against the configured policies.

The router service limits policy application to managed, system, and internal user objects. To apply policies to additional objects, such as the audit service, you must modify your project's conf/router.json file. For more information about the router service, see Appendix F, "Router Service Reference".
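
The default policy filter entry in router.json scopes validation with a regular expression pattern. The following sketch shows how such an entry might be extended to cover the audit route; the handler and pattern here are illustrative and may differ from the defaults shipped with your version, so compare against your own conf/router.json:

{
    "pattern" : "^(managed|system|repo/internal|audit)($|(/.+))",
    "onRequest" : {
        "type" : "text/javascript",
        "source" : "require('policyFilter').runFilter()"
    },
    "methods" : [
        "create",
        "update"
    ]
}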

A default policy applies to all managed objects. You can configure this default policy to suit your requirements, or you can extend the policy service by supplying your own scripted policies.

12.1. Configuring the Default Policy for Managed Objects

Policies applied to managed objects are configured in two files:

  • A policy script file (openidm/bin/defaults/script/policy.js) that defines each policy and specifies how policy validation is performed. For more information, see Section 12.1.1, "Understanding the Policy Script File".

  • A managed object policy configuration element, defined in your project's conf/managed.json file, that specifies which policies are applicable to each managed resource. For more information, see Section 12.1.2, "Understanding the Policy Configuration Element".

    Note

    The configuration for determining which policies apply to resources other than managed objects is defined in your project's conf/policy.json file. The default policy.json file includes policies that are applied to internal user objects, but you can extend the configuration in this file to apply policies to system objects.
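
For example, a minimal sketch of an additional entry in the resources array of policy.json that applies the default required policy to a property of a system object might look as follows (the resource path and property name are placeholders for your own deployment):

"resources" : [
    {
        "resource" : "repo/internal/user/*",
        ...
    },
    {
        "resource" : "system/ldap/account/*",
        "properties" : [
            {
                "name" : "uid",
                "policies" : [
                    {
                        "policyId" : "required"
                    }
                ]
            }
        ]
    }
]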

12.1.1. Understanding the Policy Script File

The policy script file (openidm/bin/defaults/script/policy.js) separates policy configuration into two parts:

  • A policy configuration object, which defines each element of the policy. For more information, see Section 12.1.1.1, "Policy Configuration Objects".

  • A policy implementation function, which describes the requirements that are enforced by that policy.

Together, the configuration object and the implementation function determine whether an object is valid in terms of the applied policy. The following excerpt of a policy script file configures a policy that specifies that the value of a property must contain a certain number of capital letters:

...
{   "policyId" : "at-least-X-capitals",
    "policyExec" : "atLeastXCapitalLetters",
    "clientValidation": true,
    "validateOnlyIfPresent":true,
    "policyRequirements" : ["AT_LEAST_X_CAPITAL_LETTERS"]
},
...

policyFunctions.atLeastXCapitalLetters = function(fullObject, value, params, property) {
  var isRequired = _.find(this.failedPolicyRequirements, function (fpr) {
      return fpr.policyRequirement === "REQUIRED";
    }),
    isNonEmptyString = (typeof(value) === "string" && value.length),
    valuePassesRegexp = (function (v) {
      var test = isNonEmptyString ? v.match(/[(A-Z)]/g) : null;
      return test !== null && test.length >= params.numCaps;
    }(value));

  if ((isRequired || isNonEmptyString) && !valuePassesRegexp) {
    return [ { "policyRequirement" : "AT_LEAST_X_CAPITAL_LETTERS", "params" : {"numCaps": params.numCaps} } ];
  }

  return [];
}     
...

To enforce user passwords that contain at least one capital letter, the policyId from the preceding example is applied to the appropriate resource (managed/user/*). The required number of capital letters is defined in the policy configuration element of the managed object configuration file (see Section 12.1.2, "Understanding the Policy Configuration Element").

12.1.1.1. Policy Configuration Objects

Each element of the policy is defined in a policy configuration object. The structure of a policy configuration object is as follows:

{
    "policyId" : "minimum-length",
    "policyExec" : "propertyMinLength",
    "clientValidation": true,
    "validateOnlyIfPresent": true,
    "policyRequirements" : ["MIN_LENGTH"]
}   
  • policyId - a unique ID that enables the policy to be referenced by component objects.

  • policyExec - the name of the function that contains the policy implementation. For more information, see Section 12.1.1.2, "Policy Implementation Functions".

  • clientValidation - indicates whether the policy decision can be made on the client. When "clientValidation": true, the source code for the policy decision function is returned when the client requests the requirements for a property.

  • validateOnlyIfPresent - indicates that the policy is validated only if the property is present in the object.

  • policyRequirements - an array containing the policy requirement ID of each requirement that is associated with the policy. Typically, a policy will validate only one requirement, but it can validate more than one.

12.1.1.2. Policy Implementation Functions

Each policy ID has a corresponding policy implementation function that performs the validation. Implementation functions take the following form:

function <name>(fullObject, value, params, propName) {	
	<implementation_logic>
}   
  • fullObject is the full resource object that is supplied with the request.

  • value is the value of the property that is being validated.

  • params refers to the params array that is specified in the property's policy configuration.

  • propName is the name of the property that is being validated.

The following example shows the implementation function for the required policy:

function required(fullObject, value, params, propName) {
    if (value === undefined) {
        return [ { "policyRequirement" : "REQUIRED" } ];
    }
    return [];
}      

12.1.2. Understanding the Policy Configuration Element

The configuration of a managed object property (in the managed.json file) can include a policies element that specifies how policy validation should be applied to that property. The following excerpt of the default managed.json file shows how policy validation is applied to the password and _id properties of a managed/user object:

{
    "objects" : [
        {
            "name" : "user",
            ...
            "schema" : {
                "id" : "http://jsonschema.net",
                ...
                "properties" : {
                    "_id" : {
                        "type" : "string",
                        "viewable" : false,
                        "searchable" : false,
                        "userEditable" : false,
                        "policies" : [
                            {
                                "policyId" : "cannot-contain-characters",
                                "params" : {
                                    "forbiddenChars" : ["/"]
                                }
                            }
                        ]
                    },
                    "password" : {
                        "type" : "string",
                        "viewable" : false,
                        "searchable" : false,
                        "minLength" : 8,
                        "userEditable" : true,
                        "policies" : [
                            {
                                "policyId" : "at-least-X-capitals",
                                "params" : {
                                    "numCaps" : 1
                                }
                            },
                            {
                                "policyId" : "at-least-X-numbers",
                                "params" : {
                                    "numNums" : 1
                                }
                            },
                            {
                                "policyId" : "cannot-contain-others",
                                "params" : {
                                    "disallowedFields" : [
                                        "userName",
                                        "givenName",
                                        "sn"
                                    ]
                                }
                            }
                        ]
                    },

Note that the policy for the _id property references the cannot-contain-characters policy, which is defined in the policy.js file. The policy for the password property references the at-least-X-capitals, at-least-X-numbers, and cannot-contain-others policies, also defined in the policy.js file. The parameters that are passed to the corresponding policy functions (the number of capital letters required, and so forth) are specified in the same element.

12.1.3. Validation of Managed Object Data Types

The type property of a managed object specifies the data type of that property, for example, array, boolean, integer, number, null, object, or string. For more information about data types, see the JSON Schema Primitive Types section of the JSON Schema standard.

The type property is subject to policy validation when a managed object is created or updated. Validation fails if data does not match the specified type, such as when the data is an array instead of a string. The valid-type policy in the default policy.js file enforces the match between property values and the type defined in the managed.json file.

IDM supports multiple valid property types. For example, you might have a scenario where a managed user can have more than one telephone number, or a null telephone number (when the user entry is first created and the telephone number is not yet known). In such a case, you could specify the accepted property type as follows in your managed.json file:

"telephoneNumber" : {
    "description" : "",
    "title" : "Mobile Phone",
    "viewable" : true,
    "searchable" : false,
    "userEditable" : true,
    "policies" : [ ],
    "returnByDefault" : false,
    "minLength" : null,
    "pattern" : "^\\+?([0-9\\- \\(\\)])*$",
    "type" : [
        "string",
        "null"
    ]
},

In this case, the valid-type policy from the policy.js file checks the telephone number for an accepted type and pattern, either for a real telephone number or a null entry.

12.1.4. Configuring Policy Validation in the UI

The Admin UI provides rudimentary support for applying policy validation to managed object properties. To configure policy validation for a managed object type, update the configuration of that object type in the UI. For example, to specify validation policies for specific properties of managed user objects, select Configure > Managed Objects, then click the User object. Scroll down to the bottom of the Managed Object configuration, then update or add a validation policy. The Policy field here refers to a function that has been defined in the policy script file. For more information, see Section 12.1.1, "Understanding the Policy Script File". You cannot define additional policy functions by using the UI.

Note

Take care with validation policies that relate to an array of relationships, such as the relationship between a user and multiple devices. In such cases, Return by Default should always be set to false. You can verify this in the managed.json file for your project: for the applicable managed object, items of "type" : "relationship" should include "returnByDefault" : false.

12.2. Extending the Policy Service

You can extend the policy service by adding custom scripted policies, and by adding policies that are applied only under certain conditions.

12.2.1. Adding Custom Scripted Policies

If your deployment requires additional validation functionality that is not supplied by the default policies, you can add your own policy scripts to your project's script directory, and reference them from your project's conf/policy.json file.

Do not modify the default policy script file (openidm/bin/defaults/script/policy.js) as doing so might result in interoperability issues in a future release. To reference additional policy scripts, set the additionalFiles property in your project's conf/policy.json file.

The following example creates a custom policy that rejects properties with null values. The policy is defined in a script named mypolicy.js:

var policy = {
    "policyId" : "notNull",
    "policyExec" : "notNull",
    "policyRequirements" : ["NOT_NULL"]
};

addPolicy(policy);

function notNull(fullObject, value, params, property) {
   if (value == null) {
      var requireNotNull = [
        {"policyRequirement": "NOT_NULL"}
      ];
      return requireNotNull;
   }
   return [];
}  

The mypolicy.js policy is referenced in the policy.json configuration file as follows:

{
    "type" : "text/javascript",
    "file" : "bin/defaults/script/policy.js",
    "additionalFiles" : ["script/mypolicy.js"],
    "resources" : [
        {
...
   

12.2.2. Adding Conditional Policy Definitions

You can extend the policy service to support policies that are applied only under specific conditions. To apply a conditional policy to managed objects, add the policy to your project's managed.json file. To apply a conditional policy to other objects, add it to your project's policy.json file.

The following excerpt of a managed.json file shows a sample conditional policy configuration for the "password" property of managed user objects. The policy indicates that sys-admin users have a more lenient password policy than regular employees:

{
    "objects" : [
        {
            "name" : "user",
            ...
                "properties" : {
                ...
                    "password" : {
                        "title" : "Password",
                        "type" : "string",
                        ...
                        "conditionalPolicies" : [
                            {
                                "condition" : {
                                    "type" : "text/javascript",
                                    "source" : "(fullObject.org === 'sys-admin')"
                                },
                                "dependencies" : [ "org" ],
                                "policies" : [
                                    {
                                        "policyId" : "max-age",
                                        "params" : {
                                            "maxDays" : ["90"]
                                        }
                                    }
                                ]
                            },
                            {
                                "condition" : {
                                    "type" : "text/javascript",
                                    "source" : "(fullObject.org === 'employees')"
                                },
                                "dependencies" : [ "org" ],
                                "policies" : [
                                    {
                                        "policyId" : "max-age",
                                        "params" : {
                                            "maxDays" : ["30"]
                                        }
                                    }
                                ]
                            }
                        ],
                        "fallbackPolicies" : [
                            {
                                "policyId" : "max-age",
                                "params" : {
                                    "maxDays" : ["7"]
                                }
                            }
                        ]
            }

To understand how a conditional policy is defined, examine the components of this sample policy. For more information on the policy function, see Section 12.1.1.2, "Policy Implementation Functions".

There are two distinct scripted conditions (defined in the condition elements). The first condition asserts that the user object, contained in the fullObject argument, is a member of the sys-admin org. If that assertion is true, the max-age policy is applied to the password attribute of the user object, and the maximum number of days that a password may remain unchanged is set to 90.

The second condition asserts that the user object is a member of the employees org. If that assertion is true, the max-age policy is applied to the password attribute of the user object, and the maximum number of days that a password may remain unchanged is set to 30.

In the event that neither condition is met (the user object is not a member of the sys-admin org or the employees org), an optional fallback policy can be applied. In this example, the fallback policy also references the max-age policy and specifies that for such users, their password must be changed after 7 days.

The dependencies field prevents the condition scripts from being run at all, if the user object does not include an org attribute.

Note

This example assumes that a custom max-age policy validation function has been defined, as described in Section 12.2.1, "Adding Custom Scripted Policies".
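
A minimal sketch of such a policy script follows. It assumes a hypothetical passwordLastChanged timestamp on the user object and a MAX_AGE requirement ID; neither is part of the default policy script, so adapt the field and requirement names to your own schema:

var policy = {
    "policyId" : "max-age",
    "policyExec" : "maxAge",
    "policyRequirements" : ["MAX_AGE"]
};

addPolicy(policy);

function maxAge(fullObject, value, params, property) {
    // Hypothetical field holding the date the password was last changed (ISO 8601 string).
    var lastChanged = fullObject.passwordLastChanged;
    if (lastChanged) {
        var ageInDays = (new Date().getTime() - new Date(lastChanged).getTime()) / (1000 * 60 * 60 * 24);
        // maxDays is passed as an array of strings in the sample configuration, for example ["90"].
        if (ageInDays > parseInt(params.maxDays[0], 10)) {
            return [ { "policyRequirement" : "MAX_AGE", "params" : { "maxDays" : params.maxDays } } ];
        }
    }
    return [];
}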

12.3. Disabling Policy Enforcement

Policy enforcement is the automatic validation of data when it is created, updated, or patched. In certain situations you might want to disable policy enforcement temporarily. You might, for example, want to import existing data that does not meet the validation requirements with the intention of cleaning up this data at a later stage.

You can disable policy enforcement by setting openidm.policy.enforcement.enabled to false in your project's conf/boot/boot.properties file. This setting disables policy enforcement in the back-end only, and has no impact on direct policy validation calls to the Policy Service (which the UI makes to validate input fields). So, with policy enforcement disabled, data added directly over REST is not subject to validation, but data added with the UI is still subject to validation.
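
For example, in your project's conf/boot/boot.properties file:

# Disable automatic policy enforcement (temporary measure only)
openidm.policy.enforcement.enabled=false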

Do not disable policy enforcement permanently in a production environment.

12.4. Managing Policies Over REST

You can manage the policy service over the REST interface, at the openidm/policy endpoint (for example, http://localhost:8080/openidm/policy), as shown in the following examples.

12.4.1. Listing the Defined Policies

The following REST call displays a list of all the policies defined in policy.json (policies for objects other than managed objects). The policy objects are returned in JSON format, with one object for each defined policy ID:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/policy"
{
  "_id": "",
  "resources": [
    {
      "resource": "repo/internal/user/*",
      "properties": [
        {
          "name": "_id",
          "policies": [
            {
              "policyId": "cannot-contain-characters",
              "params": {
                "forbiddenChars": [
                  "/"
                ]
              },
              "policyFunction": "\nfunction (fullObject, value, params, property)
...

To display the policies that apply to a specific resource, include the resource name in the URL. For example, the following REST call displays the policies that apply to managed users:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/policy/managed/user/*"
{
  "_id": "*",
  "resource": "managed/user/*",
  "properties": [
    {
      "name": "_id",
      "conditionalPolicies": null,
      "fallbackPolicies": null,
      "policyRequirements": [
        "CANNOT_CONTAIN_CHARACTERS"
      ],
      "policies": [
        {
          "policyId": "cannot-contain-characters",
          "params": {
            "forbiddenChars": [
              "/"
            ]
...

12.4.2. Validating Objects and Properties Over REST

To verify that an object adheres to the requirements of all applied policies, include the validateObject action in the request.

The following example verifies that a new managed user object is acceptable, in terms of the policy requirements:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
  "sn":"Jones",
  "givenName":"Bob",
  "_id":"bjones",
  "telephoneNumber":"0827878921",
  "passPhrase":null,
  "mail":"bjones@example.com",
  "accountStatus":"active",
  "userName":"bjones@example.com",
  "password":"123"
 }' \
 "http://localhost:8080/openidm/policy/managed/user/bjones?_action=validateObject"
{
  "result": false,
  "failedPolicyRequirements": [
    {
      "policyRequirements": [
        {
          "policyRequirement": "MIN_LENGTH",
          "params": {
            "minLength": 8
          }
        }
      ],
      "property": "password"
    },
    {
      "policyRequirements": [
        {
          "policyRequirement": "AT_LEAST_X_CAPITAL_LETTERS",
          "params": {
            "numCaps": 1
          }
        }
      ],
      "property": "password"
    }
  ]
}

The result (false) indicates that the object is not valid. The unfulfilled policy requirements are provided as part of the response - in this case, the user password does not meet the validation requirements.

Use the validateProperty action to verify that a specific property adheres to the requirements of a policy.

The following example checks whether Barbara Jensen's new password (12345) is acceptable:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{ "password" : "12345" }' \
 "http://localhost:8080/openidm/policy/managed/user/bjensen?_action=validateProperty"
{
  "result": false,
  "failedPolicyRequirements": [
    {
      "policyRequirements": [
        {
          "policyRequirement": "MIN_LENGTH",
          "params": {
            "minLength": 8
          }
        }
      ],
      "property": "password"
    },
    {
      "policyRequirements": [
        {
          "policyRequirement": "AT_LEAST_X_CAPITAL_LETTERS",
          "params": {
            "numCaps": 1
          }
        }
      ],
      "property": "password"
    }
  ]
}

The result (false) indicates that the password is not valid. The unfulfilled policy requirements are provided as part of the response - in this case, the minimum length and the minimum number of capital letters.

Validating a property that fulfills the policy requirements returns a true result, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{ "password" : "1NewPassword" }' \
 "http://localhost:8080/openidm/policy/managed/user/bjensen?_action=validateProperty"
{
  "result": true,
  "failedPolicyRequirements": []
}

Chapter 13. Configuring Server Logs

In this chapter, you will learn about server logging, that is, the messages that IDM logs related to server activity.

Server logging is separate from auditing. Auditing logs activity on the IDM system, such as access and synchronization. For information about audit logging, see Chapter 22, "Setting Up Audit Logging". To configure server logging, edit the logging.properties file in your project-dir/conf directory.

Important

When you change the logging settings you must restart the server for those changes to take effect. Alternatively, you can use JMX via jconsole to change the logging settings, in which case changes take effect without restarting the server.

13.1. Log Message Files

By default, IDM writes log messages in simple format to openidm/logs/openidm*.log files, rotating files when the size reaches 5 MB, and retaining up to 5 files. All system and custom log messages are also written to these files.

You can modify these limits in the following properties in the logging.properties file for your project:

# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit = 5242880

# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count = 5

Note

There is currently no logging.properties setting for time-based rotation of server log files. However, on UNIX systems you can use the logrotate command to schedule server log rotation at a regular interval. For more information, see the logrotate man page.
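
A minimal logrotate configuration might look like the following sketch. The path, schedule, and retention shown here are assumptions to adapt to your deployment; copytruncate is used because the server keeps its log files open while running:

# /etc/logrotate.d/openidm (illustrative only)
/path/to/openidm/logs/openidm*.log {
    weekly
    rotate 8
    compress
    copytruncate
    missingok
    notifempty
}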

13.2. Logging Levels

You can update the configuration to attach loggers to individual packages, setting the log level to one of the following values:

SEVERE (highest value)
WARNING
INFO
CONFIG
FINE
FINER
FINEST (lowest value)

If you use logger functions in your JavaScript scripts, you can set the log level for the scripts as follows:

org.forgerock.openidm.script.javascript.JavaScript.level=level

You can override the log level settings per script by using the following:

org.forgerock.openidm.script.javascript.JavaScript.script-name.level
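
For example, the following logging.properties lines raise the level for one package and override the level for a single script. The package and script names here are placeholders; substitute the packages and scripts relevant to your deployment:

# More detailed logging for one package:
org.forgerock.openidm.sync.level = FINE

# Default level for JavaScript logger calls, with an override for one script:
org.forgerock.openidm.script.javascript.JavaScript.level = INFO
org.forgerock.openidm.script.javascript.JavaScript.myscript.js.level = FINEST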

13.3. Disabling Logs

If required, you can also disable logs. For example, to disable ConsoleHandler logging, make the following changes in your project's conf/logging.properties file before you start IDM.

Set java.util.logging.ConsoleHandler.level = OFF, and comment out other references to ConsoleHandler, as shown in the following excerpt:

   # ConsoleHandler: A simple handler for writing formatted records to System.err
   #handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
   handlers=java.util.logging.FileHandler
   ...
   # --- ConsoleHandler ---
   # Default: java.util.logging.ConsoleHandler.level = INFO
   java.util.logging.ConsoleHandler.level = OFF
   #java.util.logging.ConsoleHandler.formatter = ...
   #java.util.logging.ConsoleHandler.filter=...

Chapter 14. Connecting to External Resources

This chapter describes how to connect to external resources such as LDAP, Active Directory, flat files, and others. Configurations shown here are simplified to show essential aspects. Not all resources support all IDM operations; however, the resources shown here support most of the CRUD operations, and also reconciliation and liveSync.

Resources refer to external systems, databases, directory servers, and other sources of identity data that are managed and audited by the identity management system. To connect to resources, IDM loads the Identity Connector Framework, OpenICF. OpenICF aims to avoid the need to install agents to access resources, instead using the resources' native protocols. For example, OpenICF connects to database resources using the database's Java connection libraries or JDBC driver. It connects to directory servers over LDAP. It connects to UNIX systems by using ssh.

IDM provides several connectors by default, in the /path/to/openidm/connectors directory. You can download additional connectors from ForgeRock's BackStage site.

For details about all connectors supported for use with IDM, see Connector Reference.

14.1. The Open Identity Connector Framework (OpenICF)

OpenICF provides a common interface to allow identity services access to the resources that contain user information. IDM loads the OpenICF API as one of its OSGi modules. OpenICF uses connectors to separate the IDM implementation from the dependencies of the resource to which IDM is connecting. A specific connector is required for each remote resource. Connectors can run locally (on the IDM host) or remotely.

Local connectors are loaded by OpenICF as regular bundles in the OSGi container. Most connectors run locally. Remote connectors must be executed on a remote connector server. If a resource requires access libraries that cannot be included as part of the IDM process, you must use a connector server. For example, OpenICF connects to Microsoft Active Directory through a remote connector server that is implemented as a .NET service.

Connections to remote connector servers are configured in a single connector info provider configuration file, located in your project's conf/ directory.

Connectors themselves are configured through provisioner files. One provisioner file must exist for each connector. Provisioner files are named provisioner.openicf-name where name corresponds to the name of the connector, and are also located in the conf/ directory.

A number of sample connector configurations are available in the openidm/samples/example-configurations/provisioners directory. To use these connectors, edit the configuration files as required, and copy them to your project's conf/ directory.

The following figure shows how IDM connects to resources by using connectors and remote connector servers. The figure shows one local connector (LDAP) and two remote connectors (Scripted SQL and PowerShell). In this example, the remote Scripted SQL connector uses a remote Java connector server. The remote PowerShell connector always requires a remote .NET connector server.

Figure 14.1. How IDM Uses the OpenICF Framework and Connectors
OpenICF architecture

Tip

Connectors that use the .NET framework must run remotely. Java connectors can be run locally or remotely. You might run a Java connector remotely for security reasons (firewall constraints), for geographical reasons, or if the JVM version that is required by the connector conflicts with the JVM version that is required by IDM.

14.2. Configuring Connectors

Connectors are configured through the OpenICF provisioner service. Each connector configuration is stored in a file in your project's conf/ directory, and accessible over REST at the openidm/conf endpoint. Connector configuration files are named project-dir/conf/provisioner.openicf-name where name corresponds to the name of the connector.

If you are creating your own connector configuration files, do not include additional dash characters ( - ) in the connector name, as this might cause problems with the OSGi parser. For example, the name provisioner.openicf-hrdb.json is fine. The name provisioner.openicf-hr-db.json is not.

You can create a connector configuration in the following ways:

14.2.1. Using the Sample Provisioner Files

A number of sample connector configurations are available in the openidm/samples/example-configurations/provisioners directory. To use these connector configurations, edit the configuration files as required, and copy them to your project's conf directory.

The following example shows a connector configuration for a CSV file resource:

{
  "name"                      : "csvfile",
  "connectorRef"              : connector-ref-object,
  "producerBufferSize"        : integer,
  "connectorPoolingSupported" : boolean, true/false,
  "poolConfigOption"          : pool-config-option-object,
  "operationTimeout"          : operation-timeout-object,
  "configurationProperties"   : configuration-properties-object,
  "syncFailureHandler"        : sync-failure-handler-object,
  "resultsHandlerConfig"      : results-handler-config-object,
  "objectTypes"               : object-types-object,
  "operationOptions"          : operation-options-object
 }

The name property specifies the name of the system to which you are connecting. This name must be alphanumeric.

All the other configuration objects are described in more detail later in this section.

14.2.2. Creating Connector Configurations With the Admin UI

To configure connectors in the Admin UI, select Configure > Connector. If your project has an existing connector configuration (for example, if you have started IDM with one of the sample configurations) click on that connector to edit it. If you're starting with a new project, click New Connector to configure a new connector.

The connectors displayed on the Connectors page reflect the provisioner files that are in your project's conf/ directory. To add a new connector configuration, you can also copy a provisioner file from the /path/to/openidm/samples/example-configurations/provisioners directory, then edit it to fit your deployment.

When you add a new connector, the Connector Type dropdown list reflects the actual connector JARs that are in the /path/to/openidm/connectors directory. You can have more than one connector configuration for a specific connector type. For example, you might use the LDAP connector to set up two connector configurations - one to an Active Directory server and one to a ForgeRock Directory Services (DS) instance. The Connector Types listed here do not include all supported connectors - only those that are bundled with IDM. You can download additional connectors from ForgeRock's BackStage site and place them in the /path/to/openidm/connectors directory. For information on all supported connectors and how to configure them, see the Connector Reference.

The tabs on the connector configuration screens correspond to the objects and properties described in the remaining sections of this chapter.

When a connector configuration is complete, and IDM is able to establish the connection to the remote resource, the Data tab displays the objects in that remote resource. For example, the following image shows the contents of a connected LDAP resource:

Figure 14.2. Data Tab For a Connected LDAP Resource
Image shows the data tab for a connected LDAP resource

You can search through these objects with either the Basic Filter shown in each column, or the Advanced Filter option, which allows you to build many of the queries shown in Section 8.3, "Defining and Calling Queries".

14.2.3. Creating Connector Configurations Over REST

You create a new connector configuration over REST in three stages:

  1. List the available connectors.

  2. Generate the core configuration.

  3. Connect to the target system and generate the final configuration.

List the available connectors by using the following command:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system?_action=availableConnectors"

Available connectors are installed in openidm/connectors. IDM bundles the connectors described in Section 2.3, "Supported Connectors" in the Release Notes.

The preceding command therefore returns the following output:

{
  "connectorRef": [
    {
      "bundleName": "org.forgerock.openidm.provisioner-salesforce",
      "bundleVersion": "5.5.0",
      "displayName": "Salesforce Connector",
      "connectorName": "org.forgerock.openidm.salesforce.Salesforce",
      "systemType": "provisioner.salesforce"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.ssh-connector",
      "connectorName": "org.forgerock.openicf.connectors.ssh.SSHConnector",
      "displayName": "SSH Connector",
      "bundleVersion": "1.4.2.0"
    },
    {
      "systemType" : "provisioner.openicf",
      "bundleName" : "org.forgerock.openicf.connectors.scim-connector",
      "connectorName" : "org.forgerock.openicf.connectors.scim.ScimConnector",
      "displayName" : "Scim Connector",
      "bundleVersion" : "1.4.0.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.marketo-connector",
      "connectorName": "org.forgerock.openicf.connectors.marketo.MarketoConnector",
      "displayName": "Marketo Connector",
      "bundleVersion": "1.4.3.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
      "connectorName": "org.identityconnectors.ldap.LdapConnector",
      "displayName": "LDAP Connector",
      "bundleVersion": "1.4.6.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.kerberos-connector",
      "connectorName": "org.forgerock.openicf.connectors.kerberos.KerberosConnector",
      "displayName": "Kerberos Connector",
      "bundleVersion": "1.4.3.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "connectorName": "org.forgerock.openicf.connectors.scriptedsql.ScriptedSQLConnector",
      "displayName": "Scripted SQL Connector",
      "bundleVersion": "1.4.4.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "connectorName": "org.forgerock.openicf.connectors.scriptedrest.ScriptedRESTConnector",
      "displayName": "Scripted REST Connector",
      "bundleVersion": "1.4.4.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "connectorName": "org.forgerock.openicf.connectors.scriptedcrest.ScriptedCRESTConnector",
      "displayName": "Scripted CREST Connector",
      "bundleVersion": "1.4.4.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "connectorName": "org.forgerock.openicf.connectors.groovy.ScriptedPoolableConnector",
      "displayName": "Scripted Poolable Groovy Connector",
      "bundleVersion": "1.4.4.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.groovy-connector",
      "connectorName": "org.forgerock.openicf.connectors.groovy.ScriptedConnector",
      "displayName": "Scripted Groovy Connector",
      "bundleVersion": "1.4.4.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.googleapps-connector",
      "connectorName": "org.forgerock.openicf.connectors.googleapps.GoogleAppsConnector",
      "displayName": "GoogleApps Connector",
      "bundleVersion": "1.4.2.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.databasetable-connector",
      "connectorName": "org.identityconnectors.databasetable.DatabaseTableConnector",
      "displayName": "Database Table Connector",
      "bundleVersion": "1.1.1.0"
    },
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
      "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
      "displayName": "CSV File Connector",
      "bundleVersion": "1.5.2.0"
    },
    {
      "systemType" : "provisioner.openicf",
      "bundleName" : "org.forgerock.openicf.connectors.adobecm-connector",
      "connectorName" : "org.forgerock.openicf.acm.ACMConnector",
      "displayName" : "Adobe Marketing Cloud Connector",
      "bundleVersion" : "1.5.0.0"
    }
  ]
}

To generate the core configuration, choose one of the available connectors by copying one of the JSON objects from the generated list into the body of the REST command, as shown in the following command for the CSV file connector:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-Type: application/json" \
--request POST \
--data '{"connectorRef":
    {
      "systemType": "provisioner.openicf",
      "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
      "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
      "displayName": "CSV File Connector",
      "bundleVersion": "1.5.2.0"
    }
 }' \
 "http://localhost:8080/openidm/system?_action=createCoreConfig"

This command returns a core connector configuration, similar to the following:

{
  "connectorRef": {
    "systemType": "provisioner.openicf",
    "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
    "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
    "displayName": "CSV File Connector",
    "bundleVersion": "1.5.2.0"
  },
  "poolConfigOption": {
    "maxObjects": 10,
    "maxIdle": 10,
    "maxWait": 150000,
    "minEvictableIdleTimeMillis": 120000,
    "minIdle": 1
  },
  "resultsHandlerConfig": {
    "enableNormalizingResultsHandler": true,
    "enableFilteredResultsHandler": true,
    "enableCaseInsensitiveFilter": false,
    "enableAttributesToGetSearchResultsHandler": true
  },
  "operationTimeout": {
    "CREATE": -1,
    "UPDATE": -1,
    "DELETE": -1,
    "TEST": -1,
    "SCRIPT_ON_CONNECTOR": -1,
    "SCRIPT_ON_RESOURCE": -1,
    "GET": -1,
    "RESOLVEUSERNAME": -1,
    "AUTHENTICATE": -1,
    "SEARCH": -1,
    "VALIDATE": -1,
    "SYNC": -1,
    "SCHEMA": -1
  },
  "configurationProperties": {
    "headerPassword": "password",
    "csvFile": null,
    "newlineString": "\n",
    "headerUid": "uid",
    "quoteCharacter": "\"",
    "fieldDelimiter": ",",
    "syncFileRetentionCount": 3
  }
}

The configuration that is returned is not yet functional. It does not contain the required system-specific configurationProperties, such as the host name and port for an external system, or the csvFile for the CSV file connector. In addition, the configuration does not include the complete list of objectTypes and operationOptions.

To generate the final configuration, add values for the required configurationProperties to the core configuration, and use the updated configuration as the body for the next command:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-Type: application/json" \
--request POST \
--data '{
  "configurationProperties": {
    "headerPassword": "password",
    "csvFile": "&{launcher.project.location}/data/csvConnectorData.csv",
    "newlineString": "\n",
    "headerUid": "uid",
    "quoteCharacter": "\"",
    "fieldDelimiter": ",",
    "syncFileRetentionCount": 3
  },
  "connectorRef": {
    "systemType": "provisioner.openicf",
    "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
    "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
    "displayName": "CSV File Connector",
    "bundleVersion": "1.5.2.0"
  },
  "poolConfigOption": {
    "maxObjects": 10,
    "maxIdle": 10,
    "maxWait": 150000,
    "minEvictableIdleTimeMillis": 120000,
    "minIdle": 1
  },
  "resultsHandlerConfig": {
    "enableNormalizingResultsHandler": true,
    "enableFilteredResultsHandler": true,
    "enableCaseInsensitiveFilter": false,
    "enableAttributesToGetSearchResultsHandler": true
  },
  "operationTimeout": {
    "CREATE": -1,
    "UPDATE": -1,
    "DELETE": -1,
    "TEST": -1,
    "SCRIPT_ON_CONNECTOR": -1,
    "SCRIPT_ON_RESOURCE": -1,
    "GET": -1,
    "RESOLVEUSERNAME": -1,
    "AUTHENTICATE": -1,
    "SEARCH": -1,
    "VALIDATE": -1,
    "SYNC": -1,
    "SCHEMA": -1
  }
 }   ' \
 "http://localhost:8080/openidm/system?_action=createFullConfig"

Note

Notice the single quotes around the argument to the --data option in the preceding command. For most UNIX shells, single quotes around a string prevent the shell from executing the command when it encounters a new line in the content. You can therefore pass the --data '...' option on a single line, or include line feeds within the quoted string.

IDM attempts to read the schema, if available, from the external resource in order to generate output. IDM then iterates through schema objects and attributes, creating JSON representations for objectTypes and operationOptions for supported objects and operations.

The output includes the basic --data input, along with operationOptions and objectTypes.

Because IDM produces a full property set for all attributes and all object types in the schema from the external resource, the resulting configuration can be large. For an LDAP server, IDM can generate a configuration containing several tens of thousands of lines, for example. You might therefore want to reduce the schema to a minimum on the external resource before you run the createFullConfig command.

When you have the complete connector configuration, save that configuration in a file named provisioner.openicf-name.json (where name corresponds to the name of the connector) and place it in the conf directory of your project.
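
With the provisioner file in place, you can check that IDM can reach the configured resources by running the test action on the system endpoint, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system?_action=test"

The response reports the status of each configured connector; the exact fields in the response vary by connector.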

14.2.4. Setting the Connector Reference Properties

The following example shows a connector reference object:

"connectorRef" : {
    "bundleName"    : "org.forgerock.openicf.connectors.csvfile-connector",
    "bundleVersion" : "[1.5.1.4,1.6.0.0)",
    "connectorName" : "org.forgerock.openicf.csvfile.CSVFileConnector",
    "connectorHostRef" : "csv"
},
bundleName

string, required

The ConnectorBundle-Name of the OpenICF connector.

bundleVersion

string, required

The ConnectorBundle-Version of the OpenICF connector. The value can be a single version (such as 1.4.0.0) or a range of versions, which enables you to support multiple connector versions in a single project.

You can specify a range of versions as follows:

  • [1.1.0.0,1.4.0.0] indicates that all connector versions from 1.1 to 1.4, inclusive, are supported.

  • [1.1.0.0,1.4.0.0) indicates that all connector versions from 1.1 to 1.4, including 1.1 but excluding 1.4, are supported.

  • (1.1.0.0,1.4.0.0] indicates that all connector versions from 1.1 to 1.4, excluding 1.1 but including 1.4, are supported.

  • (1.1.0.0,1.4.0.0) indicates that all connector versions from 1.1 to 1.4, exclusive, are supported.

When a range of versions is specified, IDM uses the latest connector that is available within that range. If your project requires a specific connector version, you must explicitly state the version in your connector configuration file, or constrain the range to address only the version that you need.

connectorName

string, required

The connector implementation class name.

connectorHostRef

string, optional

If the connector runs remotely, the value of this field must match the name field of the RemoteConnectorServers object in the connector server configuration file (provisioner.openicf.connectorinfoprovider.json). For example:

...
    "remoteConnectorServers" :
        [
            {
                "name" : "dotnet",
...

If the connector runs locally, the value of this field can be one of the following:

  • If the connector .jar is installed in openidm/connectors/, the value must be "#LOCAL". This is currently the default and recommended location.

  • If the connector .jar is installed in openidm/bundle/ (not recommended), the value must be "osgi:service/org.forgerock.openicf.framework.api.osgi.ConnectorManager".

14.2.5. Setting the Pool Configuration

The poolConfigOption specifies the pool configuration for poolable connectors only (connectors that have "connectorPoolingSupported" : true). Non-poolable connectors ignore this parameter.

The following example shows a pool configuration option object for a poolable connector:

{
  "maxObjects"                 : 10,
  "maxIdle"                    : 10,
  "maxWait"                    : 150000,
  "minEvictableIdleTimeMillis" : 120000,
  "minIdle"                    : 1
}
maxObjects

The maximum number of idle and active instances of the connector.

maxIdle

The maximum number of idle instances of the connector.

maxWait

The maximum time, in milliseconds, that the pool waits for an object before timing out. A value of 0 means that there is no timeout.

minEvictableIdleTimeMillis

The maximum time, in milliseconds, that an object can be idle before it is removed. A value of 0 means that there is no idle timeout.

minIdle

The minimum number of idle instances of the connector.

14.2.6. Setting the Operation Timeouts

The operation timeout property enables you to configure timeout values per operation type. By default, no timeout is configured for any operation type. A sample configuration follows:

{
  "CREATE"              : -1,
  "TEST"                : -1,
  "AUTHENTICATE"        : -1,
  "SEARCH"              : -1,
  "VALIDATE"            : -1,
  "GET"                 : -1,
  "UPDATE"              : -1,
  "DELETE"              : -1,
  "SCRIPT_ON_CONNECTOR" : -1,
  "SCRIPT_ON_RESOURCE"  : -1,
  "SYNC"                : -1,
  "SCHEMA"              : -1
}
operation-name

Timeout in milliseconds

A value of -1 disables the timeout.

14.2.7. Setting the Connection Configuration

The configurationProperties object specifies the configuration for the connection between the connector and the resource, and is therefore resource-specific.

The following example shows a configuration properties object for the default CSV sample resource connector:

"configurationProperties" : {
    "csvFile" : "&{launcher.project.location}/data/csvConnectorData.csv"
},
property

Individual properties depend on the type of connector.

14.2.8. Setting the Synchronization Failure Configuration

The syncFailureHandler object specifies what should happen if a liveSync operation reports a failure for an operation. The following example shows a synchronization failure configuration:

{
    "maxRetries" : 5,
    "postRetryAction" : "logged-ignore"
}  
maxRetries

positive integer or -1, required

The number of attempts that IDM should make to process a failed modification. A value of zero indicates that failed modifications should not be reattempted. In this case, the post retry action is executed immediately when a liveSync operation fails. A value of -1 (or omitting the maxRetries property, or the entire syncFailureHandler object) indicates that failed modifications should be retried an infinite number of times. In this case, no post retry action is executed.

postRetryAction

string, required

The action that should be taken if the synchronization operation fails after the specified number of attempts. The post retry action can be one of the following:

  • logged-ignore - IDM ignores the failed modification, and logs its occurrence.

  • dead-letter-queue - IDM saves the details of the failed modification in a table in the repository (accessible over REST at repo/synchronisation/deadLetterQueue/provisioner-name).

  • script - specifies a custom script to execute when the maximum number of retries has been reached.

For more information, see Section 15.10, "Configuring the LiveSync Retry Policy".
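
As an illustrative sketch, a handler that retries a failed change three times and then runs a custom script might be configured as follows. The script file name is a placeholder, and the exact structure of the script post retry action is described in the section referenced above:

"syncFailureHandler" : {
    "maxRetries" : 3,
    "postRetryAction" : {
        "script" : {
            "type" : "text/javascript",
            "file" : "script/onSyncFailure.js"
        }
    }
}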

14.2.9. Configuring How Results Are Handled

The resultsHandlerConfig object specifies how OpenICF returns results. These configuration properties do not apply to all connectors and depend on the interfaces that are implemented by each connector. For information about the interfaces that connectors support, see the Connector Reference.

The following example shows a results handler configuration object:

"resultsHandlerConfig" : {
    "enableNormalizingResultsHandler" : true,
    "enableFilteredResultsHandler" : false,
    "enableCaseInsensitiveFilter" : false,
    "enableAttributesToGetSearchResultsHandler" : false
}  
enableNormalizingResultsHandler

boolean, false by default

When this property is enabled, OpenICF normalizes returned attributes to ensure that they are filtered consistently. If the connector implements the attribute normalizer interface, enable the interface by setting this property to true. If the connector does not implement the attribute normalizer interface, the value of this property has no effect.

enableFilteredResultsHandler

boolean, false by default

Most connectors use the filtering and search capabilities of the remote connected system. In these cases, you can leave this property set to false. If the connector does not use the remote system's filtering and search capabilities, you must set this property to true.

All the non-scripted connectors, apart from the CSV connector, use the filtering mechanism of the remote system. In the case of the CSV connector, the remote resource has no filtering mechanism, so you must set enableFilteredResultsHandler to true. For the scripted connectors, the setting depends on how you have implemented the connector.

enableCaseInsensitiveFilter

boolean, false by default

This property applies only if enableFilteredResultsHandler is set to true. The filtered results handler is case-sensitive by default. For example, a search for lastName = "Jensen" will not match a stored user with lastName : jensen. When the filtered results handler is enabled, you can use this property to enable case-insensitive filtering. If you leave this property set to false, searches on that resource will be case-sensitive.

enableAttributesToGetSearchResultsHandler

boolean, false by default

By default, IDM determines which attributes should be retrieved in a search. If you set this property to true, the OpenICF framework removes all attributes from the READ/QUERY response, except for those that are specifically requested. For performance reasons, you should set this property to false for local connectors and to true for remote connectors.

14.2.10. Specifying the Supported Object Types

The objectTypes configuration specifies the object types (user, group, account, and so on) that are supported by the connector. The object names that you define here determine how the object is accessed in the URI. For example:

system/systemName/objectType
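
For example, assuming a connector configuration named csvfile that defines an account object type (both names are illustrative), you could query the IDs of all accounts on that resource as follows:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/system/csvfile/account?_queryId=query-all-ids"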

This configuration is based on the JSON Schema with the extensions described in the following section.

Attribute names that start or end with __ are regarded as special attributes by OpenICF. The purpose of the special attributes in OpenICF is to enable someone who is developing a new connector to create a contract regarding how a property can be referenced, regardless of the application that is using the connector. In this way, the connector can map specific object information between an arbitrary application and the resource, without knowing how that information is referenced in the application.

These attributes have no specific meaning in the context of IDM, although some of the connectors that are bundled with IDM use these attributes. The generic LDAP connector, for example, can be used with ForgeRock Directory Services (DS), Active Directory, OpenLDAP, and other LDAP directories. Each of these directories might use a different attribute name to represent the same type of information. For example, Active Directory uses unicodePassword and DS uses userPassword to represent the same thing, a user's password. The LDAP connector uses the special OpenICF __PASSWORD__ attribute to abstract that difference. In the same way, the LDAP connector maps the __NAME__ attribute to an LDAP dn.

The OpenICF __UID__ is a special case. The __UID__ must not be included in the IDM configuration or in any update or create operation. This attribute denotes the unique identity attribute of an object and IDM always maps it to the _id of the object.

The following excerpt shows the configuration of an account object type:

{
  "account" :
  {
    "$schema" : "http://json-schema.org/draft-03/schema",
    "id" : "__ACCOUNT__",
    "type" : "object",
    "nativeType" : "__ACCOUNT__",
    "absentIfEmpty" : false,
    "absentIfNull" : true,
    "properties" :
    {
      "name" :
      {
        "type" : "string",
        "nativeName" : "__NAME__",
        "nativeType" : "JAVA_TYPE_PRIMITIVE_LONG",
        "flags" :
        [
          "NOT_CREATABLE",
          "NOT_UPDATEABLE",
          "NOT_READABLE",
          "NOT_RETURNED_BY_DEFAULT"
        ]
      },
      "groups" :
      {
        "type" : "array",
        "items" :
        {
          "type" : "string",
          "nativeType" : "string"
        },
        "nativeName" : "__GROUPS__",
        "nativeType" : "string",
        "flags" :
        [
          "NOT_RETURNED_BY_DEFAULT"
        ]
      },
      "givenName" :
      {
        "type" : "string",
        "nativeName" : "givenName",
        "nativeType" : "string"
      }
    }
  }
}

OpenICF supports an __ALL__ object type that ensures that objects of every type are included in a synchronization operation. The primary purpose of this object type is to prevent synchronization errors when multiple changes affect more than one object type.

For example, imagine a deployment synchronizing two external systems. On system A, the administrator creates a user, jdoe, then adds the user to a group, engineers. When these changes are synchronized to system B, if the __GROUPS__ object type is synchronized first, the synchronization will fail, because the group contains a user that does not yet exist on system B. Synchronizing the __ALL__ object type ensures that user jdoe is created on the external system before he is added to the group engineers.

The __ALL__ object type is assumed by default - you do not need to declare it in your provisioner configuration file. If it is not declared, the object type is named __ALL__. If you want to map a different name for this object type, declare it in your provisioner configuration. The following excerpt from a sample provisioner configuration uses the name allobjects:

"objectTypes": {
    "allobjects": {
        "$schema": "http://json-schema.org/draft-03/schema",
        "id": "__ALL__",
        "type": "object",
        "nativeType": "__ALL__"
    },
...

A liveSync operation invoked with no object type assumes an object type of __ALL__. For example, the following call invokes a liveSync operation on all defined object types in an LDAP system:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap?_action=liveSync"

Note

Using the __ALL__ object type requires a mechanism to ensure the order in which synchronization changes are processed. Servers that use the cn=changelog mechanism to order sync changes, such as ForgeRock Directory Services (DS), Oracle DSEE, and the legacy Sun Directory Server, cannot use the __ALL__ object type by default. Such servers must be forced to use time stamps to order their sync changes. For these LDAP server types, set useTimestampsForSync to true in the provisioner configuration.

LDAP servers that use timestamps by default (such as Active Directory GCs and OpenLDAP) can use the __ALL__ object type without any additional configuration. Active Directory and Active Directory LDS, which use Update Sequence Numbers, can also use the __ALL__ object type without additional configuration.
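For LDAP servers that require timestamp-based ordering, as described above, the following excerpt of an LDAP connector configuration is a minimal sketch (other configuration properties are omitted):

"configurationProperties" : {
    ...
    "useTimestampsForSync" : true
},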

14.2.10.1. Adding Objects and Properties Using the UI

To add object types and properties to a connector configuration by using the Admin UI, select Configure > Connectors. Select the connector that you want to change, then select the Object Types tab.

The connector reads the schema from the remote resource to determine the object types and properties that can be added to its configuration. When you select one of these object types, you can think of it as a template. Edit the basic object type, as required, to suit your deployment.

For example, when you connect to ForgeRock Directory Services (DS) using the LDAP connector, the following list of object types is available to add to the connector configuration:

Adding Object Types To a Connector Configuration

To add a property to an object type, select the Edit icon next to the object type, then select Add Property.

14.2.10.2. Extending the Object Type Configuration

nativeType

string, optional

The native OpenICF object type.

The list of supported native object types is dependent on the resource, or on the connector. For example, an LDAP connector might have object types such as __ACCOUNT__ and __GROUP__.

14.2.10.3. Specifying the Behavior For Empty Attributes

The absentIfEmpty and absentIfNull object class properties enable you to specify how attributes are handled during synchronization if their values are null (for single-valued attributes) or empty (for multi-valued attributes). You can set these properties per object type.

By default, these properties are set as follows:

"absentIfEmpty" : false

Multi-valued attributes whose values are empty are included in the resource response during synchronization.

"absentIfNull" : true

Single-valued attributes whose values are null are removed from the resource response during synchronization.

14.2.10.4. Extending the Property Type Configuration

nativeType

string, optional

The native OpenICF attribute type.

The following native types are supported:

JAVA_TYPE_BIGDECIMAL
JAVA_TYPE_BIGINTEGER
JAVA_TYPE_BYTE
JAVA_TYPE_BYTE_ARRAY
JAVA_TYPE_CHAR
JAVA_TYPE_CHARACTER
JAVA_TYPE_DATE
JAVA_TYPE_DOUBLE
JAVA_TYPE_FILE
JAVA_TYPE_FLOAT
JAVA_TYPE_GUARDEDBYTEARRAY
JAVA_TYPE_GUARDEDSTRING
JAVA_TYPE_INT
JAVA_TYPE_INTEGER
JAVA_TYPE_LONG
JAVA_TYPE_OBJECT
JAVA_TYPE_PRIMITIVE_BOOLEAN
JAVA_TYPE_PRIMITIVE_BYTE
JAVA_TYPE_PRIMITIVE_DOUBLE
JAVA_TYPE_PRIMITIVE_FLOAT
JAVA_TYPE_PRIMITIVE_LONG
JAVA_TYPE_STRING

Note

The JAVA_TYPE_DATE property is deprecated. Functionality may be removed in a future release. This property-level extension is an alias for string. Any dates assigned to this extension should be formatted per ISO 8601.

nativeName

string, optional

The native OpenICF attribute name.

flags

string, optional

The native OpenICF attribute flags. OpenICF supports the following attribute flags:

  • MULTIVALUED - specifies that the property can be multivalued.

    For multi-valued properties, if the property value type is anything other than a string, you must include an items property that declares the data type.

    The following example shows the entries property of the authentication object in a provisioner file. The entries property is multi-valued, and its elements are of type object:

    "authentication" : {
    ...
        "properties" : {
            ...
            "entries" : {
                "type" : "object",
                "required" : false,
                "nativeName" : "entries",
                "nativeType" : "object",
                    "items" : {
                        "type" : "object"
                    },
                "flags" : [
                    "MULTIVALUED"
                ]
            },
    ...
  • NOT_CREATABLE, NOT_READABLE, NOT_RETURNED_BY_DEFAULT, NOT_UPDATEABLE

    In some cases, the connector might not support manipulating an attribute because the attribute can only be changed directly on the remote system. For example, if the name attribute of an account can only be created by Active Directory, and never changed by IDM, you would add NOT_CREATABLE and NOT_UPDATEABLE to the provisioner configuration for that attribute.

    Certain attributes such as LDAP groups or other calculated attributes might be expensive to read. You might want to avoid returning these attributes in a default read of the object, unless they are explicitly requested. In this case, you would add the NOT_RETURNED_BY_DEFAULT flag to the provisioner configuration for that attribute.

  • REQUIRED - specifies that the property is required in create operations. This flag sets the required property of an attribute as follows:

    "required" : true

You can configure connectors to enable provisioning of any arbitrary property. For example, the following property definitions would enable you to provision image files, used as avatars, to account objects in a system resource. The first definition would work for a single photo encoded as a base64 string. The second definition would work for multiple photos encoded in the same way:

"attributeByteArray" : {
     "type" : "string",
     "nativeName" : "attributeByteArray",
     "nativeType" : "JAVA_TYPE_BYTE_ARRAY"
 },  
"attributeByteArrayMultivalue": {
     "type": "array",
     "items": {
         "type": "string",
         "nativeType": "JAVA_TYPE_BYTE_ARRAY"
     },
     "nativeName": "attributeByteArrayMultivalue"
 }, 

Note

Do not use the dash character ( - ) in property names, like last-name. Dashes in names make JavaScript syntax more complex. If you cannot avoid the dash, write source['last-name'] instead of source.last-name in your JavaScript scripts.

14.2.11. Configuring the Operation Options

The operationOptions object enables you to deny specific operations on a resource. For example, you can use this configuration object to deny CREATE and DELETE operations on a read-only resource, to prevent IDM from accidentally updating that resource during a synchronization operation.

The following example defines the options for the "SYNC" operation:

"operationOptions" : {
  {
    "SYNC" :
    {
      "denied" : true,
      "onDeny" : "DO_NOTHING",
      "objectFeatures" :
      {
        "__ACCOUNT__" :
        {
          "denied" : true,
          "onDeny" : "THROW_EXCEPTION",
          "operationOptionInfo" :
          {
            "$schema" : "http://json-schema.org/draft-03/schema",
            "id" : "FIX_ME",
            "type" : "object",
            "properties" :
            {
              "_OperationOption-float" :
              {
                 "type" : "number",
                 "nativeType" : "JAVA_TYPE_PRIMITIVE_FLOAT"
              }
            }
          }
        },
        "__GROUP__" :
        {
          "denied" : false,
          "onDeny" : "DO_NOTHING"
        }
      }
    }
  }
...

The OpenICF Framework supports the following operations:

  • AUTHENTICATE

  • CREATE

  • DELETE

  • GET

  • RESOLVEUSERNAME

  • SCHEMA

  • SCRIPT_ON_CONNECTOR

  • SCRIPT_ON_RESOURCE

  • SEARCH

  • SYNC

  • TEST

  • UPDATE

  • VALIDATE

For detailed information on these operations, see the OpenICF API documentation.

The operationOptions object has the following configurable properties:

denied

boolean, optional

This property prevents operation execution if the value is true.

onDeny

string, optional

If denied is true, the service uses this value to determine its behavior. The default value is DO_NOTHING.

  • DO_NOTHING: When the operation is attempted, the service does nothing.

  • THROW_EXCEPTION: When the operation is attempted, the service throws a ForbiddenException.
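For example, to protect the read-only resource described at the start of this section, a minimal sketch (mirroring the structure of the SYNC example above) might deny CREATE and DELETE outright:

"operationOptions" : {
  "CREATE" :
  {
    "denied" : true,
    "onDeny" : "THROW_EXCEPTION"
  },
  "DELETE" :
  {
    "denied" : true,
    "onDeny" : "THROW_EXCEPTION"
  }
},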

14.3. Accessing Remote Connectors

When you configure a remote connector, you use the connector info provider service to connect through a remote connector server. The connector info provider service configuration is stored in the file project-dir/conf/provisioner.openicf.connectorinfoprovider.json. A sample configuration file is provided in the openidm/samples/example-configurations/provisioners/ directory. To use this sample configuration, edit the file as required, and copy it to your project's conf/ directory.

The sample connector info provider configuration is as follows:

{
   "remoteConnectorServers" :
      [
         {
            "name" : "dotnet",
            "host" : "127.0.0.1",
            "port" : 8759,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
         }
      ]
}

You can configure the following remote connector server properties:

name

string, required

The name of the remote connector server object. This name is used to identify the remote connector server in the list of connector reference objects.

host

string, required

The remote host to connect to.

port

integer, optional

The remote port to connect to. The default remote port is 8759.

heartbeatInterval

integer, optional

The interval, in seconds, at which heartbeat packets are transmitted. If the connector server is unreachable based on this heartbeat interval, all services that use the connector server are made unavailable until the connector server can be reached again. The default interval is 60 seconds.

useSSL

boolean, optional

Specifies whether to connect to the connector server over SSL. The default value is false.

timeout

integer, optional

Specifies the timeout (in milliseconds) to use for the connection. The default value is 0, which means that there is no timeout.

protocol

string

Version 1.5.4.0 of the OpenICF framework supports an efficient communication protocol with remote connector servers. This protocol is enabled by default, and its value is websocket in the default configuration.

For compatibility reasons, you might need to enable the legacy protocol for specific remote connector servers. For example, if you deploy the connector server on a Java 5 or 6 JVM, you must use the legacy protocol. In this case, remove the protocol property from the connector server configuration.

For the .NET connector server, the service with the default protocol listens on port 8759 and the service with the legacy protocol listens on port 8760 by default. For more information on running the connector server in legacy mode, see Procedure 14.2, "Running the .NET Connector Server in Legacy Mode".

For the Java connector server, the service listens on port 8759 by default, for both protocols. To run the service with the legacy protocol, you must change the main class that is executed in the ConnectorServer.sh or ConnectorServer.bat file. The class that starts the websocket protocol is MAIN_CLASS=org.forgerock.openicf.framework.server.Main. The class that starts the legacy protocol is MAIN_CLASS=org.identityconnectors.framework.server.Main. To change the port on which the Java connector server listens, change the connectorserver.port property in the openicf/conf/ConnectorServer.properties file.
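For example, the following sketch summarizes the relevant edits, with paths relative to the openicf installation directory:

# In bin/ConnectorServer.sh, run the legacy protocol by changing the main class:
MAIN_CLASS=org.identityconnectors.framework.server.Main

# In conf/ConnectorServer.properties, adjust the listening port if required:
connectorserver.port=8759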

key

string, required

The secret key, or password, to use to authenticate to the remote connector server.

To run a connector remotely, you must copy the connector .jar file to the openicf/bundles directory on the remote machine.

14.3.1. Installing and Configuring Remote Connector Servers

Connectors that use the .NET framework must run remotely. Java connectors can run locally or remotely. Connectors that run remotely require a connector server to enable IDM to access the connector.

For a list of supported versions, and compatibility between versions, see Table 2.1, "Supported Connectors, Connector Servers, and Plugins" in the Release Notes.

This section describes the steps to install a .NET connector server and a remote Java Connector Server.

14.3.1.1. Installing and Configuring a .NET Connector Server

A .NET connector server is useful when an application is written in Java, but a connector bundle is written using C#. Because a Java application (for example, a J2EE application) cannot load C# classes, you must deploy the C# bundles under a .NET connector server. The Java application can communicate with the C# connector server over the network, and the C# connector server acts as a proxy to provide access to the C# bundles that are deployed within the C# connector server, to any authenticated application.

By default, the connector server outputs log messages to a file named connectorserver.log, in the C:\path\to\openicf directory. To change the location of the log file, set the initializeData parameter in the configuration file before you install the connector server. For example, the following excerpt sets the log file location to C:\openicf\logs\connectorserver.log:

<add name="file"
   type="System.Diagnostics.TextWriterTraceListener"
   initializeData="C:\openicf\logs\connectorserver.log"
   traceOutputOptions="DateTime">
     <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information"/>
     </add>
Procedure 14.1. Installing the .NET Connector Server
  1. Download the OpenICF .NET Connector Server from the ForgeRock BackStage site.

    The .NET connector server is distributed in two formats. The .msi file is a wizard that installs the Connector Server as a Windows Service. The .zip file is simply a bundle of all the files required to run the Connector Server.

    • If you do not want to run the Connector Server as a Windows service, download and extract the .zip file, then move on to Procedure 14.3, "Configuring the .NET Connector Server".

    • If you have deployed the .zip file and then decide to run the Connector Server as a service, install the service manually with the following command:

      .\ConnectorServerService.exe /install /serviceName service-name

      Then proceed to Procedure 14.3, "Configuring the .NET Connector Server".

    • To install the Connector Server as a Windows service automatically, follow the remaining steps in this section.

  2. Run the openicf-version-dotnet.msi installation file and complete the wizard.

    You must run the wizard as a user who has permissions to start and stop a Windows service; otherwise, the service will not start.

    When you choose the Setup Type, select Typical unless you require backward compatibility with the 1.4.0.0 connector server. If you need backward compatibility, select Custom, and install the Legacy Connector Service.

    When the wizard has completed, the Connector Server is installed as a Windows Service.

  3. Open the Microsoft Services Console and make sure that the Connector Server is listed there.

    The name of the service is OpenICF Connector Server, by default.

    .Net Connector Server as Windows Service
Procedure 14.2. Running the .NET Connector Server in Legacy Mode
  1. If you are installing the .NET Connector Server from the .msi distribution, select Custom for the Setup Type, and install the Legacy Connector Service.

  2. If you are installing the .NET Connector Server from the .zip distribution, launch the Connector Server by running the ConnectorServer.exe command, and not the ConnectorServerService.exe command.

  3. Adjust the port parameter in your IDM remote connector server configuration file. In legacy mode, the connector server listens on port 8760 by default.

  4. Remove the "protocol" : "websocket" line from your IDM remote connector server configuration file, to specify that the connector server should use the legacy protocol.

  5. In the commands shown in Procedure 14.3, "Configuring the .NET Connector Server", replace ConnectorServerService.exe with ConnectorServer.exe.

Procedure 14.3. Configuring the .NET Connector Server

After you have installed the .NET Connector Server, as described in the previous section, follow these steps to configure the Connector Server:

  1. Make sure that the Connector Server is not currently running. If it is running, use the Microsoft Services Console to stop it.

  2. At the command prompt, change to the directory where the Connector Server was installed:

    c:\> cd "c:\Program Files (x86)\ForgeRock\OpenICF"
  3. Run the ConnectorServerService /setkey command to set a secret key for the Connector Server. The key can be any string value. This example sets the secret key to Passw0rd:

    ConnectorServerService /setkey Passw0rd
    Key has been successfully updated.

    This key is used by clients connecting to the Connector Server. The key that you set here must also be set in the IDM connector info provider configuration file (conf/provisioner.openicf.connectorinfoprovider.json). For more information, see Procedure 14.5, "Configuring IDM to Connect to the .NET Connector Server".

  4. Edit the Connector Server configuration.

    The Connector Server configuration is saved in a file named ConnectorServerService.exe.Config (in the directory in which the Connector Server is installed).

    Check and edit this file, as necessary, to reflect your installation. Specifically, verify that the baseAddress reflects the host and port on which the connector server is installed:

    <system.serviceModel>
      <services>
        <service name="Org.ForgeRock.OpenICF.Framework.Service.WcfServiceLibrary.WcfWebsocket">
          <host>
            <baseAddresses>
              <add baseAddress="http://0.0.0.0:8759/openicf" />
            </baseAddresses>
          </host>
        </service>
      </services>
    </system.serviceModel>

    The baseAddress specifies the host and port on which the Connector Server listens, and is set to http://0.0.0.0:8759/openicf by default. If you set a host value other than the default 0.0.0.0, connections from all IP addresses other than the one specified are denied.

    If Windows firewall is enabled, you must create an inbound port rule to open the TCP port for the connector server (8759 by default). If you do not open the TCP port, IDM will be unable to contact the Connector Server. For more information, see the Microsoft documentation on creating an inbound port rule.

  5. Optionally, configure the Connector Server to use SSL:

    1. Open a Powershell terminal as a user with administrator privileges, then change to the OpenICF installation directory:

      PS C:\Users\Administrator> cd 'C:\Program Files (x86)\ForgeRock\OpenICF'
    2. Use an existing CA certificate, or use the New-SelfSignedCertificate cmdlet to create a self-signed certificate:

      PS ...> New-SelfSignedCertificate -DnsName "dotnet", "dotnet.example.com" -CertStoreLocation "cert:\LocalMachine\My"
      PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My
      
      Thumbprint                                Subject
      ----------                                -------
      770F531F14AF435E963E14AD82B70A47A4BFFBF2  CN=dotnet
    3. Assign the certificate to the Connector Server:

      PS ...> .\ConnectorServerService.exe /setCertificate
      
      Select certificate you want to use:
      Index  Issued To                Thumbprint
      -----  ---------                -------------------------
      
         0)  dotnet                    770F531F14AF435E963E14AD82B70A47A4BFFBF2
      
      0
      Certificate Thumbprint has been successfully updated to 770F531F14AF435E963E14AD82B70A47A4BFFBF2.
    4. Bind the certificate to the Connector Server port (8759 by default). To bind the certificate:

      1. Use uuidgen.exe to generate a new UUID:

        PS ...> & 'C:\Program Files (x86)\Windows Kits\10\bin\10.0.15063.0\x64\uuidgen.exe'
        058d7a64-8628-49ec-a417-a70c8974046d
      2. Enter the netsh http console and add the certificate thumbprint generated in the previous step and the UUID that you have just generated:

        PS ...> netsh
        netsh>http
        netsh http>add sslcert ipport=0.0.0.0:8759 certhash=770F5...FFBF2 appid={058d7...4046d}
        SSL Certificate successfully added
    5. Change the Connector Server configuration (in the ConnectorServerService.exe.Config file) to use HTTPS and not HTTP:

      <host>
          <baseAddresses>
              ...
              <add baseAddress="https://0.0.0.0:8759/openicf"/>
          </baseAddresses>
      </host>
    6. Export the certificate:

      1. Launch the certificate management MMC by selecting Run > certlm.msc.

      2. Right-click on the dotnet certificate then select All Tasks > Export.

        The Certificate Export Wizard is launched.

      3. Select Next > No, do not export the private key > DER encoded binary X.509 (.CER) > Next.

      4. Save the file in an accessible location (for example, C:\Users\Administrator\Desktop\dotnet.cer) then select Finish.

    7. Import the certificate into the IDM truststore:

      1. Transfer the certificate from the Windows machine to the machine that's running IDM.

      2. Change to the openidm/security directory and use the Java keytool command to import the certificate:

        $ cd /path/to/openidm/security
        $ keytool -import -alias dotnet -file ~/Downloads/dotnet.cer -keystore ./truststore
        Enter keystore password: changeit
        Owner: CN=dotnet
        Issuer: CN=dotnet
        Serial number: 1e3af7baed05ce834da5cd1bf1241835
        Valid from: Tue Aug 08 15:58:32 SAST 2017 until: Wed Aug 08 16:18:32 SAST 2018
        Certificate fingerprints:
        	 MD5:  D1:B7:B7:46:C2:59:1A:3C:94:AA:65:99:B4:43:3B:E8
        	 SHA1: 77:0F:53:1F:14:AF:43:5E:96:3E:14:AD:82:B7:0A:47:A4:BF:FB:F2
        	 SHA256: C0:52:E2:E5:E5:72:9D:69:F8:11:4C:B8:4C:E4:E3:1C:19:95:86:19:70:E5:31:FA:D8:81:4B:F2:AC:30:9C:73
        	 Signature algorithm name: SHA256withRSA
        	 Version: 3
        
        ...
        
        Trust this certificate? [no]: yes
        Certificate was added to keystore
    8. Update your project's connector server configuration file (conf/provisioner.openicf.connectorinfoprovider.json) to use SSL:

      $ cd /path/to/my-project/conf
      $ more provisioner.openicf.connectorinfoprovider.json
      "remoteConnectorServers" : [
          {
              "name" : "dotnet",
              "host" : "my-windows-host",
              "port" : 8759,
              "protocol" : "websocket",
              "useSSL" : true,
              "timeout" : 0,
              "key" : {...}
          }
      ]
  6. Check the trace settings, in the same Connector Server configuration file, under the system.diagnostics item:

    <system.diagnostics>
      <trace autoflush="true" indentsize="4">
        <listeners>
          <remove name="Default" />
          <add name="console" />
          <add name="file" />
        </listeners>
      </trace>
      <sources>
        <source name="ConnectorServer" switchName="switch1">
          <listeners>
            <remove name="Default" />
            <add name="file" />
          </listeners>
        </source>
      </sources>
      <switches>
        <add name="switch1" value="Information" />
      </switches>
      <sharedListeners>
        <add name="console" type="System.Diagnostics.ConsoleTraceListener" />
        <add name="file" type="System.Diagnostics.TextWriterTraceListener"
                initializeData="logs\ConnectorServerService.log"
                traceOutputOptions="DateTime">
            <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information" />
        </add>
      </sharedListeners>
    </system.diagnostics>

    The Connector Server uses the standard .NET trace mechanism. For more information about tracing options, see Microsoft's .NET documentation for System.Diagnostics.

    The default trace settings are a good starting point. For less tracing, set the EventTypeFilter's initializeData to Warning or Error. For very verbose logging set the value to Verbose or All. The logging level has a direct effect on the performance of the Connector Servers, so take care when setting this level.

Procedure 14.4. Starting the .NET Connector Server

Start the .NET Connector Server in one of the following ways:

  1. Start the server as a Windows service, by using the Microsoft Services Console.

    Locate the connector server service (OpenICF Connector Server), and click Start the service or Restart the service.

    The service is executed with the credentials of the "run as" user (System, by default).

  2. Start the server as a Windows service, by using the command line.

    In the Windows Command Prompt, run the following command:

    net start ConnectorServerService

    To stop the service in this manner, run the following command:

    net stop ConnectorServerService
  3. Start the server without using Windows services.

    In the Windows Command Prompt, change to the directory in which the Connector Server was installed. The default location is c:\Program Files (x86)\ForgeRock\OpenICF.

    Start the server with the following command:

    ConnectorServerService.exe /run

    Note that this command starts the Connector Server with the credentials of the current user. It does not start the server as a Windows service.

Procedure 14.5. Configuring IDM to Connect to the .NET Connector Server

The connector info provider service configures one or more remote connector servers to which IDM can connect. The connector info provider configuration is stored in a file named project-dir/conf/provisioner.openicf.connectorinfoprovider.json. A sample connector info provider configuration file is located in openidm/samples/example-configurations/provisioners/.

To configure IDM to use the remote .NET connector server, follow these steps:

  1. Start IDM, if it is not already running.

  2. Copy the sample connector info provider configuration file to your project's conf/ directory:

    $ cd /path/to/openidm
    $ cp samples/example-configurations/provisioners/provisioner.openicf.connectorinfoprovider.json project-dir/conf/
  3. Edit the connector info provider configuration, specifying the details of the remote connector server:

    "remoteConnectorServers" : [
        {
            "name" : "dotnet",
            "host" : "192.0.2.0",
            "port" : 8759,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
        }

    Configurable properties are as follows:

    name

    Specifies the name of the connection to the .NET connector server. The name can be any string. This name is referenced in the connectorHostRef property of the connector configuration file (provisioner.openicf-ad.json).

    host

    Specifies the IP address of the host on which the Connector Server is installed.

    port

    Specifies the port on which the Connector Server listens. This property matches the connectorserver.port property in the ConnectorServerService.exe.config file.

    For more information, see Procedure 14.3, "Configuring the .NET Connector Server".

    useSSL

    Specifies whether the connection to the Connector Server should be secured. This property matches the "connectorserver.usessl" property in the ConnectorServerService.exe.config file.

    timeout

    Specifies the length of time, in milliseconds, that IDM should attempt to connect to the Connector Server before abandoning the attempt. To disable the timeout, set the value of this property to 0.

    protocol

    Version 1.5.4.0 of the OpenICF framework supports a new communication protocol with remote connector servers. This protocol is enabled by default, and its value is websocket in the default configuration.

    key

    Specifies the connector server key. This property matches the key property in the ConnectorServerService.exe.config file. For more information, see Procedure 14.3, "Configuring the .NET Connector Server".

    The string value that you enter here is encrypted as soon as the file is saved.

14.3.1.2. Installing and Configuring a Remote Java Connector Server

In certain situations, it might be necessary to set up a remote Java Connector Server. This section provides instructions for setting up a remote Java Connector Server on Unix/Linux and Windows.

Procedure 14.6. Installing a Remote Java Connector Server for Unix/Linux
  1. Download the OpenICF Java Connector Server from ForgeRock's BackStage site.

  2. Change to the appropriate directory and unpack the zip file. The following command unzips the file in the current directory:

    $ unzip openicf-zip-1.5.4.0.zip
  3. Change to the openicf directory:

    $ cd /path/to/openicf
  4. The Java Connector Server uses a key property to authenticate the connection. The default key value is changeit. To change the value of the secret key, run a command similar to the following. This example sets the key value to Passw0rd:

    $ cd /path/to/openicf
    $  bin/ConnectorServer.sh /setkey Passw0rd
    Key has been successfully updated.
  5. Review the ConnectorServer.properties file in the /path/to/openicf/conf directory, and make any required changes. By default, the configuration file has the following properties:

    connectorserver.port=8759
    connectorserver.libDir=lib
    connectorserver.usessl=false
    connectorserver.bundleDir=bundles
    connectorserver.loggerClass=org.forgerock.openicf.common.logging.slf4j.SLF4JLog
    connectorserver.key=xOS4IeeE6eb/AhMbhxZEC37PgtE\=

    The connectorserver.usessl parameter indicates whether client connections to the connector server should be over SSL. This property is set to false by default.

    To secure connections to the connector server, set this property to true, and set the following Java system properties when you start the connector server:

    java -Djavax.net.ssl.keyStore=mySrvKeystore -Djavax.net.ssl.keyStorePassword=Passw0rd
  6. Start the Java Connector Server:

    $ bin/ConnectorServer.sh /run

    The connector server is now running, and listening on port 8759, by default.

    Log files are available in the /path/to/openicf/logs directory.

    $ ls logs/
    Connector.log  ConnectorServer.log  ConnectorServerTrace.log
  7. If required, stop the Java Connector Server by pressing CTRL-C.

Procedure 14.7. Installing a Remote Java Connector Server for Windows
  1. Download the OpenICF Java Connector Server from ForgeRock's BackStage site.

  2. Change to the appropriate directory and unpack the zip file.

  3. In a Command Prompt window, change to the openicf directory:

    C:\>cd C:\path\to\openicf
  4. If required, secure the communication between IDM and the Java Connector Server. The Java Connector Server uses a key property to authenticate the connection. The default key value is changeit.

    To change the value of the secret key, use the bin\ConnectorServer.bat /setkey command. The following example sets the key to Passw0rd:

    c:\path\to\openicf>bin\ConnectorServer.bat /setkey Passw0rd
    lib\framework\connector-framework.jar;lib\framework\connector-framework-internal
    .jar;lib\framework\groovy-all.jar;lib\framework\icfl-over-slf4j.jar;lib\framework
    \slf4j-api.jar;lib\framework\logback-core.jar;lib\framework\logback-classic.jar
  5. Review the ConnectorServer.properties file in the path\to\openicf\conf directory, and make any required changes. By default, the configuration file has the following properties:

    connectorserver.port=8759
    connectorserver.libDir=lib
    connectorserver.usessl=false
    connectorserver.bundleDir=bundles
    connectorserver.loggerClass=org.forgerock.openicf.common.logging.slf4j.SLF4JLog
    connectorserver.key=xOS4IeeE6eb/AhMbhxZEC37PgtE\=
  6. You can either run the Java Connector Server as a Windows service, or start and stop it from the command-line.

    • To install the Java Connector Server as a Windows service, run the following command:

      c:\path\to\openicf>bin\ConnectorServer.bat /install

      If you install the connector server as a Windows service, you can use the Microsoft Services Console to start, stop, and restart the service. The Java Connector Service is named OpenICFConnectorServerJava.

      To uninstall the Java Connector Server as a Windows service, run the following command:

      c:\path\to\openicf>bin\ConnectorServer.bat /uninstall
  7. To start the Java Connector Server from the command line, enter the following command:

    c:\path\to\openicf>bin\ConnectorServer.bat /run

    The connector server is now running, and listening on port 8759, by default.

    Log files are available in the \path\to\openicf\logs directory.

  8. If required, stop the Java Connector Server by pressing CTRL-C.

14.3.2. Example: Using the CSV Connector to Reconcile Users in a Remote CSV Data Store

This example demonstrates reconciliation of users stored in a CSV file on a remote machine. The remote Java Connector Server enables IDM to synchronize its repository with the remote CSV repository.

The example assumes that a remote Java Connector Server is installed on a host named remote-host. For instructions on setting up the remote Java Connector Server, see Procedure 14.6, "Installing a Remote Java Connector Server for Unix/Linux" or Procedure 14.7, "Installing a Remote Java Connector Server for Windows".

Procedure 14.8. Configuring the Remote Connector Server for the CSV Connector Example

This example assumes that the Java Connector Server is running on the machine named remote-host. The example uses the small CSV data set provided with the Getting Started sample (hr.csv). The CSV connector runs as a remote connector, that is, on the remote host on which the Java Connector Server is installed. The first steps in this procedure copy the sample data file and the CSV connector itself to that remote machine.

  1. Shut down the remote connector server, if it is running. In the connector server terminal window, type q:

    q
    INFO: Stopped listener bound to [0.0.0.0:8759]
    May 30, 2016 12:33:24 PM INFO  o.f.o.f.server.ConnectorServer: Server is
     shutting down org.forgerock.openicf.framework.server.ConnectorServer@171ba877
    
         
  2. Copy the CSV data file from the Getting Started sample (/path/to/openidm/samples/getting-started/data/hr.csv) to an accessible location on the machine that hosts the remote Java Connector Server. For example:

    $ cd /path/to/openidm/samples/getting-started/data/
    $ scp hr.csv testuser@remote-host:/home/testuser/csv-sample/data/
    Password:********
    hr.csv     100%  651     0.6KB/s   00:00 
  3. Copy the CSV connector .jar from the IDM installation to the openicf/bundles directory on the remote host:

    $ cd path/to/openidm
    $ scp connectors/csvfile-connector-1.5.2.0.jar testuser@remote-host:/path/to/openicf/bundles/
    Password:********
    csvfile-connector-1.5.2.0.jar    100%   40KB  39.8KB/s   00:00
  4. The CSV connector depends on the Super CSV library, which is bundled with IDM. Copy the Super CSV library (super-csv-2.4.0.jar) from the openidm/bundle directory to the openicf/lib directory on the remote server:

    $ cd path/to/openidm
    $ scp bundle/super-csv-2.4.0.jar testuser@remote-host:/path/to/openicf/lib/
    Password:********
    super-csv-2.4.0.jar              100%   96KB  95.8KB/s   00:00
  5. On the remote host, restart the Connector Server so that it picks up the new CSV connector and its dependent libraries:

    $ cd /path/to/openicf
    $ bin/ConnectorServer.sh /run
    ...
    May 30, 2016 3:58:29 PM INFO  o.i.f.i.a.l.LocalConnectorInfoManagerImpl: Add ConnectorInfo ConnectorKey(
     bundleName=org.forgerock.openicf.connectors.csvfile-connector bundleVersion="[1.5.1.4,1.6.0.0)"
     connectorName=org.forgerock.openicf.csvfile.CSVFileConnector ) to Local Connector Info Manager from
     file:/path/to/openicf/bundles/csvfile-connector-1.5.2.0.jar
    May 30, 2016 3:58:30 PM org.glassfish.grizzly.http.server.NetworkListener start
    INFO: Started listener bound to [0.0.0.0:8759]
    May 30, 2016 3:58:30 PM org.glassfish.grizzly.http.server.HttpServer start
    INFO: [OpenICF Connector Server] Started.
    May 30, 2016 3:58:30 PM INFO  o.f.openicf.framework.server.Main: ConnectorServer
     listening on: ServerListener[0.0.0.0:8759 - plain] 

    The connector server logs are noisy by default. You should, however, notice the addition of the CSV connector.

Procedure 14.9. Configuring IDM for the Remote CSV Connector Example

Before you start, copy the following files to your /path/to/openidm/conf directory:

  • sync.json

    A customized mapping file required for this example.

  • /openidm/samples/example-configurations/provisioners/provisioner.openicf.connectorinfoprovider.json

    The sample connector server configuration file.

  • /openidm/samples/example-configurations/provisioners/provisioner.openicf-csv.json

    The sample connector configuration file.

  1. Edit the remote connector server configuration file (provisioner.openicf.connectorinfoprovider.json) to match your network setup.

    The following example indicates that the Java connector server is running on the host remote-host, listening on the default port, and configured with a secret key of Passw0rd:

    {
        "remoteConnectorServers" : [
            {
                "name" : "csv",
                "host" : "remote-host",
                "port" : 8759,
                "useSSL" : false,
                "timeout" : 0,
                "protocol" : "websocket",
                "key" : "Passw0rd"
            }
        ]
    }

    The name that you set in this file will be referenced in the connectorHostRef property of the connector configuration, in the next step.

    The key that you specify here must match the password that you set when you installed the Java connector server.

  2. Edit the CSV connector configuration file (provisioner.openicf-csv.json) as follows:

    {
        "name" : "csvfile",
        "connectorRef" : {
            "connectorHostRef" : "csv",
            "bundleName" : "org.forgerock.openicf.connectors.csvfile-connector",
            "bundleVersion" : "[1.5.1.4,1.6.0.0)",
            "connectorName" : "org.forgerock.openicf.connectors.csv.CSVFileConnector"
        },
        ...
        "configurationProperties" : {
            "csvFile" : "/home/testuser/csv-sample/data/hr.csv"
        }
    }
    • The connectorHostRef property indicates which remote connector server to use, and refers to the name property you specified in the provisioner.openicf.connectorinfoprovider.json file.

    • The bundleVersion : "[1.5.1.4,1.6.0.0)", must either be exactly the same as the version of the CSV connector that you are using or, if you specify a range, the CSV connector version must be included in this range.

    • The csvFile property must specify the absolute path to the CSV data file that you copied to the remote host on which the Java Connector Server is running.

  3. Start IDM:

    $ cd /path/to/openidm
    $ ./startup.sh
  4. Verify that IDM can reach the remote connector server and that the CSV connector has been configured correctly:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request POST \
     "http://localhost:8080/openidm/system?_action=test"
    [
      {
        "name": "csv",
        "enabled": true,
        "config": "config/provisioner.openicf/csv",
        "objectTypes": [
          "__ALL__",
          "account"
        ],
        "connectorRef": {
          "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
          "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
          "bundleVersion": "[1.5.1.4,1.6.0.0)"
        },
        "displayName": "CSV File Connector",
        "ok": true
      }
    ]

    The connector must return "ok": true.

    Alternatively, use the Admin UI to verify that IDM can reach the remote connector server and that the CSV connector is active. Log in to the Admin UI (https://localhost:8443/openidm/admin) and select Configure > Connectors. The CSV connector should be listed on the Connectors page, and its status should be Active.

    Figure 14.3. Connectors Tab Showing an Active CSV Connector

  5. To test that the connector has been configured correctly, run a reconciliation operation as follows:

    1. Select Configure > Mappings and click the systemCsvAccounts_managedUser mapping.

    2. Click Reconcile.

    If the reconciliation is successful, the three users from the remote CSV file should have been added to the managed user repository.

    To check this, select Manage > User.
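    Alternatively, you can start the same reconciliation run over the REST interface, using the mapping name from this example:

    $ curl \
     --header "X-OpenIDM-Username: openidm-admin" \
     --header "X-OpenIDM-Password: openidm-admin" \
     --request POST \
     "http://localhost:8080/openidm/recon?_action=recon&mapping=systemCsvAccounts_managedUser"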

14.3.3. Configuring Failover Between Remote Connector Servers

To prevent the connector server from being a single point of failure, you can specify a list of remote connector servers that the connector can target. This failover configuration is included in your project's conf/provisioner.openicf.connectorinfoprovider.json file. The connector attempts to contact the first connector server in the list. If that connector server is down, it proceeds to the next connector server.

The following sample configuration defines two remote connector servers, on hosts remote-host-1 and remote-host-2. These servers are listed, by their name property, in a group specified in the remoteConnectorServersGroups property. You can configure multiple servers per group, and multiple groups, in a single remote connector server configuration file.

{
    "connectorsLocation" : "connectors",
    "remoteConnectorServers" : [
        {
            "name" : "dotnet1",
            "host" : "remote-host-1",
            "port" : 8759,
            "protocol" : "websocket",
            "useSSL" : false,
            "timeout" : 0,
            "key" : "password"
        },
        {
            "name" : "dotnet2",
            "host" : "remote-host-2",
            "port" : 8759,
            "protocol" : "websocket",
            "useSSL" : false,
            "timeout" : 0,
            "key" : "password"
         }
    ],
    "remoteConnectorServersGroups" : [
        {
            "name" : "dotnet-ha",
            "algorithm" : "failover",
            "serversList" : [
                {"name": "dotnet1"},
                {"name": "dotnet2"}
            ]
        }
    ]
}  

The algorithm can be either failover or roundrobin. If the algorithm is failover, requests are always sent to the first connector server in the list, unless it is unavailable, in which case requests are sent to the next connector server in the list. If the algorithm is roundrobin, requests are distributed equally between the connector servers in the list, in the order in which they are received.
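For example, to distribute requests across the same two connector servers instead of failing over, you could define a group such as the following (the group name dotnet-lb is illustrative):

"remoteConnectorServersGroups" : [
    {
        "name" : "dotnet-lb",
        "algorithm" : "roundrobin",
        "serversList" : [
            {"name": "dotnet1"},
            {"name": "dotnet2"}
        ]
    }
]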

Your connector configuration file (provisioner.openicf-connector-name.json) references the remote connector server group, rather than a single remote connector server. For example, the following excerpt of a PowerShell connector configuration file references the dotnet-ha connector server group from the previous configuration:

{
  "connectorRef" : {
    "bundleName" : "MsPowerShell.Connector",
    "connectorName" : "Org.ForgeRock.OpenICF.Connectors.MsPowerShell.MsPowerShellConnector",
    "connectorHostRef" : "dotnet-ha",
    "bundleVersion" : "[1.4.2.0,1.5.0.0)"
  },
  ...

Note

Failover is not supported between connector servers that are running in legacy mode. Therefore, the configuration of each connector server that is part of the failover group must have the protocol property set to websocket.

14.4. Checking the Status of External Systems Over REST

After a connection has been configured, external systems are accessible over the REST interface at the URL http://localhost:8080/openidm/system/connector-name. Aside from accessing the data objects within the external systems, you can test the availability of the systems themselves.

To list the external systems that are connected to an IDM instance, use the test action on the URL http://localhost:8080/openidm/system/. The following example shows the status of a connected external LDAP system:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system?_action=test"
[
  {
    "ok": true,
    "displayName": "LDAP Connector",
    "connectorRef": {
      "bundleVersion": "[1.4.0.0,2.0.0.0)",
      "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
      "connectorName": "org.identityconnectors.ldap.LdapConnector"
    },
    "objectTypes": [
      "__ALL__",
      "group",
      "account"
    ],
    "config": "config/provisioner.openicf/ldap",
    "enabled": true,
    "name": "ldap"
  }
]

The status of the system is provided by the ok parameter. If the connection is available, the value of this parameter is true.

To obtain the status for a single system, include the name of the connector in the URL, for example:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap?_action=test"
{
  "ok": true,
  "displayName": "LDAP Connector",
  "connectorRef": {
    "bundleVersion": "[1.4.0.0,2.0.0.0)",
    "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
    "connectorName": "org.identityconnectors.ldap.LdapConnector"
  },
  "objectTypes": [
    "__ALL__",
    "group",
    "account"
  ],
  "config": "config/provisioner.openicf/ldap",
  "enabled": true,
  "name": "ldap"
}

If there is a problem with the connection, the ok parameter returns false, with an indication of the error. In the following example, the LDAP server named ldap, running on localhost:1389, is down:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap?_action=test"
{
  "ok": false,
  "error": "localhost:1389",
  "displayName": "LDAP Connector",
  "connectorRef": {
    "bundleVersion": "[1.4.0.0,2.0.0.0)",
    "bundleName": "org.forgerock.openicf.connectors.ldap-connector",
    "connectorName": "org.identityconnectors.ldap.LdapConnector"
  },
  "objectTypes": [
    "__ALL__",
    "group",
    "account"
  ],
  "config": "config/provisioner.openicf/ldap",
  "enabled": true,
  "name": "ldap"
}

To test the validity of a connector configuration, use the testConfig action and include the configuration in the command. For example:

$ curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Content-Type: application/json" \
--request POST \
--data '{
  "name": "csvfile",
  "configurationProperties": {
    "headerPassword": "password",
    "csvFile": "&{launcher.project.location}/data/csvConnectorData.csv",
    "newlineString": "\n",
    "headerUid": "uid",
    "quoteCharacter": "\"",
    "fieldDelimiter": ",",
    "syncFileRetentionCount": 3
  },
  "connectorRef": {
    "systemType": "provisioner.openicf",
    "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
    "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
    "displayName": "CSV File Connector",
    "bundleVersion": "1.5.1.5"
  },
  "poolConfigOption": {
    "maxObjects": 10,
    "maxIdle": 10,
    "maxWait": 150000,
    "minEvictableIdleTimeMillis": 120000,
    "minIdle": 1
  },
  "resultsHandlerConfig": {
    "enableNormalizingResultsHandler": true,
    "enableFilteredResultsHandler": true,
    "enableCaseInsensitiveFilter": false,
    "enableAttributesToGetSearchResultsHandler": true
  },
  "operationTimeout": {
    "CREATE": -1,
    "UPDATE": -1,
    "DELETE": -1,
    "TEST": -1,
    "SCRIPT_ON_CONNECTOR": -1,
    "SCRIPT_ON_RESOURCE": -1,
    "GET": -1,
    "RESOLVEUSERNAME": -1,
    "AUTHENTICATE": -1,
    "SEARCH": -1,
    "VALIDATE": -1,
    "SYNC": -1,
    "SCHEMA": -1
  }
 }' \
 "http://localhost:8080/openidm/system?_action=testConfig"

If the configuration is valid, the command returns "ok": true, for example:

{
   "ok": true,
   "name": "csvfile"
}

If the configuration is not valid, the command returns an error, indicating the problem with the configuration. For example, the following result is returned when the LDAP connector configuration is missing a required property (in this case, the baseContexts to synchronize):

{
  "error": "org.identityconnectors.framework.common.exceptions.ConfigurationException:
           The list of base contexts cannot be empty",
  "name": "OpenDJ",
  "ok": false
} 

The testConfig action requires a running IDM instance, as it uses the REST API, but does not require an active connector instance for the connector whose configuration you want to test.

14.5. Removing a Connector

Be careful when removing a connector. If you remove a connector that is used in a mapping, or that is referenced by a scheduled task, you might see unintended consequences.

Before you remove a connector, work through the following checklist. Depending on your configuration, this list might not be comprehensive:

  • Consider the remote resource. Make sure you no longer need data from that resource, and that the resource no longer requires data from IDM.

  • Open the sync.json file for your project. Delete the code block associated with the mapping.

  • Review the schedule-recon.json file. If it contains the schedule for a single reconciliation operation, delete the file, or update it to schedule a different mapping.

When these steps are complete, you can delete the connector configuration file, typically named provisioner-*.json.

You can also delete the connector through the Admin UI. Log in as openidm-admin and select Configure > Connectors. Find the target connector, select the vertical ellipsis icon, then select Delete from the menu that appears. The Admin UI automatically makes the corresponding changes to the configuration files described in this section.

Chapter 15. Synchronizing Data Between Resources

One of the core IDM services is synchronizing identity data between resources. In this chapter, you will learn about the different types of synchronization, and how to configure the flexible synchronization mechanism.

15.1. Types of Synchronization

Synchronization happens either when IDM receives a change directly, or when IDM discovers a change on an external resource. An external resource can be any system that holds identity data, such as Active Directory, DS, a CSV file, a JDBC database, and so on. IDM connects to external resources by using connectors. For more information, see Chapter 14, "Connecting to External Resources".

For direct changes to managed objects, IDM immediately synchronizes those changes to all mappings configured to use those objects as their source. A direct change can originate not only as a write request through the REST interface, but also as an update resulting from reconciliation with another resource.

  • IDM discovers and synchronizes changes from external resources by using reconciliation and liveSync.

  • IDM synchronizes changes made to its internal repository with external resources by using implicit synchronization.

Reconciliation

In identity management, reconciliation is the bidirectional synchronization of objects between different data stores. Traditionally, reconciliation applies mainly to user objects, but IDM can reconcile any objects, such as groups, roles, and devices.

In any reconciliation operation, there is a source system (the system that contains the changes) and a target system (the system to which the changes will be propagated). The source and target system are defined in a mapping. The IDM repository can be either the source or the target in a mapping. You can configure multiple mappings for one IDM instance, depending on the external resources to which you are connecting.

To perform reconciliation, IDM analyzes both the source system and the target system, to discover the differences that it must reconcile. Reconciliation can therefore be a heavyweight process. When working with large data sets, finding all changes can be more work than processing the changes.

Reconciliation is, however, thorough. It recognizes system error conditions and catches changes that might be missed by liveSync. Reconciliation therefore serves as the basis for compliance and reporting functionality.

LiveSync

LiveSync captures the changes that occur on a remote system, then pushes those changes to IDM. IDM uses the defined mappings to replay the changes where they are required; either in the repository, or on another remote system, or both. Unlike reconciliation, liveSync uses a polling system, and is intended to react quickly to changes as they happen.

To perform this polling, liveSync relies on a change detection mechanism on the external resource to determine which objects have changed. The change detection mechanism is specific to the external resource, and can be a time stamp, a sequence number, a change vector, or any other method of recording changes that have occurred on the system. For example, ForgeRock Directory Services (DS) implements a change log that provides IDM with a list of objects that have changed since the last request. Active Directory implements a change sequence number, and certain databases might have a lastChange attribute.

Note

In the case of DS, the change log (cn=changelog) can be read only by cn=directory manager by default. If you are configuring liveSync with DS, the principal that is defined in the LDAP connector configuration must have access to the change log. For information about allowing a regular user to read the change log, see To Allow a User to Read the Change Log in the Administration Guide for DS.

Implicit synchronization

Implicit synchronization automatically pushes changes that are made in the IDM repository to external systems.

Implicit synchronization pushes only changed objects to the external data sources; to synchronize a complete data set, you must start with a reconciliation operation. When an object changes, the entire object is synchronized. If you want to synchronize only the attributes that have changed, you can modify the onUpdate script in your mapping to compare attribute values before pushing changes.

IDM uses mappings, configured in your project's conf/sync.json file, to determine which data to synchronize, and how that data must be synchronized. You can schedule reconciliation operations, and the frequency with which IDM polls for liveSync changes, as described in Chapter 17, "Scheduling Tasks and Events".
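For example, the following sync.json excerpt is a minimal sketch (the mapping name and the single property mapping are illustrative) that maps LDAP accounts to managed users:

{
    "mappings" : [
        {
            "name" : "systemLdapAccounts_managedUser",
            "source" : "system/ldap/account",
            "target" : "managed/user",
            "properties" : [
                {
                    "source" : "uid",
                    "target" : "userName"
                }
            ]
        }
    ]
}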

IDM logs reconciliation and synchronization operations in the audit logs by default. For information about querying the reconciliation and synchronization logs, see Section 22.10, "Querying Audit Logs Over REST".

15.2. Defining Your Data Mapping Model

In general, identity management software implements one of the following data models:

  • A meta-directory data model, where all data are mirrored in a central repository.

    The meta-directory model offers fast access at the risk of getting outdated data.

  • A virtual data model, where only a minimum set of attributes are stored centrally, and most are loaded on demand from the external resources in which they are stored.

    The virtual model guarantees fresh data, but pays for that guarantee in terms of performance.

IDM leaves the data model choice up to you. You determine the right trade-offs for a particular deployment. IDM does not hard-code any particular schema or set of attributes stored in the repository. Instead, you define how external system objects map onto managed objects, and IDM dynamically updates the repository to store the managed object attributes that you configure.

You can, for example, choose to follow the data model defined in the Simple Cloud Identity Management (SCIM) specification. The following object represents a SCIM user:

{
    "userName": "james1",
    "familyName": "Berg",
    "givenName": "James",
    "email": [
        "james1@example.com"
    ],
    "description": "Created by OpenIDM REST.",
    "password": "asdfkj23",
    "displayName": "James Berg",
    "phoneNumber": "12345",
    "employeeNumber": "12345",
    "userType": "Contractor",
    "title": "Vice President",
    "active": true
}

Note

Avoid using the dash character ( - ) in property names, like last-name, as dashes in names make JavaScript syntax more complex. If you cannot avoid the dash, then write source['last-name'] instead of source.last-name in your JavaScript.

15.3. Configuring Synchronization Between Two Resources

This section describes the high-level steps required to set up synchronization between two resources. A basic synchronization configuration involves the following steps:

  1. Set up the connector configuration.

    Connector configurations are defined in conf/provisioner-*.json files. One provisioner file must be defined for each external resource to which you are connecting.

  2. Map source objects to target objects.

    Mappings are normally defined in the conf/sync.json file. There is only one sync.json file per IDM instance, but multiple mappings can be defined in that file.

    If you are configuring social identity (see Chapter 11, "Configuring Social Identity Providers"), you can also define mappings between the social identity provider and IDM in the conf/selfservice.propertymap.json file.

  3. Configure any scripts that are required to check source and target objects, and to manipulate attributes.

  4. In addition to these configuration elements, IDM stores a links table in its repository. The links table maintains a record of relationships established between source and target objects.

15.3.1. Setting Up the Connector Configuration

Connector configuration files map external resource objects to IDM objects, and are described in detail in Chapter 14, "Connecting to External Resources". Connector configuration files are stored in the conf/ directory of your project, and are named provisioner.resource-name.json, where resource-name reflects the connector technology and the external resource, for example, openicf-csv.

You can create and modify connector configurations through the Admin UI or directly in the configuration files, as described in the following sections.

15.3.1.1. Setting up and Modifying Connector Configurations in the Admin UI

The easiest way to set up and modify connector configurations is to use the Admin UI.

To add or modify a connector configuration in the Admin UI:

  1. Log in to the UI (http://localhost:8080/admin) as an administrative user. The default administrative username and password are both openidm-admin.

  2. Select Configure > Connectors.

  3. Click on the connector that you want to modify (if there is an existing connector configuration) or click New Connector to set up a new connector configuration.

15.3.1.2. Editing Connector Configuration Files

A number of sample provisioner files are provided in /path/to/openidm/samples/example-configurations/provisioners. To modify connector configuration files directly, edit the sample provisioner file that corresponds to the resource to which you are connecting.

The following excerpt of an example LDAP connector configuration shows the name for the connector and two attributes of an account object type. In the attribute mapping definitions, the attribute name is mapped from the nativeName (the attribute name used on the external resource) to the attribute name that is used in IDM. The sn attribute in LDAP is mapped to lastName in IDM. The homePhone attribute is defined as an array, because it can have multiple values:

{
    "name": "MyLDAP",
    "objectTypes": {
        "account": {
            "lastName": {
                "type": "string",
                "required": true,
                "nativeName": "sn",
                "nativeType": "string"
            },
            "homePhone": {
                "type": "array",
                "items": {
                    "type": "string",
                    "nativeType": "string"
                },
                "nativeName": "homePhone",
                "nativeType": "string"
            }
        }
    }
}

For IDM to access external resource objects and attributes, the object and its attributes must match the connector configuration. Note that the connector file only maps external resource objects to IDM objects. To construct attributes and to manipulate their values, you use the synchronization mappings file, described in the following section.

15.3.2. Mapping Source Objects to Target Objects

A synchronization mapping specifies a relationship between objects and their attributes in two data stores. A typical attribute mapping, between objects in an external LDAP directory and an internal Managed User data store, is:

"source": "lastName",
"target": "sn"

In this case, the lastName source attribute is mapped to the sn (surname) attribute on the target.

The core synchronization configuration is defined in your project's synchronization mappings file (conf/sync.json). The mappings file contains one or more mappings for every resource that must be synchronized.

Mappings are always defined from a source resource to a target resource. To configure bidirectional synchronization, you must define two mappings. For example, to configure bidirectional synchronization between an LDAP server and a local repository, you would define the following two mappings:

  • LDAP Server > Local Repository

  • Local Repository > LDAP Server

With bidirectional synchronization, IDM includes a links property that enables you to reuse the links established between objects, for both mappings. For more information, see Section 15.6, "Reusing Links Between Mappings".
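
For example, a reverse mapping might reuse the links established by the first mapping by naming that mapping in its links property. The following is only a sketch; the mapping and connector names are illustrative:

{
    "mappings": [
        {
            "name": "systemLdapAccounts_managedUser",
            "source": "system/ldap/account",
            "target": "managed/user"
        },
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "target": "system/ldap/account",
            "links": "systemLdapAccounts_managedUser"
        }
    ]
}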

You can update a mapping while the server is running. To avoid inconsistencies between repositories, do not update a mapping while a reconciliation is in progress for that mapping.

15.3.2.1. Specifying the Resource Mapping

Objects in external resources are specified in a mapping as system/name/object-type, where name is the name used in the connector configuration file, and object-type is the object defined in the connector configuration file list of object types. Objects in the repository are specified in the mapping as managed/object-type, where object-type is defined in your project's managed objects configuration file (conf/managed.json).

External resources, and IDM managed objects, can be the source or the target in a mapping. By convention, the mapping name is a string of the form source_target, as shown in the following example:

{
    "mappings": [
        {
            "name": "systemLdapAccounts_managedUser",
            "source": "system/ldap/account",
            "target": "managed/user",
            "properties": [
                {
                    "source": "lastName",
                    "target": "sn"
                },
                {
                    "source": "telephoneNumber",
                    "target": "telephoneNumber"
                },
                {
                    "target": "phoneExtension",
                    "default": "0047"
                },
                {
                    "source": "email",
                    "target": "mail",
                    "comment": "Set mail if non-empty.",
                    "condition": {
                        "type": "text/javascript",
                        "source": "(object.email != null)"
                    }
                },
                {
                    "source": "",
                    "target": "displayName",
                    "transform": {
                        "type": "text/javascript",
                        "source": "source.lastName +', ' + source.firstName;"
                    }
                },
                {
                    "source" : "uid",
                    "target" : "userName",
                    "condition" : "/linkQualifier eq \"user\""
                }
            ]
        }
    ]
}    

In this example, the name of the source is the external resource (ldap), and the target is IDM's user repository, specifically managed/user. The properties defined in the mapping reflect attribute names that are defined in the IDM configuration. For example, the source attribute uid is defined in the ldap connector configuration file, rather than on the external resource itself.

You can also configure synchronization mappings in the Admin UI. To do so, navigate to http://localhost:8080/admin, and click Configure > Mappings. The Admin UI serves as a front end to the IDM configuration files, so the changes you make to mappings in the Admin UI are written to your project's conf/sync.json file.

You can also configure mappings between social identity providers and various IDM properties, based on the selfservice.propertymap.json file. However, these mappings are not reconciled or synchronized.

To review the list of available properties in the Admin UI, select the line with the applicable source and target properties, and choose the Property List tab.

Matching target and source properties in the Admin UI

15.3.2.2. Transforming Attributes in a Mapping

Use a mapping to define attribute transformations during synchronization. In the following sample mapping excerpt, the value of the displayName attribute on the target is set using a combination of the lastName and firstName attribute values from the source:

{
    "source": "",
    "target": "displayName",
    "transform": {
        "type": "text/javascript",
        "source": "source.lastName +', ' + source.firstName;"
    }
},   

For transformations, the source property is optional. However, a source object is only available when you specify the source property. Therefore, in order to use source.lastName and source.firstName to calculate the displayName, the example specifies "source" : "".

If you set "source" : "" (not specifying an attribute), the entire object is regarded as the source, and you must include the attribute name in the transformation script. For example, to transform the source mail attribute to lower case, your script would be source.mail.toLowerCase();. If you do specify a source attribute (for example "source" : "mail"), just that attribute is regarded as the source. In this case, the transformation script would be source.toLowerCase();.
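
To illustrate the difference, the following sketch shows both forms. The two property entries are alternatives for the same target attribute; you would use one or the other, not both, and the mail attribute name is taken from the example above:

{
    "source": "",
    "target": "mail",
    "transform": {
        "type": "text/javascript",
        "source": "source.mail.toLowerCase();"
    }
}

{
    "source": "mail",
    "target": "mail",
    "transform": {
        "type": "text/javascript",
        "source": "source.toLowerCase();"
    }
}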

To set up a transformation script in the Admin UI:

  1. Select Configure > Mappings, and select the Mapping.

  2. Select the line with the target attribute whose value you want to set.

  3. On the Transformation Script tab, select JavaScript or Groovy, and enter the transformation as an Inline Script, or specify the path to the file containing your transformation script.

When you use the UI to map a property whose value is encrypted, you are prompted to set up a transformation script to decrypt the value when that property is synchronized. The resulting mapping in sync.json looks similar to the following, which shows the transformation of a user's password property:

{
    "target" : "userPassword",
    "source" : "password",
    "transform" : {
        "type" : "text/javascript",
        "globals" : { },
        "source" : "openidm.decrypt(source);"
    },
    "condition" : {
        "type" : "text/javascript",
        "globals" : { },
        "source" : "object.password != null"
    }
}

15.3.2.3. Using Scriptable Conditions in a Mapping

By default, IDM synchronizes all attributes in a mapping. To facilitate more complex relationships between source and target objects, you can define conditions for which IDM maps certain attributes. You can define two types of mapping conditions:

  • Scriptable conditions, in which an attribute is mapped only if the defined script evaluates to true

  • Condition filters, a declarative filter that sets the conditions under which the attribute is mapped. Condition filters can include a link qualifier, that identifies the type of relationship between the source object and multiple target objects. For more information, see Section 15.3.2.5, "Mapping a Single Source Object to Multiple Target Objects".

    Examples of condition filters include:

    • "condition": "/object/country eq 'France'" - only map the attribute if the object's country attribute equals France.

    • "condition": "/object/password pr" - only map the attribute if the object's password attribute is present.

    • "/linkQualifier eq 'admin'" - only map the attribute if the link between this source and target object is of type admin.

To set up mapping conditions in the Admin UI, select Configure > Mappings. Click the mapping for which you want to configure conditions. On the Properties tab, click on the attribute that you want to map, then select the Conditional Updates tab.

Configure the filtered condition on the Condition Filter tab, or a scriptable condition on the Script tab.

Scriptable conditions create mapping logic, based on the result of the condition script. If the script does not return true, IDM does not manipulate the target attribute during a synchronization operation.

In the following excerpt, the value of the target mail attribute is set to the value of the source email attribute only if the source attribute is not empty:

{
    "target": "mail",
        "comment": "Set mail if non-empty.",
        "source": "email",
        "condition": {
            "type": "text/javascript",
            "source": "(object.email != null)"
        }
...   

Only the source object is in the condition script's scope, so the object.email in this example refers to the email property of the source object.

Tip

You can add comments to JSON files. While this example includes a property named comment, you can use any unique property name, as long as it is not used elsewhere in the server. IDM ignores unknown property names in JSON configuration files.

15.3.2.4. Creating Default Attributes in a Mapping

You can use a mapping to create attributes on the target resource. In the preceding example, the mapping creates a phoneExtension attribute with a default value of 0047 on the target object.

The default property specifies a value to assign to the attribute on the target object. Before IDM determines the value of the target attribute, it first evaluates any applicable conditions, followed by any transformation scripts. If the source property and the transform script yield a null value, it then applies the default value, in both create and update actions. The default value overrides the target value, if one exists.
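
As a sketch of that order of evaluation, the following property mapping (the jobCode and employeeType attribute names are illustrative) transforms the source value when one is present; if the source attribute and the transformation yield null, the default value CONTRACTOR is applied:

{
    "source": "jobCode",
    "target": "employeeType",
    "transform": {
        "type": "text/javascript",
        "source": "source ? source.toUpperCase() : null;"
    },
    "default": "CONTRACTOR"
}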

To set up attributes with default values in the Admin UI:

  1. Select Configure > Mappings, and click on the Mapping you want to edit.

  2. Click on the Target Property that you want to create (phoneExtension in the previous example), select the Default Values tab, and enter a default value for that property mapping.

15.3.2.5. Mapping a Single Source Object to Multiple Target Objects

In certain cases, you might have a single object in a resource that maps to more than one object in another resource. For example, assume that managed user, bjensen, has two distinct accounts in an LDAP directory: an employee account (under uid=bjensen,ou=employees,dc=example,dc=com) and a customer account (under uid=bjensen,ou=customers,dc=example,dc=com). You want to map both of these LDAP accounts to the same managed user account.

IDM uses link qualifiers to manage this one-to-many scenario. To map a single source object to multiple target objects, you indicate how the source object should be linked to the target object by defining link qualifiers. A link qualifier is essentially a label that identifies the type of link (or relationship) between each object.

In the previous example, you would define two link qualifiers that enable you to link both of bjensen's LDAP accounts to her managed user object, as shown in the following diagram:

Graphic shows one managed user object for bjensen with two links to two distinct system objects in an LDAP directory

Note from this diagram that the link qualifier is a property of the link between the source and target object, and not a property of the source or target object itself.

Link qualifiers are defined as part of the mapping (in your project's conf/sync.json file). Each link qualifier must be unique within the mapping. If no link qualifier is specified (when only one possible matching target object exists), IDM uses a default link qualifier with the value default.

Link qualifiers can be defined as a static list, or dynamically, using a script. The following excerpt from a sample mapping shows the two static link qualifiers, employee and customer, described in the previous example:

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "target": "system/MyLDAP/account",
            "linkQualifiers" : [ "employee", "customer" ],
...

The list of static link qualifiers is evaluated for every source record. That is, every reconciliation processes all synchronization operations, for each link qualifier, in turn.

A dynamic link qualifier script returns a list of link qualifiers applicable for each source record. For example, suppose you have two types of managed users - employees and contractors. For employees, a single managed user (source) account can correlate with three different LDAP (target) accounts - employee, customer, and manager. For contractors, a single managed user account can correlate with only two separate LDAP accounts - contractor, and customer. The possible linking situations for this scenario are shown in the following diagram:

Graphic shows two managed user objects, with different link qualifiers to map to multiple possible target objects in an LDAP directory

In this scenario, you could write a script to generate a dynamic list of link qualifiers, based on the managed user type. For employees, the script would return [employee, customer, manager] in its list of possible link qualifiers. For contractors, the script would return [contractor, customer] in its list of possible link qualifiers. A reconciliation operation would then only process the list of link qualifiers applicable to each source object.

If your source resource includes a large number of records, you should use a dynamic link qualifier script instead of a static list of link qualifiers. Generating the list of applicable link qualifiers dynamically avoids unnecessary additional processing for those qualifiers that will never apply to specific source records. Synchronization performance is therefore improved for large source data sets.

You can include a dynamic link qualifier script inline (using the source property), or by referencing a JavaScript or Groovy script file (using the file property). The following link qualifier script sets up the dynamic link qualifier lists described in the previous example.

Note

The source property value has been formatted over multiple lines for readability. If you use this example, you must format the value as a single line.

{
  "mappings": [
    {
      "name": "managedUser_systemLdapAccounts",
      "source": "managed/user",
      "target": "system/MyLDAP/account",
      "linkQualifiers" : {
        "type" : "text/javascript",
        "globals" : { },
        "source" : "if (returnAll) {
                      ['contractor', 'employee', 'customer', 'manager']
                    } else {
                      if(object.type === 'employee') {
                        ['employee', 'customer', 'manager']
                      } else {
                        ['contractor', 'customer']
                      }
                    }"
      }
...

To reference an external link qualifier script, provide a link to the file in the file property:

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "target": "system/MyLDAP/account",
            "linkQualifiers" : {
                "type" : "text/javascript",
                "file" : "script/linkQualifiers.js"
            }
... 

Dynamic link qualifier scripts must return all valid link qualifiers when the returnAll global variable is true. The returnAll variable is used during the target reconciliation phase to check whether there are any target records that are unassigned, for each known link qualifier. For a list of the variables available to a dynamic link qualifier script, see Section E.3.2, "Script Triggers Defined in sync.json".
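
Based on the inline example shown previously, the referenced script/linkQualifiers.js file might contain logic along these lines (a sketch only):

// Sketch of script/linkQualifiers.js, mirroring the inline example above.
// When returnAll is true (target reconciliation phase), return every valid qualifier.
if (returnAll) {
    ['contractor', 'employee', 'customer', 'manager'];
} else if (object.type === 'employee') {
    // Employees can link to employee, customer, and manager accounts.
    ['employee', 'customer', 'manager'];
} else {
    // Contractors can link to contractor and customer accounts.
    ['contractor', 'customer'];
}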

On their own, link qualifiers have no functionality. However, they can be referenced by various aspects of reconciliation to manage the situations where a single source object maps to multiple target objects. The following examples show how link qualifiers can be used in reconciliation operations:

  • Use link qualifiers during object creation, to create multiple target objects per source object.

    The following excerpt of a sample mapping defines a transformation script that generates the value of the dn attribute on an LDAP system. If the link qualifier is employee, the value of the target dn is set to "uid=userName,ou=employees,dc=example,dc=com". If the link qualifier is customer, the value of the target dn is set to "uid=userName,ou=customers,dc=example,dc=com". The reconciliation operation iterates through the link qualifiers for each source record. In this case, two LDAP objects, with different dn values, would be created for each managed user object.

            {
              "target" : "dn",
              "transform" : {
                "type" : "text/javascript",
                "globals" : { },
                "source" : "if (linkQualifier === 'employee')
                           { 'uid=' + source.userName + ',ou=employees,dc=example,dc=com'; }
                           else
                           if (linkQualifier === 'customer')
                           { 'uid=' + source.userName + ',ou=customers,dc=example,dc=com'; }"
              },
              "source" : ""
            } 
  • Use link qualifiers in conjunction with a correlation query that assigns a link qualifier based on the values of an existing target object.

    During source synchronization, IDM queries the target system for every source record and link qualifier, to check if there are any matching target records. If a match is found, the sourceId, targetId, and linkQualifier are all saved as the link.

    The following excerpt of a sample mapping shows the two link qualifiers described previously (employee and customer). The correlation query first searches the target system for the employee link qualifier. If a target object matches the query, based on the value of its dn attribute, IDM creates a link between the source object and that target object and assigns the employee link qualifier to that link. This process is repeated for all source records. Then, the correlation query searches the target system for the customer link qualifier. If a target object matches that query, IDM creates a link between the source object and that target object and assigns the customer link qualifier to that link.

    "linkQualifiers" : ["employee", "customer"],
      "correlationQuery" : [
        {
          "linkQualifier" : "employee",
          "type" : "text/javascript",
          "source" : "var query = {'_queryFilter': 'dn co \"' + uid=source.userName + 'ou=employees\"'}; query;"
        },
        {
          "linkQualifier" : "customer",
          "type" : "text/javascript",
          "source" : "var query = {'_queryFilter': 'dn co \"' + uid=source.userName + 'ou=customers\"'}; query;"
        }
      ]
    ...  

    For more information about correlation queries, see Section 15.3.2.6, "Correlating Source Objects With Existing Target Objects".

  • Use link qualifiers during policy validation to apply different policies based on the link type.

    The following excerpt of a sample sync.json file shows two link qualifiers, user and test. Depending on the link qualifier, different actions are taken when the target record is ABSENT:

    {
        "mappings" : [
            {
                "name" : "systemLdapAccounts_managedUser",
                "source" : "system/ldap/account",
                "target" : "managed/user",
                "linkQualifiers" : [
                    "user",
                    "test"
            ],
        "properties" : [
        ...
        "policies" : [
            {
                 "situation" : "CONFIRMED",
                 "action" : "IGNORE"
            },
            {
                 "situation" : "FOUND",
                 "action" : "UPDATE
            }
            {
                 "condition" : "/linkQualifier eq \"user\"",
                 "situation" : "ABSENT",
                 "action" : "CREATE",
                 "postAction" : {
                     "type" : "text/javascript",
                     "source" : "java.lang.System.out.println('Created user: \');"
                 }
            },
            {
                "condition" : "/linkQualifier eq \"test\"",
                "situation" : "ABSENT",
                "action" : "IGNORE",
                "postAction" : {
                    "type" : "text/javascript",
                    "source" : "java.lang.System.out.println('Ignored user: ');"
                }
            },
            ...

    With this sample mapping, the synchronization operation creates an object in the target system only if the potential match is assigned a user link qualifier. If the match is assigned a test qualifier, no target object is created. In this way, the process avoids creating duplicate test-related accounts in the target system.

Tip

To set up link qualifiers in the Admin UI, select Configure > Mappings. Select a mapping, and click Properties > Link Qualifiers.

For an example that uses link qualifiers in conjunction with roles, see Chapter 14, "Linking Multiple Accounts to a Single Identity" in the Samples Guide.

15.3.2.6. Correlating Source Objects With Existing Target Objects

When IDM creates an object on a target system during synchronization, it also creates a link between the source and target object. IDM then uses that link to determine the object's synchronization situation during later synchronization operations. For a list of synchronization situations, see Section 15.13.1, "How Synchronization Situations Are Assessed".

With every synchronization operation, IDM can correlate existing source and target objects. Correlation matches source and target objects, based on the results of a query or script, and creates links between matched objects.

Correlation queries and correlation scripts are defined in your project's mapping (conf/sync.json) file. Each query or script is specific to the mapping for which it is configured. You can also configure correlation by using the Admin UI. Select Configure > Mappings, and click on the mapping for which you want to correlate. On the Association tab, expand Association Rules, and select Correlation Queries or Correlation Script from the list.

The following sections describe how to write correlation queries and scripts.

15.3.2.6.1. Writing Correlation Queries

IDM processes a correlation query by constructing a query map. The content of the query is generated dynamically, using values from the source object. For each source object, a new query is sent to the target system, using (possibly transformed) values from the source object for its execution.

Queries are run against target resources, either managed or system objects, depending on the mapping. Correlation queries on system objects access the connector, which executes the query on the external resource.

Correlation queries can be expressed using a query filter (_queryFilter), a predefined query (_queryId), or a native query expression (_queryExpression). For more information on these query types, see Section 8.3, "Defining and Calling Queries". The synchronization process executes the correlation query to search through the target system for objects that match the current source object.

The preferred syntax for a correlation query is a filtered query, using the _queryFilter keyword. Filtered queries should work in the same way on any backend, whereas other query types are generally specific to the backend. Predefined queries (using _queryId) and native queries (using _queryExpression) can also be used for correlation queries on managed resources. Note that system resources do not support native queries or predefined queries other than query-all-ids (which serves no purpose in a correlation query).

To configure a correlation query, define a script whose source returns a query that uses the _queryFilter, _queryId, or _queryExpression keyword. For example:

  • For a _queryId, the value is the named query. Named parameters in the query map are expected by that query.

    {'_queryId' : 'for-userName', 'uid' : source.name}
  • For a _queryFilter, the value is the abstract filter string:

    { "_queryFilter" : "uid eq \"" + source.userName + "\"" }
  • For a _queryExpression, the value is the system-specific query expression, such as raw SQL.

    {'_queryExpression': 'select * from managed_user where givenName = \"' + source.firstname + '\"' }

    Caution

    Using a query expression in this way is not recommended as it exposes your system to SQL injection exploits.

Using Filtered Queries to Correlate Objects

For filtered queries, the script that is defined or referenced in the correlationQuery property must return an object with the following elements:

  • The element that is being compared on the target object, for example, uid.

    The element on the target object is not necessarily a single attribute. Your query filter can be simple or complex; valid query filters range from a single operator to an entire boolean expression tree.

    If the target object is a system object, this attribute must be referred to by its IDM name rather than its OpenICF nativeName. For example, given the following provisioner configuration excerpt, the attribute to use in the correlation query would be uid and not __NAME__:

    "uid" : {
        "type" : "string",
        "nativeName" : "__NAME__",
        "required" : true,
        "nativeType" : "string"
    }
    ...   
  • The value to search for in the query.

    This value is generally based on one or more values from the source object. However, it does not have to match the value of a single source object property. You can define how your script uses the values from the source object to find a matching record in the target system.

    You might use a transformation of a source object property, such as toUpperCase(). You can concatenate that output with other strings or properties. You can also use this value to call an external REST endpoint, and redirect the response to the final "value" portion of the query.

The following correlation query matches source and target objects if the value of the uid attribute on the target is the same as the userName attribute on the source:

"correlationQuery" : {
    "type" : "text/javascript",
    "source" : "var qry = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; qry"
},  

The query can return zero or more objects. The situation that IDM assigns to the source object depends on the number of target objects that are returned, and on the presence of any link qualifiers in the query. For information about synchronization situations, see Section 15.13.1, "How Synchronization Situations Are Assessed". For information about link qualifiers, see Section 15.3.2.5, "Mapping a Single Source Object to Multiple Target Objects".
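
A correlation filter is not limited to a single attribute. The following sketch (the firstName, lastName, givenName, and sn attribute names are illustrative) matches source and target objects on a combination of first name and last name:

"correlationQuery" : {
    "type" : "text/javascript",
    "source" : "var qry = {'_queryFilter': 'givenName eq \"' + source.firstName + '\" and sn eq \"' + source.lastName + '\"'}; qry;"
},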

Using Predefined Queries to Correlate Objects

For correlation queries on managed objects, you can use a query that has been predefined in the database table configuration file for the repository, either conf/repo.jdbc.json or conf/repo.opendj.json. You reference the query ID in your project's conf/sync.json file.

The following example shows a query defined in the DS repository configuration (conf/repo.opendj.json) that can be used as the basis for a correlation query:

"for-userName": {
        "_queryFilter": "/userName eq \"${uid}\""
},

You would call this query in the mapping (sync.json) file as follows:

{
    "correlationQuery": {
      "type": "text/javascript",
      "source":
        "var qry = {'_queryId' : 'for-userName', 'uid' : source.name}; qry;"
    }
  } 

In this correlation query, the _queryId property value (for-userName) matches the name of the query specified in conf/repo.opendj.json. The source.name value replaces ${uid} in the query.

Using the Expression Builder to Create Correlation Queries

The expression builder is a declarative correlation mechanism that makes it easier to configure correlation queries.

The easiest way to use the expression builder to create a correlation query is through the Admin UI:

  1. Select Configure > Mappings and select the mapping for which you want to configure a correlation query.

  2. On the Association tab, expand the Association Rules item and select Correlation Queries.

  3. Click Add Correlation query.

  4. In the Correlation Query window, select a link qualifier.

    If you do not need to correlate multiple potential target objects per source object, select the default link qualifier. For more information about linking to multiple target objects, see Section 15.3.2.5, "Mapping a Single Source Object to Multiple Target Objects".

  5. Select Expression Builder, and add or remove the fields whose values in the source and target must match.

    The following image shows how you can use the expression builder to build a correlation query for a mapping from managed/user to system/ldap/accounts objects. The query will create a match between the source (managed) object and the target (LDAP) object if the value of the givenName or the telephoneNumber of those objects is the same.

    Admin UI mapping screen showing correlation query
  6. Click Submit to exit the Correlation Query pop-up, then click Save.

The correlation query created in the previous steps displays as follows in the mapping configuration (sync.json):

"correlationQuery" : [
    {
        "linkQualifier" : "default",
        "expressionTree" : {
            "any" : [
                "givenName",
                "telephoneNumber"
            ]
        },
        "mapping" : "managedUser_systemLdapAccounts",
        "type" : "text/javascript",
        "file" : "ui/correlateTreeToQueryFilter.js"
    }
]   
15.3.2.6.2. Writing Correlation Scripts

If you need a more powerful correlation mechanism than a simple query can provide, you can write a correlation script with additional logic. Correlation scripts are generally more complex than correlation queries and impose no restrictions on the methods used to find matching objects. A correlation script must execute a query and return the result of that query.

The result of a correlation script is a list of maps, each of which contains a candidate _id value. If no match is found, the script returns a zero-length list. If exactly one match is found, the script returns a single-element list. If there are multiple ambiguous matches, the script returns a list with multiple elements. There is no assumption that the matching target record or records can be found by a simple query on the target system. All of the work necessary to find matching records is left to the script.

In general, a correlation query should meet the requirements of most deployments. Correlation scripts can be useful, however, if your query needs extra processing, such as fuzzy-logic matching or out-of-band verification with a third-party service over REST.

The following example shows a correlation script that uses link qualifiers. The script returns resultData.result - a list of maps, each of which has an _id entry. These entries will be the values that are used for correlation.

Example 15.1. Correlation Script Using Link Qualifiers
(function () {
    var query, resultData;
    switch (linkQualifier) {
        case "test":
            logger.info("linkQualifier = test");
            query = {'_queryFilter': 'uid eq \"' + source.userName + '-test\"'};
            break;
        case "user":
            logger.info("linkQualifier = user");
            query = {'_queryFilter': 'uid eq \"' + source.userName + '\"'};
            break;
        case "default":
            logger.info("linkQualifier = default");
            query = {'_queryFilter': 'uid eq \"' + source.userName + '\"'};
            break;
        default:
            logger.info("No linkQualifier provided.");
            break;
    }
    resultData = openidm.query("system/ldap/account", query);
    logger.info("found " + resultData.result.length + " results for link qualifier " + linkQualifier);
    for (var i = 0; i < resultData.result.length; i++) {
        logger.info("found target: " + resultData.result[i]._id);
    }
    return resultData.result;
}());

To configure a correlation script in the Admin UI, follow these steps:

  1. Select Configure > Mappings and select the mapping for which you want to configure the correlation script.

  2. On the Association tab, expand the Association Rules item and select Correlation Script from the list.

    Admin UI mapping screen showing correlation script
  3. Select a script type (either JavaScript or Groovy) and either enter the script source in the Inline Script box, or specify the path to a file that contains the script.

    To create a correlation script, use the details from the source object to find the matching record in the target system. If you are using link qualifiers to match a single source record to multiple target records, you must also use the value of the linkQualifier variable within your correlation script to find the target ID that applies for that qualifier.

  4. Click Save to save the script as part of the mapping.

15.3.3. Filtering Synchronized Objects

By default, IDM synchronizes all objects that match those defined in the connector configuration for the resource. Many connectors allow you to limit the scope of objects that the connector accesses. For example, the LDAP connector allows you to specify base DNs and LDAP filters so that you do not need to access every entry in the directory. You can also filter the source or target objects that are included in a synchronization operation. To apply these filters, use the validSource, validTarget, or sourceCondition properties in your mapping:

validSource

A script that determines if a source object is valid to be mapped. The script yields a boolean value: true indicates that the source object is valid; false can be used to defer mapping until some condition is met. In the root scope, the source object is provided in the "source" property. If the script is not specified, then all source objects are considered valid:

{
    "validSource": {
        "type": "text/javascript",
        "source": "source.ldapPassword != null"
    }
}
validTarget

A script used during the second phase of reconciliation that determines if a target object is valid to be mapped. The script yields a boolean value: true indicates that the target object is valid; false indicates that the target object should not be included in reconciliation. In the root scope, the target object is provided in the "target" property. If the script is not specified, then all target objects are considered valid for mapping:

{
    "validTarget": {
        "type": "text/javascript",
        "source": "target.employeeType == 'internal'"
    }
}
sourceCondition

The sourceCondition element defines an additional filter that must be met for a source object's inclusion in a mapping.

This condition works like a validSource script. Its value can be either a queryFilter string, or a script configuration. sourceCondition is used principally to specify that a mapping applies only to a particular role or entitlement.

The following sourceCondition restricts synchronization to those user objects whose account status is active:

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "sourceCondition": "/source/accountStatus eq \"active\"",
        ...
        }
    ]
}
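
A script configuration can be used instead of a query filter string when the condition requires more logic. The following excerpt is a sketch only: it assumes that, as for a validSource script, the source object is available to the script as source, and that the mapping defines an employee link qualifier (the linkQualifier variable is in scope for sourceCondition, as noted below):

{
    "mappings": [
        {
            "name": "managedUser_systemLdapAccounts",
            "source": "managed/user",
            "sourceCondition": {
                "comment": "Illustrative sketch: restrict the mapping to active users on employee links.",
                "type": "text/javascript",
                "source": "source.accountStatus === 'active' && linkQualifier === 'employee'"
            },
        ...
        }
    ]
}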

During synchronization, your scripts and filters have access to a source object and a target object. Examples already shown in this section use source.attributeName to retrieve attributes from the source objects. Your scripts can also write to target attributes using target.attributeName syntax:

{
    "onUpdate": {
        "type": "text/javascript",
        "source": "if (source.email != null) {target.mail = source.email;}"
    }
}

In addition, the sourceCondition filter has the linkQualifier variable in its scope.

For more information about scripting, see Appendix E, "Scripting Reference".

15.3.4. Configuring Synchronization Filters With User Preferences

For all regular users (other than openidm-admin), you can set up preferences, such as those related to marketing and news updates. You can then use those preferences as a filter when reconciling users to a target repository.

IDM includes default user preferences defined for the managed user object, available in the Admin UI and configured in the managed.json file.

15.3.4.1. Configuring End User Preferences

In the default project, common marketing preference options are included for the managed user object. To find these preferences in the Admin UI, select Configure > Managed Objects and select the User managed object. Under the Preferences tab, you'll see keys and descriptions. You can also see these preferences in the managed.json file, illustrated here:

"preferences" : {
    "title" : "Preferences",
    "viewable" : true,
    "searchable" : false,
    "use