Guide to configuring and integrating OpenIDM software into identity management solutions. This software offers flexible services for automating management of the identity life cycle.
Preface
ForgeRock Identity Platform™ serves as the basis for our simple and comprehensive Identity and Access Management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.
1. About This Guide
In this guide you will learn how to integrate OpenIDM software as part of a complete identity management solution.
This guide is written for systems integrators building solutions based on OpenIDM services. This guide describes the product functionality, and shows you how to set up and configure OpenIDM software as part of your overall identity management solution.
2. Formatting Conventions
Most examples in the documentation are created in GNU/Linux or Mac OS X operating environments. If distinctions are necessary between operating environments, examples are labeled with the operating environment name in parentheses. To avoid repetition, file system directory names are often given only in UNIX format, as in /path/to/server, even if the text applies to C:\path\to\server as well.

Absolute path names usually begin with the placeholder /path/to/. This path might translate to /opt/, C:\Program Files\, or somewhere else on your system.
Command-line, terminal sessions are formatted as follows:
$ echo $JAVA_HOME
/path/to/jdk
Command output is sometimes formatted for narrower, more readable output even though formatting parameters are not shown in the command.
Program listings are formatted as follows:
class Test {
    public static void main(String [] args)  {
        System.out.println("This is a program listing.");
    }
}
3. Accessing Documentation Online
ForgeRock publishes comprehensive documentation online:
The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.
While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.
ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.
4. Using the ForgeRock.org Site
The ForgeRock.org site has links to source code for ForgeRock open source software, as well as links to the ForgeRock forums and technical blogs.
If you are a ForgeRock customer, raise a support ticket instead of using the forums. ForgeRock support professionals will get in touch to help you.
Chapter 1. Architectural Overview
This chapter introduces the OpenIDM architecture, and describes component modules and services.
In this chapter you will learn:
How OpenIDM uses the OSGi framework as a basis for its modular architecture
How the infrastructure modules provide the features required for OpenIDM's core services
What those core services are and how they fit in to the overall architecture
How OpenIDM provides access to the resources it manages
1.1. Modular Framework
OpenIDM implements infrastructure modules that run in an OSGi framework. It exposes core services through RESTful APIs to client applications.
The following figure provides an overview of the OpenIDM architecture, which is covered in more detail in subsequent sections of this chapter.
The OpenIDM framework is based on OSGi:
- OSGi
OSGi is a module system and service platform for the Java programming language that implements a complete and dynamic component model. For a good introduction to OSGi, see the OSGi site. OpenIDM currently runs in Apache Felix, an implementation of the OSGi Framework and Service Platform.
- Servlet
The Servlet layer provides RESTful HTTP access to the managed objects and services. OpenIDM embeds the Jetty Servlet Container, which can be configured for either HTTP or HTTPS access.
1.2. Infrastructure Modules
Infrastructure modules provide the underlying features needed for core services:
- BPMN 2.0 Workflow Engine
OpenIDM provides an embedded workflow and business process engine based on Activiti and the Business Process Model and Notation (BPMN) 2.0 standard.
For more information, see "Integrating Business Processes and Workflows".
- Task Scanner
OpenIDM provides a task-scanning mechanism that performs a batch scan for a specified property in OpenIDM data, on a scheduled interval. The task scanner then executes a task when the value of that property matches a specified value.
For more information, see "Scanning Data to Trigger Tasks".
- Scheduler
The scheduler provides a cron-like scheduling component implemented using the Quartz library. Use the scheduler, for example, to enable regular synchronizations and reconciliations.
For more information, see "Scheduling Tasks and Events".
- Script Engine
The script engine is a pluggable module that provides the triggers and plugin points for OpenIDM. OpenIDM currently supports JavaScript and Groovy.
- Policy Service
OpenIDM provides an extensible policy service that applies validation requirements to objects and properties when they are created or updated.
For more information, see "Using Policies to Validate Data".
- Audit Logging
Auditing logs all relevant system activity to the configured log stores. This includes the data from reconciliation as a basis for reporting, as well as detailed activity logs to capture operations on the internal (managed) and external (system) objects.
For more information, see "Logging Audit Information".
- Repository
The repository provides a common abstraction for a pluggable persistence layer. OpenIDM supports reconciliation and synchronization with several major external repositories in production, including relational databases, LDAP servers, and even flat CSV and XML files.
The repository API uses a JSON-based object model with RESTful principles consistent with the other OpenIDM services. To facilitate testing, OpenIDM includes an embedded instance of OrientDB, a NoSQL database. You can then incorporate a supported internal repository, as described in "Installing a Repository For Production" in the Installation Guide.
1.3. Core Services
The core services are the heart of the OpenIDM resource-oriented unified object model and architecture:
- Object Model
Artifacts handled by OpenIDM are Java object representations of the JavaScript object model as defined by JSON. The object model supports interoperability and potential integration with many applications, services, and programming languages.
OpenIDM can serialize and deserialize these structures to and from JSON as required. OpenIDM also exposes a set of triggers and functions that system administrators can define, in either JavaScript or Groovy, which can natively read and modify these JSON-based object model structures.
- Managed Objects
A managed object is an object that represents the identity-related data managed by OpenIDM. Managed objects are configurable, JSON-based data structures that OpenIDM stores in its pluggable repository. The default managed object configuration includes users and roles, but you can define any kind of managed object, for example, groups or devices.
You can access managed objects over the REST interface with a query similar to the following:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/..."
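For example, assuming the default managed user configuration, the following call uses the predefined query-all-ids query to return the IDs of all managed users:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids"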
- System Objects
System objects are pluggable representations of objects on external systems. For example, a user entry that is stored in an external LDAP directory is represented as a system object in OpenIDM.
System objects follow the same RESTful resource-based design principles as managed objects. They can be accessed over the REST interface with a query similar to the following:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/system/..."
A default implementation for the OpenICF framework allows any connector object to be represented as a system object.
- Mappings
Mappings define policies between source and target objects and their attributes during synchronization and reconciliation. Mappings can also define triggers for validation, customization, filtering, and transformation of source and target objects.
For more information, see "Synchronizing Data Between Resources".
- Synchronization and Reconciliation
Reconciliation enables on-demand and scheduled resource comparisons between the OpenIDM managed object repository and source or target systems. Comparisons can result in different actions, depending on the mappings defined between the systems.
Synchronization enables creating, updating, and deleting resources from a source to a target system, either on demand or according to a schedule.
For more information, see "Synchronizing Data Between Resources".
1.4. Secure Commons REST Commands
Representational State Transfer (REST) is a software architecture style for exposing resources, using the technologies and protocols of the World Wide Web. For more information on the ForgeRock REST API, see "REST API Reference".
REST interfaces are commonly tested with a curl command. Many of these commands are used in this document. They work with the standard ports associated with Java EE communications, 8080 and 8443.
To run curl over the secure port, 8443, you must either include the --insecure option, or follow the instructions shown in "Restrict REST Access to the HTTPS Port". You can use those instructions with the self-signed certificate generated when OpenIDM starts, or with a *.crt file provided by a certificate authority.

In many examples in this guide, curl commands to the secure port are shown with a --cacert self-signed.crt option. Instructions for creating that self-signed.crt file are shown in "Restrict REST Access to the HTTPS Port".
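For example, assuming a self-signed.crt file in the current directory, a basic health check over the secure port might look like this:

$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "https://localhost:8443/openidm/info/ping"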
1.5. Access Layer
The access layer provides the user interfaces and public APIs for accessing and managing the OpenIDM repository and its functions:
- RESTful Interfaces
OpenIDM provides REST APIs for CRUD operations, for invoking synchronization and reconciliation, and to access several other services.
For more information, see "REST API Reference".
- User Interfaces
User interfaces provide access to most of the functionality available over the REST API.
Chapter 2. Starting and Stopping the Server
This chapter covers the scripts provided for starting and stopping OpenIDM, and describes how to verify the health of a system, that is, that all requirements are met for a successful system startup.
2.1. To Start and Stop the Server
By default you start and stop OpenIDM in interactive mode.
To start OpenIDM interactively, open a terminal or command window, change to the openidm directory, and run the startup script:

startup.sh (UNIX)

startup.bat (Windows)

The startup script starts the server, and opens an OSGi console with a -> prompt where you can issue console commands.
The default hostname and ports for OpenIDM are set in the conf/boot/boot.properties file found in the openidm/ directory. OpenIDM is initially configured to run on http on port 8080, and https on port 8443, with a hostname of localhost. For more information about changing ports and hostnames, see "Host and Port Information".
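For reference, the relevant entries in conf/boot/boot.properties resemble the following (default values shown here for illustration; your file may differ):

openidm.host=localhost
openidm.port.http=8080
openidm.port.https=8443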
To stop the server interactively in the OSGi console, run the shutdown command:
-> shutdown
You can also start OpenIDM as a background process on UNIX and Linux. Follow these steps before starting OpenIDM for the first time.
If you have already started the server, shut it down and remove the Felix cache files under openidm/felix-cache/:

-> shutdown
...
$ rm -rf felix-cache/*

Start the server in the background. The nohup command keeps the process running after you log out, and 2>&1 & redirects both standard output and standard error to the noted console.out file while running the process in the background:

$ nohup ./startup.sh > logs/console.out 2>&1 &
[1] 2343
To stop OpenIDM running as a background process, use the shutdown.sh script:
$ ./shutdown.sh
./shutdown.sh
Stopping OpenIDM (2343)
The process identifier (PID) shown during startup should match the PID shown during shutdown.
Note
Although installations on OS X systems are not supported in production, you might want to run OpenIDM on OS X in a demo or test environment. To run OpenIDM in the background on an OS X system, take the following additional steps:

Remove the org.apache.felix.shell.tui-*.jar bundle from the openidm/bundle directory.

Disable ConsoleHandler logging, as described in "Disabling Logs".
2.2. Specifying the Startup Configuration
By default, OpenIDM starts with the configuration, script, and binary files in the openidm/conf, openidm/script, and openidm/bin directories. You can launch OpenIDM with a different set of configuration, script, and binary files for test purposes, to manage different projects, or to run one of the included samples.
The startup.sh script enables you to specify the following elements of a running instance:
--project-location or -p /path/to/project/directory

The project location specifies the directory with OpenIDM configuration and script files.

All configuration objects and any artifacts that are not in the bundled defaults (such as custom scripts) must be included in the project location. These objects include all files otherwise included in the openidm/conf and openidm/script directories.

For example, the following command starts the server with the configuration of Sample 1, with a project location of /path/to/openidm/samples/sample1:

$ ./startup.sh -p /path/to/openidm/samples/sample1

If you do not provide an absolute path, the project location path is relative to the system property user.dir. OpenIDM then sets launcher.project.location to that relative directory path. Alternatively, if you start OpenIDM without the -p option, OpenIDM sets launcher.project.location to /path/to/openidm/conf.

Note

In this documentation, "your project" refers to the value of launcher.project.location.
--working-location or -w /path/to/working/directory

The working location specifies the directory to which OpenIDM writes its database cache, audit logs, and Felix cache. The working location includes everything that is in the default db/, audit/, and felix-cache/ subdirectories.

The following command specifies that OpenIDM writes its database cache and audit data to /Users/admin/openidm/storage:

$ ./startup.sh -w /Users/admin/openidm/storage

If you do not provide an absolute path, the path is relative to the system property user.dir. If you do not specify a working location, OpenIDM writes this data to the openidm/db, openidm/felix-cache, and openidm/audit directories.

Note that this option does not affect the location of the OpenIDM system logs. To change the location of the OpenIDM logs, edit the conf/logging.properties file.

You can also change the location of the Felix cache by editing the conf/config.properties file, or by starting OpenIDM with the -s option, described later in this section.

--config or -c /path/to/config/file

A customizable startup configuration file (named launcher.json) enables you to specify how the OSGi Framework is started.

Unless you are working with a highly customized deployment, you should not modify the default framework configuration. This option is therefore described in more detail in "Advanced Configuration".
--storage or -s /path/to/storage/directory

Specifies the OSGi storage location of the cached configuration files.

You can use this option to redirect output if you are installing OpenIDM on a read-only filesystem volume. For more information, see "Installing on a Read-Only Volume" in the Installation Guide. This option is also useful when you are testing different configurations. Sometimes when you start OpenIDM with two different sample configurations, one after the other, the cached configurations are merged and can cause problems. Specifying a storage location creates a separate felix-cache directory in that location, so the cached configuration files remain completely separate.
By default, properties files are loaded in the following order, and property values are resolved in the reverse order:
system.properties
config.properties
boot.properties
If both system and boot properties define the same attribute, the property substitution process locates the attribute in boot.properties and does not attempt to locate the property in system.properties.
You can use variable substitution in any .json configuration file with the install, working, and project locations described previously. You can substitute the following properties:

install.location
install.url
working.location
working.url
project.location
project.url
Property substitution takes the following syntax:
&{launcher.property}
For example, to specify the location of the OrientDB database, you can set the dbUrl property in repo.orientdb.json as follows:
"dbUrl" : "local:&{launcher.working.location}/db/openidm",
The database location is then relative to a working location defined in the startup configuration.
You can find more examples of property substitution in many other files in your project's conf/ subdirectory.
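For instance, the default audit configuration places audit logs under the working location. An excerpt from a typical event handler entry in conf/audit.json might look like this (illustrative excerpt only; your file may differ):

"config" : {
    "name" : "csv",
    "logDirectory" : "&{launcher.working.location}/audit"
}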
Note that property substitution does not work for connector reference properties. So, for example, the following configuration would not be valid:
"connectorRef" : { "connectorName" : "&{connectorName}", "bundleName" : "org.forgerock.openicf.connectors.ldap-connector", "bundleVersion" : "&{LDAP.BundleVersion}" ...
The "connectorName"
must be the precise string from the
connector configuration. If you need to specify multiple connector version
numbers, use a range of versions, for example:
"connectorRef" : { "connectorName" : "org.identityconnectors.ldap.LdapConnector", "bundleName" : "org.forgerock.openicf.connectors.ldap-connector", "bundleVersion" : "[1.4.0.0,2.0.0.0)", ...
2.3. Monitoring Basic Server Health
Due to the highly modular, configurable nature of OpenIDM, it is often difficult to assess whether a system has started up successfully, or whether the system is ready and stable after dynamic configuration changes have been made.
OpenIDM includes a health check service, with options to monitor the status of internal resources.
To monitor the status of external resources such as LDAP servers and external databases, use the commands described in "Checking the Status of External Systems Over REST".
2.3.1. Basic Health Checks
The health check service reports on the state of the OpenIDM system and outputs this state to the OSGi console and to the log files. The system can be in one of the following states:
STARTING - OpenIDM is starting up

ACTIVE_READY - all of the specified requirements have been met to consider the OpenIDM system ready

ACTIVE_NOT_READY - one or more of the specified requirements have not been met and the OpenIDM system is not considered ready

STOPPING - OpenIDM is shutting down
You can verify the current state of an OpenIDM system with the following REST call:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/info/ping"

{
  "_id" : "",
  "state" : "ACTIVE_READY",
  "shortDesc" : "OpenIDM ready"
}
The information is provided by the following script: openidm/bin/defaults/script/info/ping.js.
2.3.2. Getting Current Session Information
You can get more information about the current OpenIDM session, beyond basic health checks, with the following REST call:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/info/login"

{
  "_id" : "",
  "class" : "org.forgerock.services.context.SecurityContext",
  "name" : "security",
  "authenticationId" : "openidm-admin",
  "authorization" : {
    "id" : "openidm-admin",
    "component" : "repo/internal/user",
    "roles" : [ "openidm-admin", "openidm-authorized" ],
    "ipAddress" : "127.0.0.1"
  },
  "parent" : {
    "class" : "org.forgerock.caf.authentication.framework.MessageContextImpl",
    "name" : "jaspi",
    "parent" : {
      "class" : "org.forgerock.services.context.TransactionIdContext",
      "id" : "2b4ab479-3918-4138-b018-1a8fa01bc67c-288",
      "name" : "transactionId",
      "transactionId" : {
        "value" : "2b4ab479-3918-4138-b018-1a8fa01bc67c-288",
        "subTransactionIdCounter" : 0
      },
      "parent" : {
        "class" : "org.forgerock.services.context.ClientContext",
        "name" : "client",
        "remoteUser" : null,
        "remoteAddress" : "127.0.0.1",
        "remoteHost" : "127.0.0.1",
        "remotePort" : 56534,
        "certificates" : "",
        ...
The information is provided by the following script: openidm/bin/defaults/script/info/login.js.
2.3.3. Monitoring Tuning and Health Parameters
You can extend OpenIDM monitoring beyond what you can check on the openidm/info/ping and openidm/info/login endpoints. Specifically, you can get more detailed information about the state of the:

Operating System, on the openidm/health/os endpoint

Memory, on the openidm/health/memory endpoint

JDBC Pooling, on the openidm/health/jdbc endpoint

Reconciliation, on the openidm/health/recon endpoint

You can regulate access to these endpoints as described in the following section: "Understanding the Access Configuration Script (access.js)".
2.3.3.1. Operating System Health Check
With the following REST call, you can get basic information about the host operating system:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/os"

{
  "_id" : "",
  "_rev" : "",
  "availableProcessors" : 1,
  "systemLoadAverage" : 0.06,
  "operatingSystemArchitecture" : "amd64",
  "operatingSystemName" : "Linux",
  "operatingSystemVersion" : "2.6.32-504.30.3.el6.x86_64"
}
From the output, you can see that this particular system has one 64-bit CPU, with a load average of 6 percent, on a Linux system with the noted kernel version (operatingSystemVersion).
2.3.3.2. Memory Health Check
With the following REST call, you can get basic information about overall JVM memory use:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/memory"

{
  "_id" : "",
  "_rev" : "",
  "objectPendingFinalization" : 0,
  "heapMemoryUsage" : {
    "init" : 1073741824,
    "used" : 88538392,
    "committed" : 1037959168,
    "max" : 1037959168
  },
  "nonHeapMemoryUsage" : {
    "init" : 24313856,
    "used" : 69255024,
    "committed" : 69664768,
    "max" : 224395264
  }
}
The output includes information on JVM heap and non-heap memory, in bytes. Briefly:

JVM heap memory is used to store Java objects.

JVM non-heap memory is used by Java to store loaded classes and related metadata.
2.3.3.3. JDBC Health Check
Running a health check on the JDBC repository is supported only if you are using the BoneCP connection pool. This is not the default connection pool, so you must make the following changes to your configuration before running this command:
In your project's conf/datasource.jdbc-default.json file, change the connectionPool parameter as follows:

"connectionPool" : {
    "type" : "bonecp"
}

In your project's conf/boot/boot.properties file, enable the statistics MBean for the BoneCP connection pool:

openidm.bonecp.statistics.enabled=true
For a BoneCP connection pool, the following REST call returns basic information about the status of the JDBC repository:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/jdbc"

{
  "_id": "",
  "_rev": "",
  "com.jolbox.bonecp:type=BoneCP-4ffa60bd-5dfc-400f-850e-439c7aa27094": {
    "connectionWaitTimeAvg": 0.012701142857142857,
    "statementExecuteTimeAvg": 0.8084880967741935,
    "statementPrepareTimeAvg": 1.6652538867562894,
    "totalLeasedConnections": 0,
    "totalFreeConnections": 7,
    "totalCreatedConnections": 7,
    "cacheHits": 0,
    "cacheMiss": 0,
    "statementsCached": 0,
    "statementsPrepared": 31,
    "connectionsRequested": 28,
    "cumulativeConnectionWaitTime": 0,
    "cumulativeStatementExecutionTime": 25,
    "cumulativeStatementPrepareTime": 18,
    "cacheHitRatio": 0,
    "statementsExecuted": 31
  }
}
The BoneCP metrics shown include average connection wait and statement execution and preparation times, along with cumulative totals and counters for pool and statement cache usage.
2.3.3.4. Reconciliation Health Check
With the following REST call, you can get basic information about the system demands related to reconciliation:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/health/recon"

{
  "_id" : "",
  "_rev" : "",
  "activeThreads" : 1,
  "corePoolSize" : 10,
  "largestPoolSize" : 1,
  "maximumPoolSize" : 10,
  "currentPoolSize" : 1
}
From the output, you can review the number of active threads used by reconciliation, as well as the available thread pool.
2.3.4. Customizing Health Check Scripts
You can extend or override the default information that is provided by creating your own script file and its corresponding configuration file in openidm/conf/info-name.json.

Custom script files can be located anywhere, although a best practice is to place them in openidm/script/info. A sample customized script file for extending the default ping service is provided in openidm/samples/infoservice/script/info/customping.js. The corresponding configuration file is provided in openidm/samples/infoservice/conf/info-customping.json.
The configuration file has the following syntax:
{ "infocontext" : "ping", "type" : "text/javascript", "file" : "script/info/customping.js" }
The parameters in the configuration file are as follows:
infocontext specifies the relative name of the info endpoint under the info context. The information can be accessed over REST at this endpoint. For example, setting infocontext to mycontext/myendpoint would make the information accessible over REST at http://localhost:8080/openidm/info/mycontext/myendpoint.

type specifies the type of the information source. JavaScript ("type" : "text/javascript") and Groovy ("type" : "groovy") are supported.

file specifies the path to the JavaScript or Groovy file, if you do not provide a "source" parameter.

source specifies the actual JavaScript or Groovy script, if you have not provided a "file" parameter.
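For example, if you set infocontext to mycontext/myendpoint as described above, you could read the custom information with a call similar to the following (the endpoint name here is purely illustrative):

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/info/mycontext/myendpoint"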
Additional properties can be passed to the script in this configuration file (openidm/samples/infoservice/conf/info-name.json).
Script files in openidm/samples/infoservice/script/info/ have access to the following objects:

request - the request details, including the method called and any parameters passed.

healthinfo - the current health status of the system.

openidm - access to the JSON resource API.

Any additional properties that are depicted in the configuration file (openidm/samples/infoservice/conf/info-name.json).
2.3.5. Verifying the State of Health Check Service Modules
The configurable OpenIDM health check service can verify the status of required modules and services for an operational system. During system startup, OpenIDM checks that these modules and services are available and reports on whether any requirements for an operational system have not been met. If dynamic configuration changes are made, OpenIDM rechecks that the required modules and services are functioning, to allow ongoing monitoring of system operation.
OpenIDM checks all required modules. Examples of those modules are shown here:
"org.forgerock.openicf.framework.connector-framework" "org.forgerock.openicf.framework.connector-framework-internal" "org.forgerock.openicf.framework.connector-framework-osgi" "org.forgerock.openidm.audit" "org.forgerock.openidm.core" "org.forgerock.openidm.enhanced-config" "org.forgerock.openidm.external-email" ... "org.forgerock.openidm.system" "org.forgerock.openidm.ui" "org.forgerock.openidm.util" "org.forgerock.commons.org.forgerock.json.resource" "org.forgerock.commons.org.forgerock.util" "org.forgerock.openidm.security-jetty" "org.forgerock.openidm.jetty-fragment" "org.forgerock.openidm.quartz-fragment" "org.ops4j.pax.web.pax-web-extender-whiteboard" "org.forgerock.openidm.scheduler" "org.ops4j.pax.web.pax-web-jetty-bundle" "org.forgerock.openidm.repo-jdbc" "org.forgerock.openidm.repo-orientdb" "org.forgerock.openidm.config" "org.forgerock.openidm.crypto"
OpenIDM checks all required services. Examples of those services are shown here:
"org.forgerock.openidm.config" "org.forgerock.openidm.provisioner" "org.forgerock.openidm.provisioner.openicf.connectorinfoprovider" "org.forgerock.openidm.external.rest" "org.forgerock.openidm.audit" "org.forgerock.openidm.policy" "org.forgerock.openidm.managed" "org.forgerock.openidm.script" "org.forgerock.openidm.crypto" "org.forgerock.openidm.recon" "org.forgerock.openidm.info" "org.forgerock.openidm.router" "org.forgerock.openidm.scheduler" "org.forgerock.openidm.scope" "org.forgerock.openidm.taskscanner"
You can replace the list of required modules and services, or add to it, by adding the following lines to your project's conf/boot/boot.properties file. Bundles and services are specified as a list of symbolic names, separated by commas:

openidm.healthservice.reqbundles - overrides the default required bundles.

openidm.healthservice.reqservices - overrides the default required services.

openidm.healthservice.additionalreqbundles - specifies required bundles (in addition to the default list).

openidm.healthservice.additionalreqservices - specifies required services (in addition to the default list).
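For example, to require one additional bundle beyond the default list, you might add a line such as the following to boot.properties (the bundle symbolic name here is hypothetical and shown only to illustrate the format):

openidm.healthservice.additionalreqbundles=org.forgerock.openidm.workflow-activiti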
By default, OpenIDM gives the system 15 seconds to start up all the required bundles and services, before the system readiness is assessed. Note that this is not the total start time, but the time required to complete the service startup after the framework has started. You can change this default by setting the value of the servicestartmax property (in milliseconds) in your project's conf/boot/boot.properties file. This example sets the startup time to five seconds:

openidm.healthservice.servicestartmax=5000
2.4. Displaying Information About Installed Modules
On a running OpenIDM instance, you can list the installed modules and their states by typing the following command in the OSGi console. (The output will vary by configuration):
-> scr list
   Id   State          Name
[  12] [active       ] org.forgerock.openidm.endpoint
[  13] [active       ] org.forgerock.openidm.endpoint
[  14] [active       ] org.forgerock.openidm.endpoint
[  15] [active       ] org.forgerock.openidm.endpoint
[  16] [active       ] org.forgerock.openidm.endpoint
...
[  34] [active       ] org.forgerock.openidm.taskscanner
[  20] [active       ] org.forgerock.openidm.external.rest
[   6] [active       ] org.forgerock.openidm.router
[  33] [active       ] org.forgerock.openidm.scheduler
[  19] [unsatisfied  ] org.forgerock.openidm.external.email
[  11] [active       ] org.forgerock.openidm.sync
[  25] [active       ] org.forgerock.openidm.policy
[   8] [active       ] org.forgerock.openidm.script
[  10] [active       ] org.forgerock.openidm.recon
[   4] [active       ] org.forgerock.openidm.http.contextregistrator
[   1] [active       ] org.forgerock.openidm.config
[  18] [active       ] org.forgerock.openidm.endpointservice
[  30] [unsatisfied  ] org.forgerock.openidm.servletfilter
[  24] [active       ] org.forgerock.openidm.infoservice
[  21] [active       ] org.forgerock.openidm.authentication
->
To display additional information about a particular module or service, run the following command, substituting the Id of that module from the preceding list:

-> scr info Id
The following example displays additional information about the router service:
-> scr info 9
ID: 9
Name: org.forgerock.openidm.router
Bundle: org.forgerock.openidm.api-servlet (127)
State: active
Default State: enabled
Activation: immediate
Configuration Policy: optional
Activate Method: activate (declared in the descriptor)
Deactivate Method: deactivate (declared in the descriptor)
Modified Method: -
Services: org.forgerock.json.resource.ConnectionFactory
          java.io.Closeable
          java.lang.AutoCloseable
Service Type: service
Reference: requestHandler
    Satisfied: satisfied
    Service Name: org.forgerock.json.resource.RequestHandler
    Target Filter: (org.forgerock.openidm.router=*)
    Multiple: single
    Optional: mandatory
    Policy: static
...
Properties:
    component.id = 9
    component.name = org.forgerock.openidm.router
    felix.fileinstall.filename = file:/path/to/openidm-latest/conf/router.json
    jsonconfig = {
        "filters" : [
            {
                "condition" : {
                    "type" : "text/javascript",
                    "source" : "context.caller.external === true || context.current.name === 'selfservice'"
                },
                "onRequest" : {
                    "type" : "text/javascript",
                    "file" : "router-authz.js"
                }
            },
            {
                "pattern" : "^(managed|system|repo/internal)($|(/.+))",
                "onRequest" : {
                    "type" : "text/javascript",
                    "source" : "require('policyFilter').runFilter()"
                },
                "methods" : [ "create", "update" ]
            },
            {
                "pattern" : "repo/internal/user.*",
                "onRequest" : {
                    "type" : "text/javascript",
                    "source" : "request.content.password = require('crypto').hash(request.content.password);"
                },
                "methods" : [ "create", "update" ]
            }
        ]
    }
    maintenanceFilter.target = (service.pid=org.forgerock.openidm.maintenance)
    requestHandler.target = (org.forgerock.openidm.router=*)
    service.description = OpenIDM Common REST Servlet Connection Factory
    service.pid = org.forgerock.openidm.router
    service.vendor = ForgeRock AS.
->
2.5. Starting in Debug Mode
To debug custom libraries, you can start OpenIDM with the option to use the Java Platform Debugger Architecture (JPDA):
Start OpenIDM with the jpda option:

$ cd /path/to/openidm
$ ./startup.sh jpda
Executing ./startup.sh...
Using OPENIDM_HOME:   /path/to/openidm
Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Denvironment=PROD -Djava.compiler=NONE -Xnoagent -Xdebug -Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=n
Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
Listening for transport dt_socket at address: 5005
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-> OpenIDM version "5.0.0" (revision: xxxx)
OpenIDM ready

The relevant JPDA options are outlined in the startup script (startup.sh).

In your IDE, attach a Java debugger to the JVM via socket, on port 5005.
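For example, with the JDK's jdb command-line debugger, you could attach to the listening socket as follows (a minimal sketch; most IDEs offer an equivalent remote-debugging configuration):

$ jdb -attach localhost:5005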
Caution
This interface is internal and subject to change. If you depend on this interface, contact ForgeRock support.
2.6. Running As a Service on Linux Systems
OpenIDM provides a script that generates an initialization script to run OpenIDM as a service on Linux systems. You can start the service as the root user, or configure it to start during the boot process.
When OpenIDM runs as a service, logs are written to the directory in which OpenIDM was installed.
To run OpenIDM as a service, take the following steps:
If you have not yet installed OpenIDM, follow the procedure described in "Preparing to Install and Run Servers" in the Installation Guide.
Run the RC script:
$ cd /path/to/openidm/bin
$ ./create-openidm-rc.sh
As a user with administrative privileges, copy the openidm script to the /etc/init.d directory:

$ sudo cp openidm /etc/init.d/
If you run Linux with SELinux enabled, change the file context of the newly copied script with the following command:
$ sudo restorecon /etc/init.d/openidm
You can verify the change to SELinux contexts with the ls -Z /etc/init.d command. For consistency, change the user context to match other scripts in the same directory with the sudo chcon -u system_u /etc/init.d/openidm command.

Run the appropriate commands to add OpenIDM to the list of RC services:
On Red Hat-based systems, run the following commands:
$ sudo chkconfig --add openidm
$ sudo chkconfig openidm on
On Debian/Ubuntu systems, run the following command:
$ sudo update-rc.d openidm defaults
 Adding system startup for /etc/init.d/openidm ...
   /etc/rc0.d/K20openidm -> ../init.d/openidm
   /etc/rc1.d/K20openidm -> ../init.d/openidm
   /etc/rc6.d/K20openidm -> ../init.d/openidm
   /etc/rc2.d/S20openidm -> ../init.d/openidm
   /etc/rc3.d/S20openidm -> ../init.d/openidm
   /etc/rc4.d/S20openidm -> ../init.d/openidm
   /etc/rc5.d/S20openidm -> ../init.d/openidm
Note the output, as Debian/Ubuntu adds start and kill scripts to appropriate runlevels.
When you run the command, you may get the following warning message: update-rc.d: warning: /etc/init.d/openidm missing LSB information. You can safely ignore that message.
As an administrative user, start the OpenIDM service:
$ sudo /etc/init.d/openidm start
Alternatively, reboot the system to start the OpenIDM service automatically.
(Optional) The following commands stop and restart the service:
$ sudo /etc/init.d/openidm stop
$ sudo /etc/init.d/openidm restart
If you have set up a deployment of OpenIDM in a custom directory, such as /path/to/openidm/production, you can modify the /etc/init.d/openidm script.

Open the openidm script in a text editor and navigate to the START_CMD line. At the end of the command, you should see the following:

org.forgerock.commons.launcher.Main -c bin/launcher.json > logs/server.out 2>&1 &"

Include the path to the production directory. In this case, you would add -p production as shown:

org.forgerock.commons.launcher.Main -c bin/launcher.json -p production > logs/server.out 2>&1 &"

Save the openidm script file in the /etc/init.d directory. The sudo /etc/init.d/openidm start command should now start OpenIDM with the files in your production subdirectory.
Chapter 3. Command-Line Interface
This chapter describes the basic command-line interface (CLI). The CLI includes a number of utilities for managing an OpenIDM instance.
All of the utilities are subcommands of the cli.sh (UNIX) or cli.bat (Windows) scripts. To use the utilities, you can either run them as subcommands, or launch the cli script first, and then run the utility. For example, to run the encrypt utility on a UNIX system:

$ cd /path/to/openidm
$ ./cli.sh
Using boot properties at /path/to/openidm/conf/boot/boot.properties
openidm# encrypt ....

or

$ cd /path/to/openidm
$ ./cli.sh encrypt ...
By default, the command-line utilities run with the properties defined in your project's conf/boot/boot.properties file.
If you run the cli.sh command by itself, it opens an OpenIDM-specific shell prompt:
openidm#
The startup and shutdown scripts are not discussed in this chapter. For information about these scripts, see "Starting and Stopping the Server".
The following sections describe the subcommands and their use. Examples assume that you are running the commands on a UNIX system. For Windows systems, use cli.bat instead of cli.sh.
For a list of subcommands available from the openidm# prompt, run the cli.sh help command. The help and exit options shown below are self-explanatory. The other subcommands are explained in the subsections that follow:
local:keytool                Export or import a SecretKeyEntry. The Java Keytool does not allow for exporting or importing SecretKeyEntries.
local:encrypt                Encrypt the input string.
local:secureHash             Hash the input string.
local:validate               Validates all json configuration files in the configuration (default: /conf) folder.
basic:help                   Displays available commands.
basic:exit                   Exit from the console.
remote:update                Update the system with the provided update file.
remote:configureconnector    Generate connector configuration.
remote:configexport          Exports all configurations.
remote:configimport          Imports the configuration set from local file/directory.
The following options are common to the configexport, configimport, and configureconnector subcommands:
- -u or --user USER[:PASSWORD]

Allows you to specify the server user and password. Specifying a username is mandatory. If you do not specify a username, the following error is output to the OSGi console: Remote operation failed: Unauthorized. If you do not specify a password, you are prompted for one. This option is used by all three subcommands.

- --url URL

The URL of the OpenIDM REST service. The default URL is http://localhost:8080/openidm/. This can be used to import configuration files from a remote running instance of OpenIDM. This option is used by all three subcommands.

- -P or --port PORT

The port number associated with the OpenIDM REST service. If specified, this option overrides any port number specified with the --url option. The default port is 8080. This option is used by all three subcommands.
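For example, to export the configuration from a remote instance to a local directory, you might run a command similar to the following (the host name is illustrative):

$ ./cli.sh configexport --user openidm-admin --url http://openidm.example.com:8080/openidm/ /tmp/conf

Because no password is specified here, the command prompts for one.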
3.1. Using the configexport Subcommand
The configexport subcommand exports all configuration objects to a specified location, enabling you to reuse a system configuration in another environment. For example, you can test a configuration in a development environment, then export it and import it into a production environment. This subcommand also enables you to inspect the active configuration of an OpenIDM instance.
OpenIDM must be running when you execute this command.
Usage is as follows:
$ ./cli.sh configexport --user username:password export-location
For example:
$ ./cli.sh configexport --user openidm-admin:openidm-admin /tmp/conf
On Windows systems, the export-location must be provided in quotation marks, for example:
C:\openidm\cli.bat configexport --user openidm-admin:openidm-admin "C:\temp\openidm"
Configuration objects are exported as .json files to the specified directory. The command creates the directory if needed. Configuration files that are present in this directory are renamed as backup files, with a timestamp, for example, audit.json.2014-02-19T12-00-28.bkp, and are not overwritten. The following configuration objects are exported:
The internal repository table configuration (repo.orientdb.json or repo.jdbc.json) and the datasource connection configuration, for JDBC repositories (datasource.jdbc-default.json)

The script configuration (script.json)

The log configuration (audit.json)

The authentication configuration (authentication.json)

The cluster configuration (cluster.json)

The configuration of a connected SMTP email server (external.email.json)

Custom configuration information (info-name.json)

The managed object configuration (managed.json)

The connector configuration (provisioner.openicf-*.json)

The router service configuration (router.json)

The scheduler service configuration (scheduler.json)

Any configured schedules (schedule-*.json)

Standard knowledge-based authentication questions (selfservice.kba.json)

The synchronization mapping configuration (sync.json)

If workflows are defined, the configuration of the workflow engine (workflow.json) and the workflow access configuration (process-access.json)

Any configuration files related to the user interface (ui-*.json)

The configuration of any custom endpoints (endpoint-*.json)

The configuration of servlet filters (servletfilter-*.json)

The policy configuration (policy.json)
3.2. Using the configimport Subcommand
The configimport subcommand imports configuration objects from the specified directory, enabling you to reuse a system configuration from another environment. For example, you can test a configuration in a development environment, then export it and import it into a production environment.
The command updates the existing configuration from the import-location over the OpenIDM REST interface. By default, if configuration objects are present in the import-location and not in the existing configuration, these objects are added. If configuration objects are present in the existing location but not in the import-location, these objects are left untouched in the existing configuration.
The subcommand takes the following options:
-r, --replaceall, --replaceAll

Replaces the entire list of configuration files with the files in the specified import location.

Note that this option wipes out the existing configuration and replaces it with the configuration in the import-location. Objects in the existing configuration that are not present in the import-location are deleted.

--retries (integer)

New in OpenIDM 5.0.0, this option specifies the number of times the command should attempt to update the configuration if OpenIDM is not ready.

Default value: 10

--retryDelay (integer)

New in OpenIDM 5.0.0, this option specifies the delay (in milliseconds) between configuration update retries if OpenIDM is not ready.

Default value: 500
Usage is as follows:
$ ./cli.sh configimport --user username:password [--replaceAll] [--retries integer] [--retryDelay integer] import-location
For example:
$ ./cli.sh configimport --user openidm-admin:openidm-admin --retries 5 --retryDelay 250 --replaceAll /tmp/conf
On Windows systems, the import-location must be provided in quotation marks, for example:
C:\openidm\cli.bat configimport --user openidm-admin:openidm-admin --replaceAll "C:\temp\openidm"
Configuration objects are imported as .json files from the specified directory to the conf directory. The configuration objects that are imported are the same as those for the export command, described in the previous section.
3.3. Using the configureconnector Subcommand
The configureconnector subcommand generates a configuration for an OpenICF connector.
Usage is as follows:
$ ./cli.sh configureconnector --user username:password --name connector-name
Select the type of connector that you want to configure. The following example configures a new XML connector:
$ ./cli.sh configureconnector --user openidm-admin:openidm-admin --name myXmlConnector
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
0. XML Connector version 1.1.0.3
1. SSH Connector version 1.4.1.0
2. LDAP Connector version 1.4.3.0
3. Kerberos Connector version 1.4.2.0
4. Scripted SQL Connector version 1.4.3.0
5. Scripted REST Connector version 1.4.3.0
6. Scripted CREST Connector version 1.4.3.0
7. Scripted Poolable Groovy Connector version 1.4.3.0
8. Scripted Groovy Connector version 1.4.3.0
9. Database Table Connector version 1.1.0.2
10. CSV File Connector version 1.5.1.4
11. Exit
Select [0..11]: 0
Edit the configuration file and run the command again. The configuration was saved to /openidm/temp/provisioner.openicf-myXmlConnector.json
The basic configuration is saved in a file named /openidm/temp/provisioner.openicf-connector-name.json. Edit the configurationProperties parameter in this file to complete the connector configuration. For an XML connector, you can use the schema definitions in Sample 1 for an example configuration:
"configurationProperties" : { "xmlFilePath" : "samples/sample1/data/resource-schema-1.xsd", "createFileIfNotExists" : false, "xsdFilePath" : "samples/sample1/data/resource-schema-extension.xsd", "xsdIcfFilePath" : "samples/sample1/data/xmlConnectorData.xml" },
For more information about the connector configuration properties, see "Configuring Connectors".
When you have modified the file, run the configureconnector command again so that OpenIDM can pick up the new connector configuration:
$ ./cli.sh configureconnector --user openidm-admin:openidm-admin --name myXmlConnector
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Configuration was found and read from: /path/to/openidm/temp/provisioner.openicf-myXmlConnector.json
You can now copy the new provisioner.openicf-myXmlConnector.json file to the conf/ subdirectory.
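For example, from the openidm installation directory:

$ cp temp/provisioner.openicf-myXmlConnector.json conf/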
You can also configure connectors over the REST interface, or through the Admin UI. For more information, see "Creating Default Connector Configurations" and "Adding New Connectors from the Admin UI".
3.4. Using the encrypt Subcommand
The encrypt subcommand encrypts an input string, or JSON object, provided at the command line. This subcommand can be used to encrypt passwords, or other sensitive data, to be stored in the OpenIDM repository. The encrypted value is output to standard output and provides details of the cryptography key that is used to encrypt the data.
Usage is as follows:
$ ./cli.sh encrypt [-j] string
If you do not enter the string as part of the command, the command prompts for the string to be encrypted. If you enter the string as part of the command, any special characters, for example quotation marks, must be escaped.
The -j option indicates that the string to be encrypted is a JSON object, and validates the object. If the object is malformed JSON and you use the -j option, the command throws an error. It is easier to input JSON objects in interactive mode. If you input the JSON object on the command line, the object must be surrounded by quotes, and any special characters, including curly braces, must be escaped. The rules for escaping these characters are fairly complex. For more information, see section 4.8.2 of the OSGi draft specification.
For example:
$ ./cli.sh encrypt -j '\{\"password\":\"myPassw0rd\"\}'
The following example encrypts a normal string value:
$ ./cli.sh encrypt mypassword
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "type" : "x-simple-encryption",
    "value" : {
      "cipher" : "AES/CBC/PKCS5Padding",
      "salt" : "0pRncNLTJ6ZySHfV4DEtgA==",
      "data" : "pIrCCkLPhBt0rbGXiZBHkw==",
      "iv" : "l1Hau6nf2zizQSib8kkW0g==",
      "key" : "openidm-sym-default",
      "mac" : "SoqfhpvhBVuIkux8mztpeQ=="
    }
  }
}
------END ENCRYPTED VALUE------
The following example prompts for a JSON object to be encrypted:
$ ./cli.sh encrypt -j
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Enter the Json value

> Press ctrl-D to finish input
Start data input:
{"password":"myPassw0rd"}
^D
-----BEGIN ENCRYPTED VALUE-----
{
  "$crypto" : {
    "type" : "x-simple-encryption",
    "value" : {
      "cipher" : "AES/CBC/PKCS5Padding",
      "salt" : "vdz6bUztiT6QsExNrZQAEA==",
      "data" : "RgMLRbX0guxF80nwrtaZkkoFFGqSQdNWF7Ve0zS+N1I=",
      "iv" : "R9w1TcWfbd9FPmOjfvMhZQ==",
      "key" : "openidm-sym-default",
      "mac" : "9pXtSKAt9+dO3Mu0NlrJsQ=="
    }
  }
}
------END ENCRYPTED VALUE------
3.5. Using the secureHash Subcommand
The secureHash subcommand hashes an input string, or JSON object, using the specified hash algorithm. This subcommand can be used to hash password values, or other sensitive data, to be stored in the OpenIDM repository. The hashed value is output to standard output and provides details of the algorithm that was used to hash the data.
Usage is as follows:
$ ./cli.sh secureHash --algorithm algorithm [-j] string
The -a or --algorithm option specifies the hash algorithm to use. OpenIDM supports the following hash algorithms: MD5, SHA-1, SHA-256, SHA-384, and SHA-512. If you do not specify a hash algorithm, SHA-256 is used.
If you do not enter the string as part of the command, the command prompts for the string to be hashed. If you enter the string as part of the command, any special characters, for example quotation marks, must be escaped.
The -j option indicates that the string to be hashed is a JSON object, and validates the object. If the object is malformed JSON and you use the -j option, the command throws an error. It is easier to input JSON objects in interactive mode. If you input the JSON object on the command line, the object must be surrounded by quotes, and any special characters, including curly braces, must be escaped. The rules for escaping these characters are fairly complex. For more information, see section 4.8.2 of the OSGi draft specification.
For example:
$ ./cli.sh secureHash --algorithm SHA-1 '\{\"password\":\"myPassw0rd\"\}'
The following example hashes a password value (mypassword) using the SHA-1 algorithm:
$ ./cli.sh secureHash --algorithm SHA-1 mypassword
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "T9yf3dL7oepWvUPbC8kb4hEmKJ7g5Zd43ndORYQox3GiWAGU"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------
The following example prompts for a JSON object to be hashed:
$ ./cli.sh secureHash --algorithm SHA-1 -j
Executing ./cli.sh...
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
Enter the Json value

> Press ctrl-D to finish input
Start data input:
{"password":"myPassw0rd"}
^D
-----BEGIN HASHED VALUE-----
{
  "$crypto" : {
    "value" : {
      "algorithm" : "SHA-1",
      "data" : "PBsmFJZEVNHuYPZJwaF5oX0LtamUA2tikFCiQEfgIsqa/VHK"
    },
    "type" : "salted-hash"
  }
}
------END HASHED VALUE------
3.6. Using the keytool Subcommand
The keytool subcommand exports or imports secret key values.
The Java keytool command enables you to export and import public keys and certificates, but not secret or symmetric keys. The OpenIDM keytool subcommand provides this functionality.
Usage is as follows:
$ ./cli.sh keytool [--export, --import] alias
For example, to export the default OpenIDM symmetric key, run the following command:
$ ./cli.sh keytool --export openidm-sym-default
Using boot properties at /openidm/conf/boot/boot.properties
Use KeyStore from: /openidm/security/keystore.jceks
Please enter the password:
[OK] Secret key entry with algorithm AES
AES:606d80ae316be58e94439f91ad8ce1c0
The default keystore password is changeit. For security reasons, you must change this password in a production environment. For information about changing the keystore password, see "Change the Default Keystore Password".
To import a new secret key named my-new-key, run the following command:
$ ./cli.sh keytool --import my-new-key
Using boot properties at /openidm/conf/boot/boot.properties
Use KeyStore from: /openidm/security/keystore.jceks
Please enter the password:
Enter the key:
AES:606d80ae316be58e94439f91ad8ce1c0
If a secret key of that name already exists, OpenIDM returns the following error:
"KeyStore contains a key with this alias"
3.7. Using the validate Subcommand
The validate subcommand validates all .json configuration files in your project's conf/ directory.
Usage is as follows:
$ ./cli.sh validate
Executing ./cli.sh
Starting shell in /path/to/openidm
Using boot properties at /path/to/openidm/conf/boot/boot.properties
...................................................................
[Validating] Load JSON configuration files from:
[Validating]  /path/to/openidm/conf
[Validating] audit.json .................................. SUCCESS
[Validating] authentication.json ......................... SUCCESS
...
[Validating] sync.json ................................... SUCCESS
[Validating] ui-configuration.json ....................... SUCCESS
[Validating] ui-countries.json ........................... SUCCESS
[Validating] workflow.json ............................... SUCCESS
3.8. Using the update Subcommand
The update subcommand supports updates of OpenIDM for patches and migrations. For an example of this process, see "Updating Servers" in the Installation Guide.
Chapter 4. Web-Based User Interfaces
OpenIDM includes a customizable, browser-based user interface. The functionality is subdivided into Administrative and Self-Service User Interfaces.
If you are administering OpenIDM, navigate to the Administrative User Interface, also known as the Admin UI. If OpenIDM is installed on the local system, you can get to the Admin UI at the following URL: https://localhost:8443/admin. In the Admin UI, you can configure connectors, customize managed objects, set up attribute mappings, manage accounts, and more.
The Self-Service User Interface, also known as the Self-Service UI, provides role-based access to tasks based on BPMN2 workflows, and allows users to manage certain aspects of their own accounts, including configurable self-service registration. When OpenIDM starts, you can access the Self-Service UI at https://localhost:8443/.
Warning
The default password for the administrative user, openidm-admin, is openidm-admin. To protect your deployment in production, change this password.

All users, including openidm-admin, can change their password through the Self-Service UI. After you have logged in, click Change Password.
4.1. Configuring the Server from the Admin UI
You can set up a basic configuration with the Administrative User Interface (Admin UI).
Through the Admin UI, you can connect to resources, configure attribute mapping and scheduled reconciliation, and set up and manage objects, such as users, groups, and devices.
You can configure OpenIDM through Quick Start cards, and from the Configure and Manage drop-down menus. Try them out, and see what happens when you select each option.
In the following sections, you will examine the default Admin UI dashboard, and learn how to set up custom Admin UI dashboards.
Caution
If your browser uses an AdBlock extension, it might inadvertently block some UI functionality, particularly if your configuration includes strings such as ad. For example, a connection to an Active Directory server might be configured at the endpoint system/ad. To avoid problems related to blocked UI functionality, either remove the AdBlock extension, or set up a suitable white list to ensure that none of the targeted endpoints are blocked.
4.1.1. Default Admin UI Dashboard
When you log into the Admin UI, the first screen you should see is the "Reconciliation Dashboard".
The Admin UI includes a fixed top menu bar. As you navigate around the Admin UI, you should see the same menu bar throughout. You can click Dashboards > Reconciliation Dashboard to return to that screen.
The default dashboard is split into four sections, based on widgets.
Quick Start cards support one-click access to common administrative tasks, and are described in detail in the following section.
Last Reconciliation includes data from the most recent reconciliation between data stores. After you run a reconciliation, this widget displays data from that run.
System Health includes data on current CPU and memory usage.
Resources include an abbreviated list of configured connectors, mappings, and managed objects.
The Quick Start
cards allow quick access to the labeled
configuration options, described here:
Add Connector
Use the Admin UI to connect to external resources. For more information, see "Adding New Connectors from the Admin UI".
Create Mapping
Configure synchronization mappings to map objects between resources. For more information, see "Mapping Source Objects to Target Objects".
Manage Role
Set up managed provisioning or authorization roles. For more information, see "Working With Managed Roles".
Add Device
Use the Admin UI to set up managed objects, including users, groups, roles, or even Internet of Things (IoT) devices. For more information, see "Managing Accounts".
Set Up Registration
Configure User Self-Registration. You can set up the Self-Service UI login screen, with a link that allows new users to start a verified account registration process. For more information, see "Configuring User Self-Service".
Set Up Password Reset
Configure user self-service Password Reset, allowing end-users to reset forgotten passwords. For more information, see "Configuring User Self-Service".
Manage User
Allows management of users in the repository. You may have to run a reconciliation from an external repository first. For more information, see "Working with Managed Users".
Set Up System
Configure the following server elements:
Authentication, as described in "Supported Authentication and Session Modules".
Audit, as described in "Logging Audit Information".
Self-Service UI, as described in "Changing the UI Path".
Email, as described in "Configuring Outbound Email".
Updates, as described in "Updating Servers" in the Installation Guide.
4.1.2. Creating and Modifying Dashboards
To create a new dashboard, click Dashboards > New Dashboard. You're prompted for a dashboard name, and whether to set it as the default. You can then add widgets.
Alternatively, you can start with an existing dashboard. In the upper-right corner of the UI, next to the Add Widgets button, click the vertical ellipsis. In the menu that appears, you can take the following actions on the current dashboard:
Rename
Duplicate
Set as Default
Delete
To add a widget to a dashboard, click Add Widgets and add the widget of your choice in the window that appears.
To modify the position of a widget in a dashboard, click and drag the move icon for the widget. You can find that four-arrow icon in the upper right corner of the widget window, next to the vertical ellipsis.
If you add a new Quick Start widget, select the vertical ellipsis in the upper right corner of the widget, and click Settings. You can configure an Admin UI sub-widget to embed in the Quick Start widget in the pop-up menu that appears.
Click Add a Link. You can then enter a name, a destination URL, and an icon for the widget.
If you are linking to a specific page in the OpenIDM Admin UI, the destination URL can be the part of the address after the main page for the Admin UI, https://localhost:8443/admin. For example, if you want to create a quick start link to the Audit configuration tab, at https://localhost:8443/admin/#settings/audit/, you could enter #settings/audit in the destination URL text box.
OpenIDM writes the changes you make to the
ui-dashboard.json
file for your project.
For example, if you add a Last Reconciliation and Embed Web Page widget to
a new dashboard named Test, you'll see the following excerpt in your
ui-dashboard.json
file:
{ "name" : "Test", "isDefault" : false, "widgets" : [ { "type" : "frame", "size" : "large", "frameUrl" : "http://example.com", "height" : "100px", "title" : "Example.com" }, { "type" : "lastRecon", "size" : "large", "barchart" : "true" }, { "type" : "quickStart", "size" : "large", "cards" : [ { "name" : "Audit", "icon" : "fa-align-justify", "href" : "#settings/audit" } ] }, ] }
For more information on each property, see the following table:
ui-dashboard.json Properties

Property | Options | Description
name | User entry | Dashboard name
isDefault | true or false | Default dashboard; only one dashboard can be set as the default
widgets | Different options for type | Code blocks that define a widget
type | lifeCycleMemoryHeap, lifeCycleMemoryNonHeap, systemHealthFull, cpuUsage, lastRecon, resourceList, quickStart, frame, or userRelationship | Widget name
size | x-small, small, medium, or large | Width of widget, based on a 12-column grid system, where x-small=4, small=6, medium=8, and large=12; for more information, see Bootstrap CSS
height | Height, in units such as cm, mm, px, and in | Height; applies only to the Embed Web Page widget
frameUrl | URL | Web page to embed; applies only to the Embed Web Page widget
title | User entry | Label shown in the UI; applies only to the Embed Web Page widget
barchart | true or false | Reconciliation bar chart; applies only to the Last Reconciliation widget
When complete, you can select the name of the new dashboard under the Dashboards menu.
You can modify the options for each dashboard and widget. Select the vertical ellipsis in the upper right corner of the object, and make desired choices from the pop-up menu that appears.
4.2. Working With the Self-Service UI
For all users, the Self-Service UI includes Dashboard and Profile links in the top menu bar.
To access the Self-Service UI, start OpenIDM, then navigate to https://localhost:8443/. If you have not installed a
certificate that is trusted by a certificate authority, you are prompted
with an Untrusted Connection warning the first time you log in to the UI.
4.2.1. The Self-Service UI Dashboard
The Dashboard includes a list of tasks assigned to the user who has logged in, tasks assigned to the relevant group, processes available to be invoked, and current notifications for that user, along with Quick Start cards for that user's profile and password.
For examples of these tasks, processes, and notifications, see "Workflow Samples" in the Samples Guide.
4.2.2. The Self-Service UI Profile
Every user who logs into the Self-Service UI has a profile, with Basic Info and Password tabs. Users other than openidm-admin
may
see additional information, including Preferences, Social Identities, and
Security Questions tabs.
You'll see the following information under each tab:
- Basic Info
Specifies basic account information, including username, first name, last name, and email address.
- Password
Supports password changes; for more information on password policy criteria, see "Enforcing Password Policy".
- Preferences
Allows selection of preferences, as defined in the managed.json file and the Managed Object User property Preferences tab. The default preferences relate to updates and special offers.
- Social Identities
Lists social ID providers that have been enabled in the Admin UI. If you have registered with one provider, you can enable logins to this account with additional social ID providers. For more information on configuring and linking each provider, see "Configuring Social ID Providers".
- Security Questions
Shown if KBA is enabled. Includes security questions and answers for this account, created when a new user goes through the registration process. For more information on KBA, see "Configuring Self-Service Questions".
4.3. Customizing a UI Template
You may want to customize the information included in the Self-Service UI. To do so, copy the existing default template files from the openidm/ui/selfservice/default subdirectory to the associated extension/ subdirectories. To simplify the process, you can copy some or all of the content from openidm/ui/selfservice/default/templates to the openidm/ui/selfservice/extension/templates directory.
Note that these procedures do not address actual data store requirements. If you add text boxes in the UI, it is your responsibility to set up the associated properties in your repositories.
You can use a similar process to modify what is shown in the Admin UI.
4.3.1. Customizing User Self-Service Screens
In the following procedure, you will customize the screen that users see during the User Registration process. You can use a similar process to customize what a user sees during the Password Reset and Forgotten Username processes.
For user Self-Service features, you can customize options in three files.
Navigate to the extension/templates/user/process
subdirectory, and examine the following files:
User Registration:
registration/userDetails-initial.html
Password Reset:
reset/userQuery-initial.html
Forgotten Username:
username/userQuery-initial.html
The following procedure demonstrates the process for User Registration.
When you configure user self-service, as described in "Configuring User Self-Service", anonymous users who choose to register will see a screen similar to:
The screen you see is from the userDetails-initial.html file, in the selfservice/extension/templates/user/process/registration subdirectory. Open that file in a text editor.
Assume that you want new users to enter an employee ID number when they register. Create a new form-group stanza for that number. For this procedure, the stanza appears after the stanza for Last Name (or surname), sn:
<div class="form-group">
    <label class="sr-only" for="input-employeeNum">{{t 'common.user.employeeNum'}}</label>
    <input type="text" placeholder="{{t 'common.user.employeeNum'}}" id="input-employeeNum"
        name="user.employeeNum" class="form-control input-lg" />
</div>
Edit the relevant translation.json file. As this is the customized file for the Self-Service UI, you will find it in the selfservice/extension/locales/en directory that you set up in "Customizing the UI".
You need to find the right place to enter text associated with the employeeNum property. Look for the other properties in the userDetails-initial.html file. The following excerpt illustrates the employeeNum property as added to the translation.json file:
...
"givenName" : "First Name",
"sn" : "Last Name",
"employeeNum" : "Employee ID Number",
...
The next time an anonymous user tries to create an account, that user should see the new Employee ID Number field on the registration screen.
In the following procedure, you will customize what users can modify when they navigate to their User Profile page. If you want to allow users to modify additional data on their profiles, this procedure is for you.
Log in to the Self-Service UI. Click the Profile tab. You should see at least the following tabs: Basic Info and Password. In this procedure, you will add a Mobile Phone tab.
OpenIDM generates the user profile page from the UserProfileTemplate.html file. Assuming you set up custom extension subdirectories, as described in "Customizing a UI Template", you should find a copy of this file in the following directory: selfservice/extension/templates/user.
Examine the first few lines of that file. Note how the tablist includes the tabs in the Self-Service UI user profile: Basic Info and Password, associated with the common.user.basicInfo and common.user.password properties.
The following excerpt includes a third tab, with the mobilePhone property:
<div class="container">
    <div class="page-header">
        <h1>{{t "common.user.userProfile"}}</h1>
    </div>
    <div class="tab-menu">
        <ul class="nav nav-tabs" role="tablist">
            <li class="active"><a href="#userDetailsTab" role="tab" data-toggle="tab">
                {{t "common.user.basicInfo"}}</a></li>
            <li><a href="#userPasswordTab" role="tab" data-toggle="tab">
                {{t "common.user.password"}}</a></li>
            <li><a href="#userMobilePhoneNumberTab" role="tab" data-toggle="tab">
                {{t "common.user.mobilePhone"}}</a></li>
        </ul>
    </div>
...
Next, you should provide information for the tab. Based on the comments in the file, and the entries in the Password tab, the following code sets up a Mobile Phone number entry:
<div role="tabpanel" class="tab-pane panel panel-default fr-panel-tab" id="userMobilePhoneNumberTab">
    <form class="form-horizontal" id="password">
        <div class="panel-body">
            <div class="form-group">
                <label class="col-sm-3 control-label" for="input-telephoneNumber">
                    {{t "common.user.mobilePhone"}}</label>
                <div class="col-sm-6">
                    <input class="form-control" type="telephoneNumber" id="input-mobilePhone"
                        name="mobilePhone" value="" />
                </div>
            </div>
        </div>
        <div class="panel-footer clearfix">
            {{> form/_basicSaveReset}}
        </div>
    </form>
</div>
...
Note
For illustration, this procedure uses the HTML tags found in the UserProfileTemplate.html file. You can use any standard HTML content within tab-pane tags, as long as they include a standard form tag and standard input fields. OpenIDM picks up this information when the tab is saved, and uses it to PATCH user content.
Review the managed.json file. Make sure the telephoneNumber property is viewable and userEditable, as shown in the following excerpt:
"telephoneNumber" : {
    "type" : "string",
    "title" : "Mobile Phone",
    "viewable" : true,
    "userEditable" : true,
    "pattern" : "^\\+?([0-9\\- \\(\\)])*$"
},
Open the applicable translation.json file. You should find a copy of this file in the following subdirectory: selfservice/extension/locales/en/.
Search for the line with basicInfo, and add an entry for mobilePhone:
"basicInfo": "Basic Info",
"mobilePhone": "Mobile Phone",
Review the result. Log in to the Self-Service UI, and click Profile. Note the entry for the Mobile Phone tab.
4.3.2. Modifying Valid Query Fields
For Password Reset and Forgotten Username functionality, you may choose to modify Valid Query Fields, such as those described in "Configuring User Self-Service".
For example, if you click Configure > Password Reset > User Query Form, you can make changes to Valid Query Fields.
If you add, delete, or modify any Valid Query Fields, you will have to change the corresponding userQuery-initial.html file. Assuming you set up custom extension subdirectories, as described in "Customizing a UI Template", you can find this file in the following directory: selfservice/extension/templates/user/process.
If you change any Valid Query Fields, make the corresponding changes in the relevant file:
For Forgotten Username functionality, modify the username/userQuery-initial.html file.
For Password Reset functionality, modify the reset/userQuery-initial.html file.
For a model of how you can change the userQuery-initial.html file, see "Customizing the User Registration Page".
4.4. Managing Accounts
Only administrative users (with the role openidm-admin) can add, modify, and delete accounts from the Admin UI. Regular users
can modify certain aspects of their own accounts from the Self-Service UI.
4.4.1. Account Configuration
In the Admin UI, you can manage most details associated with an account.
You can configure different functionality for an account under each tab:
- Details
The Details tab includes basic identifying data for each user, with two special entries:
- Status
By default, accounts are shown as active. To suspend an account, such as for a user who has taken a leave of absence, set that user's status to inactive.
- Manager
You can assign a manager from the existing list of managed users.
- Password
As an administrator, you can create new passwords for users in the managed user repository.
- Provisioning Roles
Used to specify how objects are provisioned to an external system. For more information, see "Working With Managed Roles".
- Authorization Roles
Used to specify the authorization rights of a managed user within OpenIDM. For more information, see "Working With Managed Roles".
- Direct Reports
Users who are listed as managers of others have entries under the Direct Reports tab.
- Linked Systems
Used to display account information reconciled from external systems.
4.4.2. Procedures for Managing Accounts
With the following procedures, you can add, update, and deactivate accounts for managed objects such as users.
The managed object does not have to be a user. It can be a role, a group, or even be a physical item such as an IoT device. The basic process for adding, modifying, deactivating, and deleting other objects is the same as it is with accounts. However, the details may vary; for example, many IoT devices do not have telephone numbers.
Log in to the Admin UI at https://localhost:8443/admin.
Click Manage > User.
Click New User.
Complete the fields on the New User page.
Most of these fields are self-explanatory. Be aware that the user interface is subject to policy validation, as described in "Using Policies to Validate Data". So, for example, the email address must be a valid email address, and the password must comply with the password validation settings that appear if you enter an invalid password.
In a similar way, you can create accounts for other managed objects.
You can review new managed object settings in the managed.json
file of your project-dir/conf
directory.
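You can also create users over the REST interface. The following is a minimal sketch, assuming the default port 8080 and sample attribute values; the password must still satisfy the configured policy:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "userName": "bjensen",
   "givenName": "Barbara",
   "sn": "Jensen",
   "mail": "bjensen@example.com",
   "password": "Passw0rd"
 }' \
 "http://localhost:8080/openidm/managed/user?_action=create"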
In the following procedures, you will learn how to update and how to delete user accounts:
Log in to the Admin UI at https://localhost:8443/admin as an administrative user.
Click Manage > User.
Click the Username of the user that you want to update.
On the profile page for the user, modify the fields you want to change and click Update.
The user account is updated in the OpenIDM repository.
Log in to the Admin UI at https://localhost:8443/admin as an administrative user.
Click Manage > User.
Select the checkbox next to the desired Username.
Click the Delete Selected button.
Click OK to confirm the deletion.
The user is deleted from the internal repository.
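The equivalent REST request removes the object directly. This is a sketch; <user-id> is a placeholder for the _id of the user you want to delete:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "If-Match: *" \
 --request DELETE \
 "http://localhost:8080/openidm/managed/user/<user-id>"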
The Admin UI displays the details of the account in the OpenIDM repository (managed/user). When a mapping has been configured between the repository and one or more external resources, you can view details of that account in any external system to which it is linked. As this view is read-only, you cannot update a user record in a linked system from within the Self-Service UI.
By default, implicit synchronization is enabled for
mappings from the managed/user
repository to any external resource. This means that
when you update a managed object, any mappings defined in the
sync.json
file that have the managed object as the
source are automatically executed to update the target system. You can see
these changes in the Linked Systems section of a user's profile.
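For example, the following hedged sketch patches a single attribute of a managed user over REST; if a mapping in sync.json has managed/user as its source, the change is propagated automatically to the mapped target system. The user ID and telephone number are placeholders:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header "If-Match: *" \
 --request PATCH \
 --data '[{"operation": "replace", "field": "/telephoneNumber", "value": "555-0199"}]' \
 "http://localhost:8080/openidm/managed/user/<user-id>"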
To view a user's linked accounts:
Log in to the Admin UI at https://localhost:8443/admin.
Click Manage > User, select a Username, then click the Linked Systems tab.
The Linked Systems panel indicates the external mapped resource or resources.
Select the resource in which you want to view the account, from the Linked Resource list.
The user record in the linked resource is displayed.
4.5. Configuring Account Relationships
This section will help you set up relationships between human users and devices, such as IoT devices.
You'll set this up with the help of the Admin UI schema editor, which
allows you to create and customize managed objects such as
Users
and Devices
as well as
relationships between managed objects. You can also create these
options in the managed.json
file for your project.
When complete, you will have users who can own multiple unique devices. If you try to assign the same device to more than one owner, OpenIDM will stop you with an error message.
This section assumes that you have started OpenIDM with "Sample 2b - LDAP Two Way" in the Samples Guide.
After you have started OpenIDM with "Sample 2b", go through the following procedures, where you will:
Set up a managed object named Device, with unique serial numbers for each device. You can configure the searchable schema of your choice. See "Configuring Schema for a Device" for details.
Set up a relationship from the Device to the User managed object. See "Configure a Relationship from the Device Managed Object" for details.
Set up a two-way relationship from the User to the Device managed object. See "Configure a Relationship From the User Managed Object" for details.
Demonstrate the relationships. Assign users to devices. See what happens when you try to assign a device to more than one user. For details, see "Demonstrating an IoT Relationship".
This procedure illustrates how you might set up a Device managed object, with schema that configures relationships to users.
After you configure the schema for the Device managed object, you can
collect information such as model, manufacturer, and serial number for each
device. In the next procedure, you'll set up an owner
schema property that includes a relationship to the User managed object.
Click Configure > Managed Objects > New Managed Object. Give that object an appropriate IoT name. For this procedure, specify Device. You should also select a managed object icon. Click Save.
You should now see five tabs: Details, Schema, Scripts, Properties, and Preferences. Select the Schema tab.
The items that you can add to the new managed object depend on the associated properties.
The Schema tab includes the Readable Title of the device; in this case, set it to Device.
You can add schema properties as needed in the UI. Click the Property button, and include the following properties: model, serialNumber, manufacturer, description, and category.
Initially, the new property is named Property 1. As soon as you enter a property name such as model, OpenIDM changes that property name accordingly.
To support UI-based searches of devices, set the Searchable option to true for all configured schema properties, unless a property includes extensive text. In that case, set Searchable to false, as you should for the description property.
The Searchable option is used in the data grid for the given object. When you click Manage > Device (or another object such as User), OpenIDM displays the searchable properties for that object.
After you save the properties for the new managed object type, OpenIDM saves those entries in the managed.json file in the project-dir/conf directory.
Now click Manage > Device > New Device, and add a device.
You can continue adding new devices to the managed object, or reconcile that managed object with another data store. The other procedures in this section assume that you have set up several devices.
When complete, you can review the list of devices. Based on this procedure, click Manage > Device.
Select one of the listed devices. You'll note that the label for the device in the Admin UI matches the name of the first property of the device.
You can change the order of schema properties for the Device managed object by clicking Configure > Managed Objects > Device > Schema, and selecting the property that you want to move up or down the list.
Alternatively, you can make the same changes to this (or any managed object schema) in the
managed.json
file for your project.
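Because these schema properties are searchable, you can also query devices over the REST interface with a Common REST query filter. This is a sketch under the assumptions of this procedure; the model value is an example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/device?_queryFilter=model+sw+%22Nest%22"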
In this procedure, you will add a property to the schema of the Device managed object.
In the Admin UI, click Configure > Managed Objects > Device > Schema.
Under the Schema tab, add a new property. For this procedure, we call it owner. Unlike other schema properties, set the Searchable property to false.
Scroll down to Validation Policies; click the Type box and select Relationship. This opens additional relationship options.
Set up a Target Property Name of IoT_Devices. You'll use that property name in the next procedure, "Configure a Relationship From the User Managed Object".
Be sure to set the Two-way Relationship and Validate Relationship options to true, which ensures that each device is associated with no more than one user.
Scroll down and add a Resource Collection. Set up a link to the managed/user resource, with a label that matches the User managed object.
Enable queries of the User managed object by setting Query Filter to true. The Query Filter value for this Device object allows you to identify the user who "owns" each device. For more information, see "Common Filter Expressions".
Set up Display Properties from managed/user properties, such as those used in "Sample 2b - LDAP Two Way" in the Samples Guide.
Click Save to exit the Resource Collection pop-up. Click Save again in the Manage Device window.
In this procedure, you will configure an existing User Managed Object with schema to match what was created in "Configure a Relationship from the Device Managed Object".
With the settings you create, OpenIDM supports a relationship between a single user and multiple devices. In addition, this procedure prevents multiple users from "owning" any single device.
In the Admin UI, click Configure > Managed Objects > User > Schema.
Under the Schema tab, add a new property, called IoT_Devices.
Make sure the searchable property is set to false, to minimize confusion in the relationship. Otherwise, you'll see every device owned by every user, when you select Manage > User.
For validation policies, you'll set up an array with a relationship. Note how the reverse property name matches the property that you configured in "Configure a Relationship from the Device Managed Object".
Be sure to set the Two-way Relationship and Validate Relationship options to true, which ensures that no more than one user is associated with a specific device.
Scroll down to Resource Collection, and add references to the managed/device resource.
Enter true in the Query Filter text box. In this relationship, OpenIDM reads all information from the managed/device managed object, with information from the device fields and sort keys that you configured in "Configure a Relationship from the Device Managed Object".
This procedure assumes that you have already taken the steps described in the previous procedures in this section, specifically, "Configuring Schema for a Device", "Configure a Relationship from the Device Managed Object", and "Configure a Relationship From the User Managed Object".
This procedure also assumes that you started OpenIDM with "Sample 2b - LDAP Two Way" in the Samples Guide, and have reconciled to set up users.
From the Admin UI, click Manage > User. Select a user, and click the IoT Devices tab. You can select any of the devices that you added in "Configuring Schema for a Device".
Alternatively, try to assign a device to an owner. To do so, click Manage > Device, and select a device. You'll see either an Add Owner or Update Owner button, which allows you to assign the device to a specific user.
If you try to assign a device that is already assigned to a different user, you'll get the following message: Conflict with Existing Relationship.
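You can reproduce this assignment over REST by patching the device's owner relationship. In this hedged sketch, both IDs are placeholders; because the relationship is validated, OpenIDM rejects the request with a conflict error if the device already belongs to another user:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --header "If-Match: *" \
 --request PATCH \
 --data '[{"operation": "replace", "field": "/owner", "value": {"_ref": "managed/user/<user-id>"}}]' \
 "http://localhost:8080/openidm/managed/device/<device-id>"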
4.6. Customizing the UI
OpenIDM allows you to customize both the Admin and Self-Service UIs. When you install OpenIDM, you can find the default UI configuration files in two directories:
Admin UI:
openidm/ui/admin/default
Self-Service UI:
openidm/ui/selfservice/default
OpenIDM looks for custom themes and templates in the following directories:
Admin UI:
openidm/ui/admin/extension
Self-Service UI:
openidm/ui/selfservice/extension
Before starting the customization process, you should create these directories. If you are running UNIX/Linux, the following commands create a copy of the appropriate subdirectories:
$ cd /path/to/openidm/ui
$ cp -r selfservice/default/. selfservice/extension
$ cp -r admin/default/. admin/extension
OpenIDM also includes templates that may help, in two other directories:
Admin UI:
openidm/ui/admin/default/templates
Self-Service UI:
openidm/ui/selfservice/default/templates
If you want to customize workflows in the UI, see "Managing User Access to Workflows".
4.7. Changing the UI Theme
You can customize the theme of the user interface. OpenIDM uses the Bootstrap framework. You can download and customize the OpenIDM UI with the Bootstrap themes of your choice. OpenIDM is also configured with the Font Awesome CSS toolkit.
Note
If you use Brand Icons from the Font Awesome CSS Toolkit, be aware of the following statement:
All brand icons are trademarks of their respective owners. The use of these trademarks does not indicate endorsement of the trademark holder by ForgeRock, nor vice versa.
4.7.1. OpenIDM UI Themes and Bootstrap
You can configure a few features of the OpenIDM UI in the
ui-themeconfig.json
file in your project's
conf/
subdirectory. However, to change most
theme-related features of the UI, you must copy target files to the
appropriate extension
subdirectory, and then modify
them as discussed in "Customizing the UI".
The default configuration files for the Admin and Self-Service UIs are identical for theme configuration.
By default, the UI reads the stylesheets and images from the respective openidm/ui/function/default directories. Do not modify the files in these directories. Your changes may be overwritten the next time you update or even patch your system.
To customize your UI, first set up matching subdirectories for your system
(openidm/ui/admin/extension
and
openidm/ui/selfservice/extension
). For example,
assume you want to customize colors, logos, and so on.
You can set up a new theme, primarily through custom Bootstrap CSS
files, in appropriate extension/
subdirectories, such
as openidm/ui/selfservice/extension/libs
and
openidm/ui/selfservice/extension/css
.
You may also need to update the "stylesheets"
listing in
the ui-themeconfig.json
file for your project, in the
project-dir/conf
directory.
"stylesheets" : [ "css/bootstrap-3.4.1-custom.css", "css/structure.css", "css/theme.css" ],
You can find these stylesheets in the /css subdirectory.
bootstrap-3.4.1-custom.css: Includes custom settings that you can get from various Bootstrap configuration sites, such as the Bootstrap Customize and Download website. You may find the ForgeRock version of this in the config.json file in the ui/selfservice/default/css/common/structure/ directory.
structure.css: Supports configuration of structural elements of the UI.
theme.css: Includes customizable options for UI themes such as colors, buttons, and navigation bars.
If you want to set up custom versions of these files, copy them to the
extension/css
subdirectories.
4.7.2. Changing the Default Logo
For the Self-Service UI, you can find the default logo in the
openidm/ui/selfservice/default/images
directory. To
change the default logo, copy desired files to the
openidm/ui/selfservice/extension/images
directory.
You should see the changes after refreshing your browser.
To specify a different file name, or to control the size and other properties of the image file that is used for the logo, adjust the logo property in the UI theme configuration file for your project (project-dir/conf/ui-themeconfig.json).
The following change to the UI theme configuration file points to an image file named example-logo.png, in the openidm/ui/selfservice/extension/images directory:
... "loginLogo" : { "src" : "images/example-logo.png", "title" : "Example.com", "alt" : "Example.com", "height" : "104px", "width" : "210px" }, ...
Refresh your browser window for the new logo to appear.
4.7.3. Changing the Language of the UI
Currently, the UI is provided only in US English. You can translate the UI and specify that your own locale is used. The following example shows how to translate the UI into French:
Assuming you set up custom extension subdirectories, as described in "Customizing the UI", you can copy the default (en) locale to a new (fr) subdirectory as follows:
$ cd /path/to/openidm/ui/selfservice/extension/locales
$ cp -R en fr
The new locale (fr) now contains the default translation.json file:
$ ls fr/
translation.json
Translate the values of the properties in the fr/translation.json file. Do not translate the property names. For example:
...
"UserMessages" : {
    "changedPassword" : "Mot de passe a été modifié",
    "profileUpdateFailed" : "Problème lors de la mise à jour du profil",
    "profileUpdateSuccessful" : "Profil a été mis à jour",
    "userNameUpdated" : "Nom d'utilisateur a été modifié",
....
Change the UI configuration to use the new locale by setting the value of the lang property in the project-dir/conf/ui-configuration.json file, as follows:
"lang" : "fr",
Refresh your browser window, and OpenIDM applies your change.
You can also change the labels for accounts in the UI. To do so,
navigate to the Schema Properties
for the managed object
to be changed.
To change the labels for user accounts, navigate to the Admin UI. Click Configure > Managed Objects > User, and scroll down to Schema.
Under Schema Properties, select a property and modify the Readable Title. For example, you can modify the Readable Title for userName to a label in another language, such as Nom d'utilisateur.
4.7.4. Creating a Project-Specific UI Theme
You can create specific UI themes for different projects and then point a particular UI instance to use a defined theme on startup. To create a complete custom theme, follow these steps:
Shut down the OpenIDM instance, if it is running. In the OSGi console, type:
-> shutdown
Copy the entire default Self-Service UI theme to an accessible location. For example:
$ cd /path/to/openidm/ui/selfservice
$ cp -r default /path/to/openidm/new-project-theme
If desired, repeat the process with the Admin UI; just remember to copy files to a different directory:
$ cd /path/to/openidm/ui/admin
$ cp -r default /path/to/openidm/admin-project-theme
In the copied theme, modify the required elements, as described in the previous sections. Note that nothing is copied to the extension folder in this case; changes are made in the copied theme.
In the conf/ui.context-selfservice.json file, modify the values for defaultDir and extensionDir to point to the directory with your new-project-theme:
{
    "enabled" : true,
    "urlContextRoot" : "/",
    "defaultDir" : "&{launcher.install.location}/ui/selfservice/default",
    "extensionDir" : "&{launcher.install.location}/ui/selfservice/extension",
    "responseHeaders" : {
        "X-Frame-Options" : "DENY"
    }
}
If you want to repeat the process for the Admin UI, make parallel changes to the project-dir/conf/ui.context-admin.json file.
Restart OpenIDM.
$ cd /path/to/openidm
$ ./startup.sh
Relaunch the UI in your browser. The UI is displayed with the new custom theme.
4.8. Resetting User Passwords
When working with end users, administrators frequently have to reset their passwords. OpenIDM allows you to do so directly, through the Admin UI. Alternatively, you can configure an external system for that purpose.
4.8.1. Resetting a User Password Through the Admin UI
From the Admin UI, you can reset the passwords of accounts in the internal Managed User datastore. If you haven't already done so, start by configuring the outbound email service, as described in "Configuring Outbound Email". Then take the following steps in the Admin UI:
Select Manage > User. Choose a specific user from the list that appears.
Select the Password tab for that user; you should see a Reset Password option.
When you select Reset Password, OpenIDM by default generates a random 16-character password with at least one of each of the following types of characters:
Uppercase letters:
A-Z
Lowercase letters:
a-z
Integers:
0-9
Special characters:
: ; < = > ? @
OpenIDM then uses its configured outgoing email service to send that password to the specified user. For example, user mike might receive an email message with the following subject line:
Your password has been reset by an administrator
along with the following message:
mike's new password is: <generated_password>
If desired, you can configure that message (along with password complexity)
by modifying the following code block in your project's
managed.json
file:
"actions" : { "resetPassword": { "type": "text/javascript", "source": "require('ui/resetPassword').sendMail(object, subject, message, passwordRules, passwordLength);", "globals": { "subject": "Your password has been reset by an administrator", "message": "<html><body><p>{{object.userName}}'s new password is: {{password}}</p></body></html>", "passwordRules": [ { "rule": "UPPERCASE", "minimum": 1 }, { "rule": "LOWERCASE", "minimum": 1 }, { "rule": "INTEGERS", "minimum": 1 }, { "rule": "SPECIAL", "minimum": 1 } ], "passwordLength": 16 } }
4.8.2. Using an External System for Password Reset
By default, the Password Reset mechanism is handled internally, in OpenIDM. You can reroute Password Reset in the event that a user has forgotten their password, by specifying an external URL to which Password Reset requests are sent. Note that this URL applies to the Password Reset link on the login page only, not to the security data change facility that is available after a user has logged in.
To set an external URL to handle Password Reset, set the passwordResetLink parameter in the UI configuration file (conf/ui-configuration.json). The following example sets the passwordResetLink to https://accounts.example.com/account/reset-password:
"passwordResetLink" : "https://accounts.example.com/account/reset-password"
The passwordResetLink
parameter takes either an empty
string as a value (which indicates that no external link is used) or a full
URL to the external system that handles Password Reset requests.
Note
External Password Reset and security questions for internal Password Reset
are mutually exclusive. Therefore, if you set a value for the
passwordResetLink
parameter, users will not be prompted
with any security questions, regardless of the setting of the
securityQuestions
parameter.
4.9. Providing a Logout URL to External Applications
By default, a UI session is invalidated when a user clicks on the Log out link. In certain situations your external applications might require a distinct logout URL to which users can be routed, to terminate their UI session.
The logout URL is #logout, appended to the UI URL, for example, https://localhost:8443/#logout/.
The logout URL effectively performs the same action as clicking on the Log out link of the UI.
4.10. Changing the UI Path
By default, the Self-Service UI is registered at the root context and is
accessible at the URL https://localhost:8443. To specify
a different URL, edit the
project-dir/conf/ui.context-selfservice.json
file, setting the urlContextRoot
property to the new URL.
For example, to change the URL of the Self-Service UI to https://localhost:8443/exampleui, edit the file as follows:
"urlContextRoot" : "/exampleui",
Alternatively, to change the Self-Service UI URL in the Admin UI, follow these steps:
Log in to the Admin UI.
Select Configure > System Preferences, and select the Self-Service UI tab.
Specify the new context route in the Relative URL field.
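To confirm the change, request the new path; any successful response indicates that the UI servlet is registered at the new context root. The --insecure flag is only needed if you are still using the default self-signed certificate:
$ curl --insecure --head "https://localhost:8443/exampleui/"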
4.11. API Explorer
OpenIDM includes an API Explorer, an implementation of the OpenAPI Initiative Specification, also known as Swagger.
To access the API Explorer, log into the Admin UI, select the question mark in the upper right corner, and choose API Explorer from the drop-down menu.
Note
If the API Explorer does not appear, you may need to enable it in your project's conf/boot/boot.properties file, specifically with the openidm.apidescriptor.enabled property. For more information, see "Disable the API Explorer".
In the API Explorer, you'll find several navigable endpoints, including:
/managed/assignment
/managed/role
/managed/user
Each endpoint lists supported HTTP methods such as POST and GET. When custom actions are available, the API Explorer lists them as HTTP Method /path/to/endpoint?_action=something.
To see how this works, navigate to the User endpoint, select List Operations, and choose the GET option associated with the /managed/user#_query_id_query-all endpoint. In this case, the defaults are set, and all you need to do is select the Try it out! button. The output you see includes:
The REST call, in the form of the curl command.
The request URL, which specifies the endpoint and associated parameters.
The response body, which contains the data that you requested.
The HTTP response code; if everything works, this should be 200.
Response headers.
If you're familiar with "Sample 2b - LDAP Two Way" in the Samples Guide, you may recognize the output as users in the OpenIDM managed user object, after reconciliation.
Tip
If you see a 401 Access Denied code in the response body, your OpenIDM session may have timed out, and you'll have to log into the Admin UI again.
For details on common ForgeRock REST parameters, see "About ForgeRock Common REST".
You'll see examples of REST calls throughout ForgeRock OpenIDM documentation. You can now try these calls with the API Explorer.
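For example, the query behind the GET option described above can also be issued directly with curl. This sketch assumes the predefined query-all query and the default administrative credentials:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all"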
You can generate an OpenAPI-compliant descriptor of the REST API to provide
API reference documentation specific to your deployment. The following
command saves the API descriptor of the managed/user endpoint to a file named my-openidm-api.json:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 --output "my-openidm-api.json" \
 "http://localhost:8080/openidm/managed/user?_api"
For information about publishing reference documentation using the API descriptor, see "To Publish OpenAPI Documentation".
4.12. Disabling the UI
The UI is packaged as a separate bundle that can be disabled in the
configuration before server startup. To disable the registration of the UI
servlet, edit the
project-dir/conf/ui.context-selfservice.json
file, setting the enabled
property to false:
"enabled" : false,
Chapter 5. Configuring User Self-Service
The following sections describe how you can configure three functions of user self-service: User Registration, Forgotten Username, and Password Reset.
User Registration: You can configure limited access that allows anonymous users to create their own accounts. To aid in this process, you can configure reCAPTCHA, email validation, and KBA questions.
If you have configured one or more social identity providers, as described in "Configuring Social ID Providers", you can enable the use of those providers for User Registration. You can also configure the user terms and conditions of your choice, typically a license and/or a privacy agreement.
Forgotten Username: You can set up OpenIDM to allow users to recover forgotten usernames via their email addresses or first and last names. OpenIDM can then display that username on the screen, and/or email such information to that user.
Password Reset: You can set up OpenIDM to verify user identities via KBA questions. If email configuration is included, OpenIDM would email a link that allows users to reset their passwords.
If you enable email functionality, the one solution that works for all three self-service functions is to configure an outgoing email service for OpenIDM, as described in "Configuring Outbound Email".
Note
If you disable email validation only for user registration, you should perform one of the following actions:
Disable validation for mail in the managed user schema. Click Configure > Managed Objects > User > Schema. Under Schema Properties, click Mail, scroll down to Validation Policies, and set Required to false.
Configure the User Registration template to support user email entries. To do so, use "Customizing the User Registration Page", and substitute mail for employeeNum.
Without these changes, users who try to register accounts will see a Forbidden Request Error.
You can configure user self-service through the UI and through configuration files.
In the UI, log into the Admin UI. You can enable these features when you click Configure > User Registration, Configure > Forgotten Username, and Configure > Password Reset.
In the command-line interface, copy the following files from samples/misc to your working project-dir/conf directory:
User Registration: selfservice-registration.json
Forgotten username: selfservice-username.json
Password reset: selfservice-reset.json
Examine the ui-configuration.json file in the same directory. You can activate or deactivate User Registration and Password Reset by changing the values associated with the selfRegistration and passwordReset properties:
{
    "configuration" : {
        "selfRegistration" : true,
        "passwordReset" : true,
        "forgotUsername" : true,
...
For each of these functions, you can configure several options, including:
- reCAPTCHA
Google reCAPTCHA helps prevent bots from registering users or resetting passwords on your system. For Google documentation, see Google reCAPTCHA. For directions on how to configure reCAPTCHA for user self-service, see "Configuring Google reCAPTCHA".
- Email Validation / Email Username
You can configure the email messages that OpenIDM sends to users, as a way to verify identities for user self-service. For more information, see "Configuring Self-Service Email Messages".
If you configure email validation, you must also configure an outgoing email service in OpenIDM. To do so, click Configure > System Preferences > Email. For more information, read "Configuring Outbound Email".
- User Details
You can modify the Identity Email Field associated with user registration; by default, it is set to mail.
- User Query
When configuring password reset and forgotten username functionality, you can modify the fields that a user is allowed to query. If you do, you may need to modify the HTML templates that appear to users who request such functionality. For more information, see "Modifying Valid Query Fields".
- Valid Query Fields
Property names that you can use to help users find their usernames or verify their identity, such as userName, mail, or givenName.
- Identity ID Field
Property name associated with the User ID, typically _id.
- Identity Email Field
Property name associated with the user email field, typically something like mail or email.
- Identity Service URL
The path associated with the identity data store, such as managed/user.
- KBA Stage
You can modify the list of Knowledge-based Authentication (KBA) questions in the conf/selfservice.kba.json file. Users can then select the questions they will use to help them verify their own identities. For directions on how to configure KBA questions, see "Configuring Self-Service Questions". For User Registration, you cannot configure these questions in the Admin UI.
- Password Reset Form
You can change the Password Field for the Password Reset feature to specify a relevant password property such as password, pwd, or userPassword. Make sure the property you select matches the canonical form for user passwords.
- Snapshot Token
OpenIDM User Self-Service uses JWT tokens, with a default token lifetime of 1800 seconds.
You can reorder how OpenIDM works with relevant self-service options, specifically reCAPTCHA, KBA stage questions, and email validation. For example, you might configure Password Reset so that users go through reCAPTCHA, followed by email validation, and then answer any configured KBA questions.
To reorder the steps, either "drag and drop" the options in the Admin UI, or
change the sequence in the associated configuration file, in the
project-dir/conf
directory.
OpenIDM generates a token for each process. For example, users who forget their usernames and passwords go through two steps:
The user goes through the Forgotten Username process, gets a JWT token, and has the token lifetime (default = 1800 seconds) to get to the next step in the process.
With username in hand, that user may then start the Password Reset process. That user gets a second JWT token, with the token lifetime configured for that process.
5.1. Common Configuration Details
This section describes configuration details common to OpenIDM Self-Service features: User Registration, Password Reset, and Forgotten Username.
5.1.1. Configuring Self-Service Email Messages
When a user requests a new account, a Password Reset, or a reminder of their username, you can configure OpenIDM to send that user an email message, to confirm the request.
You can configure that email message either through the UI or the
associated configuration files, as illustrated in the following excerpt of
the selfservice-registration.json
file.
{ "stageConfigs" : { { "name" : "emailValidation", "identityEmailField" : "mail", "emailServiceUrl" : "external/email", "from" : "admin@example.net", "subject" : "Register new account", "mimeType" : "text/html", "subjectTranslations" : { "en" : "Register new account", "fr" : "Créer un nouveau compte" }, "messageTranslations" : { "en" : "<h3>This is your registration email.</h3><h4><a href=\"%link%\">Email verification link</a></h4>", "fr" : "<h3>Ceci est votre mail d'inscription.</h3><h4><a href=\"%link%\">Lien de vérification email</a></h4>", "verificationLinkToken" : "%link%", "verificationLink" : "https://localhost:8443/#register/" } ...
Note the two languages in the subjectTranslations and messageTranslations code blocks. You can add translations for languages other than US English (en) and French (fr). Use the appropriate two-letter code based on ISO 639. End users will see the message in the language configured in their web browsers.
You can set up similar emails for password reset and forgotten username
functionality, in the selfservice-reset.json
and
selfservice-username.json
files. For templates,
see the /path/to/openidm/samples/misc
directory.
One difference between User Registration and Password Reset is
in the verificationLink
; for Password Reset,
the corresponding URL is:
... "verificationLink" : "https://localhost:8443/#passwordReset/" ...
Substitute the IP address or FQDN where you've deployed OpenIDM for localhost.
5.1.2. Configuring Google reCAPTCHA
To use Google reCAPTCHA, you will need a Google account and your domain
name (RFC 2606-compliant URLs such as localhost
and
example.com
are acceptable for test purposes). Google
then provides a Site key and a Secret key that you can include in the
self-service function configuration.
For example, you can add the following reCAPTCHA code block (with appropriate keys as defined by Google) into the selfservice-registration.json, selfservice-reset.json, or selfservice-username.json configuration files.
{ "stageConfigs" : [ { "name" : "captcha", "recaptchaSiteKey" : "< Insert Site Key Here >", "recaptchaSecretKey" : "< Insert Secret Key Here >", "recaptchaUri" : "https://www.google.com/recaptcha/api/siteverify" },
You may also add the reCAPTCHA keys through the UI.
5.1.3. Configuring Self-Service Questions
OpenIDM uses Knowledge-based Authentication (KBA) to help users prove
their identity when they perform the noted functions. In other words,
they get a choice of questions configured in the following file: selfservice.kba.json.
The default version of this file is straightforward:
{ "kbaPropertyName" : "kbaInfo", "questions" : { "1" : { "en" : "What's your favorite color?", "en_GB" : "What's your favorite colour?", "fr" : "Quelle est votre couleur préférée?" }, "2" : { "en" : "Who was your first employer?" } } }
You may change or add the questions of your choice, in JSON format.
At this time, OpenIDM supports editing KBA questions only through the noted configuration file. However, individual users can configure their own questions and answers, during the User Registration process.
After a regular user logs into the Self-Service UI, that user can modify, add, and delete KBA questions under the Profile tab.
Note
The Self-Service KBA modules do not preserve the case of the answers when
they hash the value. All answers are first converted to lowercase. If you
intend to pre-populate KBA answer strings by using a mapping, or any other
means that uses the openidm.hash
function or the CLI
secureHash
mechanism, you must provide the KBA string
in lowercase for the value to be matched correctly.
5.1.4. Setting a Minimum Number of Self-Service Questions
In addition, you can set a minimum number of questions that users have to
define to register for their accounts. To do so, open the associated
configuration file, selfservice-registration.json, in your project-dir/conf directory. Look for the code block that starts with kbaSecurityAnswerDefinitionStage:
{ "name" : "kbaSecurityAnswerDefinitionStage", "numberOfAnswersUserMustSet" : 1, "kbaConfig" : null },
In a similar fashion, you can set a minimum number of questions that users
have to answer before OpenIDM allows them to reset their passwords. The
associated configuration file is
selfservice-reset.json
, and the relevant code block
is:
{ "name" : "kbaSecurityAnswerVerificationStage", "kbaPropertyName" : "kbaInfo", "identityServiceUrl" : "managed/user", "numberOfQuestionsUserMustAnswer" : "1", "kbaConfig" : null },
5.1.5. Enabling Social Identities in User Self-Registration
If you've configured a social identity provider as described in "Configuring Social ID Providers", you can enable those providers in the Admin UI. To do so, select Configure > User Registration, and enable the option associated with Social Registration.
When you activate the Social Registration option, that changes one line
in the selfservice-registration.json
file, from:
"name" : "userDetails",
to:
"name" : "socialUserDetails",
5.2. The End User and Commons User Self-Service
When all self-service features are enabled, OpenIDM includes three links on the self-service login page: Reset your password, Register, and Forgot Username?
When the account registration page is used to create an account, OpenIDM creates a managed object in the OpenIDM repository, and applies default policies for managed objects.
Chapter 6. Managing the Repository
OpenIDM stores managed objects, internal users, and configuration objects in a repository. By default, the server uses OrientDB for its internal repository. In production, you must replace OrientDB with a supported JDBC repository, as described in "Installing a Repository For Production" in the Installation Guide.
This chapter describes the JDBC repository configuration, the use of mappings in the repository, and how to configure a connection to the repository over SSL. It also describes how to interact with the repository over the REST interface.
6.1. Understanding the JDBC Repository Configuration File
OpenIDM provides configuration files for each supported JDBC repository, as
well as example configurations for other repositories. These configuration
files are located in the
/path/to/openidm/db/database/conf
directory. The configuration is defined in two files:
datasource.jdbc-default.json, which specifies the connection details to the repository.
repo.jdbc.json, which specifies the mapping between OpenIDM resources and the tables in the repository, and includes a number of predefined queries.
Copy the configuration files for your specific database type to your
project's conf/
directory.
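For example, assuming a MySQL repository, the copy might look like the following; the db/mysql/conf path follows the pattern described above:
$ cd /path/to/openidm
$ cp db/mysql/conf/datasource.jdbc-default.json conf/
$ cp db/mysql/conf/repo.jdbc.json conf/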
6.1.1. Understanding the JDBC Connection Configuration File
The default database connection configuration file for a MySQL database follows:
{ "driverClass" : "com.mysql.jdbc.Driver", "jdbcUrl" : "jdbc:mysql://&{openidm.repo.host}:&{openidm.repo.port}/openidm?allowMultiQueries=true&characterEncoding=utf8", "databaseName" : "openidm", "username" : "openidm", "password" : "openidm", "connectionTimeout" : 30000, "connectionPool" : { "type" : "hikari", "minimumIdle" : 20, "maximumPoolSize" : 50 } }
The configuration file includes the following properties:
driverClass, jndiName, or jtaName
Depending on the mechanism you use to acquire the data source, set one of these properties:
"driverClass" : string
To use the JDBC driver manager to acquire a data source, set this property, as well as "jdbcUrl", "username", and "password". The driver class must be the fully qualified class name of the database driver to use for your database.
Using the JDBC driver manager to acquire a data source is the most likely option, and the only one supported "out of the box". The remaining options in the sample repository configuration file assume that you are using a JDBC driver manager.
Example:
"driverClass" : "com.mysql.jdbc.Driver"
"jndiName" : string
If you use JNDI to acquire the data source, set this property to the JNDI name of the data source.
This option might be relevant if you want to run OpenIDM inside your own web container.
Example:
"jndiName" : "jdbc/my-datasource"
"jtaName" : string
If you use an OSGi service to acquire the data source, set this property to a stringified version of the OsgiName.
This option would only be relevant in a highly customized deployment, for example, if you wanted to develop your own connection pool.
Example:
"jtaName" : "osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/openidm)"
jdbcUrl
The connection URL to the JDBC database. The URL should include all of the parameters required by your database. For example, to specify the encoding in MySQL, use 'characterEncoding=utf8'.
Specify the values for openidm.repo.host and openidm.repo.port in one of the following ways:
Set the values in your project's conf/system.properties or conf/boot/boot.properties file, for example:
openidm.repo.host = localhost
openidm.repo.port = 3306
Set the properties in the OPENIDM_OPTS environment variable and export that variable before startup. You must include the JVM memory options when you set this variable. For example:
$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dopenidm.repo.host=localhost -Dopenidm.repo.port=3306"
$ ./startup.sh
Executing ./startup.sh...
Using OPENIDM_HOME:   /path/to/openidm
Using PROJECT_HOME:   /path/to/openidm
Using OPENIDM_OPTS:   -Xmx1024m -Xms1024m -Dopenidm.repo.host=localhost -Dopenidm.repo.port=3306
Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties
Using boot properties at /path/to/openidm/conf/boot/boot.properties
-> OpenIDM version "5.0.0"
OpenIDM ready
databaseName
The name of the database to which OpenIDM connects. By default, this is openidm.
username
The username with which to access the JDBC database.
password
The password with which to access the JDBC database. OpenIDM automatically encrypts clear string passwords. To replace an existing encrypted value, replace the whole crypto-object value, including the brackets, with a string of the new password.
connectionTimeout
The period of time, in milliseconds, after which OpenIDM should consider an attempted connection to the database to have failed. The default period is 30000 milliseconds (30 seconds).
connectionPool
Database connection pooling configuration. The default connection pool library is Hikari ("type" : "hikari").
OpenIDM uses the default Hikari configuration, except for the following parameters. You might need to adjust these parameters, according to your database workload:
minimumIdle
This property controls the minimum number of idle connections that Hikari maintains in the connection pool. If the number of idle connections drops below this value, Hikari attempts to add additional connections.
By default, Hikari runs as a fixed-size connection pool, that is, this property is not set. The connection configuration files provided with OpenIDM set the minimum number of idle connections to 20.
maximumPoolSize
This property controls the maximum number of connections to the database, including idle connections and connections that are being used.
By default, Hikari sets the maximum number of connections to 10. The connection configuration files provided with OpenIDM set the maximum number of connections to 50.
For information about the Hikari configuration parameters, see the Hikari Project Page.
OpenIDM also supports the BoneCP connection pool library. To use BoneCP, change the configuration as follows:
"connectionPool" : { "type" : "bonecp" }
OpenIDM uses the default BoneCP configuration, except for the following parameters. You might need to adjust these parameters, according to your database workload:
partitionCount
The partition count determines the lock segmentation in the connection pool. Each incoming connection request acquires a connection from a pool that has thread affinity. Threads are dispatched to the appropriate lock by using a value of threadId % partitionCount. A partition count that is greater than 1 protects the connection pool with more than a single lock, thereby reducing lock contention.
By default, BoneCP creates a single partition. The JDBC connection configuration files provided with OpenIDM set the partition count to 4.
maxConnectionsPerPartition
The maximum number of connections to create per partition. The maximum number of database connections is equal to partitionCount * maxConnectionsPerPartition. BoneCP does not create all these connections at once, but starts off with minConnectionsPerPartition and gradually increases the number of connections as required.
By default, BoneCP creates a maximum of 20 connections per partition. The JDBC connection configuration files provided with OpenIDM set the maximum connections per partition to 25.
minConnectionsPerPartition
The number of connections to start off with, per partition. The minimum number of database connections is equal to partitionCount * minConnectionsPerPartition.
By default, BoneCP starts with a minimum of 1 connection per partition. The JDBC connection configuration files provided with OpenIDM set the minimum connections per partition to 5.
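Putting these parameters together, a BoneCP pool section that applies the values described above might look like the following sketch (an illustration, not a shipped default):
"connectionPool" : {
    "type" : "bonecp",
    "partitionCount" : 4,
    "maxConnectionsPerPartition" : 25,
    "minConnectionsPerPartition" : 5
}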
For more information about the BoneCP configuration parameters, see http://www.jolbox.com/configuration.html.
6.1.2. Understanding the Database Table Configuration
An excerpt from a database table configuration file follows:
{ "dbType" : "MYSQL", "useDataSource" : "default", "maxBatchSize" : 100, "maxTxRetry" : 5, "queries" : {...}, "commands" : {...}, "resourceMapping" : {...} }
The configuration file includes the following properties:
"dbType"
: string, optionalThe type of database. The database type might affect the queries used and other optimizations. Supported database types include MYSQL, SQLSERVER, ORACLE, MS SQL, and DB2.
"useDataSource"
: string, optionalThis option refers to the connection details that are defined in the configuration file, described previously. The default configuration file is named
datasource.jdbc-default.json
. This is the file that is used by default (and the value of the"useDataSource"
is therefore"default"
). You might want to specify a different connection configuration file, instead of overwriting the details in the default file. In this case, set your connection configuration filedatasource.jdbc-name.json
and set the value of"useDataSource"
to whatever name you have used."maxBatchSize"
The maximum number of SQL statements that will be batched together. This parameter allows you to optimize the time taken to execute multiple queries. Certain databases do not support batching, or limit how many statements can be batched. A value of
1
disables batching."maxTxRetry"
The maximum number of times that a specific transaction should be attempted before that transaction is aborted.
"queries"
Enables you to create predefined queries that can be referenced from the configuration. For more information about predefined queries, see "Parameterized Queries". The queries are divided between those for
"genericTables"
and those for"explicitTables"
.The following sample extract from the default MySQL configuration file shows two credential queries, one for a generic mapping, and one for an explicit mapping. Note that the lines have been broken here for legibility only. In a real configuration file, the query would be all on one line.
"queries" : { "genericTables" : { "credential-query" : "SELECT fullobject FROM ${_dbSchema}.${_mainTable} obj INNER JOIN ${_dbSchema}.${_propTable} prop ON obj.id = prop.${_mainTable}_id INNER JOIN ${_dbSchema}.objecttypes objtype ON objtype.id = obj.objecttypes_id WHERE prop.propkey='/userName' AND prop.propvalue = ${username} AND objtype.objecttype = ${_resource}", ... "explicitTables" : { "credential-query" : "SELECT * FROM ${_dbSchema}.${_table} WHERE objectid = ${username} and accountStatus = 'active'", ... } }
Options supported for query parameters include the following:
A default string parameter, for example:
openidm.query("managed/user", { "_queryId": "for-userName", "uid": "jdoe" });
For more information about the query function, see "openidm.query(resourceName, params, fields)".
A list parameter (${list:propName}).
Use this parameter to specify a set of indeterminate size as part of your query. For example:
WHERE targetObjectId IN (${list:filteredIds})
An integer parameter (${int:propName}).
Use this parameter if you need to query for non-string values in the database. This is particularly useful with explicit tables.
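For example, a predefined query might combine the list and integer parameter syntaxes as in the following sketch. The query name, table, and column names here are hypothetical:
"query-active-users-by-ids" : "SELECT * FROM ${_dbSchema}.${_table} WHERE objectid IN (${list:ids}) AND logincount > ${int:minLogins}"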
"commands"
Specific commands configured for to managed the database over the REST interface. Currently, only two default commands are included in the configuration:
purge-by-recon-expired
purge-by-recon-number-of
Both of these commands assist with removing stale reconciliation audit information from the repository, and preventing the repository from growing too large. For more information about repository commands, see "Running Queries and Commands on the Repository".
"resourceMapping"
Defines the mapping between OpenIDM resource URIs (for example,
managed/user
) and JDBC tables. The structure of the resource mapping is as follows:"resourceMapping" : { "default" : { "mainTable" : "genericobjects", "propertiesTable" : "genericobjectproperties", "searchableDefault" : true }, "genericMapping" : {...}, "explicitMapping" : {...} }
The default mapping object represents a default generic table in which any resource that does not have a more specific mapping is stored.
The generic and explicit mapping objects are described in the following section.
6.2. Using Explicit or Generic Object Mapping With a JDBC Repository
For JDBC repositories, there are two ways of mapping OpenIDM objects to the database tables:
Generic mapping, which allows arbitrary objects to be stored without special configuration or administration.
Explicit mapping, which allows for optimized storage and queries by explicitly mapping objects to tables and columns in the database.
These two mapping strategies are discussed in the following sections.
6.2.1. Using Generic Mappings
Generic mapping speeds up development, and can make system maintenance more flexible by providing a more stable database structure. However, generic mapping can have a performance impact and does not take full advantage of the database facilities (such as validation within the database and flexible indexing). In addition, queries can be more difficult to set up.
In a generic table, the entire object content is stored in a single
large-character field named fullobject
in the
mainTable
for the object. To search on specific fields,
you can read them by referring to them in the corresponding
properties table
for that object. The disadvantage of
generic objects is that, because every property you might like to filter by
is stored in a separate table, you must join to that table each time you need
to filter by anything.
The following diagram shows a pared down database structure for the default generic table, and indicates the relationship between the main table and the corresponding properties table for each object.
These separate tables can make the query syntax particularly complex. For example, a simple query to return user entries based on a user name would need to be implemented as follows:
SELECT fullobject
FROM ${_dbSchema}.${_mainTable} obj
    INNER JOIN ${_dbSchema}.${_propTable} prop ON obj.id = prop.${_mainTable}_id
    INNER JOIN ${_dbSchema}.objecttypes objtype ON objtype.id = obj.objecttypes_id
WHERE prop.propkey='/userName'
    AND prop.propvalue = ${uid}
    AND objtype.objecttype = ${_resource}
The query can be broken down as follows:
Select the full object from the main table:
SELECT fullobject FROM ${_dbSchema}.${_mainTable} obj
Join to the properties table and locate the object with the corresponding ID:
INNER JOIN ${_dbSchema}.${_propTable} prop ON obj.id = prop.${_mainTable}_id
Join to the object types table to restrict returned entries to objects of a specific type. For example, you might want to restrict returned entries to managed/user objects, or managed/role objects:
INNER JOIN ${_dbSchema}.objecttypes objtype ON objtype.id = obj.objecttypes_id
Filter records by the userName property, where the userName is equal to the specified uid and the object type is the specified type (in this case, managed/user objects):
WHERE prop.propkey='/userName' AND prop.propvalue = ${uid} AND objtype.objecttype = ${_resource}
The value of the uid field is provided as part of the query call, for example:
openidm.query("managed/user", { "_queryId": "for-userName", "uid": "jdoe" });
Tables for user-definable objects use a generic mapping by default.
The following sample generic mapping object illustrates how managed/ objects are stored in a generic table:
"genericMapping" : { "managed/*" : { "mainTable" : "managedobjects", "propertiesTable" : "managedobjectproperties", "searchableDefault" : true, "properties" : { "/picture" : { "searchable" : false } } } },
mainTable (string, mandatory)
Indicates the main table in which data is stored for this resource.
The complete object is stored in the fullobject column of this table. The table includes an entityType foreign key that is used to distinguish the different objects stored within the table. In addition, the revision of each stored object is tracked in the rev column of the table, enabling multi-version concurrency control (MVCC). For more information, see "Manipulating Managed Objects Programmatically".
propertiesTable (string, mandatory)
Indicates the properties table, used for searches.
The contents of the properties table is a defined subset of the properties, copied from the character large object (CLOB) that is stored in the fullobject column of the main table. The properties are stored in a separate, one-to-many style table. The set of properties stored here is determined by the properties that are defined as searchable.
The stored set of searchable properties makes these values available as discrete rows that can be accessed with SQL queries, specifically, with WHERE clauses. It is not otherwise possible to query specific properties of the full object.
The properties table includes the following columns:
${_mainTable}_id corresponds to the id of the full object in the main table, for example, managedobjects_id, or genericobjects_id.
propkey is the name of the searchable property, stored in JSON pointer format (for example, /mail).
proptype is the data type of the property, for example java.lang.String. The property type is obtained from the Class associated with the value.
propvalue is the value of the property, extracted from the full object that is stored in the main table. Regardless of the property data type, this value is stored as a string, so queries against it should treat it as such.
searchableDefault (boolean, optional)
Specifies whether all properties of the resource should be searchable by default. Properties that are searchable are stored and indexed. You can override the default for individual properties in the properties element of the mapping. The preceding example indicates that all properties are searchable, with the exception of the picture property.
For large, complex objects, having all properties searchable implies a substantial performance impact. In such a case, a separate insert statement is made in the properties table for each element in the object, every time the object is updated. Also, because these are indexed fields, the recreation of these properties incurs a cost in the maintenance of the index. You should therefore enable searchable only for those properties that must be used as part of a WHERE clause in a query.
properties
Lists any individual properties for which the searchable default should be overridden.
Note that if an object was originally created with a subset of searchable properties, changing this subset (by adding a new searchable property in the configuration, for example) will not cause the existing values to be updated in the properties table for that object. To add the new property to the properties table for that object, you must update or recreate the object.
6.2.2. Improving Search Performance for Generic Mappings
All properties in a generic mapping are searchable by default. In other
words, the value of the searchableDefault
property is
true
unless you explicitly set it to false. Although
there are no individual indexes in a generic mapping, you can improve search
performance by setting only those properties that you need to search as
searchable
. Properties that are searchable are created
within the corresponding properties table. The properties table exists only
for searches or look-ups, and has a composite index, based on the resource,
then the property name.
The sample JDBC repository configuration files
(db/database/conf/repo.jdbc.json
)
restrict searches to specific properties by setting the
searchableDefault
to false
for
managed/user
mappings. You must explicitly set
searchable
to true for each property that should be
searched. The following sample extract from
repo.jdbc.json
indicates searches restricted to the
userName
property:
"genericMapping" : { "managed/user" : { "mainTable" : "manageduserobjects", "propertiesTable" : "manageduserobjectproperties", "searchableDefault" : false, "properties" : { "/userName" : { "searchable" : true } } } },
With this configuration, OpenIDM creates entries in the properties table
only for userName
properties of managed user objects.
If the global searchableDefault
is set to false,
properties that do not have a searchable attribute explicitly set to true
are not written in the properties table.
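With this mapping in place, a query on the userName property can be resolved against the properties table. The following query-filter call (shown against the default host and port, with an illustrative user name) is an example of a search that benefits from this configuration:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"jdoe"'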
6.2.3. Using Explicit Mappings
Explicit mapping is more difficult to set up and maintain, but can take complete advantage of the native database facilities.
An explicit table offers better performance and simpler queries. There is less work in the reading and writing of data, since the data is all in a single row of a single table. In addition, it is easier to create different types of indexes that apply to only specific fields in an explicit table. The disadvantage of explicit tables is the additional work required in creating the table in the schema. Also, because rows in a table are inherently more simple, it is more difficult to deal with complex objects. Any non-simple key:value pair in an object associated with an explicit table is converted to a JSON string and stored in the cell in that format. This makes the value difficult to use, from the perspective of a query attempting to search within it.
Note that it is possible to have a generic mapping configuration for most
managed objects, and to have an explicit mapping that
overrides the default generic mapping in certain cases. The sample
configuration provided in
/path/to/openidm/db/mysql/conf/repo.jdbc-mysql-explicit-managed-user.json
has a generic mapping for managed objects, but an explicit mapping for
managed user objects.
OpenIDM uses explicit mapping for internal system tables, such as the tables used for auditing.
Depending on the types of usage your system is supporting, you might find that an explicit mapping performs better than a generic mapping. Operations such as sorting and searching (such as those performed in the default UI) tend to be faster with explicitly-mapped objects, for example.
The following sample explicit mapping object illustrates how internal/user objects are stored in an explicit table:
"explicitMapping" : { "internal/user" : { "table" : "internaluser", "objectToColumn" : { "_id" : "objectid", "_rev" : "rev", "password" : "pwd", "roles" : "roles" } }, ... }
<resource-uri> (string, mandatory)
Indicates the URI for the resources to which this mapping applies, for example, "internal/user".
table (string, mandatory)
The name of the database table in which the object (in this case internal users) is stored.
objectToColumn (string, mandatory)
The way in which specific managed object properties are mapped to columns in the table.
The mapping can be a simple one-to-one mapping, for example "userName": "userName", or a more complex JSON map or list. When a column is mapped to a JSON map or list, the syntax is as shown in the following examples:
"messageDetail" : {
    "column" : "messagedetail",
    "type" : "JSON_MAP"
}
or
"roles" : {
    "column" : "roles",
    "type" : "JSON_LIST"
}
Caution
Support for data types in columns is restricted to String
(VARCHAR
in the case of MySQL). If you use a different
data type, such as DATE
or TIMESTAMP
,
your database must attempt to convert from String
to the
other data type. This conversion is not guaranteed to work.
If the conversion does work, the format might not be the same when it is
read from the database as it was when it was saved. For example, your
database might parse a date in the format 12/12/2012
and return the date in the format 2012-12-12
when the
property is read.
6.3. Configuring SSL with a JDBC Repository
To configure SSL with a JDBC repository, you need to import the CA
certificate file for the server into the OpenIDM truststore. That certificate
file could have a name like ca-cert.pem
. If you have a
different genuine or self-signed certificate file, substitute accordingly.
To import the CA certificate file into the OpenIDM truststore, use the
keytool command native to the Java environment, typically
located in the /path/to/jre-version/bin
directory. On
some UNIX-based systems, /usr/bin/keytool may link
to that command.
Import the ca-cert.pem certificate into the OpenIDM truststore file with the following command:
$ keytool \
 -importcert \
 -trustcacerts \
 -file ca-cert.pem \
 -alias "DB cert" \
 -keystore /path/to/openidm/security/truststore
You are prompted for a keystore password. You must use the same password as is shown in your project's conf/boot/boot.properties file. The default truststore password is:
openidm.truststore.password=changeit
After entering a keystore password, you are prompted with the following question. Assuming you have included an appropriate ca-cert.pem file, enter yes.
Trust this certificate? [no]:
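Optionally, you can confirm that the certificate was imported by listing it by its alias. This verification step is not part of the documented procedure:
$ keytool \
 -list \
 -alias "DB cert" \
 -keystore /path/to/openidm/security/truststore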
Open the repository connection configuration file, datasource.jdbc-default.json, and locate the jdbcUrl property. Append &useSSL=true to the end of that URL.
The value of the jdbcUrl property depends on your JDBC repository. The following example shows a MySQL repository, configured for SSL:
"jdbcUrl" : "jdbc:mysql://&{openidm.repo.host}:&{openidm.repo.port}/openidm?allowMultiQueries=true&characterEncoding=utf8&useSSL=true"
Open your project's conf/config.properties file and find the org.osgi.framework.bootdelegation property. Make sure that property includes a reference to the javax.net.ssl option. If you started with the default version of config.properties, that line should now read as follows:
org.osgi.framework.bootdelegation=sun.*,com.sun.*,apple.*,com.apple.*,javax.net.ssl
Open your project's conf/system.properties file and add the following line. If appropriate, substitute the path to your own truststore:
# Set the truststore
javax.net.ssl.trustStore=&{launcher.install.location}/security/truststore
Even if you are setting up this instance of OpenIDM as part of a cluster, you still need to configure this initial truststore. After this instance joins a cluster, the SSL keys in this particular truststore are replaced. For more information on clustering, see "Clustering, Failover, and Availability".
6.4. Interacting With the Repository Over REST
The OpenIDM repository is accessible over the REST interface, at the
openidm/repo
endpoint.
In general, you must ensure that external calls to the
openidm/repo
endpoint are protected. Native queries and
free-form command actions on this endpoint are disallowed by default, as the endpoint
is vulnerable to injection attacks. For more information, see
"Running Queries and Commands on the Repository".
6.4.1. Changing the Repository Password
In the case of an embedded OrientDB repository, the default username and password are admin and admin. You can change the default password by sending the following POST request on the repo endpoint:
$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "https://localhost:8443/openidm/repo?_action=updateDbCredentials&user=admin&password=newPassword"
You must restart OpenIDM for the change to take effect.
6.4.2. Running Queries and Commands on the Repository
Free-form commands and native queries on the repository are disallowed by default and should remain so in production to reduce the risk of injection attacks.
Common filter expressions, called with the _queryFilter
keyword, enable you to form arbitrary queries on the repository, using a
number of supported filter operations. For more information on these filter
operations, see "Constructing Queries". Parameterized or
predefined queries and commands (using the _queryId
and
_commandId
keywords) can be authorized on the repository
for external calls if necessary. For more information, see
"Parameterized Queries".
Running commands on the repository is supported primarily from scripts. Certain scripts that interact with the repository are provided by default, for example, the scripts that enable you to purge the repository of reconciliation audit records.
You can define your own commands, and specify them in the database table
configuration file (either repo.orientdb.json
or
repo.jdbc.json
). In the following simple example, a
command is called to clear out UI notification entries from the repository,
for specific users.
The command is defined in the repository configuration file, as follows:
"commands" : { "delete-notifications-by-id" : "DELETE FROM ui_notification WHERE receiverId = ${username}" ... },
The command can be called from a script, as follows:
openidm.action("repo/ui/notification", "command", {}, { "commandId" : "delete-notifications-by-id", "userName" : "scarter"});
Exercise caution when allowing commands to be run on the repository over the REST interface, as there is an attached risk to the underlying data.
Chapter 7. Configuring the Server
This chapter describes how OpenIDM loads and stores its configuration, how the configuration can be changed, and specific configuration recommendations in a production environment.
The configuration is defined in a combination of
.properties
files, container configuration files, and
dynamic configuration objects. Most of the configuration files are
stored in your project's conf/
directory. Note that you
might see files with a .patch
extension in the
conf/
and
db/repo/conf/
directories.
These files specify differences relative to the last released version of
OpenIDM and are used by the update mechanism. They do not affect your
current configuration.
When the same configuration object is declared in more than one location, the configuration is loaded with the following order of precedence:
System properties passed in on startup through the OPENIDM_OPTS environment variable
Properties declared in the project-dir/conf/system.properties file
Properties declared in the project-dir/conf/boot/boot.properties file
Properties set explicitly in the various project-dir/conf/*.json files
Properties that are set using the first three options are not stored in the repository. You can therefore use these mechanisms to set different configurations for multiple nodes participating in a cluster.
To set the configuration in the OPENIDM_OPTS
environment
variable, export that variable before startup. The following example starts
OpenIDM with a different keystore and truststore:
$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m \ -Dopenidm.keystore.location=/path/to/keystore.jceks -Dopenidm.truststore.location=/path/to/truststore" $ ./startup.sh Executing ./startup.sh... Using OPENIDM_HOME: /path/to/openidm Using PROJECT_HOME: /path/to/openidm Using OPENIDM_OPTS: -Xmx1024m -Xms1024m -Dopenidm.keystore.location=/path/to/keystore.jceks -Dopenidm.truststore.location=/path/to/truststore Using LOGGING_CONFIG: -Djava.util.logging.config.file=/path/to/openidm/conf/logging.properties Using boot properties at /path/to/openidm/conf/boot/boot.properties -> OpenIDM version "5" OpenIDM ready
Configuration properties that are explicitly set in
project-dir/conf/*.json
files are stored in the internal
repository. You can manage these configuration objects by using the REST
interface or by using the JSON files themselves. Several aspects of the
configuration can also be managed by using the Admin UI, as described in
"Configuring the Server from the Admin UI".
7.1. Configuration Objects
OpenIDM exposes internal configuration objects in JSON format. Configuration elements can be either single instance or multiple instance for an OpenIDM installation.
7.1.1. Single Instance Configuration Objects
Single instance configuration objects correspond to services that have at most one instance per installation. JSON file views of these configuration objects are named object-name.json.
The following list describes the single instance configuration objects:
The audit configuration specifies how audit events are logged.
The authentication configuration controls REST access.
The cluster configuration defines how one OpenIDM instance can be configured in a cluster.
The endpoint configuration controls any custom REST endpoints.
The info configuration points to script files for the customizable information service.
The managed configuration defines managed objects and their schemas.
The policy configuration defines the policy validation service.
The process access configuration defines access to configured workflows.
The repo.repo-type configuration, such as repo.orientdb or repo.jdbc, configures the internal repository.
The router configuration specifies filters to apply for specific operations.
The script configuration defines the parameters that are used when compiling, debugging, and running JavaScript and Groovy scripts.
The sync configuration defines the mappings that OpenIDM uses when it synchronizes and reconciles managed objects.
The ui configuration defines the configurable aspects of the default user interfaces.
The workflow configuration defines the configuration of the workflow engine.
OpenIDM stores managed objects in the repository, and exposes them under /openidm/managed. System objects on external resources are exposed under /openidm/system.
The following image shows the paths to objects in the OpenIDM namespace.
7.1.2. Multiple Instance Configuration Objects
Multiple instance configuration objects correspond to services that can have many instances per installation. Multiple instance configuration objects are named objectname/instancename, for example, provisioner.openicf/xml. JSON file views of these configuration objects are named objectname-instancename.json, for example, provisioner.openicf-xml.json.
OpenIDM provides the following multiple instance configuration objects:
Multiple schedule configurations can run reconciliations and other tasks on different schedules.
Multiple provisioner.openicf configurations correspond to the resources connected to OpenIDM.
Multiple servletfilter configurations can be used for different servlet filters, such as the Cross Origin and GZip filters.
7.2. Changing the Default Configuration
When you change OpenIDM's configuration objects, take the following points into account:
OpenIDM's authoritative configuration source is the internal repository. JSON files provide a view of the configuration objects, but do not represent the authoritative source.
OpenIDM updates JSON files after making configuration changes, whether those changes are made through REST access to configuration objects, or through edits to the JSON files.
OpenIDM recognizes changes to JSON files when it is running. OpenIDM must be running when you delete configuration objects, even if you do so by editing the JSON files.
Avoid editing configuration objects directly in the internal repository. Rather, edit the configuration over the REST API, or in the configuration JSON files to ensure consistent behavior and that operations are logged.
OpenIDM stores its configuration in the internal database by default. If you remove an OpenIDM instance and do not specifically drop the repository, the configuration remains in effect for a new OpenIDM instance that uses that repository. For testing or evaluation purposes, you can disable this persistent configuration in the conf/system.properties file by uncommenting the following line:
# openidm.config.repo.enabled=false
Disabling persistent configuration means that OpenIDM will store its configuration in memory only. You should not disable persistent configuration in a production environment.
7.3. Configuring the Server for Production
Out of the box, OpenIDM is configured to make it easy to install and evaluate. Specific configuration changes are required before you deploy the server in a production environment.
7.3.1. Configuring a Production Repository
By default, OpenIDM uses OrientDB for its internal repository so that you do not have to install a database in order to evaluate OpenIDM. Before you use OpenIDM in production, you must replace OrientDB with a supported repository.
For more information, see "Installing a Repository For Production" in the Installation Guide.
7.3.2. Disabling Automatic Configuration Updates
By default, OpenIDM polls the JSON files in the conf
directory periodically for any changes to the configuration. In a production
system, it is recommended that you disable automatic polling for updates to
prevent untested configuration changes from disrupting your identity service.
To disable automatic polling for configuration changes, edit the
conf/system.properties
file for your project, and
uncomment the following line:
# openidm.fileinstall.enabled=false
This setting also disables the file-based configuration view, which means that OpenIDM reads its configuration only from the repository.
Before you disable automatic polling, you must have started the OpenIDM instance at least once to ensure that the configuration has been loaded into the repository. Be aware that, if automatic polling is enabled, OpenIDM immediately uses changes to scripts called from a JSON configuration file.
When your configuration is complete, you can disable writes to configuration
files. To do so, add the following line to the
conf/config.properties
file for your project:
felix.fileinstall.enableConfigSave=false
7.3.3. Communicating Through a Proxy Server
To set up OpenIDM to communicate through a proxy server, you must use JVM parameters that identify the proxy host system, and the OpenIDM port number.
If you have configured OpenIDM behind a proxy server, include the -Dhttps.proxyHost and -Dhttps.proxyPort JVM properties in the OpenIDM startup script:
If an insecure port is acceptable, you can also use the -Dhttp.proxyHost and -Dhttp.proxyPort options. You can add these JVM proxy properties to the value of OPENIDM_OPTS in your startup script (startup.sh or startup.bat):
# Only set OPENIDM_OPTS if not already set
[ -z "$OPENIDM_OPTS" ] && OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8443"
7.4. Configuring the Server Over REST
OpenIDM exposes configuration objects under the
/openidm/config
context path.
You can list the configuration on the local host by performing a GET request on http://localhost:8080/openidm/config. The examples shown in this section are based on the first OpenIDM sample, described in "Reconciling an XML File Resource" in the Samples Guide.
The following REST call includes excerpts of the default configuration for an OpenIDM instance started with Sample 1:
$ curl \
 --request GET \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 http://localhost:8080/openidm/config
{
  "_id" : "",
  "configurations" : [
    {
      "_id" : "endpoint/usernotifications",
      "pid" : "endpoint.95b46fcd-f0b7-4627-9f89-6f3180c826e4",
      "factoryPid" : "endpoint"
    },
    {
      "_id" : "router",
      "pid" : "router",
      "factoryPid" : null
    },
    ...
    {
      "_id" : "endpoint/reconResults",
      "pid" : "endpoint.ad3f451c-f34e-4096-9a59-0a8b7bc6989a",
      "factoryPid" : "endpoint"
    },
    {
      "_id" : "endpoint/gettasksview",
      "pid" : "endpoint.bc400043-f6db-4768-92e5-ebac0674e201",
      "factoryPid" : "endpoint"
    },
    ...
    {
      "_id" : "workflow",
      "pid" : "workflow",
      "factoryPid" : null
    },
    {
      "_id" : "ui.context/selfservice",
      "pid" : "ui.context.537a5838-217b-4f67-9301-3fde19a51784",
      "factoryPid" : "ui.context"
    }
  ]
}
Single instance configuration objects are located under openidm/config/object-name.
The following example shows the Sample 1 audit configuration. The output has been cropped for legibility:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/config/audit"
{
  "_id": "audit",
  "auditServiceConfig": {
    "handlerForQueries": "repo",
    "availableAuditEventHandlers": [
      "org.forgerock.audit.handlers.csv.CsvAuditEventHandler",
      "org.forgerock.audit.handlers.elasticsearch.ElasticsearchAuditEventHandler",
      "org.forgerock.audit.handlers.jms.JmsAuditEventHandler",
      "org.forgerock.audit.handlers.json.JsonAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RouterAuditEventHandler",
      "org.forgerock.audit.handlers.splunk.SplunkAuditEventHandler",
      "org.forgerock.audit.handlers.syslog.SyslogAuditEventHandler"
    ],
    "filterPolicies": {
      "value": {
        "excludeIf": [
          "/access/http/request/headers/Authorization",
          "/access/http/request/headers/X-OpenIDM-Password",
          "/access/http/request/cookies/session-jwt",
          "/access/http/response/headers/Authorization",
          "/access/http/response/headers/X-OpenIDM-Password"
        ],
        "includeIf": []
      }
    }
  },
  "eventHandlers": [
    {
      "class": "org.forgerock.audit.handlers.json.JsonAuditEventHandler",
      "config": {
        "name": "json",
        "logDirectory": "/path/to/openidm/audit",
        "topics": [
          "access",
          "activity",
          "recon",
          "sync",
          "authentication",
          "config"
        ]
      }
    },
    ...
}
Multiple instance configuration objects are found under openidm/config/object-name/instance-name.
The following example shows the configuration for the XML connector shown in the first OpenIDM sample. The output has been cropped for legibility:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/config/provisioner.openicf/xml"
{
  "_id": "provisioner.openicf/xml",
  "name": "xmlfile",
  "connectorRef": {
    "bundleName": "org.forgerock.openicf.connectors.xml-connector",
    "bundleVersion": "[1.1.0.3,1.2.0.0)",
    "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector"
  },
  "producerBufferSize": 100,
  "connectorPoolingSupported": true,
  "poolConfigOption": {
    "maxObjects": 1,
    "maxIdle": 1,
    "maxWait": 150000,
    "minEvictableIdleTimeMillis": 120000,
    "minIdle": 1
  },
  "operationTimeout": {
    "CREATE": -1,
    "TEST": -1,
    "AUTHENTICATE": -1,
    "SEARCH": -1,
    "VALIDATE": -1,
    "GET": -1,
    "UPDATE": -1,
    "DELETE": -1,
    "SCRIPT_ON_CONNECTOR": -1,
    "SCRIPT_ON_RESOURCE": -1,
    "SYNC": -1,
    "SCHEMA": -1
  },
  "configurationProperties": {
    "xsdIcfFilePath": "/path/to/openidm/samples/sample1/data/resource-schema-1.xsd",
    "xsdFilePath": "/path/to/openidm/samples/sample1/data/resource-schema-extension.xsd",
    "xmlFilePath": "/path/to/openidm/samples/sample1/data/xmlConnectorData.xml"
  },
  ...
}
You can change the configuration over REST by using an HTTP PUT or HTTP PATCH request to modify the required configuration object.
The following example uses a PUT request to modify the configuration of the scheduler service, increasing the maximum number of threads that are available for the concurrent execution of scheduled tasks:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
   "threadPool": {
     "threadCount": "20"
   },
   "scheduler": {
     "executePersistentSchedules": "&{openidm.scheduler.execute.persistent.schedules}"
   }
 }' \
 "http://localhost:8080/openidm/config/scheduler"
{
  "_id" : "scheduler",
  "threadPool": {
    "threadCount": "20"
  },
  "scheduler": {
    "executePersistentSchedules": "true"
  }
}
The following example uses a PATCH request to reset the number of threads to their original value.
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PATCH \
 --data '[
   {
     "operation" : "replace",
     "field" : "/threadPool/threadCount",
     "value" : "10"
   }
 ]' \
 "http://localhost:8080/openidm/config/scheduler"
{
  "_id": "scheduler",
  "threadPool": {
    "threadCount": "10"
  },
  "scheduler": {
    "executePersistentSchedules": "true"
  }
}
Note
Multi-version concurrency control (MVCC) is not supported for configuration objects so you do not need to specify a revision during updates to the configuration, and no revision is returned in the output.
For more information about using the REST API to update objects, see "REST API Reference".
7.5. Using Property Value Substitution In the Configuration
In an environment where you have more than one OpenIDM instance, you might require a configuration that is similar, but not identical, across the different OpenIDM hosts. OpenIDM supports variable replacement in its configuration which means that you can modify the effective configuration according to the requirements of a specific environment or OpenIDM instance.
Property substitution enables you to achieve the following:
Define a configuration that is specific to a single OpenIDM instance, for example, setting the location of the keystore on a particular host.
Define a configuration whose parameters vary between different environments, for example, the URLs and passwords for test, development, and production environments.
Disable certain capabilities on specific nodes. For example, you might want to disable the workflow engine on specific instances.
When OpenIDM starts up, it combines the system configuration, which might contain specific environment variables, with the defined OpenIDM configuration properties. This combination makes up the effective configuration for that OpenIDM instance. By varying the environment properties, you can change specific configuration items that vary between OpenIDM instances or environments.
Property references are contained within the construct &{ }. When such references are found, OpenIDM replaces them with the appropriate property value, defined in the boot.properties file.
For properties that would usually be encrypted, such as passwords, OpenIDM does not encrypt the property reference. You can therefore reference an obfuscated property value as shown in the following example:
Specify the reference in the configuration file:
{
    ...
    "password" : "&{openidm.repo.password}",
    ...
}
Provide the encrypted or obfuscated property value in the boot.properties file:
openidm.repo.password=OBF:1jmv1usdf1t3b1vuz1sfgsb1t2v1ufs1jkn
The following examples demonstrate additional use cases for property value substitution.
The following example defines two separate OpenIDM environments - a development environment and a production environment. You can specify the environment at startup time and, depending on the environment, the database URL is set accordingly.
The environments are defined by adding the following lines to the conf/boot.properties file:
PROD.location=production
DEV.location=development
The database URL is then specified as follows in the repo.orientdb.json file:
{
    "dbUrl" : "plocal:./db/&{&{environment}.location}-openidm",
    ...
}
The effective database URL is determined by setting the OPENIDM_OPTS environment variable when you start OpenIDM.
To use the production environment, start OpenIDM as follows:
$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Denvironment=PROD" $ ./startup.sh
To use the development environment, start OpenIDM as follows:
$ export OPENIDM_OPTS="-Xmx1024m -Xms1024m -Denvironment=DEV" $ ./startup.sh
7.5.1. Using Property Value Substitution With System Properties
You can use property value substitution in conjunction with the system properties, to modify the configuration according to the system on which the OpenIDM instance runs.
The following example modifies the audit.json file so that the JSON audit logs are written to the user's directory. The user.home property is a default Java system property:
"eventHandlers" : [
    {
        "class" : "org.forgerock.audit.handlers.json.JsonAuditEventHandler",
        "config" : {
            "name" : "json",
            "logDirectory" : "&{user.home}/audit",
            ...
        }
    },
...
You can define nested properties (that is, a property definition within another property definition) and you can combine system properties and boot properties.
The following example uses the user.country
property, a
default Java system property. The example defines specific LDAP ports,
depending on the country (identified by the country code) in the
boot.properties
file. The value of the LDAP port (set in
the provisioner.openicf-ldap.json
file) depends on the
value of the user.country
system property.
The port numbers are defined in the boot.properties file as follows:
openidm.NO.ldap.port=2389
openidm.EN.ldap.port=3389
openidm.US.ldap.port=1389
The following excerpt of the provisioner.openicf-ldap.json file shows how the value of the LDAP port is eventually determined, based on the system property:
"configurationProperties" : {
    "credentials" : "Passw0rd",
    "port" : "&{openidm.&{user.country}.ldap.port}",
    "principal" : "cn=Directory Manager",
    "baseContexts" : [
        "dc=example,dc=com"
    ],
    "host" : "localhost"
}
7.5.2. Limitations of Property Value Substitution
Note the following limitations when you use property value substitution:
You cannot reference complex objects or properties with syntaxes other than string. Property values are resolved from the boot.properties file or from the system properties, and the value of these properties is always in string format.
Property substitution of boolean values is currently only supported in stringified format, that is, resulting in "true" or "false".
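For example, a configuration property that expects a stringified boolean can still be substituted. In the following sketch, the openidm.workflow.enabled property name is hypothetical:
In boot.properties:
openidm.workflow.enabled=false
In the configuration file:
"enabled" : "&{openidm.workflow.enabled}"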
7.6. Setting the Script Configuration
The script configuration file (conf/script.json) enables you to modify the parameters that are used when compiling, debugging, and running JavaScript and Groovy scripts.
The default script.json file includes the following parameters:
- properties
Any custom properties that should be provided to the script engine.
- ECMAScript
Specifies JavaScript debug and compile options. JavaScript is an ECMAScript language.
javascript.recompile.minimumInterval - minimum time after which a script can be recompiled.
The default value is 60000, or 60 seconds. This means that any changes made to scripts will not get picked up for up to 60 seconds. If you are developing scripts, reduce this parameter to around 100 (100 milliseconds).
- Groovy
Specifies compilation and debugging options related to Groovy scripts. Many of these options are commented out in the default script configuration file. Remove the comments to set these properties:
groovy.warnings - the log level for Groovy scripts. Possible values are none, likely, possible, and paranoia.
groovy.source.encoding - the encoding format for Groovy scripts. Possible values are UTF-8 and US-ASCII.
groovy.target.directory - the directory to which compiled Groovy classes will be output. The default directory is install-dir/classes.
groovy.target.bytecode - the bytecode version that is used to compile Groovy scripts. The default version is 1.5.
groovy.classpath - the directory in which the compiler should look for compiled classes. The default classpath is install-dir/lib.
To call an external library from a Groovy script, you must specify the complete path to the .jar file or files, as a value of this property. For example:
"groovy.classpath" : "/&{launcher.install.location}/lib/http-builder-0.7.1.jar:
    /&{launcher.install.location}/lib/json-lib-2.3-jdk15.jar:
    /&{launcher.install.location}/lib/xml-resolver-1.2.jar:
    /&{launcher.install.location}/lib/commons-collections-3.2.1.jar",
groovy.output.verbose - specifies the verbosity of stack traces. Boolean, true or false.
groovy.output.debug - specifies whether debugging messages are output. Boolean, true or false.
groovy.errors.tolerance - sets the number of non-fatal errors that can occur before a compilation is aborted. The default is 10 errors.
groovy.script.extension - specifies the file extension for Groovy scripts. The default is .groovy.
groovy.script.base - defines the base class for Groovy scripts. By default, any class extends groovy.lang.Script.
groovy.recompile - indicates whether scripts can be recompiled. Boolean, true or false, with default true.
groovy.recompile.minimumInterval - sets the minimum time between which Groovy scripts can be recompiled.
The default value is 60000, or 60 seconds. This means that any changes made to scripts will not get picked up for up to 60 seconds. If you are developing scripts, reduce this parameter to around 100 (100 milliseconds).
groovy.target.indy - specifies whether a Groovy indy test can be used. Boolean, true or false, with default true.
groovy.disabled.global.ast.transformations - specifies a list of disabled Abstract Syntax Transformations (ASTs).
- sources
Specifies the locations in which OpenIDM expects to find JavaScript and Groovy scripts that are referenced in the configuration.
The following excerpt of the script.json file shows the default locations:
...
"sources" : {
    "default" : {
        "directory" : "&{launcher.install.location}/bin/defaults/script"
    },
    "install" : {
        "directory" : "&{launcher.install.location}"
    },
    "project" : {
        "directory" : "&{launcher.project.location}"
    },
    "project-script" : {
        "directory" : "&{launcher.project.location}/script"
    }
...
Note
The order in which locations are listed in the sources property is important. Scripts are loaded from the bottom up in this list, that is, scripts found in the last location on the list are loaded first.
Note
By default, debug information (such as file name and line number) is excluded from JavaScript exceptions. To troubleshoot script exceptions, you can include debug information by changing the following setting to true in your project's conf/boot/boot.properties file:
javascript.exception.debug.info=false
Including debug information in a production environment is not recommended.
7.7. Calling a Script From a Configuration File
You can call a script from within a configuration file by providing the script source, or by referencing a file that contains the script source. For example:
{ "type" : "text/javascript", "source": string }
or
{ "type" : "text/javascript", "file" : file location }
- type
string, required
Specifies the type of script to be executed. Supported types include text/javascript and groovy.
- source
string, required if file is not specified
Specifies the source code of the script to be executed.
- file
string, required if source is not specified
Specifies the file containing the source code of the script to execute.
The following sample excerpts from configuration files indicate how scripts can be called.
The following example (included in the sync.json file) returns true if the employeeType is equal to external, and otherwise returns false. This script can be useful during reconciliation to establish whether a target object should be included in the reconciliation process, or should be ignored:
"validTarget": { "type" : "text/javascript", "source": "target.employeeType == 'external'" }
The following example (included in the sync.json file) sets the __PASSWORD__ attribute to defaultpwd when OpenIDM creates a target object:
"onCreate" : { "type" : "text/javascript", "source": "target.__PASSWORD__ = 'defaultpwd'" }
The following example (included in the router.json file) shows a trigger to create Solaris home directories using a script. The script is located in the file project-dir/script/createUnixHomeDir.js:
{ "filters" : [ { "pattern" : "^system/solaris/account$", "methods" : [ "create" ], "onResponse" : { "type" : "text/javascript", "file" : "script/createUnixHomeDir.js" } } ] }
Often, script files are reused in different contexts. You can pass variables to your scripts to provide these contextual details at runtime. You pass variables to the scripts that are referenced in configuration files by declaring the variable name in the script reference.
The following example of a scheduled task configuration calls a script named triggerEmailNotification.js. The example sets the sender and recipient of the email in the schedule configuration, rather than in the script itself:
{ "enabled" : true, "type" : "cron", "schedule" : "0 0/1 * * * ?", "persisted" : true, "invokeService" : "script", "invokeContext" : { "script": { "type" : "text/javascript", "file" : "script/triggerEmailNotification.js", "fromSender" : "admin@example.com", "toEmail" : "user@example.com" } } }
Tip
In general, you should namespace variables passed into scripts with the globals map. Passing variables in this way prevents collisions with the top-level reserved words for script maps, such as file, source, and type. The following example uses the globals map to namespace the variables passed in the previous example.
"script": { "type" : "text/javascript", "file" : "script/triggerEmailNotification.js", "globals" : { "fromSender" : "admin@example.com", "toEmail" : "user@example.com" } }
Script variables are not necessarily simple key:value pairs. A script variable can be any arbitrarily complex JSON object.
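For example, the globals map can carry a nested object. The mailParams variable in this sketch is hypothetical:
"globals" : {
    "mailParams" : {
        "fromSender" : "admin@example.com",
        "toEmail" : [ "user@example.com", "manager@example.com" ],
        "retry" : { "count" : 3, "intervalSeconds" : 60 }
    }
}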
Chapter 8. Accessing Data Objects
OpenIDM supports a variety of objects that can be addressed via a URL or URI. You can access data objects by using scripts (through the Resource API) or by using direct HTTP calls (through the REST API).
The following sections describe these two methods of accessing data objects, and provide information on constructing and calling data queries.
8.1. Accessing Data Objects By Using Scripts
OpenIDM's uniform programming model means that all objects are queried and manipulated in the same way, using the Resource API. The URL or URI that is used to identify the target object for an operation depends on the object type. For an explanation of object types, see "Data Models and Objects Reference". For more information about scripts and the objects available to scripts, see "Scripting Reference".
You can use the Resource API to obtain managed, system, configuration, and repository objects, as follows:
val = openidm.read("managed/organization/mysampleorg")
val = openidm.read("system/mysystem/account")
val = openidm.read("config/custom/mylookuptable")
val = openidm.read("repo/custom/mylookuptable")
For information about constructing an object ID, see "URI Scheme".
You can update entire objects with the update() function, as follows:
openidm.update("managed/organization/mysampleorg", rev, object)
openidm.update("system/mysystem/account", rev, object)
You can apply a partial update to a managed or system object by using the patch() function:
openidm.patch("managed/organization/mysampleorg", rev, value)
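The value argument is a list of patch operations. For example, the following sketch replaces a single field without sending the whole object (the description field here is illustrative):
openidm.patch("managed/organization/mysampleorg", rev, [
    { "operation" : "replace", "field" : "/description", "value" : "Updated description" }
]);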
The create(), delete(), and query() functions work the same way.
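For example, the following sketches follow the same resource paths as above:
val = openidm.create("managed/organization", "mysampleorg", object)
openidm.delete("managed/organization/mysampleorg", rev)
val = openidm.query("managed/organization", { "_queryId" : "query-all-ids" })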
8.2. Accessing Data Objects By Using the REST API
OpenIDM provides RESTful access to data objects via ForgeRock's Common REST API. To access the repository over REST, you can use a client application like Postman, or RESTClient for Firefox. Alternatively you can use the curl command-line utility that is included with most operating systems. For more information about curl, see https://github.com/bagder/curl.
For a comprehensive overview of the REST API, see "REST API Reference".
To obtain a managed object through the REST API, depending on your security
settings and authentication configuration, perform an HTTP GET on the
corresponding URL, for example
http://localhost:8080/openidm/managed/organization/mysampleorg
.
By default, the HTTP GET returns a JSON representation of the object.
In general, you can map any HTTP request to the corresponding openidm.method call. The following example shows how the parameters provided in an openidm.query request correspond with the key-value pairs that you would include in a similar HTTP GET request:
Reading an object using the Resource API:
openidm.query("managed/user", { "_queryId": "query-all" }, ["userName","sn"])
Reading an object using the REST API:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all&_fields=userName,sn"
8.3. Defining and Calling Queries
OpenIDM supports an advanced query model that enables you to define queries, and to call them over the REST or Resource API. Three types of queries are supported, on both managed, and system objects:
Common filter expressions
Parameterized, or predefined queries
Native query expressions
Each of these mechanisms is discussed in the following sections.
8.3.1. Common Filter Expressions
The ForgeRock REST API defines common filter expressions that enable you to form arbitrary queries using a number of supported filter operations. This query capability is the standard way to query data if no predefined query exists, and is supported for all managed and system objects.
Common filter expressions are useful in that they do not require knowledge of how the object is stored and do not require additions to the repository configuration.
Common filter expressions are called with the
_queryFilter
keyword. The following example uses a common filter expression to retrieve managed user objects whose user name is smith:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ 'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"smith"'
The filter is URL encoded in this example. The corresponding filter using the resource API would be:
openidm.query("managed/user", { "_queryFilter" : '/userName eq "smith"' });
Note that this JavaScript invocation is internal, and is therefore not subject to the URL-encoding requirements of a GET request. Also, because JavaScript supports the use of single quotes, it is not necessary to escape the double quotes in this example.
For a list of supported filter operations, see "Constructing Queries".
Note that using common filter expressions to retrieve values from arrays is currently not supported. If you need to search within an array, you should set up a predefined (parameterized) query in your repository configuration. For more information, see "Parameterized Queries".
8.3.2. Parameterized Queries
Managed objects in the supported OpenIDM repositories can be accessed using
a parameterized query mechanism. Parameterized queries on repositories are
defined in the repository configuration (repo.*.json
)
and are called by their _queryId
.
Parameterized queries provide precise control over the query that is executed. Such control might be useful for tuning, or for performing database operations such as aggregation (which is not possible with a common filter expression).
Parameterized queries provide security and portability for the query call signature, regardless of the backend implementation. Queries that are exposed over the REST interface must be parameterized queries to guard against injection attacks and other misuse. Queries on the officially supported repositories have been reviewed and hardened against injection attacks.
For system objects, support for parameterized queries is restricted to
_queryId=query-all-ids
. There is currently no support for
user-defined parameterized queries on system objects. Typically,
parameterized queries on system objects are not called directly over the
REST interface, but are issued from internal calls, such as correlation
queries.
A typical query definition is as follows:
"query-all-ids" : "select _openidm_id from ${unquoted:_resource}"
To call this query, you would reference its ID, as follows:
?_queryId=query-all-ids
The following example calls query-all-ids
over the REST
interface:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids"
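From a script, the same predefined query is a single call, with the query ID passed in the params argument. A minimal sketch:

// Run the query-all-ids predefined query against the managed user store.
var allUsers = openidm.query("managed/user", { "_queryId" : "query-all-ids" });
logger.info("The repository contains {} users", allUsers.result.length);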
8.3.3. Native Query Expressions
Native query expressions are supported for all managed objects and system objects, and can be called directly, rather than being defined in the repository configuration.
Native queries are intended specifically for internal callers, such as custom scripts, and should be used only in situations where the common filter or parameterized query facilities are insufficient. For example, native queries are useful if the query needs to be generated dynamically.
The query expression is specific to the target resource. For repositories, queries use the native language of the underlying data store. For system objects that are backed by OpenICF connectors, queries use the applicable query language of the system resource.
Native queries on the repository are made using the
_queryExpression
keyword. For example:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ "http://localhost:8080/openidm/managed/user?_queryExpression=select+from+managed_user"
Unless you have specifically enabled native queries over REST, the previous command returns a 403 access denied error message. Native queries are not portable and do not guard against injection attacks. Such query expressions should therefore not be used or made accessible over the REST interface or over HTTP in production environments. They should be used only via the internal Resource API. If you want to enable native queries over REST for development, see "Protect Sensitive REST Interface URLs".
Alternatively, if you really need to expose native queries over HTTP, you can do so selectively by designing a custom endpoint to wrap such access.
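A sketch of such a wrapper follows. The endpoint name, parameter, and status values are hypothetical; the point is that the raw query expression stays server-side and only a whitelisted parameter is accepted over HTTP:

// Hypothetical script backing a custom endpoint, for example endpoint/userSearch.
(function () {
    var status = request.additionalParameters.status;
    // Whitelist the parameter value to guard against injection.
    if (status !== "active" && status !== "inactive") {
        throw { "code" : 400, "message" : "Invalid status parameter" };
    }
    // The native query expression is issued internally, never taken from the caller.
    return openidm.query("managed/user", {
        "_queryExpression" : "select from managed_user where accountStatus = '" + status + "'"
    });
}());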
8.3.4. Constructing Queries
The openidm.query
function enables you to query OpenIDM
managed and system objects. The query syntax is
openidm.query(id, params)
, where id
specifies the object on which the query should be performed and
params
provides the parameters that are passed to the
query, either _queryFilter
or
_queryId
. For example:
var params = { '_queryFilter' : 'givenName co "' + sourceCriteria + '" or ' + 'sn co "' + sourceCriteria + '"' }; var results = openidm.query("system/ScriptedSQL/account", params)
Over the REST interface, the query filter is specified as
_queryFilter=filter
, for
example:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"Smith"'
Note the use of double-quotes around the search term:
Smith
. In _queryFilter
expressions, string values must use double-quotes.
Numeric and boolean expressions should not use quotes.
When called over REST, you must URL encode the filter expression. The following examples show the filter expressions using the resource API and the REST API, but do not show the URL encoding, to make them easier to read.
Note that, for generic mappings, any fields that are included in the query
filter (for example userName
in the previous query), must
be explicitly defined as searchable, if you have set
the global searchableDefault
to false. For more
information, see "Improving Search Performance for Generic Mappings".
The filter expression is constructed from the
building blocks shown in this section. In these expressions the simplest
json-pointer is a field of the JSON resource,
such as userName
or id
. A JSON pointer
can, however, point to nested elements.
Note
You can also use the negation operator (!) to help
construct a query. For example, a
_queryFilter=!(userName+eq+"jdoe") query would return
every userName
except for jdoe
.
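From a script, the same negation looks like this (a minimal sketch):

// Return the user name of every managed user except jdoe.
var everyoneButJdoe = openidm.query("managed/user",
    { "_queryFilter" : '!(userName eq "jdoe")' },
    ["userName"]);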
You can set up query filters with one of the following types of expressions.
8.3.4.1. Comparison Expressions
Equal queries (see "Querying Objects That Equal the Given Value")
Contains queries (see "Querying Objects That Contain the Given Value")
Starts with queries (see "Querying Objects That Start With the Given Value")
Less than queries (see "Querying Objects That Are Less Than the Given Value")
Less than or equal to queries (see "Querying Objects That Are Less Than or Equal to the Given Value")
Greater than queries (see "Querying Objects That Are Greater Than the Given Value")
Greater than or equal to queries (see "Querying Objects That Are Greater Than or Equal to the Given Value")
Note
Certain system endpoints also support EndsWith
and
ContainsAllValues
queries. However, such queries are
not supported for managed objects and have not been
tested with all supported OpenICF connectors.
8.3.4.1.1. Querying Objects That Equal the Given Value
This is the associated JSON comparison expression:
json-pointer eq
json-value
.
Review the following example:
"_queryFilter" : '/givenName eq "Dan"'
The following REST call returns the user name and given name of all
managed users whose first name (givenName
) is "Dan":
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=givenName+eq+"Dan"&_fields=userName,givenName' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 3, "result": [ { "givenName": "Dan", "userName": "dlangdon" }, { "givenName": "Dan", "userName": "dcope" }, { "givenName": "Dan", "userName": "dlanoway" } ] }
8.3.4.1.2. Querying Objects That Contain the Given Value
This is the associated JSON comparison expression:
json-pointer co
json-value
.
Review the following example:
"_queryFilter" : '/givenName co "Da"'
The following REST call returns the user name and given name of all
managed users whose first name (givenName
) contains
"Da":
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=givenName+co+"Da"&_fields=userName,givenName' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 10, "result": [ { "givenName": "Dave", "userName": "djensen" }, { "givenName": "David", "userName": "dakers" }, { "givenName": "Dan", "userName": "dlangdon" }, { "givenName": "Dan", "userName": "dcope" }, { "givenName": "Dan", "userName": "dlanoway" }, { "givenName": "Daniel", "userName": "dsmith" }, ... ] }
8.3.4.1.3. Querying Objects That Start With the Given Value
This is the associated JSON comparison expression:
json-pointer sw
json-value
.
Review the following example:
"_queryFilter" : '/sn sw "Jen"'
The following REST call returns the user names of all managed users
whose last name (sn
) starts with "Jen":
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=sn+sw+"Jen"&_fields=userName' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 4, "result": [ { "userName": "bjensen" }, { "userName": "djensen" }, { "userName": "cjenkins" }, { "userName": "mjennings" } ] }
8.3.4.1.4. Querying Objects That Are Less Than the Given Value
This is the associated JSON comparison expression:
json-pointer lt
json-value
.
Review the following example:
"_queryFilter" : '/employeeNumber lt 5000'
The following REST call returns the user names of all managed users whose
employeeNumber
is lower than 5000:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+lt+5000&_fields=userName,employeeNumber' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 4999, "result": [ { "employeeNumber": 4907, "userName": "jnorris" }, { "employeeNumber": 4905, "userName": "afrancis" }, { "employeeNumber": 3095, "userName": "twhite" }, { "employeeNumber": 3921, "userName": "abasson" }, { "employeeNumber": 2892, "userName": "dcarter" } ... ] }
8.3.4.1.5. Querying Objects That Are Less Than or Equal to the Given Value
This is the associated JSON comparison expression:
json-pointer le
json-value
.
Review the following example:
"_queryFilter" : '/employeeNumber le 5000'
The following REST call returns the user names of all managed users whose
employeeNumber
is 5000 or less:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+le+5000&_fields=userName,employeeNumber' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 5000, "result": [ { "employeeNumber": 4907, "userName": "jnorris" }, { "employeeNumber": 4905, "userName": "afrancis" }, { "employeeNumber": 3095, "userName": "twhite" }, { "employeeNumber": 3921, "userName": "abasson" }, { "employeeNumber": 2892, "userName": "dcarter" } ... ] }
8.3.4.1.6. Querying Objects That Are Greater Than the Given Value
This is the associated JSON comparison expression:
json-pointer gt
json-value.
Review the following example:
"_queryFilter" : '/employeeNumber gt 5000'
The following REST call returns the user names of all managed users whose
employeeNumber
is higher than 5000:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+gt+5000&_fields=userName,employeeNumber' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 1458, "result": [ { "employeeNumber": 5003, "userName": "agilder" }, { "employeeNumber": 5011, "userName": "bsmith" }, { "employeeNumber": 5034, "userName": "bjensen" }, { "employeeNumber": 5027, "userName": "cclarke" }, { "employeeNumber": 5033, "userName": "scarter" } ... ] }
8.3.4.1.7. Querying Objects That Are Greater Than or Equal to the Given Value
This is the associated JSON comparison expression:
json-pointer ge
json-value
.
Review the following example:
"_queryFilter" : '/employeeNumber ge 5000'
The following REST call returns the user names of all managed users whose
employeeNumber
is 5000 or greater:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=employeeNumber+ge+5000&_fields=userName,employeeNumber' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 1457, "result": [ { "employeeNumber": 5000, "userName": "agilder" }, { "employeeNumber": 5011, "userName": "bsmith" }, { "employeeNumber": 5034, "userName": "bjensen" }, { "employeeNumber": 5027, "userName": "cclarke" }, { "employeeNumber": 5033, "userName": "scarter" } ... ] }
8.3.4.2. Presence Expressions
The following examples show how you can build filters using a presence
expression, shown as pr
. The presence expression is a
filter that returns all records with a given attribute.
A presence expression filter, json-pointer pr, evaluates to true for any object in which the json-pointer is present and contains a non-null value. Review the following expression:
"_queryFilter" : '/mail pr'
The following REST call uses that expression to return the mail addresses
for all managed users with a mail
property:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=mail+pr&_fields=mail' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 2, "result": [ { "mail": "jdoe@exampleAD.com" }, { "mail": "bjensen@example.com" } ] }
You can also apply the presence filter on system objects. For example, the
following query returns the uid
of all users in an LDAP
system who have the uid
attribute in their entries:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/system/ldap/account?_queryFilter=uid+pr&_fields=uid' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 2, "result": [ { "uid": "jdoe" }, { "uid": "bjensen" } ] }
8.3.4.3. Literal Expressions
A literal expression is a boolean:
true matches any object in the resource.
false matches no object in the resource.
For example, you can list the _id
of all managed
objects as follows:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=true&_fields=_id' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 2, "result": [ { "_id": "d2e29d5f-0d74-4d04-bcfe-b1daf508ad7c" }, { "_id": "709fed03-897b-4ff0-8a59-6faaa34e3af6" } ] }
8.3.4.4. Complex Expressions
You can combine expressions using the boolean operators
and
, or
, and !
(not). The following example queries managed user objects located in
London, with last name Jensen:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user/?_queryFilter=city+eq+"London"+and+sn+eq+"Jensen"&_fields=userName,givenName,sn' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 3, "result": [ { "sn": "Jensen", "givenName": "Clive", "userName": "cjensen" }, { "sn": "Jensen", "givenName": "Dave", "userName": "djensen" }, { "sn": "Jensen", "givenName": "Margaret", "userName": "mjensen" } ] }
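The equivalent filter in a script, with the two criteria held in variables (a minimal sketch; city and lastName are illustrative):

var city = "London";
var lastName = "Jensen";
var londonJensens = openidm.query("managed/user", {
    "_queryFilter" : '/city eq "' + city + '" and /sn eq "' + lastName + '"'
}, ["userName", "givenName", "sn"]);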
8.3.5. Paging and Counting Query Results
The common filter query mechanism supports paged query results for managed objects, and for some system objects, depending on the system resource.
Predefined queries must be configured to support paging, in the repository configuration. For example:
"query-all-ids" : "select _openidm_id from ${unquoted:_resource} SKIP ${unquoted:_pagedResultsOffset} LIMIT ${unquoted:_pageSize}",
The query implementation includes a configurable count policy that can be set per query. Currently, counting results is supported only for predefined queries, not for filtered queries.
The count policy can be one of the following:
NONE - disables counting entirely for that query.
EXACT - returns the precise number of query results. Note that this has a negative impact on query performance.
ESTIMATE - returns a best estimate of the number of query results in the shortest possible time. This number generally correlates with the number of records in the index.
If no count policy is specified, the policy is assumed to be
NONE
. This prevents the overhead of counting results,
unless a result count is specifically required.
The following query returns the first three records in the managed user repository:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids&_pageSize=3" { "result": [ { "_id": "scarter", "_rev": "1" }, { "_id": "bjensen", "_rev": "1" }, { "_id": "asmith", "_rev": "1" } ], "resultCount": 3, "pagedResultsCookie": "3", "totalPagedResultsPolicy": "NONE", "totalPagedResults": -1, "remainingPagedResults": -1 }
Notice that no counting is done in this query, so the returned value of the "totalPagedResults" and "remainingPagedResults" fields is -1.
To specify that either an EXACT
or ESTIMATE
result count be applied, add the "totalPagedResultsPolicy"
to the query.
The following query is identical to the previous query but includes a count of the total results in the result set.
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids&_pageSize=3&_totalPagedResultsPolicy=EXACT" { "result": [ { "_id": "scarter", "_rev": "1" }, { "_id": "bjensen", "_rev": "1" }, { "_id": "asmith", "_rev": "1" } ], "resultCount": 3, "pagedResultsCookie": "3", "totalPagedResultsPolicy": "EXACT", "totalPagedResults": 4, "remainingPagedResults": -1 }
Note that the totalPagedResultsPolicy
is
EXACT
for this query. To return an exact result count,
a corresponding count
query must be defined in the
repository configuration. The following excerpt of the default
repo.orientdb.json
file shows the predefined
query-all-ids
query, and its corresponding
count
query:
"query-all-ids" : "select _openidm_id, @version from ${unquoted:_resource} SKIP ${unquoted:_pagedResultsOffset} LIMIT ${unquoted:_pageSize}", "query-all-ids-count" : "select count(_openidm_id) AS total from ${unquoted:_resource}",
The following paging parameters are supported:
_pagedResultsCookie

Opaque cookie used by the server to keep track of the position in the search results. The format of the cookie is a string value.

The server provides the cookie value on the first request. You should then supply the cookie value in subsequent requests until the server returns a null cookie, meaning that the final page of results has been returned.

Paged results are enabled only if the _pageSize is a non-zero integer.

_pagedResultsOffset

Specifies the index within the result set of the number of records to be skipped before the first result is returned. The format of the _pagedResultsOffset is an integer value. When the value of _pagedResultsOffset is greater than or equal to 1, the server returns pages, starting after the specified index.

This request assumes that the _pageSize is set, and not equal to zero.

For example, if the result set includes 10 records, the _pageSize is 2, and the _pagedResultsOffset is 6, the server skips the first 6 records, then returns 2 records, 7 and 8. The _pagedResultsCookie value would then be 8 (the index of the last returned record) and the _remainingPagedResults value would be 2, the last two records (9 and 10) that have not yet been returned.

If the offset points to a page beyond the last of the search results, the result set returned is empty.

Note that the totalPagedResults and _remainingPagedResults parameters are not supported for all queries. Where they are not supported, their returned value is always -1.

_pageSize

An optional parameter indicating that query results should be returned in pages of the specified size. For all paged result requests other than the initial request, a cookie should be provided with the query request.

The default behavior is not to return paged query results. If set, this parameter should be an integer value, greater than zero.
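Putting these parameters together, a script can walk through all pages by feeding each returned cookie into the next request. A minimal sketch, assuming a predefined query that has been configured to support paging:

var params = { "_queryId" : "query-all-ids", "_pageSize" : 3 };
var page;
do {
    page = openidm.query("managed/user", params);
    for (var i = 0; i < page.result.length; i++) {
        logger.info("Processing user {}", page.result[i]._id);
    }
    // A null cookie means the final page has been returned.
    params._pagedResultsCookie = page.pagedResultsCookie;
} while (page.pagedResultsCookie !== null);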
8.3.6. Sorting Query Results
For common filter query expressions, you can sort the results of a query
using the _sortKeys
parameter. This parameter takes a
comma-separated list as a value and orders the way in which the JSON result
is returned, based on this list.
The _sortKeys
parameter is not supported for predefined
queries.
The following query returns all users with the givenName
Dan
, and sorts the results alphabetically, according to
surname (sn
):
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/system/ldap/account?_queryFilter=givenName+eq+"Dan"&_fields=givenName,sn&_sortKeys=sn' { "remainingPagedResults": -1, "pagedResultsCookie": null, "resultCount": 3, "result": [ { "sn": "Cope", "givenName": "Dan" }, { "sn": "Langdon", "givenName": "Dan" }, { "sn": "Lanoway", "givenName": "Dan" } ] }
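The same sorted query from a script passes _sortKeys alongside the filter, as a minimal sketch. Following the Common REST convention, prefixing a key with a minus sign (for example, -sn) requests descending order:

var sortedDans = openidm.query("system/ldap/account", {
    "_queryFilter" : '/givenName eq "Dan"',
    "_sortKeys" : "sn"
}, ["givenName", "sn"]);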
Chapter 9. Managing Users, Groups, Roles and Relationships
OpenIDM provides a default schema for typical managed object types, such as users and roles, but does not control the structure of objects that you store in the OpenIDM repository. You can modify or extend the schema for the default object types, and you can set up a new managed object type for any item that can be collected in a data set. For example, with the right schema, you can set up any device associated with the Internet of Things (IoT).
Managed objects and their properties are defined in your project's
conf/managed.json
file. Note that the schema defined in
this file is not a comprehensive list of all the properties that can be stored
in the managed object repository. If you use a generic object mapping, you can
create a managed object with any arbitrary property, and that property will be
stored in the repository. For more information about explicit and generic
object mappings, see "Using Explicit or Generic Object Mapping With a JDBC Repository".
This chapter describes how to work with the default managed object types and how to create new object types as required by your deployment. For more information about the OpenIDM object model, see "Data Models and Objects Reference".
9.1. Creating and Modifying Managed Object Types
If the managed object types provided in the default configuration are not sufficient for your deployment, you can create any number of new managed object types.
The easiest way to create a new managed object type is to use the Admin UI, as follows:
Navigate to the Admin UI URL (
https://localhost:8443/admin
) then select Configure > Managed Objects > New Managed Object.Enter a name for the new managed object and, optionally, an icon that will be displayed for that object type in the UI.
Click Save.
Select the Scripts tab and specify any scripts that should be applied on various events associated with that object type, for example, when an object of that type is created, updated or deleted.
Specify the schema for the object type, that is, the properties that make up the object, and any policies or restrictions that must be applied to the property values.
You can also create a new managed object type by adding its configuration,
in JSON, to your project's conf/managed.json
file. The following excerpt of the managed.json
file
shows the configuration of a "Phone" object that was created through the UI:
{ "name": "Phone", "schema": { "$schema": "http://forgerock.org/json-schema#", "type": "object", "properties": { "brand": { "description": "The supplier of the mobile phone", "title": "Brand", "viewable": true, "searchable": true, "userEditable": false, "policies": [], "returnByDefault": false, "minLength": "", "pattern": "", "isVirtual": false, "type": [ "string", "null" ] }, "assetNumber": { "description": "The asset tag number of the mobile device", "title": "Asset Number", "viewable": true, "searchable": true, "userEditable": false, "policies": [], "returnByDefault": false, "minLength": "", "pattern": "", "isVirtual": false, "type": "string" }, "model": { "description": "The model number of the mobile device, such as 6 plus, Galaxy S4", "title": "Model", "viewable": true, "searchable": false, "userEditable": false, "policies": [], "returnByDefault": false, "minLength": "", "pattern": "", "isVirtual": false, "type": "string" } }, "required": [], "order": [ "brand", "assetNumber", "model" ] } }
You can add any arbitrary properties to the schema of a new managed object type. A property definition typically includes the following fields:
name
The name of the property.
title
The name of the property, in human-readable language, used to display the property in the UI.
description
A brief description of the property.
viewable

Specifies whether this property is viewable in the object's profile in the UI. Boolean, true or false (true by default).

searchable

Specifies whether this property can be searched in the UI. A searchable property is visible within the Managed Object data grid in the Self-Service UI. Note that for a property to be searchable in the UI, it must be indexed in the repository configuration. For information on indexing properties in a repository, see "Using Explicit or Generic Object Mapping With a JDBC Repository".

Boolean, true or false (false by default).

userEditable

Specifies whether users can edit the property value in the UI. This property applies in the context of the Self-Service UI, where users are able to edit certain properties of their own accounts. Boolean, true or false (false by default).

isProtected

Specifies whether reauthentication is required if the value of this property changes.

For certain properties, such as passwords, changing the value of the property should force an end-user to reauthenticate. These properties are referred to as protected properties. Depending on how the user authenticates (which authentication module is used), the list of protected properties is added to the user's security context. For example, if a user logs in with the login and password of their managed user entry (MANAGED_USER authentication module), their security context will include this list of protected properties. The list of protected properties is not included in the security context if the user logs in with a module that does not support reauthentication (such as through a social identity provider).

minLength

The minimum number of characters that the value of this property must have.

pattern

Any specific pattern to which the value of the property must adhere. For example, a property whose value is a date might require a specific date format.

policies

Any policy validation that must be applied to the property. For more information on managed object policies, see "Configuring the Default Policy for Managed Objects".

required

Specifies whether the property must be supplied when an object of this type is created. Boolean, true or false.

type

The data type for the property value; can be string, array, boolean, integer, number, object, Resource Collection, or null.

Note

If a property (such as a telephoneNumber) might not exist for a particular user, you must include null as one of the property types. You can set a null property type in the Admin UI (Configure > Managed Objects > User > Schema, then select the property and set Nullable to true). You can also set a null property type directly in your managed.json file by setting "type" : [ "string", "null" ] for that property (where string can be any other valid property type). This information is validated by the policy.js script, as described in "Validation of Managed Object Data Types".

If you're configuring a data type of array through the Admin UI, you're limited to two values.

isVirtual

Specifies whether the property takes a static value, or whether its value is calculated "on the fly" as the result of a script. Boolean, true or false.

returnByDefault

For non-core attributes (virtual attributes and relationship fields), specifies whether the property will be returned in the results of a query on an object of this type if it is not explicitly requested. Virtual attributes and relationship fields are not returned by default. Boolean, true or false. When the property is in an array within a relationship, always set to false.
9.2. Working with Managed Users
User objects that are stored in OpenIDM's repository are referred to as
managed users. For a JDBC repository, OpenIDM stores
managed users in the managedobjects
table. A second table,
managedobjectproperties
, serves as the index table. For an
OrientDB repository, managed users are stored in the
managed_user
table.
OpenIDM provides RESTful access to managed users, at the context path
/openidm/managed/user
. For more information, see
"Getting Started With the REST Interface" in the Installation Guide.
You can add, change, and delete managed users by using the Admin UI or over the REST interface. To use the Admin UI, select Manage > User; user management through the UI is intuitive.
The following examples show how to add, change and delete users over the REST interface. For a reference of all managed user endpoints and actions, see "Managing Users Over REST". You can also use the API Explorer as a reference to the managed object REST API. For more information, see "API Explorer".
The following example retrieves the JSON representation of all managed users in the repository:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids"
The following two examples query all managed users for a user named
scarter
:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+%22scarter%22"
In this second example, note the use of single quotes around the URL, to avoid conflicts with the double quotes around the user named scarter. Note also that the _queryFilter requires double quotes (or the URL-encoded equivalent, %22) around the search term:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ 'http://localhost:8080/openidm/managed/user?_queryFilter=userName+eq+"scarter"'
The following example retrieves the JSON representation of a managed user,
specified by his ID, scarter
:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/scarter"
The following example adds a user with a specific user ID,
bjensen
:
$ curl \ --header "Content-Type: application/json" \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "If-None-Match: *" \ --request PUT \ --data '{ "userName":"bjensen", "sn":"Jensen", "givenName":"Barbara", "mail": "bjensen@example.com", "telephoneNumber": "082082082", "password":"Passw0rd" }' \ "http://localhost:8080/openidm/managed/user/bjensen"
The following example adds the same user, but allows OpenIDM to generate an ID. Creating objects with system-generated IDs is recommended in production environments:
$ curl \ --header "Content-Type: application/json" \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ --data '{ "userName":"bjensen", "sn":"Jensen", "givenName":"Barbara", "mail": "bjensen@example.com", "telephoneNumber": "082082082", "password":"Passw0rd" }' \ "http://localhost:8080/openidm/managed/user?_action=create"
The following example checks whether user bjensen
exists,
then replaces her telephone number with the new data provided in the request
body:
$ curl \ --header "Content-Type: application/json" \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ --data '[{ "operation":"replace", "field":"/telephoneNumber", "value":"1234567" }]' \ "http://localhost:8080/openidm/managed/user?_action=patch&_queryId=for-userName&uid=bjensen"
The following example deletes user bjensen
:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/user/bjensen"
9.3. Working With Managed Groups
OpenIDM provides support for a managed group
object. For
a JDBC repository, OpenIDM stores managed groups with all other managed
objects, in the managedobjects
table, and uses the
managedobjectproperties
for indexing. For an OrientDB
repository, managed groups are stored in the managed_group
table.
The managed group object is not provided by default. To use managed groups,
add an object similar to the following to your
conf/managed.json
file:
{ "name" : "group" },
With this addition, OpenIDM provides RESTful access to managed groups, at the
context path /openidm/managed/group
.
For an example of a deployment that uses managed groups, see "Sample 2d - Synchronizing LDAP Groups" in the Samples Guide.
9.4. Working With Managed Roles
OpenIDM supports two types of roles:
Provisioning roles - used to specify how objects are provisioned to an external system.
Authorization roles - used to specify the authorization rights of a managed object internally, within OpenIDM.
Provisioning roles are always created as managed roles, at the context path
openidm/managed/role/role-name
.
Provisioning roles are granted to managed users as values of the user's
roles
property.
Authorization roles can be created either as managed roles (at the context
path
openidm/managed/role/role-name
)
or as internal roles (at the context path
openidm/repo/internal/role/role-name
).
Authorization roles are granted to managed users as values of the user's
authzRoles
property.
Both provisioning roles and authorization roles use the relationships mechanism to link the role to the managed object to which it applies. For more information about relationships between objects, see "Managing Relationships Between Objects".
This section describes how to create and use managed roles, either managed provisioning roles, or managed authorization roles. For more information about internal authorization roles, and how OpenIDM controls authorization to its own endpoints, see "Authorization".
Managed roles are defined like any other managed object, and are granted to users through the relationships mechanism.
A managed role can be granted manually, as a static value of the user's
roles
or authzRoles
attribute, or
dynamically, as a result of a condition or script. For example, a user might
be granted a role such as sales-role
dynamically, if that
user is in the sales
organization.
A managed user's roles
and authzRoles
attributes take an array of references as a value, where
the references point to the managed roles. For example, if user bjensen has
been granted two provisioning roles (employee
and
supervisor
), the value of bjensen's
roles
attribute would look something like the following:
"roles": [ { "_ref": "managed/role/employee", "_refProperties": { "_id": "c090818d-57fd-435c-b1b1-bb23f47eaf09", "_rev": "1" } }, { "_ref": "managed/role/supervisor", "_refProperties": { "_id": "4961912a-e2df-411a-8c0f-8e63b62dbef6", "_rev": "1" } } ]
Important
The _ref
property points to the ID of the managed role
that has been granted to the user. This particular example uses a
client-assigned ID that is the same as the role name, to make the example
easier to understand. All other examples in this chapter use system-assigned
IDs. In production, you should use system-assigned IDs for role objects.
The following sections describe how to create, read, update, and delete managed roles, and how to grant roles to users. For information about how roles are used to provision users to external systems, see "Working With Role Assignments". For a sample that demonstrates the basic CRUD operations on roles, see "Demonstrating the Roles Implementation" in the Samples Guide.
9.4.1. Creating a Role
The easiest way to create a new role is by using the Admin UI. Select Manage > Role and click New Role on the Role List page. Enter a name and description for the new role and click Save.
Optionally, select Enable Condition to define a query filter that will allow this role to be granted to members dynamically. For more information, see "Granting Roles Dynamically".
To create a managed role over REST, send a PUT or POST request to the
/openidm/managed/role
context path. The following
example creates a managed role named employee
:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "name" : "employee", "description" : "Role granted to workers on the company payroll" }' \ "http://localhost:8080/openidm/managed/role?_action=create" { "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a", "_rev": "1", "name": "employee", "description": "Role granted to workers on the company payroll" }
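From a script, the equivalent is a single create call. A minimal sketch, again letting OpenIDM generate the ID:

var employeeRole = openidm.create("managed/role", null, {
    "name" : "employee",
    "description" : "Role granted to workers on the company payroll"
});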
At this stage, the employee role has no corresponding assignments. Assignments are what provide the provisioning logic to the external system. Assignments are created and maintained as separate managed objects, and are referred to within role definitions. For more information about assignments, see "Working With Role Assignments".
9.4.2. Listing Existing Roles
You can display a list of all configured managed roles over REST or by using the Admin UI.
To list the managed roles in the Admin UI, select Manage > Role.
To list the managed roles over REST, query the
openidm/managed/role
endpoint. The following example
shows the employee
role that you created in the previous
section:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/role?_queryFilter=true" { "result": [ { "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a", "_rev": "1", "name": "employee", "description": "Role granted to workers on the company payroll" } ], ... }
9.4.3. Granting a Role to a User
Roles are granted to users through the relationship mechanism. Relationships are essentially references from one managed object to another, in this case from a user object to a role object. For more information about relationships, see "Managing Relationships Between Objects".
Roles can be granted manually or dynamically.
To grant a role manually, you must do one of the following:
Update the value of the user's
roles
property (if the role is a provisioning role) orauthzRoles
property (if the role is an authorization role) to reference the role.Update the value of the role's
members
property to reference the user.
Manual role grants are described further in "Granting Roles Manually".
Dynamic role grants use the result of a condition or script to update a user's list of roles. Dynamic role grants are described in detail in "Granting Roles Dynamically".
9.4.3.1. Granting Roles Manually
To grant a role to a user manually, use the Admin UI or the REST interface as follows:
- Using the Admin UI
Use one of the following UI methods to grant a role to a user:
Update the user entry:
Select Manage > User and click on the user to whom you want to grant the role.
Select the Provisioning Roles tab and click Add Provisioning Roles.
Select the role from the dropdown list and click Add.
Update the role entry:
Select Manage > Role and click on the role that you want to grant.
Select the Role Members tab and click Add Role Members.
Select the user from the dropdown list and click Add.
- Over the REST interface
Use one of the following methods to grant a role to a user over REST:
Update the user to refer to the role.
The following sample command grants the
employee
role (with IDcedadaed-5774-4d65-b4a2-41d455ed524a
) to user scarter:$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation": "add", "field": "/roles/-", "value": {"_ref" : "managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a"} } ]' \ "http://localhost:8080/openidm/managed/user/scarter" { "_id": "scarter", "_rev": "2", "mail": "scarter@example.com", "givenName": "Steven", "sn": "Carter", "description": "Created By XML1", "userName": "scarter@example.com", "telephoneNumber": "1234567", "accountStatus": "active", "effectiveRoles": [ { "_ref": "managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a" } ], "effectiveAssignments": [] }
Note that scarter's
effectiveRoles
attribute has been updated with a reference to the new role. For more information about effective roles and effective assignments, see "Understanding Effective Roles and Effective Assignments".Update the role to refer to the user.
The following sample command makes scarter a member of the
employee
role:$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation": "add", "field": "/members/-", "value": {"_ref" : "managed/user/scarter"} } ]' \ "http://localhost:8080/openidm/managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a" { "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a", "_rev": "2", "name": "employee", "description": "Role granted to workers on the company payroll" }
Note that the
members
attribute of a role is not returned by default in the output. To show all members of a role, you must specifically request the relationship properties (*_ref
) in your query. The following sample command lists the members of theemployee
role (currently only scarter):$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a?_fields=*_ref,name" { "_id": "cedadaed-5774-4d65-b4a2-41d455ed524a", "_rev": "1", "name": "employee", "members": [ { "_ref": "managed/user/scarter", "_refProperties": { "_id": "98d22d75-7090-47f8-9608-01ff92b447a4", "_rev": "1" } } ], "authzMembers": [], "assignments": [] }
You can replace an existing role grant with a new one by using the
replace
operation in your patch request.The following command replaces scarter's entire
roles
entry (that is, overwrites any existing roles) with a single entry, the reference to theemployee
role (IDcedadaed-5774-4d65-b4a2-41d455ed524a
):$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation": "replace", "field":"/roles", "value":[ {"_ref":"managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a"} ] } ]' \ "http://localhost:8080/openidm/managed/user/scarter"
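The same manual grant can be applied from a script by patching the user. A minimal sketch, reusing the role ID from the earlier example and passing a null revision to skip revision checking:

openidm.patch("managed/user/scarter", null, [{
    "operation" : "add",
    "field" : "/roles/-",
    "value" : { "_ref" : "managed/role/cedadaed-5774-4d65-b4a2-41d455ed524a" }
}]);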
9.4.3.2. Granting Roles Dynamically
The previous section showed how to grant roles to a user manually, by
listing a reference to the role as a value of the user's
roles
attribute. OpenIDM also supports the following
methods of granting a role dynamically:
Granting a role based on a condition, where that condition is expressed in a query filter in the role definition. If the condition is
true
for a particular member, that member is granted the role.Using a custom script to define a more complex role granting strategy.
9.4.3.2.1. Granting Roles Based on a Condition
A role that is granted based on a defined condition is called a conditional role. To create a conditional role, include a query filter in the role definition.
To create a conditional role by using the Admin UI, select Condition on the
role Details page, then define the query filter that will be used to assess
the condition. In the following example, the role
fr-employee
will be granted only to those users who live
in France (whose country
property is set to
FR
):
To create a conditional role over REST, include the query filter as a value
of the condition
property in the role definition. The
following command creates a similar fr-employee role:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "name": "fr-employee", "description": "Role granted to employees resident in France", "condition": "/country eq \"FR\"" }' \ "http://localhost:8080/openidm/managed/role?_action=create" { "_id": "4b0a3e42-e5be-461b-a995-3e66c74551c1", "_rev": "1", "name": "fr-employee", "description": "Role granted to employees resident in France", "condition": "/country eq \"FR\"" }
When a conditional role is created or updated, OpenIDM automatically
assesses all managed users, and recalculates the value of their
roles
property, if they qualify for that role. When a
condition is removed from a role, that is, when the role becomes an
unconditional role, all conditional grants are removed. So, users who were
granted the role based on the condition have that role removed from their
roles
property.
Caution
When a conditional role is defined in an existing data set, every user entry (including the mapped entries on remote systems) must be updated with the assignments implied by that conditional role. The time that it takes to create a new conditional role is impacted by the following items:
The number of managed users affected by the condition
The number of assignments related to the conditional role
The average time required to provision updates to all remote systems affected by those assignments
In a data set with a very large number of users, creating a new conditional role can therefore incur a significant performance cost at the time of creation. Ideally, you should set up your conditional roles at the beginning of your deployment to avoid performance issues later.
9.4.3.2.2. Granting Roles By Using Custom Scripts
The easiest way to grant roles dynamically is to use conditional roles, as described in "Granting Roles Based on a Condition". If your deployment requires complex conditional logic that cannot be achieved with a query filter, you can create a custom script to grant the role, as follows:
Create a
roles
directory in your project'sscript
directory and copy the default effective roles script to that new directory:$ mkdir project-dir/script/roles/ $ cp /path/to/openidm/bin/defaults/script/roles/effectiveRoles.js \ project-dir/script/roles/
The new script will override the default effective roles script.
Modify the script to reference additional roles that have not been granted manually, or as the result of a conditional grant. The effective roles script calculates the grants that are in effect when the user is retrieved.
For example, the following addition to the
effectiveRoles.js
script grants the rolesdynamic-role1
anddynamic-role2
to all active users (managed user objects whoseaccountStatus
value isactive
). This example assumes that you have already created the managed roles,dynamic-role1
(with IDd2e29d5f-0d74-4d04-bcfe-b1daf508ad7c
) anddynamic-role2
(with ID709fed03-897b-4ff0-8a59-6faaa34e3af6
), and their corresponding assignments:

// This is the location to expand to dynamic roles,
// project role script return values can then be added via
// effectiveRoles = effectiveRoles.concat(dynamicRolesArray);
if (object.accountStatus === 'active') {
    effectiveRoles = effectiveRoles.concat([
        {"_ref": "managed/role/d2e29d5f-0d74-4d04-bcfe-b1daf508ad7c"},
        {"_ref": "managed/role/709fed03-897b-4ff0-8a59-6faaa34e3af6"}
    ]);
}
Note
For conditional roles, the user's roles
property is
updated if the user meets the condition. For custom scripted roles, the
user's effectiveRoles
property is calculated when the
user is retrieved and includes the dynamic roles according to the custom
script.
If you make any of the following changes to a scripted role grant, you must perform a manual reconciliation of all affected users before assignment changes will take effect on an external system:
If you create a new scripted role grant.
If you change the definition of an existing scripted role grant.
If you change any of the assignment rules for a role that is granted by a custom script.
9.4.4. Using Temporal Constraints to Restrict Effective Roles
To restrict the period during which a role is effective, you can set a temporal constraint on the role itself, or on the role grant. A temporal constraint that is set on a role definition applies to all grants of that role. A temporal constraint that is set on a role grant enables you to specify the period that the role is valid per user.
For example, you might want a role definition such as
contractors-2016
to apply to all contract employees
only for the year 2016. Or you might want a
contractors
role to apply to an individual user only
for the period of his contract of employment.
The following sections describe how to set temporal constraints on role definitions, and on individual role grants.
9.4.4.1. Adding a Temporal Constraint to a Role Definition
When you create a role, you can include a temporal constraint in the role definition that restricts the validity of the entire role, regardless of how that role is granted. Temporal constraints are expressed as a time interval in ISO 8601 date and time format. For more information on this format, see the ISO 8601 standard .
To restrict the period during which a role is valid by using the Admin UI, select Temporal Constraint on the role Details page, then select the timezone and start and end dates for the required period.
In the following example, the Contractor role is effective from January 1st, 2016 to January 1st, 2017.
The following example adds a similar contractor
role,
over the REST interface:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "name" : "contractor", "description" : "Role granted to contract workers for 2016", "temporalConstraints" : [ { "duration" : "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z" } ] }' \ "http://localhost:8080/openidm/managed/role?_action=create" { "_id": "071283a8-0237-40a2-a31e-ceaa4d93c93d", "_rev": "1", "name": "contractor", "description": "Role granted to contract workers for 2016", "temporalConstraints": [ { "duration": "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z" } ] }
The preceding example specifies the time zone as Coordinated Universal Time
(UTC) by appending Z
to the time. If no time zone
information is provided, the time zone is assumed to be local time. To
specify a different time zone, include an offset (from UTC) in the format
±hh:mm
. For example, an interval of
2016-01-01T00:00:00.000+04:00/2017-01-01T00:00:00.000+04:00
specifies a time zone that is four hours ahead of UTC.
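For example, a script that creates a role with an offset-based constraint might look like the following minimal sketch (the role name is illustrative):

var gulfContractor = openidm.create("managed/role", null, {
    "name" : "contractor-gulf",
    "description" : "Contractor role, four hours ahead of UTC",
    "temporalConstraints" : [
        { "duration" : "2016-01-01T00:00:00.000+04:00/2017-01-01T00:00:00.000+04:00" }
    ]
});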
When the period defined by the constraint has ended, the role object remains in the repository but the effective roles script will not include the role in the list of effective roles for any user.
The following example assumes that user scarter has been granted a role
contractor-april
. A temporal constraint has been
included in the contractor-april
definition that
specifies that the role should be applicable only during the month of April
2016. At the end of this period, a query on scarter's entry shows that his
roles
property still includes the
contractor-april
role (with ID
3eb67be6-205b-483d-b36d-562b43a04ff8
), but his
effectiveRoles
property does not:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/scarter?_fields=_id,userName,roles,effectiveRoles" { "_id": "scarter", "_rev": "1", "userName": "scarter@example.com", "roles": [ { "_ref": "managed/role/3eb67be6-205b-483d-b36d-562b43a04ff8", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "257099f5-56e5-4ce0-8580-f0f4d4b93d93", "_rev": "1" } } ], "effectiveRoles": [] }
In other words, the role is still in place but is no longer effective.
9.4.4.2. Adding a Temporal Constraint to a Role Grant
To restrict the validity of a role for individual users, you can apply a temporal constraint at the grant level, rather than as part of the role definition. In this case, the temporal constraint is taken into account per user, when the user's effective roles are calculated. Temporal constraints that are defined at the grant level can be different for each user who is a member of that role.
To restrict the period during which a role grant is valid by using the Admin UI, set a temporal constraint when you add the member to the role.
For example, to specify that bjensen be added to a Contractor role only for the period of her employment contract, select Manage > Role, click the Contractor role, and click Add Role Members. On the Add Role Members screen, select bjensen from the list, then enable the Temporal Constraint and specify the start and end date of her contract.
To apply a temporal constraint to a grant over the REST interface, include
the constraint as one of the _refProperties
of the
relationship between the user and the role. The following example assumes a
contractor
role, with ID
9321fd67-30d1-4104-934d-cfd0a22e8182
. The command adds
user bjensen as a member of that role, with a temporal constraint that
specifies that she be a member of the role only for one year, from January
1st, 2016 to January 1st, 2017:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation": "add", "field": "/members/-", "value": { "_ref" : "managed/user/bjensen", "_refProperties": { "temporalConstraints": [{"duration": "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z"}] } } } ]' \ "http://localhost:8080/openidm/managed/role/9321fd67-30d1-4104-934d-cfd0a22e8182" { "_id": "9321fd67-30d1-4104-934d-cfd0a22e8182", "_rev": "2", "name": "contractor", "description": "Role for contract workers" }
A query on bjensen's roles property shows that the temporal constraint has been applied to this grant:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/bjensen/roles?_queryFilter=true" { "result": [ { "_ref": "managed/role/9321fd67-30d1-4104-934d-cfd0a22e8182", "_refProperties": { "temporalConstraints": [ { "duration": "2016-01-01T00:00:00.000Z/2017-01-01T00:00:00.000Z" } ], "_id": "84f5342c-cebe-4f0b-96c9-0267bf68a095", "_rev": "1" } } ], ... }
9.4.5. Querying a User's Manual and Conditional Roles
The easiest way to check what roles have been granted to a user, either manually, or as the result of a condition, is to look at the user's entry in the Admin UI. Select Manage > User, click on the user whose roles you want to see, and select the Provisioning Roles tab.
To obtain a similar list over the REST interface, you can query the user's
roles
property. The following sample query shows that
scarter has been granted two roles - an employee
role
(with ID 6bf4701a-7579-43c4-8bb4-7fd6cac552a1
) and an
fr-employee
role (with ID
00561df0-1e7d-4c8a-9c1e-3b1096116903
).
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/scarter/roles?_queryFilter=true&_fields=_ref,_refProperties,name" { "result": [ { "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "8417106e-c3ef-4f59-a482-4c92dbf00308", "_rev": "2" }, "name": "employee" }, { "_ref": "managed/role/00561df0-1e7d-4c8a-9c1e-3b1096116903", "_refProperties": { "_grantType": "conditional", "_id": "e59ce7c3-46ce-492a-ba01-be27af731435", "_rev": "1" }, "name": "fr-employee" } ], ... }
Note that the fr-employee
role has an additional
reference property, _grantType
. This property indicates
how the role was granted to the user. If there is no
_grantType
, the role was granted manually.
Querying a user's roles in this way does not return
any roles that would be in effect as a result of a custom script, or of any
temporal constraint applied to the role. To return a complete list of
all the roles in effect at a specific time, you need
to query the user's effectiveRoles
property, as follows:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/scarter?_fields=effectiveRoles"
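The output resembles the following (illustrative; the IDs, revisions, and fields returned depend on your data):
{ "_id": "scarter", "_rev": "2", "effectiveRoles": [ { "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1" }, { "_ref": "managed/role/00561df0-1e7d-4c8a-9c1e-3b1096116903" } ] }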
9.4.6. Deleting a User's Roles
Roles that have been granted manually can be removed from a user's entry in two ways:
Update the value of the user's roles property (if the role is a provisioning role) or authzRoles property (if the role is an authorization role) to remove the reference to the role.

Update the value of the role's members property to remove the reference to that user.
Both of these actions can be achieved by using the Admin UI, or over REST.
- Using the Admin UI
Use one of the following methods to remove a user's roles:
Select Manage > User and click on the user whose role or roles you want to remove.
Select the Provisioning Roles tab, select the role that you want to remove, and click Remove Selected Provisioning Roles.
Select Manage > Role and click on the role whose members you want to remove.
Select the Role Members tab, select the member or members that you want to remove, and click Remove Selected Role Members.
- Over the REST interface
Use one of the following methods to remove a role grant from a user:
Delete the role from the user's
roles
property, including the reference ID (the ID of the relationship between the user and the role) in the delete request. The following sample command removes the employee role (with ID 6bf4701a-7579-43c4-8bb4-7fd6cac552a1) from user scarter:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/user/scarter/roles/8417106e-c3ef-4f59-a482-4c92dbf00308" { "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "8417106e-c3ef-4f59-a482-4c92dbf00308", "_rev": "2" } }
PATCH the user entry to remove the role from the array of roles, specifying the value of the role object in the JSON payload.
Caution
When you remove a role in this way, you must include the entire object in the value, as shown in the following example:
$ curl \ --header "Content-type: application/json" \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request PATCH \ --data '[ { "operation" : "remove", "field" : "/roles", "value" : { "_ref": "managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "8417106e-c3ef-4f59-a482-4c92dbf00308", "_rev": "1" } } } ]' \ "http://localhost:8080/openidm/managed/user/scarter" { "_id": "scarter", "_rev": "3", "mail": "scarter@example.com", "givenName": "Steven", "sn": "Carter", "description": "Created By XML1", "userName": "scarter@example.com", "telephoneNumber": "1234567", "accountStatus": "active", "effectiveRoles": [], "effectiveAssignments": [] }
Delete the user from the role's
members
property, including the reference ID (the ID of the relationship between the user and the role) in the delete request. The following example first queries the members of the employee role, to obtain the ID of the relationship, then removes bjensen's membership from that role:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1/members?_queryFilter=true" { "result": [ { "_ref": "managed/user/bjensen", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "3c047f39-a9a3-4030-8d0c-bcd1fadb1d3d", "_rev": "3" } } ], ... } $ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1/members/3c047f39-a9a3-4030-8d0c-bcd1fadb1d3d" { "_ref": "managed/user/bjensen", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "3c047f39-a9a3-4030-8d0c-bcd1fadb1d3d", "_rev": "3" } }
Note
Roles that have been granted as the result of a condition can only be removed when the condition is changed or removed, or when the role itself is deleted.
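For example, to remove the condition from a conditional role over REST, you can patch the role definition. The following is a sketch, reusing the fr-employee role ID from the earlier example; when the condition field is removed, the conditional grants are recalculated for the affected users:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation" : "remove", "field" : "/condition" } ]' \ "http://localhost:8080/openidm/managed/role/00561df0-1e7d-4c8a-9c1e-3b1096116903"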
9.4.7. Deleting a Role Definition
You can delete a managed provisioning or authorization role by using the Admin UI, or over the REST interface.
To delete a role by using the Admin UI, select Manage > Role, select the role you want to remove, and click Delete.
To delete a role over the REST interface, simply delete that managed object.
The following command deletes the employee
role created
in the previous section:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1" { "_id": "6bf4701a-7579-43c4-8bb4-7fd6cac552a1", "_rev": "1", "name": "employee", "description": "Role granted to workers on the company payroll" }
Note
You cannot delete a role if it is currently granted to one or more users.
If you attempt to delete a role that is granted to a user (either over the
REST interface, or by using the Admin UI), OpenIDM returns an error. The
following command shows an attempt to remove the
employee
role while it is still granted to user
scarter:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/role/6bf4701a-7579-43c4-8bb4-7fd6cac552a1" { "code":409, "reason":"Conflict", "message":"Cannot delete a role that is currently granted" }
9.4.8. Working With Role Assignments
Authorization roles control access to OpenIDM itself. Provisioning roles define rules for how attribute values are updated on external systems. These rules are configured through assignments that are attached to a provisioning role definition. The purpose of an assignment is to provision an attribute or set of attributes, based on an object's role membership.
The synchronization mapping configuration between two resources (defined in
the sync.json
file) provides the basic account
provisioning logic (how an account is mapped from a source to a target
system). Role assignments provide additional provisioning logic that is not
covered in the basic mapping configuration. The attributes and values that
are updated by using assignments might include group membership, access to
specific external resources, and so on. A group of assignments can
collectively represent a role.
Assignment objects are created, updated and deleted like any other managed
object, and are attached to a role by using the relationships mechanism, in
much the same way as a role is granted to a user. Assignments are stored in
the repository and are accessible at the context path
/openidm/managed/assignment
.
This section describes how to manipulate managed assignments over the REST
interface, and by using the Admin UI. When you have created an assignment,
and attached it to a role definition, all user objects that reference that
role definition will, as a result, reference the corresponding assignment
in their effectiveAssignments
attribute.
9.4.8.1. Creating an Assignment
The easiest way to create an assignment is by using the Admin UI, as follows:
Select Manage > Assignment and click New Assignment on the Assignment List page.
Enter a name and description for the new assignment, and select the mapping to which the assignment should apply. The mapping indicates the target resource, that is, the resource on which the attributes specified in the assignment will be adjusted.
Click Add Assignment.
Select the Attributes tab and select the attribute or attributes whose values will be adjusted by this assignment.
If a regular text field appears, specify what the value of the attribute should be when this assignment is applied.
If an Item button appears, you can specify a managed object type, such as an object, relationship, or string.
If a Properties button appears, you can specify additional information such as an array of role references, as described in "Working With Managed Roles".
Select the assignment operation from the dropdown list:
Merge With Target - the attribute value will be added to any existing values for that attribute. This operation merges the existing value of the target object attribute with the value(s) from the assignment. If duplicate values are found (for attributes that take a list as a value), each value is included only once in the resulting target. This assignment operation is used only with complex attribute values like arrays and objects, and does not work with strings or numbers. (Property: mergeWithTarget.)

Replace Target - the attribute value will overwrite any existing values for that attribute. The value from the assignment becomes the authoritative source for the attribute. (Property: replaceTarget.)
Select the unassignment operation from the dropdown list. You can set the unassignment operation to one of the following:
Remove From Target - the attribute value is removed from the system object when the user is no longer a member of the role, or when the assignment itself is removed from the role definition. (Property: removeFromTarget.)

No Operation - removing the assignment from the user's effectiveAssignments has no effect on the current state of the attribute in the system object. (Property: noOp.)
Optionally, click the Events tab to specify any scriptable events associated with this assignment.
The assignment and unassignment operations described in the previous step operate at the attribute level. That is, you specify what should happen with each attribute affected by the assignment when the assignment is applied to a user, or removed from a user.
The scriptable On assignment and On unassignment events operate at the assignment level, rather than the attribute level. You define scripts here to apply additional logic or operations that should be performed when a user (or other object) receives or loses an entire assignment. This logic can be anything that is not restricted to an operation on a single attribute.
For information about the variables available to these scripts, see "Variables Available to Role Assignment Scripts".
Click the Roles tab to attach this assignment to an existing role definition.
To create a new assignment over REST, send a PUT or POST request to the
/openidm/managed/assignment
context path.
The following example creates a new managed assignment named
employee
. The JSON payload in this example shows the
following:
The assignment is applied for the mapping managedUser_systemLdapAccounts, so attributes will be updated on the external LDAP system specified in this mapping.

The name of the attribute on the external system whose value will be set is employeeType, and its value will be set to Employee.

When the assignment is applied during a sync operation, the attribute value Employee will be added to any existing values for that attribute. When the assignment is removed (if the role is deleted, or if the managed user is no longer a member of that role), the attribute value Employee will be removed from the values of that attribute.
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "name" : "employee", "description": "Assignment for employees.", "mapping" : "managedUser_systemLdapAccounts", "attributes": [ { "name": "employeeType", "value": "Employee", "assignmentOperation" : "mergeWithTarget", "unassignmentOperation" : "removeFromTarget" } ] }' \ "http://localhost:8080/openidm/managed/assignment?_action=create" { "_id": "2fb3aa12-109f-431c-bdb7-e42213747700", "_rev": "1", "name": "employee", "description": "Assignment for employees.", "mapping": "managedUser_systemLdapAccounts", "attributes": [ { "name": "employeeType", "value": "Employee", "assignmentOperation": "mergeWithTarget", "unassignmentOperation": "removeFromTarget" } ] }
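If you also want the assignment-level event scripts described earlier (the Events tab in the Admin UI), you can include them in the same payload. The following fragment is a sketch, assuming the onAssignment and onUnassignment event property names and hypothetical script files in your project:
"onAssignment" : { "type" : "text/javascript", "file" : "script/assignmentGranted.js" }, "onUnassignment" : { "type" : "text/javascript", "file" : "script/assignmentRemoved.js" }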
Note that at this stage, the assignment is not linked to any role, so no user can make use of the assignment. You must add the assignment to a role, as described in the following section.
9.4.8.2. Adding an Assignment to a Role
When you have created a managed role, and a managed assignment, you reference the assignment from the role, in much the same way as a user references a role.
You can update a role definition to include one or more assignments, either by using the Admin UI, or over the REST interface.
- Using the Admin UI
Select Manage > Role and click on the role to which you want to add an assignment.
Select the Managed Assignments tab and click Add Managed Assignments.
Select the assignment that you want to add to the role and click Add.
- Over the REST interface
Update the role definition to include a reference to the ID of the assignment in the assignments property of the role. The following sample command adds the employee assignment (with ID 2fb3aa12-109f-431c-bdb7-e42213747700) to an existing employee role (whose ID is 59a8cc01-bac3-4bae-8012-f639d002ad8c):
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation" : "add", "field" : "/assignments/-", "value" : { "_ref": "managed/assignment/2fb3aa12-109f-431c-bdb7-e42213747700" } } ]' \ "http://localhost:8080/openidm/managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c" { "_id": "59a8cc01-bac3-4bae-8012-f639d002ad8c", "_rev": "3", "name": "employee", "description": "Role granted to workers on the company payroll" }
To check that the assignment was added successfully, you can query the assignments property of the role:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c/assignments?_queryFilter=true&_fields=_ref,_refProperties,name" { "result": [ { "_ref": "managed/assignment/2fb3aa12-109f-431c-bdb7-e42213747700", "_refProperties": { "_id": "686b328a-e2bd-4e48-be25-4a4e12f3b431", "_rev": "4" }, "name": "employee" } ], ... }
Note that the role's
assignments
property now references the assignment that you created in the previous step.
To remove an assignment from a role definition, remove the reference to the
assignment from the role's assignments
property.
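For example, the following sketch removes the assignment added previously, using the relationship ID (686b328a-e2bd-4e48-be25-4a4e12f3b431) returned by the earlier query on the role's assignments property:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c/assignments/686b328a-e2bd-4e48-be25-4a4e12f3b431"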
9.4.8.3. Deleting an Assignment
You can delete an assignment by using the Admin UI, or over the REST interface.
To delete an assignment by using the Admin UI, select Manage > Assignment, select the assignment you want to remove, and click Delete.
To delete an assignment over the REST interface, simply delete that object.
The following command deletes the employee
assignment
created in the previous section:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ "http://localhost:8080/openidm/managed/assignment/2fb3aa12-109f-431c-bdb7-e42213747700" { "_id": "2fb3aa12-109f-431c-bdb7-e42213747700", "_rev": "1", "name": "employee", "description": "Assignment for employees.", "mapping": "managedUser_systemLdapAccounts", "attributes": [ { "name": "employeeType", "value": "Employee", "assignmentOperation": "mergeWithTarget", "unassignmentOperation": "removeFromTarget" } ] }
Note
You can delete an assignment, even if it is
referenced by a managed role. When the assignment is removed, any users
to whom the corresponding roles were granted will no longer have that
assignment in their list of effectiveAssignments
. For
more information about effective roles and effective assignments, see
"Understanding Effective Roles and Effective Assignments".
9.4.9. Understanding Effective Roles and Effective Assignments
Effective roles and
effective assignments are virtual properties of a
user object. Their values are calculated on the fly by
the openidm/bin/defaults/script/roles/effectiveRoles.js
and openidm/bin/defaults/script/roles/effectiveAssignments.js
scripts. These scripts are triggered when a managed user is retrieved.
The following excerpt of a managed.json
file shows how
these two virtual properties are constructed for each managed user object:
"effectiveRoles" : { "type" : "array", "title" : "Effective Roles", "viewable" : false, "returnByDefault" : true, "isVirtual" : true, "onRetrieve" : { "type" : "text/javascript", "source" : "require('roles/effectiveRoles').calculateEffectiveRoles(object, 'roles');" }, "items" : { "type" : "object" } }, "effectiveAssignments" : { "type" : "array", "title" : "Effective Assignments", "viewable" : false, "returnByDefault" : true, "isVirtual" : true, "onRetrieve" : { "type" : "text/javascript", "file" : "roles/effectiveAssignments.js", "effectiveRolesPropName" : "effectiveRoles" }, "items" : { "type" : "object" } },
When a role references an assignment, and a user references the role, that user automatically references the assignment in its list of effective assignments.
The effectiveRoles.js
script uses the
roles
attribute of a user entry to calculate the grants
(manual or conditional) that are currently in effect at the time of
retrieval, based on temporal constraints or other custom scripted logic.
The effectiveAssignments.js
script uses the virtual
effectiveRoles
attribute to calculate that user's
effective assignments. The synchronization engine reads the calculated value
of the effectiveAssignments
attribute when it processes
the user. The target system is updated according to the configured
assignmentOperation
for each assignment.
Do not change the default effectiveRoles.js
and
effectiveAssignments.js
scripts. If you need to change
the logic that calculates effectiveRoles
and
effectiveAssignments
, create your own custom script and
include a reference to it in your project's
conf/managed.json
file. For more information about
using custom scripts, see "Scripting Reference".
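For example, to point the virtual property at your own logic, you might replace the onRetrieve script reference in conf/managed.json. A minimal sketch, where script/customEffectiveRoles.js is a hypothetical script in your project:
"effectiveRoles" : { ... "onRetrieve" : { "type" : "text/javascript", "file" : "script/customEffectiveRoles.js" }, ... }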
When a user entry is retrieved, OpenIDM calculates the
effectiveRoles
and effectiveAssignments
for that user based on the current value of the user's
roles
property, and on any roles that might be granted
dynamically through a custom script. The previous set of examples showed the
creation of a role employee
that referenced an assignment
employee
and was granted to user bjensen. Querying that
user entry would show the following effective roles and effective
assignments:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/bjensen?_fields=userName,roles,effectiveRoles,effectiveAssignments" { "_id": "bjensen", "_rev": "2", "userName": "bjensen@example.com", "roles": [ { "_ref": "managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c", "_refProperties": { "temporalConstraints": [], "_grantType": "", "_id": "881f0b96-06e9-4af4-b86b-aba4ee15e4ef", "_rev": "2" } } ], "effectiveRoles": [ { "_ref": "managed/role/59a8cc01-bac3-4bae-8012-f639d002ad8c" } ], "effectiveAssignments": [ { "name": "employee", "description": "Assignment for employees.", "mapping": "managedUser_systemLdapAccounts", "attributes": [ { "name": "employeeType", "value": "Employee", "assignmentOperation": "mergeWithTarget", "unassignmentOperation": "removeFromTarget" } ], "_id": "4606245c-9412-4f1f-af0c-2b06852dedb8", "_rev": "2" } ] }
In this example, synchronizing the managed/user repository with the
external LDAP system defined in the mapping should populate user bjensen's
employeeType
attribute in LDAP with the value
Employee
.
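To run such a synchronization on demand, you can launch a reconciliation over REST. A sketch, using the mapping name from the earlier examples:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ "http://localhost:8080/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"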
9.4.10. Managed Role Script Hooks
Like any other managed object, a role has script hooks that enable you to
configure role behavior. The default role definition in
conf/managed.json
includes the following script hooks:
{ "name" : "role", "onDelete" : { "type" : "text/javascript", "file" : "roles/onDelete-roles.js" }, "onSync" : { "type" : "text/javascript", "source" : "require('roles/onSync-roles').syncUsersOfRoles(resourceName, oldObject, newObject, ['members']);" }, "onCreate" : { "type" : "text/javascript", "source" : "require('roles/conditionalRoles').roleCreate(object);" }, "onUpdate" : { "type" : "text/javascript", "source" : "require('roles/conditionalRoles').roleUpdate(oldObject, object);" }, "postCreate" : { "type" : "text/javascript", "file" : "roles/postOperation-roles.js" }, "postUpdate" : { "type" : "text/javascript", "file" : "roles/postOperation-roles.js" }, "postDelete" : { "type" : "text/javascript", "file" : "roles/postOperation-roles.js" }, ...
When a role is deleted, the onDelete
script hook calls
the bin/defaults/script/roles/onDelete-roles.js
script.
When a role is synchronized, the onSync
hook causes a
synchronization operation on all managed objects that reference the role.
When a conditional role is created or updated, the
onCreate
and onUpdate
script hooks
force an update on all managed users affected by the conditional role.
Directly after a role is created, updated or deleted, the
postCreate
, postUpdate
, and
postDelete
hooks call the
bin/defaults/script/roles/postOperation-roles.js
script.
Depending on when this script is called, it either creates or removes the
scheduled jobs required to manage temporal constraints on roles.
9.5. Managing Relationships Between Objects
OpenIDM enables you to define relationships between two managed objects. Managed roles are implemented using relationship objects, but you can create a variety of relationship objects, as required by your deployment.
9.5.1. Defining a Relationship Type
Relationships are defined in your project's managed object configuration
file (conf/managed.json
). By default, OpenIDM provides
a relationship named manager
, which enables you to
configure a management relationship between two managed users. The
manager
relationship is a good example from which to
understand how relationships work.
The default manager
relationship is configured as
follows:
"manager" : { "type" : "relationship", "returnByDefault" : false, "description" : "", "title" : "Manager", "viewable" : true, "searchable" : false, "properties" : { "_ref" : { "type" : "string" }, "_refProperties": { "type": "object", "properties": { "_id": { "type": "string" } } } },
All relationships have the following configurable properties:
type (string)

The object type. Must be relationship for a relationship object.

returnByDefault (boolean, true, false)

Specifies whether the relationship should be returned in the result of a read or search query on the managed object that has the relationship. If included in an array, always set this property to false. By default, relationships are not returned, unless explicitly requested.

description (string, optional)

An optional string that provides additional information about the relationship object.

title (string)

Used by the UI to refer to the relationship.

viewable (boolean, true, false)

Specifies whether the relationship is visible as a field in the UI. The default value is true.

searchable (boolean, true, false)

Specifies whether values of the relationship can be searched in the UI. For example, if you set this property to true for the manager relationship, a user will be able to search for managed user entries using the manager field as a filter.

_ref (JSON object)

Specifies how the relationship between two managed objects is referenced. In the relationship definition, the value of this property is { "type" : "string" }. In a managed user entry, the value of the _ref property is the reference to the other resource. The _ref property is described in more detail in "Establishing a Relationship Between Two Objects".

_refProperties (JSON object)

Specifies any required properties from the relationship that should be included in the managed object. The _refProperties field includes a unique ID (_id) and the revision (_rev) of the object. _refProperties can also contain arbitrary fields to support metadata within the relationship.
9.5.2. Establishing a Relationship Between Two Objects
When you have defined a relationship type (such as the
manager
relationship, described in the previous section),
you can reference that relationship from a managed user, using the
_ref
property.
For example, imagine that you are creating a new user, psmith, and that
psmith's manager will be bjensen. You would add psmith's user entry, and
reference bjensen's entry with the
_ref
property, as follows:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "If-None-Match: *" \ --header "Content-Type: application/json" \ --request PUT \ --data '{ "sn":"Smith", "userName":"psmith", "givenName":"Patricia", "displayName":"Patti Smith", "description" : "psmith - new user", "mail" : "psmith@example.com", "phoneNumber" : "0831245986", "password" : "Passw0rd", "manager" : {"_ref" : "managed/user/bjensen"} }' \ "http://localhost:8080/openidm/managed/user/psmith" { "_id": "psmith", "_rev": "1", "sn": "Smith", "userName": "psmith", "givenName": "Patricia", "displayName": "Patti Smith", "description": "psmith - new user", "mail": "psmith@example.com", "phoneNumber": "0831245986", "accountStatus": "active", "effectiveRoles": [], "effectiveAssignments": [] }
Note that the relationship information is not returned by default in the command-line output.
Any change to a relationship triggers a synchronization operation on any
other managed objects that are referenced by the relationship. For example,
OpenIDM maintains referential integrity by deleting the relationship
reference, if the object referred to by that relationship is deleted. In our
example, if bjensen's user entry is deleted, the corresponding reference in
psmith's manager
property is removed.
9.5.3. Validating Relationships Between Objects
Optionally, you can specify that a relationship between two objects must be validated when the relationship is created. For example, you can indicate that a user cannot reference a role, if that role does not exist.
When you create a new relationship type, validation is disabled by default,
because validation requires a query to check that the referenced object
exists, which can be expensive if it is not needed. To configure validation
of a referenced relationship, set
"validate": true
in the object configuration (in
managed.json
). The managed.json
files provided with OpenIDM enable validation for the following
relationships:
For user objects ‒ roles, managers, and reports
For role objects ‒ members and assignments
For assignment objects ‒ roles
The following configuration of the manager
relationship
enables validation, and prevents a user from referencing a manager that has
not already been created:
"manager" : { "type" : "relationship", ... "validate" : true,
9.5.4. Working With Bi-Directional Relationships
In the Admin UI, it is useful to define a relationship between two objects in both directions. For example, a relationship between users and managers might indicate a reverse relationship between the manager and her direct report. Reverse relationships are particularly useful in querying. For example, you might want to query jdoe's user entry to discover who his manager is, or query bjensen's user entry to discover all the users who report to bjensen.
A reverse relationship is declared in the managed object configuration
(conf/managed.json
). Consider the following sample
excerpt of the default managed object configuration:
"reports" : { "description" : "", "title" : "Direct Reports", ... "type" : "array", "returnByDefault" : false, "items" : { "type" : "relationship", "reverseRelationship" : true, "reversePropertyName" : "manager", "validate" : true, } ...
The reports property defines a relationship between users and their managers, so you can refer to a managed user's direct reports through that user's reports property. Because reports is declared as a reverse relationship ("reverseRelationship" : true), it is the inverse of the manager property. In other words, querying a user's reports lists all users whose manager property is set to the currently queried user.
That reverse relationship uses a resourceCollection
of
managed users, as shown here:
"resourceCollection" : [ { "path" : "managed/user", "label" : "User", "query" : { "queryFilter" : "true", "fields" : [ "userName", "givenName", "sn" ], "sortKeys" : [ "userName" ] } } ]
In this case, users are listed with the noted fields. You can configure these relationships from the Admin UI. For an example of the process, see "Configure a Relationship From the User Managed Object".
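To query a reverse relationship over REST, query the property in the same way as any other relationship. For example, the following sketch lists bjensen's direct reports (output depends on your data):
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/bjensen/reports?_queryFilter=true&_fields=_ref"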
9.5.5. Viewing Relationships Over REST
By default, information about relationships is not returned as the result of a GET request on a managed object. You must explicitly include the relationship property in the request, for example:
$ curl --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/psmith?_fields=manager" { "_id": "psmith", "_rev": "1", "manager": { "_ref": "managed/user/bjensen", "_refProperties": { "_id": "e15779ad-be54-4a1c-b643-133dd9bb2e99", "_rev": "1" } } }
To obtain more information about the referenced object (psmith's manager, in
this case), you can include additional fields from the referenced object in
the query, using the syntax object/property
(for a simple
string value) or object/*/property
(for an array of
values).
The following example returns the email address and contact number for psmith's manager:
$ curl --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/psmith?_fields=manager/mail,manager/phoneNumber" { "_id": "psmith", "_rev": "1", "phoneNumber": "1234567", "manager": { "_ref": "managed/user/bjensen", "_refProperties": { "_id": "e15779ad-be54-4a1c-b643-133dd9bb2e99", "_rev": "1" }, "mail": "bjensen@example.com", "phoneNumber": "1234567" } }
You can query all the relationships associated with a managed object by
querying the reference (*_ref
) property of the object.
For example, the following query shows all the objects that are referenced
by psmith's entry:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/psmith?_fields=*_ref" { "_id": "psmith", "_rev": "1", "roles": [], "authzRoles": [ { "_ref": "repo/internal/role/openidm-authorized", "_refProperties": { "_id": "8e7b2c97-dfa8-4eec-a95b-b40b710d443d", "_rev": "1" } } ], "manager": { "_ref": "managed/user/bjensen", "_refProperties": { "_id": "3a246327-a972-4576-b6a6-7126df780029", "_rev": "1" } } }
9.5.6. Viewing Relationships in Graph Form
OpenIDM provides a relationship graph widget that gives a visual display of the relationships between objects.
The relationship graph widget is not displayed on any dashboard by default. You can add it as follows:
Log into the Admin UI.
Select Dashboards, and choose the dashboard to which you want to add the widget.
For more information about managing dashboards in the UI, see "Creating and Modifying Dashboards".
Select Add Widgets. In the Add Widgets window, scroll to the Identity Relationships widget, and click Add.
Select Close to exit the Add Widgets window.
On the dashboard, scroll down to the Identity Relationships widget. Select the vertical ellipses > Settings to configure the widget.
Choose the Widget Size, then enter the object for which you want to display relationships, such as user, and the search property for that object, such as userName.

If you want to include an additional level of relationships in the graph, select Display sub-relationships. In a traditional organization, this option will display a user's manager, along with all users with that same manager.
Click Save.
When you have configured the Identity Relationships widget, enter the user whose relationships you want to search.
The following graph shows all of bjensen's relationships. The graph shows bjensen's manager (emacheke) and all other users who are direct reports of emacheke.
Select or deselect the Data Types on the left of the screen to control how much information is displayed.
Select and move the graph for a better view. Double-click on any user in the graph to view that user's profile.
9.6. Running Scripts on Managed Objects
OpenIDM provides a number of hooks that enable you to
manipulate managed objects using scripts. These scripts can be triggered
during various stages of the lifecycle of the managed object, and are
defined in the managed objects configuration file
(managed.json
).
The scripts can be triggered when a managed object is created (onCreate), updated (onUpdate), retrieved (onRetrieve), deleted (onDelete), validated (onValidate), or stored in the repository (onStore). A script can also be triggered when a change to a managed object triggers an implicit synchronization operation (onSync).
In addition, OpenIDM supports the use of post-action scripts for managed objects, including after the creation of an object is complete (postCreate), after the update of an object is complete (postUpdate), and after the deletion of an object (postDelete).
The following sample extract of a managed.json
file runs
a script to calculate the effective assignments of a managed object, whenever
that object is retrieved from the repository:
"effectiveAssignments" : { "type" : "array", "title" : "Effective Assignments", "viewable" : false, "returnByDefault" : true, "isVirtual" : true, "onRetrieve" : { "type" : "text/javascript", "file" : "roles/effectiveAssignments.js", "effectiveRolesPropName" : "effectiveRoles" }, "items" : { "type" : "object" } },
9.7. Encoding Attribute Values
OpenIDM supports two methods of encoding attribute values for managed objects - reversible encryption and the use of salted hashing algorithms. Attribute values that might be encoded include passwords, authentication questions, credit card numbers, and social security numbers. If passwords are already encoded on the external resource, they are generally excluded from the synchronization process. For more information, see "Managing Passwords".
You configure attribute value encoding, per schema property, in the managed
object configuration (in your project's conf/managed.json
file). The following sections show how to use reversible encryption and
salted hash algorithms to encode attribute values.
9.7.1. Encoding Attribute Values With Reversible Encryption
The following excerpt of a managed.json
file shows a
managed object configuration that encrypts and decrypts the
password
attribute using the default symmetric key:
{ "objects" : [ { "name" : "user", ... "schema" : { ... "properties" : { ... "password" : { "title" : "Password", ... "encryption" : { "key" : "openidm-sym-default" }, "scope" : "private", ... } ] }
Tip
To configure encryption of properties by using the Admin UI:
Select Configure > Managed Objects, and click on the object type whose property values you want to encrypt (for example User).
On the Properties tab, select the property whose value should be encrypted and select the Encrypt checkbox.
For information about encrypting attribute values from the command-line, see "Using the encrypt Subcommand".
Important
Hashing is a one-way operation - property values that are hashed cannot be "unhashed" in the way that they can be decrypted. Therefore, if you hash the value of any property, you cannot synchronize that property value to an external resource. For managed object properties with hashed values, you must either exclude those properties from the mapping or set a random default value if the external resource requires the property.
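For example, if the target resource requires a password but the managed value is hashed, the mapping in sync.json can supply a generated value instead of synchronizing the hash. The following property definition is a sketch, assuming a target attribute named userPassword; the transform simply generates a random placeholder value:
{ "target" : "userPassword", "transform" : { "type" : "text/javascript", "source" : "java.util.UUID.randomUUID().toString()" } }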
9.7.2. Encoding Attribute Values by Using Salted Hash Algorithms
To encode attribute values with salted hash algorithms, add the
secureHash
property to the attribute definition, and
specify the algorithm that should be used to hash the value. OpenIDM
supports the following hash algorithms:
MD5

SHA-1

SHA-256

SHA-384

SHA-512
The following excerpt of a managed.json
file shows a
managed object configuration that hashes the values of the
password
attribute using the SHA-1
algorithm:
{ "objects" : [ { "name" : "user", ... "schema" : { ... "properties" : { ... "password" : { "title" : "Password", ... "secureHash" : { "algorithm" : "SHA-1" }, "scope" : "private", ... } ] }
Tip
To configure hashing of properties by using the Admin UI:
Select Configure > Managed Objects, and click on the object type whose property values you want to hash (for example User).
On the Properties tab, select the property whose value must be hashed and select the Hash checkbox.
Select the algorithm that should be used to hash the property value.
OpenIDM supports the following hash algorithms:
MD5
SHA-1
SHA-256
SHA-384
SHA-512
For information about hashing attribute values from the command-line, see "Using the secureHash Subcommand".
9.8. Restricting HTTP Access to Sensitive Data
You can protect specific sensitive managed data by marking the corresponding
properties as private
. Private data, whether it is
encrypted or not, is not accessible over the REST interface. Properties that
are marked as private are removed from an object when that object is
retrieved over REST.
To mark a property as private, set its scope
to
private
in the conf/managed.json
file.
The following extract of the managed.json
file shows how
HTTP access is prevented on the password
and
securityAnswer
properties:
{ "objects": [ { "name": "user", "schema": { "id" : "http://jsonschema.net", "title" : "User", ... "properties": { ... { "name": "securityAnswer", "encryption": { "key": "openidm-sym-default" }, "scope" : "private" }, { "name": "password", "encryption": { "key": "openidm-sym-default" }' "scope" : "private" } }, ... } ] }
Tip
To configure private properties by using the Admin UI:
Select Configure > Managed Objects, and click on the object type whose property values you want to make private (for example User).
On the Properties tab, select the property that must be private and select the Private checkbox.
A potential caveat with using private properties is that private properties
are removed if an object is updated by using an HTTP
PUT
request. A PUT
request replaces the
entire object in the repository. Because properties that are marked as
private are ignored in HTTP requests, these properties are effectively
removed from the object when the update is done. To work around this
limitation, do not use PUT
requests if you have configured
private properties. Instead, use a PATCH
request to update
only those properties that need to be changed.
For example, to update the givenName
of user jdoe, you
could run the following command:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '[ { "operation":"replace", "field":"/givenName", "value":"Jon" } ]' \ "http://localhost:8080/openidm/managed/user?_action=patch&_queryId=for-userName&uid=jdoe"
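Alternatively, you can patch the user directly by ID. This sketch assumes that jdoe is the object ID, as in the other examples in this chapter:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request PATCH \ --data '[ { "operation":"replace", "field":"/givenName", "value":"Jon" } ]' \ "http://localhost:8080/openidm/managed/user/jdoe"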
Note
The filtering of private data applies only to direct HTTP read and query calls on managed objects. No automatic filtering is done for internal callers, or for the data that these callers choose to expose.
Chapter 10. Configuring Social ID Providers
OpenIDM provides a standards-based solution for social authentication requirements, based on the OAuth 2.0 and OpenID Connect 1.0 standards. These standards are closely related: OpenID Connect 1.0 is an authentication layer built on OAuth 2.0.
This chapter describes how to configure OpenIDM to register and authenticate users with multiple social identity providers.
To configure different social identity providers, you'll take the same general steps:
Set up the provider. You'll need information such as a Client ID and Client Secret to set up an interface with OpenIDM.

Configure the provider on OpenIDM.

Set up User Registration. Activate Social Registration in the applicable Admin UI screen or configuration file.

Set up an authentication module. OpenIDM includes a SOCIAL_PROVIDERS module for this purpose. You'll configure that module in the same way for all supported providers, as described in "Configuring the Social Providers Authentication Module".

After configuration is complete, test the result. For a common basic procedure, see "Testing the Social ID Provider".
To understand how data is transmitted between OpenIDM and a social identity provider, read "OpenID Connect Authorization Code Flow".
Note
For all social identity providers, set up an FQDN for OpenIDM, along with
information in a DNS server, or system hosts
files. For test purposes, FQDNs that comply with RFC 2606, such as
localhost
and openidm.example.com
,
are acceptable.
10.1. OpenID Connect Authorization Code Flow
The OpenID Connect Authorization Code Flow specifies how OpenIDM (Relying Party) interacts with the OpenID Provider (Social ID Provider), based on use of the OAuth 2.0 authorization grant. The following sequence diagram illustrates successful processing from the authorization request, through grant of the authorization code, access token, ID token, and provisioning from the social ID provider to OpenIDM.
The following list describes details of each item in the authorization flow:
A user navigates to the OpenIDM Self-Service UI, and selects the
Sign In
link for the desired social identity provider.OpenIDM prepares an authorization request.
OpenIDM sends the request to the Authorization Endpoint that you configured for the social identity provider, with a Client ID.
The social identity provider requests end user authentication and consent.
The end user transmits authentication and consent.
The social identity provider sends a redirect message, with an authorization code, to the end user's browser.
The browser transmits the redirect message, with the authorization code, to OpenIDM.
OpenIDM records the authorization code, and sends it to the social identity provider Token Endpoint.
The social identity provider token endpoint returns access and ID tokens.
OpenIDM validates the token, and sends it to the social identity provider User Info Endpoint.
The social identity provider responds with information on the user's account, that OpenIDM can provision as a new Managed User.
You'll configure these credentials and endpoints, in some form, for each social identity provider.
10.2. Many Social ID Providers, One Schema
Most social ID providers include common properties, such as name, email address, and location.
OpenIDM includes two sets of property maps that translate information from a social ID provider to your managed user objects. These property maps are as follows:
The identityProviders.json file includes a propertyMap code block for each supported provider. This file maps properties from the provider to a generic managed user object. You should not customize this file.

The selfservice.propertymap.json file translates the generic managed user properties to the managed user schema that you have defined in managed.json. If you have customized the managed user schema, this is the file that you must change, to indicate how your custom schema maps to the generic managed user schema.
Examine the identityProviders.json
file in the
conf/
subdirectory for your project. The following
excerpt represents the Facebook propertyMap
code block
from that file:
"propertyMap" : [ { "source" : "id", "target" : "id" }, { "source" : "name", "target" : "displayName" }, { "source" : "first_name", "target" : "givenName" }, { "source" : "last_name", "target" : "familyName" }, { "source" : "email", "target" : "email" }, { "source" : "email", "target" : "username" }, { "source" : "locale", "target" : "locale" } ]
The source lists the Facebook property; the target lists the corresponding property for a generic managed user.
OpenIDM then processes that information through the
selfservice.propertymap.json
file, where the source
corresponds to the generic managed user and the target corresponds to your
customized managed user schema (defined in your project's
managed.json
file).
{ "properties" : [ { "source" : "givenName", "target" : "givenName" }, { "source" : "familyName", "target" : "sn" }, { "source" : "email", "target" : "mail" }, { "source" : "postalAddress", "target" : "postalAddress", "condition" : "/object/postalAddress pr" }, { "source" : "addressLocality", "target" : "city", "condition" : "/object/addressLocality pr" }, { "source" : "addressRegion", "target" : "stateProvince", "condition" : "/object/addressRegion pr" }, { "source" : "postalCode", "target" : "postalCode", "condition" : "/object/postalCode pr" }, { "source" : "country", "target" : "country", "condition" : "/object/country pr" }, { "source" : "phone", "target" : "telephoneNumber", "condition" : "/object/phone pr" }, { "source" : "username", "target" : "userName" } ] }
Tip
To take additional information from a social ID provider, make
sure the property is mapped through the identityProviders.json
and selfservice.propertymap.json
files.
Several of the property mappings include a pr
presence
expression, which is a filter that returns all records with the given
attribute. For more information, see "Presence Expressions".
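For example, if you add a custom property to your managed user schema, you would add a corresponding entry to selfservice.propertymap.json. The following is a sketch, where preferredLanguage is a hypothetical custom property on the managed user object:
{ "source" : "locale", "target" : "preferredLanguage", "condition" : "/object/locale pr" }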
10.3. Setting Up Google as a Social Identity Provider
As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Google as a social identity provider for OpenIDM:
"Configuring the Social Providers Authentication Module". (This section is common for all providers.)
10.3.1. Setting Up Google
To set up Google as a social identity provider for OpenIDM, navigate to the Google API Manager. You'll need a Google account. If you have GMail, you already have a Google account. While you could use a personal Google account, it is best to use an organizational account to avoid problems if specific individuals leave your organization. When you set up a Google social identity provider, you'll need to perform the following tasks:
Plan ahead. It may take some time before the Google+
API that you configure for OpenIDM is ready for use.
In the Google API Manager, select and enable the
Google+
API. It is one of the Google "social" APIs.

Create a project for OpenIDM.
Create OAuth client ID credentials. You'll need to configure an
OAuth consent screen
with at least a product name and email address.

When you set up a Web application for the client ID, you'll need to set up a web client with:
Authorized JavaScript origins
The origin URL for OpenIDM, typically a URL such as
https://openidm.example.com:8443
Authorized redirect URIs
The redirect URI after users are authenticated, typically:
https://openidm.example.com:8443/oauthReturn.html
and https://openidm.example.com:8443/admin/oauthReturn.html
Note
The
oauthReturn.html
file is needed as an intermediate step, as social identity providers do not allow redirect URIs with hash fragments.
In the list of credentials, you'll see a unique
Client ID
and Client secret. You'll need this information when you configure the Google social ID provider in OpenIDM in the following procedure: "Configuring a Google Social ID Provider".
For Google's procedure, see the Google Identity Platform documentation on Setting Up OAuth 2.0.
10.3.2. Configuring a Google Social ID Provider
To configure a Google social ID provider, log into the Admin UI and navigate to Configure > Social ID Providers.
Enable the Google social ID provider, and select the edit icon.
Include the Google values for
Client ID
andClient Secret
for your project, as described earlier in this section.

Under regular and Advanced Options, include the options shown in the following appendix: "Google Social ID Provider Configuration Details". The defaults are based on Google's documentation on OpenID Connect.
The default installation of OpenIDM does not include configuration files
specific to social ID providers. When you enable a Google social ID
provider in the Admin UI, OpenIDM generates the
identityProvider-google.json
file in your project's conf/
subdirectory.
When you review that file, you should see information beyond what you see
in the Admin UI. One part of the file includes the authentication protocol,
OPENID_CONNECT
, the sign-in button,
Authorization scopes, and Google's
authentication identifier for authenticationId
.
{ "name" : "google", "type" : "OPENID_CONNECT", "icon" : "<button class=\"btn btn-lg btn-default btn-block btn-social-provider btn-google\"><img src=\"images/g-logo.png\"> {{action}} with Google</button>", "scope" : [ "openid", "profile", "email" ], "authenticationId" : "id",
Another part of the file includes a propertyMap
, which
maps user information entries between the source
(Google) and the target
(OpenIDM).
The next part of the file includes schema
information,
which includes properties for each social ID account, as collected by
OpenIDM, as well as the order in which it appears in the Admin UI. When
you've registered a user with a Google social ID, you can verify this
by selecting Manage > Google, and then selecting a user.
Finally, there's the part of the file that you may have configured through the Admin UI:
"enabled" : true, "client_id" : "<someUUID>.apps.googleusercontent.com", "client_secret" : { "$crypto" : { "type" : "x-simple-encryption", "value" : { "cipher" : "AES/CBC/PKCS5Padding", "salt" : "<hashValue>", "data" : "<encryptedValue>", "iv" : "<encryptedValue>", "key" : "openidm-sym-default", "mac" : "<hashValue>" } } }, "authorization_endpoint" : "https://accounts.google.com/o/oauth2/v2/auth", "token_endpoint" : "https://www.googleapis.com/oauth2/v4/token", "userinfo_endpoint" : "https://www.googleapis.com/oauth2/v3/userinfo", "well-known" : "https://accounts.google.com/.well-known/openid-configuration" }
If you need more information about the properties in this file, refer to the following appendix: "Google Social ID Provider Configuration Details".
10.3.3. Configuring User Registration to Link to Google
Once you've configured the Google social ID provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and enable the option associated with Social Registration. For more information on OpenIDM user self-service features, see "Configuring User Self-Service".
When you enable one or more social ID providers, OpenIDM changes the
selfservice-registration.json
file in the
conf/
subdirectory for your project, by replacing
userDetails
with socialUserDetails
in the stageConfigs
code block:
"name" : "socialUserDetails"
When you enable social ID providers in User Registration, you're allowing users to register through all active social identity providers.
10.4. Setting Up LinkedIn as a Social Identity Provider
As suggested in the introduction to this chapter, you'll need to take four basic steps to configure LinkedIn as a social identity provider for OpenIDM:
"Configuring the Social Providers Authentication Module". (This section is common to all social providers.)
10.4.1. Setting Up LinkedIn
To set up LinkedIn as a social identity provider for OpenIDM, navigate to
the
LinkedIn Developers page for
My Applications
. You'll need a LinkedIn account.
While you could use a personal LinkedIn account, it is best to use an
organizational account to avoid problems if specific individuals leave your
organization. When you set up a LinkedIn social identity provider, you'll
need to perform the following tasks:
In the LinkedIn Developers page for My Applications, select Create Application.
You'll need to include the following information when creating an application:
Company Name
Application Name
Description
Application Logo
Application Use
Website URL
Business Email
Business Phone
When you see Authentication Keys for your LinkedIn application, save the
Client ID
and Client Secret.

Enable the following default application permissions:
r_basicprofile
r_emailaddress
When you set up a Web application for the client ID, you'll need to set up a web client with OAuth 2.0 Authorized Redirect URLs. For example, if your OpenIDM FQDN is
openidm.example.com
, add the following URLs:https://openidm.example.com:8443/oauthReturn.html
https://openidm.example.com:8443/admin/oauthReturn.html
You can ignore any LinkedIn URL boxes related to OAuth 1.0a.
For LinkedIn's procedure, see their documentation on Authenticating with OAuth 2.0.
10.4.2. Configuring a LinkedIn Social ID Provider
To configure a LinkedIn social ID provider, log into the Admin UI and navigate to Configure > Social ID Providers.
Enable the LinkedIn social ID provider.
Include the values that LinkedIn created for Client ID and Client Secret, as described in "Setting Up LinkedIn".

Under regular and Advanced Options, include the options shown in the following appendix: "LinkedIn Social ID Provider Configuration Details".
The default installation of OpenIDM does not include configuration files specific
to social ID providers. When you enable a LinkedIn social ID provider in the Admin UI, OpenIDM
generates the identityProvider-linkedIn.json
file in your project's conf/
subdirectory.
When you review that file, you should see information beyond what you see
in the Admin UI. One part of the file includes the authentication protocol,
OAUTH
, the sign-in button, and LinkedIn's
authentication identifier for authenticationId
.
{ "name" : "linkedIn", "type" : "OAUTH", "icon" : "<button class=\"btn btn-lg btn-default btn-block btn-social-provider\"> {{action}} with LinkedIn</button>", "scope" : [ "r_basicprofile", "r_emailaddress" ], "authenticationId" : "id",
Another part of the file includes a propertyMap
, which
maps user information entries between the source
(LinkedIn) and the target
(OpenIDM).
The next part of the file includes schema
information,
which includes properties for each social ID account, as collected by
OpenIDM, as well as the order in which it appears in the Admin UI. When
you've registered a user with a LinkedIn social ID, you can verify this
by selecting Manage > LinkedIn, and then selecting a user.
Finally, there's the part of the file that you may have configured through the Admin UI:
"enabled" : true, "client_id" : "<someUUID>", "client_secret" : { "$crypto" : { "type" : "x-simple-encryption", "value" : { "cipher" : "AES/CBC/PKCS5Padding", "salt" : "<hashValue>", "data" : "<encryptedValue>", "iv" : "<encryptedValue>", "key" : "openidm-sym-default", "mac" : "<hashValue>" } } }, "authorization_endpoint" : "https://www.linkedin.com/oauth/v2/authorization", "token_endpoint" : "https://www.linkedin.com/oauth/v2/accessToken", "userinfo_endpoint" : "https://api.linkedin.com/v1/people/~:(id,formatted-name,first-name,last-name,email-address,location)?format=json" }
If you need more information about the properties in this file, refer to the following appendix: "LinkedIn Social ID Provider Configuration Details".
10.4.3. Configuring User Registration to Link to LinkedIn
Once you've configured the LinkedIn social ID provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and enable the option associated with Social Registration. For more information on OpenIDM user self-service features, see "Configuring User Self-Service".
When you enable social ID providers, OpenIDM changes the
selfservice-registration.json
file in the
conf/
subdirectory for your project, by adding
the following entry to the stageConfigs
code block:
"name" : "socialUserDetails"
When you enable social ID providers in User Registration, you're allowing users to register through all active social identity providers.
10.5. Setting Up Facebook as a Social Identity Provider
As suggested in the introduction to this chapter, you'll need to take four basic steps to configure Facebook as a social identity provider for OpenIDM:
"Configuring the Social Providers Authentication Module". (This section is common to all social providers.)
10.5.1. Setting Up Facebook
To set up Facebook as a social identity provider for OpenIDM, navigate to the Facebook for Developers page. You'll need a Facebook account. While you could use a personal Facebook account, it is best to use an organizational account to avoid problems if specific individuals leave your organization. When you set up a Facebook social identity provider, you'll need to perform the following tasks:
Note
This procedure was tested with Facebook API version v2.7.
In the Facebook for Developers page, select My Apps and Add a New App. For OpenIDM, you'll create a Website application.
You'll need to include the following information when creating a Facebook website application:
Display Name
Contact Email
OpenIDM URL
When complete, you should see your App and a link to a Dashboard. Navigate to the Dashboard for your App.
Make a copy of the App ID and App Secret for when you configure the Facebook social ID provider in OpenIDM.
In the settings for your App, you should see an entry for App Domains, such as example.com.
For Facebook's documentation on the subject, see Facebook Login for the Web with the JavaScript SDK.
10.5.2. Configuring a Facebook Social ID Provider
To configure a Facebook social ID provider, log into the Admin UI and navigate to Configure > Social ID Providers.
Enable the Facebook social ID provider.
Include the values that Facebook created for App ID and App Secret, as described in "Setting Up Facebook".
Under regular and Advanced Options, include the options shown in the following appendix: "Facebook Social ID Provider Configuration Details".
The default installation of OpenIDM includes configuration details specific
to social ID providers. When you enable a Facebook social ID provider, in the
Admin UI, OpenIDM generates the identityProvider-facebook.json
file in your project's conf/
subdirectory.
When you review that file, you should see information beyond what you see
in the Admin UI. One part of the file includes the authentication protocol,
OAUTH
, the sign-in button, and Facebook's
authentication identifier for authenticationId
.
{ "name" : "facebook", "type" : "OAUTH", "icon" : "<button class=\"btn btn-lg btn-default btn-block btn-social-provider\"> {{action}} with Facebook</button>", "scope" : [ "email", "user_birthday" ], "authenticationId" : "id",
Another part of the file includes a propertyMap
, which
maps user information entries between the source
(Facebook) and the target
(OpenIDM).
The next part of the file includes schema
information,
which includes properties for each social ID account, as collected by
OpenIDM, as well as the order in which they appear in the Admin UI. When
you've registered a user with a Facebook social ID, you can verify this
by selecting Manage > Facebook, and then selecting a user.
Finally, there's the part of the file that you may have configured through the Admin UI:
"enabled" : true, "client_id" : "<someUUID>", "client_secret" : { "$crypto" : { "type" : "x-simple-encryption", "value" : { "cipher" : "AES/CBC/PKCS5Padding", "salt" : "<hashValue>", "data" : "<encryptedValue>", "iv" : "<encryptedValue>", "key" : "openidm-sym-default", "mac" : "<hashValue>" } } }, "authorization_endpoint" : "https://www.facebook.com/dialog/oauth", "token_endpoint" : "https://graph.facebook.com/v2.7/oauth/access_token", "userinfo_endpoint" : "https://graph.facebook.com/me?fields=id,name,picture,email,first_name,last_name,locale" }
If you need more information about the properties in this file, refer to the following appendix: "Facebook Social ID Provider Configuration Details".
10.5.3. Configuring User Registration to Link to Facebook
Once you've configured the Facebook social ID provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and enable the option associated with Social Registration. For more information on OpenIDM user self-service features, see "Configuring User Self-Service".
When you enable social ID providers, OpenIDM changes the
selfservice-registration.json
file in the
conf/
subdirectory for your project, by adding
the following entry to the stageConfigs
code block:
"name" : "socialUserDetails"
When you enable social ID providers in User Registration, you're allowing users to register through all active social identity providers.
10.6. Setting Up a Custom Social Identity Provider
As suggested in the introduction to this chapter, you'll need to take five basic steps to configure a custom social identity provider for OpenIDM:
"Preparing OpenIDM For a Custom Social ID Provider".
"Setting Up a Custom Social ID Provider".
"Configuring a Custom Social ID Provider".
"Configuring User Registration to Link to a Custom Provider".
"Configuring the Social Providers Authentication Module". (This section is common to all social providers.)
Note
These instructions require the social identity provider to be fully compliant with The OAuth 2.0 Authorization Framework or the OpenID Connect standards.
10.6.1. Preparing OpenIDM For a Custom Social ID Provider
While OpenIDM includes provisions to work with OpenID Connect 1.0 and OAuth 2.0 social identity providers, OpenIDM supports connections only to the providers listed in this chapter.
To set up another social provider, first add a code block
to the identityProviders.json
file, such as:
{ "name" : "custom", "type" : "OAUTH", "icon" : "<button class=\"btn btn-lg btn-default btn-block btn-social-provider btn-custom\"><img src=\"images/custom-logo.png\">{{action}} with Custom Social ID</button>", "authorization_endpoint" : "", "token_endpoint" : "", "userinfo_endpoint" : "", "client_id" : "", "client_secret" : "", "scope" : [ ], "authenticationId" : "id", "schema" : { "id" : "http://jsonschema.net", "viewable" : true, "type" : "object", "$schema" : "http://json-schema.org/draft-03/schema", "properties" : { "id" : { "title" : "ID", "viewable" : true, "type" : "string", "searchable" : true }, "name" : { "title" : "Name", "viewable" : true, "type" : "string", "searchable" : true }, "first_name" : { "title" : "First Name", "viewable" : true, "type" : "string", "searchable" : true }, "last_name" : { "title" : "Last Name", "viewable" : true, "type" : "string", "searchable" : true }, "email" : { "title" : "Email Address", "viewable" : true, "type" : "string", "searchable" : true }, "locale" : { "title" : "Locale Code", "viewable" : true, "type" : "string", "searchable" : true } }, "order" : [ "id", "name", "first_name", "last_name", "email", "locale" ], "required" : [ ] }, "propertyMap" : [ { "source" : "id", "target" : "id" }, { "source" : "name", "target" : "displayName" }, { "source" : "first_name", "target" : "givenName" }, { "source" : "last_name", "target" : "familyName" }, { "source" : "email", "target" : "email" }, { "source" : "email", "target" : "username" }, { "source" : "locale", "target" : "locale" } ] },
Modify this code block for your selected social provider. Some of these
properties may appear under other names. For example, some providers
specify an App ID
that you'd include as a
client_id
.
In the propertyMap
code block, you should substitute the
properties from the selected social ID provider for various values of
source
. Make sure to trace the property mapping through
selfservice.propertymap.json
to the Managed User
property shown in managed.json
. For more information on
this multi-step mapping, see "Many Social ID Providers, One Schema".
As shown in "OpenID Connect Authorization Code Flow", user provisioning information goes through the User Info Endpoint. Some providers, such as Linkedin and Facebook, may require a list of properties with the endpoint. Consult the documentation for your provider for details.
With the icon
property, note the
{{action}}
tag. It is a placeholder; OpenIDM substitutes
Sign in or Register for the tag,
depending on the functionality of the UI login page.
You can configure some of this code block through the Admin UI. Based on
the "name" : "custom"
line in the code block,
select Configure > Social ID Providers. You'll see the entry as
Custom
, and you can configure the provider in the same
way as others.
Alternatively, you can copy that code block directly to a new file. Based
on "name" : "custom"
you'd create the following file:
identityProvider-custom.json
.
Both files, identityProviders.json
and
identityProvider-custom.json
, should include the same
information for the new custom
identity provider.
For property details, see "Custom Social ID Provider Configuration Details".
Once you've included information from your selected social ID provider, proceed with the configuration process. You'll use the same basic steps described for other specified social providers.
10.6.2. Setting Up a Custom Social ID Provider
Every social identity provider should be able to provide the information you need to specify properties in the code block shown in "Preparing OpenIDM For a Custom Social ID Provider".
In general, you'll need an authorization_endpoint
,
a token_endpoint
and a userinfo_endpoint
.
To link to the custom provider, you'll also have to copy the
client_id
and client_secret
that you
created with that provider. In some cases, you'll get this information in
a slightly different format, such as an App ID
and
App Secret
.
For the propertyMap
, check the source
properties. You may need to revise these properties to match those available
from your custom provider.
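For example, if your provider returns the user identifier in a sub field and names in given_name and family_name fields (these source names are purely illustrative), the corresponding propertyMap entries might look similar to the following sketch:
"propertyMap" : [
    { "source" : "sub", "target" : "id" },
    { "source" : "given_name", "target" : "givenName" },
    { "source" : "family_name", "target" : "familyName" },
    { "source" : "email", "target" : "email" },
    { "source" : "email", "target" : "username" }
]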
For examples, refer to the specific social ID providers documented in this chapter.
10.6.3. Configuring a Custom Social ID Provider
To configure a custom social ID provider, log into the Admin UI and navigate to Configure > Social ID Providers.
Enable the custom social ID provider. The name you see is based on the
name
property in the relevant code block in the identityProviders.json file.
If you haven't already done so, include the values provided by your social ID provider for the properties shown. For more information, see the following appendix: "Custom Social ID Provider Configuration Details".
10.6.4. Configuring User Registration to Link to a Custom Provider
Once you've configured a custom social ID provider, you can activate it through User Registration. To do so in the Admin UI, select Configure > User Registration, and enable the option associated with Social Registration. For more information on OpenIDM user self-service features, see "Configuring User Self-Service".
When you enable social ID providers, OpenIDM changes the
selfservice-registration.json
file in the
conf/
subdirectory for your project, by adding
the following entry to the stageConfigs
code block:
"name" : "socialUserDetails"
When you enable social ID providers in User Registration, you're allowing users to register through all active social identity providers.
10.7. Configuring the Social Providers Authentication Module
OpenIDM includes a SOCIAL_PROVIDERS
authentication
module, which incorporates the requirements from social ID providers who
rely on either the OAuth2 or the OpenID Connect standards. To configure this
module in the Admin UI, select Configure > Authentication, and choose the Modules
tab. In the Select a Module text box, select the Social Providers
authentication module and enable it.
When configured, OpenIDM adds the following code block to the
authentication.json
file for your project:
{ "enabled" : true, "properties" : { "queryOnResource" : "managed/user", "defaultUserRoles" : [ "openidm-authorized" ], "propertyMapping" : { "userRoles" : "authzRoles" } }, "name" : "SOCIAL_PROVIDERS" }
For more information on these options, see "Common Module Properties".
10.8. Managing the Social ID Provider Over REST
You can identify the current status of configured social ID providers with the following REST call:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ http://localhost:8080/openidm/authentication
The output that you see includes JSON information from each configured
social ID provider, as described in the
identityProvider-provider.json
file in your project's conf/
subdirectory.
One key line from this output specifies whether the social ID provider is enabled:
"enabled" : true
If the SOCIAL_PROVIDERS
authentication module is disabled,
you'll see the following output from that REST call:
{ "providers" : [ ] }
For more information, see "Configuring the Social Providers Authentication Module".
If the SOCIAL_PROVIDERS
module is disabled, you can still
review the standard configuration of each social provider (enabled or not) by
running the same REST call on a different endpoint (do not forget the
s
at the end of identityProviders
):
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ http://localhost:8080/openidm/identityProviders
Note
If you have not configured a social ID provider, you'll see the following
output from the REST call on the openidm/identityProviders
endpoint:
{ "providers" : [ ] }
You can still get information about the available configuration for social ID providers on a slightly different endpoint:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ http://localhost:8080/openidm/config/identityProviders
The config in the endpoint refers to the configuration, starting with the identityProviders.json configuration file. Note how the identityProviders portion of the endpoint matches the name of that file.
You can review information for a specific provider by including the name with the endpoint. For example, if you've configured LinkedIn as described in "Setting Up LinkedIn as a Social Identity Provider", run the following command:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ http://localhost:8080/openidm/config/identityProvider/linkedIn
The above command differs from the previous calls in subtle ways. The config
in
the endpoint points to configuration data. The identityProvider
at the end of the endpoint is singular, which matches the corresponding
configuration file, identityProvider-linkedIn.json
.
And linkedIn
includes a capital I
in
the middle of the word.
In a similar fashion, you can delete a specific provider:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request DELETE \ http://localhost:8080/openidm/config/identityProvider/linkedIn
If you have the information needed to set up a provider, such as the output from the previous two REST calls, you can use the following command to add a provider:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-type: application/json" \ --request PUT \ --data '{ "_id" : "identityProvider/linkedIn", "name" : "linkedIn", "type" : "OAUTH", "icon" : "<button class=\"btn btn-lg btn-default btn-block btn-social-provider btn-linkedin\"><img src=\"images/ln-logo.png\">{{action}} LinkedIn</button>", "scope" : [ "r_basicprofile", "r_emailaddress" ], "authenticationId" : "id", "propertyMap" : [ { "source" : "id", "target" : "id" }, { "source" : "formattedName", "target" : "displayName" }, { "source" : "firstName", "target" : "givenName" }, { "source" : "lastName", "target" : "familyName" }, { "source" : "emailAddress", "target" : "email" }, { "source" : "emailAddress", "target" : "username" }, { "source" : "location", "target" : "locale", "transform" : { "type" : "text/javascript", "source" : "source.country.code", "file" : null } } ], "enabled" : true, "client_id" : "<some UUID>", "client_secret" : "<some client secret>", "authorization_endpoint" : "https://www.linkedin.com/oauth/v2/authorization", "token_endpoint" : "https://www.linkedin.com/oauth/v2/accessToken", "userinfo_endpoint" : "https://api.linkedin.com/v1/people/~:(id,formatted-name,first-name,last-name,email-address,location)?format=json" }' \ http://localhost:8080/openidm/config/identityProvider/linkedIn
You can even disable a social ID provider with a PATCH
REST
call, as shown:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-type: application/json" \ --request PATCH \ --data '[ { "operation":"replace", "field" : "enabled", "value" : false } ]' \ http://localhost:8080/openidm/config/identityProvider/linkedIn
You can reverse the process by substituting true
for
false
in the previous PATCH
REST call.
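For example, the following call, a sketch of that reversal, re-enables the LinkedIn provider:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-type: application/json" \
 --request PATCH \
 --data '[
   {
     "operation" : "replace",
     "field" : "enabled",
     "value" : true
   }
 ]' \
 http://localhost:8080/openidm/config/identityProvider/linkedIn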
You can manage the social ID providers associated with individual users over REST, as described in "Managing Links Between End User Accounts and Social ID Providers".
10.9. Testing the Social ID Provider
In all cases, once configuration is complete, you should test the Social ID Provider. To do so, go through the steps in the following procedure:
Navigate to the login screen for the self-service UI, https://openidm.example.com:8443.
Select the Register link (after the "Don't have an account?" question) on the login page.
You should see a link to sign in with your selected social ID provider. Select that link.
Note
If you do not see a link to sign in with any social ID provider, you probably did not enable the option associated with Social Registration. To make sure, access the Admin UI, and select Configure > User Registration.
Warning
If you see a redirect URI error from a social ID provider, check the configuration for your web application in the social ID provider developer console. There may be a mistake in the redirect URI or redirect URL.
Follow the prompts from your social ID provider to log into your account.
You should next see the OpenIDM Register Your Account screen, pre-populated with a username, first name, last name, and email address (if available). You should be able to modify these entries. If the username already exists in the OpenIDM managed user datastore, you'll have to change that username before you can save and register the account.
As Knowledge-based Authentication (KBA) is enabled by default, you'll need to add at least one security question and answer to proceed. For more information, see "Configuring Self-Service Questions".
When the Social ID registration process is complete, OpenIDM takes you to the self-service login URL at https://openidm.example.com:8443.
At the self-service login URL, you should now be able to use the sign-in link for your social ID provider to log into OpenIDM.
10.10. Managing Links Between End User Accounts and Social ID Providers
If your users have one or more social ID providers, they can link them to the same OpenIDM user account. This section assumes that you have configured more than one of the social ID providers described in this chapter.
Conversely, you cannot associate more than one OpenIDM account with a single social ID provider account. When a social account is associated with an OpenIDM account, a related managed record is created for the user. This related record uses the social ID provider name as the managed object type, and the subject as the _id. This combination has a unique constraint; if you try to associate a second OpenIDM account with the same social account, OpenIDM detects a conflict, which prevents the association.
10.10.1. The Process for End Users
When your users register with a social ID provider, as defined in "Testing the Social ID Provider", they create an account in your OpenIDM managed user datastore. They can link additional social ID providers to that datastore, using the following steps:
Navigate to the self-service UI, at a URL such as https://openidm.example.com:8443.
Log into the account, either as an OpenIDM user, or with the social ID provider.
Navigate to Profile > Social Identities.
Enable a second social ID provider. Unless you've previously authenticated with that social provider, you should be prompted to log into that provider.
To test the result, log out and log back in, using the link for the second social ID provider.
10.10.2. Reviewing Linked Accounts as an Administrator
You can review social ID accounts linked to an OpenIDM account, from the Admin UI and from the command line. You can disable or delete social ID provider information for a specific user from the command line, as described in "Reviewing Linked Accounts Over REST".
Note
An end-user can unbind social providers through their managed user accounts. However, an administrative user cannot delete social provider accounts through the Admin UI.
When you activate a social ID provider, OpenIDM creates a new managed object
for that provider. You can review that managed object in the
managed.json
file, as well as in the Admin UI, by
selecting Configure > Managed Objects.
The information shown is reflected in the schema in the
identityProvider-providername.json
file for the selected provider.
Note
Best practice: do not edit social ID provider profile information via OpenIDM. Any changes that you make won't be synchronized with that provider.
10.10.2.1. Reviewing Linked Accounts Over REST
You can also review the social ID accounts linked to
specific users with REST calls. Start by finding the _id
for your user with the following command:
$ curl \ --header "X-OpenIDM-Username:openidm-admin" \ --header "X-OpenIDM-Password:openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/?_queryId=query-all-ids"
The following REST call returns all data for a specified user:
$ curl \ --header "X-OpenIDM-Username:openidm-admin" \ --header "X-OpenIDM-Password:openidm-admin" \ --request GET \ "http://localhost:8080/openidm/managed/user/10aa857f-b2cc-47a4-a295-f842df96e5e8"
From the following output, you can see how Jane Doe's idpData
includes LinkedIn information in a code block similar to her Google
information. The order of these entries makes no functional difference; users can
log in via either their Google or LinkedIn accounts.
{ "_id" : "10aa857f-b2cc-47a4-a295-f842df96e5e8", "_rev" : "2", "givenName" : "Jane", "sn" : "Doe", "mail" : "Jane.Doe@example.com", "userName" : "Jane.Doe@example.com", "idpData" : { "google" : { "subject" : "105533855303935356522", "enabled" : true, "dateCollected" : "2016-09-16T17:25Z", "rawProfile" : { "sub" : "105533855303935356522", "name" : "Jane", "given_name" : "Jane", "family_name" : "Doe", "profile" : "https://plus.google.com/<some number>", "picture" : "https://lh4.googleusercontent.com/<some path>/photo.jpg", "email" : "Jane.Doe@example.com", "email_verified" : true, "gender" : "female", "locale" : "en", "hd" : "example.com" } }, "linkedIn" : { "rawProfile" : { "emailAddress" : "Jane.Doe@example.net", "firstName" : "Jane", "formattedName" : "Jane Doe", "id" : "MW9FE_KyQH", "lastName" : "Doe", "location" : { "country" : { "code" : "us" }, "name" : "Portland, Oregon Area" } }, "enabled" : true, "subject" : "MW9FE_KyQH" } }, "kbaInfo" : [ { "answer" : { "$crypto" : { "value" : { "algorithm" : "SHA-256", "data" : "<some hashed value>" }, "type" : "salted-hash" } }, "questionId" : "1" } ], "accountStatus" : "active", "effectiveRoles" : [ ], "effectiveAssignments" : [ ]
When a user disables logins via one specific social ID provider in the
self-service UI, that sets "enabled" : false
in the data for
that provider. However, that user's social ID information is preserved.
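Based on the output shown above, Jane Doe's Google entry might then look similar to the following excerpt (other fields elided):
"idpData" : {
    "google" : {
        "subject" : "105533855303935356522",
        "enabled" : false,
        ...
    }
}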
Alternatively, you can use a REST call to disable logins to a specific social ID provider. The following REST call disables logins for the same user via Google:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-type: application/json" \ --request POST \ "http://localhost:8080/openidm/managed/user/10aa857f-b2cc-47a4-a295-f842df96e5e8?_action=unbind&provider=google"
For privacy purposes, you can also set up deletion of a disabled social ID provider's data. To do so, you need to make one change to the unBindBehavior.js file, located in the /path/to/openidm/bin/defaults/script/ui/ subdirectory.
// uncomment below line to delete social provider data
// delete object.idpData[request.additionalParameters.provider];
As suggested by the file, when you uncomment the noted line, disabling one social ID provider (in the UI or via REST) removes data for that provider from that user's information in the OpenIDM repository.
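After that edit, the relevant lines of unBindBehavior.js would look similar to the following sketch:
// delete social provider data
delete object.idpData[request.additionalParameters.provider];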
10.10.2.2. Reviewing Linked Accounts From the Admin UI
When you configure a social ID provider, OpenIDM includes two features in the Admin UI.
The ability to review the social ID accounts linked to specific users. To see how this works, log into the Admin UI, and select Manage > User, and select a user. Under the Identity Providers tab, you can review the social ID providers associated with a specific account.
A managed object for each provider. For example, if you've enabled Google as a social ID provider, select Manage > Google. In the screen that appears, you can select the ID for any Google social ID account that has been used or linked to an existing OpenIDM account, and review the profile information shared from that provider.
Chapter 11. Using Policies to Validate Data
OpenIDM provides an extensible policy service that enables you to apply specific validation requirements to various components and properties. This chapter describes the policy service, and provides instructions on configuring policies for managed objects.
The policy service provides a REST interface for reading policy requirements and validating the properties of components against configured policies. Objects and properties are validated automatically when they are created, updated, or patched. Policies are generally applied to user passwords, but can also be applied to any managed or system object, and to internal user objects.
The policy service enables you to accomplish the following tasks:
Read the configured policy requirements of a specific component.
Read the configured policy requirements of all components.
Validate a component object against the configured policies.
Validate the properties of a component against the configured policies.
The OpenIDM router service limits policy application to managed, system, and
internal user objects. To apply policies to additional objects, such as the
audit service, you must modify your project's
conf/router.json
file. For more information about the
router service, see "Router Service Reference".
A default policy applies to all managed objects. You can configure this default policy to suit your requirements, or you can extend the policy service by supplying your own scripted policies.
11.1. Configuring the Default Policy for Managed Objects
Policies applied to managed objects are configured in two files:
A policy script file (openidm/bin/defaults/script/policy.js) that defines each policy and specifies how policy validation is performed. For more information, see "Understanding the Policy Script File".
A managed object policy configuration element, defined in your project's conf/managed.json file, that specifies which policies are applicable to each managed resource. For more information, see "Understanding the Policy Configuration Element".
Note
The configuration for determining which policies apply to resources other than managed objects is defined in your project's
conf/policy.json
file. The default policy.json
file includes policies that are applied to internal user objects, but you can extend the configuration in this file to apply policies to system objects.
11.1.1. Understanding the Policy Script File
The policy script file (openidm/bin/defaults/script/policy.js
)
separates policy configuration into two parts:
A policy configuration object, which defines each element of the policy. For more information, see "Policy Configuration Objects".
A policy implementation function, which describes the requirements that are enforced by that policy.
Together, the configuration object and the implementation function determine whether an object is valid in terms of the applied policy. The following excerpt of a policy script file configures a policy that specifies that the value of a property must contain a certain number of capital letters:
... { "policyId" : "at-least-X-capitals", "policyExec" : "atLeastXCapitalLetters", "clientValidation": true, "validateOnlyIfPresent":true, "policyRequirements" : ["AT_LEAST_X_CAPITAL_LETTERS"] }, ... policyFunctions.atLeastXCapitalLetters = function(fullObject, value, params, property) { var isRequired = _.find(this.failedPolicyRequirements, function (fpr) { return fpr.policyRequirement === "REQUIRED"; }), isNonEmptyString = (typeof(value) === "string" && value.length), valuePassesRegexp = (function (v) { var test = isNonEmptyString ? v.match(/[(A-Z)]/g) : null; return test !== null && test.length >= params.numCaps; }(value)); if ((isRequired || isNonEmptyString) && !valuePassesRegexp) { return [ { "policyRequirement" : "AT_LEAST_X_CAPITAL_LETTERS", "params" : {"numCaps": params.numCaps} } ]; } return []; } ...
To enforce user passwords that contain at least one capital letter, the
policyId
from the preceding example is applied to the
appropriate resource (managed/user/*
). The required
number of capital letters is defined in the policy configuration element of
the managed object configuration file (see "Understanding the Policy Configuration Element").
11.1.1.1. Policy Configuration Objects
Each element of the policy is defined in a policy configuration object. The structure of a policy configuration object is as follows:
{ "policyId" : "minimum-length", "policyExec" : "propertyMinLength", "clientValidation": true, "validateOnlyIfPresent": true, "policyRequirements" : ["MIN_LENGTH"] }
policyId - a unique ID that enables the policy to be referenced by component objects.
policyExec - the name of the function that contains the policy implementation. For more information, see "Policy Implementation Functions".
clientValidation - indicates whether the policy decision can be made on the client. When "clientValidation": true, the source code for the policy decision function is returned when the client requests the requirements for a property.
validateOnlyIfPresent - specifies that the policy is validated only if the property to which it applies is present in the object.
policyRequirements - an array containing the policy requirement ID of each requirement that is associated with the policy. Typically, a policy will validate only one requirement, but it can validate more than one.
11.1.1.2. Policy Implementation Functions
Each policy ID has a corresponding policy implementation function that performs the validation. Implementation functions take the following form:
function <name>(fullObject, value, params, propName) { <implementation_logic> }
fullObject is the full resource object that is supplied with the request.
value is the value of the property that is being validated.
params refers to the params array that is specified in the property's policy configuration.
propName is the name of the property that is being validated.
The following example shows the implementation function for the
required
policy:
function required(fullObject, value, params, propName) { if (value === undefined) { return [ { "policyRequirement" : "REQUIRED" } ]; } return []; }
11.1.2. Understanding the Policy Configuration Element
The configuration of a managed object property (in the
managed.json
file) can include a policies
element that specifies how policy validation should be applied to that
property. The following excerpt of the default
managed.json
file shows how policy validation is
applied to the password
and _id
properties of a managed/user object:
{ "objects" : [ { "name" : "user", ... "schema" : { "id" : "http://jsonschema.net", ... "properties" : { "_id" : { "type" : "string", "viewable" : false, "searchable" : false, "userEditable" : false, "policies" : [ { "policyId" : "cannot-contain-characters", "params" : { "forbiddenChars" : ["/"] } } ] }, "password" : { "type" : "string", "viewable" : false, "searchable" : false, "minLength" : 8, "userEditable" : true, "policies" : [ { "policyId" : "at-least-X-capitals", "params" : { "numCaps" : 1 } }, { "policyId" : "at-least-X-numbers", "params" : { "numNums" : 1 } }, { "policyId" : "cannot-contain-others", "params" : { "disallowedFields" : [ "userName", "givenName", "sn" ] } } ] },
Note that the policy for the _id property references the function cannot-contain-characters, which is defined in the policy.js file. The policy for the password property references the functions at-least-X-capitals, at-least-X-numbers, and cannot-contain-others, which are also defined in the policy.js file. The parameters that are passed to these functions (the number of capitals required, and so forth) are specified in the same element.
11.1.3. Validation of Managed Object Data Types
The type
property of a managed object specifies the
data type of that property, for example, array
,
boolean
, integer
,
number
, null
,
object
, or string
. For more
information about data types, see the
JSON Schema Primitive Types section of the
JSON Schema standard.
The type
property is subject to policy validation when a
managed object is created or updated. Validation fails if data does not
match the specified type
, such as when the data is an
array
instead of a string
.
The valid-type
policy in the default
policy.js
file enforces the match between property
values and the type
defined in the
managed.json
file.
OpenIDM supports multiple valid property types. For example, you might have
a scenario where a managed user can have more than one telephone number, or
a null telephone number (when the user entry is
first created and the telephone number is not yet known). In such a case,
you could specify the accepted property type as follows in your
managed.json
file:
"telephoneNumber" : { "description" : "", "title" : "Mobile Phone", "viewable" : true, "searchable" : false, "userEditable" : true, "policies" : [ ], "returnByDefault" : false, "minLength" : null, "pattern" : "^\\+?([0-9\\- \\(\\)])*$", "type" : [ "string", "null" ] },
In this case, the valid-type
policy from the
policy.js
file checks the telephone number for an
accepted type
and pattern
, either
for a real telephone number or a null
entry.
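To see this in action, you could check a null telephone number with the validateProperty action, described in "Validating Objects and Properties Over REST". The following call is a sketch that assumes a managed user named bjensen exists; with the accepted types shown above, it should return "result": true:
$ curl \
 --cacert self-signed.crt \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{ "telephoneNumber" : null }' \
 "https://localhost:8443/openidm/policy/managed/user/bjensen?_action=validateProperty"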
11.1.4. Configuring Policy Validation in the UI
The Admin UI provides rudimentary support for applying policy validation to
managed object properties. To configure policy validation for a managed
object type, update the configuration of the object type in the UI. For
example, to specify validation policies for specific properties of managed
user objects, select Configure > Managed Objects, then click the User
object. Scroll down to the bottom of the Managed Object configuration, then
update, or add, a validation policy. The Policy
field
here refers to a function that has been defined in the policy script file.
For more information, see "Understanding the Policy Script File". You cannot
define additional policy functions by using the UI.
Note
Take care with validation policies. If a policy relates to an array of
relationships, such as between a user and multiple devices, "Return by
Default" should always be set to false. You can verify this in the
managed.json
file for your project, with the
"returnByDefault" : false
entry for the applicable
managed object, whenever there are items
of
"type" : "relationship"
.
11.2. Extending the Policy Service
You can extend the policy service by adding custom scripted policies, and by adding policies that are applied only under certain conditions.
11.2.1. Adding Custom Scripted Policies
If your deployment requires additional validation functionality that is not
supplied by the default policies, you can add your own policy scripts to
your project's script
directory, and reference them
from your project's conf/policy.json
file.
Do not modify the default policy script file
(openidm/bin/defaults/script/policy.js
) as doing so might
result in interoperability issues in a future release. To reference
additional policy scripts, set the additionalFiles
property conf/policy.json
.
The following example creates a custom policy that rejects properties with
null values. The policy is defined in a script named
mypolicy.js
:
var policy = { "policyId" : "notNull", "policyExec" : "notNull", "policyRequirements" : ["NOT_NULL"] } addPolicy(policy); function notNull(fullObject, value, params, property) { if (value == null) { var requireNotNull = [ {"policyRequirement": "NOT_NULL"} ]; return requireNotNull; } return []; }
The mypolicy.js
policy is referenced in the
policy.json
configuration file as follows:
{ "type" : "text/javascript", "file" : "bin/defaults/script/policy.js", "additionalFiles" : ["script/mypolicy.js"], "resources" : [ { ...
11.2.2. Adding Conditional Policy Definitions
You can extend the policy service to support policies that are applied only
under specific conditions. To apply a conditional policy to managed objects,
add the policy to your project's managed.json
file. To
apply a conditional policy to other objects, add it to your project's
policy.json
file.
The following excerpt of a managed.json
file shows a
sample conditional policy configuration for the "password"
property of managed user objects. The policy indicates that sys-admin users
have a more lenient password policy than regular employees:
{ "objects" : [ { "name" : "user", ... "properties" : { ... "password" : { "title" : "Password", "type" : "string", ... "conditionalPolicies" : [ { "condition" : { "type" : "text/javascript", "source" : "(fullObject.org === 'sys-admin')" }, "dependencies" : [ "org" ], "policies" : [ { "policyId" : "max-age", "params" : { "maxDays" : ["90"] } } ] }, { "condition" : { "type" : "text/javascript", "source" : "(fullObject.org === 'employees')" }, "dependencies" : [ "org" ], "policies" : [ { "policyId" : "max-age", "params" : { "maxDays" : ["30"] } } ] } ], "fallbackPolicies" : [ { "policyId" : "max-age", "params" : { "maxDays" : ["7"] } } ] }
To understand how a conditional policy is defined, examine the components of this sample policy. For more information on the policy function, see "Policy Implementation Functions".
There are two distinct scripted conditions (defined in the
condition
elements). The first condition asserts that
the user object, contained in the fullObject
argument, is
a member of the sys-admin
org. If that assertion is true,
the max-age
policy is applied to the
password
attribute of the user object, and the maximum
number of days that a password may remain unchanged is set to 90
.
The second condition asserts that the user object is a member of the
employees
org. If that assertion is true, the
max-age
policy is applied to the
password
attribute of the user object, and the maximum
number of days that a password may remain unchanged is set to
30
.
In the event that neither condition is met (the user object is not a member
of the sys-admin
org or the employees
org), an optional fallback policy can be applied. In this example, the
fallback policy also references the max-age
policy and
specifies that for such users, their password must be changed after 7 days.
The dependencies
field prevents the condition scripts
from being run at all, if the user object does not include an
org
attribute.
Note
This example assumes that a custom max-age
policy
validation function has been defined, as described in
"Adding Custom Scripted Policies".
11.3. Disabling Policy Enforcement
Policy enforcement is the automatic validation of data when it is created, updated, or patched. In certain situations you might want to disable policy enforcement temporarily. You might, for example, want to import existing data that does not meet the validation requirements with the intention of cleaning up this data at a later stage.
You can disable policy enforcement by setting
openidm.policy.enforcement.enabled
to
false
in your project's
conf/boot/boot.properties
file. This setting disables
policy enforcement in the back-end only, and has no impact on direct policy
validation calls to the Policy Service (which the UI makes to validate input
fields). So, with policy enforcement disabled, data added directly over REST
is not subject to validation, but data added with the UI is still subject to
validation.
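For example, add the following line to your project's conf/boot/boot.properties file:
openidm.policy.enforcement.enabled=false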
You should not disable policy enforcement permanently, in a production environment.
11.4. Managing Policies Over REST
You can manage the policy service over the REST interface, by calling the
REST endpoint https://localhost:8443/openidm/policy
, as
shown in the following examples.
11.4.1. Listing the Defined Policies
The following REST call displays a list of all the policies defined in
policy.json
(policies for objects other than managed
objects). The policy objects are returned in JSON format, with one object for
each defined policy ID:
$ curl \ --cacert self-signed.crt \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "https://localhost:8443/openidm/policy" { "_id": "", "resources": [ { "resource": "repo/internal/user/*", "properties": [ { "name": "_id", "policies": [ { "policyId": "cannot-contain-characters", "params": { "forbiddenChars": [ "/" ] }, "policyFunction": "\nfunction (fullObject, value, params, property) ...
To display the policies that apply to a specific resource, include the resource name in the URL. For example, the following REST call displays the policies that apply to managed users:
$ curl \ --cacert self-signed.crt \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request GET \ "https://localhost:8443/openidm/policy/managed/user/*" { "_id": "*", "resource": "managed/user/*", "properties": [ { "name": "_id", "conditionalPolicies": null, "fallbackPolicies": null, "policyRequirements": [ "CANNOT_CONTAIN_CHARACTERS" ], "policies": [ { "policyId": "cannot-contain-characters", "params": { "forbiddenChars": [ "/" ] ...
11.4.2. Validating Objects and Properties Over REST
To verify that an object adheres to the requirements of all applied policies,
include the validateObject
action in the request.
The following example verifies that a new managed user object is acceptable, in terms of the policy requirements:
$ curl \ --cacert self-signed.crt \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "sn":"Jones", "givenName":"Bob", "_id":"bjones", "telephoneNumber":"0827878921", "passPhrase":null, "mail":"bjones@example.com", "accountStatus":"active", "userName":"bjones@example.com", "password":"123" }' \ "https://localhost:8443/openidm/policy/managed/user/bjones?_action=validateObject" { "result": false, "failedPolicyRequirements": [ { "policyRequirements": [ { "policyRequirement": "MIN_LENGTH", "params": { "minLength": 8 } } ], "property": "password" }, { "policyRequirements": [ { "policyRequirement": "AT_LEAST_X_CAPITAL_LETTERS", "params": { "numCaps": 1 } } ], "property": "password" } ] }
The result (false
) indicates that the object is not
valid. The unfulfilled policy requirements are provided as part of the
response - in this case, the user password does not meet the validation
requirements.
Use the validateProperty
action to verify that a specific
property adheres to the requirements of a policy.
The following example checks whether Barbara Jensen's new password
(12345
) is acceptable:
$ curl \ --cacert self-signed.crt \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "password" : "12345" }' \ "https://localhost:8443/openidm/policy/managed/user/bjensen?_action=validateProperty" { "result": false, "failedPolicyRequirements": [ { "policyRequirements": [ { "policyRequirement": "MIN_LENGTH", "params": { "minLength": 8 } } ], "property": "password" }, { "policyRequirements": [ { "policyRequirement": "AT_LEAST_X_CAPITAL_LETTERS", "params": { "numCaps": 1 } } ], "property": "password" } ] }
The result (false
) indicates that the password is not
valid. The unfulfilled policy requirements are provided as part of the
response - in this case, the minimum length and the minimum number of
capital letters.
Validating a property that does fulfil the policy requirements returns a
true
result, for example:
$ curl \ --cacert self-signed.crt \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "password" : "1NewPassword" }' \ "https://localhost:8443/openidm/policy/managed/user/bjensen?_action=validateProperty" { "result": true, "failedPolicyRequirements": [] }
Chapter 12. Configuring Server Logs
In this chapter, you will learn about server logging, that is, the messages that OpenIDM logs related to server activity.
Server logging is separate from auditing. Auditing logs
activity on the OpenIDM system, such as access and synchronization. For
information about audit logging, see "Logging Audit Information". To
configure server logging, edit the logging.properties
file in your project-dir/conf
directory.
Important
When you change the logging settings you must restart the server for those changes to take effect. Alternatively, you can use JMX via jconsole to change the logging settings, in which case changes take effect without restarting the server.
12.1. Log Message Files
The default configuration writes log messages in simple format to
openidm/logs/openidm*.log
files, rotating files when the
size reaches 5 MB, and retaining up to 5 files. Also by default, OpenIDM
writes all system and custom log messages to the files.
You can modify these limits in the following properties in the
logging.properties
file for your project:
# Limiting size of output file in bytes: java.util.logging.FileHandler.limit = 5242880 # Number of output files to cycle through, by appending an # integer to the base file name: java.util.logging.FileHandler.count = 5
12.2. Specifying the Logging Level
By default, OpenIDM logs messages at the INFO
level.
This logging level is specified with the following global property in
conf/logging.properties
:
.level=INFO
You can specify separate logging levels for individual server features, which override the global logging level. Set the log level, per package, to one of the following:
SEVERE (highest value)
WARNING
INFO
CONFIG
FINE
FINER
FINEST (lowest value)
For example, the following setting decreases the messages logged by the embedded PostgreSQL database:
# reduce the logging of embedded postgres since it is very verbose ru.yandex.qatools.embed.postgresql.level = SEVERE
Set the log level to OFF
to disable logging completely
(see in "Disabling Logs"), or to ALL
to
capture all possible log messages.
If you use logger
functions in your JavaScript scripts,
set the log level for the scripts as follows:
org.forgerock.openidm.script.javascript.JavaScript.level=level
You can override the log level settings, per script, with the following setting:
org.forgerock.openidm.script.javascript.JavaScript.script-name.level
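For example, to log debug-level messages from a script named example.js (a hypothetical name), you might set:
org.forgerock.openidm.script.javascript.JavaScript.example.js.level=FINE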
For more information about using logger
functions in
scripts, see "Logging Functions".
Important
It is strongly recommended that you do not log messages
at the FINE
or FINEST
levels in a
production environment. Although these levels are useful for debugging
issues in a test environment, they can result in accidental exposure of
sensitive data. For example, a password change patch request can expose the
updated password in the Jetty logs.
12.3. Disabling Logs
You can also disable logs if desired. For example, before starting OpenIDM,
you can disable ConsoleHandler
logging in your project's
conf/logging.properties
file.
Just set java.util.logging.ConsoleHandler.level = OFF
,
and comment out other references to ConsoleHandler
,
as shown in the following excerpt:
# ConsoleHandler: A simple handler for writing formatted records to System.err #handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler handlers=java.util.logging.FileHandler ... # --- ConsoleHandler --- # Default: java.util.logging.ConsoleHandler.level = INFO java.util.logging.ConsoleHandler.level = OFF #java.util.logging.ConsoleHandler.formatter = ... #java.util.logging.ConsoleHandler.filter=...
Chapter 13. Connecting to External Resources
This chapter describes how to connect to external resources such as LDAP, Active Directory, flat files, and others. Configurations shown here are simplified to show essential aspects. Not all resources support all OpenIDM operations; however, the resources shown here support most of the CRUD operations, and also reconciliation and liveSync.
In OpenIDM, resources are external systems, databases, directory servers, and other sources of identity data that are managed and audited by the identity management system. To connect to resources, OpenIDM loads the Identity Connector Framework, OpenICF. OpenICF aims to avoid the need to install agents to access resources, instead using the resources' native protocols. For example, OpenICF connects to database resources using the database's Java connection libraries or JDBC driver. It connects to directory servers over LDAP. It connects to UNIX systems by using ssh.
13.1. The Open Identity Connector Framework (OpenICF)
OpenICF provides a common interface to allow identity services access to the resources that contain user information. OpenIDM loads the OpenICF API as one of its OSGi modules. OpenICF uses connectors to separate the OpenIDM implementation from the dependencies of the resource to which OpenIDM is connecting. A specific connector is required for each remote resource. Connectors can run either locally or remotely.
Local connectors are loaded by OpenICF as regular bundles in the OSGi container. Most connectors can be run locally. Remote connectors must be executed on a remote connector server. If a resource requires access libraries that cannot be included as part of the OpenIDM process, you must use a connector server. For example, OpenICF connects to Microsoft Active Directory through a remote connector server that is implemented as a .NET service.
Connections to remote connector servers are configured in a single
connector info provider configuration file, located in
your project's conf/
directory.
Connectors themselves are configured through
provisioner files. One provisioner file must exist for
each connector. Provisioner files are named
provisioner.openicf-name
where
name corresponds to the name of the connector,
and are also located in the conf/
directory.
A number of sample connector configurations are available in the
openidm/samples/provisioners
directory. To use these
connectors, edit the configuration files as required, and copy them to your
project's conf/
directory.
The following figure shows how OpenIDM connects to resources by using connectors and remote connector servers. The figure shows one local connector (LDAP) and two remote connectors (Scripted SQL and PowerShell). In this example, the remote Scripted SQL connector uses a remote Java connector server. The remote PowerShell connector always requires a remote .NET connector server.
Tip
Connectors that use the .NET framework must run remotely. Java connectors can be run locally or remotely. You might run a Java connector remotely for security reasons (firewall constraints), for geographical reasons, or if the JVM version that is required by the connector conflicts with the JVM version that is required by OpenIDM.
13.2. Accessing Remote Connectors
When you configure a remote connector, you use the connector info
provider service to connect through a remote connector server.
The connector info provider service configuration is stored in the file
project-dir/conf/provisioner.openicf.connectorinfoprovider.json
.
A sample configuration file is provided in the
openidm/samples/provisioners/
directory. To use this
sample configuration, edit the file as required, and copy it to your
project's conf/
directory.
The sample connector info provider configuration is as follows:
{ "remoteConnectorServers" : [ { "name" : "dotnet", "host" : "127.0.0.1", "port" : 8759, "useSSL" : false, "timeout" : 0, "protocol" : "websocket", "key" : "Passw0rd" } ] }
You can configure the following remote connector server properties:
name
string, required
The name of the remote connector server object. This name is used to identify the remote connector server in the list of connector reference objects.
host
string, required
The remote host to connect to.
port
integer, optional
The remote port to connect to. The default remote port is 8759.
heartbeatInterval
integer, optional
The interval, in seconds, at which heartbeat packets are transmitted. If the connector server is unreachable based on this heartbeat interval, all services that use the connector server are made unavailable until the connector server can be reached again. The default interval is 60 seconds.
useSSL
boolean, optional
Specifies whether to connect to the connector server over SSL. The default value is
false
.timeout
integer, optional
Specifies the timeout (in milliseconds) to use for the connection. The default value is
0
, which means that there is no timeout.protocol
string
Version 1.5.2.0 of the OpenICF framework supports a new communication protocol with remote connector servers. This protocol is enabled by default, and its value is
websocket
in the default configuration.For compatibility reasons, you might want to enable the legacy protocol for specific remote connectors. For example, if you deploy the connector server on a Java 5 or 6 JVM, you must use the old protocol. In this case, remove the
protocol
property from the connector server configuration.For the .NET connector server, the service with the new protocol listens on port 8759 and the service with the legacy protocol listens on port 8760 by default. For more information on running the connector server in legacy mode, see "Running the .NET Connector Server in Legacy Mode".
For the Java connector server, the service listens on port 8759 by default, for both the new and legacy protocols. The new protocol runs by default. To run the service with the legacy protocol, you must change the main class that is executed in the
ConnectorServer.sh
orConnectorServer.bat
file. The class that starts the websocket protocol isMAIN_CLASS=org.forgerock.openicf.framework.server.Main
. The class that starts the legacy protocol isMAIN_CLASS=org.identityconnectors.framework.server.Main
. To change the port on which the Java connector server listens, change theconnectorserver.port
property in theopenicf/conf/ConnectorServer.properties
file.key
string, required
The secret key, or password, to use to authenticate to the remote connector server.
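Putting these properties together, a connector info provider configuration that also overrides the default heartbeat interval might look similar to the following sketch:
{
    "remoteConnectorServers" : [
        {
            "name" : "dotnet",
            "host" : "127.0.0.1",
            "port" : 8759,
            "heartbeatInterval" : 60,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
        }
    ]
}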
To run remotely, the connector .jar itself must be copied to the
openicf/bundles
directory on the remote machine.
The following example provides a configuration for reconciling managed users with objects in a remote CSV file.
This example demonstrates reconciliation of users stored in a CSV file on a remote machine. The remote Java Connector Server enables OpenIDM to synchronize the internal OpenIDM repository with the remote CSV repository.
The example assumes that a remote Java Connector Server is installed on a
host named remote-host
. For instructions on setting up
the remote Java Connector Server, see
"Installing a Remote Java Connector Server for Unix/Linux" or
"Installing a Remote Java Connector Server for Windows".
The example uses the small
CSV data set provided with the Getting Started
sample (hr.csv
). The CSV connector runs as a
remote connector, that is, on the remote host on
which the Java Connector Server is installed. Before you start, copy the
sample data file, and the CSV connector itself over to the remote machine.
Shut down the remote connector server, if it is running. In the connector server terminal window, type
q:
q
INFO: Stopped listener bound to [0.0.0.0:8759] May 30, 2016 12:33:24 PM INFO o.f.o.f.server.ConnectorServer: Server is shutting down org.forgerock.openicf.framework.server.ConnectorServer@171ba877
Copy the CSV data file from the Getting Started sample (
/path/to/openidm/samples/getting-started/data/hr.csv
) to an accessible location on the machine that hosts the remote Java Connector Server. For example:
$ cd /path/to/openidm/samples/getting-started/data/ $ scp hr.csv testuser@remote-host:/home/testuser/csv-sample/data/ Password:******** hr.csv 100% 651 0.6KB/s 00:00
Copy the CSV connector .jar from the OpenIDM installation to the
openicf/bundles
directory on the remote host:
$ cd path/to/openidm $ scp connectors/csvfile-connector-1.5.1.4.jar testuser@remote-host:/path/to/openicf/bundles/ Password:******** csvfile-connector-1.5.1.4.jar 100% 40KB 39.8KB/s 00:00
The CSV connector depends on the Super CSV library, which is bundled with OpenIDM. Copy the Super CSV library super-csv-2.4.0.jar from the openidm/bundle directory to the openicf/lib directory on the remote server:
$ cd path/to/openidm $ scp bundle/super-csv-2.4.0.jar testuser@remote-host:/path/to/openicf/lib/ Password:******** super-csv-2.4.0.jar 100% 96KB 95.8KB/s 00:00
On the remote host, restart the Connector Server so that it picks up the new CSV connector and its dependent libraries:
$ cd /path/to/openicf $ bin/ConnectorServer.sh /run ... May 30, 2016 3:58:29 PM INFO o.i.f.i.a.l.LocalConnectorInfoManagerImpl: Add ConnectorInfo ConnectorKey( bundleName=org.forgerock.openicf.connectors.csvfile-connector bundleVersion="[1.5.1.4,1.6.0.0)" connectorName=org.forgerock.openicf.csvfile.CSVFileConnector ) to Local Connector Info Manager from file:/path/to/openicf/bundles/csvfile-connector-1.5.1.4.jar May 30, 2016 3:58:30 PM org.glassfish.grizzly.http.server.NetworkListener start INFO: Started listener bound to [0.0.0.0:8759] May 30, 2016 3:58:30 PM org.glassfish.grizzly.http.server.HttpServer start INFO: [OpenICF Connector Server] Started. May 30, 2016 3:58:30 PM INFO o.f.openicf.framework.server.Main: ConnectorServer listening on: ServerListener[0.0.0.0:8759 - plain]
The connector server logs are noisy by default. You should, however, notice the addition of the CSV connector.
Before you start, copy the following files to your /path/to/openidm/conf directory:
The customised mapping file (sync.json) provided with the Getting Started sample, required for this example.
/openidm/samples/provisioners/provisioner.openicf.connectorinfoprovider.json
The sample connector server configuration file.
/openidm/samples/provisioners/provisioner.openicf-csv.json
The sample connector configuration file.
Edit the remote connector server configuration file (provisioner.openicf.connectorinfoprovider.json) to match your network setup.
The following example indicates that the Java connector server is running on the host remote-host, listening on the default port, and configured with a secret key of Passw0rd:

{
    "remoteConnectorServers" : [
        {
            "name" : "csv",
            "host" : "remote-host",
            "port" : 8759,
            "useSSL" : false,
            "timeout" : 0,
            "protocol" : "websocket",
            "key" : "Passw0rd"
        }
    ]
}
The name that you set in this file will be referenced in the connectorHostRef property of the connector configuration, in the next step.
The key that you specify here must match the password that you set when you installed the Java connector server.
Edit the CSV connector configuration file (provisioner.openicf-csv.json) as follows:

{
    "name" : "csvfile",
    "connectorRef" : {
        "connectorHostRef" : "csv",
        "bundleName" : "org.forgerock.openicf.connectors.csvfile-connector",
        "bundleVersion" : "[1.5.1.4,1.6.0.0)",
        "connectorName" : "org.forgerock.openicf.csvfile.CSVFileConnector"
    },
    ...
    "configurationProperties" : {
        "csvFile" : "/home/testuser/csv-sample/data/hr.csv"
    },
    ...
}
The connectorHostRef property indicates which remote connector server to use, and refers to the name property you specified in the provisioner.openicf.connectorinfoprovider.json file.
The bundleVersion, "[1.5.1.4,1.6.0.0)", must either be exactly the same as the version of the CSV connector that you are using or, if you specify a range, the CSV connector version must be included in this range.
The csvFile property must specify the absolute path to the CSV data file that you copied to the remote host on which the Java Connector Server is running.
Start OpenIDM:
$ cd /path/to/openidm
$ ./startup.sh
Verify that OpenIDM can reach the remote connector server and that the CSV connector has been configured correctly:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system?_action=test"
[
    {
        "name": "csv",
        "enabled": true,
        "config": "config/provisioner.openicf/csv",
        "objectTypes": [
            "__ALL__",
            "account"
        ],
        "connectorRef": {
            "bundleName": "org.forgerock.openicf.connectors.csvfile-connector",
            "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector",
            "bundleVersion": "[1.5.1.4,1.6.0.0)"
        },
        "displayName": "CSV File Connector",
        "ok": true
    }
]
The connector must return "ok": true.
Alternatively, use the Admin UI to verify that OpenIDM can reach the remote connector server and that the CSV connector is active. Log in to the Admin UI (https://localhost:8443/admin) and select Configure > Connectors. The CSV connector should be listed on the Connectors page, and its status should be Active.
To test that the connector has been configured correctly, run a reconciliation operation as follows:
Select Configure > Mappings and click the systemCsvAccounts_managedUser mapping.
Click Reconcile Now.
If the reconciliation is successful, the three users from the remote CSV file should have been added to the managed user repository.
To check this, select Manage > User.
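You can also check the result over REST. The following command (a sketch, assuming the default openidm-admin credentials and the built-in query-all-ids query) lists the IDs of all managed users:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/managed/user?_queryId=query-all-ids"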
13.2.1. Configuring Failover Between Remote Connector Servers
To prevent the connector server from being a single point of failure, you
can specify a list of remote connector servers that the connector can
target. This failover configuration is included in your project's
conf/provisioner.openicf.connectorinfoprovider.json
file. The connector attempts to contact the first connector server in the
list. If that connector server is down, it proceeds to the next connector
server.
The following sample configuration defines two remote connector servers, on hosts remote-host-1 and remote-host-2. These servers are listed by their name property in a group, specified in the remoteConnectorServersGroups property. You can configure multiple servers per group, and multiple groups in a single remote connector server configuration file.
{ "connectorsLocation" : "connectors", "remoteConnectorServers" : [ { "name" : "dotnet1", "host" : "remote-host-1", "port" : 8759, "protocol" : "websocket", "useSSL" : false, "timeout" : 0, "key" : "password" }, { "name" : "dotnet2", "host" : "remote-host-2", "port" : 8759, "protocol" : "websocket", "useSSL" : false, "timeout" : 0, "key" : "password" } ], "remoteConnectorServersGroups" : [ { "name" : "dotnet-ha", "algorithm" : "failover", "serversList" : [ {"name": "dotnet1"}, {"name": "dotnet2"} ] } ] }
The algorithm can be either failover or roundrobin. If the algorithm is failover, requests are always sent to the first connector server in the list, unless it is unavailable, in which case requests are sent to the next connector server in the list. If the algorithm is roundrobin, requests are distributed equally between the connector servers in the list, in the order in which they are received.
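For example, to distribute requests across the same two servers instead of using them as primary and standby, you could change only the algorithm in the group definition (a minimal sketch based on the previous configuration):

"remoteConnectorServersGroups" : [
    {
        "name" : "dotnet-ha",
        "algorithm" : "roundrobin",
        "serversList" : [
            {"name": "dotnet1"},
            {"name": "dotnet2"}
        ]
    }
]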
Your connector configuration file (provisioner.openicf-connector-name.json) references the remote connector server group, rather than a single remote connector server. For example, the following excerpt of a PowerShell connector configuration file references the dotnet-ha connector server group from the previous configuration:
{ "connectorRef" : { "bundleName" : "MsPowerShell.Connector", "connectorName" : "Org.ForgeRock.OpenICF.Connectors.MsPowerShell.MsPowerShellConnector", "connectorHostRef" : "dotnet-ha", "bundleVersion" : "[1.4.2.0,1.5.0.0)" }, ...
Note
Failover is not supported between connector servers that are running in legacy mode. Therefore, the configuration of each connector server that is part of the failover group must have the protocol property set to websocket.
13.3. Configuring Connectors
Connectors are configured through the OpenICF provisioner service. Each connector configuration is stored in a file in your project's conf/ directory, and is accessible over REST at the openidm/config endpoint. Configuration files are named project-dir/conf/provisioner.openicf-name.json, where name corresponds to the name of the connector. A number of sample connector configurations are available in the openidm/samples/provisioners directory. To use these connector configurations, edit the configuration files as required, and copy them to your project's conf directory.
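For example, the following command (a sketch, assuming an XML connector configured in a file named provisioner.openicf-xml.json in your project's conf/ directory) reads that connector configuration over REST:

$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 "http://localhost:8080/openidm/config/provisioner.openicf/xml"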
If you are creating your own connector configuration files, do not include additional dash characters (-) in the connector name, as this might cause problems with the OSGi parser. For example, the name provisioner.openicf-hrdb.json is fine. The name provisioner.openicf-hr-db.json is not.
The following example shows a connector configuration for an XML file resource:
{ "name" : "xml", "connectorRef" : connector-ref-object, "producerBufferSize" : integer, "connectorPoolingSupported" : boolean, true/false, "poolConfigOption" : pool-config-option-object, "operationTimeout" : operation-timeout-object, "configurationProperties" : configuration-properties-object, "syncFailureHandler" : sync-failure-handler-object, "resultsHandlerConfig" : results-handler-config-object, "objectTypes" : object-types-object, "operationOptions" : operation-options-object }
The name property specifies the name of the system to which you are connecting. This name must be alphanumeric.
13.3.1. Setting the Connector Reference Properties
The following example shows a connector reference object:
{ "bundleName" : "org.forgerock.openicf.connectors.xml-connector", "bundleVersion" : "[1.1.0.3,1.2.0.0)", "connectorName" : "org.forgerock.openicf.connectors.xml.XMLConnector", "connectorHostRef" : "host" }
bundleName
string, required
The ConnectorBundle-Name of the OpenICF connector.
bundleVersion
string, required
The ConnectorBundle-Version of the OpenICF connector. The value can be a single version (such as 1.4.0.0) or a range of versions, which enables you to support multiple connector versions in a single project.
You can specify a range of versions as follows:
[1.1.0.0,1.4.0.0] indicates that all connector versions from 1.1 to 1.4, inclusive, are supported.
[1.1.0.0,1.4.0.0) indicates that all connector versions from 1.1 to 1.4, including 1.1 but excluding 1.4, are supported.
(1.1.0.0,1.4.0.0] indicates that all connector versions from 1.1 to 1.4, excluding 1.1 but including 1.4, are supported.
(1.1.0.0,1.4.0.0) indicates that all connector versions from 1.1 to 1.4, exclusive, are supported.
When a range of versions is specified, OpenIDM uses the latest connector that is available within that range. If your project requires a specific connector version, you must explicitly state the version in your connector configuration file, or constrain the range to address only the version that you need.
connectorName
string, required
The connector implementation class name.
connectorHostRef
string, optional
If the connector runs remotely, the value of this field must match the name field of the RemoteConnectorServers object in the connector server configuration file (provisioner.openicf.connectorinfoprovider.json). For example:

...
"remoteConnectorServers" : [
    {
        "name" : "dotnet",
...

If the connector runs locally, the value of this field can be one of the following:
If the connector .jar is installed in openidm/connectors/, the value must be "#LOCAL". This is currently the default and recommended location.
If the connector .jar is installed in openidm/bundle/ (not recommended), the value must be "osgi:service/org.forgerock.openicf.framework.api.osgi.ConnectorManager".
13.3.2. Setting the Pool Configuration
The poolConfigOption specifies the pool configuration for poolable connectors only (connectors that have "connectorPoolingSupported" : true). Non-poolable connectors ignore this parameter.
The following example shows a pool configuration option object for a poolable connector:
{ "maxObjects" : 10, "maxIdle" : 10, "maxWait" : 150000, "minEvictableIdleTimeMillis" : 120000, "minIdle" : 1 }
maxObjects
The maximum number of idle and active instances of the connector.
maxIdle
The maximum number of idle instances of the connector.
maxWait
The maximum time, in milliseconds, that the pool waits for an object before timing out. A value of 0 means that there is no timeout.
minEvictableIdleTimeMillis
The maximum time, in milliseconds, that an object can be idle before it is removed. A value of 0 means that there is no idle timeout.
minIdle
The minimum number of idle instances of the connector.
13.3.3. Setting the Operation Timeouts
The operation timeout property enables you to configure timeout values per operation type. By default, no timeout is configured for any operation type. A sample configuration follows:
{ "CREATE" : -1, "TEST" : -1, "AUTHENTICATE" : -1, "SEARCH" : -1, "VALIDATE" : -1, "GET" : -1, "UPDATE" : -1, "DELETE" : -1, "SCRIPT_ON_CONNECTOR" : -1, "SCRIPT_ON_RESOURCE" : -1, "SYNC" : -1, "SCHEMA" : -1 }
- operation-name
Timeout in milliseconds
A value of -1 disables the timeout.
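For example, to make search operations time out after ten seconds while leaving the other operations without a timeout, you could set only that value (a sketch; the timeout shown is arbitrary):

{
    "SEARCH" : 10000,
    "CREATE" : -1,
    ...
}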
13.3.4. Setting the Connection Configuration
The configurationProperties
object specifies the
configuration for the connection between the connector and the resource,
and is therefore resource specific.
The following example shows a configuration properties object for the default XML sample resource connector:
"configurationProperties" : { "xsdIcfFilePath" : "&{launcher.project.location}/data/resource-schema-1.xsd", "xsdFilePath" : "&{launcher.project.location}/data/resource-schema-extension.xsd", "xmlFilePath" : "&{launcher.project.location}/data/xmlConnectorData.xml" }
- property
Individual properties depend on the type of connector.
13.3.5. Setting the Synchronization Failure Configuration
The syncFailureHandler
object specifies what should
happen if a liveSync operation reports a failure for an operation. The
following example shows a synchronization failure configuration:
{ "maxRetries" : 5, "postRetryAction" : "logged-ignore" }
maxRetries
positive integer or -1, required
The number of attempts that OpenIDM should make to process a failed modification. A value of zero indicates that failed modifications should not be reattempted. In this case, the post retry action is executed immediately when a liveSync operation fails. A value of -1 (or omitting the maxRetries property, or the entire syncFailureHandler object) indicates that failed modifications should be retried an infinite number of times. In this case, no post retry action is executed.
postRetryAction
string, required
The action that should be taken if the synchronization operation fails after the specified number of attempts. The post retry action can be one of the following:
logged-ignore indicates that OpenIDM should ignore the failed modification, and log its occurrence.
dead-letter-queue indicates that OpenIDM should save the details of the failed modification in a table in the repository (accessible over REST at repo/synchronisation/deadLetterQueue/provisioner-name).
script specifies a custom script that should be executed when the maximum number of retries has been reached.
For more information, see "Configuring the LiveSync Retry Policy".
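As an illustration, the following excerpt (a minimal sketch) configures OpenIDM to retry a failed modification three times and then move its details to the dead letter queue:

"syncFailureHandler" : {
    "maxRetries" : 3,
    "postRetryAction" : "dead-letter-queue"
}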
13.3.6. Configuring How Results Are Handled
The resultsHandlerConfig object specifies how OpenICF returns results. These configuration properties depend on the connector type and on the interfaces that are implemented by that connector type. For information about the interfaces that each connector supports, see the OpenICF Connector Configuration Reference.
The following example shows a results handler configuration object:
{ "enableNormalizingResultsHandler" : true, "enableFilteredResultsHandler" : false, "enableCaseInsensitiveFilter" : false, "enableAttributesToGetSearchResultsHandler" : false }
enableNormalizingResultsHandler
boolean
If the connector implements the attribute normalizer interface, you can enable this interface by setting this configuration property to true. If the connector does not implement the attribute normalizer interface, the value of this property has no effect.
enableFilteredResultsHandler
boolean
If the connector uses the filtering and search capabilities of the remote connected system, you can set this property to false. If the connector does not use the remote system's filtering and search capabilities (for example, the CSV file connector), you must set this property to true. Otherwise, the connector performs an additional, case-sensitive search, which can cause problems.
enableCaseInsensitiveFilter
boolean
By default, the filtered results handler (described previously) is case-sensitive. If the filtered results handler is enabled, you can use this property to enable case-insensitive filtering. If you do not enable case-insensitive filtering, a search will not return results unless the case matches exactly. For example, a search for lastName = "Jensen" will not match a stored user with lastName : jensen.
enableAttributesToGetSearchResultsHandler
boolean
By default, OpenIDM determines which attributes should be retrieved in a search. If the enableAttributesToGetSearchResultsHandler property is set to true, the OpenICF framework removes all attributes from the READ/QUERY response, except for those that are specifically requested. For performance reasons, you should set this property to false for local connectors and to true for remote connectors.
13.3.7. Specifying the Supported Object Types
The object-types configuration specifies the objects (user, group, and so on) that are supported by the connector. The property names set here define the objectType that is used in the URI. For example:

system/systemName/objectType
This configuration is based on the JSON Schema with the extensions described in the following section.
Attribute names that start or end with __
are regarded as
special attributes by OpenICF. The purpose of the
special attributes in OpenICF is to enable someone who is developing a
new connector to create a contract regarding how
a property can be referenced, regardless of the application that is using
the connector. In this way, the connector can map specific object
information between an arbitrary application and the resource, without
knowing how that information is referenced in the application.
These attributes have no specific meaning in the context of OpenIDM,
although some of the connectors that are bundled with OpenIDM use these
attributes. The generic LDAP connector, for example, can be used with OpenDJ,
Active Directory, OpenLDAP, and other LDAP directories. Each of these
directories might use a different attribute name to represent the same type
of information. For example, Active Directory uses
unicodePassword
and OpenDJ uses
userPassword
to represent the same thing, a user's
password. The LDAP connector uses the special OpenICF
__PASSWORD__
attribute to abstract that difference. In
the same way, the LDAP connector maps the __NAME__
attribute to an LDAP dn
.
The OpenICF __UID__
is a special case. The
__UID__
must not be included in the
OpenIDM configuration or in any update or create operation. This attribute
denotes the unique identity attribute of an object and OpenIDM always maps
it to the _id
of the object.
The following excerpt shows the configuration of an
account
object type:
{ "account" : { "$schema" : "http://json-schema.org/draft-03/schema", "id" : "__ACCOUNT__", "type" : "object", "nativeType" : "__ACCOUNT__", "properties" : { "name" : { "type" : "string", "nativeName" : "__NAME__", "nativeType" : "JAVA_TYPE_PRIMITIVE_LONG", "flags" : [ "NOT_CREATABLE", "NOT_UPDATEABLE", "NOT_READABLE", "NOT_RETURNED_BY_DEFAULT" ] }, "groups" : { "type" : "array", "items" : { "type" : "string", "nativeType" : "string" }, "nativeName" : "__GROUPS__", "nativeType" : "string", "flags" : [ "NOT_RETURNED_BY_DEFAULT" ] }, "givenName" : { "type" : "string", "nativeName" : "givenName", "nativeType" : "string" }, } } }
OpenICF supports an __ALL__
object type that ensures
that objects of every type are included in a synchronization operation. The
primary purpose of this object type is to prevent synchronization errors
when multiple changes affect more than one object type.
For example, imagine a deployment synchronizing two external systems. On
system A, the administrator creates a user, jdoe
, then
adds the user to a group, engineers
. When these changes
are synchronized to system B, if the __GROUPS__
object
type is synchronized first, the synchronization will fail, because the group
contains a user that does not yet exist on system B. Synchronizing the
__ALL__
object type ensures that user
jdoe
is created on the external system before he is added
to the group engineers
.
The __ALL__
object type is assumed by default - you do
not need to declare it in your provisioner configuration file. If it is not
declared, the object type is named __ALL__
. If you want
to map a different name for this object type, declare it in your provisioner
configuration. The following excerpt from a sample provisioner configuration
uses the name allobjects
:
"objectTypes": { "allobjects": { "$schema": "http://json-schema.org/draft-03/schema", "id": "__ALL__", "type": "object", "nativeType": "__ALL__" }, ...
A liveSync operation invoked with no object type assumes an object type of
__ALL__
. For example, the following call invokes a
liveSync operation on all defined object types in an LDAP system:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap?_action=liveSync"
Note
Using the __ALL__
object type requires a mechanism
to ensure the order in which synchronization changes are processed. Servers
that use the cn=changelog
mechanism to order sync
changes (such as OpenDJ, Oracle DSEE, and the legacy Sun Directory Server)
cannot use the __ALL__
object type by default, and must
be forced to use time stamps to order their sync changes. For these LDAP
server types, set useTimestampsForSync
to
true
in the provisioner configuration.
LDAP servers that use timestamps by default (such as Active Directory GCs
and OpenLDAP) can use the __ALL__
object type without
any additional configuration. Active Directory and Active Directory LDS,
which use Update Sequence Numbers, can also use the
__ALL__
object type without additional configuration.
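For example, for an OpenDJ system you would add the flag to the connector's configurationProperties (a minimal sketch; all other properties are omitted):

"configurationProperties" : {
    ...
    "useTimestampsForSync" : true
}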
13.3.7.1. Extending the Object Type Configuration
nativeType
string, optional
The native OpenICF object type.
The list of supported native object types is dependent on the resource, or on the connector. For example, an LDAP connector might have object types such as __ACCOUNT__ and __GROUP__.
13.3.7.2. Extending the Property Type Configuration
nativeType
string, optional
The native OpenICF attribute type.
The following native types are supported:
JAVA_TYPE_BIGDECIMAL
JAVA_TYPE_BIGINTEGER
JAVA_TYPE_BYTE
JAVA_TYPE_BYTE_ARRAY
JAVA_TYPE_CHAR
JAVA_TYPE_CHARACTER
JAVA_TYPE_DATE
JAVA_TYPE_DOUBLE
JAVA_TYPE_FILE
JAVA_TYPE_FLOAT
JAVA_TYPE_GUARDEDBYTEARRAY
JAVA_TYPE_GUARDEDSTRING
JAVA_TYPE_INT
JAVA_TYPE_INTEGER
JAVA_TYPE_LONG
JAVA_TYPE_OBJECT
JAVA_TYPE_PRIMITIVE_BOOLEAN
JAVA_TYPE_PRIMITIVE_BYTE
JAVA_TYPE_PRIMITIVE_DOUBLE
JAVA_TYPE_PRIMITIVE_FLOAT
JAVA_TYPE_PRIMITIVE_LONG
JAVA_TYPE_STRING
Note
The JAVA_TYPE_DATE property is deprecated. Functionality may be removed in a future release. This property-level extension is an alias for string. Any dates assigned to this extension should be formatted per ISO 8601.
nativeName
string, optional
The native OpenICF attribute name.
flags
string, optional
The native OpenICF attribute flags. OpenICF supports the following attribute flags:
MULTIVALUED - specifies that the property can be multivalued. This flag sets the type of the attribute as follows:
"type" : "array"
If the attribute type is array, an additional items field specifies the supported type for the objects in the array. For example:

"groups" : {
    "type" : "array",
    "items" : {
        "type" : "string",
        "nativeType" : "string"
    },
....

NOT_CREATABLE, NOT_READABLE, NOT_RETURNED_BY_DEFAULT, NOT_UPDATEABLE
In some cases, the connector might not support manipulating an attribute because the attribute can only be changed directly on the remote system. For example, if the name attribute of an account can only be created by Active Directory, and never changed by OpenIDM, you would add NOT_CREATABLE and NOT_UPDATEABLE to the provisioner configuration for that attribute.
Certain attributes, such as LDAP groups or other calculated attributes, might be expensive to read. You might want to avoid returning these attributes in a default read of the object, unless they are explicitly requested. In this case, you would add the NOT_RETURNED_BY_DEFAULT flag to the provisioner configuration for that attribute.
REQUIRED - specifies that the property is required in create operations. This flag sets the required property of an attribute as follows:
"required" : true
Note
Do not use the dash character (-) in property names, like last-name. Dashes in names make JavaScript syntax more complex. If you cannot avoid the dash, write source['last-name'] instead of source.last-name in your JavaScript scripts.
13.3.8. Configuring the Operation Options
The operationOptions
object enables you to deny specific
operations on a resource. For example, you can use this configuration object
to deny CREATE
and DELETE
operations
on a read-only resource to avoid OpenIDM accidentally updating the resource
during a synchronization operation.
The following example defines the options for the "SYNC"
operation:
"operationOptions" : { { "SYNC" : { "denied" : true, "onDeny" : "DO_NOTHING", "objectFeatures" : { "__ACCOUNT__" : { "denied" : true, "onDeny" : "THROW_EXCEPTION", "operationOptionInfo" : { "$schema" : "http://json-schema.org/draft-03/schema", "id" : "FIX_ME", "type" : "object", "properties" : { "_OperationOption-float" : { "type" : "number", "nativeType" : "JAVA_TYPE_PRIMITIVE_FLOAT" } } } }, "__GROUP__" : { "denied" : false, "onDeny" : "DO_NOTHING" } } } } ...
The OpenICF Framework supports the following operations:
AUTHENTICATE
CREATE
DELETE
GET
RESOLVEUSERNAME
SCHEMA
SCRIPT_ON_CONNECTOR
SCRIPT_ON_RESOURCE
SEARCH
SYNC
TEST
UPDATE
VALIDATE
For detailed information on these operations, see the OpenICF API documentation.
The operationOptions
object has the following
configurable properties:
denied
boolean, optional
This property prevents operation execution if the value is true.
onDeny
string, optional
If denied is true, then the service uses this value. Default value: DO_NOTHING.
DO_NOTHING: on operation, the service does nothing.
THROW_EXCEPTION: on operation, the service throws a ForbiddenException exception.
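For example, to protect a read-only resource as described at the start of this section, you might deny create and delete operations and raise an exception when either is attempted (a sketch):

"operationOptions" : {
    "CREATE" : {
        "denied" : true,
        "onDeny" : "THROW_EXCEPTION"
    },
    "DELETE" : {
        "denied" : true,
        "onDeny" : "THROW_EXCEPTION"
    }
}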
13.4. Installing and Configuring Remote Connector Servers
Connectors that use the .NET framework must run remotely. Java connectors can run locally or remotely. Connectors that run remotely require a connector server to enable OpenIDM to access the connector.
For a list of supported versions, and compatibility between versions, see "Supported Connectors, Connector Servers, and Plugins" in the Release Notes.
This section describes the steps to install a .NET connector server and a remote Java Connector Server.
13.4.1. Installing and Configuring a .NET Connector Server
A .NET connector server is useful when an application is written in Java, but a connector bundle is written using C#. Because a Java application (for example, a J2EE application) cannot load C# classes, you must deploy the C# bundles under a .NET connector server. The Java application can communicate with the C# connector server over the network, and the C# connector server acts as a proxy, providing any authenticated application with access to the C# bundles that are deployed within it.
By default, the connector server outputs log messages to a file named connectorserver.log, in the C:\path\to\openicf directory. To change the location of the log file, set the initializeData parameter in the configuration file before you install the connector server. For example, the following excerpt sets the log file to C:\openicf\logs\connectorserver.log:
<add name="file" type="System.Diagnostics.TextWriterTraceListener" initializeData="C:\openicf\logs\connectorserver.log" traceOutputOptions="DateTime"> <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information"/> </add>
Download the OpenICF .NET Connector Server from the ForgeRock BackStage site.
The .NET connector server is distributed in two formats. The .msi file is a wizard that installs the Connector Server as a Windows service. The .zip file is simply a bundle of all the files required to run the Connector Server.
If you do not want to run the Connector Server as a Windows service, download and extract the .zip file, then move on to "Configuring the .NET Connector Server".
If you have deployed the .zip file and then decide to run the Connector Server as a service, install the service manually with the following command:

.\ConnectorServerService.exe /install /serviceName service-name
Then proceed to "Configuring the .NET Connector Server".
To install the Connector Server as a Windows service automatically, follow the remaining steps in this section.
Execute the openicf-zip-1.5.2.0-dotnet.msi installation file and complete the wizard.
You must run the wizard as a user who has permissions to start and stop a Windows service, otherwise the service will not start.
When you choose the Setup Type, select Typical unless you require backward compatibility with the 1.4.0.0 connector server. If you need backward compatibility, select Custom, and install the Legacy Connector Service.
When the wizard has completed, the Connector Server is installed as a Windows service. Open the Microsoft Services Console and make sure that the Connector Server is listed there. The name of the service is OpenICF Connector Server, by default.
To run the Connector Server in legacy mode (for backward compatibility with the 1.4.0.0 connector server), make the following changes:
If you are installing the .NET Connector Server from the .msi distribution, select Custom for the Setup Type, and install the Legacy Connector Service.
If you are installing the .NET Connector Server from the .zip distribution, launch the Connector Server by running the ConnectorServer.exe command, and not the ConnectorServerService.exe command.
Adjust the port parameter in your OpenIDM remote connector server configuration file. In legacy mode, the connector server listens on port 8760 by default.
Remove the "protocol" : "websocket", line from your OpenIDM remote connector server configuration file to specify that the connector server should use the legacy protocol.
In the commands shown in "Configuring the .NET Connector Server", replace ConnectorServerService.exe with ConnectorServer.exe.
After you have installed the .NET Connector Server, as described in the previous section, follow these steps to configure the Connector Server:
Make sure that the Connector Server is not currently running. If it is running, use the Microsoft Services Console to stop it.
At the command prompt, change to the directory where the Connector Server was installed:
c:\> cd "c:\Program Files (x86)\ForgeRock\OpenICF"
Run the ConnectorServerService /setkey command to set a secret key for the Connector Server. The key can be any string value. This example sets the secret key to Passw0rd:

ConnectorServerService /setkey Passw0rd
Key has been successfully updated.
This key is used by clients connecting to the Connector Server. The key that you set here must also be set in the OpenIDM connector info provider configuration file (conf/provisioner.openicf.connectorinfoprovider.json). For more information, see "Configuring OpenIDM to Connect to the .NET Connector Server".
Edit the Connector Server configuration.
The Connector Server configuration is saved in a file named ConnectorServerService.exe.Config (in the directory in which the Connector Server is installed).
Check and edit this file, as necessary, to reflect your installation. Specifically, verify that the baseAddress reflects the host and port on which the connector server is installed:

<system.serviceModel>
    <services>
        <service name="Org.ForgeRock.OpenICF.Framework.Service.WcfServiceLibrary.WcfWebsocket">
            <host>
                <baseAddresses>
                    <add baseAddress="http://0.0.0.0:8759/openicf" />
                </baseAddresses>
            </host>
        </service>
    </services>
</system.serviceModel>
The baseAddress specifies the host and port on which the Connector Server listens, and is set to http://0.0.0.0:8759/openicf by default. If you set a host value other than the default 0.0.0.0, connections from all IP addresses other than the one specified are denied.
If the Windows firewall is enabled, you must create an inbound port rule to open the TCP port for the connector server (8759 by default). If you do not open the TCP port, OpenIDM will be unable to contact the Connector Server. For more information, see the Microsoft documentation on creating an inbound port rule.
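For example, the following command (a sketch, run from an elevated command prompt; the rule name is arbitrary) opens the default connector server port:

netsh advfirewall firewall add rule name="OpenICF Connector Server" ^
 dir=in action=allow protocol=TCP localport=8759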
Optionally, configure the Connector Server to use SSL:
Use an existing CA certificate, or use the makecert utility to create an exportable self-signed Root CA Certificate:

c:\"Program Files (x86)"\"Windows Kits"\8.1\bin\x64\makecert.exe ^
 -pe -r -sky signature -cy authority -a sha1 -n "CN=Dev Certification Authority" ^
 -ss Root -sr LocalMachine -sk RootCA signroot.cer
Create an exportable server authentication certificate:
c:\"Program Files (x86)"\"Windows Kits"\8.1\bin\x64\makecert.exe ^ -pe -sky exchange -cy end -n "CN=localhost" -b 01/01/2015 -e 01/01/2050 -eku 1.3.6.1.5.5.7.3.1 ^ -ir LocalMachine -is Root -ic signroot.cer -ss My -sr localMachine -sk server ^ -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 server.cer
Retrieve and set the certificate thumbprint:
c:\Program Files (x86)\ForgeRock\OpenICF>ConnectorServerService.exe /setCertificate Select certificate you want to use: Index Issued To Thumbprint ----- --------- ------------------------- 0) localhost 4D01BE385BF079DD4B9C5A416E7B535904855E0A Certificate Thumbprint has been successfully updated to 4D01BE385BF079DD4B9C5A416E7B535904855E0A.
Bind the certificate to the Connector Server port. For example:
netsh http add sslcert ipport=0.0.0.0:8759 ^ certhash=4D01BE385BF079DD4B9C5A416E7B535904855E0A ^ appid={bca0631d-cab1-48c8-bd2a-eb049d7d3c55}
Enable the service to be executed as a non-administrative user:
netsh http add urlacl url=https://+:8759/ user=EVERYONE
Change the Connector Server configuration to use HTTPS and not HTTP:
<add baseAddress="https://0.0.0.0:8759/openicf" />
Check the trace settings, in the same Connector Server configuration file, under the
system.diagnostics
item:<system.diagnostics> <trace autoflush="true" indentsize="4"> <listeners> <remove name="Default" /> <add name="console" /> <add name="file" /> </listeners> </trace> <sources> <source name="ConnectorServer" switchName="switch1"> <listeners> <remove name="Default" /> <add name="file" /> </listeners> </source> </sources> <switches> <add name="switch1" value="Information" /> </switches> <sharedListeners> <add name="console" type="System.Diagnostics.ConsoleTraceListener" /> <add name="file" type="System.Diagnostics.TextWriterTraceListener" initializeData="logs\ConnectorServerService.log" traceOutputOptions="DateTime"> <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information" /> </add> </sharedListeners> </system.diagnostics>
The Connector Server uses the standard .NET trace mechanism. For more information about tracing options, see Microsoft's .NET documentation for System.Diagnostics.
The default trace settings are a good starting point. For less tracing, set the EventTypeFilter's initializeData to Warning or Error. For very verbose logging, set the value to Verbose or All. The logging level has a direct effect on the performance of the Connector Servers, so take care when setting this level.
Start the .NET Connector Server in one of the following ways:
Start the server as a Windows service, by using the Microsoft Services Console.
Locate the connector server service (OpenICF Connector Server), and click Start the service or Restart the service.
The service is executed with the credentials of the "run as" user (System, by default).
In the Windows Command Prompt, run the following command:
net start ConnectorServerService
To stop the service in this manner, run the following command:
net stop ConnectorServerService
Start the server without using Windows services.
In the Windows Command Prompt, change directory to the location where the Connector Server was installed. The default location is c:\Program Files (x86)\ForgeRock\OpenICF. For example:

c:\> cd "c:\Program Files (x86)\ForgeRock\OpenICF"
.Start the server with the following command:
ConnectorServerService.exe /run
Note that this command starts the Connector Server with the credentials of the current user. It does not start the server as a Windows service.
Configuring OpenIDM to Connect to the .NET Connector Server
The connector info provider service configures one or more remote connector
servers to which OpenIDM can connect. The connector info provider
configuration is stored in a file named
project-dir/conf/provisioner.openicf.connectorinfoprovider.json
.
A sample connector info provider configuration file is located in
openidm/samples/provisioners/
.
To configure OpenIDM to use the remote .NET connector server, follow these steps:
Start OpenIDM, if it is not already running.
Copy the sample connector info provider configuration file to your project's conf/ directory:

$ cd /path/to/openidm
$ cp samples/provisioners/provisioner.openicf.connectorinfoprovider.json project-dir/conf/
Edit the connector info provider configuration, specifying the details of the remote connector server:
"remoteConnectorServers" : [ { "name" : "dotnet", "host" : "192.0.2.0", "port" : 8759, "useSSL" : false, "timeout" : 0, "protocol" : "websocket", "key" : "Passw0rd" }
Configurable properties are as follows:
name
Specifies the name of the connection to the .NET connector server. The name can be any string. This name is referenced in the connectorHostRef property of the connector configuration file (provisioner.openicf-ad.json).
host
Specifies the IP address of the host on which the Connector Server is installed.
port
Specifies the port on which the Connector Server listens. This property matches the connectorserver.port property in the ConnectorServerService.exe.Config file.
For more information, see "Configuring the .NET Connector Server".
useSSL
Specifies whether the connection to the Connector Server should be secured. This property matches the connectorserver.usessl property in the ConnectorServerService.exe.Config file.
timeout
Specifies the length of time, in seconds, that OpenIDM should attempt to connect to the Connector Server before abandoning the attempt. To disable the timeout, set the value of this property to 0.
protocol
Version 1.5.2.0 of the OpenICF framework supports a new communication protocol with remote connector servers. This protocol is enabled by default, and its value is websocket in the default configuration.
key
Specifies the connector server key. This property matches the key property in the ConnectorServerService.exe.Config file. For more information, see "Configuring the .NET Connector Server".
The string value that you enter here is encrypted as soon as the file is saved.
13.4.2. Installing and Configuring a Remote Java Connector Server
In certain situations, it might be necessary to set up a remote Java Connector Server. This section provides instructions for setting up a remote Java Connector Server on Unix/Linux and Windows.
Installing a Remote Java Connector Server for Unix/Linux
Download the OpenICF Java Connector Server from ForgeRock's BackStage site.
Change to the appropriate directory and unpack the zip file. The following command unzips the file in the current directory:
$ unzip openicf-zip-1.5.2.0.zip
Change to the openicf directory:

$ cd /path/to/openicf
The Java Connector Server uses a key property to authenticate the connection. The default key value is changeit. To change the value of the secret key, run a command similar to the following. This example sets the key value to Passw0rd:

$ cd /path/to/openicf
$ bin/ConnectorServer.sh /setkey Passw0rd
Key has been successfully updated.
Review the ConnectorServer.properties file in the /path/to/openicf/conf directory, and make any required changes. By default, the configuration file has the following properties:

connectorserver.port=8759
connectorserver.libDir=lib
connectorserver.usessl=false
connectorserver.bundleDir=bundles
connectorserver.loggerClass=org.forgerock.openicf.common.logging.slf4j.SLF4JLog
connectorserver.key=xOS4IeeE6eb/AhMbhxZEC37PgtE\=
The connectorserver.usessl parameter indicates whether client connections to the connector server should be over SSL. This property is set to false by default.
To secure connections to the connector server, set this property to true and set the following properties before you start the connector server:

java -Djavax.net.ssl.keyStore=mySrvKeystore -Djavax.net.ssl.keyStorePassword=Passw0rd
Start the Java Connector Server:
$ bin/ConnectorServer.sh /run
The connector server is now running, and listening on port 8759, by default.
Log files are available in the /path/to/openicf/logs directory.

$ ls logs/
Connector.log  ConnectorServer.log  ConnectorServerTrace.log
If required, stop the Java Connector Server by pressing CTRL-C.
Installing a Remote Java Connector Server for Windows
Download the OpenICF Java Connector Server from ForgeRock's BackStage site.
Change to the appropriate directory and unpack the zip file.
In a Command Prompt window, change to the openicf directory:

C:\>cd C:\path\to\openicf
If required, secure the communication between OpenIDM and the Java Connector Server. The Java Connector Server uses a key property to authenticate the connection. The default key value is changeit.
To change the value of the secret key, use the bin\ConnectorServer.bat /setkey command. The following example sets the key to Passw0rd:

c:\path\to\openicf>bin\ConnectorServer.bat /setkey Passw0rd
lib\framework\connector-framework.jar;lib\framework\connector-framework-internal.jar;lib\framework\groovy-all.jar;lib\framework\icfl-over-slf4j.jar;lib\framework\slf4j-api.jar;lib\framework\logback-core.jar;lib\framework\logback-classic.jar
Review the ConnectorServer.properties file in the path\to\openicf\conf directory, and make any required changes. By default, the configuration file has the following properties:

connectorserver.port=8759
connectorserver.libDir=lib
connectorserver.usessl=false
connectorserver.bundleDir=bundles
connectorserver.loggerClass=org.forgerock.openicf.common.logging.slf4j.SLF4JLog
connectorserver.key=xOS4IeeE6eb/AhMbhxZEC37PgtE\=
You can either run the Java Connector Server as a Windows service, or start and stop it from the command line.
To install the Java Connector Server as a Windows service, run the following command:

c:\path\to\openicf>bin\ConnectorServer.bat /install

If you install the connector server as a Windows service, you can use the Microsoft Services Console to start, stop, and restart the service. The Java Connector Service is named OpenICFConnectorServerJava.
To uninstall the Java Connector Server as a Windows service, run the following command:

c:\path\to\openicf>bin\ConnectorServer.bat /uninstall
To start the Java Connector Server from the command line, enter the following command:
c:\path\to\openicf>bin\ConnectorServer.bat /run
The connector server is now running, and listening on port 8759, by default.
Log files are available in the \path\to\openicf\logs directory.
If required, stop the Java Connector Server by pressing CTRL-C.
13.5. Supported Connectors
OpenIDM provides several connectors by default, in the /path/to/openidm/connectors directory. You can download connectors that are not bundled with OpenIDM from ForgeRock's BackStage site.
For details about the connectors that are supported for use with OpenIDM 5, see the Connectors Guide.
13.6. Creating Default Connector Configurations
There are three ways to create provisioner files:
Start with the sample provisioner files in the /path/to/openidm/samples/provisioners directory. For more information, see "Supported Connectors".
Set up connectors with the help of the Admin UI. To start this process, navigate to https://localhost:8443/admin and log in to OpenIDM. Continue with "Adding New Connectors from the Admin UI".
Use the service that OpenIDM exposes through the REST interface to create basic connector configuration files, or use the cli.sh or cli.bat scripts to generate a basic connector configuration (see the example after this list). To see how this works, continue with "Adding New Connectors from the Command Line".
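As an illustration of the last option, the following commands (a sketch; the connector name shown is arbitrary) launch the interactive configureconnector subcommand from the installation directory:

$ cd /path/to/openidm
$ ./cli.sh configureconnector --name myXmlConnector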
13.6.1. Adding New Connectors from the Admin UI
You can include several different connectors in an OpenIDM configuration. In the Admin UI, select Configure > Connector. Try some of the different connector types in the screen that appears. Observe as the Admin UI changes the configuration options to match the requirements of the connector type.
The list of connectors shown in the Admin UI does not include all supported connectors. For information and examples of how each supported connector is configured, see "Supported Connectors".
When you have filled in all required text boxes, the Admin UI allows you to validate the connector configuration.
If you want to configure a different connector through the
Admin UI, you could copy the provisioner file from the
/path/to/openidm/samples/provisioners
directory.
However, additional configuration may be required, as described in
"Supported Connectors".
Alternatively, some connectors are included with the configuration of a specific sample. For example, if you want to build a ScriptedSQL connector, read "Using the Connector Bundler to Build a ScriptedSQL Connector" in the Samples Guide.
13.6.2. Adding New Connectors from the Command Line
This section describes how to create connector configurations over the REST interface. For instructions on how to create connector configurations from the command line, see "Using the configureconnector Subcommand".
You create a new connector configuration file in three stages:
List the available connectors.
Generate the core configuration.
Connect to the target system and generate the final configuration.
List the available connectors by using the following command:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system?_action=availableConnectors"
Available connectors are installed in openidm/connectors
.
OpenIDM bundles the following connectors:
CSV File Connector
Database Table Connector
Scripted Groovy Connector Toolkit, which includes the following sample implementations:
Scripted SQL Connector
Scripted CREST Connector
Scripted REST Connector
LDAP Connector
XML Connector
GoogleApps Connector
Salesforce Connector
The preceding command therefore returns the following output:
{ "connectorRef": [ { "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector", "displayName": "XML Connector", "bundleName": "org.forgerock.openicf.connectors.xml-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.1.0.3" }, { "connectorName": "org.identityconnectors.ldap.LdapConnector", "displayName": "LDAP Connector", "bundleName": "org.forgerock.openicf.connectors.ldap-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.4.3.0" }, { "connectorName": "org.forgerock.openicf.connectors.scriptedsql.ScriptedSQLConnector", "displayName": "Scripted SQL Connector", "bundleName": "org.forgerock.openicf.connectors.groovy-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.4.3.0" }, { "connectorName": "org.forgerock.openicf.connectors.scriptedrest.ScriptedRESTConnector", "displayName": "Scripted REST Connector", "bundleName": "org.forgerock.openicf.connectors.groovy-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.4.3.0" }, { "connectorName": "org.forgerock.openicf.connectors.scriptedcrest.ScriptedCRESTConnector", "displayName": "Scripted CREST Connector", "bundleName": "org.forgerock.openicf.connectors.groovy-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.4.3.0" }, { "connectorName": "org.forgerock.openicf.connectors.groovy.ScriptedPoolableConnector", "displayName": "Scripted Poolable Groovy Connector", "bundleName": "org.forgerock.openicf.connectors.groovy-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.4.3.0" }, { "connectorName": "org.forgerock.openicf.connectors.groovy.ScriptedConnector", "displayName": "Scripted Groovy Connector", "bundleName": "org.forgerock.openicf.connectors.groovy-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.4.3.0" }, { "connectorName": "org.identityconnectors.databasetable.DatabaseTableConnector", "displayName": "Database Table Connector", "bundleName": "org.forgerock.openicf.connectors.databasetable-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.1.0.2" }, { "connectorName": "org.forgerock.openicf.csvfile.CSVFileConnector", "displayName": "CSV File Connector", "bundleName": "org.forgerock.openicf.connectors.csvfile-connector", "systemType": "provisioner.openicf", "bundleVersion": "1.5.1.4" } ] }
To generate the core configuration, choose one of the available connectors by copying one of the JSON objects from the generated list into the body of the REST command, as shown in the following command for the XML connector:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "connectorRef": {
     "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector",
     "displayName": "XML Connector",
     "bundleName": "org.forgerock.openicf.connectors.xml-connector",
     "bundleVersion": "[1.1.0.3,1.2.0.0)"
   }
 }' \
 "http://localhost:8080/openidm/system?_action=createCoreConfig"
This command returns a core connector configuration, similar to the following:
{ "poolConfigOption": { "minIdle": 1, "minEvictableIdleTimeMillis": 120000, "maxWait": 150000, "maxIdle": 10, "maxObjects": 10 }, "resultsHandlerConfig": { "enableAttributesToGetSearchResultsHandler": true, "enableFilteredResultsHandler": true, "enableNormalizingResultsHandler": true }, "operationTimeout": { "SCHEMA": -1, "SYNC": -1, "VALIDATE": -1, "SEARCH": -1, "AUTHENTICATE": -1, "CREATE": -1, "UPDATE": -1, "DELETE": -1, "TEST": -1, "SCRIPT_ON_CONNECTOR": -1, "SCRIPT_ON_RESOURCE": -1, "GET": -1, "RESOLVEUSERNAME": -1 }, "configurationProperties": { "xsdIcfFilePath": null, "xsdFilePath": null, "createFileIfNotExists": false, "xmlFilePath": null }, "connectorRef": { "bundleVersion": "[1.1.0.3,1.2.0.0)", "bundleName": "org.forgerock.openicf.connectors.xml-connector", "displayName": "XML Connector", "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector" } }
The configuration that is returned is not yet functional. Notice that
it does not contain the required system-specific
configurationProperties
, such as the host name and port,
or the xmlFilePath
for the XML file-based connector. In
addition, the configuration does not include the complete list of
objectTypes
and operationOptions
.
To generate the final configuration, add values for the
configurationProperties
to the core configuration, and
use the updated configuration as the body for the next command:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --request POST \ --data '{ "configurationProperties": { "xsdIcfFilePath" : "samples/sample1/data/resource-schema-1.xsd", "xsdFilePath" : "samples/sample1/data/resource-schema-extension.xsd", "xmlFilePath" : "samples/sample1/data/xmlConnectorData.xml", "createFileIfNotExists": false }, "operationTimeout": { "SCHEMA": -1, "SYNC": -1, "VALIDATE": -1, "SEARCH": -1, "AUTHENTICATE": -1, "CREATE": -1, "UPDATE": -1, "DELETE": -1, "TEST": -1, "SCRIPT_ON_CONNECTOR": -1, "SCRIPT_ON_RESOURCE": -1, "GET": -1, "RESOLVEUSERNAME": -1 }, "resultsHandlerConfig": { "enableAttributesToGetSearchResultsHandler": true, "enableFilteredResultsHandler": true, "enableNormalizingResultsHandler": true }, "poolConfigOption": { "minIdle": 1, "minEvictableIdleTimeMillis": 120000, "maxWait": 150000, "maxIdle": 10, "maxObjects": 10 }, "connectorRef": { "bundleVersion": "[1.1.0.3,1.2.0.0)", "bundleName": "org.forgerock.openicf.connectors.xml-connector", "displayName": "XML Connector", "connectorName": "org.forgerock.openicf.connectors.xml.XMLConnector" } }' \ "http://localhost:8080/openidm/system?_action=createFullConfig"
Note
Notice the single quotes around the argument to the --data option in the preceding command. For most UNIX shells, single quotes around a string prevent the shell from executing the command when encountering a new line in the content. You can therefore pass the --data '...' option on a single line, or include line feeds.
OpenIDM attempts to read the schema, if available, from the external
resource in order to generate output. OpenIDM then iterates through schema
objects and attributes, creating JSON representations for
objectTypes
and operationOptions
for
supported objects and operations.
The output includes the basic --data
input, along with
operationOptions
and objectTypes
.
Because OpenIDM produces a full property set for all attributes and all
object types in the schema from the external resource, the resulting
configuration can be large. For an LDAP server, OpenIDM can generate a
configuration containing several tens of thousands of lines, for example.
You might therefore want to reduce the schema to a minimum on the external
resource before you run the createFullConfig
command.
When you have the complete connector configuration, save that configuration
in a file named provisioner.openicf-name.json
(where name corresponds to the name of the connector) and place it in the
conf
directory of your project. For more information, see
"Configuring Connectors".
13.7. Checking the Status of External Systems Over REST
After a connection has been configured, external systems are accessible over
the REST interface at the URL
http://localhost:8080/openidm/system/connector-name
.
Aside from accessing the data objects within the external systems, you can
test the availability of the systems themselves.
To list the external systems that are connected to an OpenIDM instance, use the test action on the URL http://localhost:8080/openidm/system/. The following example shows the status of a single configured connector, for an external LDAP system:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ "http://localhost:8080/openidm/system?_action=test" [ { "ok": true, "displayName": "LDAP Connector", "connectorRef": { "bundleVersion": "[1.4.0.0,2.0.0.0)", "bundleName": "org.forgerock.openicf.connectors.ldap-connector", "connectorName": "org.identityconnectors.ldap.LdapConnector" }, "objectTypes": [ "__ALL__", "group", "account" ], "config": "config/provisioner.openicf/ldap", "enabled": true, "name": "ldap" } ]
The status of the system is provided by the ok
parameter.
If the connection is available, the value of this parameter is
true
.
To obtain the status for a single system, include the name of the connector in the URL, for example:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ "http://localhost:8080/openidm/system/ldap?_action=test" { "ok": true, "displayName": "LDAP Connector", "connectorRef": { "bundleVersion": "[1.4.0.0,2.0.0.0)", "bundleName": "org.forgerock.openicf.connectors.ldap-connector", "connectorName": "org.identityconnectors.ldap.LdapConnector" }, "objectTypes": [ "__ALL__", "group", "account" ], "config": "config/provisioner.openicf/ldap", "enabled": true, "name": "ldap" }
If there is a problem with the connection, the ok
parameter returns false
, with an indication of the error.
In the following example, the LDAP server named
ldap
, running on localhost:1389
, is
down:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ "http://localhost:8080/openidm/system/ldap?_action=test" { "ok": false, "error": "localhost:1389", "displayName": "LDAP Connector", "connectorRef": { "bundleVersion": "[1.4.0.0,2.0.0.0)", "bundleName": "org.forgerock.openicf.connectors.ldap-connector", "connectorName": "org.identityconnectors.ldap.LdapConnector" }, "objectTypes": [ "__ALL__", "group", "account" ], "config": "config/provisioner.openicf/ldap", "enabled": true, "name": "ldap" }
To test the validity of a connector configuration, use the
testConfig
action and include the configuration in the
command. For example:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --header "Content-Type: application/json" \ --data '{ "name" : "xmlfile", "connectorRef" : { "bundleName" : "org.forgerock.openicf.connectors.xml-connector", "bundleVersion" : "[1.1.0.3,1.2.0.0)", "connectorName" : "org.forgerock.openicf.connectors.xml.XMLConnector" }, "producerBufferSize" : 100, "connectorPoolingSupported" : true, "poolConfigOption" : { "maxObjects" : 10, "maxIdle" : 10, "maxWait" : 150000, "minEvictableIdleTimeMillis" : 120000, "minIdle" : 1 }, "operationTimeout" : { "CREATE" : -1, "TEST" : -1, "AUTHENTICATE" : -1, "SEARCH" : -1, "VALIDATE" : -1, "GET" : -1, "UPDATE" : -1, "DELETE" : -1, "SCRIPT_ON_CONNECTOR" : -1, "SCRIPT_ON_RESOURCE" : -1, "SYNC" : -1, "SCHEMA" : -1 }, "configurationProperties" : { "xsdIcfFilePath" : "samples/sample1/data/resource-schema-1.xsd", "xsdFilePath" : "samples/sample1/data/resource-schema-extension.xsd", "xmlFilePath" : "samples/sample1/data/xmlConnectorData.xml" }, "syncFailureHandler" : { "maxRetries" : 5, "postRetryAction" : "logged-ignore" }, "objectTypes" : { "account" : { "$schema" : "http://json-schema.org/draft-03/schema", "id" : "__ACCOUNT__", "type" : "object", "nativeType" : "__ACCOUNT__", "properties" : { "description" : { "type" : "string", "nativeName" : "__DESCRIPTION__", "nativeType" : "string" }, "firstname" : { "type" : "string", "nativeName" : "firstname", "nativeType" : "string" }, "email" : { "type" : "string", "nativeName" : "email", "nativeType" : "string" }, "_id" : { "type" : "string", "nativeName" : "__UID__" }, "password" : { "type" : "string", "nativeName" : "password", "nativeType" : "string" }, "name" : { "type" : "string", "required" : true, "nativeName" : "__NAME__", "nativeType" : "string" }, "lastname" : { "type" : "string", "required" : true, "nativeName" : "lastname", "nativeType" : "string" }, "mobileTelephoneNumber" : { "type" : "string", "required" : true, "nativeName" : "mobileTelephoneNumber", "nativeType" : "string" }, "securityQuestion" : { "type" : "string", "required" : true, "nativeName" : "securityQuestion", "nativeType" : "string" }, "securityAnswer" : { "type" : "string", "required" : true, "nativeName" : "securityAnswer", "nativeType" : "string" }, "roles" : { "type" : "string", "required" : false, "nativeName" : "roles", "nativeType" : "string" } } } }, "operationOptions" : { } }' \ --request POST \ "http://localhost:8080/openidm/system?_action=testConfig"
If the configuration is valid, the command returns
"ok": true
, for example:
{ "ok": true, "name": "xmlfile" }
If the configuration is not valid, the command returns an error, indicating
the problem with the configuration. For example, the following result is
returned when the LDAP connector configuration is missing a required property
(in this case, the baseContexts
to synchronize):
{ "error": "org.identityconnectors.framework.common.exceptions.ConfigurationException: The list of base contexts cannot be empty", "name": "OpenDJ", "ok": false }
The testConfig
action requires a running OpenIDM instance,
as it uses the REST API, but does not require an active connector instance
for the connector whose configuration you want to test.
13.8. Adding Attributes to Connector Configurations
You can add the attributes of your choice to a connector configuration file. Specifically, if you want to extend the property type configuration of one of the objectTypes, such as account, use the format shown in "Specifying the Supported Object Types".
You can configure connectors to enable provisioning of arbitrary property
level extensions (such as image files) to system resources. For example, if
you want to set up image files such as account avatars, open the appropriate
provisioner file. Look for an account
section similar to:
"account" : { "$schema" : "http://json-schema.org/draft-03/schema", "id" : "__ACCOUNT__", "type" : "object", "nativeType" : "__ACCOUNT__", "properties" : {...
Under properties
, add one of the following code blocks.
The first block works for a single photo encoded as a base64 string. The
second block would address multiple photos encoded in the same way:
"attributeByteArray" : { "type" : "string", "nativeName" : "attributeByteArray", "nativeType" : "JAVA_TYPE_BYTE_ARRAY" },
"attributeByteArrayMultivalue": { "type": "array", "items": { "type": "string", "nativeType": "JAVA_TYPE_BYTE_ARRAY" }, "nativeName": "attributeByteArrayMultivalue" },
Chapter 14. Synchronizing Data Between Resources
One of the core services of OpenIDM is synchronizing identity data between different resources. In this chapter, you will learn about the different types of synchronization, and how to configure OpenIDM's flexible synchronization mechanism.
14.1. Types of Synchronization
Synchronization happens either when OpenIDM receives a change directly, or when OpenIDM discovers a change on an external resource. An external resource can be any system that holds identity data, such as Active Directory, OpenDJ, a CSV file, a JDBC database, and others. OpenIDM connects to external resources by using OpenICF connectors. For more information, see "Connecting to External Resources".
For direct changes to managed objects, OpenIDM immediately synchronizes those changes to all mappings configured to use those objects as their source. A direct change can originate not only as a write request through the REST interface, but also as an update resulting from reconciliation with another resource.
OpenIDM discovers and synchronizes changes from external resources by using reconciliation and liveSync.
OpenIDM synchronizes changes made to its internal repository with external resources by using implicit synchronization.
- Reconciliation
Reconciliation is the process of ensuring that the objects in two different data stores are synchronized. Traditionally, reconciliation applies mainly to user objects, but OpenIDM can reconcile any objects, such as groups, roles, and devices.
In any reconciliation operation, there is a source system (the system that contains the changes) and a target system (the system to which the changes will be propagated). The source and target system are defined in a mapping. OpenIDM can be either the source or the target in a mapping. You can configure multiple mappings for one OpenIDM instance, depending on the external resources to which OpenIDM connects.
To perform reconciliation, OpenIDM analyzes both the source system and the target system, to discover the differences that it must reconcile. Reconciliation can therefore be a heavyweight process. When working with large data sets, finding all changes can be more work than processing the changes.
Reconciliation is, however, thorough. It recognizes system error conditions and catches changes that might be missed by liveSync. Reconciliation therefore serves as the basis for compliance and reporting functionality.
- LiveSync
LiveSync captures the changes that occur on a remote system, then pushes those changes to OpenIDM. OpenIDM uses the defined mappings to replay the changes where they are required; either in the OpenIDM repository, or on another remote system, or both. Unlike reconciliation, liveSync uses a polling system, and is intended to react quickly to changes as they happen.
To perform this polling, liveSync relies on a change detection mechanism on the external resource to determine which objects have changed. The change detection mechanism is specific to the external resource, and can be a time stamp, a sequence number, a change vector, or any other method of recording changes that have occurred on the system. For example, OpenDJ implements a change log that provides OpenIDM with a list of objects that have changed since the last request. Active Directory implements a change sequence number, and certain databases might have a
lastChange attribute.
Note
In the case of OpenDJ, the change log (cn=changelog) can be read only by cn=directory manager by default. If you are configuring liveSync with OpenDJ, the principal that is defined in the LDAP connector configuration must have access to the change log. For information about allowing a regular user to read the change log, see To Allow a User to Read the Change Log in the Administration Guide for OpenDJ.
- Implicit synchronization
Implicit synchronization automatically pushes changes that are made in the OpenIDM internal repository to external systems.
Note that implicit synchronization only synchronizes changed objects to the external data sources; to synchronize a complete data set, you must start with a reconciliation operation. When an object changes, the entire object is synchronized. If you want to synchronize only the attributes that have changed, you can modify the onUpdate script in your mapping to compare attribute values before pushing changes.
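A minimal sketch of such a comparison follows. It assumes that both the source and target objects are in scope for the onUpdate trigger, and the mail attribute is purely illustrative:
{
    "onUpdate" : {
        "type" : "text/javascript",
        "source" : "if (source.mail !== target.mail) { target.mail = source.mail; }"
    }
}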
OpenIDM uses mappings, configured in your project's
conf/sync.json
file, to determine which data to
synchronize, and how that data must be synchronized. You can schedule
reconciliation operations, and the frequency with which OpenIDM polls for
liveSync changes, as described in "Scheduling Tasks and Events".
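Apart from scheduling, you can trigger liveSync manually over REST, which can be useful when testing a change detection configuration. The following example polls once for changes to the account object type of a system named ldap; the system name is an assumption based on the examples in this chapter:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap/account?_action=liveSync"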
OpenIDM logs reconciliation and synchronization operations in the audit logs by default. For information about querying the reconciliation and synchronization logs, see "Querying Audit Logs Over REST".
14.2. Defining Your Data Mapping Model
In general, identity management software implements one of the following data models:
A meta-directory data model, where all data are mirrored in a central repository.
The meta-directory model offers fast access at the risk of getting outdated data.
A virtual data model, where only a minimum set of attributes are stored centrally, and most are loaded on demand from the external resources in which they are stored.
The virtual model guarantees fresh data, but pays for that guarantee in terms of performance.
OpenIDM leaves the data model choice up to you. You determine the right trade-offs for a particular deployment. OpenIDM does not hard-code any particular schema or set of attributes stored in the repository. Instead, you define how external system objects map onto managed objects, and OpenIDM dynamically updates the repository to store the managed object attributes that you configure.
You can, for example, choose to follow the data model defined in the Simple Cloud Identity Management (SCIM) specification. The following object represents a SCIM user:
{ "userName": "james1", "familyName": "Berg", "givenName": "James", "email": [ "james1@example.com" ], "description": "Created by OpenIDM REST.", "password": "asdfkj23", "displayName": "James Berg", "phoneNumber": "12345", "employeeNumber": "12345", "userType": "Contractor", "title": "Vice President", "active": true }
Note
Avoid using the dash character ( -
) in property names,
like last-name
, as dashes in names make JavaScript syntax
more complex. If you cannot avoid the dash, then write
source['last-name']
instead of
source.last-name
in your JavaScript.
14.3. Configuring Synchronization Between Two Resources
This section describes the high-level steps required to set up synchronization between two resources. A basic synchronization configuration involves the following steps:
Set up the connector configuration.
Connector configurations are defined in conf/provisioner-*.json files. One provisioner file must be defined for each external resource to which you are connecting.
Map source objects to target objects.
Mappings are defined in the conf/sync.json file. There is only one sync.json file per OpenIDM instance, but multiple mappings can be defined in that file.
Configure any scripts that are required to check source and target objects, and to manipulate attributes.
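Putting these elements together, a skeletal sync.json has the following structure. The mapping name, resources, and property mapping shown here are illustrative only:
{
    "mappings" : [
        {
            "name" : "systemXmlfileAccounts_managedUser",
            "source" : "system/xmlfile/account",
            "target" : "managed/user",
            "properties" : [
                { "source" : "name", "target" : "userName" }
            ]
        }
    ]
}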
In addition to these configuration elements, OpenIDM stores a
links
table in its repository. The links table maintains a record of relationships established between source and target objects.
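You do not normally manipulate the links table directly, but inspecting it can help when you debug a mapping. A hedged example, assuming that the default repo endpoint and its query-all-ids predefined query are available in your deployment:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/repo/link?_queryId=query-all-ids"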
14.3.1. Setting Up the Connector Configuration
Connector configuration files map external resource objects to OpenIDM
objects, and are described in detail in "Connecting to External Resources".
Connector configuration files are stored in the conf/
directory of your project, and are named
provisioner.resource-name.json
,
where resource-name reflects the connector
technology and the external resource, for example,
openicf-xml
.
You can create and modify connector configurations through the Admin UI or directly in the configuration files, as described in the following sections.
14.3.1.1. Setting up and Modifying Connector Configurations in the Admin UI
The easiest way to set up and modify connector configurations is to use the Admin UI.
To add or modify a connector configuration in the Admin UI:
Log in to the UI (http://localhost:8080/admin) as an administrative user. The default administrative username and password are openidm-admin and openidm-admin.
Select Configure > Connectors.
Click on the connector that you want to modify (if there is an existing connector configuration) or click New Connector to set up a new connector configuration.
14.3.1.2. Editing Connector Configuration Files
A number of sample provisioner files are provided in
/path/to/openidm/samples/provisioners
. To modify
connector configuration files directly, edit one of the sample provisioner
files that corresponds to the resource to which you are connecting.
The following excerpt of an example LDAP connector configuration shows the
name for the connector and two attributes of an account object type. In the
attribute mapping definitions, the attribute name is mapped from the
nativeName
(the attribute name used on the external
resource) to the attribute name that is used in OpenIDM. The
sn
attribute in LDAP is mapped to
lastName
in OpenIDM. The homePhone
attribute is defined as an array, because it can have multiple values:
{ "name": "MyLDAP", "objectTypes": { "account": { "lastName": { "type": "string", "required": true, "nativeName": "sn", "nativeType": "string" }, "homePhone": { "type": "array", "items": { "type": "string", "nativeType": "string" }, "nativeName": "homePhone", "nativeType": "string" } } } }
For OpenIDM to access external resource objects and attributes, the object and its attributes must match the connector configuration. Note that the connector file only maps external resource objects to OpenIDM objects. To construct attributes and to manipulate their values, you use the synchronization mappings file, described in the following section.
14.3.2. Mapping Source Objects to Target Objects
A synchronization mapping specifies a relationship between objects and their attributes in two data stores. A typical attribute mapping, between objects in an external LDAP directory and an internal Managed User data store, is:
"source": "lastName", "target": "sn"
In this case, the lastName
source attribute is mapped
to the sn
(surname) attribute on the target.
The core configuration for OpenIDM synchronization is defined in your
project's synchronization mappings file (conf/sync.json
).
The mappings file contains one or more mappings for every resource that
must be synchronized.
Mappings are always defined from a source resource to a target resource. To configure bidirectional synchronization, you must define two mappings. For example, to configure bidirectional synchronization between an LDAP server and a local repository, you would define the following two mappings:
LDAP Server > Local Repository
Local Repository > LDAP Server
With bidirectional synchronization, OpenIDM includes a
links
property that enables you to reuse the links
established between objects, for both mappings. For more information, see
"Reusing Links Between Mappings".
You can update a mapping while the server is running. To avoid inconsistencies between repositories, do not update a mapping while a reconciliation is in progress for that mapping.
14.3.2.1. Specifying the Resource Mapping
Objects in external resources are specified in a mapping as
system/name/object-type
,
where name is the name used in the connector
configuration file, and object-type is the
object defined in the connector configuration file list of object types.
Objects in OpenIDM's internal repository are specified in the mapping as
managed/object-type
, where
object-type is defined in your project's
managed objects configuration file (conf/managed.json
).
External resources, and OpenIDM managed objects, can be the
source or the target in a
mapping. By convention, the mapping name is a string of the form
source_target
,
as shown in the following example:
{ "mappings": [ { "name": "systemLdapAccounts_managedUser", "source": "system/ldap/account", "target": "managed/user", "properties": [ { "source": "lastName", "target": "sn" }, { "source": "telephoneNumber", "target": "telephoneNumber" }, { "target": "phoneExtension", "default": "0047" }, { "source": "email", "target": "mail", "comment": "Set mail if non-empty.", "condition": { "type": "text/javascript", "source": "(object.email != null)" } }, { "source": "", "target": "displayName", "transform": { "type": "text/javascript", "source": "source.lastName +', ' + source.firstName;" } }, { "source" : "uid", "target" : "userName", "condition" : "/linkQualifier eq \"user\"" } }, ] } ] }
In this example, the name of the source is
the external resource (ldap
), and the target is
OpenIDM's user repository, specifically managed/user
.
The properties
defined in the mapping reflect attribute
names that are defined in the OpenIDM configuration. For example, the
source attribute uid
is defined in the
ldap
connector configuration file, rather than on the
external resource itself.
14.3.2.1.1. Specifying Resource Mapping in the Admin UI
You can also configure synchronization mappings in the Admin UI. To do so,
navigate to http://localhost:8080/admin
, and click
Configure > Mappings. The Admin UI serves as a front end to OpenIDM
configuration files, so the changes you make to mappings in the Admin UI
are written to your project's conf/sync.json
file.
14.3.2.2. Creating Attributes in a Mapping
You can use a mapping to create attributes on the
target resource. In the preceding example, the mapping creates a
phoneExtension
attribute with a default value of
0047
on the target object.
In other words, the default
property specifies a value
to assign to the attribute on the target object. Before OpenIDM determines
the value of the target attribute, it first evaluates any applicable
conditions, followed by any transformation scripts. If the
source
property and the transform
script yield a null value, OpenIDM then applies the default value, on both create and update actions. The default value overrides the target value, if one exists.
To set up attributes with default values in the Admin UI:
Select Configure > Mappings, and click on the Mapping you want to edit.
Click on the Target Property that you want to create (
phoneExtension
in the previous example), select the Default Values tab, and enter a default value for that property mapping.
14.3.2.3. Transforming Attributes in a Mapping
Use a mapping to define attribute transformations during synchronization.
In the following sample mapping excerpt, the value of the
displayName
attribute on the target is set using a
combination of the lastName
and
firstName
attribute values from the source:
{ "source": "", "target": "displayName", "transform": { "type": "text/javascript", "source": "source.lastName +', ' + source.firstName;" } },
For transformations, the source
property is optional.
However, a source object is only available when you specify the
source
property. Therefore, in order to use
source.lastName
and source.firstName
to calculate the displayName
, the example specifies
"source" : ""
.
If you set "source" : ""
(not specifying an attribute),
the entire object is regarded as the source, and you must include the
attribute name in the transformation script. For example, to transform the
source mail address to lower case, your script would be
source.mail.toLowerCase();
. If you do specify a source
attribute (for example "source" : "mail"
), just that
attribute is regarded as the source. In this case, the transformation
script would be source.toLowerCase();
.
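To make the distinction concrete, the following two property mappings both produce a lowercase value on the target mail attribute. The first treats the entire object as the source; the second names the mail attribute directly:
{ "source": "", "target": "mail", "transform": { "type": "text/javascript", "source": "source.mail.toLowerCase();" } }
{ "source": "mail", "target": "mail", "transform": { "type": "text/javascript", "source": "source.toLowerCase();" } }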
To set up a transformation script in the Admin UI:
Select Configure > Mappings, and select the Mapping.
Select the line with the target attribute whose value you want to set.
On the Transformation Script tab, select
Javascript
or Groovy, and enter the transformation as an Inline Script
or specify the path to the file containing your transformation script.
14.3.2.4. Using Scriptable Conditions in a Mapping
By default, OpenIDM synchronizes all attributes in a mapping. To facilitate more complex relationships between source and target objects, you can define conditions for which OpenIDM maps certain attributes. OpenIDM supports two types of mapping conditions:
Scriptable conditions, in which an attribute is mapped only if the defined script evaluates to
true
Condition filters, declarative filters that set the conditions under which the attribute is mapped. Condition filters can include a link qualifier, which identifies the type of relationship between the source object and multiple target objects. For more information, see "Mapping a Single Source Object to Multiple Target Objects".
Examples of condition filters include:
"condition": "/object/country eq 'France'"
- only map the attribute if the object'scountry
attribute equalsFrance
."condition": "/object/password pr"
- only map the attribute if the object'spassword
attribute is present."/linkQualifier eq 'admin'"
- only map the attribute if the link between this source and target object is of typeadmin
.
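In a mapping, a condition filter is set as the condition property of the property mapping to which it applies. A brief sketch based on the first filter above, with an illustrative country attribute:
{ "source": "country", "target": "country", "condition": "/object/country eq 'France'" }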
To set up mapping conditions in the Admin UI, select Configure > Mappings. Click the mapping for which you want to configure conditions. On the Properties tab, click on the attribute that you want to map, then select the Conditional Updates tab.
Configure the filtered condition on the Condition Filter
tab, or a scriptable condition on the Script
tab.
Scriptable conditions create mapping logic, based on the result of the
condition script. If the script does not return true
,
OpenIDM does not manipulate the target attribute during a synchronization
operation.
In the following excerpt, the value of the target mail
attribute is set to the value of the source email
attribute only if the source attribute is not empty:
{ "target": "mail", "comment": "Set mail if non-empty.", "source": "email", "condition": { "type": "text/javascript", "source": "(object.email != null)" } ...
Tip
You can add comments to JSON files. While this example includes a
property named comment
, you can use any unique
property name, as long as it is not used elsewhere in the server.
OpenIDM ignores unknown property names in JSON configuration files.
14.3.2.5. Mapping a Single Source Object to Multiple Target Objects
In certain cases, you might have a single object in a resource that maps to
more than one object in another resource. For example, assume that managed
user, bjensen, has two distinct accounts in an LDAP directory: an
employee
account (under
uid=bjensen,ou=employees,dc=example,dc=com
) and a
customer
account (under
uid=bjensen,ou=customers,dc=example,dc=com
). You want to
map both of these LDAP accounts to the same managed user account.
OpenIDM uses link qualifiers to manage this one-to-many scenario. To map a single source object to multiple target objects, you indicate how the source object should be linked to the target object by defining link qualifiers. A link qualifier is essentially a label that identifies the type of link (or relationship) between each object.
In the previous example, you would define two link qualifiers that enable you to link both of bjensen's LDAP accounts to her managed user object, as shown in the following diagram:
Note from this diagram that the link qualifier is a property of the link between the source and target object, and not a property of the source or target object itself.
Link qualifiers are defined as part of the mapping (in your project's
conf/sync.json
file). Each link qualifier must be
unique within the mapping. If no link qualifier is specified (when only one
possible matching target object exists), OpenIDM uses a default link
qualifier with the value default
.
Link qualifiers can be defined as a static list, or dynamically, using a
script. The following excerpt from a sample mapping shows the two static
link qualifiers, employee
and customer
,
described in the previous example:
{ "mappings": [ { "name": "managedUser_systemLdapAccounts", "source": "managed/user", "target": "system/MyLDAP/account", "linkQualifiers" : [ "employee", "customer" ], ...
The list of static link qualifiers is evaluated for every source record. That is, every reconciliation processes all synchronization operations, for each link qualifier, in turn.
A dynamic link qualifier script returns a list of link qualifiers applicable for each source record. For example, suppose you have two types of managed users - employees and contractors. For employees, a single managed user (source) account can correlate with three different LDAP (target) accounts - employee, customer, and manager. For contractors, a single managed user account can correlate with only two separate LDAP accounts - contractor, and customer. The possible linking situations for this scenario are shown in the following diagram:
In this scenario, you could write a script to generate a dynamic list of
link qualifiers, based on the managed user type. For employees, the script
would return [employee, customer, manager]
in its list of
possible link qualifiers. For contractors, the script would return
[contractor, customer]
in its list of possible link
qualifiers. A reconciliation operation would then only process the list of
link qualifiers applicable to each source object.
If your source resource includes a large number of records, you should use a dynamic link qualifier script instead of a static list of link qualifiers. Generating the list of applicable link qualifiers dynamically avoids unnecessary additional processing for those qualifiers that will never apply to specific source records. Synchronization performance is therefore improved for large source data sets.
You can include a dynamic link qualifier script inline (using the
source
property), or by referencing a JavaScript or
Groovy script file (using the file
property). The
following link qualifier script sets up the dynamic link qualifier lists
described in the previous example:
{ "mappings": [ { "name": "managedUser_systemLdapAccounts", "source": "managed/user", "target": "system/MyLDAP/account", "linkQualifiers" : { "type" : "text/javascript", "globals" : { }, "source" : "if(source.type === 'employee'){['employee', 'customer', 'manager']} else { ['contractor', 'customer'] }" } ...
To reference an external link qualifier script, provide a link to the file
in the file
property:
{ "mappings": [ { "name": "managedUser_systemLdapAccounts", "source": "managed/user", "target": "system/MyLDAP/account", "linkQualifiers" : { "type" : "text/javascript", "file" : "script/linkQualifiers.js" } ...
Dynamic link qualifier scripts must return all valid link qualifiers when
the returnAll
global variable is true. The
returnAll
variable is used during the target
reconciliation phase to check whether there are any target records that are
unassigned, for each known link qualifier. For a list of the variables
available to a dynamic link qualifier script, see
"Script Triggers Defined in sync.json
".
On their own, link qualifiers have no functionality. However, they can be referenced by various aspects of reconciliation to manage the situations where a single source object maps to multiple target objects. The following examples show how link qualifiers can be used in reconciliation operations:
Use link qualifiers during object creation, to create multiple target objects per source object.
The following excerpt of a sample mapping defines a transformation script that generates the value of the
dn
attribute on an LDAP system. If the link qualifier is employee, the value of the target dn is set to "uid=userName,ou=employees,dc=example,dc=com". If the link qualifier is customer, the value of the target dn is set to "uid=userName,ou=customers,dc=example,dc=com". The reconciliation operation iterates through the link qualifiers for each source record. In this case, two LDAP objects, with different dns, would be created for each managed user object.
{ "target" : "dn", "transform" : { "type" : "text/javascript", "globals" : { }, "source" : "if (linkQualifier === 'employee') { 'uid=' + source.userName + ',ou=employees,dc=example,dc=com'; } else if (linkQualifier === 'customer') { 'uid=' + source.userName + ',ou=customers,dc=example,dc=com'; }" }, "source" : "" }
Use link qualifiers in conjunction with a correlation query that assigns a link qualifier based on the values of an existing target object.
During the source synchronization, OpenIDM queries the target system for every source record and link qualifier, to check if there are any matching target records. If a match is found, the sourceId, targetId, and linkQualifier are all saved as the link.
The following excerpt of a sample mapping shows the two link qualifiers described previously (employee and customer). The correlation query first searches the target system for the employee link qualifier. If a target object matches the query, based on the value of its dn attribute, OpenIDM creates a link between the source object and that target object and assigns the employee link qualifier to that link. This process is repeated for all source records. Then, the correlation query searches the target system for the customer link qualifier. If a target object matches that query, OpenIDM creates a link between the source object and that target object and assigns the customer link qualifier to that link.
"linkQualifiers" : ["employee", "customer"], "correlationQuery" : [ { "linkQualifier" : "employee", "type" : "text/javascript", "source" : "var query = {'_queryFilter': 'dn co \"uid=' + source.userName + ',ou=employees\"'}; query;" }, { "linkQualifier" : "customer", "type" : "text/javascript", "source" : "var query = {'_queryFilter': 'dn co \"uid=' + source.userName + ',ou=customers\"'}; query;" } ] ...
For more information about correlation queries, see "Correlating Source Objects With Existing Target Objects".
Use link qualifiers during policy validation to apply different policies based on the link type.
The following excerpt of a sample
sync.json
file shows two link qualifiers, user and test. Depending on the link qualifier, different actions are taken when the target record is ABSENT:
{ "mappings" : [ { "name" : "systemLdapAccounts_managedUser", "source" : "system/ldap/account", "target" : "managed/user", "linkQualifiers" : [ "user", "test" ], "properties" : [ ... "policies" : [ { "situation" : "CONFIRMED", "action" : "IGNORE" }, { "situation" : "FOUND", "action" : "UPDATE" }, { "condition" : "/linkQualifier eq \"user\"", "situation" : "ABSENT", "action" : "CREATE", "postAction" : { "type" : "text/javascript", "source" : "java.lang.System.out.println('Created user: ');" } }, { "condition" : "/linkQualifier eq \"test\"", "situation" : "ABSENT", "action" : "IGNORE", "postAction" : { "type" : "text/javascript", "source" : "java.lang.System.out.println('Ignored user: ');" } }, ...
With this sample mapping, the synchronization operation creates an object in the target system only if the potential match is assigned a
user
link qualifier. If the match is assigned a test
qualifier, no target object is created. In this way, the process avoids creating duplicate test-related accounts in the target system.
Tip
To set up link qualifiers in the Admin UI select Configure > Mappings. Select a mapping, and click Properties > Link Qualifiers.
For an example that uses link qualifiers in conjunction with roles, see "The Multi-Account Linking Sample" in the Samples Guide.
14.3.2.6. Correlating Source Objects With Existing Target Objects
When OpenIDM creates an object on a target system in a synchronization process, it also creates a link between the source and target object. OpenIDM then uses that link to determine the object's synchronization situation during later synchronization operations. For a list of synchronization situations, see "How OpenIDM Assesses Synchronization Situations".
With every synchronization operation, OpenIDM can correlate existing source and target objects. Correlation matches source and target objects, based on the results of a query or script, and creates links between matched objects.
Correlation queries and correlation scripts are defined in your project's
mapping (conf/sync.json
) file. Each query or script is
specific to the mapping for which it is configured. You can also configure
correlation by using the Admin UI. Select Configure > Mappings, and click
on the mapping for which you want to correlate. On the Association tab,
expand Association Rules, and select Correlation Queries or Correlation
Script from the list.
The following sections describe how to write correlation queries and scripts.
14.3.2.6.1. Writing Correlation Queries
OpenIDM processes a correlation query by constructing a query map. The content of the query is generated dynamically, using values from the source object. For each source object, a new query is sent to the target system, using (possibly transformed) values from the source object for its execution.
Queries are run against target resources, either managed or system objects, depending on the mapping. Correlation queries on system objects access the connector, which executes the query on the external resource.
Correlation queries can be expressed using a query filter
(_queryFilter
), a predefined query
(_queryId
), or a native query expression
(_queryExpression
). For more information on these query
types, see "Defining and Calling Queries". The synchronization process executes
the correlation query to search through the target system for objects that
match the current source object.
The preferred syntax for a correlation query is a filtered query, using the
_queryFilter
keyword. Filtered queries should work in the
same way on any backend, whereas other query types are generally specific to
the backend. Predefined queries (using _queryId
) and
native queries (using _queryExpression
) can also be
used for correlation queries on managed resources. Note that
system
resources do not support native queries or
predefined queries other than query-all-ids
(which serves
no purpose in a correlation query).
To configure a correlation query, define a script whose source returns a
query that uses the _queryFilter
,
_queryId
, or _queryExpression
keyword.
For example:
For a
_queryId
, the value is the named query. Named parameters in the query map are expected by that query:
{'_queryId' : 'for-userName', 'uid' : source.name}
For a _queryFilter, the value is the abstract filter string:
{ "_queryFilter" : "uid eq \"" + source.userName + "\"" }
For a _queryExpression, the value is the system-specific query expression, such as raw SQL:
{'_queryExpression': 'select * from managed_user where givenName = \"' + source.firstname + '\"' }
Caution
Using a query expression in this way is not recommended as it exposes your system to SQL injection exploits.
Using Filtered Queries to Correlate Objects
For filtered queries, the script that is defined or referenced in the
correlationQuery
property must return an object with the
following elements:
The element that is being compared on the target object, for example,
uid
. The element on the target object is not necessarily a single attribute. Your query filter can be simple or complex; valid query filters range from a single operator to an entire boolean expression tree.
If the target object is a system object, this attribute must be referred to by its OpenIDM name rather than its OpenICF
nativeName
. For example, given the following provisioner configuration excerpt, the attribute to use in the correlation query would be uid and not __NAME__:
"uid" : { "type" : "string", "nativeName" : "__NAME__", "required" : true, "nativeType" : "string" } ...
The value to search for in the query.
This value is generally based on one or more values from the source object. However, it does not have to match the value of a single source object property. You can define how your script uses the values from the source object to find a matching record in the target system.
You might use a transformation of a source object property, such as
toUpperCase()
. You can concatenate that output with other strings or properties. You can also use this value to call an external REST endpoint, and redirect the response to the final "value" portion of the query.
The following correlation query matches source and target objects if the
value of the uid
attribute on the target is the same as
the userName
attribute on the source:
"correlationQuery" : { "type" : "text/javascript", "source" : "var qry = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; qry" },
The query can return zero or more objects. The situation that OpenIDM assigns to the source object depends on the number of target objects that are returned, and on the presence of any link qualifiers in the query. For information about synchronization situations, see "How OpenIDM Assesses Synchronization Situations". For information about link qualifiers, see "Mapping a Single Source Object to Multiple Target Objects".
Using Predefined Queries to Correlate Objects
For correlation queries on managed objects, you can
use a query that has been predefined in the database table configuration
file for the repository, either conf/repo.jdbc.json
or
conf/repo.orientdb.json
. You reference the query ID in
your project's conf/sync.json
file.
The following example shows a query defined in the OrientDB repository
configuration (conf/repo.orientdb.json
) that can be
used as the basis for a correlation query:
"for-userName" : "SELECT * FROM ${unquoted:_resource} WHERE userName = ${uid} SKIP ${unquoted:_pagedResultsOffset} LIMIT ${unquoted:_pageSize}"
By default, a ${value}
token replacement is assumed to be
a quoted string. If the value is not a quoted string, use the
unquoted:
prefix, as shown above.
You would call this query in the mapping (sync.json
) file
as follows:
{ "correlationQuery": { "type": "text/javascript", "source": "var qry = {'_queryId' : 'for-userName', 'uid' : source.name}; qry;" } }
In this correlation query, the _queryId
property
value (for-userName
) matches the name of the query
specified in conf/repo.orientdb.json
. The
source.name
value replaces ${uid}
in
the query. OpenIDM replaces ${unquoted:_resource}
in
the query with the name of the table that holds managed objects.
Using the Expression Builder to Create Correlation Queries
OpenIDM provides a declarative correlation option, the expression builder, that makes it easier to configure correlation queries.
The easiest way to use the expression builder to create a correlation query is through the Admin UI:
Select Configure > Mappings and select the mapping for which you want to configure a correlation query.
On the Association tab, expand the Association Rules item and select Correlation Queries.
Click Add Correlation query.
In the Correlation Query window, select a link qualifier.
If you do not need to correlate multiple potential target objects per source object, select the
default
link qualifier. For more information about linking to multiple target objects, see "Mapping a Single Source Object to Multiple Target Objects".
Select Expression Builder, and add or remove the fields whose values in the source and target must match.
The following image shows how you can use the expression builder to build a correlation query for a mapping from managed/user to system/ldap/accounts objects. The query will create a match between the source (managed) object and the target (LDAP) object if the value of the givenName or the telephoneNumber of those objects is the same.
Click Submit to exit the Correlation Query pop-up, then click Save.
The correlation query created in the previous steps displays as follows in
the mapping configuration (sync.json
):
"correlationQuery" : [ { "linkQualifier" : "default", "expressionTree" : { "any" : [ "givenName", "telephoneNumber" ] }, "mapping" : "managedUser_systemLdapAccounts", "type" : "text/javascript", "file" : "ui/correlateTreeToQueryFilter.js" } ]
14.3.2.6.2. Writing Correlation Scripts
If you need a more powerful correlation mechanism than a simple query can provide, you can write a correlation script with additional logic. Correlation scripts are generally more complex than correlation queries and impose no restrictions on the methods used to find matching objects. A correlation script must execute a query and return the result of that query.
The result of a correlation script is a list of maps, each of which contains
a candidate _id
value. If no match is found, the script
returns a zero-length list. If exactly one match is found, the script
returns a single-element list. If there are multiple ambiguous matches, the
script returns a list with multiple elements. There is no assumption that
the matching target record or records can be found by a simple query on the
target system. All of the work necessary to find matching records is left to
the script.
In general, a correlation query should meet the requirements of most deployments. Correlation scripts can be useful, however, if your query needs extra processing, such as fuzzy-logic matching or out-of-band verification with a third-party service over REST.
The following example shows a correlation script that uses link qualifiers.
The script returns resultData.result
- a list of maps,
each of which has an _id
entry. These entries will be the
values that are used for correlation.
(function () { var query, resultData; switch (linkQualifier) { case "test": logger.info("linkQualifier = test"); query = {'_queryFilter': 'uid eq \"' + source.userName + '-test\"'}; break; case "user": logger.info("linkQualifier = user"); query = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; break; case "default": logger.info("linkQualifier = default"); query = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; break; default: logger.info("No linkQualifier provided."); break; } resultData = openidm.query("system/ldap/account", query); logger.info("found " + resultData.result.length + " results for link qualifier " + linkQualifier); for (var i = 0; i < resultData.result.length; i++) { logger.info("found target: " + resultData.result[i]._id); } return resultData.result; } ());
To configure a correlation script in the Admin UI, follow these steps:
Select Configure > Mappings and select the mapping for which you want to configure the correlation script.
On the Association tab, expand the Association Rules item and select Correlation Script from the list.
Select a script type (either JavaScript or Groovy) and either enter the script source in the Inline Script box, or specify the path to a file that contains the script.
To create a correlation script, use the details from the source object to find the matching record in the target system. If you are using link qualifiers to match a single source record to multiple target records, you must also use the value of the
linkQualifier
variable within your correlation script to find the target ID that applies for that qualifier.
Click Save to save the script as part of the mapping.
14.3.3. Filtering Synchronized Objects
By default, OpenIDM synchronizes all objects that match those defined in
the connector configuration for the resource. Many connectors allow you
to limit the scope of objects that the connector accesses. For example,
the LDAP connector allows you to specify base DNs and LDAP filters so
that you do not need to access every entry in the directory. You can also
filter the source or target objects that are included in a
synchronization operation. To apply these filters, use the
validSource
, validTarget
, or
sourceCondition
properties in your mapping:
validSource
A script that determines if a source object is valid to be mapped. The script yields a boolean value:
true
indicates that the source object is valid; false
can be used to defer mapping until some condition is met. In the root scope, the source object is provided in the"source"
property. If the script is not specified, then all source objects are considered valid:{ "validSource": { "type": "text/javascript", "source": "source.ldapPassword != null" } }
validTarget
A script used during the second phase of reconciliation that determines if a target object is valid to be mapped. The script yields a boolean value:
true
indicates that the target object is valid; false
indicates that the target object should not be included in reconciliation. In the root scope, the target object is provided in the
property. If the script is not specified, then all target objects are considered valid for mapping:{ "validTarget": { "type": "text/javascript", "source": "target.employeeType == 'internal'" } }
sourceCondition
The
sourceCondition
element defines an additional filter that must be met for a source object's inclusion in a mapping.This condition works like a
validSource
script. Its value can be either a queryFilter string, or a script configuration.
sourceCondition is used principally to specify that a mapping applies only to a particular role or entitlement.
sourceCondition
restricts synchronization to those user objects whose account status isactive
:{ "mappings": [ { "name": "managedUser_systemLdapAccounts", "source": "managed/user", "sourceCondition": "/source/accountStatus eq \"active\"", ... } ] }
During synchronization, your scripts and filters have access to a
source
object and a target
object.
Examples already shown in this section use source.attributeName
to retrieve attributes from the
source objects. Your scripts can also write to target attributes using
target.attributeName
syntax:
{ "onUpdate": { "type": "text/javascript", "source": "if (source.email != null) {target.mail = source.email;}" } }
In addition, the sourceCondition
filter has the
linkQualifier
variable in its scope.
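For example, a hedged sourceCondition that applies a mapping only to objects linked with the employee link qualifier might look like this:
"sourceCondition" : "/linkQualifier eq \"employee\""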
For more information about scripting, see "Scripting Reference".
14.3.4. Configuring Synchronization Filters With User Preferences
For all regular users (other than openidm-admin
), you
can set up preferences, such as those related to marketing and news updates.
You can then use those preferences as a filter when reconciling users to a
target repository.
OpenIDM includes default user preferences defined for the managed user
object, available in the Admin UI and configured in the
managed.json
file.
14.3.4.1. Configuring End User Preferences
In the default project, common marketing preference options are included
for the managed user object. To find these preferences in the Admin UI,
select Configure > Managed Objects and select the User managed object.
Under the Preferences tab, you'll see keys and descriptions. You can also
see these preferences in the managed.json
file,
illustrated here:
"preferences" : { "title" : "Preferences", "viewable" : true, "searchable" : false, "userEditable" : true, "type" : "object", "properties" : { "updates" : { "description" : "Send me news and updates", "type" : "boolean" }, "marketing" : { "description" : "Send me special offers and services", "type" : "boolean" } }, "order" : [ "updates", "marketing" ], "required" : [ ] },
14.3.4.2. Reviewing Preferences as an End User
When regular users log into the self-service UI, they'll see the preferences described in the preceding section, "Configuring End User Preferences". To review those preferences, log into the end user UI and select Profile > Preferences.
End users who accept these preferences get the following entries in their managed user data:
"preferences" : { "updates" : true, "marketing" : true },
You can configure reconciliation to validate users who have chosen to accept the noted preferences.
14.3.4.3. User Preferences and Reconciliation
You can configure user preferences as a filter for reconciliation. For example, if some of your users do not want marketing emails, you can filter those users out of any reconciliation operation.
To configure user preferences as a filter, log into the Admin UI.
Select Configure > Mappings. Choose a mapping.
Under the Association tab, select Individual Record Validation.
Based on the options in the Valid Source drop-down list, you can select Validate based on user preferences. Users who have selected a preference such as Send me special offers will then be reconciled from the source to the target repository.
Note
What OpenIDM does during this reconciliation depends on the policy associated with the UNQUALIFIED situation for a validSource. The default action is to delete the target object (user). For more information, see "How OpenIDM Assesses Synchronization Situations".
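If deleting the target object is too drastic for your deployment, you can override that policy in the mapping. For example, the following excerpt, which follows the policy format shown later in this chapter, unlinks the objects instead:
{ "situation" : "UNQUALIFIED", "action" : "UNLINK" }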
Alternatively, you can edit the sync.json
file
directly. The following code block includes
preferences
as conditions to define a
validSource
on an individual record validation.
OpenIDM applies these conditions at the next reconciliation.
"validSource" : { "type" : "text/javascript", "globals" : { "preferences" : [ "updates", "marketing" ] }, "file" : "ui/preferenceCheck.js" }, "validTarget" : { "type" : "text/javascript", "globals" : { }, "source" : "" }
14.3.5. Preventing Accidental Deletion of a Target System
If a source resource is empty, the default behavior is to exit without failure and to log a warning similar to the following:
2015-06-05 10:41:18:918 WARN Cannot reconcile from an empty data source, unless allowEmptySourceSet is true.
The reconciliation summary is also logged in the reconciliation audit log.
This behavior prevents reconciliation operations from accidentally deleting everything in a target resource. In the event that a source system is unavailable but erroneously reports its status as up, the absence of source objects should not result in objects being removed on the target resource.
When you do want reconciliations of an empty source
resource to proceed, override the default behavior by setting the
allowEmptySourceSet
property to true
in the mapping. For example:
{ "mappings" : [ { "name" : "systemXmlfileAccounts_managedUser", "source" : "system/xmlfile/account", "allowEmptySourceSet" : true, ...
When an empty source is reconciled, the target is wiped out.
14.3.5.1. Preventing Accidental Deletion in the Admin UI
To change the allowEmptySourceSet
option in the Admin UI,
choose Configure > Mappings. Select the desired mapping. In the Advanced
tab, enable or disable the following option:
Allow Reconciliations From an Empty Source
14.4. Constructing and Manipulating Attributes With Scripts
OpenIDM provides a number of script hooks to construct
and manipulate attributes. These scripts can be triggered during various
stages of the synchronization process, and are defined as part of the
mapping, in the sync.json
file.
The scripts can be triggered when a managed or system object is created
(onCreate
), updated (onUpdate
), or
deleted (onDelete
). Scripts can also be triggered when a
link is created (onLink
) or removed
(onUnlink
).
In the default synchronization mapping, changes are always written to target objects, not to source objects. However, you can explicitly include a call to an action that should be taken on the source object within the script.
Note
The onUpdate
script is always
called for an UPDATE situation, even if the synchronization process
determines that there is no difference between the source and target
objects, and that the target object will not be updated.
If, subsequent to the onUpdate
script running, the
synchronization process determines that the target value to set is the same
as its existing value, the change is prevented from synchronizing to the
target.
The following sample extract of a sync.json
file derives
a DN for an LDAP entry when the entry is created in the internal repository:
{ "onCreate": { "type": "text/javascript", "source": "target.dn = 'uid=' + source.uid + ',ou=people,dc=example,dc=com'" } }
14.5. Advanced Use of Scripts in Mappings
"Constructing and Manipulating Attributes With Scripts" shows how to manipulate attributes with scripts when objects are created and updated. You might want to trigger scripts in response to other synchronization actions. For example, you might not want OpenIDM to delete a managed user directly when an external account record is deleted, but instead unlink the objects and deactivate the user in another resource. (Alternatively, you might delete the object in OpenIDM but nevertheless execute a script.) The following example shows a more advanced mapping configuration that exposes the script hooks available during synchronization.
{ "mappings": [ { "name": "systemLdapAccount_managedUser", "source": "system/ldap/account", "target": "managed/user", "validSource": { "type": "text/javascript", "file": "script/isValid.js" }, "correlationQuery" : { "type" : "text/javascript", "source" : "var map = {'_queryFilter': 'uid eq \"' + source.userName + '\"'}; map;" }, "properties": [ { "source": "uid", "transform": { "type": "text/javascript", "source": "source.toLowerCase()" }, "target": "userName" }, { "source": "", "transform": { "type": "text/javascript", "source": "if (source.myGivenName) {source.myGivenName;} else {source.givenName;}" }, "target": "givenName" }, { "source": "", "transform": { "type": "text/javascript", "source": "if (source.mySn) {source.mySn;} else {source.sn;}" }, "target": "familyName" }, { "source": "cn", "target": "fullname" }, { "comment": "Multi-valued in LDAP, single-valued in AD. Retrieve first non-empty value.", "source": "title", "transform": { "type": "text/javascript", "file": "script/getFirstNonEmpty.js" }, "target": "title" }, { "condition": { "type": "text/javascript", "source": "var clearObj = openidm.decrypt(object); ((clearObj.password != null) && (clearObj.ldapPassword != clearObj.password))" }, "transform": { "type": "text/javascript", "source": "source.password" }, "target": "__PASSWORD__" } ], "onCreate": { "type": "text/javascript", "source": "target.ldapPassword = null; target.adPassword = null; target.password = null; target.ldapStatus = 'New Account'" }, "onUpdate": { "type": "text/javascript", "source": "target.ldapStatus = 'OLD'" }, "onUnlink": { "type": "text/javascript", "file": "script/triggerAdDisable.js" }, "policies": [ { "situation": "CONFIRMED", "action": "UPDATE" }, { "situation": "FOUND", "action": "UPDATE" }, { "situation": "ABSENT", "action": "CREATE" }, { "situation": "AMBIGUOUS", "action": "EXCEPTION" }, { "situation": "MISSING", "action": "EXCEPTION" }, { "situation": "UNQUALIFIED", "action": "UNLINK" }, { "situation": "UNASSIGNED", "action": "EXCEPTION" } ] } ] }
The following list shows the properties that you can use as hooks in mapping configurations to call scripts:
- Triggered by Situation
onCreate, onUpdate, onDelete, onLink, onUnlink
- Object Filter
validSource, validTarget
- Correlating Objects
correlationQuery
- Triggered on Reconciliation
result
- Scripts Inside Properties
condition, transform
Your scripts can get data from any connected system at any time by using the
openidm.read(id)
function, where id
is
the identifier of the object to read.
The following example reads a managed user object from the repository:
repoUser = openidm.read("managed/user/ddoe");
The following example reads an account from an external LDAP resource:
externalAccount = openidm.read("system/ldap/account/uid=ddoe,ou=People,dc=example,dc=com");
Note that the query targets a DN rather than a UID as it did in the previous
example. The attribute that is used for the _id
is defined
in the connector configuration file and, in this example, is set to
"uidAttribute" : "dn"
. Although it is possible to use a DN
(or any unique attribute) for the _id
, as a best practice,
you should use an attribute that is both unique and immutable.
14.6. Reusing Links Between Mappings
When two mappings synchronize the same objects bidirectionally, use the
links
property in one mapping to have OpenIDM use the
same internally managed link for both mappings. If you do not specify a
links
property, OpenIDM maintains a separate link for
each mapping.
The following excerpt shows two mappings, one from MyLDAP accounts to
managed users, and another from managed users to MyLDAP accounts. In the
second mapping, the links
property tells OpenIDM to reuse
the links created in the first mapping, rather than create new links:
{ "mappings": [ { "name": "systemMyLDAPAccounts_managedUser", "source": "system/MyLDAP/account", "target": "managed/user" }, { "name": "managedUser_systemMyLDAPAccounts", "source": "managed/user", "target": "system/MyLDAP/account", "links": "systemMyLDAPAccounts_managedUser" } ] }
14.7. Managing Reconciliation
Reconciliation is the synchronization of objects between two data stores. You
can trigger, cancel, and monitor reconciliation operations over REST, using
the REST endpoint http://localhost:8080/openidm/recon
. You
can also perform most of these actions through the Admin UI.
14.7.1. Triggering a Reconciliation
The following example triggers a reconciliation operation over REST based
on the systemLdapAccounts_managedUser
mapping. The
mapping is defined in the file conf/sync.json
:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ "http://localhost:8080/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser"
By default, a reconciliation run ID is returned immediately when the reconciliation operation is initiated. Clients can make subsequent calls to the reconciliation service, using this reconciliation run ID to query its state and to call operations on it. For an example, see "Obtaining the Details of a Reconciliation".
The reconciliation run initiated previously would return something similar to the following:
{"_id":"9f4260b6-553d-492d-aaa5-ae3c63bd90f0-14","state":"ACTIVE"}
To complete the reconciliation operation before the reconciliation run ID is
returned, set the waitForCompletion
property to
true
when the reconciliation is initiated:
$ curl \ --header "X-OpenIDM-Username: openidm-admin" \ --header "X-OpenIDM-Password: openidm-admin" \ --request POST \ "http://localhost:8080/openidm/recon?_action=recon&mapping=systemLdapAccounts_managedUser&waitForCompletion=true"
14.7.1.1. Triggering a Reconciliation in the Admin UI
You can also trigger this reconciliation through the Admin UI. Select Configure > Mappings. In the mapping of your choice select Reconcile.
If you're reconciling a large number of items, the Admin UI displays a message similar to the following, possibly including the number of entries reconciled and the total number of entries.
In progress: reconciling source entries
Note
In the Admin UI, if you select Cancel Reconciliation
before it is complete, you'll have to start the process again.
14.7.2. Canceling a Reconciliation
With a REST call, you can cancel a reconciliation in progress, by specifying the reconciliation run ID. The following REST call cancels the reconciliation run initiated in the previous section:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/recon/0890ad62-4738-4a3f-8b8e-f3c83bbf212e?_action=cancel"
The output for a reconciliation cancellation request is similar to the following:
{ "status":"SUCCESS", "action":"cancel", "_id":"0890ad62-4738-4a3f-8b8e-f3c83bbf212e" }
If the reconciliation run is waiting for completion before its ID is returned, obtain the reconciliation run ID from the list of active reconciliations, as described in the following section.
14.7.2.1. Canceling a Reconciliation in the Admin UI
In the Admin UI, you can cancel a reconciliation run in progress. When you select Configure > Mappings, and select Reconcile from a mapping, the Cancel Reconciliation button appears while the reconciliation is in progress.
14.7.3. Listing a History of Reconciliations
Display a list of reconciliation processes that have completed, and those that are in progress, by running a RESTful GET on "http://localhost:8080/openidm/recon".
The following example displays all reconciliation runs:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/recon"
The output is similar to the following, with one item for each reconciliation run:
{ "reconciliations": [ { "ended": "2014-03-06T06:14:11.845Z", "_id": "4286510e-986a-4521-bfa4-8cd1e039a7f5", "mapping": "systemLdapAccounts_managedUser", "state": "SUCCESS", "stage": "COMPLETED_SUCCESS", "stageDescription": "reconciliation completed.", "progress": { "links": { "created": 1, "existing": { "total": "0", "processed": 0 } }, "target": { "created": 1, "existing": { "total": "2", "processed": 2 } }, "source": { "existing": { "total": "1", "processed": 1 } } }, "situationSummary": { "UNASSIGNED": 2, "TARGET_IGNORED": 0, "SOURCE_IGNORED": 0, "MISSING": 0, "FOUND": 0, "AMBIGUOUS": 0, "UNQUALIFIED": 0, "CONFIRMED": 0, "SOURCE_MISSING": 0, "ABSENT": 1 }, "started": "2014-03-06T06:14:04.722Z" },] }
In contrast, the Admin UI displays the results of only the most recent reconciliation. For more information, see "Obtaining the Details of a Reconciliation in the Admin UI".
Each reconciliation run includes the following properties:
_id
The ID of the reconciliation run.
mapping
The name of the mapping, defined in the conf/sync.json file.
state
The high level state of the reconciliation run. Values can be as follows:
ACTIVE
The reconciliation run is in progress.
CANCELED
The reconciliation run was successfully canceled.
FAILED
The reconciliation run was terminated because of failure.
SUCCESS
The reconciliation run completed successfully.
stage
The current stage of the reconciliation run. Values can be as follows:
ACTIVE_INITIALIZED
The initial stage, when a reconciliation run is first created.
ACTIVE_QUERY_ENTRIES
Querying the source, target and possibly link sets to reconcile.
ACTIVE_RECONCILING_SOURCE
Reconciling the set of IDs retrieved from the mapping source.
ACTIVE_RECONCILING_TARGET
Reconciling any remaining entries from the set of IDs retrieved from the mapping target, that were not matched or processed during the source phase.
ACTIVE_LINK_CLEANUP
Checking whether any links are now unused and should be cleaned up.
ACTIVE_PROCESSING_RESULTS
Post-processing of reconciliation results.
ACTIVE_CANCELING
Attempting to abort a reconciliation run in progress.
COMPLETED_SUCCESS
Successfully completed processing the reconciliation run.
COMPLETED_CANCELED
Completed processing because the reconciliation run was aborted.
COMPLETED_FAILED
Completed processing because of a failure.
stageDescription
A description of the stages described previously.
progress
The progress object has the following structure (annotated here with comments):
"progress":{ "source":{ // Progress on set of existing entries in the mapping source "existing":{ "processed":1001, "total":"1001" // Total number of entries in source set, if known, "?" otherwise } }, "target":{ // Progress on set of existing entries in the mapping target "existing":{ "processed":1001, "total":"1001" // Total number of entries in target set, if known, "?" otherwise }, "created":0 // New entries that were created }, "links":{ // Progress on set of existing links between source and target "existing":{ "processed":1001, "total":"1001" // Total number of existing links, if known, "?" otherwise }, "created":0 // Denotes new links that were created } },
14.7.4. Obtaining the Details of a Reconciliation
Display the details of a specific reconciliation over REST, by including the reconciliation run ID in the URL. The following call shows the details of the reconciliation run initiated in "Triggering a Reconciliation".
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/recon/0890ad62-4738-4a3f-8b8e-f3c83bbf212e"
{
  "ended": "2014-03-06T07:00:32.094Z",
  "_id": "7a07c100-4f11-4d7e-bf8e-fa4594f99d58",
  "mapping": "systemLdapAccounts_managedUser",
  "state": "SUCCESS",
  "stage": "COMPLETED_SUCCESS",
  "stageDescription": "reconciliation completed.",
  "progress": {
    "links": {
      "created": 0,
      "existing": { "total": "1", "processed": 1 }
    },
    "target": {
      "created": 0,
      "existing": { "total": "3", "processed": 3 }
    },
    "source": {
      "existing": { "total": "1", "processed": 1 }
    }
  },
  "situationSummary": {
    "UNASSIGNED": 2,
    "TARGET_IGNORED": 0,
    "SOURCE_IGNORED": 0,
    "MISSING": 0,
    "FOUND": 0,
    "AMBIGUOUS": 0,
    "UNQUALIFIED": 0,
    "CONFIRMED": 1,
    "SOURCE_MISSING": 0,
    "ABSENT": 0
  },
  "started": "2014-03-06T07:00:31.907Z"
}
14.7.4.1. Obtaining the Details of a Reconciliation in the Admin UI
You can display the details of the most recent reconciliation in the Admin UI. Select the mapping. In the page that appears, you'll see a message similar to:
Completed: Last reconciled July 29, 2016 14:13
When you select this option, the details of the reconciliation appear.
14.7.5. Triggering LiveSync Over REST
Because you can trigger liveSync operations over REST (or by using the resource API), you can use an external scheduler to trigger liveSync operations, rather than using the OpenIDM scheduling mechanism.
There are two ways to trigger liveSync over REST:
Use the _action=liveSync parameter directly on the resource. This is the recommended method. The following example calls liveSync on the user accounts in an external LDAP system:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap/account?_action=liveSync"
Target the system endpoint and supply a source parameter to identify the object that should be synchronized. This method matches the scheduler configuration and can therefore be used to test schedules before they are implemented.
The following example calls the same liveSync operation as the previous example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system?_action=liveSync&source=system/ldap/account"
A successful liveSync operation returns the following response:
{ "_rev": "4", "_id": "SYSTEMLDAPACCOUNT", "connectorData": { "nativeType": "integer", "syncToken": 1 } }
Do not run two identical liveSync operations simultaneously. Rather, ensure that the first operation has completed before launching a second, similar operation.
To troubleshoot a liveSync operation that has not succeeded, include an optional parameter (detailedFailure) to return additional information. For example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/system/ldap/account?_action=liveSync&detailedFailure=true"
Note
The first time liveSync is called, it does not have a synchronization token in the database to establish which changes have already been processed. The default liveSync behavior is to locate the last existing entry in the change log, and to store that entry in the database as the current starting position from which changes should be applied. This behavior prevents liveSync from processing changes that might already have been processed during an initial data load. Subsequent liveSync operations will pick up and process any new changes.
Typically, in setting up liveSync on a new system, you would load the data initially (by using reconciliation, for example) and then enable liveSync, starting from that base point.
14.8. Restricting Reconciliation By Using Queries
Every reconciliation operation performs a query on the source and on the target resource, to determine which records should be reconciled. The default source and target queries are query-all-ids, which means that all records in both the source and the target are considered candidates for that reconciliation operation.
You can restrict reconciliation to specific entries by defining explicit source or target queries in the mapping configuration.
To restrict reconciliation to only those records whose employeeType on the source resource is Permanent, you might specify a source query as follows:
"mappings" : [ { "name" : "managedUser_systemLdapAccounts", "source" : "managed/user", "target" : "system/ldap/account", "sourceQuery" : { "_queryFilter" : "employeeType eq \"Permanent\"" }, ...
The format of the query can be any query type that is supported by the resource, and can include additional parameters, if applicable. OpenIDM supports the following query types.
For queries on managed objects:
_queryId
for arbitrary predefined, parameterized queries
_queryFilter
for arbitrary filters, in common filter notation
_queryExpression
for client-supplied queries, in native query format
For queries on system objects:
_queryId=query-all-ids
(the only supported predefined query)
_queryFilter
for arbitrary filters, in common filter notation
By default, the source and target queries are sent to the resource that is defined for that source or target. You can override the resource to which the query is sent by specifying a resourceName in the query. For example, to query a specific endpoint instead of the source resource, you might modify the preceding source query as follows:
"mappings" : [ { "name" : "managedUser_systemLdapAccounts", "source" : "managed/user", "target" : "system/ldap/account", "sourceQuery" : { "resourceName" : "endpoint/scriptedQuery" "_queryFilter" : "employeeType eq \"Permanent\"" }, ...
To override a source or target query that is defined in the mapping, you can specify the query when you call the reconciliation operation. If you wanted to reconcile all employee entries, and not just the permanent employees, you would run the reconciliation operation as follows:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{"sourceQuery": {"_queryId" : "query-all-ids"}}' \
 "http://localhost:8080/openidm/recon?_action=recon&mapping=managedUser_systemLdapAccounts"
By default, a reconciliation operation runs both the source and target phase. To avoid queries on the target resource, set runTargetPhase to false in the mapping configuration (conf/sync.json file). To prevent the target resource from being queried during the reconciliation operation configured in the previous example, amend the mapping configuration as follows:
{ "mappings" : [ { "name" : "systemLdapAccounts_managedUser", "source" : "system/ldap/account", "target" : "managed/user", "sourceQuery" : { "_queryFilter" : "employeeType eq \"Permanent\"" }, "runTargetPhase" : false, ...
14.8.1. Restricting Reconciliation in the Admin UI, With Queries
You can also restrict reconciliation by using queries through the Admin UI. Select Configure > Mappings, select a mapping, then select Association > Reconciliation Query Filters. You can then specify the desired source and target queries.
14.9. Restricting Reconciliation to a Specific ID
You can specify an ID to restrict reconciliation to a specific record in much the same way as you restrict reconciliation by using queries.
To restrict reconciliation to a specific ID, use the reconById action, instead of the recon action, when you call the reconciliation operation. Specify the ID with the ids parameter. Reconciling more than one ID with the reconById action is not currently supported.
The following example is based on the data from Sample 2b, which maps an LDAP server with the OpenIDM repository. The example reconciles only the user bjensen, using the managedUser_systemLdapAccounts mapping to update the user account in LDAP with the data from the OpenIDM repository. The _id for bjensen in this example is b3c2f414-e7b3-46aa-8ce6-f4ab1e89288c.
The example assumes that implicit synchronization has been disabled and that
a reconciliation operation is required to copy changes made in the repository
to the LDAP system:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/recon?_action=reconById&mapping=managedUser_systemLdapAccounts&ids=b3c2f414-e7b3-46aa-8ce6-f4ab1e89288c"
Reconciliation by ID takes the default reconciliation options that are specified in the mapping, so the source and target queries, and the source and target phases described in the previous section, apply equally to reconciliation by ID.
14.10. Configuring the LiveSync Retry Policy
You can control what happens when a liveSync operation reports a failure. Configure the liveSync retry policy to specify the number of times a failed modification should be reattempted, and what should happen if the modification is still unsuccessful after the specified number of attempts. If no retry policy is configured, OpenIDM reattempts the change an infinite number of times, until the change is successful. This behavior can increase data consistency in the case of transient failures (for example, when the connection to the database is temporarily lost). However, in situations where the cause of the failure is permanent (for example, if the change does not meet certain policy requirements), the change will never succeed, regardless of the number of attempts. In this case, the infinite retry behavior can effectively block subsequent liveSync operations from starting.
Generally, a scheduled reconciliation operation will eventually force consistency. However, to prevent repeated retries that block liveSync, restrict the number of times OpenIDM reattempts the same modification. You can then specify what OpenIDM does with failed liveSync changes. The failed modification can be stored in a dead letter queue, discarded, or reapplied. Alternatively, an administrator can be notified of the failure by email or by some other means. This behavior can be scripted. The default configuration in the samples provided with OpenIDM is to retry a failed modification five times, and then to log and ignore the failure.
The liveSync retry policy is configured in the connector configuration file (provisioner.openicf-*.json). The sample connector configuration files have a retry policy defined as follows:
"syncFailureHandler" : { "maxRetries" : 5, "postRetryAction" : "logged-ignore" },
The maxRetries field specifies the number of attempts that OpenIDM should make to process the failed modification. The value of this property must be a positive integer, or -1. A value of zero indicates that failed modifications should not be reattempted. In this case, the post-retry action is executed immediately when a liveSync operation fails. A value of -1 (or omitting the maxRetries property, or the entire syncFailureHandler from the configuration) indicates that failed modifications should be retried an infinite number of times. In this case, no post-retry action is executed.
The default retry policy relies on the scheduler, or whatever invokes liveSync. Therefore, if retries are enabled and a liveSync modification fails, OpenIDM will retry the modification the next time that liveSync is invoked.
The postRetryAction field indicates what OpenIDM should do if the maximum number of retries has been reached (or if maxRetries has been set to zero). The post-retry action can be one of the following:
logged-ignore
indicates that OpenIDM should ignore the failed modification, and log its occurrence.
dead-letter-queue
indicates that OpenIDM should save the details of the failed modification in a table in the repository (accessible over REST at repo/synchronisation/deadLetterQueue/provisioner-name).
script
specifies a custom script that should be executed when the maximum number of retries has been reached. For information about using custom scripts in the configuration, see "Scripting Reference". In addition to the regular objects described in "Scripting Reference", the following objects are available in the script scope:
syncFailure
Provides details about the failed record. The structure of the syncFailure object is as follows:
"syncFailure" : {
    "token" : the ID of the token,
    "systemIdentifier" : a string identifier that matches the "name" property in provisioner.openicf.json,
    "objectType" : the object type being synced, one of the keys in the "objectTypes" property in provisioner.openicf.json,
    "uid" : the UID of the object (for example uid=joe,ou=People,dc=example,dc=com),
    "failedRecord" : the record that failed to synchronize
},
To access these fields, include syncFailure.fieldname in your script.
failureCause
Provides the exception that caused the original liveSync failure.
failureHandlers
OpenIDM currently provides two synchronization failure handlers out of the box:
loggedIgnore
indicates that the failure should be logged, after which no further action should be taken.
deadLetterQueue
indicates that the failed record should be written to a specific table in the repository, where further action can be taken.
To invoke one of the internal failure handlers from your script, use a call similar to the following (shown here for JavaScript):
failureHandlers.deadLetterQueue.invoke(syncFailure, failureCause);
Two sample scripts are provided in path/to/openidm/samples/syncfailure/script, one that logs failures, and one that sends them to the dead letter queue in the repository.
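For instance, a custom post-retry script might distinguish transient failures from permanent ones before choosing a handler. The following JavaScript fragment is a minimal sketch; the error-matching condition is purely illustrative:
// Minimal sketch of a custom sync failure handler (JavaScript).
// syncFailure, failureCause, and failureHandlers are the script-scope
// objects described above; the ConnectException test is illustrative only.
if (failureCause.toString().indexOf("ConnectException") !== -1) {
    // Possibly transient connection problem: queue the record for later review
    failureHandlers.deadLetterQueue.invoke(syncFailure, failureCause);
} else {
    // Otherwise, log the failure and take no further action
    failureHandlers.loggedIgnore.invoke(syncFailure, failureCause);
}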
The following sample provisioner configuration file extract shows a liveSync retry policy that specifies a maximum of four retries before the failed modification is sent to the dead letter queue:
... "connectorName" : "org.identityconnectors.ldap.LdapConnector" }, "syncFailureHandler" : { "maxRetries" : 4, "postRetryAction" : dead-letter-queue }, "poolConfigOption" : { ...
In the case of a failed modification, a message similar to the following is output to the log file:
INFO: sync retries = 1/4, retrying
OpenIDM reattempts the modification the specified number of times. If the modification is still unsuccessful, a message similar to the following is logged:
INFO: sync retries = 4/4, retries exhausted
Jul 19, 2013 11:59:30 AM org.forgerock.openidm.provisioner.openicf.syncfailure.DeadLetterQueueHandler invoke
INFO: uid=jdoe,ou=people,dc=example,dc=com saved to dead letter queue
The log message indicates the entry for which the modification failed (uid=jdoe, in this example).
You can view the failed modification in the dead letter queue, over the REST interface, as follows:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/repo/synchronisation/deadLetterQueue/ldap?_queryId=query-all-ids"
{
  "query-time-ms": 2,
  "result": [
    {
      "_id": "4",
      "_rev": "0"
    }
  ],
  "conversion-time-ms": 0
}
To view the details of a specific failed modification, include its ID in the URL:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/repo/synchronisation/deadLetterQueue/ldap/4"
{
  "objectType": "account",
  "systemIdentifier": "ldap",
  "failureCause": "org.forgerock.openidm.sync.SynchronizationException: org.forgerock.openidm.objset.ConflictException: org.forgerock.openidm.sync.SynchronizationException: org.forgerock.openidm.script.ScriptException: ReferenceError: \"bad\" is not defined. (PropertyMapping/mappings/0/properties/3/condition#1)",
  "token": 4,
  "failedRecord": "complete record, in xml format",
  "uid": "uid=jdoe,ou=people,dc=example,dc=com",
  "_rev": "0",
  "_id": "4"
}
14.11. Disabling Automatic Synchronization Operations
By default, all mappings are automatically synchronized. A change to a managed object is automatically synchronized to all resources for which the managed object is configured as a source. Similarly, if liveSync is enabled for a system, changes to an object on that system are automatically propagated to the managed object repository.
To prevent automatic synchronization for a specific mapping, set the enableSync property of that mapping to false. In the following example, implicit synchronization is disabled. This means that changes to objects in the internal repository are not automatically propagated to the LDAP directory. To propagate changes to the LDAP directory, reconciliation must be launched manually:
{ "mappings" : [ { "name" : "managedUser_systemLdapAccounts", "source" : "managed/user", "target" : "system/ldap/account", "enableSync" : false, .... }
If enableSync is set to false for a system-to-managed-user mapping (for example, systemLdapAccounts_managedUser), liveSync is disabled for that mapping.
14.12. Configuring Synchronization Failure Compensation
When implicit synchronization is used to push a large number of changes from the managed object repository to several external repositories, the process can take some time. Problems such as lost connections might happen, resulting in the changes being only partially synchronized.
For example, if a Human Resources manager adds a group of new employees in one database, a partial synchronization might mean that some of those employees do not have access to their email or other systems.
You can configure implicit synchronization to revert a reconciliation operation if it is not completely successful. This is known as failure compensation. An example of such a configuration is shown in "Sample 5b - Failure Compensation With Multiple Resources" in the Samples Guide. That sample demonstrates how OpenIDM compensates when synchronization to an external resource fails.
Failure compensation works by using the optional onSync hook, which can be specified in the conf/managed.json file. The onSync hook can be used to provide failure compensation as follows:
... "onDelete" : { "type" : "text/javascript", "file" : "ui/onDelete-user-cleanup.js" }, "onSync" : { "type" : "text/javascript", "file" : "compensate.js" }, "properties" : [ ...
The onSync hook references a script (compensate.js), located in the /path/to/openidm/bin/defaults/script directory.
When a managed object is changed, an implicit synchronization operation attempts to synchronize the change (and any other pending changes) with any external data store(s) for which a mapping is configured. Note that implicit synchronization is enabled by default. To disable implicit synchronization, see "Disabling Automatic Synchronization Operations".
The implicit synchronization process proceeds with each mapping, in the order in which the mappings are specified in sync.json. The compensate.js script is designed to avoid partial synchronization. If synchronization is successful for all configured mappings, OpenIDM exits from the script.
If an implicit synchronization operation fails for a particular resource, the onSync hook invokes the compensate.js script. This script attempts to revert the original change by performing another update to the managed object. This change, in turn, triggers another implicit synchronization operation to all external resources for which mappings are configured.
If the synchronization operation fails again, the compensate.js script is triggered a second time. This time, however, the script recognizes that the change was originally made as a result of a compensation, and aborts. OpenIDM logs warning messages related to the sync action (notifyCreate, notifyUpdate, notifyDelete), along with the error that caused the sync failure.
If failure compensation is not configured, any issues with connections to an external resource can result in out of sync data stores, as discussed in the earlier Human Resources example.
With the compensate.js script, any such errors will result in each data store using the information it had before implicit synchronization started. OpenIDM stores that information, temporarily, in the oldObject variable.
In the previous Human Resources example, the manager would see that the new employees are not shown in the database. OpenIDM administrators can then check the log files for errors, address them, and restart implicit synchronization with a new REST call.
14.13. Synchronization Situations and Actions
During synchronization OpenIDM assesses source and target objects, and the links between them, and determines the synchronization situation that applies to each object. OpenIDM then performs a specific action, usually on the target object, depending on the assessed situation.
The action that is taken for each situation is defined in the policies section of your synchronization mapping. The following excerpt of the sync.json file from Sample 2b shows the defined actions in that sample:
{ "policies": [ { "situation": "CONFIRMED", "action": "UPDATE" }, { "situation": "FOUND", "action": "LINK" }, { "situation": "ABSENT", "action": "CREATE" }, { "situation": "AMBIGUOUS", "action": "IGNORE" }, { "situation": "MISSING", "action": "IGNORE" }, { "situation": "SOURCE_MISSING", "action": "DELETE" { "situation": "UNQUALIFIED", "action": "IGNORE" }, { "situation": "UNASSIGNED", "action": "IGNORE" } ] }
You can also define these actions in the Admin UI. Select Configure > Mappings, click on the required Mapping, then select the Behaviors tab to specify different actions per situation.
If you do not define an action for a particular situation, OpenIDM takes the default action for that situation. The following section describes how situations are assessed, lists all possible situations and describes the default actions taken for each situation.
14.13.1. How OpenIDM Assesses Synchronization Situations
Reconciliation is performed in two phases:
Source reconciliation, where OpenIDM accounts for source objects and associated links based on the configured mapping.
Target reconciliation, where OpenIDM iterates over the target objects that were not processed in the first phase.
For example, if a source object was deleted, the source reconciliation phase will not identify the target object that was previously linked to that source object. Instead, this orphaned target object is detected during the second phase.
During source reconciliation OpenIDM iterates through the objects in the source resource and evaluates the following conditions:
Is the source object valid?
Valid source objects are categorized qualifies=1. Invalid source objects are categorized qualifies=0. Invalid objects include objects that were filtered out by a validSource script or sourceCondition. For more information, see "Filtering Synchronized Objects".
Does the source object have a record in the links table?
Source objects that have a corresponding link in the repository's links table are categorized link=1. Source objects that do not have a corresponding link are categorized link=0.
Does the source object have a corresponding valid target object?
Source objects that have a corresponding object in the target resource are categorized target=1. Source objects that do not have a corresponding object in the target resource are categorized target=0.
The following diagram illustrates the categorization of four sample objects during source reconciliation. In this example, the source is the managed user repository and the target is an LDAP directory.
Based on the categorizations of source objects during the source reconciliation phase, OpenIDM assesses a situation for each source object. Not all situations are detected in all synchronization types. The following list describes the set of synchronization situations, when they can be detected, the default action taken for that situation, and valid alternative actions that can be defined for the situation:
- Situations detected during reconciliation and source change events:
CONFIRMED (qualifies=1, link=1, target=1)
The source object qualifies for a target object, and is linked to an existing target object.
Default action: UPDATE the target object.
Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC
FOUND (qualifies=1, link=0, target=1)
The source object qualifies for a target object and is not linked to an existing target object. There is a single target object that correlates with this source object, according to the logic in the correlation.
Default action: UPDATE the target object.
Other valid actions: EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC
FOUND_ALREADY_LINKED (qualifies=1, link=1, target=1)
The source object qualifies for a target object and is not linked to an existing target object. There is a single target object that correlates with this source object, according to the logic in the correlation, but that target object is already linked to a different source object.
Default action: throw an EXCEPTION.
Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC
ABSENT (qualifies=1, link=0, target=0)
The source object qualifies for a target object, is not linked to an existing target object, and no correlated target object is found.
Default action: CREATE a target object.
Other valid actions: EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC
UNQUALIFIED (qualifies=0, link=0 or 1, target=1 or >1)
The source object is unqualified (by the validSource script). One or more target objects are found through the correlation logic.
Default action: DELETE the target object or objects.
Other valid actions: EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC
AMBIGUOUS (qualifies=1, link=0, target>1)
The source object qualifies for a target object, is not linked to an existing target object, but there is more than one correlated target object (that is, more than one possible match on the target system).
Default action: throw an EXCEPTION.
Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC
MISSING (qualifies=1, link=1, target=0)
The source object qualifies for a target object, and is linked to a target object, but the target object is missing.
Default action: throw an EXCEPTION.
Other valid actions: CREATE, UNLINK, DELETE, IGNORE, REPORT, NOREPORT, ASYNC
Note
When a target object is deleted, the link from the target to the corresponding source object is not deleted automatically. This allows OpenIDM to detect and report items that might have been removed without permission or might need review. If you need to remove the corresponding link when a target object is deleted, change the action to UNLINK to remove the link, or to DELETE to remove the target object and the link.
SOURCE_IGNORED (qualifies=0, link=0, target=0)
The source object is unqualified (by the validSource script), no link is found, and no correlated target exists.
Default action: IGNORE the source object.
Other valid actions: EXCEPTION, REPORT, NOREPORT, ASYNC
- Situations detected only during source change events:
TARGET_IGNORED (qualifies=0, link=0 or 1, target=1)
The source object is unqualified (by the validSource script). One or more target objects are found through the correlation logic.
This situation differs from the UNQUALIFIED situation, based on the status of the link and the target. If there is a link, the target is not valid. If there is no link and exactly one target, that target is not valid.
Default action: IGNORE the target object until the next full reconciliation operation.
Other valid actions: DELETE, UNLINK, EXCEPTION, REPORT, NOREPORT, ASYNC
LINK_ONLY (qualifies=n/a, link=1, target=0)
The source may or may not be qualified. A link is found, but no target object is found.
Default action: throw an EXCEPTION.
Other valid actions: UNLINK, IGNORE, REPORT, NOREPORT, ASYNC
ALL_GONE (qualifies=n/a, link=0, cannot-correlate)
The source object has been removed. No link is found. Correlation is not possible, for one of the following reasons:
No previous source object can be found.
There is no correlation logic.
A previous source object was found, and correlation logic exists, but no corresponding target was found.
Default action: IGNORE the source object.
Other valid actions: EXCEPTION, REPORT, NOREPORT, ASYNC
Based on this list, OpenIDM would assign the following situations to the previous diagram:
During target reconciliation, OpenIDM iterates through the objects in the target resource that were not accounted for during source reconciliation, and evaluates the following conditions:
Is the target object valid?
Valid target objects are categorized qualifies=1. Invalid target objects are categorized qualifies=0. Invalid objects include objects that were filtered out by a validTarget script. For more information, see "Filtering Synchronized Objects".
Does the target object have a record in the links table?
Target objects that have a corresponding link in the repository's links table are categorized link=1. Target objects that do not have a corresponding link are categorized link=0.
Does the target object have a corresponding valid source object?
Target objects that have a corresponding object in the source resource are categorized source=1. Target objects that do not have a corresponding object in the source resource are categorized source=0.
The following diagram illustrates the categorization of three sample objects during target reconciliation.
Based on the categorizations of target objects during the target reconciliation phase, OpenIDM assesses a situation for each remaining target object. Not all situations are detected in all synchronization types. The following list describes the set of synchronization situations, when they can be detected, the default action taken for that situation, and valid alternative actions that can be defined for the situation:
- Situations detected only during reconciliation:
TARGET_IGNORED (qualifies=0)
During target reconciliation, the target becomes unqualified by the validTarget script.
Default action: IGNORE the target object.
Other valid actions: DELETE, UNLINK, REPORT, NOREPORT, ASYNC
UNASSIGNED (qualifies=1, link=0)
A valid target object exists but does not have a link.
Default action: throw an EXCEPTION.
Other valid actions: IGNORE, REPORT, NOREPORT, ASYNC
CONFIRMED (qualifies=1, link=1, source=1)
The target object qualifies, and a link to a source object exists.
Default action: UPDATE the target object.
Other valid actions: IGNORE, REPORT, NOREPORT
- Situations detected during reconciliation and target change events:
UNQUALIFIED (qualifies=0, link=1, source=1, but source does not qualify)
The target object is unqualified (by the validTarget script). There is a link to an existing source object, which is also unqualified.
Default action: DELETE the target object.
Other valid actions: UNLINK, EXCEPTION, IGNORE, REPORT, NOREPORT, ASYNC
SOURCE_MISSING (qualifies=1, link=1, source=0)
The target object qualifies and a link is found, but the source object is missing.
Default action: throw an EXCEPTION.
Other valid actions: DELETE, UNLINK, IGNORE, REPORT, NOREPORT, ASYNC
Based on this list, OpenIDM would assign the following situations to the previous diagram:
The following sections walk you through how OpenIDM assigns situations during source and target reconciliation.
14.13.2. Source Reconciliation
OpenIDM starts reconciliation and liveSync by reading a list of objects from the resource. For reconciliation, the list includes all objects that are available through the connector. For liveSync, the list contains only changed objects. OpenIDM can filter objects from the list by using the script specified in the validSource property, or the query specified in the sourceCondition property.
OpenIDM then iterates the list, checking each entry against the validSource and sourceCondition filters, and classifying objects according to their situations, as described in "How OpenIDM Assesses Synchronization Situations". OpenIDM uses the list of links for the current mapping to classify objects. Finally, OpenIDM executes the action that is configured for each situation.
The following table shows how OpenIDM assigns the appropriate situation during source reconciliation, depending on whether a valid source exists (Source Qualifies), whether a link exists in the repository (Link Exists), and the number of target objects found, based either on links or on the results of the correlation.
Source Qualifies? | Link Exists? | Target Objects Found[a] | Situation | ||||
---|---|---|---|---|---|---|---|
Yes | No | Yes | No | 0 | 1 | > 1 | |
X | X | X | SOURCE_MISSING | ||||
X | X | X | UNQUALIFIED | ||||
X | X | X | UNQUALIFIED | ||||
X | X | X | TARGET_IGNORED | ||||
X | X | X | UNQUALIFIED | ||||
X | X | X | ABSENT | ||||
X | X | X | FOUND | ||||
X | X[b] | X | FOUND_ALREADY_LINKED | ||||
X | X | X | AMBIGUOUS | ||||
X | X | X | MISSING | ||||
X | X | X | CONFIRMED | ||||
[a] If no link exists for the source object, then OpenIDM executes correlation logic. If no previous object is available, OpenIDM cannot correlate.
[b] A link exists from the target object but it is not for this specific source object.
14.13.3. Target Reconciliation
During source reconciliation, OpenIDM cannot detect situations where no source object exists, such as the UNASSIGNED situation. When no source object exists, OpenIDM detects the situation during the second reconciliation phase, target reconciliation. During target reconciliation, OpenIDM iterates over all target objects that do not have a representation on the source, checking each object against the validTarget filter, determining the appropriate situation, and executing the action configured for that situation.
The following table shows how OpenIDM assigns the appropriate situation during target reconciliation, depending on whether a valid target exists (Target Qualifies), whether a link with an appropriate type exists in the repository (Link Exists), whether a source object exists (Source Exists), and whether the source object qualifies (Source Qualifies). Not all situations assigned during source reconciliation are assigned during target reconciliation.
Target Qualifies? | Link Exists? | Source Exists? | Source Qualifies? | Situation | ||||
---|---|---|---|---|---|---|---|---|
Yes | No | Yes | No | Yes | No | Yes | No | |
X | TARGET_IGNORED | |||||||
X | X | X | UNASSIGNED | |||||
X | X | X | X | CONFIRMED | ||||
X | X | X | X | UNQUALIFIED | ||||
X | X | X | SOURCE_MISSING |
14.13.4. Situations Specific to Implicit Synchronization and LiveSync
Certain situations occur only during implicit synchronization (when OpenIDM pushes changes made in the repository out to external systems) and liveSync (when OpenIDM polls external system change logs for changes and updates the repository).
The following table shows the situations that pertain only to implicit sync and liveSync, when records are deleted from the source or target resource.
14.13.5. Synchronization Actions
When a situation has been assigned to an object, OpenIDM takes the actions configured in the mapping. If no action is configured, OpenIDM takes the default action for the situation. OpenIDM supports the following actions:
CREATE
Create and link a target object.
UPDATE
Link and update a target object.
DELETE
Delete and unlink the target object.
LINK
Link the correlated target object.
UNLINK
Unlink the linked target object.
EXCEPTION
Flag the link situation as an exception.
Do not use this action for liveSync mappings.
IGNORE
Do not change the link or target object state.
REPORT
Do not perform any action but report what would happen if the default action were performed.
NOREPORT
Do not perform any action or generate any report.
ASYNC
An asynchronous process has been started so do not perform any action or generate any report.
14.13.6. Launching a Script As an Action
In addition to the static synchronization actions described in the previous section, you can provide a script that is run in specific synchronization situations. The script can be either JavaScript or Groovy, and can be provided inline (with the "source" property), or referenced from a file (with the "file" property).
The following excerpt of a sample sync.json file specifies that an inline script should be invoked when a synchronization operation assesses an entry as ABSENT in the target system. The script checks whether the employeeType property of the corresponding source entry is contractor. If so, the entry is ignored. Otherwise, the entry is created on the target system:
{ "situation" : "ABSENT", "action" : { "type" : "text/javascript", "globals" : { }, "source" : "if (source.employeeType === "contractor") {action='IGNORE'} else {action='CREATE'};action;" }, }
The variables available to a script that is called as an action are source, target, linkQualifier, and recon (where recon.actionParam contains information about the current reconciliation operation). For more information about the variables available to scripts, see "Variables Available to Scripts".
The result obtained from evaluating this script must be a string whose value is one of the synchronization actions listed in "Synchronization Actions". This resulting action will be shown in the reconciliation log.
To launch a script as a synchronization action in the Admin UI:
Select Configure > Mappings.
Select the mapping that you want to change.
On the Behaviors tab, click the pencil icon next to the situation whose action you want to change.
On the Perform this Action tab, click Script, then enter the script that corresponds to the action.
14.13.7. Launching a Workflow As an Action
OpenIDM provides a default script (triggerWorkflowFromSync.js) that launches a predefined workflow when a synchronization operation assesses a particular situation. The mechanism for triggering this script is the same as for any other script. The script is provided in the openidm/bin/defaults/script/workflow directory. If you customize the script, copy it to the script directory of your project, to ensure that your customizations are preserved during an upgrade.
The parameters for the workflow are passed as properties of the action parameter.
The following extract of a sample sync.json file specifies that, when a synchronization operation assesses an entry as ABSENT, the workflow named managedUserApproval is invoked:
{ "situation" : "ABSENT", "action" : { "workflowName" : "managedUserApproval", "type" : "text/javascript", "file" : "workflow/triggerWorkflowFromSync.js" } }
To launch a workflow as a synchronization action in the Admin UI:
Select Configure > Mappings.
Select the mapping that you want to change.
On the Behaviors tab, click the pencil icon next to the situation whose action you want to change.
On the Perform this Action tab, click Workflow, then enter the details of the workflow you want to launch.
14.14. Asynchronous Reconciliation
Reconciliation can work in tandem with workflows to provide additional business logic to the reconciliation process. You can define scripts to determine the action that should be taken for a particular reconciliation situation. A reconciliation process can launch a workflow after it has assessed a situation, and then perform the reconciliation or some other action.
For example, you might want a reconciliation process to assess new user accounts that need to be created on a target resource. However, new user account creation might require some kind of approval from a manager before the accounts are actually created. The initial reconciliation process can assess the accounts that need to be created, launch a workflow to request management approval for those accounts, and then relaunch the reconciliation process to create the accounts, after the management approval has been received.
In this scenario, the defined script returns IGNORE for new accounts, and the reconciliation engine does not continue processing the given object. The script then initiates an asynchronous process which calls back and completes the reconciliation process at a later stage.
A sample configuration for this scenario is available in openidm/samples/sample9, and described in "Demonstrating Asynchronous Reconciliation Using a Workflow" in the Samples Guide.
Configuring asynchronous reconciliation using a workflow involves the following steps:
Create the workflow definition file (.xml or .bar file) and place it in the openidm/workflow directory. For more information about creating workflows, see "Integrating Business Processes and Workflows".
Modify the conf/sync.json file for the situation or situations that should call the workflow. Reference the workflow name in the configuration for that situation. For example, the following sync.json extract calls the managedUserApproval workflow if the situation is assessed as ABSENT:
{
    "situation" : "ABSENT",
    "action" : {
        "workflowName" : "managedUserApproval",
        "type" : "text/javascript",
        "file" : "workflow/triggerWorkflowFromSync.js"
    }
},
In the sample configuration, the workflow calls a second, explicit reconciliation process as a final step. This reconciliation process is called on the sync context path, with the performAction action (openidm.action('sync', 'performAction', params)), as sketched below.
You can also use this kind of explicit reconciliation to perform a specific action on a source or target record, regardless of the assessed situation.
You can call such an operation over the REST interface, specifying the source and/or target IDs, the mapping, and the action to be taken. The action can be any one of the supported reconciliation actions: CREATE, UPDATE, DELETE, LINK, UNLINK, EXCEPTION, REPORT, NOREPORT, ASYNC, IGNORE.
The following sample command calls the DELETE action on user bjensen, whose _id in the LDAP directory is uid=bjensen,ou=People,dc=example,dc=com. The user is deleted in the target resource, in this case, the OpenIDM repository. Note that the _id must be URL-encoded in the REST call:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/sync?_action=performAction&sourceId=uid%3Dbjensen%2Cou%3DPeople%2Cdc%3Dexample%2Cdc%3Dcom&mapping=systemLdapAccounts_managedUser&action=DELETE"
{
  "status": "OK"
}
The following example creates a link between a managed object and its corresponding system object. Such a call is useful in the context of manual data association, when correlation logic has linked an incorrect object, or when OpenIDM has been unable to determine the correct target object.
In this example, there are two separate target accounts (scarter.user and scarter.admin) that should be mapped to the managed object. This call creates a link to the user account, and specifies a link qualifier that indicates the type of link that will be created:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/sync?_action=performAction&action=LINK&sourceId=4b39f74d-92c1-4346-9322-d86cb2d828a8&targetId=scarter.user&mapping=managedUser_systemXmlfileAccounts&linkQualifier=user"
{
  "status": "OK"
}
For more information about linking to multiple accounts, see "Mapping a Single Source Object to Multiple Target Objects".
14.15. Configuring Case Sensitivity For Data Stores
OpenIDM is case-sensitive, which means that an upper case ID is considered different from an otherwise identical lower case ID during reconciliation. In contrast, OpenDJ is case-insensitive. This can lead to problems, as the ID of links created by reconciliation may not match the case of the IDs expected by OpenIDM.
If a mapping inherits links by using the links property, you do not need to set case-sensitivity, because the mapping uses the setting of the referred links.
Alternatively, you can address case-sensitivity issues from a datastore in one of the following ways:
Specify a case-insensitive datastore. To do so, set the sourceIdsCaseSensitive or targetIdsCaseSensitive properties to false in the mapping for those links. For example, if the source LDAP data store is case-insensitive, set the mapping from the LDAP store to the managed user repository as follows:
"mappings" : [
    {
        "name" : "systemLdapAccounts_managedUser",
        "source" : "system/ldap/account",
        "sourceIdsCaseSensitive" : false,
        "target" : "managed/user",
        "properties" : [
...
You may also need to modify the OpenICF provisioner to make it case-insensitive. To do so, open your provisioner configuration file, and set the enableFilteredResultsHandler property to false:
"resultsHandlerConfig" : {
    "enableFilteredResultsHandler" : false,
},
Caution
Do not disable the filtered results handler for the CSV file connector. The CSV file connector does not perform filtering, so if you disable the filtered results handler for this connector, the full CSV file will be returned for every request.
Use a case-insensitive option from your datastore. For example, in MySQL, you can change the collation of managedobjectproperties.propvalue to utf8_general_ci.
In general, to address case-sensitivity, focus on database, table, or column-level collation settings. Queries performed against repositories configured in this way are subject to the collation setting, which is used for comparison.
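For example, a MySQL statement along the following lines changes the collation of the managedobjectproperties.propvalue column. This is a sketch only; it assumes the column uses the default TEXT type from the OpenIDM repository schema, so verify your schema before applying it:
ALTER TABLE managedobjectproperties
  MODIFY propvalue TEXT
  CHARACTER SET utf8
  COLLATE utf8_general_ci;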
14.16. Optimizing Reconciliation Performance
By default, reconciliation is configured for optimal performance. Some of these optimizations might, however, be unsuitable for your environment. The following sections describe the default optimizations, how they can be configured, and additional methods you can use to improve the performance of reconciliation operations.
14.16.1. Correlating Empty Target Sets
To optimize performance, reconciliation does not correlate source objects to target objects if the set of target objects is empty when the correlation is started. This considerably speeds up the process the first time reconciliation is run. You can change this behavior for a specific mapping by adding the correlateEmptyTargetSet property to the mapping definition and setting it to true. For example:
{ "mappings": [ { "name" : "systemMyLDAPAccounts_managedUser", "source" : "system/MyLDAP/account", "target" : "managed/user", "correlateEmptyTargetSet" : true }, ] }
Be aware that this setting will have a performance impact on the reconciliation process.
14.16.1.1. Correlating Empty Target Sets in the Admin UI
To change the correlateEmptyTargetSet option in the Admin UI, choose Configure > Mappings. Select the desired mapping and, on the Advanced tab, enable or disable the following option:
Correlate Empty Target Objects
14.16.2. Prefetching Links
All links are queried at the start of reconciliation, and the results of that query are used. You can disable link prefetching, so that the reconciliation process looks up each link in the database as it processes each source or target object. To disable the prefetching of links, add the prefetchLinks property to the mapping, and set it to false, for example:
{ "mappings": [ { "name": "systemMyLDAPAccounts_managedUser", "source": "system/MyLDAP/account", "target": "managed/user" "prefetchLinks" : false } ] }
Be aware that this setting will have a performance impact on the reconciliation process.
14.16.2.1. Prefetching Links in the Admin UI
To change the prefetchLinks option in the Admin UI, choose Configure > Mappings. Select the desired mapping and, on the Advanced tab, enable or disable the following option:
Pre-fetch Links
14.16.3. Parallel Reconciliation Threads
By default, reconciliation is multithreaded; numerous threads are dedicated to the same reconciliation run. Multithreading generally improves reconciliation performance. The default number of threads for a single reconciliation run is 10 (plus the main reconciliation thread). Under normal circumstances, you should not need to change this number; however, the default might not be appropriate in the following situations:
The hardware has many cores and supports more concurrent threads. As a rule of thumb for performance tuning, start with setting the thread number to two times the number of cores.
The source or target is an external system with high latency or slow response times. Threads may then spend considerable time waiting for a response from the external system. Increasing the available threads enables the system to prepare or continue with additional objects.
To change the number of threads, set the taskThreads property in the conf/sync.json file, for example:
"mappings" : [ { "name" : "systemXmlfileAccounts_managedUser", "source" : "system/xmlfile/account", "target" : "managed/user", "taskThreads" : 20 ... } ] }
A zero value runs reconciliation as a serialized process, on the main reconciliation thread.
14.16.3.1. Parallel Reconciliation Threads in the Admin UI
To change the taskThreads option in the Admin UI, choose Configure > Mappings. Select the desired mapping and, on the Advanced tab, adjust the number of threads in the following text box:
Threads Per Reconciliation
14.16.4. Improving Reconciliation Query Performance
Reconciliation operations are processed in two phases; a source phase and a target phase. In most reconciliation configurations, source and target queries make a read call to every record on the source and target systems to determine candidates for reconciliation. On slow source or target systems, these frequent calls can incur a substantial performance cost.
To improve query performance in these situations, you can preload the entire result set into memory on the source or target system, or on both systems. Subsequent read queries on known IDs are made against the data in memory, rather than the data on the remote system. For this optimization to be effective, the entire result set must fit into the available memory on the system for which it is enabled.
The optimization works by defining a sourceQuery or targetQuery in the synchronization mapping that returns not just the ID, but the complete object.
The following example query loads the full result set into memory during the source phase of the reconciliation. The example uses a common filter expression, called with the _queryFilter keyword. The query returns the complete object:
"mappings" : [ { "name" : "systemLdapAccounts_managedUser", "source" : "system/ldap/account", "target" : "managed/user", "sourceQuery" : { "_queryFilter" : "true" }, ...
OpenIDM tries to detect what data has been returned. The autodetection mechanism assumes that a result set that includes three or more fields per object (apart from the _id and _rev fields) contains the complete object.
You can explicitly state whether a query is configured to return complete objects by setting the value of sourceQueryFullEntry or targetQueryFullEntry in the mapping. The setting of these properties overrides the autodetection mechanism.
Setting these properties to false indicates that the returned object is not the complete object. This might be required if a query returns more than three fields of an object, but not the complete object. Without this setting, the autodetect logic would assume that the complete object was being returned. OpenIDM uses only the IDs from this query result. If the complete object is required, the object is queried on demand.
Setting these properties to true indicates that the complete object is returned. This setting is typically required only for very small objects, for which the number of returned fields does not reach the threshold required for the autodetection mechanism to assume that it is a full object. In this case, the query result includes all the details required to pre-load the full object.
The following excerpt indicates that the full objects are returned and that OpenIDM should not autodetect the result set:
"mappings" : [ { "name" : "systemLdapAccounts_managedUser", "source" : "system/ldap/account", "target" : "managed/user", "sourceQueryFullEntry" : true, "sourceQuery" : { "_queryFilter" : "true" }, ...
By default, all the attributes that are defined in the connector configuration file are loaded into memory. If your mapping uses only a small subset of the attributes in the connector configuration file, you can restrict your query to return only those attributes required for synchronization by using the _fields parameter with the query filter.
The following excerpt loads only a subset of attributes into memory, for all users in an LDAP directory:
"mappings" : [
  {
    "name" : "systemLdapAccounts_managedUser",
    "source" : "system/ldap/account",
    "target" : "managed/user",
    "sourceQuery" : {
      "_queryFilter" : "true",
      "_fields" : "cn, sn, dn, uid, employeeType, mail"
    },
...
14.16.5. Improving Role-Based Provisioning Performance With an onRecon Script
OpenIDM provides an onRecon script that runs once, at the beginning of each reconciliation. This script can perform any setup or initialization operations that are appropriate for the reconciliation run.
In addition, OpenIDM provides a reconContext that is added to a request's context chain when reconciliation runs. The reconContext can store preloaded data that can be used by other OpenIDM components (such as the managed object service) to increase performance.
The default onRecon script (openidm/bin/defaults/script/roles/onRecon.groovy) loads the reconContext with all the roles and assignments that are required for the current mapping. The effectiveAssignments script checks the reconContext first. If a reconContext is present, the script uses it to populate the array of effectiveAssignments. This prevents a read operation to managed/role or managed/assignment every time reconciliation runs, and greatly improves the overall performance of role-based provisioning.
You can customize the onRecon, effectiveRoles, and effectiveAssignments scripts to provide additional business logic during reconciliation. If you customize these scripts, copy the default scripts from openidm/bin/defaults/script into your project's script directory, and make the changes there.
14.16.6. Paging Reconciliation Query Results
"Improving Reconciliation Query Performance" describes how to improve reconciliation performance by loading all entries into memory to avoid making individual requests to the external system for every ID. However, this optimization depends on the entire result set fitting into the available memory on the system for which it is enabled. For particularly large data sets (for example, data sets of hundreds of millions of users), having the entire data set in memory might not be feasible.
To alleviate this constraint, OpenIDM supports reconciliation paging, which breaks down extremely large data sets into chunks. It also lets you specify the number of entries that should be reconciled in each chunk or page.
Reconciliation paging is disabled by default, and can be enabled per mapping (in the sync.json file). To configure reconciliation paging, set the reconSourceQueryPaging property to true and set the reconSourceQueryPageSize in the synchronization mapping, for example:
{ "mappings" : [ { "name" : "systemLdapAccounts_managedUser", "source" : "system/ldap/account", "target" : "managed/user", "reconSourceQueryPaging" : true, "reconSourceQueryPageSize" : 100, ... }
The value of reconSourceQueryPageSize must be a positive integer, and specifies the number of entries that are processed in each page. If reconciliation paging is enabled but no page size is set, a default page size of 1000 is used.
14.17. Scheduling Synchronization
You can schedule synchronization operations, such as liveSync and reconciliation, using Quartz cronTrigger syntax. For more information about cronTrigger, see the corresponding Quartz documentation.
This section describes scheduling specifically for reconciliation and liveSync. You can use OpenIDM's scheduler service to schedule any other event by supplying a link to a script file, in which that event is defined. For information about scheduling other events, see "Scheduling Tasks and Events".
14.17.1. Configuring Scheduled Synchronization
Each scheduled reconciliation and liveSync task requires a schedule configuration file in your project's conf directory. By convention, schedule configuration files are named schedule-schedule-name.json, where schedule-name is a logical name for the scheduled synchronization operation, such as reconcile_systemXmlAccounts_managedUser.
Schedule configuration files have the following format:
{ "enabled" : true, "persisted" : true, "type" : "cron", "startTime" : "(optional) time", "endTime" : "(optional) time", "schedule" : "cron expression", "misfirePolicy" : "optional, string", "timeZone" : "(optional) time zone", "invokeService" : "service identifier", "invokeContext" : "service specific context info" }
These properties are specific to the scheduler service, and are explained in "Scheduling Tasks and Events".
To schedule a reconciliation or liveSync task, set the invokeService property to either sync (for reconciliation) or provisioner (for liveSync).
The value of the invokeContext property depends on the type of scheduled event. For reconciliation, the properties are set as follows:
{ "invokeService": "sync", "invokeContext": { "action": "reconcile", "mapping": "systemLdapAccount_managedUser" } }
The mapping is either referenced by its name in the conf/sync.json file, or defined inline by using the mapping property, as shown in the example in "Specifying the Mapping as Part of the Schedule".
For liveSync, the properties are set as follows:
{ "invokeService": "provisioner", "invokeContext": { "action": "liveSync", "source": "system/OpenDJ/__ACCOUNT__" } }
The source property follows the convention for a pointer to an external resource object, and takes the form system/resource-name/object-type.
Important
When you schedule a reconciliation operation to run at regular intervals, do not set "concurrentExecution" : true. This parameter enables multiple scheduled operations to run concurrently. You cannot launch multiple reconciliation operations for a single mapping concurrently.
Daylight Savings Time (DST) can cause problems for scheduled liveSync operations. For more information, see "Schedules and Daylight Savings Time".
14.17.2. Specifying the Mapping as Part of the Schedule
Mappings for synchronization operations are usually stored in your project's sync.json file. You can, however, provide the mapping for a scheduled synchronization operation by including it as part of the invokeContext of the schedule configuration, as shown in the following example:
{ "enabled": true, "type": "cron", "schedule": "0 08 16 * * ?", "persisted": true, "invokeService": "sync", "invokeContext": { "action": "reconcile", "mapping": { "name": "CSV_XML", "source": "system/Ldap/account", "target": "managed/user", "properties": [ { "source": "firstname", "target": "firstname" }, ... ], "policies": [...] } } }
Chapter 15. Extending Functionality By Using Scripts
Scripting enables you to customize various aspects of OpenIDM functionality, for example, by providing custom logic between source and target mappings, defining correlation rules, filters, and triggers, and so on.
OpenIDM supports scripts written in JavaScript and Groovy. Script options, and the locations in which OpenIDM expects to find scripts, are configured in the conf/script.json file for your project. For more information, see "Setting the Script Configuration".
OpenIDM includes several default scripts in the following directory: install-dir/bin/defaults/script/.
Do not modify or remove any of the scripts in this directory. OpenIDM needs
these scripts to run specific services. Scripts in this folder are not
guaranteed to remain constant between product releases.
If you develop custom scripts, copy them to the script/ directory of your project, for example, path/to/openidm/samples/sample2/script/.
15.1. Validating Scripts Over REST
OpenIDM exposes a script endpoint over which scripts can be validated, by specifying the script parameters as part of the JSON payload. This functionality enables you to test how a script will operate in your deployment, with complete control over the inputs and outputs. Testing scripts in this way can be useful in debugging.
In addition, the script registry service supports calls to other scripts. For example, you might have logic written in JavaScript, but also some code available in Groovy. Ordinarily, it would be challenging to interoperate between these two environments, but this script service enables you to call one from the other on the OpenIDM router.
The script endpoint supports two actions: eval and compile.
The eval action evaluates a script, taking any actions referenced in the script, such as router calls to affect the state of an object. For JavaScript scripts, the value of the last statement executed is the value produced by the script, and the expected result of the REST call.
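For example, the following hypothetical call evaluates a trivial inline script. Because the value of the last executed statement is what the script produces, the call returns the sum:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "type": "text/javascript",
   "source": "var x = 21; x + x;"
 }' \
 "http://localhost:8080/openidm/script?_action=eval"
42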
The following REST call attempts to evaluate the autoPurgeAuditRecon.js script (provided in openidm/bin/defaults/script/audit), but provides an incorrect purge type ("purgeByNumOfRecordsToKeep" instead of "purgeByNumOfReconsToKeep"). The error is picked up in the evaluation. The example assumes that the script exists in the directory reserved for custom scripts (openidm/script).
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "type": "text/javascript",
   "file": "script/autoPurgeAuditRecon.js",
   "globals": {
     "input": {
       "mappings": ["%"],
       "purgeType": "purgeByNumOfRecordsToKeep",
       "numOfRecons": 1
     }
   }
 }' \
 "http://localhost:8080/openidm/script?_action=eval"
"Must choose to either purge by expired or number of recons to keep"
Tip
The variables passed into this script are namespaced with the "globals" map. It is preferable to namespace variables passed into scripts in this way, to avoid collisions with the top-level reserved words for script maps, such as file, source, and type.
The compile action compiles a script, but does not execute it. This action is used primarily by the UI, to validate scripts that are entered in the UI. A successful compilation returns true. An unsuccessful compilation returns the reason for the failure.
The following REST call tests whether a transformation script will compile.
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "type":"text/javascript",
   "source":"source.mail ? source.mail.toLowerCase() : null"
 }' \
 "http://localhost:8080/openidm/script?_action=compile"
true
If the script is not valid, the action returns an indication of the error, for example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "type":"text/javascript",
   "source":"source.mail ? source.mail.toLowerCase()"
 }' \
 "http://localhost:8080/openidm/script?_action=compile"
{
  "code": 400,
  "reason": "Bad Request",
  "message": "missing : in conditional expression (3864142CB836831FAB8EAB662F566139CDC22BF2#1) in 3864142CB836831FAB8EAB662F566139CDC22BF2 at line number 1 at column number 39"
}
15.2. Creating Custom Endpoints to Launch Scripts
Custom endpoints enable you to run arbitrary scripts through the OpenIDM REST URI.
Custom endpoints are configured in files named conf/endpoint-name.json, where name generally describes the purpose of the endpoint. The endpoint configuration file includes an inline script or a reference to a script file, in either JavaScript or Groovy. The referenced script provides the endpoint functionality.
A sample custom endpoint configuration is provided in the openidm/samples/customendpoint directory. The sample includes three files:
- conf/endpoint-echo.json
Provides the configuration for the endpoint.
- script/echo.js
Provides the endpoint functionality in JavaScript.
- script/echo.groovy
Provides the endpoint functionality in Groovy.
This sample endpoint is described in detail in "Custom Endpoint Sample" in the Samples Guide.
Endpoint configuration files and scripts are discussed further in the following sections.
15.2.1. Creating a Custom Endpoint Configuration File
An endpoint configuration file includes the following elements:
{ "context" : "endpoint/linkedView/*", "type" : "text/javascript", "source" : "require('linkedView').fetch(request.resourcePath);" }
context
string, optional
The context path under which the custom endpoint is registered; in other words, the route to the endpoint. An endpoint with the context endpoint/test is addressable over REST at the URL http://localhost:8080/openidm/endpoint/test, or by using a script such as openidm.read("endpoint/test").
Endpoint contexts support wildcards, as shown in the preceding example. The endpoint/linkedView/* route matches the following patterns:
endpoint/linkedView/managed/user/bjensen
endpoint/linkedView/system/ldap/account/bjensen
endpoint/linkedView/
endpoint/linkedView
The context parameter is not mandatory in the endpoint configuration file. If you do not include a context, the route to the endpoint is identified by the name of the file. For example, in the sample endpoint configuration provided in openidm/samples/customendpoint/conf/endpoint-echo.json, the route to the endpoint is endpoint/echo.
Note that this context path is not the same as the context chain of the request. For information about the request context chain, see "Understanding the Request Context Chain".
type
string, required
The type of script to be executed, either text/javascript or groovy.
file or source
The path to the script file, or the script itself, inline.
For example:
"file" : "workflow/gettasksview.js"
or
"source" : "require('linkedView').fetch(request.resourcePath);"
You must set authorization appropriately for any custom endpoints that you add, for example, by restricting the appropriate methods to the appropriate roles. For more information, see "Authorization".
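For example, router access rules are defined in your project's script/access.js file. A sketch of a rule that restricts a hypothetical endpoint/echo endpoint to administrators might look like the following (adapt the pattern, roles, and methods to your deployment):
// Hypothetical entry in the "configs" array of script/access.js
{
  "pattern"  : "endpoint/echo",   // route of the custom endpoint
  "roles"    : "openidm-admin",   // restrict access to administrators
  "methods"  : "read,query",      // permit only read operations
  "actions"  : "*"
},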
15.2.2. Writing Custom Endpoint Scripts
The custom endpoint script files in the samples/customendpoint/script directory demonstrate all the HTTP operations that can be called by a script. Each HTTP operation is associated with a method: create, read, update, delete, patch, action, or query. Requests sent to the custom endpoint return a list of the variables available to each method.
All scripts are invoked with a global request variable in their scope. This request structure carries all the information about the request.
Warning
Read requests on custom endpoints must not modify the state of the resource, either on the client or the server, as this can make them susceptible to CSRF exploits.
The standard OpenIDM READ endpoints are safe from Cross Site Request Forgery (CSRF) exploits because they are inherently read-only. That is consistent with the Guidelines for Implementation of REST, from the US National Security Agency, which note that "... CSRF protections need only be applied to endpoints that will modify information in some way."
Custom endpoint scripts must return a JSON object. The structure of the return object depends on the method in the request.
The following example shows the create method in the echo.js file:
if (request.method === "create") {
  return {
    method: "create",
    resourceName: request.resourcePath,
    newResourceId: request.newResourceId,
    parameters: request.additionalParameters,
    content: request.content,
    context: context.current
  };
}
The following example shows the query method in the echo.groovy file:
else if (request instanceof QueryRequest) {
  // query results must be returned as a list of maps
  return [
    [
      method: "query",
      resourceName: request.resourcePath,
      pagedResultsCookie: request.pagedResultsCookie,
      pagedResultsOffset: request.pagedResultsOffset,
      pageSize: request.pageSize,
      queryExpression: request.queryExpression,
      queryId: request.queryId,
      queryFilter: request.queryFilter.toString(),
      parameters: request.additionalParameters,
      context: context.toJsonValue().getObject()
    ]
  ]
}
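For comparison, a minimal sketch of a read handler, in the style of the echo.js sample (the exact fields returned by the sample script may differ):
} else if (request.method === "read") {
  return {
    method: "read",
    resourceName: request.resourcePath,       // for example, "echo"
    parameters: request.additionalParameters, // any query string parameters
    context: context.current                  // the request context chain
  };
}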
Depending on the method, the variables available to the script can include the following:
resourceName
The name of the resource, without the endpoint/ prefix, such as echo.
newResourceId
The identifier of the new object, available as the result of a create request.
revision
The revision of the object.
parameters
Any additional parameters provided in the request. The sample code returns request parameters from an HTTP GET with ?param=x, as "parameters":{"param":"x"}.
content
Content based on the latest revision of the object, using getObject.
context
The context of the request, including headers and security. For more information, see "Understanding the Request Context Chain".
- Paging parameters
The pagedResultsCookie, pagedResultsOffset, and pageSize parameters are specific to query methods. For more information, see "Paging and Counting Query Results".
- Query parameters
The queryExpression, queryId, and queryFilter parameters are specific to query methods. For more information, see "Constructing Queries".
15.2.3. Setting Up Exceptions in Scripts
When you create a custom endpoint script, you might need to build exception-handling logic. To return meaningful messages in REST responses and in logs, you must comply with the language-specific method of throwing errors.
A script written in JavaScript should comply with the following exception format:
throw { "code": 400, // any valid HTTP error code "message": "custom error message", "detail" : { "var": parameter1, "complexDetailObject" : [ "detail1", "detail2" ] } }
Any exception includes the specified HTTP error code, the corresponding HTTP error message, such as Bad Request, a custom error message that can help you diagnose the error, and any additional detail that you think might be helpful.
A script written in Groovy should comply with the following exception format:
import org.forgerock.json.resource.ResourceException
import org.forgerock.json.JsonValue

throw new ResourceException(404, "Your error message").setDetail(new JsonValue([
  "var": "parameter1",
  "complexDetailObject" : [
    "detail1",
    "detail2"
  ]
]))
15.3. Registering Custom Scripted Actions
OpenIDM enables you to register custom scripts that initiate some arbitrary action on a managed object endpoint. You can declare any number of actions in your managed object schema and associate those actions with a script.
Custom scripted actions have access to the following variables: context, request, resourcePath, and object. For more information, see "Variables Available to Scripts".
Custom scripted actions facilitate arbitrary behavior on managed objects. For example, imagine a scenario where you want your managed users to be able to indicate whether they receive update notifications. You can define an action that toggles the value of a specific property on the user object. You can implement this scenario by following these steps:
Add an updates property to the managed user schema (in your project's conf/managed.json file) as follows:
"properties": {
  ...
  "updates": {
    "title": "Automatic Updates",
    "viewable": true,
    "type": "boolean",
    "searchable": true,
    "userEditable": true
  },
  ...
}
Add an action named toggleUpdates to the managed user object definition as follows:
{
  "objects" : [
    {
      "name" : "user",
      "onCreate" : { ... },
      ...
      "actions" : {
        "toggleUpdates" : {
          "type" : "text/javascript",
          "source" : "openidm.patch(resourcePath, null, [{ 'operation' : 'replace', 'field' : '/updates', 'value' : !object.updates }])"
        }
      },
      ...
Note that the toggleUpdates action calls a script that changes the value of the user's updates property.
Call the script by specifying the name of the action in a POST request on the user object, for example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/managed/user/ID?_action=toggleUpdates"
You can test this functionality as follows:
Create a managed user, bjensen, with an updates property that is set to true:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "_id":"bjensen",
   "userName":"bjensen",
   "sn":"Jensen",
   "givenName":"Barbara",
   "mail":"bjensen@example.com",
   "telephoneNumber":"5556787",
   "description":"Created by OpenIDM REST.",
   "updates": true,
   "password":"Passw0rd"
 }' \
 "http://localhost:8080/openidm/managed/user?_action=create"
{
  "_id": "bjensen",
  "_rev": "1",
  "userName": "bjensen",
  "sn": "Jensen",
  "givenName": "Barbara",
  "mail": "bjensen@example.com",
  "telephoneNumber": "5556787",
  "description": "Created by OpenIDM REST.",
  "updates": true,
  "accountStatus": "active",
  "effectiveRoles": [],
  "effectiveAssignments": []
}
Run the toggleUpdates action on bjensen's entry:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/managed/user/bjensen?_action=toggleUpdates"
{
  "_id": "bjensen",
  "_rev": "2",
  "userName": "bjensen",
  "sn": "Jensen",
  "givenName": "Barbara",
  "mail": "bjensen@example.com",
  "telephoneNumber": "5556787",
  "description": "Created by OpenIDM REST.",
  "updates": false,
  "accountStatus": "active",
  "effectiveRoles": [],
  "effectiveAssignments": []
}
Note in the command output that this action has set bjensen's updates property to false.
The return value of a custom scripted action is ignored. The managed object is returned as the response of the scripted action, whether that object has been updated by the script or not.
Chapter 16. Scheduling Tasks and Events
The scheduler service enables you to schedule reconciliation and synchronization tasks, trigger scripts, collect and run reports, trigger workflows, and perform custom logging.
The scheduler service supports cronTrigger syntax based on the Quartz Scheduler (bundled with OpenIDM). For more information about cronTrigger, see the corresponding Quartz documentation.
By default, OpenIDM picks up changes to scheduled tasks and events dynamically, during initialization and also at runtime. For more information, see "Changing the Default Configuration".
In addition to the fine-grained scheduling facility, you can perform a scheduled batch scan for a specified date in OpenIDM data, and then automatically run a task when this date is reached. For more information, see "Scanning Data to Trigger Tasks".
16.1. Configuring the Scheduler Service
There is a distinction between the configuration of the scheduler service, and the configuration of individual scheduled tasks and events. The scheduler service is configured in your project's conf/scheduler.json file. This file has the following format:
{ "threadPool" : { "threadCount" : "10" }, "scheduler" : { "executePersistentSchedules" : "&{openidm.scheduler.execute.persistent.schedules}" } }
The properties in the scheduler.json file relate to the configuration of the Quartz Scheduler:
threadCount specifies the maximum number of threads that are available for running scheduled tasks concurrently.
executePersistentSchedules allows you to disable persistent schedules for a specific node. If this parameter is set to false, the scheduler service supports the management of persistent schedules (CRUD operations) but does not run any persistent schedules. The value of this property can be a string or boolean, and is true by default.
advancedProperties (optional) enables you to configure additional properties for the Quartz Scheduler.
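For example, a scheduler.json sketch that passes a standard Quartz setting through advancedProperties might look like this (the Quartz key shown is a standard one; check the Quartz reference for the full list):
{
  "threadPool" : {
    "threadCount" : "10"
  },
  "scheduler" : {
    "executePersistentSchedules" : "true",
    "advancedProperties" : {
      "org.quartz.scheduler.instanceName" : "node1-scheduler"
    }
  }
}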
Note
In clustered environments, the scheduler service obtains an instanceID, and checkin and timeout settings, from the cluster management service (defined in the project-dir/conf/cluster.json file).
For details of all the configurable properties for the Quartz Scheduler, see the Quartz Scheduler Configuration Reference.
Note
You can also control whether schedules are persisted in your project's conf/boot/boot.properties file. In the default boot.properties file, persistent schedules are enabled:
# enables the execution of persistent schedulers
openidm.scheduler.execute.persistent.schedules=true
Settings in boot.properties are not persisted in the repository. Therefore, you can use the boot.properties file to set different values for a property across different nodes in a cluster. For example, if your deployment has a four-node cluster and you want only two of those nodes to execute persisted schedules, you can disable persisted schedules in the boot.properties files of the remaining two nodes. If you set these values directly in the scheduler.json file, the values are persisted to the repository and are therefore applied to all nodes in the cluster.
Changing the value of the openidm.scheduler.execute.persistent.schedules property in the boot.properties file changes the scheduler that manages scheduled tasks on that node. Because the persistent and in-memory schedulers are managed separately, a situation can arise in which two separate schedules have the same schedule name.
For more information about persistent schedules, see "Configuring Persistent Schedules".
16.2. Configuring Schedules
You can use the Admin UI or JSON configuration files to schedule tasks and events. To configure a schedule in the Admin UI, select Configure > Schedules and then click Add Schedule. If you configure your schedules directly in JSON files, place these files in your project's conf/ directory. By convention, OpenIDM uses file names of the form schedule-schedule-name.json, where schedule-name is a logical name for the scheduled operation, for example, schedule-reconcile_systemXmlAccounts_managedUser.json.
There are several example schedule configuration files in the openidm/samples/schedules directory.
Each schedule configuration file has the following format:
{ "enabled" : true, "persisted" : true, "concurrentExecution" : false, "type" : "cron", "startTime" : "(optional) time", "endTime" : "(optional) time", "schedule" : "cron expression", "misfirePolicy" : "optional, string", "timeZone" : "(optional) time zone", "invokeService" : "service identifier", "invokeContext" : "service specific context info", "invokeLogLevel" : "(optional) level" }
The schedule configuration properties are defined as follows:
enabled
Set to true to enable the schedule. When this property is false, OpenIDM considers the schedule configuration dormant, and does not allow it to be triggered or launched.
If you want to retain a schedule configuration, but do not want it used, set enabled to false for task and event schedulers, instead of changing the configuration or cron expressions.
persisted (optional)
Specifies whether the schedule state should be persisted or stored in RAM. Boolean (true or false), false by default.
In a clustered environment, this property must be set to true to have the schedule fire only once across the cluster. For more information, see "Configuring Persistent Schedules".
concurrentExecution
Specifies whether multiple instances of the same schedule can run concurrently. Boolean (true or false), false by default. This default prevents a new scheduled task from being launched before the previously launched instance of the same task has completed. For example, under normal circumstances you would want a liveSync operation to complete before the same operation is launched again. To enable multiple instances of a schedule to run concurrently, set this parameter to true. The behavior of missed scheduled tasks is governed by the misfirePolicy.
type
Currently, OpenIDM supports only cron.
startTime (optional)
Used to start the schedule at some time in the future. If this parameter is omitted, empty, or set to a time in the past, the task or event is scheduled to start immediately.
Use ISO 8601 format to specify times and dates (yyyy-MM-dd'T'HH:mm:ss).
endTime (optional)
Used to plan the end of scheduling.
schedule
Takes cron expression syntax. For more information, see the CronTrigger Tutorial and Lesson 6: CronTrigger.
misfirePolicy
For persistent schedules, this optional parameter specifies the behavior if the scheduled task is missed for some reason. Possible values are as follows:
fireAndProceed. The first run of a missed schedule is launched immediately when the server is back online. Subsequent missed runs are discarded. After this, the normal schedule is resumed.
doNothing. All missed schedules are discarded and the normal schedule is resumed when the server is back online.
timeZone (optional)
If not set, OpenIDM uses the system time zone.
invokeService
Defines the type of scheduled event or action. The value of this parameter can be one of the following:
sync for reconciliation
provisioner for liveSync
script to call some other scheduled operation defined in a script
taskScanner to define a scheduled task that queries a set of objects. For more information, see "Scanning Data to Trigger Tasks".
invokeContext
Specifies contextual information, depending on the type of scheduled event (the value of the invokeService parameter).
The following example invokes reconciliation:
{ "invokeService": "sync", "invokeContext": { "action": "reconcile", "mapping": "systemLdapAccounts_managedUser" } }
For a scheduled reconciliation task, you can define the mapping in one of two ways:
Reference a mapping by its name in sync.json, as shown in the previous example. The mapping must exist in your project's conf/sync.json file.
Add the mapping definition inline by using the mapping property, as shown in "Specifying the Mapping as Part of the Schedule".
The following example invokes a liveSync operation:
{
  "invokeService": "provisioner",
  "invokeContext": {
    "action": "liveSync",
    "source": "system/OpenDJ/__ACCOUNT__"
  }
}
For scheduled liveSync tasks, the source property follows OpenIDM's convention for a pointer to an external resource object, and takes the form system/resource-name/object-type.
The following example invokes a script, which prints the string It is working: 26 to the console. A similar sample schedule is provided in schedule-script.json in the /path/to/openidm/samples/schedules directory.
{
  "invokeService": "script",
  "invokeContext": {
    "script" : {
      "type" : "text/javascript",
      "source" : "java.lang.System.out.println('It is working: ' + input.edit);",
      "input": { "edit": 26 }
    }
  }
}
Note that these are sample configurations only. Your own schedule configuration will differ according to your specific requirements.
invokeLogLevel (optional)
Specifies the level at which the invocation is logged. Particularly for schedules that run very frequently, such as liveSync, the scheduled task can generate significant output to the log file, and you should adjust the log level accordingly. The default schedule log level is info. The value can be set to any one of the SLF4J log levels: trace, debug, info, warn, error, fatal.
16.3. Schedules and Daylight Savings Time
The schedule service uses Quartz cronTrigger syntax. CronTrigger schedules jobs to fire at specific times with respect to a calendar (rather than every N milliseconds). This scheduling can cause issues when clocks change for daylight savings time (DST) if the trigger time falls around the clock change time in your specific time zone.
Depending on the trigger schedule, and on the daylight event, the trigger might be skipped or might appear not to fire for a short period. This interruption can be particularly problematic for liveSync where schedules execute continuously. In this case, the time change (for example, from 02:00 back to 01:00) causes an hour break between each liveSync execution.
To prevent DST from having an impact on your schedules, set the time zone of the schedule to Coordinated Universal Time (UTC). UTC is never subject to DST, so schedules will continue to fire as normal.
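For example, a liveSync schedule pinned to UTC might look like the following sketch, which combines the timeZone property described earlier with a standard schedule configuration:
{
  "enabled": true,
  "persisted": true,
  "type": "cron",
  "schedule": "0 0 2 * * ?",
  "timeZone": "UTC",
  "invokeService": "provisioner",
  "invokeContext": {
    "action": "liveSync",
    "source": "system/ldap/account"
  }
}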
16.4. Configuring Persistent Schedules
By default, scheduling information, such as schedule state and details of the schedule run, is stored in RAM. This means that such information is lost when the server is rebooted. The schedule configuration itself (defined in your project's conf/schedule-schedule-name.json file) is not lost when the server is shut down, and normal scheduling continues when the server is restarted. However, there are no details of missed schedule runs that should have occurred during the period the server was unavailable.
You can configure schedules to be persistent, which means that the scheduling information is stored in the internal repository rather than in RAM. With persistent schedules, scheduling information is retained when the server is shut down. Any previously scheduled jobs can be rescheduled automatically when the server is restarted.
Persistent schedules also enable you to manage scheduling across a cluster (multiple OpenIDM instances). When scheduling is persistent, a particular schedule will be launched only once across the cluster, rather than once on every OpenIDM instance. For example, if your deployment includes a cluster of OpenIDM nodes for high availability, you can use persistent scheduling to start a reconciliation operation on only one node in the cluster, instead of starting several competing reconciliation operations on each node.
Important
Persistent schedules rely on timestamps. In a deployment where OpenIDM instances run on separate machines, you must synchronize the system clocks of these machines using a time synchronization service that runs regularly. The clocks of all machines involved in persistent scheduling must be within one second of each other. For information on how you can achieve this using the Network Time Protocol (NTP) daemon, see the NTP RFC.
To configure persistent schedules, set persisted to true in the schedule configuration file (schedule-schedule-name.json).
If the server is down when a scheduled task is set to occur, one or more runs of that schedule might be missed. To specify what action should be taken if schedules are missed, set the misfirePolicy in the schedule configuration file. The misfirePolicy determines what OpenIDM should do if scheduled tasks are missed. Possible values are as follows:
fireAndProceed. The first run of a missed schedule is implemented immediately when the server is back online. Subsequent missed runs are discarded. After this, the normal schedule is resumed.
doNothing. All missed schedules are discarded and the normal schedule is resumed when the server is back online.
16.5. Schedule Examples
The following example shows a schedule for reconciliation that is not enabled. When the schedule is enabled ("enabled" : true), reconciliation runs every 30 minutes, starting on the hour:
{ "enabled": false, "persisted": true, "type": "cron", "schedule": "0 0/30 * * * ?", "invokeService": "sync", "invokeContext": { "action": "reconcile", "mapping": "systemLdapAccounts_managedUser" } }
The following example shows a schedule for liveSync enabled to run every 15 seconds, starting at the beginning of the minute. Note that the schedule is persisted, that is, stored in the internal repository rather than in memory. If one or more liveSync runs are missed, as a result of the server being unavailable, the first run of the liveSync operation is implemented when the server is back online. Subsequent runs are discarded. After this, the normal schedule is resumed:
{ "enabled": true, "persisted": true, "misfirePolicy" : "fireAndProceed", "type": "cron", "schedule": "0/15 * * * * ?", "invokeService": "provisioner", "invokeContext": { "action": "liveSync", "source": "system/ldap/account" } }
16.6. Managing Schedules Over REST
The scheduler service is exposed under the /openidm/scheduler context path. Within this context path, the defined scheduled jobs are accessible at /openidm/scheduler/job. A job is the actual task that is run. Each job contains a trigger that starts the job.
The trigger defines the schedule according to which the job is executed. You can read and query the existing triggers on the /openidm/scheduler/trigger context path.
The following examples show how schedules are validated, created, read, queried, updated, and deleted, over REST, by using the scheduler service. The examples also show how to pause and resume scheduled jobs, when an instance is placed in maintenance mode. For information about placing a server in maintenance mode, see "Placing a Server in Maintenance Mode" in the Installation Guide.
Note
When you configure schedules over REST, changes made to the schedules are not pushed back into the configuration service. Managing schedules by using the /openidm/scheduler/job context path essentially bypasses the configuration service and sends the request directly to the scheduler.
If you need to perform an operation on a schedule that was created by using the configuration service (by placing a schedule file in the conf/ directory), you must direct your request to the /openidm/config context path, and not to the /openidm/scheduler/job context path.
PATCH operations are not supported on the scheduler context path. To patch a schedule, use the config context path.
16.6.1. Validating Schedule Syntax
Schedules are defined using Quartz cron syntax. You can validate your cron expression by sending the expression as a JSON object to the scheduler context path. For example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "cronExpression":"0 0/1 * * * ?"
 }' \
 "http://localhost:8080/openidm/scheduler?_action=validateQuartzCronExpression"
{
  "valid": true
}
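An invalid expression is flagged in the same way. For example, an expression with too few fields would presumably produce the following response (the exact output shown here is an assumption, not captured from a live system):
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "cronExpression":"0 0/1 * * *"
 }' \
 "http://localhost:8080/openidm/scheduler?_action=validateQuartzCronExpression"
{
  "valid": false
}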
16.6.2. Defining a Schedule
To define a new schedule, send a PUT or POST request to the scheduler/job context path with the details of the schedule in the JSON payload. A PUT request lets you specify the ID of the schedule, while a POST request assigns an ID automatically.
The following example uses a PUT request to create a schedule that fires a script (script/testlog.js) every second. The schedule configuration is as described in "Configuring the Scheduler Service":
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
   "enabled":true,
   "type":"cron",
   "schedule":"0/1 * * * * ?",
   "persisted":true,
   "misfirePolicy":"fireAndProceed",
   "invokeService":"script",
   "invokeContext": {
     "script": {
       "type":"text/javascript",
       "file":"script/testlog.js"
     }
   }
 }' \
 "http://localhost:8080/openidm/scheduler/job/testlog-schedule"
{
  "_id": "testlog-schedule",
  "enabled": true,
  "persisted": true,
  "misfirePolicy": "fireAndProceed",
  "schedule": "0/1 * * * * ?",
  "type": "cron",
  "invokeService": "org.forgerock.openidm.script",
  "invokeContext": {
    "script": {
      "type": "text/javascript",
      "file": "script/testlog.js"
    }
  },
  "invokeLogLevel": "info",
  "timeZone": null,
  "startTime": null,
  "endTime": null,
  "concurrentExecution": false,
  "triggers": [
    {
      "previous_state": 0,
      "name": "trigger-testlog-schedule",
      "state": 4,
      "nodeId": "node1",
      "acquired": true,
      "serialized": "rO0ABXNyABZvcmcucXVhcnR6L...30HhzcQB+ABx3CAAAAVdwIrfQeA==",
      "group": "scheduler-service-group",
      "_id": "scheduler-service-group_$x$x$_trigger-testlog-schedule",
      "_rev": "4"
    }
  ],
  "nextRunDate": "2016-09-28T09:31:47.000Z"
}
Note that the output includes the trigger that was created as part of the scheduled job, as well as the nextRunDate for the job. For more information about the trigger properties, see "Querying Schedule Triggers".
The following example uses a POST request to create an identical schedule to the one created in the previous example, but with a server-assigned ID:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request POST \
 --data '{
   "enabled":true,
   "type":"cron",
   "schedule":"0/1 * * * * ?",
   "persisted":true,
   "misfirePolicy":"fireAndProceed",
   "invokeService":"script",
   "invokeContext": {
     "script": {
       "type":"text/javascript",
       "file":"script/testlog.js"
     }
   }
 }' \
 "http://localhost:8080/openidm/scheduler/job?_action=create"
{
  "_id": "9858a39d-b1e7-4842-9874-0fb8179b149a",
  "enabled": true,
  "persisted": true,
  "misfirePolicy": "fireAndProceed",
  "schedule": "0/1 * * * * ?",
  "type": "cron",
  "invokeService": "org.forgerock.openidm.script",
  "invokeContext": {
    "script": {
      "type": "text/javascript",
      "file": "script/testlog.js"
    }
  },
  "invokeLogLevel": "info",
  "timeZone": null,
  "startTime": null,
  "endTime": null,
  "concurrentExecution": false,
  "triggers": [
    {
      "previous_state": 0,
      "name": "trigger-9858a39d-b1e7-4842-9874-0fb8179b149a",
      "state": 4,
      "nodeId": "node1",
      "acquired": true,
      "serialized": "...+XAeHNxAH4AHHcIAAABV2wX4dh4c3EAfgAcdwgAAAFXbBfh2Hg=...",
      "group": "scheduler-service-group",
      "_id": "scheduler-service-group_$x$x$_trigger-9858a39d-b1e7-4842-9874-0fb8179b149a",
      "_rev": "4"
    }
  ],
  "nextRunDate": "2016-09-27T14:41:28.000Z"
}
The output includes the generated _id of the schedule, in this case "_id": "9858a39d-b1e7-4842-9874-0fb8179b149a".
16.6.3. Obtaining the Details of a Scheduled Job
The following example displays the details of the schedule created in the previous section. Specify the job ID in the URL:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/job/testlog-schedule"
{
  "_id": "testlog-schedule",
  "enabled": true,
  "persisted": true,
  "misfirePolicy": "fireAndProceed",
  "schedule": "0/1 * * * * ?",
  "type": "cron",
  "invokeService": "org.forgerock.openidm.script",
  "invokeContext": {
    "script": {
      "type": "text/javascript",
      "file": "script/testlog.js"
    }
  },
  "invokeLogLevel": "info",
  "timeZone": null,
  "startTime": null,
  "endTime": null,
  "concurrentExecution": false,
  "triggers": [
    {
      "previous_state": -1,
      "name": "trigger-testlog-schedule",
      "state": 0,
      "nodeId": "node1",
      "acquired": true,
      "serialized": "rO0ABXNyABZvcmcucXVhcnR6L.../AHhzcQB+ABx3CAAAAVdwIrfQeA==",
      "group": "scheduler-service-group",
      "_id": "scheduler-service-group_$x$x$_trigger-testlog-schedule",
      "_rev": "7550"
    }
  ],
  "nextRunDate": "2016-09-28T10:03:13.000Z"
}
16.6.4. Querying Scheduled Jobs
You can query defined and running scheduled jobs using a regular query filter or a parameterized query. Support for parameterized queries is restricted to _queryId=query-all-ids. For more information about query filters, see "Constructing Queries".
The following query returns the IDs of all defined schedules:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/job?_queryId=query-all-ids"
{
  "result": [
    {
      "_id": "reconcile_systemLdapAccounts_managedUser"
    },
    {
      "_id": "testlog-schedule"
    }
  ],
  ...
}
The following query returns the IDs, enabled status, and next run date of all defined schedules:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/job?_queryFilter=true&_fields=_id,enabled,nextRunDate"
{
  "result": [
    {
      "_id": "reconcile_systemLdapAccounts_managedUser",
      "enabled": false,
      "nextRunDate": null
    },
    {
      "_id": "testlog-schedule",
      "enabled": true,
      "nextRunDate": "2016-09-28T10:11:06.000Z"
    }
  ],
  ...
}
16.6.5. Updating a Schedule
To update a schedule definition, use a PUT request and update all the static properties of the object.
The following example disables the testlog schedule created in the previous section by setting "enabled":false:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --header "Content-Type: application/json" \
 --request PUT \
 --data '{
   "enabled":false,
   "type":"cron",
   "schedule":"0/1 * * * * ?",
   "persisted":true,
   "misfirePolicy":"fireAndProceed",
   "invokeService":"script",
   "invokeContext": {
     "script": {
       "type":"text/javascript",
       "file":"script/testlog.js"
     }
   }
 }' \
 "http://localhost:8080/openidm/scheduler/job/testlog-schedule"
{
  "_id": "testlog-schedule",
  "enabled": false,
  "persisted": true,
  "misfirePolicy": "fireAndProceed",
  "schedule": "0/1 * * * * ?",
  "type": "cron",
  "invokeService": "org.forgerock.openidm.script",
  "invokeContext": {
    "script": {
      "type": "text/javascript",
      "file": "script/testlog.js"
    }
  },
  "invokeLogLevel": "info",
  "timeZone": null,
  "startTime": null,
  "endTime": null,
  "concurrentExecution": false,
  "triggers": [],
  "nextRunDate": null
}
When you disable a schedule, all triggers are removed and the nextRunDate is set to null. If you re-enable the schedule, a new trigger is generated, and the nextRunDate is recalculated.
16.6.6. Deleting a Schedule
To delete a schedule, send a DELETE request to the schedule ID. For example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request DELETE \
 "http://localhost:8080/openidm/scheduler/job/testlog-schedule"
{
  "_id": "testlog-schedule",
  "enabled": true,
  ...
}
The DELETE request returns the entire JSON object.
16.6.7. Obtaining a List of Running Scheduled Jobs
The following command returns a list of the jobs that are currently executing. This list enables you to decide whether to wait for a specific job to complete before you place a server in maintenance mode.
This action does not list the jobs across a cluster, only the jobs currently executing on the node to which the request is routed.
Note that this list is accurate only at the moment the request was issued. The list can change at any time after the response is received.
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/scheduler/job?_action=listCurrentlyExecutingJobs"
[
  {
    "enabled": true,
    "persisted": true,
    "misfirePolicy": "fireAndProceed",
    "schedule": "0 0/1 * * * ?",
    "type": "cron",
    "invokeService": "org.forgerock.openidm.sync",
    "invokeContext": {
      "action": "reconcile",
      "mapping": "systemLdapAccounts_managedUser"
    },
    "invokeLogLevel": "info",
    "timeZone": null,
    "startTime": null,
    "endTime": null,
    "concurrentExecution": false
  }
]
16.6.8. Pausing Scheduled Jobs
In preparation for placing a server in maintenance mode, you can temporarily suspend all scheduled jobs. This action does not cancel or interrupt jobs that are already in progress - it simply prevents any scheduled jobs from being invoked during the suspension period.
The following command suspends all scheduled tasks and returns true if the tasks could be suspended successfully:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/scheduler/job?_action=pauseJobs"
{
  "success": true
}
16.6.9. Resuming All Scheduled Jobs
When an update has been completed, and your instance is no longer in maintenance mode, you can resume scheduled jobs to start them up again. Any jobs that were missed during the downtime will follow their configured misfire policy to determine whether they should be reinvoked.
The following command resumes all scheduled jobs and returns true if the jobs could be resumed successfully:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/scheduler/job?_action=resumeJobs"
{
  "success": true
}
16.6.10. Querying Schedule Triggers
When a scheduled job is created, a trigger for that job is created automatically and is included in the schedule definition. The trigger is essentially what causes the job to be started. You can read all the triggers that have been generated on a system with the following query on the openidm/scheduler/trigger context path:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/trigger?_queryFilter=true"
{
  "result": [
    {
      "_id": "scheduler-service-group_$x$x$_trigger-testlog-schedule",
      "_rev": "20862",
      "previous_state": -1,
      "name": "trigger-testlog-schedule",
      "state": 0,
      "nodeId": "node1",
      "acquired": true,
      "serialized": "rO0ABXNyABZvcmcucXVhcnR6L.../iHhzcQB+ABx3CAAAAVdwXUAweA==",
      "group": "scheduler-service-group"
    },
    {
      "_id": "scheduler-service-group_$x$x$_trigger-reconcile_systemLdapAccounts_managedUser",
      "_rev": "1553",
      "previous_state": -1,
      "name": "trigger-reconcile_systemLdapAccounts_managedUser",
      "state": 0,
      "nodeId": null,
      "acquired": false,
      "serialized": "rO0ABXNyABZvcmcucXVhcnR6L...0gCB4c3EAfgAcdwgAAAFXcF6QIHg=",
      "group": "scheduler-service-group"
    }
  ],
  ...
}
The contents of a trigger object are as follows:
_id
The ID of the trigger. The trigger ID takes the form group_$x$x$_trigger-schedule-id.
_rev
The revision of the trigger object. This property is reserved for internal use and specifies the revision of the object in the repository. This is the same value that is exposed as the object's ETag through the REST API. The content of this property is not defined. No consumer should make any assumptions of its content beyond equivalence comparison.
previous_state
The previous state of the trigger, before its current state. For a description of Quartz trigger states, see the Quartz API documentation.
name
The trigger name, in the form trigger-schedule-id.
state
The current state of the trigger. For a description of Quartz trigger states, see the Quartz API documentation.
nodeId
The ID of the node that has acquired the trigger, useful in a clustered deployment.
acquired
Whether the trigger has already been acquired by a node. Boolean, true or false.
serialized
The Base64 serialization of the trigger class.
group
The name of the group that the trigger is in, always scheduler-service-group.
To read the contents of a specific trigger, send a GET request to the trigger ID, for example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/trigger/scheduler-service-group_\$x\$x\$_trigger-testlog-schedule"
{
  "_id": "scheduler-service-group_$x$x$_trigger-testlog-schedule",
  "_rev": "32088",
  "previous_state": -1,
  "name": "trigger-testlog-schedule",
  "state": 0,
  "nodeId": "node1",
  "acquired": true,
  "serialized": "rO0ABXNyABZvcmcucXVhcnR6L...2oHhzcQB+ABx3CAAAAVdwXUAweA==",
  "group": "scheduler-service-group"
}
Note that you need to escape the $ signs in the URL.
To view the triggers that have been acquired, per node, send a GET request to the scheduler/acquiredTriggers context path. For example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/acquiredTriggers"
{
  "_id": "acquiredTriggers",
  "_rev": "102",
  "node1": [
    "scheduler-service-group_$x$x$_trigger-testlog-schedule"
  ]
}
To view the triggers that have not yet been acquired by any node, send a GET request to the scheduler/waitingTriggers context path. For example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/scheduler/waitingTriggers"
{
  "_id": "waitingTriggers",
  "_rev": "1576",
  "names": [
    "scheduler-service-group_$x$x$_trigger-0da27688-7ece-4799-bca4-09e185a6b0f4",
    "scheduler-service-group_$x$x$_trigger-0eeaf861-604b-4cf4-a044-bbbc78377070",
    "scheduler-service-group_$x$x$_trigger-136b7a1a-3aee-4321-8b6a-3e860e7b0292",
    "scheduler-service-group_$x$x$_trigger-1f6b116b-aa06-41da-9c19-80314373a20f",
    "scheduler-service-group_$x$x$_trigger-659b2bb0-53b8-4a4e-8347-8ed1ed5286af",
    "scheduler-service-group_$x$x$_trigger-testlog-schedule",
    "scheduler-service-group_$x$x$_trigger-ad9db1c7-a06d-4dc9-83b9-0c2e405dde1f"
  ]
}
16.7. Managing Schedules Through the Admin UI
To manage schedules through the Admin UI, select Configure > Schedules. By default, only persisted schedules are shown in the Schedules list. To show non-persisted (in memory) schedules, select Filter by Type > In Memory.
16.8. Scanning Data to Trigger Tasks
In addition to the fine-grained scheduling facility described previously, OpenIDM provides a task scanning mechanism. The task scanner enables you to perform a batch scan on a specified property in OpenIDM, at a scheduled interval, and then to launch a task when the value of that property matches a specified value.
When the task scanner identifies a condition that should trigger the task, it can invoke a script created specifically to handle the task.
For example, the task scanner can scan all managed/user objects for a "sunset date", and can invoke a script that launches a "sunset task" on the user object when this date is reached.
16.8.1. Configuring the Task Scanner
The task scanner is essentially a scheduled task that queries a set of managed users, then launches a script based on the query results. The task scanner is configured in the same way as a regular scheduled task, in a schedule configuration file named schedule-task-name.json, with the invokeService parameter set to taskscanner. The invokeContext parameter defines the details of the scan, and the task that should be launched when the specified condition is triggered.
The following example defines a scheduled scanning task that triggers a sunset script. This sample schedule configuration file is provided in openidm/samples/example-configurations/task-scanner/conf/schedule-taskscan_sunset.json. To use the sample file, copy it to your project's conf directory and edit it as required.
{ "enabled" : true, "type" : "cron", "schedule" : "0 0 * * * ?", "persisted" : true, "concurrentExecution" : false, "invokeService" : "taskscanner", "invokeContext" : { "waitForCompletion" : false, "numberOfThreads" : 5, "scan" : { "_queryId" : "scan-tasks", "object" : "managed/user", "property" : "/sunset/date", "condition" : { "before": "${Time.now}" }, "taskState" : { "started" : "/sunset/task-started", "completed" : "/sunset/task-completed" }, "recovery" : { "timeout" : "10m" } }, "task" : { "script" : { "type" : "text/javascript", "file" : "script/sunset.js" } } } }
The schedule configuration calls a script (script/sunset.js). To test the sample, copy openidm/samples/example-configurations/task-scanner/script/sunset.js to your project's script directory. You will also need to assign a user a sunset date. The task executes only on users who have a valid sunset date field. The sunset date field can be added to users manually, but must be added to the managed/user schema if you want the field to be visible in the Admin UI. The following example command adds a sunset date field to the user bjensen using the REST interface:
$ curl \
 --header "Content-Type: application/json" \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 --data '[{
   "operation" : "add",
   "field" : "sunset/date",
   "value" : "2017-06-20T22:58:36.272Z"
 }]' \
 "http://localhost:8080/openidm/managed/user?_action=patch&_queryId=for-userName&uid=bjensen"
The remaining properties in the schedule configuration are as follows:
The invokeContext parameter takes the following properties:
waitForCompletion (optional)
Specifies whether the task should be performed synchronously. Tasks are performed asynchronously by default (with waitForCompletion set to false); a task ID (such as {"_id":"354ec41f-c781-4b61-85ac-93c28c180e46"}) is returned immediately. If this property is set to true, tasks are performed synchronously, and the ID is not returned until all tasks have completed.
maxRecords (optional)
The maximum number of records that can be processed. This property is not set by default, so the number of records is unlimited. If a maximum number of records is specified, that number is spread evenly over the number of threads.
numberOfThreads (optional)
By default, the task scanner runs in a multi-threaded manner; that is, numerous threads are dedicated to the same scanning task run. Multi-threading generally improves the performance of the task scanner. The default number of threads for a single scanning task is 10. To change this default, set the numberOfThreads property. The sample configuration sets the number of threads to 5.
scan
Defines the details of the scan. The following properties are defined:
_queryFilter, _queryId, or _queryExpression
A query filter, predefined query, or query expression that identifies the entries for which this task should be run. Query filters are recommended, but you can also use native query expressions or parameterized (predefined) queries to identify the entries to be scanned.
The query filter provided in the sample schedule configuration (((/sunset/date lt \"${Time.now}\") AND !(${taskState.completed} pr))) identifies managed users whose sunset/date property is before the current date and for whom the sunset task has not yet completed.
The sample query supports time-based conditions, with the time specified in ISO 8601 format (Zulu time). You can write any query to target the set of entries that you want to scan.
object

Defines the managed object type against which the query should be performed, as defined in the managed.json file.

property
Defines the property of the managed object against which the query is performed. In the previous example, "property" : "/sunset/date" is a JSON pointer that maps to the object attribute, and can be understood as sunset: {"date" : "date"}.

If you are using a JDBC repository with a generic mapping, you must explicitly set this property as searchable so that it can be queried by the task scanner. For more information, see "Using Generic Mappings".

condition (optional)

Indicates the conditions that must be matched for the defined property.
In the previous example, the scanner scans for users whose /sunset/date is prior to the current timestamp (at the time the script is run). You can use these fields to define any condition. For example, if you wanted to limit the scanned objects to a specific location, say, London, you could formulate a query to compare against object locations and then set the condition to be:
"condition" : { "location" : "London" },
For time-based conditions, the condition property supports macro syntax, based on the Time.now object (which fetches the current time). You can specify any date/time in relation to the current time, using the + or - operator and a duration modifier. For example, ${Time.now + 1d} would return all user objects whose /sunset/date is the following day (current time plus one day). You must include space characters around the operator (+ or -). The duration modifier supports the following unit specifiers:

s - second
m - minute
h - hour
d - day
M - month
y - year
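For instance, reusing the before condition from the sample configuration, the following condition would match objects whose /sunset/date falls before this time tomorrow:

"condition" : { "before" : "${Time.now + 1d}" }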
taskState

Indicates the names of the fields in which the start message and the completed message are stored. These fields are used to track the status of the task.

started specifies the field that stores the timestamp for when the task begins. completed specifies the field that stores the timestamp for when the task completes its operation. The completed field is present as soon as the task has started, but its value is null until the task has completed.

recovery (optional)

Specifies a configurable timeout, after which the task scanner process ends. In a clustered deployment, more than one task scanner might be running at a time. A task cannot be launched by two task scanners simultaneously: when one task scanner claims a task, it flags the task as started, and the task remains unavailable to other task scanners until its completion is flagged. If the first task scanner does not complete the task within the specified timeout, a second task scanner can pick it up.
task

Provides details of the task that is performed. Usually, the task is invoked by a script, whose details are defined in the script property:

type ‒ the type of script, either JavaScript or Groovy.

file ‒ the path to the script file. The script takes at least two objects (in addition to the default objects that are provided to all OpenIDM scripts):

input ‒ the individual object that is retrieved from the query (in the example, the individual user object).

objectID ‒ a string that contains the full identifier of the object. The objectID is useful for performing updates with the script because it lets you target the object directly, for example: openidm.update(objectID, input['_rev'], input);
A sample script file is provided in openidm/samples/taskscanner/script/sunset.js. To use this sample file, copy it to your project's script/ directory. The sample script marks all user objects that match the specified conditions as inactive. You can use it to trigger a specific workflow, or any other task associated with the sunset process.
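The following is a minimal sketch of what such a task script can look like. It assumes the managed user schema includes an accountStatus property (true for the default OpenIDM schema); it is not a copy of the shipped sample, which you should prefer:

// Sketch of a task script. OpenIDM passes in:
//   input    - the object retrieved by the scan (here, a managed user)
//   objectID - the full identifier of that object, for example "managed/user/<id>"
if (input.accountStatus !== "inactive") {
    input.accountStatus = "inactive";
    // Update the object in place, using its current revision
    openidm.update(objectID, input['_rev'], input);
}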
For more information about using scripts in OpenIDM, see "Scripting Reference".
16.8.2. Managing Scanning Tasks Over REST
You can trigger, cancel, and monitor scanning tasks over the REST interface,
using the REST endpoint
http://localhost:8080/openidm/taskscanner
.
16.8.2.1. Triggering a Scanning Task
The following REST command runs a task named "taskscan_sunset".
The task itself is defined in a file named
conf/schedule-taskscan_sunset.json
:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/taskscanner?_action=execute&name=schedule/taskscan_sunset"
By default, a scanning task ID is returned immediately when the task is initiated. Clients can make subsequent calls to the task scanner service, using this task ID to query its state and to call operations on it.
For example, the scanning task initiated previously would return something similar to the following, as soon as it was initiated:
{"_id":"edfaf59c-aad1-442a-adf6-3620b24f8385"}
To have the scanning task complete before the ID is returned, set the
waitForCompletion
property to true
in the task definition file (schedule-taskscan_sunset.json
).
You can also set the property directly over the REST interface when the task is
initiated. For example:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/taskscanner?_action=execute&name=schedule/taskscan_sunset&waitForCompletion=true"
16.8.2.2. Canceling a Scanning Task
You can cancel a scanning task by sending a REST call with the
cancel
action, specifying the task ID. For example, the
following call cancels the scanning task initiated in the previous section:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request POST \
 "http://localhost:8080/openidm/taskscanner/edfaf59c-aad1-442a-adf6-3620b24f8385?_action=cancel"
{
  "_id": "edfaf59c-aad1-442a-adf6-3620b24f8385",
  "action": "cancel",
  "status": "SUCCESS"
}
16.8.2.3. Listing Scanning Tasks
You can display a list of scanning tasks that have completed, and those
that are in progress, by running a RESTful GET on the
openidm/taskscanner
context path. The following example
displays all scanning tasks:
$ curl \
 --header "X-OpenIDM-Username: openidm-admin" \
 --header "X-OpenIDM-Password: openidm-admin" \
 --request GET \
 "http://localhost:8080/openidm/taskscanner"
{
  "tasks": [
    {
      "ended": 1352455546182,
      "started": 1352455546149,
      "progress": {
        "failures": 0,
        "successes": 2400,
        "total": 2400,
        "processed": 2400,
        "state": "COMPLETED"
      },
      "_id": "edfaf59c-aad1-442a-adf6-3620b24f8385"
    }
  ]
}
Each scanning task has the following properties:
ended
The time at which the scanning task ended.
started
The time at which the scanning task started.
progress
The progress of the scanning task, summarised in the following fields:
failures - the number of records that could not be processed
successes - the number of records processed successfully
total - the total number of records
processed - the number of records processed
state - the overall state of the task: INITIALIZED, ACTIVE, COMPLETED, CANCELLED, or ERROR
_id
The ID of the scanning task.
The number of processed tasks whose details are retained is governed by the
openidm.taskscanner.maxcompletedruns
property in the
conf/system.properties
file. By default, the last one
hundred completed tasks are retained.
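The conf/system.properties file is a standard Java properties file, so this is a single-line setting. For example, to retain the last 250 completed tasks (the value shown here is illustrative):

openidm.taskscanner.maxcompletedruns=250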
Chapter 17. Managing Passwords
OpenIDM provides password management features that help you enforce password policies, limit the number of passwords users must remember, and let users reset and change their passwords.
17.1. Enforcing Password Policy
A password policy is a set of rules defining what sequence of characters constitutes an acceptable password. Acceptable passwords generally are too complex for users or automated programs to generate or guess.
Password policies set requirements for password length and for the character sets that passwords must contain, and specify dictionary words and other values that passwords must not contain. Password policies can also require that users not reuse old passwords, and that users change their passwords on a regular basis.
OpenIDM enforces password policy rules as part of the general policy service. For more information about the policy service, see "Using Policies to Validate Data". The default password policy applies the following rules to passwords as they are created and updated:
A password property is required for any user object.
The value of a password cannot be empty.
The password must include at least one capital letter.
The password must include at least one number.
The minimum length of a password is 8 characters.
The password cannot contain the user name, given name, or family name.
You can remove these validation requirements, or include additional requirements, by configuring the policy for passwords. For more information, see "Configuring the Default Policy for Managed Objects".
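For illustration, rules like these are typically expressed as policies on the password property in the managed object configuration. The following sketch uses the standard OpenIDM policy IDs and parameter names, but verify them against the policies defined in your own deployment before relying on them:

"password" : {
    "type" : "string",
    "policies" : [
        { "policyId" : "required" },
        { "policyId" : "not-empty" },
        { "policyId" : "at-least-X-capitals", "params" : { "numCaps" : 1 } },
        { "policyId" : "at-least-X-numbers", "params" : { "numNums" : 1 } },
        { "policyId" : "minimum-length", "params" : { "minLength" : 8 } },
        { "policyId" : "cannot-contain-others", "params" : { "disallowedFields" : ["userName", "givenName", "sn"] } }
    ]
}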
The password validation mechanism can apply in many situations.
- Password change and password reset
Password change involves changing a user or account password in accordance with password policy. Password reset involves setting a new user or account password on behalf of a user.
By default, OpenIDM controls password values as they are provisioned.
To change the default administrative user password, openidm-admin, see "Replace Default Security Settings".

- Password recovery
Password recovery involves recovering a password or setting a new password when the password has been forgotten.
OpenIDM provides a self-service end user interface for password changes, password recovery, and password reset. For more information, see "Configuring User Self-Service".
- Password comparisons with dictionary words
You can add dictionary lookups to prevent use of password values that match dictionary words.