Autonomous Identity 2022.11.0

Install a Multi-Node Air-Gapped Deployment

This chapter presents instructions on deploying Autonomous Identity to a multi-node air-gapped or offline target environment with no external Internet connectivity. ForgeRock provides a deployer script that pulls a Docker image from ForgeRock’s Google Cloud Registry repository. The image contains the microservices, analytics, and backend databases needed for the system.

The air-gapped installation is similar to the multi-node deployment, except that the image and deployer script must be stored on a portable drive and copied to the air-gapped target environment.

The deployment depends on how your network is configured. You could have a Docker cluster with multiple Spark nodes and Cassandra or MongoDB nodes. The key is to determine the IP address of each node.

Summary of the installation steps


Deploy Autonomous Identity on a multi-node air-gapped target on Red Hat Enterprise Linux 8 or CentOS Stream 8. The following are prerequisites:

  • Operating System. The target machine requires Red Hat Enterprise Linux (RHEL) 8 or CentOS Stream 8. The deployer machine can use any operating system as long as Docker is installed. For this chapter, we use RHEL 8 as the base operating system.

    Autonomous Identity 2022.11.0 supports Red Hat Enterprise Linux (RHEL) 7 and 8, CentOS 7, and CentOS Stream 8. If you are upgrading Autonomous Identity on a RHEL 7/CentOS 7 system, the upgrade to 2022.11 remains on RHEL 7/CentOS 7. For new and clean installations, we recommend installing Autonomous Identity on RHEL 8 or CentOS Stream 8.
  • Default Shell. The default shell for the autoid user must be bash.

  • Subnet Requirements. We recommend deploying your multi-node machines within the same subnet. Ports must be open for the installation to succeed. Each instance should be able to communicate with the other instances.

    If any hosts used for the Docker cluster (docker-managers, docker-workers) have an IP address in the range of 10.0.x.x, they will conflict with the Swarm network. As a result, the services in the cluster will not connect to the Cassandra database or Elasticsearch backend.

    The Docker cluster hosts must be in a subnet that provides IP addresses 10.10.1.x or higher.

  • Deployment Requirements. Autonomous Identity provides a script that downloads and installs the necessary Docker images. To download the deployment images, you must first obtain a registry key to log in to the ForgeRock Google Cloud Registry. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.

  • Filesystem Requirements. Autonomous Identity requires a shared filesystem accessible from the Spark main, Spark worker, analytics hosts, and application layer. The shared filesystem should be mounted at the same mount directory on all of those hosts. If the mount directory for the shared filesystem differs from the default, /data, update the ~/autoid-config/vars.yml file to point to the correct directories:

    analytics_data_dir: /data
    analytics_conf_dif: /data/conf
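As an illustrative sketch only — the NFS server name and export path below are hypothetical assumptions, not part of the product — an /etc/fstab entry mounting a shared filesystem at the default /data directory on each host might look like:

```
# Hypothetical NFS server and export path; mount at the same directory (/data) on all hosts
nfs-server.example.com:/export/autoid  /data  nfs  defaults  0 0
```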
  • Architecture Requirements. Make sure that the Spark main is on a separate node from the Spark workers.

  • Database Requirements. Decide which database you are using: Apache Cassandra or MongoDB. The configuration procedure is slightly different for each database.

  • Deployment Best-Practice. The example combines the OpenSearch data and OpenSearch Dashboards nodes. For best performance in production, dedicate separate nodes to the OpenSearch data nodes and to OpenSearch Dashboards.

  • IPv4 Forwarding. Many high-security environments run their CentOS-based systems with IPv4 forwarding disabled. However, Docker Swarm does not work with a disabled IPv4 forward setting. In such environments, make sure to enable IPv4 forwarding in the file /etc/sysctl.conf:
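For example, enabling IPv4 forwarding persistently involves adding the following line to /etc/sysctl.conf and reloading the kernel parameters (standard sysctl usage on RHEL/CentOS systems):

```
# /etc/sysctl.conf — enable IPv4 forwarding, required by Docker Swarm
net.ipv4.ip_forward = 1
```

Apply the change without rebooting by running `sudo sysctl -p`.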

We recommend that your deployment team include someone with Cassandra expertise; this guide is not sufficient for troubleshooting any database issues that may arise.

Set up the Nodes

Make sure you have sufficient storage for your particular deployment. For more information on sizing considerations, see the Deployment Planning Guide.

For multinode deployments, there is a known issue with RHEL 8/CentOS Stream 8 and overlay network configurations. See Known Issues in 2022.11.0.

Install third-party components

Set up a machine with the required third-party software dependencies. Refer to chap-install-singlenode-target.adoc#install-third-party.

Prepare the Tar File

Run the following steps on an Internet-connected host machine:

  1. On the deployer machine, change to the installation directory.

    cd ~/autoid-config/
  2. Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.

    docker login -u _json_key -p "$(cat autoid_registry_key.json)"

    You should see:

    Login Succeeded
  3. Run the create-template command to generate the deployer script wrapper. The command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.

    docker run --user=$(id -u) -v ~/autoid-config:/config -it <deployer-pro path> create-template
  4. Open the ~/autoid-config/vars.yml file, set the offline_mode property to true, and then save the file.

    offline_mode: true
  5. Download the Docker images. This step downloads software dependencies needed for the deployment and places them in the autoid-packages directory.

    sudo ./ download-images
  6. Create a tar file containing all of the Autonomous Identity binaries.

    tar czf autoid-packages.tgz autoid-packages/*
  7. Copy the autoid-packages.tgz to a portable hard drive.

Install Autonomous Identity Air-Gapped

Make sure you have the following prerequisites:

  • IP addresses of the machines running OpenSearch, MongoDB, or Cassandra.

  • The Autonomous Identity user should have permission to write to /opt/autoid on all machines.

  • To download the deployment images for the install, you still need your registry key to log in to the ForgeRock Google Cloud Registry and download the artifacts.

  • Make sure you have the proper OpenSearch certificates with the exact names for both pem and JKS files copied to ~/autoid-config/certs/elastic:

    • esnode.pem

    • esnode-key.pem

    • root-ca.pem

    • elastic-client-keystore.jks

    • elastic-server-truststore.jks

  • Make sure you have the proper MongoDB certificates with exact names for both pem and JKS files copied to ~/autoid-config/certs/mongo:

    • mongo-client-keystore.jks

    • mongo-server-truststore.jks

    • mongodb.pem

    • rootCA.pem

  • Make sure you have the proper Cassandra certificates with exact names for both pem and JKS files copied to ~/autoid-config/certs/cassandra:

    • zoran-cassandra-client-cer.pem

    • zoran-cassandra-client-keystore.jks

    • zoran-cassandra-server-cer.pem

    • zoran-cassandra-server-keystore.jks

    • zoran-cassandra-client-key.pem

    • zoran-cassandra-client-truststore.jks

    • zoran-cassandra-server-key.pem

    • zoran-cassandra-server-truststore.jks

Install Autonomous Identity:
  1. Change to the installation directory:

    cd ~/autoid-config
  2. Run the create-template command:

    docker run --user=$(id -u) -v ~/autoid-config:/config -it <deployer-pro path> create-template

  3. Create a certificate directory for OpenSearch:

    mkdir -p ~/autoid-config/certs/elastic
  4. Copy the OpenSearch certificates and JKS files to ~/autoid-config/certs/elastic.

  5. Create a certificate directory for MongoDB:

    mkdir -p ~/autoid-config/certs/mongo
  6. Copy the MongoDB certificates and JKS files to ~/autoid-config/certs/mongo.

  7. Create a certificate directory for Cassandra:

    mkdir -p ~/autoid-config/certs/cassandra
  8. Copy the Cassandra certificates and JKS files to ~/autoid-config/certs/cassandra.

  9. Update the hosts file with the IP addresses of the machines. The hosts file must include the IP addresses for the Docker nodes, Spark main/Livy, and the MongoDB main node. While Deployer Pro does not install or configure the MongoDB main server, the entry is required to run the MongoDB CLI to seed the Autonomous Identity schema.

    # For replica sets, add the IPs of all Cassandra nodes
    # Add only the MongoDB main node in the cluster deployment
    # For example: mongodb_master=True
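For illustration, a hosts inventory might group the nodes as follows. The group names and 10.10.1.x addresses here are assumptions made for this sketch — use the group headings generated by create-template in your own hosts file:

```
# Hypothetical inventory sketch; adjust group names and IPs to your environment
[docker-managers]
10.10.1.10

[docker-workers]
10.10.1.11

[spark-master-livy]
10.10.1.12

[cassandra-seeds]
# For replica sets, add the IPs of all Cassandra nodes
10.10.1.13

[mongo_master]
# Add only the MongoDB main node, for example:
10.10.1.14 mongodb_master=True
```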
  10. Update the vars.yml file:

    1. Set offline_mode to true.

    2. Set db_driver_type to mongo or cassandra.

    3. Set elastic_host, elastic_port, and elastic_user properties.

    4. Set kibana_host.

    5. Set the Apache livy install directory.

    6. Ensure the elastic_user, elastic_port, and mongo_port properties are correctly configured.

    7. Update the vault.yml passwords for elastic and mongo to reflect your installation.

    8. Set the Cassandra-related parameters in the vars.yml file. Default values are:

        enable_ssl: "true"
        contact_points: # comma separated values in case of replication set
        port: 9042
        username: zoran_dba
        cassandra_keystore_password: "Acc#1234"
        cassandra_truststore_password: "Acc#1234"
        ssl_client_key_file: "zoran-cassandra-client-key.pem"
        ssl_client_cert_file: "zoran-cassandra-client-cer.pem"
        ssl_ca_file: "zoran-cassandra-server-cer.pem"
        server_truststore_jks: "zoran-cassandra-server-truststore.jks"
        client_truststore_jks: "zoran-cassandra-client-truststore.jks"
        client_keystore_jks: "zoran-cassandra-client-keystore.jks"
  11. Download the images:

    ./ download-images
  12. On the spark-master-livy machine, run the following commands to install the Python package dependencies:

    1. Change to the /opt/autoid directory:

      cd /opt/autoid
    2. Create a requirements.txt file with the following content:

    3. Install the requirements file:

      pip3 install -r requirements.txt
  13. Make sure that the /opt/autoid directory exists and that it is both readable and writable.

  14. Run the deployer script:

    ./ run
  15. On the spark-master-livy machine, run the following commands to install the Python egg file:

    1. Change to the /opt/autoid/eggs directory, and then install the egg file:

      cd /opt/autoid/eggs
      sudo easy_install-3.8 autoid_analytics-2021.3-py3.6.egg
    2. Source the .bashrc file:

      source ~/.bashrc
    3. Restart Spark and Livy.

      ./livy/bin/livy-server stop
      ./livy/bin/livy-server start

Set the Replication Factor

Once Cassandra has been deployed, you need to set the replication factor to match the number of nodes in your cluster. This ensures that each record is stored on each of the nodes. If one node is lost, the remaining nodes can continue to serve content, even though the cluster is running with reduced redundancy.
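As a sketch of the idea — the keyspace name `autoid` and data center name `datacenter1` below are assumptions, so substitute the names used in your deployment — the replication factor is typically raised with an `ALTER KEYSPACE` statement in cqlsh:

```
-- Hypothetical keyspace and data center names; set the factor to your node count
ALTER KEYSPACE autoid
  WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': 3};
```

After changing the replication factor, run `nodetool repair` on each node so that existing data is copied to the additional replicas.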

Resolve Hostname

After installing Autonomous Identity, set up the hostname resolution for your deployment.

  1. Configure your DNS servers to access the Autonomous Identity dashboard on the target node. The following domain names must resolve to the IP address of the target node:

  2. If DNS cannot resolve the target node hostname, edit the hosts file locally on the machine from which you access Autonomous Identity in a browser.

    Open a text editor and add an entry in the /etc/hosts (Linux/Unix) file or C:\Windows\System32\drivers\etc\hosts (Windows) for the target node.

    For multi-node, use the Docker Manager node as your target.

    <Docker Mgr Node Public IP Address>  <target-environment>-ui.<domain-name>

    For example:

    <IP Address>
  3. If you set up a custom domain name and target environment, add the entries in /etc/hosts. For example:

    <IP Address>

    For more information on customizing your domain name, see Customize Domains.

Access the Dashboard

Access the Autonomous Identity console UI:
  1. Open a browser. If you set up your own URL, use it to log in.

  2. Log in as a test user.

    test user:
    password: <password>

Start the Analytics

If the previous installation steps all succeeded, you must now prepare your data’s entity definitions, data sources, and attribute mappings before running your analytics jobs. These steps are required and are critical for a successful analytics process.

For more information, see Set Entity Definitions.

Copyright © 2010-2022 ForgeRock, all rights reserved.