Autonomous Identity 2021.8.2

Install a Single Node Air-Gapped Deployment

This section presents instructions on deploying Autonomous Identity in a single-node target machine that has no Internet connectivity. This type of configuration, called an air-gap or offline deployment, provides enhanced security by isolating itself from outside Internet or network access.

The air-gap installation is similar to that of the single-node target deployment with Internet connectivity, except that the image and deployer script must be saved on a portable drive and copied to the air-gapped target machine.

Figure 7: A single-node air-gapped target deployment.

Prerequisites

Let’s deploy Autonomous Identity on a single-node air-gapped target on CentOS 7. The following are prerequisites:

  • Operating System. The target machine requires CentOS 7. The deployer machine can use any operating system as long as Docker is installed. For this chapter, we use CentOS 7 as the base operating system.

  • Disk Space Requirements. Make sure you have enough free disk space on the deployer machine before running the deployer.sh commands. We recommend at least 500GB.

  • Default Shell. The default shell for the autoid user must be bash.

  • Deployment Requirements. Autonomous Identity provides a Docker image that creates a deployer.sh script. The script downloads additional images necessary for the installation. To download the deployment images, you must first obtain a registry key to log into the ForgeRock Google Cloud Registry (gcr.io). The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.

  • Database Requirements. Decide which database you are using: Apache Cassandra or MongoDB.

  • Docker Required on Air-Gap Machines. When installing the Autonomous Identity binaries on the air-gap machine using a tar file, you must also manually install Docker 20.10.7 onto the machine.

  • IPv4 Forwarding. Many high-security environments run their CentOS-based systems with IPv4 forwarding disabled. However, Docker Swarm does not work with IPv4 forwarding disabled. In such environments, make sure to enable IPv4 forwarding in the /etc/sysctl.conf file:

    net.ipv4.ip_forward=1
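
    For example, one way to enable the setting and apply it without a reboot (a sketch; adapt it to your organization's change process) is:

    $ echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
    $ sudo sysctl -p
    $ sysctl net.ipv4.ip_forward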

Set Up the Deployer Machine

Set up the deployer on an Internet-connected machine.

  1. The install assumes that you have CentOS 7 as your operating system. Check your CentOS 7 version.

    $ sudo cat /etc/centos-release
  2. Set the user for the target machine to a username of your choice. For example, autoid.

    $ sudo adduser autoid
  3. Set the password for the user you created in the previous step.

    $ sudo passwd autoid
  4. Configure the user for passwordless sudo.

    $ echo "autoid    ALL=(ALL)  NOPASSWD:ALL" | sudo tee /etc/sudoers.d/autoid
  5. Add administrator privileges to the user.

    $ sudo usermod -aG wheel autoid
  6. Change to the user account.

    $ su - autoid
  7. Install the yum-utils package on the deployer machine. yum-utils is a collection of utilities for managing Yum RPM package repositories.

    $ sudo yum -y install yum-utils
  8. Create the installation directory. Note that you can use any install directory for your system as long as you run the deployer.sh script from there. Also, the disk volume where you have the install directory must have at least 8GB of free space for the installation.

    $ mkdir ~/autoid-config
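
    To confirm that the volume backing the install directory has enough free space, you can run a quick check, for example:

    $ df -h ~/autoid-config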

Install Docker on the Deployer Machine

  1. On the deployer machine, set up the Docker-CE repository.

    $ sudo yum-config-manager \
         --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  2. Install the latest version of Docker CE, the Docker command-line interface, and containerd.io, the container runtime.

    $ sudo yum -y install  docker-ce docker-ce-cli containerd.io
  3. Enable Docker to start at boot.

    $ sudo systemctl enable docker
  4. Start Docker.

    $ sudo systemctl start docker
  5. Check that Docker is running.

    $ systemctl status docker
  6. Add the user to the Docker group.

    $ sudo usermod -aG docker ${USER}
  7. Log out of the user account.

    $ logout
  8. Log back in as the user you created for the deployer machine. For example, autoid.

    $ su - autoid
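
    To verify that the new session can run Docker without sudo, you can, for example, check the group membership and query the Docker daemon:

    $ id -nG | grep -w docker
    $ docker info --format '{{.ServerVersion}}'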

Set Up SSH on the Deployer

While SSH is not necessary to connect the deployer to the target node, because the machines are isolated from one another, you still need SSH on the deployer so that it can communicate with itself.

  1. On the deployer machine, run ssh-keygen to generate an RSA key pair, and then press Enter. You can use the default filename. Enter a password for protecting your private key.

    $ ssh-keygen -t rsa -C "autoid"

    The private and public RSA keys are stored in home-directory/.ssh/id_rsa and home-directory/.ssh/id_rsa.pub, respectively.

  2. Copy the SSH key to the ~/autoid-config directory.

    $ cp ~/.ssh/id_rsa ~/autoid-config
  3. Change the privileges to the file.

    $ chmod 400 ~/autoid-config/id_rsa
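
    Because the deployer must be able to ssh to itself, you may also need to authorize the new key for the autoid user. The following is a minimal sketch, assuming the default key locations; your environment may already distribute keys differently:

    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    $ chmod 600 ~/.ssh/authorized_keys
    $ ssh -i ~/.ssh/id_rsa autoid@localhost exit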

Prepare the Tar File

Run the following steps on an Internet-connected host machine:

  1. On the deployer machine, change to the installation directory.

    $ cd ~/autoid-config/
  2. Log in to the ForgeRock Google Cloud Registry (gcr.io) using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.

    $ docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid

    You should see:

    Login Succeeded
  3. Run the create-template command to generate the deployer.sh script wrapper. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.

    $ docker run --user=$(id -u) -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer:2021.8.2 create-template
  4. Open the ~/autoid-config/vars.yml file, set the offline_mode property to true, and then save the file.

    offline_mode: true
  5. Download the Docker images. This step downloads software dependencies needed for the deployment and places them in the autoid-packages directory.

    $ ./deployer.sh download-images
  6. Create a tar file containing all of the Autonomous Identity binaries.

    $ tar czf autoid-packages.tgz deployer.sh autoid-packages/*
  7. Copy the autoid-packages.tgz, deployer.sh, and SSH key (id_rsa) to a portable hard drive.
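
    For example, if the portable drive is mounted at /media/usb (a hypothetical mount point), the copy could look like this:

    $ cp ~/autoid-config/autoid-packages.tgz \
         ~/autoid-config/deployer.sh \
         ~/autoid-config/id_rsa /media/usb/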

Install on the Air-Gap Target

Before you begin, make sure you have CentOS 7 and Docker installed on your air-gapped target machine.

  1. Create the ~/autoid-config directory if you haven’t already.

    $ mkdir ~/autoid-config
  2. Copy the autoid-packages.tgz tar file from the portable storage device.

  3. Unpack the tar file.

    $ tar xf autoid-packages.tgz -C ~/autoid-config
  4. On the air-gap host node, copy the SSH key to the ~/autoid-config directory.

  5. Change the privileges to the file.

    $ chmod 400 ~/autoid-config/id_rsa
  6. Change to the configuration directory.

    $ cd ~/autoid-config
  7. Install Docker.

    $ sudo ./deployer.sh install-docker
  8. Log out and back in.

  9. Change to the configuration directory.

    $ cd ~/autoid-config
  10. Import the deployer image.

    $ ./deployer.sh import-deployer

    You should see:

    …
    db631c8b06ee: Loading layer [==================================================>]   2.56kB/2.56kB
    2d62082e3327: Loading layer [==================================================>]  753.2kB/753.2kB
    Loaded image: gcr.io/forgerock-autoid/deployer:2021.8.2
  11. Create the configuration template using the create-template command. This command creates the configuration files: ansible.cfg, vars.yml, vault.yml, and hosts.

    $ ./deployer.sh create-template

    You should see:

    Config template is copied to host machine directory mapped to /config
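
    To verify that the template was generated, you can list the configuration directory; you should see ansible.cfg, vars.yml, vault.yml, and hosts alongside the packages you copied over:

    $ ls ~/autoid-config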

Configure Autonomous Identity Air-Gapped

The create-template command from the previous section creates a number of configuration files required for the deployment: ansible.cfg, vars.yml, hosts, and vault.yml.

If you are running a deployment for evaluation, you can minimally set the ansible.cfg file in step 1, set the private IP address mapping in the vars.yml file in step 2, edit the hosts file in step 3, and jump to step 6 to run the deployer.
For air-gapped deployments, you must set the offline_mode property to true in the ~/autoid-config/vars.yml file in step 2 below. This is a change in 2021.8.2 from prior releases.
  1. Open a text editor and edit the ~/autoid-config/ansible.cfg file to set up the target machine user and the SSH private key file location on the target node. Make sure that the remote_user exists on the target node and that the deployer machine can ssh to the target node as that user, using the id_rsa key file.

    [defaults]
    host_key_checking = False
    remote_user = autoid
    private_key_file = id_rsa
  2. Open a text editor and edit the ~/autoid-config/vars.yml file to configure specific settings for your deployment:

    • AI Product. Do not change this property.

      ai_product: auto-id
    • Domain and Target Environment. Set the domain name and target environment specific to your deployment by editing the ~/autoid-config/vars.yml file. By default, the domain name is set to forgerock.com and the target environment is set to autoid. The default Autonomous Identity URL will be: https://autoid-ui.forgerock.com. For this example, we use the default values.

      domain_name: forgerock.com
      target_environment: autoid

      If you change the domain name and target environment, you need to also change the certificates to reflect the new changes. For more information, see Customize the Domain and Namespace.

    • Analytics Data Directory and Analytics Configuration Directory. Although rarely necessary for a single-node deployment, you can change the analytics and analytics configuration mount directories by editing the properties in the ~/autoid-config/vars.yml file.

      analytics_data_dir: /data
      analytics_conf_dif: /data/conf
    • Offline Mode. Set the offline_mode to true for air-gapped deployments.

      offline_mode: true
    • Database Type. Apache Cassandra is the default database for Autonomous Identity. To use MongoDB, set db_driver_type to mongo.

      db_driver_type: cassandra
    • Private IP Address Mapping. An air-gap deployment has no external IP addresses, but you may still need to define a mapping in the ~/autoid-config/vars.yml file, if your internal IP address differs from an external IP, say in a virtual air-gapped configuration.

      If your external and internal IP addresses are the same, you can skip this step.

      Add the private_ip_address_mapping property in the ~/autoid-config/vars.yml file. You can look up the private IP on the cloud console, or run sudo ifconfig on the target host. Make sure the values are within double quotes. The key should not be in double quotes and should have two spaces preceding the IP address.

      private_ip_address_mapping:
        external_ip:  "internal_ip"

      For example:

      private_ip_address_mapping:
        34.70.190.144:  "10.128.0.71"
    • Authentication Option. This property has three possible values:

      • Local. Local sets up Elasticsearch with local accounts and enables the Autonomous Identity UI features: self-service and manage identities. Local auth mode should be enabled for demo environments only.

      • SSO. SSO indicates that single sign-on (SSO) is in use. With SSO only, the Autonomous Identity UI features, the self-service and manage identities pages, are not available on the system but are managed by the SSO provider. The login page displays "Sign in using OpenID." For more information, see Set Up SSO.

      • LocalAndSSO. LocalAndSSO indicates that SSO is used and local account features, like self-service and manage identities, are available to the user. The login page displays "Sign in using OpenID" and a link "Or sign in via email".

        authentication_option: "Local"
    • Access Log. By default, the access log is enabled. If you want to disable the access log, set the access_log_enabled variable to "false".

    • JWT Expiry and Secret File. Optional. By default, the session JWT is set at 30 minutes. To change this value, set the jwt_expiry property to a different value.

      jwt_expiry: "30 minutes"
      jwt_secret_file: "{{install path}}/jwt/secret.txt"
      jwt_audience: "http://my.service"
      oidc_jwks_url: "na"
    • Local Auth Mode Password. When authentication_option is set to Local, the local_auth_mode_password sets the password for the login user.

    • Elasticsearch Heap Size. Optional. The default heap size for Elasticsearch is 1GB, which may be small for production. For production deployments, uncomment the option and specify 2G or 3G.

      #elastic_heap_size: 1g   # sets the heap size (1g|2g|3g) for the Elastic Servers
    • Java API Service. Optional. Set the Java API Service (JAS) properties for the deployment: authentication, maximum memory, and the entity types for attribute mappings and data sources:

      jas:
          auth_enabled: true
          auth_type: 'jwt'
          signiture_key_id: 'service1-hmac'
          signiture_algorithm: 'hmac-sha256'
          max_memory: 4096M
          mapping_entity_type: /common/mappings
          datasource_entity_type: /common/datasources
  3. Open a text editor and enter the target host’s private IP address in the ~/autoid-config/hosts file. The following are examples of the ~/autoid-config/hosts file. Note that the [notebook] group is not used in Autonomous Identity.

    Hosts file for Cassandra deployments

    If you configured Cassandra as your database, the ~/autoid-config/hosts file for a single-node air-gapped target deployment is as follows:

    [docker-managers]
    10.128.0.34
    
    [docker-workers]
    10.128.0.34
    
    [docker:children]
    docker-managers
    docker-workers
    
    [cassandra-seeds]
    10.128.0.34
    
    [spark-master]
    10.128.0.34
    
    [spark-workers]
    10.128.0.34
    
    [mongo_master]
    #ip# mongodb_master=True
    
    [mongo_replicas]
    
    [mongo:children]
    mongo_replicas
    mongo_master
    
    # Elastic Nodes
    [odfe-master-node]
    10.128.0.34
    
    [odfe-data-nodes]
    10.128.0.34
    
    [kibana-node]
    10.128.0.34
    
    [notebook]
    #ip#
    
    Hosts file for MongoDB deployments

    If you configured MongoDB as your database, the ~/autoid-config/hosts file for a single-node air-gapped target deployment is as follows:

    [docker-managers]
    10.128.0.34
    
    [docker-workers]
    10.128.0.34
    
    [docker:children]
    docker-managers
    docker-workers
    
    [cassandra-seeds]
    
    [spark-master]
    10.128.0.34
    
    [spark-workers]
    10.128.0.34
    
    [mongo_master]
    10.128.0.34  mongodb_master=True
    
    [mongo_replicas]
    10.128.0.34
    
    [mongo:children]
    mongo_replicas
    mongo_master
    
    # Elastic Nodes
    [odfe-master-node]
    10.128.0.34
    
    [odfe-data-nodes]
    10.128.0.34
    
    [kibana-node]
    10.128.0.34
    
    [notebook]
    #ip#
    
  4. Set the Autonomous Identity passwords, located at ~/autoid-config/vault.yml.

    Despite the presence of special characters in the examples below, do not include special characters, such as & or $, in your production vault.yml passwords, as they will cause the deployer process to fail.
    configuration_service_vault:
        basic_auth_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
    
    cassandra_vault:
        cassandra_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
        cassandra_admin_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
        keystore_password: Acc#1234
        truststore_password: Acc#1234
    
    mongo_vault:
        mongo_admin_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
        mongo_root_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
        keystore_password: Acc#1234
        truststore_password: Acc#1234
    
    elastic_vault:
        elastic_admin_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
        elasticsearch_password: ~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&
        keystore_password: Acc#1234
        truststore_password: Acc#1234
  5. Encrypt the vault file that stores the Autonomous Identity passwords, located at ~/autoid-config/vault.yml. The encrypted passwords will be saved to /config/.autoid_vault_password. The /config/ mount is internal to the deployer container.

    $ ./deployer.sh encrypt-vault
  6. Run the deployment.

    $ ./deployer.sh run
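
    When the deployer finishes, one way to confirm that the services started is to list the Docker Swarm nodes and services on the target node (a quick check; service names vary by release):

    $ docker node ls
    $ docker service ls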

Resolve Hostname

After installing Autonomous Identity, set up the hostname resolution for your deployment.

Resolve the hostname:

  1. Configure your DNS servers to access the Autonomous Identity dashboard on the target node. The following domain name must resolve to the IP address of the target node: <target-environment>-ui.<domain-name>.

  2. If DNS cannot resolve the target node hostname, edit the hosts file locally on the machine from which you want to access Autonomous Identity using a browser. Open a text editor and add an entry in the /etc/hosts (Linux/Unix) file or C:\Windows\System32\drivers\etc\hosts (Windows) file for the self-service and UI services for each managed target node.

    <Target IP Address>  <target-environment>-ui.<domain-name>

    For example:

    34.70.190.144  autoid-ui.forgerock.com
  3. If you set up a custom domain name and target environment, add the entries in /etc/hosts. For example:

    34.70.190.144  myid-ui.abc.com

    For more information on customizing your domain name, see Customize Domains.
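
    To confirm that the hostname now resolves and that the UI responds, you can run a quick check from the same machine, for example with the default domain (the -k option skips certificate verification for self-signed certificates):

    $ getent hosts autoid-ui.forgerock.com
    $ curl -k -I https://autoid-ui.forgerock.com/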

Access the Dashboard

Access the Autonomous Identity console UI:

  1. Open a browser. If you set up your own URL, use it for your login.

  2. Log in as a test user.

    test user: bob.rodgers@forgerock.com
    password: <password>

Check Apache Cassandra

Check Cassandra:

  1. On the target node, check the status of Apache Cassandra.

    $ /opt/autoid/apache-cassandra-3.11.2/bin/nodetool status
  2. An example output is as follows:

    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address       Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  34.70.190.144 1.33 MiB   256          100.0%            a10a91a4-96e83dd-85a2-4f90d19224d9  rack1
    --
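
    For additional node details, such as uptime and load, you can also run nodetool info from the same installation path (an optional check):

    $ /opt/autoid/apache-cassandra-3.11.2/bin/nodetool info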

Check MongoDB

Check the status of MongoDB:

  1. On the target node, check the status of MongoDB.

    $ mongo --tls \
    --host <Host IP> \
    --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem  \
    --tlsAllowInvalidCertificates  \
    --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem
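
    Once the shell connects, a simple way to confirm that the server responds (an illustrative check) is to issue a ping from the mongo prompt; a healthy server returns "ok" : 1.

    > db.adminCommand({ ping: 1 })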

Check Apache Spark

Check Spark:

  1. SSH to the target node and open the Spark dashboard using the bundled text-mode web browser.

    $ elinks http://localhost:8080

    You should see Spark Master status as ALIVE and worker(s) with State ALIVE.

    Figure: Example of the Spark dashboard.
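
    If elinks is not available on the target node, a rough alternative (assuming the Spark master UI listens on port 8080, as above) is to fetch the page with curl and search for the status text:

    $ curl -s http://localhost:8080 | grep -i alive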

Start the Analytics

If the previous installation steps all succeeded, you must now prepare your data’s entity definitions, data sources, and attribute mappings prior to running your analytics jobs. These steps are required and are critical for a successful analytics process.

For more information, see Set Entity Definitions.

Copyright © 2010-2022 ForgeRock, all rights reserved.