Install a Multi-Node Deployment
This chapter presents instructions on deploying Autonomous Identity in a multi-node target deployment that has Internet connectivity. ForgeRock provides a deployer script that pulls a Docker container image from ForgeRock's Google Cloud Registry (gcr.io) repository. The image contains the microservices, analytics, and backend databases needed for the system.
This installation assumes that you set up the deployer on a separate machine from the target.
The deployment depends on how your network is configured. You could have a Docker cluster with multiple Spark nodes and Cassandra or MongoDB nodes. The key is to determine the IP addresses of each node, which the deployer uses to set up the overlay network for your multi-node system.
Figure 10: A multi-node deployment.
Let's deploy Autonomous Identity on a multi-node target on CentOS 7. The following are prerequisites:
Operating System. The target machine requires CentOS 7. The deployer machine can use any operating system as long as Docker is installed. For this guide, we use CentOS 7 as the deployer's base operating system.
Memory Requirements. Make sure you have enough free disk space on the deployer machine before running the deployer.sh commands. We recommend at least a 40GB partition, with 14GB used and 27GB free after running the commands.
Default Shell. The default shell for the autoid user must be bash.
Subnet Requirements. We recommend deploying your multi-node instances within the same subnet. Ports must be open for the installation to succeed. Each instance should be able to communicate with the other instances.
Important
If any hosts used for the Docker cluster (docker-managers, docker-workers) have an IP address in the range of 10.0.x.x/16, they will conflict with the Swarm network. As a result, the services in the cluster will not connect to the Cassandra database or Elasticsearch backend.
The Docker cluster hosts must be in a subnet that provides IP addresses 10.10.1.x or higher.
Deployment Requirements. Autonomous Identity provides a Docker image that creates a deployer.sh script, which downloads and installs the necessary images. To download the deployment images, you must first obtain a registry key to log in to the ForgeRock Google Cloud Registry (gcr.io). The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.
Filesystem Requirements. Autonomous Identity requires a shared filesystem accessible from the Spark master, Spark worker, analytics hosts, and application layer. The shared filesystem should be mounted at the same mount directory on all of those hosts. If the mount directory for the shared filesystem is different from the default, /data, update the /autoid-config/vars.yml file to point to the correct directories:
analytics_data_dir: /data
analytics_conf_dif: /data/conf
Architecture Requirements. Make sure that the analytics server is on the same node as the Spark master.
Database Requirements. Decide which database you are using: Apache Cassandra or MongoDB. The configuration procedure is slightly different for each database.
Deployment Best-Practice. For best performance, dedicate separate nodes to the Elasticsearch master node, Elasticsearch data nodes, and Kibana.
IPv4 Forwarding. Many high-security environments run their CentOS-based systems with IPv4 forwarding disabled. However, Docker Swarm does not work with IPv4 forwarding disabled. In such environments, make sure to enable IPv4 forwarding in the /etc/sysctl.conf file:
net.ipv4.ip_forward=1
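For example, you can enable the setting immediately and confirm it with the standard sysctl utility; these are generic Linux administration commands, not specific to Autonomous Identity. Add the line above to /etc/sysctl.conf to make the change persistent across reboots.
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sysctl net.ipv4.ip_forward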
Set Up the Target Nodes
Make sure you have sufficient storage for your particular deployment. For more information on sizing considerations, see Deployment Planning Guide.
For each target node, run the following commands.
The install assumes that you have CentOS 7 as your operating system. Check your CentOS 7 version.
$
sudo cat /etc/centos-release
Create a user on the target machine with a username of your choice. For example, autoid.
$
sudo adduser autoid
Set the password for the user you created in the previous step.
$
sudo passwd autoid
Configure the user for passwordless sudo.
$
echo "autoid ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/autoid
Add administrator privileges to the user.
$
sudo usermod -aG wheel autoid
Change to the user account.
$
su - autoid
Install the yum-utils package on the target machine. yum-utils is a collection of utilities for managing yum repositories and RPM packages.
$
sudo yum install -y yum-utils
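The prerequisites require that the autoid user's default shell is bash. Optionally, you can verify this with the standard getent utility:
$ getent passwd autoid | cut -d: -f7
If the output is not /bin/bash, change the shell with:
$ sudo chsh -s /bin/bash autoid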
Set Up the Deployer Machine
Set up another machine as a deployer node. You can use any operating system for the deployer machine as long as it has Docker installed. For this example, we use CentOS 7.
The install assumes that you have CentOS 7 as your operating system. Check your CentOS 7 version.
$
sudo cat /etc/centos-release
Create a user on the deployer machine with a username of your choice. For example, autoid.
$
sudo adduser autoid
Set the password for the user you created in the previous step.
$
sudo passwd autoid
Configure the user for passwordless sudo.
$
echo "autoid ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/autoid
Add administrator privileges to the user.
$
sudo usermod -aG wheel autoid
Change to the user account.
$
su - autoid
Install the yum-utils package on the deployer machine. yum-utils is a collection of utilities for managing yum repositories and RPM packages.
$
sudo yum install -y yum-utils
Create the installation directory. Note that you can use any install directory for your system as long as you run the deployer.sh script from there. Also, the disk volume where you have the install directory must have at least 8GB of free space for the installation.
$
mkdir ~/autoid-config
Install Docker on the Deployer Machine
Install Docker on the deployer machine. We run commands from this machine to install Autonomous Identity on the target machine. In this example, we use CentOS 7.
On the deployer machine, set up the Docker CE repository.
$
sudo yum-config-manager \
    --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install the latest version of Docker CE, the Docker command-line interface, and containerd.io, the container runtime.
$
sudo yum install -y docker-ce docker-ce-cli containerd.io
Enable Docker to start at boot.
$
sudo systemctl enable docker
Start Docker.
$
sudo systemctl start docker
Check that Docker is running.
$
systemctl status docker
Add the user to the Docker group.
$
sudo usermod -aG docker ${USER}
Reset the privileges on the Docker socket.
$
sudo chmod 666 /var/run/docker.sock
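To confirm that the autoid user can now run Docker commands without sudo, you can optionally run a quick test; hello-world is Docker's public test image and is not part of Autonomous Identity.
$ docker run --rm hello-world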
Set Up SSH on the Deployer
On the deployer machine, change to the
~/.ssh
directory.$
cd ~/.ssh
Run ssh-keygen to generate an RSA key pair, and then press Enter. You can use the default filename. Enter a passphrase to protect your private key.
$
ssh-keygen -t rsa -C "autoid"
The public and private RSA key pair is stored in home-directory/.ssh/id_rsa and home-directory/.ssh/id_rsa.pub.
Copy the SSH key to the
autoid-config
directory.$
cp id_rsa ~/autoid-config
Change the privileges to the file.
$
chmod 400 ~/autoid-config/id_rsa
Copy your public SSH key, id_rsa.pub, to each target machine's ~/.ssh/authorized_keys file.
Note
If your target system does not have a ~/.ssh directory, create it using mkdir -p ~/.ssh.
For this example, copy the SSH key to each node:
$
ssh-copy-id -i id_rsa.pub autoid@<Node 1 IP Address>
$
ssh-copy-id -i id_rsa.pub autoid@<Node 2 IP Address>
$
ssh-copy-id -i id_rsa.pub autoid@<Node 3 IP Address>
On the deployer machine, test your SSH connection to each target machine. This is a critical step. Make sure the connection works before proceeding with the installation.
If you can successfully SSH to each machine, set the privileges on your ~/.ssh directory and ~/.ssh/authorized_keys file.
SSH to the first node:
$
ssh autoid@<Node 1 IP Address>
Last login: Sat Oct 3 03:02:40 2020
Set the privileges.
$
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
Enter exit to end your SSH session.
SSH to the second node:
$
ssh autoid@<Node 2 IP Address>
Last login: Sat Oct 3 03:06:40 2020
Set the privileges.
$
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
Enter exit to end your SSH session.
SSH to the third node:
$
ssh autoid@<Node 3 IP Address>
Last login: Sat Oct 3 03:10:40 2020
Set the privileges.
$
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
Enter exit to end your SSH session.
Install Autonomous Identity
Before you begin, make sure you have CentOS 7 installed on your target machine.
On the deployer machine, change to the installation directory.
$
cd ~/autoid-config/
Log in to the ForgeRock Google Cloud Registry (gcr.io) using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.
$
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
You should see:
Login Succeeded
Run the create-template command to generate the deployer.sh script wrapper. The command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
$
docker run --user=`id -u` -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer:2020.10.2 create-template
Make the script executable.
$
chmod +x deployer.sh
To see the list of commands, enter
deployer.sh
.$
./deployer.sh
Usage: deployer <command>
Commands:
  create-template
  download-images
  import-deployer
  encrypt-vault
  decrypt-vault
  run
  create-tar
  install-docker
  install-dbutils
  upgrade
Configure Autonomous Identity
The create-template command from the previous section generates a number of configuration files required for the deployment, including ansible.cfg. Open a text editor and edit ansible.cfg to set up the remote user and the SSH private key file location on the target node. Make sure that the remote_user exists on the target node and that the deployer machine can SSH to the target node as the user specified in the id_rsa file.
[defaults]
host_key_checking = False
remote_user = autoid
private_key_file = id_rsa
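Optionally, you can confirm that this configuration will work by checking that the private key copied to ~/autoid-config/id_rsa grants access as the remote_user; the node IP placeholder is the same one used in the SSH setup above.
$ ssh -i ~/autoid-config/id_rsa autoid@<Node 1 IP Address>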
On the deployer machine, open a text editor and edit the
~/autoid-config/vars.yml
file to configure specific settings for your deployment:
Domain and Target Environment. Set the domain name and target environment specific to your deployment by editing the ~/autoid-config/vars.yml file. By default, the domain name is set to forgerock.com and the target environment is set to autoid. The default Autonomous Identity URL will be https://autoid-ui.forgerock.com. For this example, we use the default values.
domain_name: forgerock.com
target_environment: autoid
If you change the domain name and target environment, you need to also change the certificates to reflect the new changes. For more information, see Customize the Domain and Namespace.
Analytics Data Directory and Analytics Configuration Directory. For a multi-node Spark deployment, Autonomous Identity requires a shared filesystem accessible from the Spark master, Spark worker(s), and analytics hosts. The shared filesystem should be mounted at the same mount directory on all of those hosts. If the mount directory for the shared filesystem is different from /data, update the following properties in the vars.yml file to point to the correct location:
analytics_data_dir: /data
analytics_conf_dif: /data/conf
Dark Theme Mode. Optional. By default, the Autonomous Identity UI displays its pages with a light background. You can set a dark theme mode by setting the enable_dark_theme property to true.
Database Type. By default, Apache Cassandra is set as the database for Autonomous Identity. For MongoDB, set the db_driver_type property to mongo:
db_driver_type: mongo
Private IP Address Mapping. Define a mapping between the external and private IP addresses. This is required when your target hosts are in a cloud environment, where the external and internal IP addresses differ.
For each target node, add the private_ip_address_mapping property in the ~/autoid-config/vars.yml file. You can look up the private IP on the cloud console, or run sudo ifconfig on the target host. Make sure the values are within double quotes. The key should not be in double quotes and must be indented by two spaces:
private_ip_address_mapping:
  external_ip: "internal_ip"
For example:
private_ip_address_mapping:
  34.105.16.198: "10.128.0.51"
  34.105.16.201: "10.128.0.54"
  34.105.16.229: "10.128.0.71"
Authentication Option. Autonomous Identity provides a single sign-on (SSO) feature that you can configure with an OIDC identity provider.
JWT Expiry and Secret File. Optional. By default, the session JWT expiry is set to 30 minutes. To change this value, set the jwt_expiry property to a different value:
jwt_expiry: "30 minutes"
MongoDB Configuration. For MongoDB clusters, enable replication by uncommenting the mongodb_replication_replset property:
# uncomment below for mongo with replication enabled. Not needed for single node deployments
mongodb_replication_replset: mongors
Also, enable a custom key for inter-machine authentication in the clustered nodes.
# custom key
# password for inter-process authentication
# please regenerate this file on production environment with
# command 'openssl rand -base64 741'
mongodb_keyfile_content: |
  8pYcxvCqoe89kcp33KuTtKVf5MoHGEFjTnudrq5BosvWRoIxLowmdjrmUpVfAivh
  CHjqM6w0zVBytAxH1lW+7teMYe6eDn2S/O/1YlRRiW57bWU3zjliW3VdguJar5i9
  Z+1a8lI+0S9pWynbv9+Ao0aXFjSJYVxAm/w7DJbVRGcPhsPmExiSBDw8szfQ8PAU
  2hwRl7nqPZZMMR+uQThg/zV9rOzHJmkqZtsO4UJSilG9euLCYrzW2hdoPuCrEDhu
  Vsi5+nwAgYR9dP2oWkmGN1dwRe0ixSIM2UzFgpaXZaMOG6VztmFrlVXh8oFDRGM0
  cGrFHcnGF7oUGfWnI2Cekngk64dHA2qD7WxXPbQ/svn9EfTY5aPw5lXzKA87Ds8p
  KHVFUYvmA6wVsxb/riGLwc+XZlb6M9gqHn1XSpsnYRjF6UzfRcRR2WyCxLZELaqu
  iKxLKB5FYqMBH7Sqg3qBCtE53vZ7T1nefq5RFzmykviYP63Uhu/A2EQatrMnaFPl
  TTG5CaPjob45CBSyMrheYRWKqxdWN93BTgiTW7p0U6RB0/OCUbsVX6IG3I9N8Uqt
  l8Kc+7aOmtUqFkwo8w30prIOjStMrokxNsuK9KTUiPu2cj7gwYQ574vV3hQvQPAr
  hhb9ohKr0zoPQt31iTj0FDkJzPepeuzqeq8F51HB56RZKpXdRTfY8G6OaOT68cV5
  vP1O6T/okFKrl41FQ3CyYN5eRHyRTK99zTytrjoP2EbtIZ18z+bg/angRHYNzbgk
  lc3jpiGzs1ZWHD0nxOmHCMhU4usEcFbV6FlOxzlwrsEhHkeiununlCsNHatiDgzp
  ZWLnP/mXKV992/Jhu0Z577DHlh+3JIYx0PceB9yzACJ8MNARHF7QpBkhtuGMGZpF
  T+c73exupZFxItXs1Bnhe3djgE3MKKyYvxNUIbcTJoe7nhVMrwO/7lBSpVLvC4p3
  wR700U0LDaGGQpslGtiE56SemgoP
On production deployments, you can regenerate this file by running the following command:
$ openssl rand -base64 741
Elasticsearch Heap Size. Optional. The default heap size for Elasticsearch is 1GB, which may be too small for production. For production deployments, uncomment the option and specify 2g or 3g:
#elastic_heap_size: 1g # sets the heap size (1g|2g|3g) for the Elastic Servers
OpenLDAP. Optional. Autonomous Identity installs an OpenLDAP Docker image on the target server to hold user data. Administrators can add or remove users or change their group privileges using the phpldapadmin command. You can customize your OpenLDAP domain, base DN, and URL to match your company's environment. For more information, see Configuring LDAP.
Open a text editor and enter the public IP addresses of the target machines in the ~/autoid-config/hosts file. Make sure the target host IP addresses are accessible from the deployer machine. The following are examples of the ~/autoid-config/hosts file:
If you configured Cassandra as your database, the ~/autoid-config/hosts file is as follows for multi-node target deployments:
[docker-managers]
34.105.16.198

[docker-workers]
34.105.16.201

[docker:children]
docker-managers
docker-workers

[cassandra-seeds]
34.105.16.198

[cassandra-workers]
34.105.16.201

[spark-master]
34.105.16.198

[spark-workers]
34.105.16.201

[analytics]
34.105.16.198

[mongo_master]

[mongo_replicas]

[mongo:children]
mongo_replicas
mongo_master

# Elastic Nodes
[odfe-master-node]
34.105.16.229

[odfe-data-nodes]
34.105.16.229

[kibana-node]
34.105.16.229
If you configured MongoDB as your database, the ~/autoid-config/hosts file is as follows for multi-node target deployments:
[docker-managers]
34.105.16.198

[docker-workers]
34.105.16.201

[docker:children]
docker-managers
docker-workers

[cassandra-seeds]

[cassandra-workers]

[spark-master]
34.105.16.198

[spark-workers]
34.105.16.201

[analytics]
34.105.16.198

[mongo_master]
34.105.16.198 mongodb_master=True

[mongo_replicas]
34.105.16.201

[mongo:children]
mongo_replicas
mongo_master

# Elastic Nodes
[odfe-master-node]
34.105.16.229

[odfe-data-nodes]
34.105.16.229

[kibana-node]
34.105.16.229
Open a text editor and set the Autonomous Identity passwords for the configuration service, LDAP backend, and Cassandra database. The vault passwords file is located at ~/autoid-config/vault.yml.
Note
Do not include the special characters & or $ in vault.yml passwords, as they will cause the deployer process to fail.
configuration_service_vault:
  basic_auth_password: Welcome123

openldap_vault:
  openldap_password: Welcome123

cassandra_vault:
  cassandra_password: Welcome123
  cassandra_admin_password: Welcome123

mongo_vault:
  mongo_admin_password: Welcome123
  mongo_root_password: Welcome123

elastic_vault:
  elastic_admin_password: Welcome123
  elasticsearch_password: Welcome123
Encrypt the vault file that stores the Autonomous Identity passwords, located at ~/autoid-config/vault.yml. The encrypted passwords will be saved to /config/.autoid_vault_password. The /config/ mount is internal to the deployer container.
$
./deployer.sh encrypt-vault
Download the images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory.$
./deployer.sh download-images
Run the deployment.
$
./deployer.sh run
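After the run completes, an optional sanity check is to list the Docker Swarm services on a docker-manager node. docker service ls is a standard Swarm command; the exact service names vary by release.
$ docker service ls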
Resolve Hostname
After installing Autonomous Identity, set up the hostname resolution for your deployment.
Resolve the hostname:
Configure your DNS servers to access the Autonomous Identity dashboard and self-service applications on the target node. The following domain names must resolve to the IP address of the target node: <target-environment>-ui.<domain-name> and <target-environment>-selfservice.<domain-name>.
If DNS cannot resolve the target node hostname, edit it locally on the machine from which you want to access Autonomous Identity using a browser. Open a text editor and add an entry in the /etc/hosts file for the self-service and UI services for each managed target node:
target-ip-address <target-environment>-ui.<domain-name> <target-environment>-selfservice.<domain-name>
For example:
34.70.190.144 autoid-ui.forgerock.com autoid-selfservice.forgerock.com
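To confirm that the entry resolves on that machine, you can optionally query it with getent, which uses the same resolver order (including /etc/hosts) as most applications.
$ getent hosts autoid-ui.forgerock.com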
If you set up a custom domain name and target environment, add the entries in
/etc/hosts
. For example:34.70.190.144 myid-ui.abc.com myid-selfservice.abc.com
For more information on customizing your domain name, see Customize the Domain and Namespace.
Access the Dashboard
Access the Autonomous Identity console UI:
Open a browser, and point it to https://autoid-ui.forgerock.com/ (or your customized URL: https://myid-ui.abc.com).
Log in as a test user: bob.rodgers@forgerock.com. Enter the password: Welcome123.
Check Apache Cassandra
Check Cassandra:
On the target node, check the status of Apache Cassandra.
$
/opt/autoid/apache-cassandra-3.11.2/bin/nodetool status
An example output is as follows:
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 34.70.190.144 1.33 MiB 256 100.0% a10a91a4-96e83dd-85a2-4f90d19224d9 rack1
Check MongoDB
Check the status of MongoDB:
On the target node, check the status of MongoDB.
$
mongo --tls --host <Host IP Address> --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem --tlsAllowInvalidCertificates --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem
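If you enabled replication with the mongodb_replication_replset property in vars.yml, you can optionally confirm the replica set state from the mongo shell after connecting; rs.status() is a standard MongoDB shell helper.
> rs.status()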
Check Apache Spark
Check Spark:
SSH to the target node, and open the Spark dashboard using the bundled text-mode web browser.
$
elinks http://localhost:8080
You should see Spark Master status as ALIVE and worker(s) with State ALIVE.
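If elinks is not installed, a rough alternative is to fetch the same page with curl and search for the status text; the exact HTML markup may vary by Spark version.
$ curl -s http://localhost:8080 | grep -i alive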
Access Self-Service
The self-service feature lets Autonomous Identity users change their own passwords.
Access self-service:
Open a browser and point it to:
https://autoid-selfservice.forgerock.com/
.
Start the Analytics
If the previous steps all check out successfully, you can start an analytics pipeline run, where association rules, confidence scores, predictions, and recommendations are generated. Autonomous Identity provides a small demo data set that you can use to run the analytics pipeline. For production runs, prepare your company's dataset as outlined in Data Preparation.
Start the analytics service:
Run the analytics pipeline commands. This may take a bit longer than the install, depending on the size of your dataset. For specific information, see Run the Analytics Pipeline.