Environment Setup: AKS
Before deploying the CDM, you must:

- Install the required third-party software on your local computer.
- Set up an Azure subscription for the CDM.
- Create and set up a Kubernetes cluster.
- Configure cloud storage for DS backup.
- Set up your local environment to push Docker images.
Windows users
ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:
- Hypervisor: Hyper-V, VMware Player, or VMware Workstation
- Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space
- Nested virtualization enabled in the Linux VM
Perform all the procedures in this documentation within the Linux VM. In this documentation, the local computer refers to the Linux VM for Windows users.
Third-Party Software
Before installing the CDM, you must obtain non-ForgeRock software and install it on your local computer.
ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.
The versions listed in the following table have been validated for deploying the CDM on Microsoft Azure. Earlier and later versions will probably work. If you want to try using versions that are not in the table, it is your responsibility to validate them.
Install the following third-party software:
Software | Version | Homebrew package |
---|---|---|
Docker Desktop[1] | 2.3.0.3 | |
Kubernetes client (kubectl) | 1.18.6 | |
Skaffold | 1.12.1 | |
Kustomize | 3.8.1 | |
Kubernetes context switcher (kubectx) | 0.9.1 | |
Pulumi | 2.7.1 | Do not use Homebrew to install Pulumi. |
Helm | 3.2.4_1 | |
Gradle | 6.5.1 | |
Node.js | 12.18.3 | |
Azure Command Line Interface | 2.10.1 | |
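If you use Homebrew, a command similar to the following installs most of these tools. The formula and cask names shown here are the commonly used Homebrew names, not values taken from the table above, so verify each name and pin the versions listed in the table before installing. Pulumi is omitted because the table says not to install it with Homebrew:

# Illustrative only: install the validated tools with Homebrew, then pin or
# adjust versions to match the table above.
$ brew install kubernetes-cli skaffold kustomize kubectx helm gradle node azure-cli
$ brew install --cask docker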
Azure Subscription Setup
The CDM runs in a Kubernetes cluster in an Azure subscription.
This page outlines how the Cloud Deployment Team created and configured our Azure subscription before we created our cluster.
To replicate Azure subscription creation and configuration, follow this procedure:
1. Assign the following roles to users who will deploy the CDM:

   - Azure Kubernetes Service Cluster Admin Role
   - Azure Kubernetes Service Cluster User Role
   - Contributor
   - User Access Administrator

   Remember, the CDM is a reference implementation, and is not for production use. The roles you assign in this step are suitable for the CDM. When you create a project plan, you'll need to determine which Azure roles are required.
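   For reference, a hedged sketch of granting one of these roles with the Azure CLI follows; the user principal and subscription ID shown are placeholders:

      # Illustrative only: assign the Contributor role to a CDM deployer at
      # subscription scope. Replace the placeholder user and subscription ID.
      $ az role assignment create \
          --assignee deployer@example.com \
          --role "Contributor" \
          --scope "/subscriptions/my-subscription-id"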
2. Log in to Azure services as a user with the roles you assigned in the previous step:

   $ az login --username my-user-name
3. View your current subscription ID:

   $ az account show

4. If necessary, set the current subscription ID to the one you will use to deploy the CDM:

   $ az account set --subscription my-subscription-id
5. Choose the Azure region in which you will deploy the CDM. The Cloud Deployment Team deployed the CDM in the eastus region. To use any other region, note the following:

   - The region must support AKS.
   - The subscription, resource groups, and resources you create for your AKS cluster must reside in the same region.
6. As of this writing, the CDM uses Standard FSv2 Family vCPUs for the DS node pool. Make sure that your subscription has an adequate quota for this vCPU type in the region where you'll deploy the CDM. If the quota is lower than 192 vCPUs, request a quota increase to 192 vCPUs (or higher) before you create the cluster for the CDM.

   When you create a project plan, you'll need to determine which CPU types are needed and, possibly, increase quotas.
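   To check the current quota with the Azure CLI, you can filter the vCPU usage report; the FSv2 family name filter below is an assumption and may need adjusting for your subscription:

      # Show current usage and limits for FSv2-family vCPUs in the target region.
      # If the filter returns nothing, inspect the unfiltered "az vm list-usage" output.
      $ az vm list-usage --location eastus --output table \
          --query "[?contains(name.value, 'FSv2')]"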
7. DS data backup is stored in cloud storage. Before you deploy the CDM, create an Azure Blob Storage container to store the DS data backup as blobs, and note its access link.

   For more information on how to create and use Azure Blob Storage, see Quickstart: Create, download, and list blobs with Azure CLI.
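   As a hedged sketch, creating a storage account and a backup container with the Azure CLI might look like the following; the account name, container name, resource group, and SKU are placeholders:

      # Illustrative only: create a storage account and a blob container for DS backup.
      # Storage account names must be globally unique; replace all placeholder values.
      $ az storage account create \
          --name mydsbackupaccount \
          --resource-group my-resource-group \
          --location eastus \
          --sku Standard_LRS
      $ az storage container create \
          --name ds-backup \
          --account-name mydsbackupaccount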
8. The CDM uses Azure Container Registry (ACR) for storing Docker images. If you do not have a container registry in your subscription, create one.
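   If you need to create a registry, a minimal sketch with the Azure CLI follows; the registry name, resource group, and SKU are placeholders:

      # Illustrative only: create an Azure Container Registry.
      # Registry names must be globally unique; replace the placeholder values.
      $ az acr create \
          --name myforgerockregistry \
          --resource-group my-resource-group \
          --sku Standard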
Kubernetes Cluster Creation and Setup
Now that you’ve installed third-party software on your local computer and set up an Azure subscription, you’re ready to create a Kubernetes cluster for the CDM in your project.
The Cloud Deployment Team used Pulumi software to create the CDM cluster. This section describes how the team used Pulumi to create and set up a Kubernetes cluster that can run the CDM.
It covers getting the forgeops repository, installing Node.js dependencies, creating the Kubernetes cluster, and deploying the NGINX ingress controller, the Certificate Manager, and the monitoring tools.
forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[4]

1. Clone the forgeops repository:

   $ git clone https://github.com/ForgeRock/forgeops.git

   The forgeops repository is a public Git repository. You do not need credentials to clone it.

2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

   $ cd forgeops
   $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
   Switched to a new branch 'my-branch'
Node.js Dependencies
The cluster directory in the forgeops repository contains Pulumi scripts for creating the CDM cluster. The Pulumi scripts are written in TypeScript and run in the Node.js environment. Before running the scripts, you'll need to install the Node.js dependencies listed in the /path/to/forgeops/cluster/pulumi/package.json file as follows:

1. Change to the /path/to/forgeops/cluster/pulumi directory.

2. Remove any previously installed Node.js dependencies:

   $ rm -rf node_modules

3. Install dependencies:

   $ npm install
   . . .
   added 292 packages from 447 contributors and audited 295 packages in 17.169s
   . . .
   found 0 vulnerabilities
Kubernetes Cluster Creation
After cloning the forgeops repository and installing Node.js dependencies, you're ready to create the Kubernetes cluster for the CDM.

This section outlines how the Cloud Deployment Team created our cluster. The cluster has the following characteristics:

- prod, nginx, and cert-manager namespaces created
- NGINX ingress controller deployed
- Certificate Manager deployed
- Prometheus and Grafana monitoring tools deployed
Perform the following procedures to replicate CDM cluster creation:
1. Obtain the following information from your AKS administrator:

   - The Azure region and zones in which you will create the cluster. The CDM is deployed within a single region.

     The Cloud Deployment Team deployed the CDM in the eastus region. If you want to validate your deployment against the benchmarks in Performance Benchmarks, use this region when you deploy the CDM, regardless of your actual location. However, if you would like to deploy the CDM in a different Azure region, you may do so. The region must support Standard FSv2 Family vCPUs.

   - The name of your Azure Container Registry.

   - The name of the resource group associated with your Azure Container Registry.
2. ForgeRock provides Pulumi scripts to use for cluster creation. Use them when you deploy the CDM. After you've finished deploying the CDM, you can use the CDM as a sandbox to explore a different infrastructure-as-code solution, if you like. When you create a project plan, you'll need to identify your organization's preferred infrastructure-as-code solution, and create your own cluster creation automation scripts, if necessary.

   Store your Pulumi passphrase in an environment variable:

   $ export PULUMI_CONFIG_PASSPHRASE=my-passphrase

   The default Pulumi passphrase is password.
3. Log in to Pulumi using the local option or the Pulumi service. For example, to log in using the local option:

   $ pulumi login -l

   As of this writing, issues have been encountered when using cloud provider backends for storing Pulumi stacks, a preview feature. Because of this, do not specify a cloud provider backend when logging in to Pulumi.
4. Create infrastructure components to support your cluster:

   a. Change to the directory that contains the Azure infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/azure/infra

   b. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/azure/infra. If you are not in this directory, Pulumi will create the infrastructure stack incorrectly.

   c. Initialize the infrastructure stack:

      $ pulumi stack init azure-infra

      Note that initializing a Pulumi stack also selects the stack, so you don't need to explicitly execute the pulumi stack select command.

   d. (Optional) If you're deploying the CDM in a region other than eastus, configure your infrastructure stack with your region. Use the region you obtained in Step 1:

      $ pulumi config set azure-infra:location my-region

   e. Configure the Azure resource group for Azure Container Registry. Use the resource group you obtained in Step 1:

      $ pulumi config set azure-infra:acrResourceGroupName my-resource-group

   f. Create the infrastructure components:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes if you want to proceed.

   g. To verify that Pulumi created the infrastructure components, log in to the Azure console. Display the resource groups. You should see a new resource group named azure-infra-ip-resource-group.
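      If you prefer the CLI to the Azure console for this check, listing resource groups works too; this is an optional, hedged alternative:

      # Optional CLI alternative: confirm that azure-infra-ip-resource-group
      # appears in the list of resource groups.
      $ az group list --output table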
5. Create your cluster:

   a. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/azure/aks

   b. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/azure/aks. If you are not in this directory, Pulumi will create the CDM stack incorrectly.

   c. Initialize the CDM stack:

      $ pulumi stack init aks-medium

   d. (Optional) If you're deploying the CDM in a region other than eastus, configure your CDM stack with your region. Use the region you obtained in Step 1:

      $ pulumi config set aks:location my-aks-region

   e. Configure the Azure resource group for Azure Container Registry. Use the resource group you obtained in Step 1:

      $ pulumi config set aks:acrResourceGroupName my-resource-group

   f. Create the cluster:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes if you want to proceed.

   g. Make a note of the static IP address that Pulumi reserved. The address appears in the output from the pulumi up command. Look for output similar to:

      staticIpAddress : "123.134.145.156"

      You'll need the IP address when you deploy the NGINX ingress controller.

   h. Verify that Pulumi created the cluster using the Azure console.
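      If the address scrolls out of view, you can print it again from the stack outputs; this assumes the output key is named staticIpAddress, as shown in the pulumi up output above:

      # Re-display the reserved static IP address from the aks-medium stack's outputs.
      $ pulumi stack output staticIpAddress
      123.134.145.156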
6. After creating a Kubernetes cluster, Pulumi does not write cluster configuration information to the default Kubernetes configuration file, $HOME/.kube/config. Configure your local computer's Kubernetes settings so that the kubectl command can access your new cluster:

   a. Verify that the /path/to/forgeops/cluster/pulumi/azure/aks directory is still your current working directory.

   b. Create a kubeconfig file with your new cluster's configuration in the current working directory:

      $ pulumi stack output kubeconfig > kubeconfig

   c. Configure Kubernetes to get cluster information from the union of the new kubeconfig file and the default Kubernetes configuration file:

      $ export KUBECONFIG=$PWD/kubeconfig:$HOME/.kube/config

   d. Run the kubectx command. The output should contain your newly created cluster and any existing clusters. The current context should be set to the context for your new cluster.
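      If the current context is not your new cluster, you can switch to it with kubectx; the context name below is a placeholder:

      # Switch the current kubectl context to the new cluster (placeholder name).
      $ kubectx my-new-cluster-context
      Switched to context "my-new-cluster-context".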
7. Check the status of the pods in your cluster until all the pods are ready:

   a. List all the pods in the cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
      kube-system   azure-cni-networkmonitor-gsbkg         1/1     Running   0          3m38s
      kube-system   azure-cni-networkmonitor-k26mc         1/1     Running   0          3m40s
      kube-system   azure-cni-networkmonitor-ng4qn         1/1     Running   0          8m40s
      . . .
      kube-system   azure-ip-masq-agent-4kkpg              1/1     Running   0          8m40s
      kube-system   azure-ip-masq-agent-6r699              1/1     Running   0          8m40s
      . . .
      kube-system   coredns-698c77c5d7-k6q9h               1/1     Running   0          9m
      kube-system   coredns-698c77c5d7-knwwm               1/1     Running   0          9m
      kube-system   coredns-autoscaler-. . .               1/1     Running   0          9m
      . . .
      kube-system   kube-proxy-5ztxd                       1/1     Running   0          8m23s
      kube-system   kube-proxy-6th8b                       1/1     Running   0          9m6s
      . . .
      kube-system   metrics-server-69df9f75bf-fc4pn        1/1     Running   1          20m
      kube-system   tunnelfront-5b56b76594-6wzps           2/2     Running   1          20m

   b. Review the output. Deployment is complete when:

      - The READY column indicates all running containers are available. The entry in the READY column represents [total number of containers/number of available containers].
      - All entries in the STATUS column indicate Running or Completed.

   c. If necessary, continue to query your cluster's status until all the pods are ready.
Before you perform this procedure, you must have initialized your CDM cluster by performing the steps in Create a Kubernetes Cluster for the CDM. If you did not set up your cluster using this technique, the cluster might be missing some required configuration.

Use the NGINX ingress controller when you deploy the CDM. After you've finished deploying the CDM, you can use the CDM as a sandbox to explore deploying a different ingress controller.

Remember, the CDM is a reference implementation and not for production use. When you create a project plan, you'll need to determine which ingress controller to use in production.

1. Deploy the NGINX ingress controller in your cluster. For static-ip-address, specify the IP address obtained when you performed Step 5.g of Create a Kubernetes Cluster for the CDM:

   $ /path/to/forgeops/bin/ingress-controller-deploy.sh \
     -a -i static-ip-address -r azure-infra-ip-resource-group
   namespace/nginx created
   Release "nginx-ingress" does not exist. Installing it now.
   NAME: nginx-ingress
   . . .
2. Check the status of the services in the nginx namespace to note the external IP address for the ingress controller:

   $ kubectl get services --namespace nginx
   NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)   AGE
   nginx-ingress-controller        LoadBalancer   10.0.131.150   52.152.192.41   80…       23s
   nginx-ingress-default-backend   ClusterIP      10.0.146.216   none            80…       23s

3. You'll access ForgeRock Identity Platform services through the ingress controller. The URLs you'll use must be resolvable from your local computer.

   Add an entry similar to the following to your /etc/hosts file:

   ingress-ip-address prod.iam.example.com

   For ingress-ip-address, specify the external IP address of the ingress controller service shown in the output of the previous command.
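   To confirm that the host name resolves from your local computer, a quick check such as the following can help (getent is available on most Linux systems; on macOS, use ping or dscacheutil instead):

      # Verify that prod.iam.example.com resolves to the ingress controller's external IP.
      $ getent hosts prod.iam.example.com
      52.152.192.41   prod.iam.example.com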
Use cert-manager when you deploy the CDM. After you've finished deploying the CDM, you can use the CDM as a sandbox to explore different certificate management tooling, if you like. Remember, the CDM is not for production use. When you create a project plan, you'll need to determine how you want to manage certificates in production.

1. Deploy the Certificate Manager in your cluster:

   $ /path/to/forgeops/bin/certmanager-deploy.sh
   customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
   namespace/cert-manager created
   serviceaccount/cert-manager-cainjector created
   serviceaccount/cert-manager created
   serviceaccount/cert-manager-webhook created
   clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
   . . .
   service/cert-manager created
   service/cert-manager-webhook created
   deployment.apps/cert-manager-cainjector created
   deployment.apps/cert-manager created
   deployment.apps/cert-manager-webhook created
   mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
   validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
   deployment.extensions/cert-manager-webhook condition met
   clusterissuer.cert-manager.io/default-issuer created
   secret/certmanager-ca-secret created
2. Check the status of the pods in the cert-manager namespace until all the pods are ready:

   $ kubectl get pods --namespace cert-manager
   NAME                                       READY   STATUS    RESTARTS   AGE
   cert-manager-6d5fd89bdf-khj5w              1/1     Running   0          3m57s
   cert-manager-cainjector-7d47d59998-h5b48   1/1     Running   0          3m57s
   cert-manager-webhook-6559cc8549-8vdtp      1/1     Running   0          3m56s
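   As an optional extra check, not part of the original procedure, you can confirm that the cluster issuer created by the deployment script exists; the resource name default-issuer comes from the script output above, and the exact columns shown depend on your cert-manager version:

      # Optional check: confirm the default-issuer ClusterIssuer created above is present.
      $ kubectl get clusterissuers
      NAME             READY   AGE
      default-issuer   True    4m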
Use Prometheus, Grafana, and Alertmanager when you deploy the CDM. After you've finished deploying the CDM, you can use the CDM as a sandbox to explore different monitoring, reporting, and alerting tooling. When you create a project plan, you'll need to determine how you want to implement monitoring, alerts, and reporting in your production environment.

1. Deploy Prometheus, Grafana, and Alertmanager in your cluster. You can safely ignore info: skipping unknown hook: "crd-install" messages:

   $ /path/to/forgeops/bin/prometheus-deploy.sh
   namespace/monitoring created
   "stable" has been added to your repositories
   Release "prometheus-operator" does not exist. Installing it now.
   manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
   . . .
   NAME: prometheus-operator
   LAST DEPLOYED: Mon Aug 17 16:47:45 2020
   NAMESPACE: monitoring
   STATUS: deployed
   REVISION: 1
   . . .
   customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
   customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
   customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
   customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
   Release "forgerock-metrics" does not exist. Installing it now.
   NAME: forgerock-metrics
   LAST DEPLOYED: Mon Aug 17 16:48:27 2020
   NAMESPACE: monitoring
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
2. Check the status of the pods in the monitoring namespace until all the pods are ready:

   $ kubectl get pods --namespace monitoring
   NAME                                                 READY   STATUS    RESTARTS   AGE
   alertmanager-prometheus-operator-alertmanager-0      2/2     Running   0          5m8s
   prometheus-operator-grafana-7b8598c98f-glhmn         2/2     Running   0          5m16s
   prometheus-operator-kube-state-metrics-. . .         1/1     Running   0          5m16s
   prometheus-operator-operator-55966c69dd-76v46        2/2     Running   0          5m16s
   prometheus-operator-prometheus-node-exporter-2tc48   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-4p4mr   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-4vz75   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-5vbnw   1/1     Running   0          3m32s
   prometheus-operator-prometheus-node-exporter-9vflt   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-bhmzn   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-hdjqm   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-hxwzw   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-kbrm9   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-ktpfs   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-sm85n   1/1     Running   0          3m31s
   prometheus-operator-prometheus-node-exporter-xntgk   1/1     Running   0          3m31s
   prometheus-prometheus-operator-prometheus-0          3/3     Running   1          4m57s
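   If you want a quick look at the Grafana dashboards, a hedged sketch is to port-forward to the Grafana service; the service name and port below are assumptions inferred from the pod names above, so check the first command's output and adjust:

      # List the monitoring services, then forward a local port to Grafana.
      # Replace the service name and port with the values reported by the first command.
      $ kubectl get services --namespace monitoring
      $ kubectl port-forward --namespace monitoring service/prometheus-operator-grafana 3000:80

   Then browse to http://localhost:3000 on your local computer.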
Cloud Storage for DS Backup

DS data backup is stored in cloud storage. Before you deploy the platform on Azure, you should have set up an Azure Blob Storage container to store the DS data backup. Then configure the forgeops artifacts with the location and credentials for the container:

1. Get the access link to the Azure Blob Storage container that you plan to use for DS backup.

2. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

3. Edit the kustomization.yaml file and set the DSBACKUP_DIRECTORY parameter to the Azure link of the DS data backup.

4. Change to the /path/to/forgeops/kustomize/base/7.0/ds/base directory.

5. Run the following command to create the cloud-storage-credentials.yaml file:

   $ kubectl create secret generic cloud-storage-credentials \
     --from-literal=AZURE_ACCOUNT_KEY_ID=my-account-key \
     --from-literal=AZURE_ACCOUNT_NAME=my-account-name \
     --dry-run=client -o yaml > cloud-storage-credentials.yaml
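   The account name and key used above come from the storage account that holds your backup container. A hedged way to look up the key with the Azure CLI follows; the account and resource group names are placeholders:

      # Illustrative only: list the access keys for the backup storage account.
      # Use one of the returned key values for my-account-key.
      $ az storage account keys list \
          --account-name mydsbackupaccount \
          --resource-group my-resource-group \
          --output table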
docker push Setup
In the deployment environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your AKS cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.
For Skaffold to push the Docker images:

- Docker must be running on your local computer.
- Your local computer needs credentials that let Skaffold push the images to the Docker repository available to your cluster.
- Skaffold needs to know the location of the Docker repository.
Perform the following procedure to let Skaffold push Docker images to a registry accessible to your cluster:
1. If it's not already running, start Docker on your local computer. For more information, see the Docker documentation.

2. If you don't already have the name of the container registry that will hold ForgeRock Docker images, obtain it from your Azure administrator.

3. Log in to your container registry:

   $ az acr login --name registry-name

   Azure repository logins expire after 4 hours. Because of this, you'll need to log in to ACR whenever your login session expires.[5]
4. Run the kubectx command to obtain the Kubernetes context.

5. Configure Skaffold with your Docker repository location and Kubernetes context:

   $ skaffold config set default-repo registry-name.azurecr.io/cdm -k my-kubernetes-context

   For example:

   $ skaffold config set default-repo my-container-registry.azurecr.io/cdm -k aks
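   To confirm the setting, you can display the Skaffold configuration saved for your context; skaffold config list is assumed to be available in the Skaffold version listed in the third-party software table:

      # Show the Skaffold configuration stored for your Kubernetes context.
      $ skaffold config list -k my-kubernetes-context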
You’re now ready to deploy the CDM.
[4] Clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you'll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization's common upstream repository. For more information, see About the forgeops Repository.
[5] You can automate logging in to ACR before your session expires, for example, by using the cron utility.