Environment Setup: GKE
Before deploying the CDM, you must:

- Install required third-party software on your local computer
- Set up a Google Cloud project
- Create and set up a Kubernetes cluster for the CDM
- Configure cloud storage for DS backup
- Set up your local environment to push Docker images
Windows users
ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:
- Hypervisor: Hyper-V, VMware Player, or VMware Workstation
- Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space
- Nested virtualization enabled in the Linux VM
Perform all the procedures in this documentation within the Linux VM. For Windows users, references to the local computer in this documentation mean the Linux VM.
Third-Party Software
Before installing the CDM, you must obtain non-ForgeRock software and install it on your local computer.
ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.
The versions listed in the following table have been validated for deploying the CDM on Google Cloud. Earlier and later versions will probably work. If you want to try using versions that are not listed in the table, it is your responsibility to validate them.
Install the following third-party software:
Software | Version | Homebrew package
---|---|---
Docker Desktop[1] | 2.3.0.3 | docker (cask)
Kubernetes client (kubectl) | 1.18.6 | kubernetes-cli
Skaffold | 1.12.1 | skaffold
Kustomize | 3.8.1 | kustomize
Kubernetes context switcher (kubectx) | 0.9.1 | kubectx
Pulumi | 2.7.1 | pulumi. Do not use …
Helm | 3.2.4_1 | helm
Gradle | 6.5.1 | gradle
Node.js | 12.18.3 | node
Google Cloud SDK | 303.0.0 | google-cloud-sdk (cask)
Google Cloud Project Setup
The CDM runs in a Kubernetes cluster in a Google Cloud project.
This section outlines how the Cloud Deployment Team created and configured our Google Cloud project before we created our cluster.
To replicate Google Cloud project creation and configuration, follow this procedure:
1. Log in to the Google Cloud Console and create a new project.
2. Authenticate to the Google Cloud SDK to obtain the permissions you’ll need to create a cluster:

   a. Configure the Google Cloud SDK standard component to use your Google account. Run the following command:

      $ gcloud auth login

      A browser window appears, prompting you to select a Google account. Select the account you want to use for cluster creation.
      A second screen requests several permissions. Select Allow.
      A third screen should appear with the heading, "You are now authenticated with the Google Cloud SDK!"

   b. Set the Google Cloud SDK configuration to reference your new project. Specify the project ID, not the project name, in the gcloud config set project command:

      $ gcloud config set project my-project-id

   c. Acquire new user credentials to use for Google Cloud SDK application default credentials:

      $ gcloud auth application-default login

      A browser window appears, prompting you to select a Google account. Select the account you want to use for cluster creation.
      A second screen requests the required permission. Select Allow.
      A third screen should appear with the heading, "You are now authenticated with the Google Cloud SDK!"
3. Assign the following roles to users who will be creating Kubernetes clusters and deploying the CDM:

   - Editor
   - Kubernetes Engine Admin
   - Kubernetes Engine Cluster Admin

   Remember, the CDM is a reference implementation, and is not for production use. The roles you assign in this step are suitable for the CDM. When you create a project plan, you’ll need to determine which Google Cloud roles are required.
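   If you prefer the command line, roles like these can also be granted with the Google Cloud SDK. A minimal sketch, using a hypothetical user email (roles/editor, roles/container.admin, and roles/container.clusterAdmin are the IAM identifiers of the three roles above):

      $ gcloud projects add-iam-policy-binding my-project-id \
          --member user:cluster-creator@example.com --role roles/editor
      $ gcloud projects add-iam-policy-binding my-project-id \
          --member user:cluster-creator@example.com --role roles/container.admin
      $ gcloud projects add-iam-policy-binding my-project-id \
          --member user:cluster-creator@example.com --role roles/container.clusterAdmin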
4. As of this writing, the CDM uses the C2 machine type for the DS node pool. Make sure that your project has an adequate quota for this machine type in the region where you’ll deploy the CDM. If the quota is lower than 96 CPUs, request a quota increase to 96 CPUs (or higher) before you create the cluster for the CDM.

   When you create a project plan, you’ll need to determine which machine types are needed and, possibly, increase quotas.
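   For example, you can inspect the current C2 CPU quota for a region with the Google Cloud SDK; us-east1 is shown here as an example region:

      $ gcloud compute regions describe us-east1 | grep -B 1 -A 1 C2_CPUS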
5. Create a service account to use for backup to Google Cloud Storage, and download the service account credential file, which we refer to here as the my-sa-credential.json file.

6. Create a Google Cloud Storage bucket and note the bucket’s Link for gsutil.

7. Grant permissions on the storage bucket to the service account you created for backup.
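   Taken together, steps 5 through 7 might look like the following sketch using the gcloud and gsutil tools. The service account name ds-backup is hypothetical, gs://my-backup-bucket matches the example bucket used later in this page, and roles/storage.admin is one role that permits writing to the bucket (your project plan may call for something narrower):

      $ gcloud iam service-accounts create ds-backup
      $ gcloud iam service-accounts keys create my-sa-credential.json \
          --iam-account ds-backup@my-project-id.iam.gserviceaccount.com
      $ gsutil mb -l us-east1 gs://my-backup-bucket
      $ gsutil iam ch \
          serviceAccount:ds-backup@my-project-id.iam.gserviceaccount.com:roles/storage.admin \
          gs://my-backup-bucket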
Kubernetes Cluster Creation and Setup
Now that you’ve installed third-party software on your local computer and set up a Google Cloud project, you’re ready to create a Kubernetes cluster for the CDM in your project.
The Cloud Deployment Team used Pulumi software to create the CDM cluster. This section describes how the team used Pulumi to create and set up a Kubernetes cluster that can run the CDM. It covers the following topics:

- Getting the forgeops repository
- Installing Node.js dependencies
- Creating the Kubernetes cluster
- Configuring cloud storage for DS backup
- Setting up docker push
forgeops Repository
Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[4]

1. Clone the forgeops repository:

   $ git clone https://github.com/ForgeRock/forgeops.git

   The forgeops repository is a public Git repository. You do not need credentials to clone it.

2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

   $ cd forgeops
   $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
   Switched to a new branch 'my-branch'
Node.js Dependencies
The cluster directory in the forgeops repository contains Pulumi scripts for creating the CDM cluster. The Pulumi scripts are written in TypeScript and run in the Node.js environment. Before running the scripts, you’ll need to install the Node.js dependencies listed in the /path/to/forgeops/cluster/pulumi/package.json file as follows:
1. Change to the /path/to/forgeops/cluster/pulumi directory.

2. Remove any previously installed Node.js dependencies:

   $ rm -rf node_modules

3. Install dependencies:

   $ npm install
   > . . .
   added 292 packages from 447 contributors and audited 295 packages in 22.526s
   . . .
   found 0 vulnerabilities
Kubernetes Cluster Creation
After cloning the forgeops repository and installing Node.js dependencies, you’re ready to create the Kubernetes cluster for the CDM.
This section outlines how the Cloud Deployment Team created our cluster. The cluster has the following characteristics:

- prod, nginx, and cert-manager namespaces created
- NGINX ingress controller deployed
- Certificate Manager deployed
- Prometheus and Grafana monitoring tools deployed
Perform the following procedures to replicate CDM cluster creation:
1. Obtain the following information from your Google Cloud administrator:

   - The ID of the project in which to create the cluster. Be sure to obtain the project ID and the project name.
   - The region in which to create the cluster. The CDM is deployed in three zones within a single region.

     The Cloud Deployment Team deployed the CDM in the us-east1 region. If you want to validate your deployment against the benchmarks in Performance Benchmarks, use this location when deploying the CDM, regardless of your actual location. However, if you would like to deploy the CDM in a different region, you may do so. The region must support C2 instance types (a quick way to check this from the command line is shown below).
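     For example, to confirm that a region offers C2 machine types before you commit to it (us-east1 is shown; substitute your own region):

        $ gcloud compute machine-types list --filter="zone~us-east1 AND name~^c2-" --limit 5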
   Note: ForgeRock provides Pulumi scripts to use for cluster creation. Use them when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore a different infrastructure-as-code solution, if you like. When you Create a Project Plan, you’ll need to identify your organization’s preferred infrastructure-as-code solution, and create your own cluster creation automation scripts, if necessary.

2. Store your Pulumi passphrase in an environment variable:

   $ export PULUMI_CONFIG_PASSPHRASE=my-passphrase

   The default Pulumi passphrase is password.
. -
Log in to Pulumi using the local option or the Pulumi service.
For example, to log in using the local option:
$ pulumi login -l
As of this writing, issues have been encountered when using cloud provider backends for storing Pulumi stacks, a preview feature. Because of this, do not specify a cloud provider backend when logging in to Pulumi.
4. Create networking infrastructure components to support your cluster:

   a. Change to the directory that contains the Google Cloud infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/gcp/infra

   b. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/gcp/infra. If you are not in this directory, Pulumi will create the infrastructure stack incorrectly.

   c. Initialize the infrastructure stack:

      $ pulumi stack init gcp-infra

      Note that initializing a Pulumi stack also selects the stack, so you don’t need to explicitly execute the pulumi stack select command.

   d. Configure the Google Cloud project for the infrastructure stack. Use the project ID you obtained in Step 1:

      $ pulumi config set gcp:project my-project-id

   e. (Optional) If you’re deploying the CDM in a region other than us-east1, configure your infrastructure stack with your region. Use the region you obtained in Step 1:

      $ pulumi config set gcp:region my-region

   f. Create the infrastructure components:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes to proceed.

   g. To verify that Pulumi created the infrastructure components, log in to the Google Cloud console. Select the VPC Networks option. You should see a new network with a public subnet in the VPC Networks list. The new network should be deployed in your region.
5. Create your cluster:

   a. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/gcp/gke

   b. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/gcp/gke. If you are not in this directory, Pulumi will create the CDM stack incorrectly.

   c. Initialize the CDM stack:

      $ pulumi stack init gke-medium

   d. Configure the Google Cloud project for the cluster stack. Use the project ID you obtained in Step 1:

      $ pulumi config set gcp:project my-project-id

   e. Create the cluster:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes to proceed. Pulumi creates the cluster in the same region in which you created the infrastructure stack.

   f. Make a note of the static IP address that Pulumi created. This IP address appears in the output from the pulumi up command. Look for output similar to:

      ip: "35.229.115.150"

      You’ll need this IP address when you deploy the NGINX ingress controller.
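      If you need to see this address again later, you can list the stack outputs at any time; the output key ip matches the sample output above:

         $ pulumi stack output
         $ pulumi stack output ip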
   g. To verify that Pulumi created the cluster, log in to the Google Cloud console. Select the Kubernetes Engine option. You should see the new cluster in the list of Kubernetes clusters.
6. After creating a Kubernetes cluster, Pulumi does not write cluster configuration information to the default Kubernetes configuration file, $HOME/.kube/config. Configure your local computer’s Kubernetes settings so that the kubectl command can access your new cluster:

   a. Verify that the /path/to/forgeops/cluster/pulumi/gcp/gke directory is still your current working directory.

   b. Create a kubeconfig file with your new cluster’s configuration in the current working directory:

      $ pulumi stack output kubeconfig > kubeconfig

   c. Configure Kubernetes to get cluster information from the union of the new kubeconfig file and the default Kubernetes configuration file:

      $ export KUBECONFIG=$PWD/kubeconfig:$HOME/.kube/config

   d. Run the kubectx command.

      The output should contain your newly created cluster and any existing clusters. The current context should be set to the context for your new cluster.
7. Check the status of the pods in your cluster until all the pods are ready:

   a. List all the pods in the cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
      kube-system   event-exporter-v0.3.0-74bf544f8b-ddmp5        2/2     Running   0          61m
      kube-system   fluentd-gke-8g2rc                             2/2     Running   0          61m
      kube-system   fluentd-gke-8ztb6                             2/2     Running   0          61m
      . . .
      kube-system   fluentd-gke-scaler-dd489f778-wdhwr            1/1     Running   0          61m
      . . .
      kube-system   gke-metrics-agent-4fhss                       1/1     Running   0          60m
      kube-system   gke-metrics-agent-82qjl                       1/1     Running   0          60m
      . . .
      kube-system   kube-dns-5dbbd9cc58-8l8xl                     4/4     Running   0          61m
      kube-system   kube-dns-5dbbd9cc58-m5lmj                     4/4     Running   0          66m
      kube-system   kube-dns-autoscaler-6b7f784798-48p9n          1/1     Running   0          66m
      kube-system   kube-proxy-gke-cdm-medium-ds-03b9e239-k67z    1/1     Running   0          61m
      . . .
      kube-system   kube-proxy-gke-cdm-medium-primary-. . .       1/1     Running   0          62m
      . . .
      kube-system   l7-default-backend-84c9fcfbb-9qkvq            1/1     Running   0          66m
      kube-system   metrics-server-v0.3.3-6fb7b9484f-6k65z        2/2     Running   0          61m
      kube-system   prometheus-to-sd-7nh9p                        1/1     Running   0          62m
      . . .
      kube-system   stackdriver-metadata-agent-cluster-. . .      2/2     Running   0          66m

   b. Review the output. Deployment is complete when:

      - The READY column indicates all of a pod’s containers are available. The entry in the READY column shows the number of containers that are ready over the total number of containers in the pod (for example, 2/2).
      - All entries in the STATUS column indicate Running or Completed.

   c. If necessary, continue to query your cluster’s status until all the pods are ready.
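   As an optional alternative to repeated polling, newer kubectl versions can block until the pods become ready. This is a convenience, not part of the documented procedure:

      $ kubectl wait pods --all --all-namespaces --for=condition=Ready --timeout=300s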
Before you perform the next procedure, you must have initialized your CDM cluster by performing the steps in Create a Kubernetes Cluster for the CDM. If you did not set up your cluster using this technique, the cluster might be missing some required configuration.

Also, remember, the CDM is a reference implementation, and is not for production use. Use the NGINX ingress controller when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore deploying a different ingress controller. When you plan your production deployment, you’ll need to determine which ingress controller to use in production.

To deploy the NGINX ingress controller:
1. Deploy the NGINX ingress controller in your cluster. For static-ip-address, specify the IP address obtained when you performed Step 5f of the Create your cluster procedure:

   $ /path/to/forgeops/bin/ingress-controller-deploy.sh -g -i static-ip-address
   namespace/nginx created
   Release "nginx-ingress" does not exist. Installing it now.
   NAME: nginx-ingress
   LAST DEPLOYED: Mon Aug 10 16:14:33 2020
   NAMESPACE: nginx
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   . . .
2. Check the status of the pods in the nginx namespace until all the pods are ready:

   $ kubectl get pods --namespace nginx
   NAME                                             READY   STATUS    RESTARTS   AGE
   nginx-ingress-controller-69b755f68b-9l5n8        1/1     Running   0          4m38s
   nginx-ingress-controller-69b755f68b-hp456        1/1     Running   0          4m38s
   nginx-ingress-default-backend-576b86996d-qxst9   1/1     Running   0          4m38s
3. Get the ingress controller’s external IP address:

   $ kubectl get services --namespace nginx

   The ingress controller’s IP address appears in the EXTERNAL-IP column.
You’ll access ForgeRock Identity Platform services through the ingress controller. The URLs you’ll use must be resolvable from your local computer.
Add an entry similar to the following to your
/etc/hosts
file:ingress-ip-address prod.iam.example.com
For
ingress-ip-address
, specify the IP address that you obtained in the preceding step.
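   If you’d rather capture the address programmatically, a JSONPath query like the following can extract it. The service name nginx-ingress-controller is an assumption based on the pod names above, so verify it against the kubectl get services output first:

      $ kubectl get service nginx-ingress-controller --namespace nginx \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}'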
Use cert-manager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different certificate management tooling, if you like. When you plan for production deployment, you’ll need to determine how you want to manage certificates in production.

To deploy the Certificate Manager:
1. Deploy the Certificate Manager in your cluster:

   $ /path/to/forgeops/bin/certmanager-deploy.sh
   customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
   customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
   namespace/cert-manager created
   serviceaccount/cert-manager-cainjector created
   serviceaccount/cert-manager created
   serviceaccount/cert-manager-webhook created
   clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
   . . .
   service/cert-manager created
   service/cert-manager-webhook created
   deployment.apps/cert-manager-cainjector created
   deployment.apps/cert-manager created
   deployment.apps/cert-manager-webhook created
   mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
   validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
   deployment.extensions/cert-manager-webhook condition met
   clusterissuer.cert-manager.io/default-issuer created
   secret/certmanager-ca-secret created
2. Check the status of the pods in the cert-manager namespace until all the pods are ready:

   $ kubectl get pods --namespace cert-manager
   NAME                                       READY   STATUS    RESTARTS   AGE
   cert-manager-6d5fd89bdf-khj5w              1/1     Running   0          3m57s
   cert-manager-cainjector-7d47d59998-h5b48   1/1     Running   0          3m57s
   cert-manager-webhook-6559cc8549-8vdtp      1/1     Running   0          3m56s
Remember, the CDM is a reference implementation, and is not for production use. Use Prometheus, Grafana, and Alertmanager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different monitoring, reporting, and alerting tooling, if you like. When you create a project plan, you’ll need to determine how you want to implement monitoring, alerts, and reporting in your environment.

To deploy Prometheus, Grafana, and Alertmanager:
1. Deploy Prometheus, Grafana, and Alertmanager in your cluster. You can safely ignore info: skipping unknown hook: "crd-install" messages:

   $ /path/to/forgeops/bin/prometheus-deploy.sh
   namespace/monitoring created
   "stable" has been added to your repositories
   Release "prometheus-operator" does not exist. Installing it now.
   manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
   . . .
   NAME: prometheus-operator
   LAST DEPLOYED: Mon Feb 10 16:47:45 2020
   NAMESPACE: monitoring
   STATUS: deployed
   REVISION: 1
   . . .
   customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
   customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
   customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
   customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
   Release "forgerock-metrics" does not exist. Installing it now.
   NAME: forgerock-metrics
   LAST DEPLOYED: Mon Feb 10 16:48:27 2020
   NAMESPACE: monitoring
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
2. Check the status of the pods in the monitoring namespace until all the pods are ready:

   $ kubectl get pods --namespace monitoring
   NAME                                                      READY   STATUS    RESTARTS   AGE
   alertmanager-prometheus-operator-alertmanager-0           2/2     Running   0          94s
   prometheus-operator-grafana-86dcbfc89-5f9jf               2/2     Running   0          100s
   prometheus-operator-kube-state-metrics-66b4c95cd9-ln2mq   1/1     Running   0          100s
   prometheus-operator-operator-7684f89b74-h2dj2             2/2     Running   0          100s
   prometheus-operator-prometheus-node-exporter-4pt4m        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-59shz        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-5mknp        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-794pr        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-dc5hd        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-pl959        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-qlv9q        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-snckr        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-tgrg7        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-tvs7m        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-w6z54        1/1     Running   0          100s
   prometheus-operator-prometheus-node-exporter-ztvh4        1/1     Running   0          100s
   prometheus-prometheus-operator-prometheus-0               3/3     Running   1          84s
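   With the monitoring stack running, one way to reach the Grafana UI from your local computer is a port forward. The service name prometheus-operator-grafana and port 80 are assumptions based on the pod names above; verify them with kubectl get services --namespace monitoring:

      $ kubectl port-forward --namespace monitoring service/prometheus-operator-grafana 3000:80

   Then browse to http://localhost:3000.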
Cloud Storage for DS Backup
DS data backup is stored in cloud storage. Before you deploy the platform, you should have set up a cloud storage bucket to store the DS data backup. Configure the forgeops artifacts with the location and credentials for the cloud storage bucket:
1. Get the location of the my-sa-credential.json file containing the credential for the service account used for storing DS backup in Google Cloud Storage, as mentioned in Step 5 of Configure a Google Cloud Project for the CDM.

2. Change to the /path/to/forgeops/kustomize/base/7.0/ds/base/ directory.

3. Run the following command:

   $ kubectl create secret generic cloud-storage-credentials \
     --from-file=GOOGLE_CREDENTIALS_JSON=/path/to/my-sa-credential.json \
     --dry-run=client -o yaml > cloud-storage-credentials.yaml
4. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

5. Edit the kustomization.yaml file and set the DSBACKUP_DIRECTORY parameter to the Link for gsutil value you noted in Step 6 of Configure a Google Cloud Project for the CDM. For example, set DSBACKUP_DIRECTORY to gs://my-backup-bucket.
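   Before deploying, you may want to confirm that the credential file actually grants access to the backup bucket. A minimal sanity check, reusing the example names from above:

      $ gcloud auth activate-service-account --key-file /path/to/my-sa-credential.json
      $ gsutil ls gs://my-backup-bucket

   Switch back to your own account afterward with gcloud config set account.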
docker push Setup
In the deployment environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your GKE cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.
For Skaffold to be able to push the Docker images:
- Docker must be running on your local computer.
- Your local computer needs credentials that let Skaffold push the images to the Docker registry available to your cluster.
- Skaffold needs to know the location of the Docker registry.
Perform the following procedure to enable Skaffold to push Docker images to a registry accessible to your cluster:
1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

2. Set up a Docker credential helper:

   $ gcloud auth configure-docker

3. Run the kubectx command to obtain the Kubernetes context.

4. Configure Skaffold with the Docker registry location for your project and the Kubernetes context. Use your project ID (not your project name) and the Kubernetes context that you obtained in the previous step:

   $ skaffold config set default-repo gcr.io/my-project-id -k my-kubernetes-context
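   To confirm the setting took effect, you can inspect Skaffold’s configuration for the context (the -k flag scopes the listing to a kube-context):

      $ skaffold config list -k my-kubernetes-context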
You’re now ready to deploy the CDM.
[4] For initial deployments, it’s simplest to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.