ForgeOps Documentation

ForgeRock provides a number of resources to help you get started in the cloud. These resources demonstrate how to deploy the ForgeRock Identity Platform on Kubernetes.

The ForgeRock Identity Platform serves as the basis for our simple and comprehensive identity and access management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.

Start Here

ForgeRock provides several resources to help you get started in the cloud. These resources demonstrate how to deploy ForgeRock Identity Platform™ (the platform) on Kubernetes. Before you proceed, review the following precautions:

  • Deploying ForgeRock software in a containerized environment requires advanced proficiency in many technologies. See Assess Your Skill Level for details.

  • If you don’t have experience with complex Kubernetes deployments, then either engage a certified ForgeRock consulting partner or deploy the platform on traditional architecture.

  • Don’t deploy ForgeRock software in Kubernetes in production until you have successfully deployed and tested the software in a non-production Kubernetes environment.

For information about obtaining support for ForgeRock Identity Platform software, see Support From ForgeRock.

Introducing the CDK and CDM

The forgeops repository and DevOps documentation address a range of our customers' typical business needs. The repository contains artifacts for two primary resources to help you with cloud deployment:

  • Cloud Developer’s Kit (CDK). The CDK is a minimal sample deployment for development purposes. Developers deploy the CDK, and then access AM’s and IDM’s GUI consoles and REST APIs to configure the platform and build customized Docker images for the platform.

  • Cloud Deployment Model (CDM). The CDM is a reference implementation for ForgeRock cloud deployments. You can get a sample ForgeRock Identity Platform deployment up and running in the cloud quickly using the CDM. After deploying the CDM, you can use it to explore how you might configure your Kubernetes cluster before you deploy the platform in production.

    The CDM is a robust sample deployment for demonstration and exploration purposes only. It is not a production deployment.

Here’s how the CDK and the CDM compare:

CDK:

  • Fully integrated AM, IDM, and DS installations
  • Randomly generated secrets
  • Resource requirements: a namespace in a GKE, EKS, AKS, or Minikube cluster
  • Can run on Minikube

CDM:

  • Fully integrated AM, IDM, and DS installations
  • Randomly generated secrets
  • Resource requirements: a dedicated GKE, EKS, or AKS cluster
  • Multi-zone high availability
  • Replicated directory services
  • Ingress configuration
  • Certificate management
  • Prometheus monitoring, Grafana reporting, and alert management

ForgeRock’s DevOps documentation helps you deploy the CDK and CDM:

  • Cloud Developer’s Kit Documentation. Tells you how to install the CDK, modify the AM and IDM configurations, and create customized Docker images for the ForgeRock Identity Platform.

  • Cloud Deployment Model Documentation. Tells you how to quickly create a Kubernetes cluster on Google Cloud, Amazon Web Services (AWS), or Microsoft Azure, install the ForgeRock Identity Platform, access components in the deployment, and run lightweight benchmarks to test DS, AM, and IDM performance.

  • Deployment Topics Documentation. Contains how-tos for customizing monitoring, setting alerts, backing up and restoring directory data, and modifying CDM’s default security configuration.

  • DevOps Release Notes. Keeps you up-to-date with the latest changes to the forgeops repository.

Try Out the CDK and the CDM

Before you start planning a production deployment, deploy either the CDK or the CDM—or both. If you’re new to Kubernetes, or new to the ForgeRock Identity Platform, deploying these resources is a great way to learn. When you’ve finished deploying them, you’ll have sandboxes suitable for exploring ForgeRock cloud deployment.

Deploy the CDK

Illustrates the major tasks performed to get the CDK running as a sample environment.

The CDK is a minimal sample deployment of the ForgeRock Identity Platform. If you have access to a GKE, EKS, or AKS cluster, you can install the CDK in a namespace on your cluster. But even if you don’t have access to a cloud-based cluster, you can still deploy the CDK on a local computer running Minikube, and when you’re done, you’ll have a namespace on a local Kubernetes cluster with the ForgeRock Identity Platform.


Deploy the CDM

Illustrates the major tasks performed to deploy the CDM.

Deploy the CDM on Google Cloud, AWS, or Microsoft Azure to quickly spin up the platform for demonstration purposes. You’ll get a feel for what it’s like to deploy the platform on a Kubernetes cluster in the cloud. When you’re done, you won’t have a production-quality deployment. But you will have a robust, reference implementation of the platform.

After you get the CDM up and running, you can use it to test deployment customizations—options that you might want to use in production, but are not part of the CDM. Examples include, but are not limited to: securing communications with a TLS certificate dynamically obtained from Let’s Encrypt; using an ingress controller other than the NGINX ingress controller; resizing the cluster to meet your business requirements; and configuring Alertmanager to issue alerts when usage thresholds have been reached.


Build Your Own Service

Illustrates the major tasks performed when building a production deployment of ForgeRock Identity Platform in the cloud.

Perform the following activities to customize, deploy, and maintain a production ForgeRock Identity Platform implementation in the cloud:

Create a Project Plan

Illustrates the major tasks performed when planning a production deployment of ForgeRock Identity Platform in the cloud.

After you’ve spent some time exploring the CDK and CDM, you’re ready to define requirements for your production deployment. Remember, the CDM is not a production deployment. Use the CDM to explore deployment customizations, and incorporate the lessons you’ve learned as you build your own production service.

Analyze your business requirements and define how the ForgeRock Identity Platform needs to be configured to meet your needs. Identify systems to be integrated with the platform, such as identity databases and applications, and plan to perform those integrations. Assess and specify your deployment infrastructure requirements, such as backup, system monitoring, Git repository management, CI/CD, quality assurance, security, and load testing.


Configure the Platform

Illustrates the major tasks performed to configure the ForgeRock Identity Platform before deploying in production.

With your project plan defined, you’re ready to configure the ForgeRock Identity Platform to meet the plan’s requirements. Install the CDK on your developers' computers. Configure AM and IDM. If needed, include integrations with external applications in the configuration. Iteratively unit test your configuration as you modify it. Build customized Docker images that contain the configuration.


Configure Your Cluster

Illustrates the major tasks performed to configure the cluster before deploying in production.

With your project plan defined, you’re ready to configure a Kubernetes cluster that meets the requirements defined in the plan. Install the platform using the customized Docker images developed in Configure the Platform. Provision the ForgeRock identity repository with users, groups, and other identity data. Load test your deployment, and then size your cluster to meet service level agreements. Perform integration tests. Harden your deployment. Set up CI/CD for your deployment. Create monitoring alerts so that your site reliability engineers are notified when the system reaches thresholds that affect your SLAs. Implement database backup and test database restore. Simulate failures while under load to make sure your deployment can handle them.


Stay Up and Running

Illustrates the major tasks performed to keep a ForgeRock Identity Platform deployment up and running in production.

By now, you’ve configured the platform, configured a Kubernetes cluster, and deployed the platform with your customized configuration. Run your ForgeRock Identity Platform deployment in your cluster, continually monitoring it for performance and reliability. Take backups as needed.


Assess Your Skill Level

Benchmarking and Load Testing

I can:

  • Write performance tests, using tools such as Gatling and Apache JMeter, to ensure that the system meets required performance thresholds and service level agreements (SLAs).

  • Resize a Kubernetes cluster, taking into account performance test results, thresholds, and SLAs.

  • Run Linux performance monitoring utilities, such as top.

CI/CD for Cloud Deployments

I have experience:

  • Designing and implementing a CI/CD process for a cloud-based deployment running in production.

  • Using a cloud CI/CD tool, such as Tekton, Google Cloud Build, Codefresh, AWS CloudFormation, or Jenkins, to implement a CI/CD process for a cloud-based deployment running in production.

  • Integrating GitOps into a CI/CD process.

Docker

I know how to:

  • Write Dockerfiles.

  • Create Docker images, and push them to a private Docker registry.

  • Pull and run images from a private Docker registry.

I understand:

  • The concepts of Docker layers, and building images based on other Docker images using the FROM instruction.

  • The difference between the COPY and ADD instructions in a Dockerfile.

Git

I know how to:

  • Use a Git repository collaboration framework, such as GitHub, GitLab, or Bitbucket Server.

  • Perform common Git operations, such as cloning and forking repositories, branching, committing changes, submitting pull requests, merging, viewing logs, and so forth.

External Application and Database Integration

I have expertise in:

  • AM policy agents.

  • Configuring AM policies.

  • Synchronizing and reconciling identity data using IDM.

  • Managing cloud databases.

  • Connecting ForgeRock Identity Platform components to cloud databases.

ForgeRock Identity Platform

I have:

  • Attended ForgeRock University training courses.

  • Deployed the ForgeRock Identity Platform in production, and kept the deployment highly available.

  • Configured DS replication.

  • Passed the ForgeRock Certified Access Management and ForgeRock Certified Identity Management exams (highly recommended).

Google Cloud, AWS, or Azure (Basic)

I can:

  • Use the graphical user interface for Google Cloud, AWS, or Azure to navigate, browse, create, and remove Kubernetes clusters.

  • Use the cloud provider’s tools to monitor a Kubernetes cluster.

  • Use the command-line interface for Google Cloud, AWS, or Azure.

  • Administer cloud storage.

Google Cloud, AWS, or Azure (Expert)

In addition to the skills and expertise listed in Google Cloud, AWS, or Azure (Basic), I can:

  • Read the Pulumi scripts in the forgeops repository to see how the CDM cluster is configured.

  • Create and manage a Kubernetes cluster using an infrastructure-as-code tool such as Pulumi, Terraform, or AWS CloudFormation.

  • Configure multi-zone and multi-region Kubernetes clusters.

  • Configure cloud-provider identity and access management (IAM).

  • Configure virtual private clouds (VPCs) and VPC networking.

  • Manage keys in the cloud using a service such as Google Key Management Service (KMS), Amazon KMS, or Azure Key Vault.

  • Configure and manage DNS domains on Google Cloud, AWS, or Azure.

  • Troubleshoot a deployment running in the cloud using the cloud provider’s tools, such as Google Stackdriver, Amazon CloudWatch, or Azure Monitor.

  • Integrate a deployment with certificate management tools, such as cert-manager and Let’s Encrypt.

  • Integrate a deployment with monitoring and alerting tools, such as Prometheus and Alertmanager.

I have obtained one of the following certifications (highly recommended):

  • Google Certified Associate Cloud Engineer Certification.

  • AWS professional-level or associate-level certifications (multiple).

  • Azure Administrator.

Integration Testing

I can:

  • Automate QA testing using a test automation framework.

  • Design a chaos engineering test for a cloud-based deployment running in production.

  • Use chaos engineering testing tools, such as Chaos Monkey.

Kubernetes (Basic)

I’ve gone through the tutorials at kubernetes.io, and am able to:

  • Use the kubectl command to determine the status of all the pods in a namespace, and to determine whether pods are operational.

  • Use the kubectl describe pod command to perform basic troubleshooting on pods that are not operational.

  • Use the kubectl command to obtain information about namespaces, secrets, deployments, and stateful sets.

  • Use the kubectl command to manage persistent volumes and persistent volume claims.
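
For reference, a few commands of the kind this list implies (the namespace and pod names here are placeholders):

    $ kubectl get pods --namespace my-namespace
    $ kubectl describe pod am-6f9cb86bdc-xxxxx
    $ kubectl get namespaces
    $ kubectl get secrets,deployments,statefulsets --namespace my-namespace
    $ kubectl get pv
    $ kubectl get pvc --namespace my-namespace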

Kubernetes (Expert)

In addition to the skills and expertise listed in Kubernetes (Basic), I have:

  • Configured role-based access to cloud resources.

  • Configured Kubernetes objects, such as deployments and stateful sets.

  • Configured Kubernetes ingresses.

  • Passed the Cloud Native Certified Kubernetes Administrator exam (highly recommended).

Project Planning and Management for Cloud Deployments

I have planned and managed:

  • A production deployment in the cloud.

  • A production deployment of ForgeRock Identity Platform.

Security and Hardening for Cloud Deployments

I can:

  • Harden a ForgeRock Identity Platform deployment.

  • Configure TLS, including mutual TLS, for a multi-tiered cloud deployment.

  • Configure cloud identity and access management and role-based access control for a production deployment.

  • Configure encryption for a cloud deployment.

  • Configure Kubernetes pod security and network security policies.

  • Configure private Kubernetes networks, deploying bastion servers as needed.

  • Undertake threat modeling exercises.

  • Scan Docker images to ensure container security.

  • Configure and use private Docker container registries.

Site Reliability Engineering for Cloud Deployments

I can:

  • Manage multi-zone and multi-region deployments.

  • Implement DS backup and restore in order to recover from a database failure.

  • Manage cloud disk availability issues.

  • Analyze monitoring output and alerts, and respond should a failure occur.

  • Obtain logs from all the software components in my deployment.

  • Follow the cloud provider’s recommendations for patching and upgrading software in my deployment.

  • Implement an upgrade scheme, such as blue/green or rolling upgrades, in my deployment.

  • Create a Site Reliability Runbook for the deployment, documenting all the procedures to be followed and other relevant information.

  • Follow all the procedures in the project’s Site Reliability Runbook, and revise the runbook if it becomes out-of-date.

About the forgeops Repository

Use ForgeRock’s forgeops repository to customize and deploy the ForgeRock Identity Platform on a Kubernetes cluster.

The repository contains files needed for customizing and deploying the ForgeRock Identity Platform on a Kubernetes cluster:

  • Files used to build Docker images for the ForgeRock Identity Platform:

    • Dockerfiles

    • Scripts and configuration files incorporated into ForgeRock’s Docker images

    • Canonical configuration profiles for the platform

  • Kustomize bases and overlays

  • Skaffold configuration files

In addition, the repository contains numerous utility scripts and sample files. The scripts and samples are useful for:

  • Deploying ForgeRock’s CDM quickly and easily

  • Exploring monitoring, alerts, and security customization

  • Modeling a CI/CD solution for cloud deployment

See Repository Reference for information about the files in the repository, recommendations about how to work with them, and the support status for the files.

Repository Updates

Every few weeks, ForgeRock tags the forgeops repository. Each tag marks a new release of the repository, with enhancements and bug fixes.

The current release tag is 2020.08.07-ZucchiniRicotta.1.

When you start working with the forgeops repository:

  1. Clone the forgeops repository.

    Depending on your organization’s setup, you’ll clone the repository either from ForgeRock’s public repository on GitHub, or from a fork. See Git Clone or Git Fork? for more information.

  2. Check out the current release tag and create a branch based on the tag. For example:

    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-working-branch
  3. Start watching the repository on GitHub, so that you’ll receive notifications about new releases.

ForgeRock recommends that you regularly incorporate changes from newer release tags into your working branch. When GitHub notifies you that a new release is available:

  1. Read the Release Notes, which contain information about the new release tag.

  2. Use the git fetch --tags command to fetch new forgeops repository tags into your clone.

  3. Create a new branch based on the new release tag:

    $ git checkout tags/new-release-tag -b releases/new-release-tag
  4. Rebase the commits from the new branch into your working branch in your forgeops repository clone.

    If you like, you can remove the branch based on the new release tag when you’ve finished rebasing.
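
For example, a minimal sketch of steps 2-4, assuming your working branch is named my-working-branch and the new tag is new-release-tag:

    $ git fetch --tags
    $ git checkout tags/new-release-tag -b releases/new-release-tag
    $ git checkout my-working-branch
    $ git rebase releases/new-release-tag
    $ git branch -d releases/new-release-tag  # optional: remove the tag-based branch when you've finished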

It’s important to understand the impact of rebasing changes from a new release tag from ForgeRock into your branches. Repository Reference provides advice about which files in the forgeops repository to change, which files not to change, and what to look out for when you rebase. Follow the advice in Repository Reference to reduce merge conflicts, and to better understand how to resolve them when you rebase changes from a new release tag.

Repository Reference

Directories

bin

Example scripts you can use or model for a variety of deployment tasks.

Recommendation: Don’t modify the files in this directory. If you want to add your own scripts to the forgeops repository, create a subdirectory under bin, and store your scripts there.

Support Status: Sample files. Not supported by ForgeRock.

cicd

Example files for working with several CI/CD systems, including Tekton and Google Cloud Build.

Recommendation: Don’t modify the files in this directory. If you want to add your own CI/CD support files to the forgeops repository, create a subdirectory under cicd, and store your files there.

Support Status: Sample files. Not supported by ForgeRock.

cluster

Example scripts and artifacts that automate cluster creation.

Recommendation: When you deploy the CDM, you run pulumi config set commands to configure your cluster. The configuration is stored in YAML files in the cluster/pulumi subdirectory. Your modifications will cause merge conflicts when you rebase changes from a new release tag into your branches. When you resolve merge conflicts after a rebase, be sure to keep the modifications to the YAML files if you need them.
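
For example, a hypothetical configuration change might look like the following. The project directory and configuration key are illustrative only; check the YAML files under cluster/pulumi for the keys your cluster actually uses:

    $ cd cluster/pulumi/gcp/gke      # example project directory; yours may differ
    $ pulumi config set nodeCount 5  # hypothetical key
    $ pulumi up                      # preview and apply the change

Because pulumi config set writes values into the stack’s YAML configuration file, these are the modifications you’ll need to preserve when resolving merge conflicts after a rebase.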

Don’t make any additional modifications to the files in this directory.

When you’re ready to automate cluster creation for your own deployment, create a subdirectory under cluster, and store your scripts and other files there.

Support Status: Sample files. Not supported by ForgeRock.

config

Configuration profiles, including the canonical cdk profile from ForgeRock and user-customized profiles.

Recommendation: Add your own profiles to this directory using the config.sh command. Do not modify the canonical cdk profile.

docker

Dockerfiles and other support files needed to build Docker images for version 7.0 of the ForgeRock Identity Platform.

Recommendation: When customizing ForgeRock’s default deployments, you’ll need to add files under the docker/7.0 directory. For example, to customize the AM WAR file, you might need to add plugin JAR files, user interface customization files, or image files.

If you only add new files under the docker/7.0 directory, you should not encounter merge conflicts when you rebase changes from a new release tag into your branches. However, if you need to modify any files from ForgeRock, you might encounter merge conflicts. Be sure to track changes you’ve made to any files in the docker directory, so that you’re prepared to resolve merge conflicts after a rebase.

Support Status: Dockerfiles and other files needed to build Docker images for the ForgeRock Identity Platform. Support is available from ForgeRock.

etc

Files used to support several examples, including the CDM.

Recommendation: Don’t modify the files in this directory (or its subdirectories). If you want to use CDM automated cluster creation as a model or starting point for your own automated cluster creation, then create your own subdirectories under etc, and copy the files you want to model into the subdirectories.

Support Status: Sample files. Not supported by ForgeRock.

jenkins-scripts

For ForgeRock internal use only. Do not modify or use.

kustomize

Artifacts for orchestrating the ForgeRock Identity Platform using Kustomize.

Recommendation: Common deployment customizations, such as changing the deployment namespace and providing a customized FQDN, require modifications to files in the kustomize/7.0 directory. You’ll probably change, at minimum, the kustomize/7.0/all/kustomization.yaml file.
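
As an illustration, you could change the deployment namespace either by editing kustomize/7.0/all/kustomization.yaml directly, or with the kustomize CLI (a sketch; which overlay you edit depends on your deployment):

    $ cd kustomize/7.0/all
    $ kustomize edit set namespace my-namespace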

Expect to encounter merge conflicts when you rebase changes from a new release tag into your branches. Be sure to track changes you’ve made to files in the kustomize directory, so that you’re prepared to resolve merge conflicts after a rebase.

Support Status: Kustomize bases and overlays. Support is available from ForgeRock.

legacy-docs

Documentation for deploying the ForgeRock Identity Platform using DevOps techniques. Includes documentation for supported and deprecated versions of the forgeops repository.

Recommendation: Don’t modify the files in this directory.

Support Status:

Documentation for supported versions of the forgeops repository: Support is available from ForgeRock.

Documentation for deprecated versions of the forgeops repository: Not supported by ForgeRock.

Files in the Top-Level Directory

.gcloudignore, .gitchangelog.rc, .gitignore

For ForgeRock internal use only. Do not modify.

cloudbuild.yaml

Example files for working with Google Cloud Build.

Recommendation: Don’t modify this file. If you want to add your own Cloud Build configuration to the forgeops repository, use a different file name.

Support Status: Sample file. Not supported by ForgeRock.

LICENSE

Software license for artifacts in the forgeops repository. Do not modify.

Makefile

For ForgeRock internal use only. Do not modify.

notifications.json

For ForgeRock internal use only. Do not modify.

README.md

The top-level forgeops repository README file. Do not modify.

skaffold.yaml

The declarative configuration for running Skaffold to deploy version 7.0 of the ForgeRock Identity Platform.

Recommendation: If you need to customize the skaffold.yaml file, you might encounter merge conflicts when you rebase changes from a new release tag into your branches. Be sure to track changes you’ve made to this file, so that you’re prepared to resolve merge conflicts after a rebase.

Support Status: Skaffold configuration file. Support is available from ForgeRock.

Git Clone or Git Fork?

For the simplest use cases—a single user in an organization installing the CDK or CDM for a proof of concept, or exploration of the platform—cloning ForgeRock’s public forgeops repository from GitHub provides a quick and adequate way to access the repository.

If, however, your use case is more complex, you might want to fork the forgeops repository, and use the fork as your common upstream repository. For example:

  • Multiple users in your organization need to access a common version of the repository and share changes made by other users.

  • Your organization plans to incorporate forgeops repository changes from ForgeRock.

  • Your organization wants to use pull requests when making repository updates.

If you’ve forked the forgeops repository:

  • You’ll need to synchronize your fork with ForgeRock’s public repository on GitHub when ForgeRock releases a new release tag.

  • Your users will need to clone your fork before they start working instead of cloning the public forgeops repository on GitHub. Because procedures in the Cloud Developer’s Kit Documentation and the Cloud Deployment Model Documentation tell users to clone the public repository, you’ll need to make sure your users follow different procedures to clone the forks instead.

  • The steps for initially obtaining and updating your repository clone will differ from the steps provided in the documentation. You’ll need to let users know how to work with the fork as the upstream instead of following the steps in the documentation.

Cloud Developer’s Kit Documentation

This documentation explains basic concepts and strategies for developing custom Docker images for ForgeRock software. You can develop custom images either on a single-user Minikube Kubernetes cluster, or on a shared Kubernetes cluster in the cloud.

Start Here

Important information you should know before deploying on Kubernetes.

Minikube Environment

Set up your environment to use Minikube to develop custom Docker images for the platform.

Shared Cluster Environment

Set up your environment to use a shared cluster to develop custom Docker images for the platform.

Deploy the Platform

Deploy the ForgeRock Identity Platform.

Access the Platform

Access platform UIs and APIs.

Docker Images

Create customized Docker images for the platform.

Shutdown

Stop your deployment.

Troubleshoot

Troubleshoot your deployment.

Support from ForgeRock

Support options for the ForgeRock Identity Platform and ForgeRock’s DevOps offering.

The ForgeRock Identity Platform serves as the basis for our simple and comprehensive identity and access management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.

About the Cloud Developer's Kit

The CDK is a minimal sample deployment for development purposes. It includes fully integrated AM, IDM, and DS installations, and randomly generated secrets. Developers deploy the CDK, and then access AM’s and IDM’s GUI consoles and REST APIs to configure the platform and build customized Docker images for the platform.

This documentation describes how to use the CDK to stand up the platform in your developer environment, then create and test customized Docker images containing your custom AM and IDM configurations:

Illustrates the major configuration tasks performed before deploying in production.

Customizing the platform using the CDK is one of the major activities required before deploying the platform in production. To better understand how this activity fits in to the overall deployment process, see Configure the Platform.

Containerization

The CDK uses Docker for containerization. The CDK leverages the following Docker capabilities:

  • File-Based Representation of Containers. Docker images contain a file system and run-time configuration information. Docker containers are running instances of Docker images.

  • Modularization. Docker images are based on other Docker images. For example, an AM image is based on a Tomcat image that is itself based on an OpenJDK JRE image. In this example, the AM container has AM software, Tomcat software, and the OpenJDK JRE.

  • Collaboration. Public and private Docker registries let users collaborate by providing cloud-based access to Docker images. Continuing with the example, the public Docker registry at https://hub.docker.com/ has Docker images for Tomcat and the OpenJDK JRE that any user can download. You build Docker images for the ForgeRock Identity Platform based on the Tomcat and OpenJDK JRE images in the public Docker registry. You can then push the Docker images to a private Docker registry that other users in your organization can access.
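
For instance, collaborating through a private registry typically comes down to commands like these (the registry location and image tag are placeholders):

    $ docker build --tag registry.example.com/forgerock/am:7.0-custom .   # run in a directory containing your Dockerfile
    $ docker push registry.example.com/forgerock/am:7.0-custom
    $ docker pull registry.example.com/forgerock/am:7.0-custom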

ForgeRock provides a set of unsupported, evaluation-only base images for the ForgeRock Identity Platform. These images are available in ForgeRock’s public Docker registry.

Developers working with the CDK use the base images from ForgeRock to build customized Docker images for a fully-configured ForgeRock Identity Platform deployment:

Brief overview of containers for developers.

Users working with the CDM also use the base images from ForgeRock to perform proof-of-concept deployments, and to benchmark the ForgeRock Identity Platform.

The base images from ForgeRock are evaluation-only. They are unsupported for production use. Because of this, you must build your own base images before you deploy in production:

Brief overview of containers in production.

For information about how to build base images for deploying the ForgeRock Identity Platform in production, see Base Docker Images.

Orchestration

The CDK uses Kubernetes for container orchestration. The CDK has been tested on Minikube and on GKE, Amazon EKS, and AKS clusters.

CDK Architecture: Minikube

The CDK uses Skaffold to trigger Docker image builds and Kubernetes orchestration. Here’s what Skaffold does:

  1. Calls the Docker client on the local computer to build and tag your customized Docker images for the ForgeRock Identity Platform. The customized images are based on Docker images in ForgeRock’s public Docker registry, gcr.io/forgerock-io.

  2. Pushes the Docker images to the Docker engine that’s part of the Minikube VM.

  3. Calls Kustomize to orchestrate the ForgeRock Identity Platform in your namespace. Kustomize uses the Docker images that Skaffold pushed to Minikube’s Docker engine.
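
In practice, you start this build-and-deploy loop by running Skaffold from the top level of your forgeops clone, where the skaffold.yaml file resides. A minimal sketch; flags and profiles vary with your environment:

    $ cd forgeops
    $ skaffold dev   # builds the images, deploys to the active namespace, and watches for changes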

The following diagram illustrates how the CDK uses Skaffold to build and orchestrate Docker images on Minikube:

Diagram of the CDK build and deployment flow on Minikube.

After deploying the ForgeRock Identity Platform, you’ll see the following pods running in your namespace:

Diagram of the deployed CDK.
am

The am pod runs AM.

When AM starts, it obtains its configuration from the /home/forgerock/openam/config directory [1].

After the am pod has started, an Amster job is triggered. This job populates AM’s run-time data.

ds-cts-0

The ds-cts-0 pod runs the directory service used by the AM Core Token Service.

ds-idrepo-0

The ds-idrepo-0 pod runs the following directory services:

  • Identity repository shared by AM and IDM

  • IDM repository

  • AM application and policy store

idm-0

The idm-0 pod runs IDM.

When IDM starts, it obtains its configuration from the /opt/openidm/conf directory [2].

In containerized deployments, IDM must retrieve its configuration from the file system and not from the IDM repository. The default values for the openidm.fileinstall.enabled and openidm.config.repo.enabled properties in the CDK’s system.properties file ensure that IDM retrieves its configuration from the file system. Do not override the default values for these properties.

UI pods

Several pods provide access to ForgeRock common user interfaces:

  • admin-ui

  • end-user-ui

  • login-ui

In addition to these pods, you’ll see that three jobs that load data into the environment have run to completion:

  • The forgeops-secrets job, which generates a set of Kubernetes secrets used by the platform.

  • The amster job, which loads application data, such as OAuth 2.0 client definitions, to the idrepo DS instance.

  • The ldif-importer job, which loads policy data required by AM to the idrepo DS instance.
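
A quick way to confirm this state, assuming the namespace you deployed into is your active namespace:

    $ kubectl get pods
    # Expect am, ds-cts-0, ds-idrepo-0, idm-0, and the UI pods to be Running, and the
    # pods for the amster, forgeops-secrets, and ldif-importer jobs to show Completed.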

CDK Architecture: Shared Cloud Cluster

A shared cluster lets multiple developers deploy and configure the ForgeRock Identity Platform on a single cloud-based Kubernetes cluster. A Kubernetes administrator sets up the shared cluster, then provides details to the developers so that they can access the cluster. Each developer then works in their own isolated environment within the cluster, called a namespace.

The CDK uses Skaffold to trigger Docker image builds and Kubernetes orchestration. Here’s what Skaffold does:

  1. Calls the Docker client on the local computer to build and tag your customized Docker images for the ForgeRock Identity Platform. The customized images are based on Docker images in ForgeRock’s public Docker registry, gcr.io/forgerock-io.

  2. Pushes the Docker images to a Docker registry accessible to the shared cluster.

  3. Calls Kustomize to orchestrate the ForgeRock Identity Platform in your namespace. Kustomize uses the Docker images that Skaffold pushed to your Docker registry.

The following diagram illustrates how the CDK uses Skaffold to build Docker images locally, push them to a shared registry, and orchestrate them in a shared cluster:

Diagram of a development environment that uses a shared cluster.

After deploying the ForgeRock Identity Platform, you’ll see the following pods running in your namespace:

Diagram of the deployed CDK.
am

The am pod runs AM.

When AM starts, it obtains its configuration from the /home/forgerock/openam/config directory [3].

After the am pod has started, an Amster job is triggered. This job populates AM’s run-time data.

ds-cts-0

The ds-cts-0 pod runs the directory service used by the AM Core Token Service.

ds-idrepo-0

The ds-idrepo-0 pod runs the following directory services:

  • Identity repository shared by AM and IDM

  • IDM repository

  • AM application and policy store

idm-0

The idm-0 pod runs IDM.

When IDM starts, it obtains its configuration from the /opt/openidm/conf directory [4].

In containerized deployments, IDM must retrieve its configuration from the file system and not from the IDM repository. The default values for the openidm.fileinstall.enabled and openidm.config.repo.enabled properties in the CDK’s system.properties file ensure that IDM retrieves its configuration from the file system. Do not override the default values for these properties.

UI pods

Several pods provide access to ForgeRock common user interfaces:

  • admin-ui

  • end-user-ui

  • login-ui

In addition to these pods, you’ll see that three jobs that load data into the environment have run to completion:

  • The forgeops-secrets job, which generates a set of Kubernetes secrets used by the platform.

  • The amster job, which loads application data, such as OAuth 2.0 client definitions, to the idrepo DS instance.

  • The ldif-importer job, which loads policy data required by AM to the idrepo DS instance.

Environment Setup: Minikube

This section describes how to set up your local computer for developing custom Docker images for the ForgeRock Identity Platform on a Minikube cluster.

Windows users

ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:

  • Hypervisor: Hyper-V, VMware Player, or VMware Workstation

  • Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space

  • Nested virtualization enabled in the Linux VM.

Perform all the procedures in this documentation within the Linux VM. In this documentation, the local computer refers to the Linux VM for Windows users.

Tasks to set up your local computer:

When you’ve completed the setup tasks, you’ll have an environment like the one shown in this diagram.

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[5]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'
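
The clone command itself isn’t shown in step 1 above; assuming you’re cloning ForgeRock’s public repository on GitHub rather than a fork, it typically looks like this:

    $ git clone https://github.com/ForgeRock/forgeops.git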

Third-Party Software

After you’ve obtained the forgeops repository, you’ll need to install a set of third-party software on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.

The versions listed in the following table have been validated for building custom Docker images for the ForgeRock Identity Platform. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install all of the following third-party software:

Software                                  Version    Homebrew package

Docker Desktop[6]                         2.3.0.3    docker (cask)[7]
Kubernetes client (kubectl)               1.18.6     kubernetes-cli
Skaffold                                  1.12.1     skaffold
Kustomize                                 3.8.1      kustomize
Kubernetes context switcher (kubectx)     0.9.1      kubectx
VirtualBox                                6.1.10     virtualbox (cask)[7]
Minikube                                  1.12.0     minikube

Minikube Virtual Machine

Now that you’ve installed third-party software on your local computer, you’re ready to create a Minikube VM. When you create a Minikube VM, a Kubernetes cluster is created in the VM.

The following configuration has been validated for building custom Docker images for the ForgeRock Identity Platform:

  • Kubernetes version: 1.17.4

  • Memory: 12 GB or more

  • Disk space: 40 GB or more

Perform the following procedure to set up Minikube:

Set up Minikube
  1. Use the minikube start command to create a Minikube VM. In this example, the Minikube VM is created with a Kubernetes cluster suitable for building custom Docker images for the ForgeRock Identity Platform:

    $ minikube start --memory=12288 --cpus=3 --disk-size=40g \
     --vm-driver=virtualbox --bootstrapper kubeadm --kubernetes-version=1.17.4
    😄  minikube v1.12.1 on Darwin 10.15.6
    ✨  Using the virtualbox driver based on user configuration
    🔥  Creating virtualbox VM (CPUs=3, Memory=12288MB, Disk=40960MB) …​
    🐳  Preparing Kubernetes v1.17.4 on Docker 19.03.12 …​
    🔎  Verifying Kubernetes components…​
    🌟  Enabling addons: default-storageclass, storage-provisioner
    🏄  Done! kubectl is now configured to use "minikube"
  2. Run the following command to enable the ingress controller plugin built into Minikube:

    $ minikube addons enable ingress
    🌟  The 'ingress' addon is enabled
  3. Before attempting to work with the ForgeRock Identity Platform on Minikube, you must implement the workaround for Minikube issue 1568. The workaround lets pods deployed on Minikube reach themselves on the network.

    Run the following command to work around the issue:

    $ minikube ssh sudo ip link set docker0 promisc on

    Note that you must run this command every time you restart the Minikube VM.

Namespace

After you’ve created the Minikube VM and Kubernetes cluster, create a namespace in your new cluster.

ForgeRock recommends that you deploy the ForgeRock Identity Platform in a namespace other than the default namespace. Deploying to a non-default namespace lets you separate workloads in a cluster. Separating a workload into a namespace lets you delete the workload easily; just delete the namespace.

Perform the following procedure to create a namespace:

Create a Namespace
  1. Create a namespace in your Kubernetes cluster:

    $ kubectl create namespace my-namespace
    namespace/my-namespace created
  2. Make the new namespace your active namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is "my-namespace".

Hostname Resolution

After you’ve created a namespace, set up hostname resolution for the ForgeRock Identity Platform servers you’ll deploy in your namespace.

  1. Run the minikube ip command to get the Minikube ingress controller’s IP address:

    $ minikube ip
    111.222.33.44
  2. Add an entry similar to the following to the /etc/hosts file:

    minikube-ip-address my-namespace.iam.example.com
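
For example, you can capture the IP address and append the entry in one step (substituting your own namespace):

    $ echo "$(minikube ip) my-namespace.iam.example.com" | sudo tee -a /etc/hosts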

Minikube’s Docker Engine

Now you’ve prepared your cluster by creating a namespace and setting up hostname resolution. Your last step before you can deploy the ForgeRock Identity Platform is to set up your local computer to execute docker commands on Minikube’s Docker engine.

ForgeRock recommends using the built-in Docker engine when developing custom Docker images using Minikube. When you use Minikube’s engine, you don’t have to build Docker images on a local engine and then push the images to a local or cloud-based Docker registry. Instead, you build images using the same Docker engine that Minikube uses. This streamlines development.

Set up your local computer to use Minikube’s Docker engine as follows:

Set Up Your Local Computer to Use Minikube’s Docker Engine
  1. Run the docker-env command in your shell:

    $ eval $(minikube docker-env)
  2. Stop Skaffold from pushing Docker images to a remote Docker registry [8]:

    $ skaffold config set --kube-context minikube local-cluster true
    set value local-cluster to true for context minikube

For more information about using Minikube’s built-in Docker engine, see Use local images by re-using the Docker daemon in the Minikube documentation.

Environment Setup: Shared Cloud Cluster

This section describes how to set up your local computer before you start developing custom Docker images for the ForgeRock Identity Platform.

Windows users

ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:

  • Hypervisor: Hyper-V, VMware Player, or VMware Workstation

  • Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space

  • Nested virtualization enabled in the Linux VM.

Perform all the procedures in this documentation within the Linux VM. In this documentation, the local computer refers to the Linux VM for Windows users.

Jump to the section that contains the setup activities for the type of cluster you’ll be working on:

When you’ve completed the setup tasks, you’ll have an environment like the one shown in this diagram.

GKE Cluster

This section is for users developing custom Docker images for the ForgeRock Identity Platform on a shared GKE cluster.

Tasks to set up your local computer:

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[9]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'

Third-Party Software

After you’ve obtained the forgeops repository, you’ll need to get non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.

The versions listed in the tables below have been validated for building custom Docker images for the ForgeRock Identity Platform. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install the following third-party software:

Software                                  Version    Homebrew package

Docker Desktop[10]                        2.3.0.3    docker (cask)[11]
Kubernetes client (kubectl)               1.18.6     kubernetes-cli
Skaffold                                  1.12.1     skaffold
Kustomize                                 3.8.1      kustomize
Kubernetes context switcher (kubectx)     0.9.1      kubectx
Google Cloud SDK                          303.0.0    google-cloud-sdk (cask)[7]

Getting Cluster Details

Next, you’ll need to get some information about the cluster from your cluster administrator. You’ll provide this information as you perform various tasks to access the cluster.

Obtain the following cluster details:

  • The name of the Google Cloud project that contains the cluster.

  • The cluster name.

  • The Google Cloud zone in which the cluster resides.

  • The IP address of your cluster’s ingress controller.

  • The location of the Docker registry from which your cluster will obtain images for the ForgeRock Identity Platform.

Context for the Shared Cluster

You’ve now installed third-party software on your local computer and obtained some details about the cluster. You’re ready to create a context on your local computer to enable access to the shared cluster.

Kubernetes uses contexts to access Kubernetes clusters. Before you can access the shared cluster, you must create a context on your local computer if it’s not already present.

Perform the following procedure to create a context for the shared cluster:

Create a Context for a GKE Cluster
  1. Run the kubectx command and review the output. The current Kubernetes context is highlighted:

    • If the current context references the shared cluster, there is nothing further to do. Proceed to Namespace.

    • If the context of the shared cluster is present in the kubectx command output, set the context as follows:

      $ kubectx my-context
      Switched to context "my-context".

      After you have set the context, proceed to Namespace.

    • If the context of the shared cluster is not present in the kubectx command output, continue to the next step in this procedure.

  2. Configure the Google Cloud SDK standard component to use your Google account. Run the following command:

    $ gcloud auth login
  3. A browser window prompts you to log in to Google. Log in using your Google account.

    A second screen requests several permissions. Select Allow.

    A third screen should appear with the heading, "You are now authenticated with the Google Cloud SDK!"

  4. Return to the terminal window and run the following command. Use the cluster name, zone, and project name you obtained from your cluster administrator:

    $ gcloud container clusters \
     get-credentials cluster-name --zone google-zone --project google-project
    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for cluster-name.
  5. Run the kubectx command again and verify that the context for your Kubernetes cluster is now the current context.

Namespace

After you’ve created a context for the shared cluster, create a namespace in the cluster. Namespaces let you isolate your deployments from other developers' deployments.

ForgeRock recommends that you deploy the ForgeRock Identity Platform in a namespace other than the default namespace. Deploying to a non-default namespace lets you separate workloads in a cluster. Separating a workload into a namespace lets you delete the workload easily; just delete the namespace.

Perform the following procedure to create a namespace:

Create a Namespace
  1. Create a namespace in your Kubernetes cluster:

    $ kubectl create namespace my-namespace
    namespace/my-namespace created
  2. Make the new namespace your current namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is "my-namespace".

Hostname Resolution

After you’ve created a namespace, you might need to set up hostname resolution for the ForgeRock Identity Platform servers you’ll deploy in your namespace.

Take the following actions:

  1. Determine whether DNS resolves the hostname, my-namespace.iam.example.com.

  2. If DNS does not resolve the hostname, add an entry to the /etc/hosts file similar to the following:

    ingress-ip-address my-namespace.iam.example.com

    For ingress-ip-address, specify the IP address of your cluster’s ingress controller that you obtained from your cluster administrator.

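For example, you can check resolution and, if necessary, add the entry as follows, substituting your namespace and the ingress IP address you obtained from your cluster administrator:

    $ host my-namespace.iam.example.com   # check whether DNS resolves the hostname
    $ echo "ingress-ip-address my-namespace.iam.example.com" | sudo tee -a /etc/hosts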

docker push Setup

In the environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your GKE cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to be able to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker registry available to your cluster.

  • Skaffold needs to know the location of the Docker registry.

Perform the following procedure:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. Set up a Docker credential helper:

    $ gcloud auth configure-docker
  3. Run the kubectx command to obtain the Kubernetes context.

  4. Configure Skaffold with the Docker registry location you obtained from your cluster administrator and the Kubernetes context you obtained in the previous step:

    $ skaffold config set default-repo my-docker-registry -k my-kubernetes-context

You’re now ready to deploy the ForgeRock Identity Platform in your namespace. Proceed to CDK Deployment.

EKS Cluster

This section is for users developing custom Docker images for the ForgeRock Identity Platform on a shared EKS cluster.

Tasks to set up your local computer:

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[5]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'

Third-Party Software

After you’ve obtained the forgeops repository, you’ll need to get non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux. For a list of the Homebrew packages to install, see Homebrew Package Names.

The versions listed in the tables below have been validated for building custom Docker images for the ForgeRock Identity Platform. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install the following third-party software:

Software                                  Version    Homebrew package

Docker Desktop[12]                        2.3.0.3    docker (cask)[13]
Kubernetes client (kubectl)               1.18.6     kubernetes-cli
Skaffold                                  1.12.1     skaffold
Kustomize                                 3.8.1      kustomize
Kubernetes context switcher (kubectx)     0.9.1      kubectx
Amazon AWS Command Line Interface         2.0.40     awscli
AWS IAM Authenticator for Kubernetes      0.5.1      aws-iam-authenticator

Getting Cluster Details

Next, you’ll need to get some information about the cluster from your cluster administrator. You’ll provide this information as you perform various tasks to access the cluster.

Obtain the following cluster details:

  • Your AWS access key ID.

  • Your AWS secret access key.

  • The AWS region in which the cluster resides.

  • The cluster name.

  • The IP address of your cluster’s ingress controller.

  • The location of the Docker registry from which your cluster will obtain images for the ForgeRock Identity Platform.

Context for the Shared Cluster

You’ve now installed third-party software on your local computer and obtained some details about the cluster. You’re ready to create a context on your local computer to enable access to the shared cluster.

Kubernetes uses contexts to access Kubernetes clusters. Before you can access the shared cluster, you must create a context on your local computer if it’s not already present.

Perform the following procedure to create a context for the shared cluster:

Create a Context for an EKS Cluster
  1. Run the kubectx command and review the output. The current Kubernetes context is highlighted:

    • If the current context references the shared cluster, there is nothing further to do. Proceed to Namespace.

    • If the context of the shared cluster is present in the kubectx command output, set the context as follows:

      $ kubectx my-context
      Switched to context "my-context".

      After you have set the context, proceed to Namespace.

    • If the context of the shared cluster is not present in the kubectx command output, continue to the next step in this procedure.

  2. Run the aws configure command. This command logs you in to AWS and sets the AWS region. Use the access key ID, secret access key, and region you obtained from your cluster administrator. You do not need to specify a value for the default output format:

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:
  3. Run the following command. Use the cluster name you obtained from your cluster administrator:

    $ aws eks update-kubeconfig --name my-cluster
    Added new context arn:aws:eks:us-east-1:813759318741:cluster/my-cluster
    to /Users/my-user-name/.kube/config
  4. Run the kubectx command again and verify that the context for your Kubernetes cluster is now the current context.

In Amazon EKS environments, the cluster owner must grant access to a user before the user can access cluster resources. For details about how the cluster owner can grant you access to the cluster, refer the cluster owner to Cluster Access for Multiple AWS Users.

Namespace

After you’ve created a context for the shared cluster, create a namespace in the cluster. Namespaces let you isolate your deployments from other developers' deployments.

ForgeRock recommends that you deploy the ForgeRock Identity Platform in a namespace other than the default namespace. Deploying to a non-default namespace lets you separate workloads in a cluster. Separating a workload into a namespace lets you delete the workload easily; just delete the namespace.

Perform the following procedure to create a namespace:

Create a Namespace
  1. Create a namespace in your Kubernetes cluster:

    $ kubectl create namespace my-namespace
    namespace/my-namespace created
  2. Make the new namespace your current namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is "my-namespace".

Hostname Resolution

After you’ve created a namespace, set up hostname resolution for the ForgeRock Identity Platform servers you’ll deploy in your namespace.

Take the following actions:

  1. Determine whether DNS resolves the hostname, my-namespace.iam.example.com.

  2. If DNS does not resolve the hostname, add an entry to the /etc/hosts file similar to the following:

    ingress-ip-address my-namespace.iam.example.com

    For ingress-ip-address, specify the IP address of your cluster’s ingress controller that you obtained from your cluster administrator.

docker push Setup

In the environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your EKS cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to be able to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker registry available to your cluster.

  • Skaffold needs to know the location of the Docker registry.

Perform the following procedure:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. Log in to Amazon ECR. Use the Docker registry location you obtained from your cluster administrator:

    $ aws ecr get-login-password | \
     docker login --username AWS --password-stdin my-docker-registry
    Login Succeeded

    ECR login sessions expire after 12 hours. Because of this, you’ll need to perform these steps again whenever your login session expires.[14]

  3. Run the kubectx command to obtain the Kubernetes context.

  4. Configure Skaffold with the Docker registry location and the Kubernetes context:

    $ skaffold config set default-repo my-docker-registry -k my-kubernetes-context

You’re now ready to deploy the ForgeRock Identity Platform in your namespace. Proceed to CDK Deployment.

AKS Cluster

This section is for users developing custom Docker images for the ForgeRock Identity Platform on a shared AKS cluster.

To set up your local computer, perform the tasks described in the following sections:

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[5]

Obtain the forgeops Repository
  1. Clone the forgeops repository (an example clone command follows this procedure):

    The forgeops repository is a public Git repository. You do not need credentials to clone it.

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'
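
For step 1, assuming the forgeops repository’s public GitHub location, a typical clone command looks like this:

$ git clone https://github.com/ForgeRock/forgeops.git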

Third-Party Software

After you’ve obtained the forgeops repository, you’ll need to get non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux. For a list of the Homebrew packages to install, see Homebrew Package Names.

The versions listed in the tables below have been validated for building custom Docker images for the ForgeRock Identity Platform. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install the following third-party software:

Software                                 Version   Homebrew package
Docker Desktop[15]                       2.3.0.3   docker (cask)[16]
Kubernetes client (kubectl)              1.18.6    kubernetes-cli
Skaffold                                 1.12.1    skaffold
Kustomize                                3.8.1     kustomize
Kubernetes context switcher (kubectx)    0.9.1     kubectx
Azure Command Line Interface             2.10.1    azure-cli

Getting Cluster Details

Next, you’ll need to get some information about the cluster from your cluster administrator. You’ll provide this information as you perform various tasks to access the cluster.

Obtain the following cluster details:

  • The ID of the Azure subscription that contains the cluster. Be sure to obtain the hexadecimal subscription ID, not the subscription name.

  • The name of the resource group that contains the cluster.

  • The cluster name.

  • The IP address of your cluster’s ingress controller.

  • The location of the Docker registry from which your cluster will obtain images for the ForgeRock Identity Platform.

Context for the Shared Cluster

You’ve now installed third-party software on your local computer and obtained some details about the cluster. You’re ready to create a context on your local computer to enable access to the shared cluster.

Kubernetes uses contexts to access Kubernetes clusters. Before you can access the shared cluster, you must create a context on your local computer if it’s not already present.

Perform the following procedure to create a context for the shared cluster:

Create a Context for an AKS Cluster
  1. Run the kubectx command and review the output. The current Kubernetes context is highlighted:

    • If the current context references the shared cluster, there is nothing further to do. Proceed to Namespace.

    • If the context of the shared cluster is present in the kubectx command output, set the context as follows:

      $ kubectx my-context
      Switched to context "my-context".

      After you have set the context, proceed to Namespace.

    • If the context of the shared cluster is not present in the kubectx command output, continue to the next step in this procedure.

  2. Configure the Azure CLI to use your Microsoft Azure account. Run the following command:

    $ az login
  3. A browser window prompts you to log in to Azure. Log in using your Microsoft account.

    A second screen should appear with the message, "You have logged into Microsoft Azure!"

  4. Return to the terminal window and run the following command. Use the resource group, cluster name, and subscription ID you obtained from your cluster administrator:

    $ az aks get-credentials \
     --resource-group my-fr-resource-group \
     --name my-fr-cluster \
     --subscription my-subscription-id \
     --overwrite-existing
  5. Run the kubectx command again and verify that the context for your Kubernetes cluster is now the current context.

Namespace

After you’ve created a context for the shared cluster, create a namespace in the cluster. Namespaces let you isolate your deployments from other developers' deployments.

ForgeRock recommends that you deploy the ForgeRock Identity Platform in a namespace other than the default namespace. Deploying to a non-default namespace lets you separate workloads in a cluster. Separating a workload into a namespace lets you delete the workload easily; just delete the namespace.

Perform the following procedure to create a namespace:

Create a Namespace
  1. Create a namespace in your Kubernetes cluster:

    $ kubectl create namespace my-namespace
    namespace/my-namespace created
  2. Make the new namespace your current namespace:

    $ kubens my-namespace
    Context "my-context" modified.
    Active namespace is "my-namespace".

Hostname Resolution

After you’ve created a namespace, set up hostname resolution for the ForgeRock Identity Platform servers you’ll deploy in your namespace.

Take the following actions:

  1. Determine whether DNS resolves the hostname, my-namespace.iam.example.com.

  2. If DNS does not resolve the hostname, add an entry to the /etc/hosts file similar to the following:

    ingress-ip-address my-namespace.iam.example.com

    For ingress-ip-address, specify the IP address of your cluster’s ingress controller that you obtained from your cluster administrator.

docker push Setup

In the environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your AKS cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to be able to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker registry available to your cluster.

  • Skaffold needs to know the location of the Docker registry.

Perform the following procedure:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. Install the ACR Docker Credential Helper.

  3. Run the kubectx command to obtain the Kubernetes context.

  4. Configure Skaffold with the Docker registry location you obtained from your cluster administrator and the Kubernetes context you obtained in the previous step:

    $ skaffold config set default-repo my-docker-registry -k my-kubernetes-context

You’re now ready to deploy the ForgeRock Identity Platform in your namespace. Proceed to CDK Deployment.

CDK Deployment

After you’ve set up your development environment, your next step is to deploy the platform.

Perform the following procedure to deploy the ForgeRock Identity Platform in your namespace:

Deploy the ForgeRock Identity Platform
  1. Change the deployment namespace for the all environment from the default namespace to your namespace:

    1. Change to the directory containing the all environment:

      $ cd /path/to/forgeops/kustomize/overlay/7.0/all
    2. Open the kustomization.yaml file.

    3. Modify two lines in the file so that the platform is deployed in your namespace:

      Original Text                      Revised Text
      namespace: default                 namespace: my-namespace
      FQDN: "default.iam.example.com"    FQDN: "my-namespace.iam.example.com"

    4. Save the updated kustomization.yaml file.

  2. Initialize the staging area for configuration profiles with the canonical CDK configuration profile for the ForgeRock Identity Platform:

    $ cd /path/to/forgeops/bin
    $ ./config.sh init --profile cdk --version 7.0

    The config.sh init command copies the canonical CDK configuration profile from the master directory for configuration profiles to the staging area:

    This diagram shows how the staging area is initialized from the canonical CDK profile.

    For more information about the management of ForgeRock Identity Platform configuration profiles in the forgeops repository, see Configuration Profiles.

  3. Run Skaffold to build Docker images and deploy the ForgeRock Identity Platform:

    $ cd /path/to/forgeops
    $ skaffold run
  4. In a separate terminal tab or window, run the kubectl get pods command to monitor status of the deployment. Wait until all the pods are ready.

    Your namespace should have the pods shown in this diagram.

    You’re now ready to access tools that will help you customize ForgeRock Identity Platform Docker images. Proceed to UI and API Access for more information about using ForgeRock’s administration consoles and REST APIs from your development environment.

UI and API Access

Now that you’ve deployed the ForgeRock Identity Platform, you’ll need to know how to access its administration tools. You’ll use these tools to build customized Docker images for the platform.

This page shows you how to access the ForgeRock Identity Platform’s administrative consoles and REST APIs.

You access AM and IDM services through the Kubernetes ingress controller. Access components using their normal interfaces:

  • For AM, the console and REST APIs.

  • For IDM, the Admin UI and REST APIs.

You can’t access DS through the ingress controller, but you can use Kubernetes methods to access the DS pods.

For more information about how AM and IDM are configured in the CDK, see Configuration in the forgeops repository’s top-level README file.

AM Services

Access the AM console and REST APIs as follows:

Access the AM Console
  1. Obtain the amadmin user’s password:

    $ cd /path/to/forgeops/bin
    $ ./print-secrets.sh amadmin
  2. Open a new window or tab in a web browser.

  3. Go to https://my-namespace.iam.example.com/platform.

    The Kubernetes ingress controller handles the request, routing it to the login-ui pod.

    The login UI prompts you to log in.

  4. Log in as the amadmin user.

    The ForgeRock Identity Platform UI appears in the browser.

  5. Select Native Consoles > Access Management.

    The AM console appears in the browser.

Access the AM REST APIs
  1. Start a terminal window session.

  2. Run a curl command to verify that you can access the REST APIs through the ingress controller. For example:

    $ curl \
     --insecure \
     --request POST \
     --header "Content-Type: application/json" \
     --header "X-OpenAM-Username: amadmin" \
     --header "X-OpenAM-Password: 179rd8en9rffa82rcf1qap1z0gv1hcej" \
     --header "Accept-API-Version: resource=2.0" \
     --data "{}" \
     'https://my-namespace.iam.example.com/am/json/realms/root/authenticate'
    {
        "tokenId":"AQIC5wM2…​",
        "successUrl":"/am/console",
        "realm":"/"
    }

IDM Services

Access the IDM Admin UI and REST APIs as follows:

Access the IDM Admin UI
  1. Obtain the amadmin user’s password:

    $ cd /path/to/forgeops/bin
    $ ./print-secrets.sh amadmin
  2. Open a new window or tab in a web browser.

  3. Go to https://my-namespace.iam.example.com/platform.

    The Kubernetes ingress controller handles the request, routing it to the login-ui pod.

    The login UI prompts you to log in.

  4. Log in as the amadmin user.

    The ForgeRock Identity Platform UI appears in the browser.

  5. Select Native Consoles > Identity Management.

    The IDM Admin UI appears in the browser.

Access the IDM REST APIs
  1. Start a terminal window session.

  2. If you haven’t already done so, get the amadmin user’s password using the print-secrets.sh command.

  3. AM authorizes IDM REST API access using the OAuth 2.0 authorization code flow. The CDK comes with the idm-admin-ui client, which is configured to let you get a bearer token using this OAuth 2.0 flow. You’ll use the bearer token in the next step to access the IDM REST API:

    1. Get a session token for the amadmin user:

      $ curl \
       --request POST \
       --insecure \
       --header "Content-Type: application/json" \
       --header "X-OpenAM-Username: amadmin" \
       --header "X-OpenAM-Password: vr58qt11ihoa31zfbjsdxxrqryfw0s31" \
       --header "Accept-API-Version: resource=2.0, protocol=1.0" \
       'https://my-namespace.iam.example.com/am/json/realms/root/authenticate'
      {
       "tokenId":"AQIC5wM…​TU3OQ*",
       "successUrl":"/am/console",
       "realm":"/"}
    2. Get an authorization code. Specify the ID of the session token that you obtained in the previous step in the --cookie parameter:

      $ curl \
       --dump-header - \
       --insecure \
       --request GET \
       --Cookie "iPlanetDirectoryPro=AQIC5wM…​TU3OQ*" \
       "https://my-namespace.iam.example.com/am/oauth2/realms/root/authorize?redirect_uri=https://my-namespace.iam.example.com/platform/appAuthHelperRedirect.html&client_id=idm-admin-ui&scope=openid&response_type=code&state=abc123"
      HTTP/2 302
      server: nginx/1.17.10
      date: Tue, 21 Jul 2020 16:54:20 GMT
      content-length: 0
      location: https://my-namespace.iam.example.com/platform/appAuthHelperRedirect.html
       ?code=3cItL9G52DIiBdfXRngv2_dAaYM&iss=http://my-namespace.iam.example.com:80/am/oauth2&state=abc123
       &client_id=idm-admin-ui
      set-cookie: route=1595350461.029.542.7328; Path=/am; Secure; HttpOnly
      x-frame-options: SAMEORIGIN
      x-content-type-options: nosniff
      cache-control: no-store
      pragma: no-cache
      set-cookie: OAUTH_REQUEST_ATTRIBUTES=DELETED; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/; HttpOnly
    3. Exchange the authorization code for an access token. Specify the access code that you obtained in the previous step in the code URL parameter:

      $ curl --request POST \
       --insecure \
       --data "grant_type=authorization_code" \
       --data "code=3cItL9G52DIiBdfXRngv2_dAaYM" \
       --data "client_id=idm-admin-ui" \
       --data "redirect_uri=https://my-namespace.iam.example.com/platform/appAuthHelperRedirect.html" \
       "https://my-namespace.iam.example.com/am/oauth2/realms/root/access_token" 
      {
       "access_token":"oPzGzGFY1SeP2RkI-ZqaRQC1cDg",
       "scope":"openid",
       "id_token":"eyJ0eXAiOiJKV..sO4HYqlQ",
       "token_type":"Bearer",
       "expires_in":239
      }
  4. Run a curl command to verify that you can access the openidm/config REST endpoint through the ingress controller. Use the access token returned in the previous step as the bearer token in the authorization header.

    The following example command provides information about the IDM configuration:

    $ curl \
     --insecure \
     --request GET \
     --header "Authorization: Bearer oPzGzGFY1SeP2RkI-ZqaRQC1cDg" \
     --data "{}" \
     https://my-namespace.iam.example.com/openidm/config
    {
     "_id":"",
     "configurations":
      [
       {
        "_id":"ui.context/admin",
        "pid":"ui.context.4f0cb656-0b92-44e9-a48b-76baddda03ea",
        "factoryPid":"ui.context"
        },
        . . .
       ]
    }

Directory Services

The DS pods in the CDK are not exposed outside of the cluster. If you need to access one of the DS pods, use a standard Kubernetes method:

  • Execute shell commands in DS pods using the kubectl exec command.

  • Forward a DS pod’s LDAPS port (1636) to your local computer. Then you can run LDAP CLI commands like ldapsearch. You can also use an LDAP editor such as Apache Directory Studio to access the directory.

For all CDK directory pods, the directory superuser DN is uid=admin. Obtain this user’s password by running the print-secrets.sh dsadmin command.
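
The following is a minimal sketch of accessing a directory pod using port forwarding and the DS ldapsearch tool. The pod name, base DN, and bind password are illustrative; obtain the actual password with print-secrets.sh dsadmin:

$ kubectl port-forward ds-idrepo-0 1636:1636
# In a second terminal window:
$ ldapsearch --hostname localhost --port 1636 --useSsl --trustAll \
  --bindDn uid=admin --bindPassword directory-superuser-password \
  --baseDn ou=identities "(uid=*)" uid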

Custom Docker Image Development

Before you can develop custom Docker images, you must have deployed the ForgeRock Identity Platform and learned how to access its administration GUIs and REST APIs. Now you’re ready to configure the platform to meet your needs. As you configure the platform, you can decide at any point to build new custom Docker images that will incorporate the configuration changes you’ve made.

When you develop custom Docker images, you iterate on the following process:

  • Access AM and IDM running in the CDK, and customize them using their GUIs and REST APIs.

  • Export your customizations from the CDK to a Git repository on your local computer.

  • Rebuild the Docker images for the platform with your new customizations.

  • Redeploy the platform on the CDK.

Before you build customized Docker images for the platform, be sure you’re familiar with the types of data used by the platform. This conceptual information helps you understand which type of data is included in custom Docker images.

To develop customized Docker images, start with base images and a canonical configuration profile from ForgeRock. Then, build up a configuration profile, customizing the platform to meet your needs. The configuration profile is integrated into the customized Docker image:

Brief overview of containers for developers.

Before you deploy the platform in production, you must change from using ForgeRock’s evaluation-only base images to using base images you build yourself. Building your own base images is covered in Base Docker Images.

Custom Images for AM

With AM up and running, you can iteratively:

  • Customize AM’s configuration and run-time data using the console and the REST APIs.

  • Capture changes to the AM configuration by synchronizing them from the AM service running on Kubernetes back to the staging area and the master directory for configuration profiles in your forgeops repository clone.

    Skaffold detects the changes and rebuilds the am Docker image. Then, it restarts AM, and you can test any changes you’ve made to the AM configuration based on the updated Docker image.

  • Capture changes to certain run-time data - OAuth 2.0 clients and IG agents - by synchronizing them from the AM service running on Kubernetes back to the staging area and the master directory for configuration profiles in your forgeops repository clone.

    Skaffold detects the changes and rebuilds the amster Docker image.

am Image

The am Docker image contains the AM configuration.

Perform the following procedure iteratively when developing a customized am Docker image:

Create a Customized am Docker Image
  1. Perform version control activities on your forgeops repository clone:

    1. Run the git status command.

    2. Review the state of the working directory and staging area.

    3. (Optional) Run the git commit command to commit changes to files that have been modified.

  2. Modify the AM configuration using the AM console or the REST APIs.

    For information about how to access the AM Admin UI or REST APIs, see AM Services.

  3. Export the changes you made to the AM configuration to your forgeops repository clone:

    $ cd /path/to/forgeops/bin
    $ ./config.sh export --component am --version 7.0
    Exporting AM configuration..
    
    Any changed configuration files have been exported into
    docker/7.0/am/config. Check any changed files before saving back to the config
    folder to ensure correct formatting/functionality.

    The config.sh export command exports the modified parts of the AM configuration from the running ForgeRock Identity Platform to the docker/7.0/am/config directory.

  4. Verify the changed files using the config.sh diff -c am command.

    If any of the files contain hard-coded host names or passwords, replace them with configuration expressions. AM resolves configuration expressions when it starts up.

    See Property Value Substitution for important information about configuring values that vary at run-time, such as passwords and host names, in containerized deployments.

  5. After you’ve replaced all the hard-coded host names and passwords in exported files with configuration expressions, copy the exported files to your configuration profile.

    For more information about the management of ForgeRock Identity Platform configurations in the forgeops repository, see Configuration Profiles.

  6. Perform version control activities on your forgeops repository clone:

    1. Run the git status command.

    2. Review the state of the working directory and staging area.

    3. (Optional) Run the git commit command to commit changes to files that have been modified.

  7. Reinitialize the staging area with your configuration profile:

    $ cd /path/to/forgeops/bin
    $ ./config.sh init --profile my-profile --component am --version 7.0
  8. Execute the skaffold run command:

    $ cd /path/to/forgeops
    $ skaffold run

    Skaffold builds a new am Docker image and redeploys AM.

  9. To validate that AM has the expected configuration, start the console and verify that your configuration changes are present.

amster Image

The amster Docker image contains AM run-time data.

Perform the following procedure iteratively when developing a customized amster Docker image:

Create a Customized amster Docker Image
  1. Perform version control activities on your forgeops repository clone:

    1. Run the git status command.

    2. Review the state of the working directory and staging area.

    3. (Optional) Run the git commit command to commit changes to files that have been modified.

  2. Modify OAuth 2.0 clients or IG agents using the AM console or the REST APIs.

    For information about how to access the AM console or REST APIs, see AM Services.

  3. Synchronize the changes you made to the AM configuration to your configuration profile in your forgeops repository clone:

    $ cd /path/to/forgeops/bin
    $ ./config.sh sync --profile my-profile --component amster --version 7.0
    Exporting Amster configuration…​
    
    Skaffold is used to run the export job. Ensure your default-repo is set.
    
    Removing any existing Amster jobs…​
    job.batch "amster" deleted
    Deploying Amster job…​
    Waiting for Amster pod to come up.
    No resources found in david namespace.
    Amster job is responding..
    
    Executing Amster export within the amster pod
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future
    version. Use kubectl exec [POD] -- [COMMAND] instead.
    
    Amster OpenAM Shell (7.0.0-SNAPSHOT build @build.number@, JVM: 11.0.8)
    Type ':help' or ':h' for help.
    -----------------------------------------------------------------------------
    am> :load /tmp/do_export.amster
    Export completed successfully
    Copying the export to the ./tmp directory
    tar: Removing leading `/' from member names
    Dynamic config exported
    
    Shutting down Amster job…​
    
    Saving Amster configuration..
    
    
    * APPLYING FIXES *
    Adding back amsterVersion placeholder …​
    Adding back FQDN placeholder …​
    Removing 'userpassword-encrypted' fields …​
    Add back password placeholder with defaults
    
    * The above fixes have been made to the Amster files. If you have exported new
    files that should contain commons placeholders or passwords, please update the
    rules in this script.*

    The config.sh sync command exports the modified AM configuration profile from the running ForgeRock Identity Platform to the staging area. Then, it saves the configuration profile as my-profile in the master directory for configuration profiles:

    This diagram shows how the config.sh command synchronizes a configuration profile.

    For more information about the management of ForgeRock Identity Platform configuration profiles in the forgeops repository, see Configuration Profiles.

  4. Examine each JSON file that was written to your configuration profile.

    If any of the files contain hard-coded host names or passwords, replace them with configuration expressions. AM resolves configuration expressions when it starts up.

    See Property Value Substitution for important information about configuring values that vary at run-time, such as passwords and host names, in containerized deployments.

  5. Perform version control activities on your forgeops repository clone:

    1. Run the git status command.

    2. Review the state of the working directory and staging area.

    3. (Optional) Run the git commit command to commit changes to files that have been modified.

  6. Reinitialize the staging area with your configuration profile:

    $ cd /path/to/forgeops/bin
    $ ./config.sh init --profile my-profile --component amster --version 7.0
  7. Shut down your ForgeRock Identity Platform deployment and delete PVCs used by the deployment from your namespace. See CDK Shutdown and Removal for details.

  8. Redeploy the ForgeRock Identity Platform:

    $ cd /path/to/forgeops
    $ skaffold run
  9. To validate that AM has the expected changes to run-time data, start the console and verify that your changes are present.

Custom Images for IDM

With IDM up and running, you can iteratively:

  • Customize IDM’s configuration using the Admin UI and the REST APIs.

  • Capture your configuration changes by synchronizing them from the IDM service running on Kubernetes back to the staging area and the master directory for configuration profiles in your forgeops repository clone.

    Skaffold detects the changes and rebuilds the idm Docker image. Then, it restarts IDM, and you can test the deployment based on the updated Docker image.

idm Image

The idm Docker image contains the IDM configuration.

Perform the following procedure iteratively when developing a customized idm Docker image:

Create a Customized idm Docker Image
  1. Perform version control activities on your forgeops repository clone:

    1. Run the git status command.

    2. Review the state of the working directory and staging area.

    3. (Optional) Run the git commit command to commit changes to files that have been modified.

  2. Modify the IDM configuration using the IDM Admin UI or the REST APIs.

    For information about how to access the IDM Admin UI or REST APIs, see IDM Services.

    When modifying the IDM configuration, use configuration expressions, not hard-coded values, for all host names and passwords.

    See Property Value Substitution for important information about configuring values that vary at run-time, such as passwords and host names, in containerized deployments.

  3. Synchronize the changes you made to the IDM configuration to your forgeops repository clone:

    $ cd /path/to/forgeops/bin
    $ ./config.sh sync --profile my-profile --component idm --version 7.0
    tar: Removing leading `/' from member names

    The config.sh sync command exports the modified IDM configuration from the running ForgeRock Identity Platform to the staging area. Then, it saves the configuration profile as my-profile in the master directory for configuration profiles:

    This diagram shows how the config.sh command synchronizes a configuration profile.

    For more information about the management of ForgeRock Identity Platform configurations in the forgeops repository, see Configuration Profiles.

  4. Execute the skaffold run command:

    $ cd /path/to/forgeops
    $ skaffold run

    Skaffold builds a new idm Docker image and redeploys IDM.

  5. Perform version control activities on your forgeops repository clone:

    1. Run the git status command.

    2. Review the state of the working directory and staging area.

    3. (Optional) Run the git commit command to commit changes to files that have been modified.

  6. To validate that IDM has the expected configuration, start the Admin UI and verify that your configuration changes are present.

Property Value Substitution

ForgeRock recommends using property value substitution for all passwords in the ForgeRock Identity Platform configuration in containerized deployments. Property value substitution eases promotion of the ForgeRock Identity Platform configuration from a development environment to a test or production environment.

You should also use property value substitution for host names and any other configuration values that will change as you promote your deployment from a development environment to a test or production environment.

To use property value substitution:

  • For AM, edit exported JSON files, replacing hard-coded values for passwords, host names, and other fields with configuration expressions.

  • For IDM, specify values for passwords, host names, and other fields using configuration expressions in the IDM Admin UI, or when using the REST API.

  • Specify values for configuration expressions in the data section of the config map in the /path/to/forgeops/kustomize/overlay/7.0/all/kustomization.yaml file. Use a key/value pair for each value to be substituted, specifying the configuration expression as the key.
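
The following is a minimal, hypothetical sketch of such a key/value pair. The key names and values are illustrative only and may not match the actual config map in the CDK:

# Hypothetical fragment of the config map defined in
# kustomize/overlay/7.0/all/kustomization.yaml; keys shown are examples only.
data:
  AM_STORES_USER_PASSWORD: "password"
  MY_FQDN: "my-namespace.iam.example.com"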

Specifying passwords as described in the preceding section is extremely insecure. The passwords appear in cleartext in a Kubernetes config map.

It is expected that a future version of the platform will be able to resolve configuration expressions for passwords from password management systems; for example, HashiCorp Vault and Google Cloud Key Management System (KMS). For more information, see IDM issue #13262.

About Data Used by the Platform

The ForgeRock Identity Platform uses two types of data: configuration data and run-time data.

Configuration Data

Configuration data consists of properties and settings used by the ForgeRock Identity Platform. You update configuration data during the development phase of ForgeRock Identity Platform implementation. You should not change configuration data during the testing and production phases.

You change configuration data iteratively in a development environment. After changing configuration data, you rebuild Docker images and restart ForgeRock Identity Platform services when you’re ready to test sets of changes. If you make incorrect changes to configuration data, the platform might become inoperable. After testing modifications to configuration data, you promote your changes to test and production environments.

Examples of configuration data include AM realms, AM authentication trees, IDM social identity provider definitions, and IDM data mapping models for reconciliation.

Configuration Profiles

A ForgeRock Identity Platform configuration profile is a named set of configuration data that describes the operational characteristics of a running ForgeRock deployment.

Configuration profiles reside in two locations in the forgeops repository:

  • The master directory. Holds a canonical configuration profile for the CDK and user-customized configuration profiles. User-customized configuration profiles in this directory are considered to be the source of truth for ForgeRock Identity Platform deployments.

    The master directory for configuration profiles is located at the path /path/to/forgeops/config/7.0. You use Git to manage the configuration profiles in this directory.

  • The staging area. Holds a single configuration profile. You copy a profile from the master directory to the staging area before building a customized Docker image for the ForgeRock Identity Platform.

    The staging area is located in subdirectories of the path, /path/to/forgeops/docker/7.0. Configuration profiles copied to the staging area are transient and are not managed with Git.

The config.sh script lets you copy configuration profiles between the master directory and the staging area. It also lets you copy profiles from Kubernetes pods running ForgeRock Identity Platform components to the staging area.

You run this script before you build a customized Docker image for the platform. The script lets you specify which configuration profile to copy to the staging area. Skaffold uses this profile when it builds a Docker image.

For example, when you start developing customized images for the platform, you run the config.sh init command to initialize the staging area with the canonical CDK profile:

This diagram shows how the staging area is initialized from the canonical CDK profile.

You run the config.sh sync command to synchronize configuration changes you’ve made in a running deployment back to the staging area, and then to the master directory:

This diagram shows how the config.sh command synchronizes a configuration profile.

For more information about the config.sh script, see Managing Configurations in the forgeops repository’s top-level README file.

Run-Time Data

Run-time data consists of identities, policies, applications, and data objects used by the ForgeRock Identity Platform. You might extract sample run-time data while developing configuration data. Run-time data is volatile throughout ForgeRock Identity Platform implementation. Expect it to change even when the ForgeRock Identity Platform is in production.

You usually use sample data for run-time data. Run-time data that’s changed during development is not typically promoted to test and production environments. There’s no need to modify Docker images or restart ForgeRock Identity Platform services when run-time data is modified.

Examples of run-time data include AM and IDM identities, AM policies, AM OAuth 2.0 client definitions, and IDM relationships.

In the ForgeRock Identity Platform, run-time data is stored in databases and is not file-based. For more information about how run-time data is stored in AM and IDM, see the AM and IDM product documentation.

CDK Troubleshooting

Kubernetes deployments are multi-layered and often complex.

Errors and misconfigurations can crop up in a variety of places. Performing a logical, systematic search for the source of a problem can be daunting.

Troubleshooting techniques you can use when attempting to resolve an issue:

Third-Party Software Versions

ForgeRock recommends installing tested versions of third-party software in environments where you’ll run the CDK. See Environment Setup: Minikube for tested versions of third-party software.

If you used Homebrew to install third-party software, you can use the following commands to obtain software versions:

  • Homebrew: brew list --versions

  • Homebrew casks: brew cask list --versions

Minikube VM Configuration (Minikube deployments only)

The minikube start command example in Minikube Virtual Machine specifies the virtual hardware requirements for a Minikube VM.

Run the VBoxManage showvminfo "minikube" command to verify that your Minikube VM meets the stated memory requirement (Memory Size in the output), and to gather other information that might be of interest when troubleshooting issues running the CDK in a Minikube environment.
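
For example, to check just the memory allocation (assuming the VM is named minikube):

$ VBoxManage showvminfo "minikube" | grep -i "memory size"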

Sufficient Disk Space (Minikube deployments only)

When the Minikube VM runs low on disk space, it acts unpredictably. Unexpected application errors can appear.

Verify that adequate disk space is available by logging in to the Minikube VM and running a command to display free disk space:

$ minikube ssh
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  383M  3.6G  10% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   64K  3.9G   1% /tmp
/dev/sda1        25G  7.7G   16G  33% /mnt/sda1
/Users          465G  219G  247G  48% /Users
$ exit
logout

In the preceding example, 16 GB of disk space is available on the Minikube VM.

kubectl bash Tab Completion

The bash shell contains a feature that lets you use the Tab key to complete file names.

A bash shell extension that provides similar Tab key completion for the kubectl command is available. While not a troubleshooting tool, this extension can make troubleshooting easier, because it lets you enter kubectl commands more easily.

For more information about the kubectl bash Tab completion extension, see Enabling shell autocompletion in the Kubernetes documentation.

Note that to install the bash Tab completion extension, you must be running version 4 or later of the bash shell. To determine your bash shell version, run the bash --version command.
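
For example, assuming bash 4 or later and that kubectl is on your PATH, you can enable completion for the current session and persist it across sessions:

$ source <(kubectl completion bash)
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc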

Expanded Kustomize Output

If you’ve modified any of the Kustomize bases and overlays that come with the CDK, you might want to see how your changes affect CDK deployment. Use the kustomize build command to see how Kustomize expands your bases and overlays into YAML files.

For example:

$ cd /path/to/forgeops/kustomize/overlay/7.0
$ kustomize build all
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: forgeops-secrets
  name: forgeops-secrets-serviceaccount
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: forgeops-secrets
  name: forgeops-secrets-role
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - get
  - list
. . .

Skaffold Issues

Skaffold provides different levels of debug logging information. When you encounter issues deploying the platform with Skaffold, you can set the logging verbosity to display more messages. The additional messages might help you identify problems.

For example:

$ cd /path/to/forgeops
$ skaffold dev -v debug
INFO[0000] starting gRPC server on port 50051
INFO[0000] starting gRPC HTTP server on port 50052
INFO[0000] Skaffold &{Version:v0.38.0 ConfigVersion:skaffold/v1beta14 GitVersion: GitCommit:1012d7339d0055ab93d7f88e95b7a89292ce77f6 GitTreeState:clean BuildDate:2020-09-13T02:16:09Z GoVersion:go1.13 Compiler:gc Platform:darwin/amd64}
DEBU[0000] config version (skaffold/v1beta12) out of date: upgrading to latest (skaffold/v1beta14)
DEBU[0000] found config for context "minikube"
DEBU[0000] Defaulting build type to local build
DEBU[0000] validating yamltags of struct SkaffoldConfig
DEBU[0000] validating yamltags of struct Metadata
. . .

Pod Descriptions and Container Logs

Look at pod descriptions and container log files for irregularities that indicate problems.

Pod descriptions contain information about active Kubernetes pods, including their configuration, status, containers (including containers that have finished running), volume mounts, and pod-related events.

Container logs contain startup and run-time messages that might indicate problem areas. Each Kubernetes container has its own log that contains output written to stdout by the application running in the container. am container logs are especially important for troubleshooting AM issues in Kubernetes deployments: AM writes its debug logs to stdout. Therefore, the am container logs include all the AM debug logs.

Here’s an example of how you can use pod descriptions and container logs to troubleshoot. Events in the pod description indicate that Kubernetes was unsuccessful in pulling a Docker image required to run a container. You can review your Docker registry’s configuration to determine whether a misconfiguration caused the problem.
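
For example, you can inspect a pod directly with standard kubectl commands. The pod name below is illustrative; list the pods in your namespace first:

$ kubectl get pods
$ kubectl describe pod am-7b5c4d8f6d-abcde
$ kubectl logs am-7b5c4d8f6d-abcde    # add -c container-name if the pod has multiple containers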

The debug-logs.sh script generates the following HTML-formatted output, which you can view in a browser:

  • Descriptions of all the Kubernetes pods running the ForgeRock Identity Platform in your namespace

  • Logs for all of the containers running in these pods

Perform the following procedure to run the debug-logs.sh script and then view the output in a browser:

Run the debug-logs.sh Script
  1. Make sure that your namespace is the current namespace in your Kubernetes context.

  2. Change to the /path/to/forgeops/bin directory in your forgeops repository clone.

  3. Run the debug-logs.sh script:

    $ ./debug-logs.sh
    Generating debug log for namespace my-namespace
    rm: /tmp/forgeops/*: No such file or directory
    Generating amster-75c77f6974-rd2r2 logs
    Generating configstore-0 logs
    Generating ctsstore-0 logs
    Generating snug-seal-openam-6b84c96b78-xj8vs logs
    Generating userstore-0 logs
    open file:///tmp/forgeops/log.html in your browser
  4. In a browser, go to the URL shown in the debug-logs.sh output. For example, file:///tmp/forgeops/log.html. The browser displays a screen with a link for each ForgeRock Identity Platform pod in your namespace:

    Screen shot of debug-logs.sh output.
    debug-logs.sh Output
  5. (Optional) To access the information for a pod, select its link from the start of the debug-logs.sh output.

    Selecting the link takes you to the pod’s description. Logs for each of the pod’s containers follow the pod’s description.

  6. (Optional) To modify the output to contain the latest updates to the pod descriptions and container logs, run the debug-logs.sh script again, and then refresh your browser.

Kubernetes Container Access

You can log in to the bash shell of any container in the CDK with the kubectl exec command. From the shell, you can access ForgeRock-specific files, such as audit, debug, and application logs, and other files that might help you troubleshoot problems.

For example, access the AM authentication audit log as follows:

$ kubectl exec openam-960906639-wrjd8 -c openam -it /bin/bash
bash-4.3$ pwd
/usr/local/tomcat
bash-4.3$ cd
bash-4.3$ pwd
/home/forgerock
bash-4.3$ cd openam/openam/log
bash-4.3$ ls
access.audit.json  activity.audit.json  authentication.audit.json  config.audit.json
bash-4.3$ cat authentication.audit.json

{"realm":"/","transactionId":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-86","component":"Authentication","eventName":"AM-LOGIN-MODULE-COMPLETED","result":"SUCCESSFUL","entries":[{"moduleId":"Amster","info":{"authIndex":"service","authControlFlag":"REQUIRED","moduleClass":"Amster","ipAddress":"172.17.0.3","authLevel":"0"}}],"principal":["amadmin"],"timestamp":"2017-09-29T18:14:46.200Z","trackingIds":["29aac0af-4b62-48cd-976c-3bb5abbed8c8-79"],"_id":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-88"}
{"realm":"/","transactionId":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-86","userId":"id=amadmin,ou=user,dc=openam,dc=forgerock,dc=org","component":"Authentication","eventName":"AM-LOGIN-COMPLETED","result":"SUCCESSFUL","entries":[{"moduleId":"Amster","info":{"authIndex":"service","ipAddress":"172.17.0.3","authLevel":"0"}}],"timestamp":"2017-09-29T18:14:46.454Z","trackingIds":["29aac0af-4b62-48cd-976c-3bb5abbed8c8-79"],"_id":"29aac0af-4b62-48cd-976c-3bb5abbed8c8-95"}bash-4.3$ exit

You can also copy files from a Kubernetes pod to your local system using the kubectl cp command. For more information, see the kubectl command reference.
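
For example, to copy the audit log from the session above to your working directory (the pod name and path are illustrative):

$ kubectl cp openam-960906639-wrjd8:/home/forgerock/openam/openam/log/authentication.audit.json \
  ./authentication.audit.json -c openam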

CDK Shutdown and Removal

When you’re done working with your ForgeRock Identity Platform deployment, shut it down and remove it from your namespace as follows:

Shut Down and Remove a ForgeRock Identity Platform Deployment
  1. Go to the terminal window where you started Skaffold.

  2. Run the skaffold delete command to shut down your deployment and remove it from your namespace.

  3. Delete DS persistent volume claims (PVCs) from your namespace:

    $ kubectl delete pvc --all
    persistentvolumeclaim "data-ds-cts-0" deleted
    persistentvolumeclaim "data-ds-idrepo-0" deleted

Cloud Deployment Model Documentation

Deploy the CDM on GKE, Amazon EKS, or AKS to quickly spin up the platform for demonstration purposes. You’ll get a feel for what it’s like to deploy the platform on a Kubernetes cluster in the cloud. When you’re done, you won’t have a production-quality deployment. But you will have a robust reference implementation of the ForgeRock Identity Platform.

Start Here

Important information you should know before deploying on Kubernetes.

GKE Setup

Set up a Google Cloud project and a GKE Kubernetes cluster prior to deploying the platform.

EKS Setup

Set up an AWS account and an Amazon EKS Kubernetes cluster prior to deploying the platform.

AKS Setup

Set up an Azure subscription and an AKS Kubernetes cluster prior to deploying the platform.

Deploy the Platform

Deploy the ForgeRock Identity Platform on your GKE, EKS, or AKS cluster.

Access the Platform

Access platform UIs and APIs.

The ForgeRock Identity Platform serves as the basis for our simple and comprehensive identity and access management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.

About the Cloud Deployment Model

The ForgeRock Cloud Deployment Team has developed Docker images, Kustomize bases and overlays, Skaffold workflows, Pulumi scripts, and other artifacts expressly to build the Cloud Deployment Model (CDM). The forgeops repository on GitHub contains the CDM artifacts you can use to deploy the ForgeRock Identity Platform in a cloud environment.

The CDM is a reference implementation for ForgeRock cloud deployments. You can get a sample ForgeRock Identity Platform deployment up and running in the cloud quickly using the CDM. After deploying the CDM, you can use it to explore how you might configure your Kubernetes cluster before you deploy the platform in production.

The CDM is a robust sample deployment for demonstration and exploration purposes only. It is not a production deployment.

This documentation describes how to use the CDM to stand up a Kubernetes cluster in the cloud that runs the ForgeRock Identity Platform, and then access the platform’s GUIs and REST APIs and run lightweight benchmarks. When you’re done, you can use the CDM to explore deployment customizations:

Illustrates the major tasks performed to deploy the CDM.

Standing up a Kubernetes cluster and deploying the platform using the CDM is an activity you might want to perform as a learning and exploration exercise before you put together a project plan for deploying the platform in production. To better understand how this activity fits in to the overall deployment process, see Deploy the CDM.

Using the CDM artifacts and this documentation, you can quickly get the ForgeRock Identity Platform running in a Kubernetes cloud environment. You deploy the CDM to begin to familiarize yourself with some of the steps you’ll need to perform when deploying the platform in the cloud for production use. These steps include creating a cluster suitable for deploying the ForgeRock Identity Platform, installing the platform, accessing its UIs and APIs, and running simple benchmarks.

Standardizes the process. The ForgeRock Cloud Deployment Team’s mission is to standardize a process for deploying ForgeRock Identity Platform natively in the cloud. The Team is made up of technical consultants and cloud software developers. We’ve had numerous interactions with ForgeRock customers, and discussed common deployment issues. Based on our interactions, we standardized on Kubernetes as the cloud platform, and we developed the CDM artifacts to make deployment of the platform easier in the cloud.

Simplifies baseline deployment. We then developed artifacts—Dockerfiles, Kustomize bases and overlays, Skaffold workflows, and Pulumi scripts—to simplify the deployment process. We deployed a production-quality Kubernetes cluster, and kept it up and running 24x7. We conduct continuous integration and continuous deployment as we add new capabilities and fix problems in the system. We maintain, troubleshoot, and tune the system for optimized performance. Most importantly, we documented the process, and captured benchmark results—a process with results you can replicate.

Eliminates guesswork. If you use our CDM artifacts and follow the instructions in this documentation without deviation, you can attain results similar to the benchmark results reported in this document. The CDM takes the guesswork out of setting up a cloud environment. It bypasses the deploy-test-integrate-test-repeat cycle many customers struggle through when spinning up the ForgeRock Identity Platform in the cloud for the first time.

Prepares you to deploy in production. After you’ve deployed the CDM, you’ll be ready to start working with experts on deploying in production. We strongly recommend that you engage a ForgeRock technical consultant or partner to assist you to deploy the platform in production.

CDM Architecture

Once you deploy the CDM, the ForgeRock Identity Platform is fully operational within a Kubernetes cluster. forgeops artifacts provide well-tuned JVM settings, memory and CPU limits, and other CDM configurations. Here are some of the characteristics of the CDM:

Multi-zone Kubernetes cluster

ForgeRock Identity Platform is deployed in a Kubernetes cluster. For high availability, CDM clusters are distributed across three zones.

For better node sizing, pods in CDM clusters are organized in two node pools.

See the diagram under Highly available, distributed deployment (below) for the organization of pods in zones and node pools in a CDM cluster.

Third-party deployment and monitoring tools
Ready-to-use ForgeRock Identity Platform components
  • Multiple DS instances are deployed for higher availability. Separate instances are deployed for Core Token Service (CTS) tokens and identities. The instances for identities also contain AM and IDM run-time data.

  • The AM configuration is file-based, stored at the path /home/forgerock/openam/config inside the AM Docker container (and in the AM pods).

  • Multiple AM instances are deployed for higher availability. The AM instances are configured to access the DS data stores.

  • Multiple IDM instances are deployed for higher availability. The IDM instances are configured to access the DS data stores.

Highly available, distributed deployment

Deployment across the three zones ensures that the ingress controller and all ForgeRock Identity Platform components are highly available.

Distribution across the two node pools—primary and DS—groups like pods together, enabling appropriate node sizing.

The following diagram shows how pods are organized in node pools and zones on CDM clusters:

CDM clusters have three zones and two node pools. The node pools have six nodes each.
Load balancing

The NGINX Ingress Controller provides load balancing services for CDM deployments. Ingress controller pods run in the nginx namespace. Implementation varies by cloud provider.
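
For example, you can verify the ingress controller with standard kubectl commands; the EXTERNAL-IP column of the service output shows the load balancer address:

$ kubectl get pods -n nginx
$ kubectl get service -n nginx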

Secured communication

The ingress controller is SSL-enabled. SSL is terminated at the ingress controller. Incoming requests and outgoing responses are encrypted. For more information, see Secure HTTP.

Stateful Sets

The CDM uses Kubernetes stateful sets to manage the DS pods. Stateful sets protect against data loss if Kubernetes client containers fail.

The CTS data stores are configured for affinity load balancing for optimal performance: AM connections to CTS servers use token affinity in the CDM.

The AM policies, application data, and identities reside in the idrepo directory service. All of the AM pods share a single idrepo master, which can fail over to one or more secondary directory services.
Authentication

IDM is configured to use AM for authentication.

DS replication

All DS instances are configured for full replication of identities and session tokens.

Backup and restore

The CDM is ready to back up directory data, but backups are not scheduled by default. To schedule backups, see Backup and Restore.

You can enable the automatic restore capability in CDM to create new DS instances with data from the backup of another CDM deployment with the same DS topology.

Initial data loading jobs

When it starts up, the CDM runs three jobs to load data into the environment:

  • The forgeops-secrets job generates a set of Kubernetes secrets used by the platform.

  • The amster job loads application data, such as OAuth 2.0 client definitions, to the idrepo DS instance.

  • The ldif-importer job loads policy data required by AM to the idrepo DS instance.
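
To verify that these jobs completed, you can list the jobs in the namespace where the CDM is deployed (the CDM uses the prod namespace, as noted in Kubernetes Cluster Creation):

$ kubectl get jobs -n prod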

Environment Setup: GKE

Before deploying the CDM, you must complete the setup tasks described in the following sections:

Windows users

ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:

  • Hypervisor: Hyper-V, VMWare Player, or VMWare Workstation

  • Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space

  • Nested virtualization enabled in the Linux VM.

Perform all the procedures in this documentation within the Linux VM. In this documentation, the local computer refers to the Linux VM for Windows users.

Third-Party Software

Before installing the CDM, you must obtain non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.

The versions listed in the following table have been validated for deploying the CDM on Google Cloud. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install the following third-party software:

Software                                 Version   Homebrew package
Docker Desktop[17]                       2.3.0.3   docker (cask)[18]
Kubernetes client (kubectl)              1.18.6    kubernetes-cli
Skaffold                                 1.12.1    skaffold
Kustomize                                3.8.1     kustomize
Kubernetes context switcher (kubectx)    0.9.1     kubectx
Pulumi                                   2.7.1     Do not use brew to install Pulumi[19]
Helm                                     3.2.4_1   kubernetes-helm
Gradle                                   6.5.1     gradle
Node.js                                  12.18.3   node@12 (the CDM requires Node.js version 12)
Google Cloud SDK                         303.0.0   google-cloud-sdk (cask)[7]

Google Cloud Project Setup

The CDM runs in a Kubernetes cluster in a Google Cloud project.

This section outlines how the Cloud Deployment Team created and configured our Google Cloud project before we created our cluster.

To replicate Google Cloud project creation and configuration, follow this procedure:

Configure a Google Cloud Project for the CDM
  1. Log in to the Google Cloud Console and create a new project.

  2. Authenticate to the Google Cloud SDK to obtain the permissions you’ll need to create a cluster:

    1. Configure the Google Cloud SDK standard component to use your Google account. Run the following command:

      $ gcloud auth login
    2. A browser window appears, prompting you to select a Google account. Select the account you want to use for cluster creation.

      A second screen requests several permissions. Select Allow.

      A third screen should appear with the heading, "You are now authenticated with the Google Cloud SDK!"

    3. Set the Google Cloud SDK configuration to reference your new project. Specify the project ID, not the project name, in the gcloud config set project command:

      $ gcloud config set project my-project-id
    4. Acquire new user credentials to use for Google Cloud SDK application default credentials:

      $ gcloud auth application-default login
    5. A browser window appears, prompting you to select a Google account. Select the account you want to use for cluster creation.

      A second screen requests the required permission. Select Allow.

      A third screen should appear with the heading, "You are now authenticated with the Google Cloud SDK!"

  3. Assign the following roles to users who will be creating Kubernetes clusters and deploying CDM:

    • Editor

    • Kubernetes Engine Admin

    • Kubernetes Engine Cluster Admin

    Remember, the CDM is a reference implementation, and is not for production use. The roles you assign in this step are suitable for the CDM. When you create a project plan, you’ll need to determine which Google Cloud roles are required.

  4. As of this writing, the CDM uses the C2 machine type for the DS node pool. Make sure that your project has an adequate quota for this machine type in the region where you’ll deploy the CDM. If the quota is lower than 96 CPUs, request a quota increase to 96 CPUs (or higher) before you create the cluster for the CDM.

    When you create a project plan, you’ll need to determine which machine types are needed, and, possibly, increase quotas.

  5. Create a Google Cloud service account for performing backups, and download the service account credential file, which we refer to here as the my-sa-credential.json file.

  6. Create a Google Cloud Storage bucket and note the bucket’s Link for gsutil (the gs:// URI).

  7. Grant permissions on the storage bucket to the service account you created for backup. (Example gcloud and gsutil commands for steps 5-7 follow this procedure.)
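
The following is a sketch of steps 5-7 using the gcloud and gsutil CLIs. The service account name, project ID, bucket name, and role are hypothetical; adjust them for your project:

$ gcloud iam service-accounts create my-backup-sa --project my-project-id
$ gcloud iam service-accounts keys create my-sa-credential.json \
  --iam-account my-backup-sa@my-project-id.iam.gserviceaccount.com
$ gsutil mb gs://my-cdm-backup-bucket
$ gsutil iam ch \
  serviceAccount:my-backup-sa@my-project-id.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://my-cdm-backup-bucket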

Kubernetes Cluster Creation and Setup

Now that you’ve installed third-party software on your local computer and set up a Google Cloud project, you’re ready to create a Kubernetes cluster for the CDM in your project.

The Cloud Deployment Team used Pulumi software to create the CDM cluster. This section describes how the team used Pulumi to create and set up a Kubernetes cluster that can run the CDM. It covers the following topics:

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[20]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.
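
    For example, assuming you clone over HTTPS from the public GitHub location of the repository:

    $ git clone https://github.com/ForgeRock/forgeops.git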

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'

Node.js Dependencies

The cluster directory in the forgeops repository contains Pulumi scripts for creating the CDM cluster.

The Pulumi scripts are written in TypeScript and run in the Node.js environment. Before running the scripts, you’ll need to install the Node.js dependencies listed in the /path/to/forgeops/cluster/pulumi/package.json file as follows:

Install Node.js Dependencies
  1. Change to the /path/to/forgeops/cluster/pulumi directory.

  2. Remove any previously installed Node.js dependencies:

    $ rm -rf node_modules
  3. Install dependencies:

    $ npm install
    >
    . . .
    added 292 packages from 447 contributors and audited 295 packages in 22.526s
    . . .
    found 0 vulnerabilities

Kubernetes Cluster Creation

After cloning the forgeops repository and installing Node.js dependencies, you’re ready to create the Kubernetes cluster for the CDM.

This section outlines how the Cloud Deployment Team created our cluster. The cluster has the following characteristics:

  • prod, nginx, and cert-manager namespaces created

  • NGINX ingress controller deployed

  • Certificate Manager deployed

  • Prometheus and Grafana monitoring tools deployed

Perform the following procedures to replicate CDM cluster creation:

Create a Kubernetes Cluster for the CDM
  1. Obtain the following information from your Google Cloud administrator:

    1. The ID of the project in which to create the cluster. Be sure to obtain the project ID, not the project name.

    2. The region in which to create the cluster. The CDM is deployed in three zones within a single region.

      The Cloud Deployment Team deployed the CDM in the us-east1 region. If you want to validate your deployment against the benchmarks in Performance Benchmarks, use this location when deploying the CDM, regardless of your actual location.

      However, if you would like to deploy the CDM in a different region, you may do so. The region must support C2 instance types.

  2. ForgeRock provides Pulumi scripts to use for cluster creation. Use them when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore a different infrastructure-as-code solution, if you like. When you Create a Project Plan, you’ll need to identify your organization’s preferred infrastructure-as-code solution, and create your own cluster creation automation scripts, if necessary.

    Store your Pulumi passphrase in an environment variable:

    $ export PULUMI_CONFIG_PASSPHRASE=my-passphrase

    The default Pulumi passphrase is password.

  3. Log in to Pulumi using the local option or the Pulumi service.

    For example, to log in using the local option:

    $ pulumi login -l

    As of this writing, issues have been encountered when using cloud provider backends for storing Pulumi stacks, a preview feature. Because of this, do not specify a cloud provider backend when logging in to Pulumi.

  4. Create networking infrastructure components to support your cluster:

    1. Change to the directory that contains the Google Cloud infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/gcp/infra
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/gcp/infra. If you are not in this directory, Pulumi will create the infrastructure stack incorrectly.

    3. Initialize the infrastructure stack:

      $ pulumi stack init gcp-infra

      Note that initializing a Pulumi stack also selects the stack, so you don’t need to explicitly execute the pulumi stack select command.

    4. Configure the Google Cloud project for the infrastructure stack. Use the project ID you obtained in Step 1:

      $ pulumi config set gcp:project my-project-id
    5. (Optional) If you’re deploying the CDM in a region other than us-east1, configure your infrastructure stack with your region. Use the region you obtained in Step 1:

      $ pulumi config set gcp:region my-region
    6. Create the infrastructure components:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes to proceed.

    7. To verify that Pulumi created the infrastructure components, log in to the Google Cloud console. Select the VPC Networks option. You should see a new network with a public subnet in the VPC Networks list. The new network should be deployed in your region.

  5. Create your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/gcp/gke
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/gcp/gke. If you are not in this directory, Pulumi will create the CDM stack incorrectly.

    3. Initialize the CDM stack:

      $ pulumi stack init gke-medium
    4. Configure the Google Cloud project for the cluster stack. Use the project ID you obtained in Step 1:

      $ pulumi config set gcp:project my-project-id
    5. Create the cluster:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes to proceed.

      Pulumi creates the cluster in the same region in which you created the infrastructure stack.

    6. Make a note of the static IP address that Pulumi created. This IP address appears in the output from the pulumi up command. Look for output similar to:

      ip: "35.229.115.150"

      You’ll need this IP address when you deploy the NGINX ingress controller.

    7. To verify that Pulumi created the cluster, log in to the Google Cloud console. Select the Kubernetes Engine option. You should see the new cluster in the list of Kubernetes clusters.

  6. After creating a Kubernetes cluster, Pulumi does not write cluster configuration information to the default Kubernetes configuration file, $HOME/.kube/config. Configure your local computer’s Kubernetes settings so that the kubectl command can access your new cluster:

    1. Verify that the /path/to/forgeops/cluster/pulumi/gcp/gke directory is still your current working directory.

    2. Create a kubeconfig file with your new cluster’s configuration in the current working directory:

      $ pulumi stack output kubeconfig > kubeconfig
    3. Configure Kubernetes to get cluster information from the union of the new kubeconfig file and the default Kubernetes configuration file:

      $ export KUBECONFIG=$PWD/kubeconfig:$HOME/.kube/config
    4. Run the kubectx command.

      The output should contain your newly-created cluster and any existing clusters.

      The current context should be set to the context for your new cluster.

  7. Check the status of the pods in your cluster until all the pods are ready:

    1. List all the pods in the cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE    NAME                                        READY STATUS    RESTARTS AGE
      kube-system  event-exporter-v0.3.0-74bf544f8b-ddmp5      2/2   Running   0        61m
      kube-system  fluentd-gke-8g2rc                           2/2   Running   0        61m
      kube-system  fluentd-gke-8ztb6                           2/2   Running   0        61m
      . . .
      kube-system  fluentd-gke-scaler-dd489f778-wdhwr          1/1   Running   0        61m
      . . .
      kube-system  gke-metrics-agent-4fhss                     1/1   Running   0        60m
      kube-system  gke-metrics-agent-82qjl                     1/1   Running   0        60m
      . . .
      kube-system  kube-dns-5dbbd9cc58-8l8xl                   4/4   Running   0        61m
      kube-system  kube-dns-5dbbd9cc58-m5lmj                   4/4   Running   0        66m
      kube-system  kube-dns-autoscaler-6b7f784798-48p9n        1/1   Running   0        66m
      kube-system  kube-proxy-gke-cdm-medium-ds-03b9e239-k67z  1/1   Running   0        61m
      . . .
      kube-system  kube-proxy-gke-cdm-medium-primary-. . .     1/1   Running   0        62m
      . . .
      kube-system  l7-default-backend-84c9fcfbb-9qkvq          1/1   Running   0        66m
      kube-system  metrics-server-v0.3.3-6fb7b9484f-6k65z      2/2   Running   0        61m
      kube-system  prometheus-to-sd-7nh9p                      1/1   Running   0        62m
      . . .
      kube-system  stackdriver-metadata-agent-cluster-. . .    2/2   Running   0        66m
    2. Review the output. Deployment is complete when:

      • The READY column indicates all running containers are available. The entry in the READY column shows [number of containers that are ready/total number of containers in the pod].

      • All entries in the STATUS column indicate Running or Completed.

    3. If necessary, continue to query your cluster’s status until all the pods are ready.
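
      One way to keep querying without rerunning the command is kubectl's --watch flag; for example:

      $ kubectl get pods --all-namespaces --watch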

Deploy an NGINX Ingress Controller

Before you perform this procedure, you must have initialized your CDM cluster by performing the steps in Create a Kubernetes Cluster for the CDM. If you did not set up your cluster using this technique, the cluster might be missing some required configuration.

Also, remember, the CDM is a reference implementation, and is not for production use. Use the NGINX ingress controller when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore deploying a different ingress controller. When you plan your production deployment, you’ll need to determine which ingress controller to use in production.

  1. Deploy the NGINX ingress controller in your cluster. For static-ip-address, specify the IP address obtained when you performed Step 5f of the Create your cluster procedure:

    $ /path/to/forgeops/bin/ingress-controller-deploy.sh -g -i static-ip-address
    namespace/nginx created
    Release "nginx-ingress" does not exist. Installing it now.
    NAME: nginx-ingress
    LAST DEPLOYED: Mon Aug 10 16:14:33 2020
    NAMESPACE: nginx
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    . . .
  2. Check the status of the pods in the nginx namespace until all the pods are ready:

    $ kubectl get pods --namespace nginx
    NAME                                              READY STATUS    RESTARTS AGE
    nginx-ingress-controller-69b755f68b-9l5n8         1/1   Running   0        4m38s
    nginx-ingress-controller-69b755f68b-hp456         1/1   Running   0        4m38s
    nginx-ingress-default-backend-576b86996d-qxst9    1/1   Running   0        4m38s
  3. Get the ingress controller’s external IP address:

    $ kubectl get services --namespace nginx

    The ingress controller’s IP address appears in the EXTERNAL-IP column.

  4. You’ll access ForgeRock Identity Platform services through the ingress controller. The URLs you’ll use must be resolvable from your local computer.

    Add an entry similar to the following to your /etc/hosts file:

    ingress-ip-address prod.iam.example.com

    For ingress-ip-address, specify the IP address that you obtained in the preceding step.

Deploy Certificate Manager

Use cert-manager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different certificate management tooling, if you like. When you plan for production deployment, you’ll need to determine how you want to manage certificates in production.

  1. Deploy the Certificate Manager in your cluster:

    $ /path/to/forgeops/bin/certmanager-deploy.sh
    customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
    namespace/cert-manager created
    serviceaccount/cert-manager-cainjector created
    serviceaccount/cert-manager created
    serviceaccount/cert-manager-webhook created
    clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
    . . .
    service/cert-manager created
    service/cert-manager-webhook created
    deployment.apps/cert-manager-cainjector created
    deployment.apps/cert-manager created
    deployment.apps/cert-manager-webhook created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    deployment.extensions/cert-manager-webhook condition met
    clusterissuer.cert-manager.io/default-issuer created
    secret/certmanager-ca-secret created
  2. Check the status of the pods in the cert-manager namespace until all the pods are ready:

    $ kubectl get pods --namespace cert-manager
    NAME                                              READY STATUS    RESTARTS AGE
    cert-manager-6d5fd89bdf-khj5w                     1/1   Running   0        3m57s
    cert-manager-cainjector-7d47d59998-h5b48          1/1   Running   0        3m57s
    cert-manager-webhook-6559cc8549-8vdtp             1/1   Running   0        3m56s
Deploy Prometheus, Grafana, and Alertmanager

Remember the CDM is a reference implementation, and is not for production use. Use Prometheus, Grafana, and Alertmanager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different monitoring, reporting, and alerting tooling, if you like. When you create a project plan, you’ll need to determine how you want to implement monitoring, alerts, and reporting in your environment.

  1. Deploy Prometheus, Grafana, and Alertmanager in your cluster. You can safely ignore info: skipping unknown hook: "crd-install" messages:

    $ /path/to/forgeops/bin/prometheus-deploy.sh
    namespace/monitoring created
    "stable" has been added to your repositories
    Release "prometheus-operator" does not exist. Installing it now.
    manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
    . . .
    NAME: prometheus-operator
    LAST DEPLOYED: Mon Feb 10 16:47:45 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    . . .
    customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
    Release "forgerock-metrics" does not exist. Installing it now.
    NAME: forgerock-metrics
    LAST DEPLOYED: Mon Feb 10 16:48:27 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  2. Check the status of the pods in the monitoring namespace until all the pods are ready:

    $ kubectl get pods --namespace monitoring
    NAME                                                      READY   STATUS    RESTARTS   AGE
    alertmanager-prometheus-operator-alertmanager-0           2/2     Running   0          94s
    prometheus-operator-grafana-86dcbfc89-5f9jf               2/2     Running   0          100s
    prometheus-operator-kube-state-metrics-66b4c95cd9-ln2mq   1/1     Running   0          100s
    prometheus-operator-operator-7684f89b74-h2dj2             2/2     Running   0          100s
    prometheus-operator-prometheus-node-exporter-4pt4m        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-59shz        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-5mknp        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-794pr        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-dc5hd        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-pl959        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-qlv9q        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-snckr        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-tgrg7        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-tvs7m        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-w6z54        1/1     Running   0          100s
    prometheus-operator-prometheus-node-exporter-ztvh4        1/1     Running   0          100s
    prometheus-prometheus-operator-prometheus-0               3/3     Running   1          84s

Cloud Storage for DS Backup

DS data backup is stored in cloud storage. Before you deploy the platform, you should have set up a cloud storage bucket to store the DS data backup. Configure the forgeops artifacts with the location and credentials for the cloud storage bucket:

Set the Credentials and Location for Cloud Storage
  1. Get the location of the my-sa-credential.json file that contains the credential for the service account used for storing DS backups in Google Cloud Storage, as mentioned in Step 5 of Configure a Google Cloud Project for the CDM.

  2. Change to the /path/to/forgeops/kustomize/base/7.0/ds/base/ directory.

  3. Run the following command:

    $ kubectl create secret generic cloud-storage-credentials \
     --from-file=GOOGLE_CREDENTIALS_JSON=/path/to/my-sa-credential.json \
     --dry-run=client -o yaml > cloud-storage-credentials.yaml
  4. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

  5. Edit the kustomization.yaml file and set the DSBACKUP_DIRECTORY parameter to the Link for gsutil parameter you noted in Step 6 of Configure a Google Cloud Project for the CDM.

    For example: DSBACKUP_DIRECTORY gs://my-backup-bucket.

docker push Setup

In the deployment environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your GKE cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to be able to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker registry available to your cluster.

  • Skaffold needs to know the location of the Docker registry.

Perform the following procedure to enable Skaffold to push Docker images to a registry accessible to your cluster:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. Set up a Docker credential helper:

    $ gcloud auth configure-docker
  3. Run the kubectx command to obtain the Kubernetes context.

  4. Configure Skaffold with the Docker registry location for your project and the Kubernetes context. Use your project ID (not your project name) and the Kubernetes context that you obtained in the previous step:

    $ skaffold config set default-repo gcr.io/my-project-ID -k my-kubernetes-context

You’re now ready to deploy the CDM.

Environment Setup: EKS

Before deploying the CDM, you must:

Windows users

ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:

  • Hypervisor: Hyper-V, VMWare Player, or VMWare Workstation

  • Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space

  • Nested virtualization enabled in the Linux VM.

Perform all the procedures in this documentation within the Linux VM. In this documentation, the local computer refers to the Linux VM for Windows users.

Deployment Overview

The following diagram provides an overview of a CDM deployment in the Amazon EKS environment:

The CDM cluster spans three subnets and contains three worker nodes.
  • An AWS stack template is used to create a virtual private cloud (VPC).

  • Three subnets are configured across three availability zones.

  • A Kubernetes cluster is created over the three subnets.

  • Three worker nodes are created within the cluster. The worker nodes contain the computing infrastructure to run the CDM components.

  • A local file system is mounted to the DS pod for storing directory data backup.

Third-Party Software

Before installing the CDM, you must obtain non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.

The versions listed in the following table have been validated for deploying the CDM on Amazon Web Services. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install the following third-party software:

Software                                 Version   Homebrew package

Docker Desktop[21]                       2.3.0.3   docker (cask)[22]
Kubernetes client (kubectl)              1.18.6    kubernetes-cli
Skaffold                                 1.12.1    skaffold
Kustomize                                3.8.1     kustomize
Kubernetes context switcher (kubectx)    0.9.1     kubectx
Pulumi                                   2.7.1     Do not use brew to install Pulumi[23].
Helm                                     3.2.4_1   kubernetes-helm
Gradle                                   6.5.1     gradle
Node.js                                  12.18.3   node@12 (CDM requires Node.js version 12.)
Amazon AWS Command Line Interface        2.0.40    awscli
AWS IAM Authenticator for Kubernetes     0.5.1     aws-iam-authenticator

Amazon EKS Environment Setup

The CDM runs in a Kubernetes cluster in an Amazon EKS environment.

This section outlines the steps that the Cloud Deployment Team performed to set up an Amazon EKS environment for deploying CDM. It includes the following topics:

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[24]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'

Permissions to Configure CDM Resources

This section outlines how the Cloud Deployment Team granted permissions enabling users to manage CDM resources in Amazon EKS.

Grant Users AWS Permissions

Remember, the CDM is a reference implementation and is not for production use. The permissions you grant in this procedure are suitable for the CDM. When you create a project plan, you’ll need to determine which AWS permissions are required.

  1. Create a group with the name cdm-users.

  2. Attach the following AWS preconfigured policies to the cdm-users group:

    • AWSLambdaFullAccess

    • IAMUserChangePassword

    • IAMReadOnlyAccess

    • PowerUserAccess

  3. Create the following four policies in the IAM service of your AWS account:

    1. Create the eks-full-access policy using the eks-full-access.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

    2. Create the iam-change-user-key policy using the iam-change-user-key.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

    3. Create the iam-create-role policy using the iam-create-role.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

    4. Create the iam-limited-write policy using the iam-limited-write.json file in the /path/to/forgeops/etc/aws-example-iam-policies directory.

  4. Attach the policies you created to the cdm-users group.

  5. Assign AWS users who will set up CDM to the cdm-users group.

  6. Create an S3 bucket to store DS data backup, and note its S3 link.

Perform all the subsequent steps here as an AWS user who is a member of the cdm-users group.
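
If you prefer the AWS CLI to the console for steps 1, 2, and 6, a minimal sketch follows; the bucket name is an illustrative assumption, and only one of the four managed policies is shown (repeat the attach-group-policy command for the others):

  $ aws iam create-group --group-name cdm-users
  $ aws iam attach-group-policy --group-name cdm-users \
     --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
  $ aws s3 mb s3://my-backup-bucket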

Amazon EKS Cluster Dependencies

This section outlines how the Cloud Deployment Team set up dependencies before creating a Kubernetes cluster in an Amazon EKS environment.

Set Up Amazon EKS Cluster Dependencies
  1. If you haven’t already done so, set up your aws command-line interface environment using the aws configure command.

  2. Verify that you have logged in as a member of the cdm-users group:

    $ aws iam list-groups-for-user --user-name my-user-name --output json
    {
        "Groups": [
            {
                "Path": "/",
                "GroupName": "cdm-users",
                "GroupId": "ABCDEFGHIJKLMNOPQRST",
                "Arn": "arn:aws:iam::048497731163:group/cdm-users",
                "CreateDate": "2020-03-11T21:03:17+00:00"
            }
        ]
    }
  3. The Cloud Deployment Team deployed the CDM in the us-east-1 (N. Virginia) region. To validate your deployment against the benchmarks in Performance Benchmarks, use the us-east-1 region when deploying the CDM.

    To set your default region to us-east-1, run the aws configure command:

    $ aws configure set default.region us-east-1

    To use any other region, note the following:

    • The region must support Amazon EKS.

    • Objects required for your EKS cluster must reside in the same region. To make sure that the objects are created in the correct region, be sure to set your default region as shown above.

  4. Create Amazon ECR repositories for the ForgeRock Identity Platform Docker images:

    $ for i in am amster ds-cts ds-idrepo forgeops-secrets idm ig ldif-importer;
    do
     aws ecr create-repository --repository-name "forgeops/${i}";
    done
    
    {
        "repository": {
            "repositoryArn": "arn:aws:ecr:us-east-1:. . .:repository/forgeops/am",
            "registryId": ". . .",
            "repositoryName": "forgeops/am",
            "repositoryUri": ". . . .dkr.ecr.us-east-1.amazonaws.com/forgeops/am",
            "createdAt": "2020-08-03T14:19:54-08:00"
        }
    }
    . . .

Node.js Dependencies

The cluster directory in the forgeops repository contains Pulumi scripts for creating the CDM cluster.

The Pulumi scripts are written in TypeScript and run in the Node.js environment. Before running the scripts, you’ll need to install the Node.js dependencies listed in the /path/to/forgeops/cluster/pulumi/package.json file as follows:

Install Node.js Dependencies
  1. Change to the /path/to/forgeops/cluster/pulumi directory.

  2. Remove any previously installed Node.js dependencies:

    $ rm -rf node_modules
  3. Install dependencies:

    $ npm install
    > . . .
    added 292 packages from 447 contributors and audited 295 packages in 22.526s
    . . .
    found 0 vulnerabilities

Kubernetes Cluster Creation

After cloning the forgeops repository and setting up other dependencies, you’re ready to create the Kubernetes cluster for the CDM.

This section outlines how the Cloud Deployment Team created our cluster. The cluster has the following characteristics:

  • prod, nginx, and cert-manager namespaces created

  • NGINX ingress controller deployed

  • Certificate Manager deployed

  • Prometheus and Grafana monitoring tools deployed

Perform the following procedures to replicate CDM cluster creation:

Create a Kubernetes Cluster for the CDM
  1. Obtain the following information from your AWS administrator:

    • The AWS region and zones in which you will create the cluster. The CDM is deployed in three zones within a single region.

      The Cloud Deployment Team deployed the CDM in three zones of the us-east-1 region. If you want to validate your deployment against the benchmarks in Performance Benchmarks, use these locations when deploying the CDM, regardless of your actual location.

    • Note the AMI ID for the latest patch version of Kubernetes 1.17 for your region from the tables in Amazon EKS-Optimized AMI.

  2. ForgeRock provides Pulumi scripts to use for cluster creation. Use them when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore a different infrastructure-as-code solution, if you like. When you plan for production deployment, you’ll need to identify your organization’s preferred infrastructure-as-code solution, and create your own cluster creation automation scripts, if necessary.

    Store your Pulumi passphrase in an environment variable:

    $ export PULUMI_CONFIG_PASSPHRASE=my-passphrase

    The default Pulumi passphrase is password.

  3. Log in to Pulumi using the local option or the Pulumi service.

    For example, to log in using the local option:

    $ pulumi login -l

    As of this writing, issues have been encountered when using cloud provider backends for storing Pulumi stacks, a preview feature. Because of this, do not specify a cloud provider backend when logging in to Pulumi.

  4. Create networking infrastructure components to support your cluster:

    1. Change to the directory that contains the AWS infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/aws/infra
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/aws/infra. If you are not in this directory, Pulumi will create the infrastructure stack incorrectly.

    3. Initialize the infrastructure stack:

      $ pulumi stack init aws-infra

      Note that initializing a Pulumi stack also selects the stack, so you don’t need to explicitly execute the pulumi stack select command.

    4. (Optional) If you’re deploying the CDM in an AWS region other than us-east-1, configure your infrastructure stack with your region:

      $ pulumi config set aws:region my-region
  5. Create the infrastructure components:

    $ pulumi up

    Pulumi provides a preview of the operation and issues the following prompt:

    Do you want to perform this update?

    Review the operation, and then select yes if you want to proceed.

  6. To verify that Pulumi created the infrastructure components, log in to the AWS console, and select your region. Go to the VPC services page and verify that a new VPC is created in your region.

  7. Create your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/aws/eks
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/aws/eks. If you are not in this directory, Pulumi will create the CDM stack incorrectly.

    3. Initialize the CDM stack:

      $ pulumi stack init eks-medium
    4. Create a key pair called cdm_id_rsa in the ~/.ssh directory. You’ll need this key pair to log in to worker nodes.
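
      One way to generate the key pair is with ssh-keygen; a minimal sketch (the empty passphrase is an assumption for convenience, not a requirement):

      $ ssh-keygen -t rsa -f ~/.ssh/cdm_id_rsa -N ''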

    5. Configure the public key in the CDM stack.

      $ pulumi config set --secret eks:pubKey < ~/.ssh/cdm_id_rsa.pub
    6. Configure the AWS region and zones for the cluster stack:

      1. (Optional) If you’re deploying the CDM in an AWS region other than us-east-1, configure the cluster stack with your region:

        $ pulumi config set aws:region my-region
      2. Configure each of your worker node types with the AMI ID you noted in Step 1:

        $ pulumi config set dsnodes:ami my-AMI-ID
        $ pulumi config set primarynodes:ami my-AMI-ID
        $ pulumi config set frontendnodes:ami my-AMI-ID
    7. Create the cluster:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes if you want to proceed.

    8. To verify that Pulumi created the cluster, log in to the AWS console. Select the EKS service link. You should see the new cluster in the list of Amazon EKS clusters.

  8. After creating a Kubernetes cluster, Pulumi does not write cluster configuration information to the default Kubernetes configuration file, $HOME/.kube/config. Configure your local computer’s Kubernetes settings so that the kubectl command can access your new cluster:

    1. Verify that the /path/to/forgeops/cluster/pulumi/aws/eks directory is still your current working directory.

    2. Create a kubeconfig file with your new cluster’s configuration in the current working directory:

      $ pulumi stack output kubeconfig > kubeconfig
    3. Configure Kubernetes to get cluster information from the union of the new kubeconfig file and the default Kubernetes configuration file:

      $ export KUBECONFIG=$PWD/kubeconfig:$HOME/.kube/config
    4. Run the kubectx command.

      The output should contain your newly-created cluster and any existing clusters.

      The current context should be set to the context for your new cluster.

  9. Check the status of the pods in your cluster until all the pods are ready:

    1. List all the pods in the cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE    NAME                                        READY STATUS    RESTARTS AGE
      kube-system  event-exporter-v0.3.0-74bf544f8b-ddmp5      2/2   Running   0        61m
      kube-system  fluentd-aws-8g2rc                           2/2   Running   0        61m
      kube-system  fluentd-aws-8ztb6                           2/2   Running   0        61m
      . . .
      kube-system  fluentd-aws-scaler-dd489f778-wdhwr          1/1   Running   0        61m
      . . .
      kube-system  kube-dns-5dbbd9cc58-8l8xl                   4/4   Running   0        61m
      kube-system  kube-dns-5dbbd9cc58-m5lmj                   4/4   Running   0        66m
      kube-system  kube-dns-autoscaler-6b7f784798-48p9n        1/1   Running   0        66m
      kube-system  kube-proxy-aws-cdm-medium-ds-03b9e239-k67z  1/1   Running   0        61m
      . . .
      kube-system  kube-proxy-aws-cdm-medium-primary-. . .     1/1   Running   0        62m
      . . .
      kube-system  l7-default-backend-84c9fcfbb-9qkvq          1/1   Running   0        66m
      kube-system  metrics-server-v0.3.3-6fb7b9484f-6k65z      2/2   Running   0        61m
      kube-system  prometheus-to-sd-7nh9p                      1/1   Running   0        62m
      . . .
      kube-system  stackdriver-metadata-agent-cluster-. . .    2/2   Running   0        66m
    2. Review the output. Deployment is complete when:

      • The READY column indicates all running containers are available. The entry in the READY column shows [number of containers that are ready/total number of containers in the pod].

      • All entries in the STATUS column indicate Running or Completed.

    3. If necessary, continue to query your cluster’s status until all the pods are ready.

Deploy an NGINX Ingress Controller

Before you perform this procedure, you must have initialized your CDM cluster by performing the steps in Create a Kubernetes Cluster for the CDM. If you did not set up your cluster using this technique, the cluster might be missing some required configuration.

Also, remember, the CDM is a reference implementation, and is not for production use. Use the NGINX ingress controller when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore deploying a different ingress controller.

When you plan your production deployment, you’ll need to determine which ingress controller to use in production.

  1. Deploy the NGINX ingress controller in your cluster:

    $ /path/to/forgeops/bin/ingress-controller-deploy.sh -e
    namespace/nginx created
    Release "nginx-ingress" does not exist. Installing it now.
    NAME: nginx-ingress
    LAST DEPLOYED: Mon Aug 10 16:14:33 2020
    NAMESPACE: nginx
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    . . .
  2. Check the status of the pods in the nginx namespace until all the pods are ready:

    $ kubectl get pods --namespace nginx
    NAME                                             READY   STATUS    RESTARTS   AGE
    nginx-ingress-controller-l7wn7                   1/1     Running   0          37s
    nginx-ingress-controller-n5g89                   1/1     Running   0          37s
    nginx-ingress-controller-nmnr7                   1/1     Running   0          37s
    nginx-ingress-controller-rlsd5                   1/1     Running   0          37s
    nginx-ingress-controller-x4h56                   1/1     Running   0          37s
    nginx-ingress-controller-zcbz5                   1/1     Running   0          37s
    nginx-ingress-default-backend-6b8dc9d88f-4w4h5   1/1     Running   0          37s
  3. Obtain the DNS name of the AWS elastic load balancer (ELB):

    $ aws elbv2 describe-load-balancers | grep DNSName
    
    "DNSName": "ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com",
  4. Get the external IP addresses of the ELB. For example:

    $ host ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com
    ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com has address 52.202.249.9
    ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com has address 52.71.212.215
    ExtIngressLB-488797d-a852d467b394ea75.elb.us-east-1.amazonaws.com has address 50.16.129.191

    The host command returns several IP addresses. You can use any of the IP addresses when you modify your local hosts file in the next step.

  5. You’ll access ForgeRock Identity Platform services through the ELB. The URLs you’ll use must be resolvable from your local computer.

    Add an entry similar to the following to your /etc/hosts file:

    ingress-ip-address prod.iam.example.com

    For ingress-ip-address, specify any one of the ELB’s external IP addresses that you obtained in the previous step.

Deploy Certificate Manager

The CDM is a reference implementation and is not for production use. Use cert-manager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different certificate management tooling, if you like. When you plan for production deployment, you’ll need to determine how you want to manage certificates in production.

  1. Deploy the Certificate Manager in your cluster:

    $ /path/to/forgeops/bin/certmanager-deploy.sh
    
    customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
    namespace/cert-manager created
    serviceaccount/cert-manager-cainjector created
    serviceaccount/cert-manager created
    serviceaccount/cert-manager-webhook created
    clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
    . . .
    service/cert-manager created
    service/cert-manager-webhook created
    deployment.apps/cert-manager-cainjector created
    deployment.apps/cert-manager created
    deployment.apps/cert-manager-webhook created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    deployment.extensions/cert-manager-webhook condition met
    clusterissuer.cert-manager.io/default-issuer created
    secret/certmanager-ca-secret created
  2. Check the status of the pods in the cert-manager namespace until all the pods are ready:

    $ kubectl get pods --namespace cert-manager
    
    NAME                                              READY STATUS    RESTARTS AGE
    cert-manager-6d5fd89bdf-khj5w                     1/1   Running   0        3m57s
    cert-manager-cainjector-7d47d59998-h5b48          1/1   Running   0        3m57s
    cert-manager-webhook-6559cc8549-8vdtp             1/1   Running   0        3m56s
Deploy Prometheus, Grafana, and Alertmanager

The CDM is a reference implementation, and is not for production use. Use Prometheus, Grafana, and Alertmanager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different monitoring, reporting, and alerting tooling, if you like. When you plan your production deployment, you’ll need to determine how you want to implement monitoring, alerts, and reporting in your environment.

  1. Deploy Prometheus, Grafana, and Alertmanager in your cluster. You can safely ignore info: skipping unknown hook: "crd-install" messages:

    $ /path/to/forgeops/bin/prometheus-deploy.sh
    namespace/monitoring created
    "stable" has been added to your repositories
    Release "prometheus-operator" does not exist. Installing it now.
    manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
    . . .
    NAME: prometheus-operator
    LAST DEPLOYED: Mon Aug 10 16:47:45 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    . . .
    customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
    Release "forgerock-metrics" does not exist. Installing it now.
    NAME: forgerock-metrics
    LAST DEPLOYED: Mon Aug 10 16:48:27 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  2. Check the status of the pods in the monitoring namespace until all the pods are ready:

    $ kubectl get pods --namespace monitoring
    NAME                                                READY STATUS    RESTARTS AGE
    alertmanager-prometheus-operator-alertmanager-0     2/2   Running   0        5m8s
    prometheus-operator-grafana-7b8598c98f-glhmn        2/2   Running   0        5m16s
    prometheus-operator-kube-state-metrics-. . .        1/1   Running   0        5m16s
    prometheus-operator-operator-55966c69dd-76v46       2/2   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-82r4b  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-85ns8  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-kgwln  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-rrwrx  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-vl8f9  1/1   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-xmjrf  1/1   Running   0        5m16s
    . . .
    prometheus-prometheus-operator-prometheus-0         3/3   Running   1        4m57s

Cloud Storage for DS Backup

DS data backup is stored in cloud storage. Before you deploy CDM, create an S3 storage bucket to store the DS data backup. Then configure the forgeops artifacts with the location and credentials for the S3 bucket, such as s3://my-backup-bucket:

Set the location and credentials for the S3 bucket
  1. Change to the /path/to/forgeops/kustomize/base/7.0/ds/base/ directory.

  2. Run the following command to create the cloud-storage-credentials.yaml file:

    $ kubectl create secret generic cloud-storage-credentials \
     --from-literal=AWS_ACCESS_KEY_ID=my-access-key \
     --from-literal=AWS_SECRET_ACCESS_KEY=my-secret-access-key \
     --dry-run=client -o yaml > cloud-storage-credentials.yaml
  3. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

  4. Edit the kustomization.yaml file and set the DSBACKUP_DIRECTORY parameter to the S3 link of the DS data backup bucket.

    For example: DSBACKUP_DIRECTORY s3://my-backup-bucket.

docker push Setup

In the deployment environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your Amazon EKS cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to be able to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker registry available to the shared cluster.

  • Skaffold needs to know the location of the Docker registry.

Perform the following procedure to enable Skaffold to push Docker images to a registry accessible to your cluster:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. Obtain your 12-digit AWS account ID. You’ll need it when you run subsequent steps in this procedure.

  3. Log in to Amazon ECR:

    $ aws ecr get-login-password | docker login --username AWS  \
     --password-stdin my-account-id.dkr.ecr.my-region.amazonaws.com
    Login Succeeded

    ECR login sessions expire after 12 hours. Because of this, you’ll need to log in again whenever your login session expires [25].

  4. Run the kubectx command to obtain the Kubernetes context.

  5. Configure Skaffold with your Docker registry location and the Kubernetes context:

    $ skaffold config \
     set default-repo my-account-id.dkr.ecr.my-region.amazonaws.com/forgeops \
     -k my-kubernetes-context
    set value default-repo to my-account-id.dkr.ecr.my-region.amazonaws.com/forgeops
    for context my-kubernetes-context

You’re now ready to deploy the CDM.

Environment Setup: AKS

Before deploying the CDM, you must:

Windows users

ForgeRock supports deploying the CDK and CDM using macOS and Linux. If you have a Windows computer, you’ll need to create a Linux VM. We tested using the following configurations:

  • Hypervisor: Hyper-V, VMWare Player, or VMWare Workstation

  • Guest OS: Ubuntu 19.10 with 12 GB memory and 60 GB disk space

  • Nested virtualization enabled in the Linux VM.

Perform all the procedures in this documentation within the Linux VM. In this documentation, the local computer refers to the Linux VM for Windows users.

Third-Party Software

Before installing the CDM, you must obtain non-ForgeRock software and install it on your local computer.

ForgeRock recommends that you install third-party software using Homebrew on macOS and Linux.

The versions listed in the following table have been validated for deploying the CDM on Microsoft Azure. Earlier and later versions will probably work. If you want to try using versions that are not in the tables, it is your responsibility to validate them.

Install the following third-party software:

Software                                 Version   Homebrew package

Docker Desktop[26]                       2.3.0.3   docker (cask)[27]
Kubernetes client (kubectl)              1.18.6    kubernetes-cli
Skaffold                                 1.12.1    skaffold
Kustomize                                3.8.1     kustomize
Kubernetes context switcher (kubectx)    0.9.1     kubectx
Pulumi                                   2.7.1     Do not use brew to install Pulumi[28].
Helm                                     3.2.4_1   kubernetes-helm
Gradle                                   6.5.1     gradle
Node.js                                  12.18.3   node@12 (CDM requires Node.js version 12.)
Azure Command Line Interface             2.10.1    azure-cli

Azure Subscription Setup

The CDM runs in a Kubernetes cluster in an Azure subscription.

This page outlines how the Cloud Deployment Team created and configured our Azure subscription before we created our cluster.

To replicate Azure subscription creation and configuration, follow this procedure:

Configure an Azure Subscription for the CDM
  1. Assign the following roles to users who will deploy the CDM:

    • Azure Kubernetes Service Cluster Admin Role

    • Azure Kubernetes Service Cluster User Role

    • Contributor

    • User Access Administrator

    Remember, the CDM is a reference implementation, and is not for production use. The roles you assign in this step are suitable for the CDM. When you create a project plan, you’ll need to determine which Azure roles are required.
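
    If you prefer the Azure CLI to the portal, a minimal sketch of assigning one of these roles follows; the user name and subscription ID are placeholders, and you would repeat the command for each role:

    $ az role assignment create --assignee my-user-name@example.com \
       --role "Azure Kubernetes Service Cluster Admin Role" \
       --scope /subscriptions/my-subscription-id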

  2. Log in to Azure services as a user with the roles you assigned in the previous step:

    $ az login --username my-user-name
  3. View your current subscription ID:

    $ az account show
  4. If necessary, set the current subscription ID to the one you will use to deploy the CDM:

    $ az account set --subscription my-subscription-id
  5. Choose the Azure region in which you will deploy the CDM. The Cloud Deployment Team deployed the CDM in the eastus region.

    To use any other region, note the following:

    • The region must support AKS.

    • The subscription, resource groups, and resources you create for your AKS cluster must reside in the same region.

  6. As of this writing, the CDM uses Standard FSv2 Family vCPUs for the DS node pool. Make sure that your subscription has an adequate quota for this vCPU type in the region where you’ll deploy the CDM. If the quota is lower than 192 CPUs, request a quota increase to 192 CPUs (or higher) before you create the cluster for the CDM.

    When you create a project plan, you’ll need to determine which CPU types are needed, and, possibly, increase quotas.
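
    One way to review your current vCPU quotas for a region is with the Azure CLI; a minimal sketch (look for the Standard FSv2 Family vCPUs row in the output):

    $ az vm list-usage --location eastus --output table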

  7. DS data backup is stored in cloud storage. Before you deploy the CDM, create an Azure Blob Storage container to store the DS data backup as blobs, and note its access link.

    For more information on how to create and use Azure Blob Storage, see Quickstart: Create, download, and list blobs with Azure CLI.
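
    A minimal sketch of creating the storage account and container with the Azure CLI; the account name, container name, and resource group are illustrative assumptions:

    $ az storage account create --name mydsbackupstorage \
       --resource-group my-resource-group --location eastus
    $ az storage container create --account-name mydsbackupstorage \
       --name ds-backup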

  8. The CDM uses Azure Container Registry (ACR) for storing Docker images.

    If you do not have a container registry in your subscription, create one.
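
    For example, a minimal sketch using the Azure CLI; the registry name and resource group are illustrative assumptions:

    $ az acr create --resource-group my-resource-group \
       --name myforgeopsregistry --sku Standard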

Kubernetes Cluster Creation and Setup

Now that you’ve installed third-party software on your local computer and set up an Azure subscription, you’re ready to create a Kubernetes cluster for the CDM in your project.

The Cloud Deployment Team used Pulumi software to create the CDM cluster. This section describes how the team used Pulumi to create and set up a Kubernetes cluster that can run the CDM.

It covers the following topics:

forgeops Repository

Before you can deploy the CDK or the CDM, you must first get the forgeops repository:[29]

Obtain the forgeops Repository
  1. Clone the forgeops repository:

    The forgeops repository is a public Git repository. You do not need credentials to clone it.

  2. Check out the 2020.08.07-ZucchiniRicotta.1 release tag, creating a branch named my-branch:

    $ cd forgeops
    $ git checkout tags/2020.08.07-ZucchiniRicotta.1 -b my-branch
    Switched to a new branch 'my-branch'

Node.js Dependencies

The cluster directory in the forgeops repository contains Pulumi scripts for creating the CDM cluster.

The Pulumi scripts are written in TypeScript and run in the Node.js environment. Before running the scripts, you’ll need to install the Node.js dependencies listed in the /path/to/forgeops/cluster/pulumi/package.json file as follows:

Install Node.js Dependencies
  1. Change to the /path/to/forgeops/cluster/pulumi directory.

  2. Remove any previously installed Node.js dependencies:

    $ rm -rf node_modules
  3. Install dependencies:

    $ npm install
    
    . . .
    
    added 292 packages from 447 contributors and audited 295 packages in 17.169s
    . . .
    found 0 vulnerabilities

Kubernetes Cluster Creation

After cloning the forgeops repository and installing Node.js dependencies, you’re ready to create the Kubernetes cluster for the CDM.

This section outlines how the Cloud Deployment Team created our cluster. The cluster has the following characteristics:

  • prod, nginx, and cert-manager namespaces created

  • NGINX ingress controller deployed

  • Certificate Manager deployed

  • Prometheus and Grafana monitoring tools deployed

Perform the following procedures to replicate CDM cluster creation:

Create a Kubernetes Cluster for the CDM
  1. Obtain the following information from your AKS administrator:

    • The Azure region and zones in which you will create the cluster. The CDM is deployed within a single region.

      The Cloud Deployment Team deployed the CDM in the eastus region. If you want to validate your deployment against the benchmarks in Performance Benchmarks, use this region when you deploy the CDM, regardless of your actual location.

      However, if you would like to deploy the CDM in a different Azure region, you may do so. The region must support Standard FSv2 Family vCPUs.

    • The name of your Azure Container Registry.

    • The name of the resource group associated with your Azure Container Registry.

  2. ForgeRock provides Pulumi scripts to use for cluster creation. Use them when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore a different infrastructure-as-code solution, if you like. When you create a project plan, you’ll need to identify your organization’s preferred infrastructure-as-code solution, and create your own cluster creation automation scripts, if necessary.

    Store your Pulumi passphrase in an environment variable:

    $ export PULUMI_CONFIG_PASSPHRASE=my-passphrase

    The default Pulumi passphrase is password.

  3. Log in to Pulumi using the local option or the Pulumi service.

    For example, to log in using the local option:

    $ pulumi login -l

    As of this writing, issues have been encountered when using cloud provider backends for storing Pulumi stacks, a preview feature. Because of this, do not specify a cloud provider backend when logging in to Pulumi.

  4. Create infrastructure components to support your cluster:

    1. Change to the directory that contains the Azure infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/azure/infra
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/azure/infra. If you are not in this directory, Pulumi will create the infrastructure stack incorrectly.

    3. Initialize the infrastructure stack:

      $ pulumi stack init azure-infra

      Note that initializing a Pulumi stack also selects the stack, so you don’t need to explicitly execute the pulumi stack select command.

    4. (Optional) If you’re deploying the CDM in a region other than eastus, configure your infrastructure stack with your region. Use the region you obtained in Step 1:

      $ pulumi config set azure-infra:location my-region
    5. Configure the Azure resource group for Azure Container Registry. Use the resource group you obtained in Step 1:

      $ pulumi config set azure-infra:acrResourceGroupName my-resource-group
    6. Create the infrastructure components:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes if you want to proceed.

    7. To verify that Pulumi created the infrastructure components, log in to the Azure console. Display the resource groups. You should see a new resource group named azure-infra-ip-resource-group.

  5. Create your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/azure/aks
    2. Verify that your current working directory is /path/to/forgeops/cluster/pulumi/azure/aks. If you are not in this directory, Pulumi will create the CDM stack incorrectly.

    3. Initialize the CDM stack:

      $ pulumi stack init aks-medium
    4. (Optional) If you’re deploying the CDM in a region other than eastus, configure your infrastructure stack with your region. Use the region you obtained in Step 1:

      $ pulumi config set aks:location my-aks-region
    5. Configure the Azure resource group for Azure Container Registry. Use the resource group you obtained in Step 1:

      $ pulumi config set aks:acrResourceGroupName my-resource-group
    6. Create the cluster:

      $ pulumi up

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this update?

      Review the operation, and then select yes if you want to proceed.

    7. Make a note of the static IP address that Pulumi reserved. The address appears in the output from the pulumi up command. Look for output similar to:

      staticIpAddress    : "123.134.145.156"

      You’ll need the IP address when you deploy the NGINX ingress controller.

    8. Verify that Pulumi created the cluster using the Azure console.

  6. When Pulumi creates a Kubernetes cluster, it does not write cluster configuration information to the default Kubernetes configuration file, $HOME/.kube/config. Configure your local computer’s Kubernetes settings so that the kubectl command can access your new cluster:

    1. Verify that the /path/to/forgeops/cluster/pulumi/azure/aks directory is still your current working directory.

    2. Create a kubeconfig file with your new cluster’s configuration in the current working directory:

      $ pulumi stack output kubeconfig > kubeconfig
    3. Configure Kubernetes to get cluster information from the union of the new kubeconfig file and the default Kubernetes configuration file:

      $ export KUBECONFIG=$PWD/kubeconfig:$HOME/.kube/config
    4. Run the kubectx command.

      The output should contain your newly created cluster and any existing clusters.

      The current context should be set to the context for your new cluster.

  7. Check the status of the pods in your cluster until all the pods are ready:

    1. List all the pods in the cluster:

      $ kubectl get pods --all-namespaces
      NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
      kube-system   azure-cni-networkmonitor-gsbkg   1/1     Running   0          3m38s
      kube-system   azure-cni-networkmonitor-k26mc   1/1     Running   0          3m40s
      kube-system   azure-cni-networkmonitor-ng4qn   1/1     Running   0          8m40s
      . . .
      kube-system   azure-ip-masq-agent-4kkpg        1/1     Running   0          8m40s
      kube-system   azure-ip-masq-agent-6r699        1/1     Running   0          8m40s
      . . .
      kube-system   coredns-698c77c5d7-k6q9h         1/1     Running   0          9m
      kube-system   coredns-698c77c5d7-knwwm         1/1     Running   0          9m
      kube-system   coredns-autoscaler-. . .         1/1     Running   0          9m
      . . .
      kube-system   kube-proxy-5ztxd                 1/1     Running   0          8m23s
      kube-system   kube-proxy-6th8b                 1/1     Running   0          9m6s
      . . .
      kube-system   metrics-server-69df9f75bf-fc4pn  1/1     Running   1          20m
      kube-system   tunnelfront-5b56b76594-6wzps     2/2     Running   1          20m
    2. Review the output. Deployment is complete when:

      • The READY column indicates that all of a pod’s containers are available. The entry in the READY column represents [number of ready containers/total number of containers].

      • All entries in the STATUS column indicate Running or Completed.

    3. If necessary, continue to query your cluster’s status until all the pods are ready.

Deploy an NGINX Ingress Controller

Before you perform this procedure, you must have initialized your CDM cluster by performing the steps in Create a Kubernetes Cluster for the CDM. If you did not set up your cluster using this technique, the cluster might be missing some required configuration.

Use the NGINX ingress controller when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore deploying a different ingress controller.

Remember, the CDM is a reference implementation and not for production use. When you create a project plan, you’ll need to determine which ingress controller to use in production.
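
If you no longer have the static IP address that Pulumi reserved during cluster creation, you can retrieve it from the Pulumi stack output. This optional check assumes that your current working directory is /path/to/forgeops/cluster/pulumi/azure/aks, that the aks-medium stack is selected, and that the output is named staticIpAddress, as shown in the cluster creation output:

$ pulumi stack output staticIpAddress
123.134.145.156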

  1. Deploy the NGINX ingress controller in your cluster. For static-ip-address, specify the IP address obtained when you performed Step 5.g of Create a Kubernetes Cluster for the CDM:

    $ /path/to/forgeops/bin/ingress-controller-deploy.sh \
     -a -i static-ip-address -r azure-infra-ip-resource-group
    namespace/nginx created
    Release "nginx-ingress" does not exist. Installing it now.
    NAME: nginx-ingress
    . . .
  2. Check the status of the services in the nginx namespace to note the external IP address for the ingress controller:

    $ kubectl get services --namespace nginx
    NAME                          TYPE          CLUSTER-IP    EXTERNAL-IP    PORT(S)  AGE
    nginx-ingress-controller      LoadBalancer  10.0.131.150  52.152.192.41  80…​    23s
    nginx-ingress-default-backend ClusterIP     10.0.146.216  none           80…​    23s
  3. You’ll access ForgeRock Identity Platform services through the ingress controller. The URLs you’ll use must be resolvable from your local computer.

    Add an entry similar to the following to your /etc/hosts file:

    ingress-ip-address prod.iam.example.com

    For ingress-ip-address, specify the external IP of the ingress controller service in the previous command.
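
    For example, a minimal way to append the entry, assuming the sample external IP address shown in the output above (substitute your own ingress IP address):

    $ echo "52.152.192.41 prod.iam.example.com" | sudo tee -a /etc/hosts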

Deploy Certificate Manager

Use cert-manager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different certificate management tooling, if you like. Remember, the CDM is not for production use. When you create a project plan, you’ll need to determine how you want to manage certificates in production.

  1. Deploy the Certificate Manager in your cluster:

    $ /path/to/forgeops/bin/certmanager-deploy.sh
    customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
    namespace/cert-manager created
    serviceaccount/cert-manager-cainjector created
    serviceaccount/cert-manager created
    serviceaccount/cert-manager-webhook created
    clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
    . . .
    service/cert-manager created
    service/cert-manager-webhook created
    deployment.apps/cert-manager-cainjector created
    deployment.apps/cert-manager created
    deployment.apps/cert-manager-webhook created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    deployment.extensions/cert-manager-webhook condition met
    clusterissuer.cert-manager.io/default-issuer created
    secret/certmanager-ca-secret created
  2. Check the status of the pods in the cert-manager namespace until all the pods are ready:

    $ kubectl get pods --namespace cert-manager
    NAME                                              READY STATUS    RESTARTS AGE
    cert-manager-6d5fd89bdf-khj5w                     1/1   Running   0        3m57s
    cert-manager-cainjector-7d47d59998-h5b48          1/1   Running   0        3m57s
    cert-manager-webhook-6559cc8549-8vdtp             1/1   Running   0        3m56s
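
    If you prefer not to poll manually, the standard kubectl wait command can block until the cert-manager pods are ready. This is an optional convenience, not part of the documented procedure:

    $ kubectl wait --for=condition=Ready pod --all --namespace cert-manager --timeout=300s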

Deploy Prometheus, Grafana, and Alertmanager

Use Prometheus, Grafana, and Alertmanager when you deploy the CDM. After you’ve finished deploying the CDM, you can use the CDM as a sandbox to explore different monitoring, reporting, and alerting tooling. When you create a project plan, you’ll need to determine how you want to implement monitoring, alerts, and reporting in your production environment.

  1. Deploy Prometheus, Grafana, and Alertmanager in your cluster. You can safely ignore info: skipping unknown hook: "crd-install" messages:

    $ /path/to/forgeops/bin/prometheus-deploy.sh
    namespace/monitoring created
    "stable" has been added to your repositories
    Release "prometheus-operator" does not exist. Installing it now.
    manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
    . . .
    NAME: prometheus-operator
    LAST DEPLOYED: Mon Aug 17 16:47:45 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    . . .
    customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
    customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
    Release "forgerock-metrics" does not exist. Installing it now.
    NAME: forgerock-metrics
    LAST DEPLOYED: Mon Aug 17 16:48:27 2020
    NAMESPACE: monitoring
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  2. Check the status of the pods in the monitoring namespace until all the pods are ready:

    $ kubectl get pods --namespace monitoring
    NAME                                                  READY STATUS    RESTARTS AGE
    alertmanager-prometheus-operator-alertmanager-0       2/2   Running   0        5m8s
    prometheus-operator-grafana-7b8598c98f-glhmn          2/2   Running   0        5m16s
    prometheus-operator-kube-state-metrics-. . .          1/1   Running   0        5m16s
    prometheus-operator-operator-55966c69dd-76v46         2/2   Running   0        5m16s
    prometheus-operator-prometheus-node-exporter-2tc48    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-4p4mr    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-4vz75    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-5vbnw    1/1   Running   0        3m32s
    prometheus-operator-prometheus-node-exporter-9vflt    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-bhmzn    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-hdjqm    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-hxwzw    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-kbrm9    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-ktpfs    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-sm85n    1/1   Running   0        3m31s
    prometheus-operator-prometheus-node-exporter-xntgk    1/1   Running   0        3m31s
    prometheus-prometheus-operator-prometheus-0           3/3   Running   1        4m57s

Cloud Storage for DS Backup

DS data backup is stored in cloud storage. Before you deploy the platform on Azure, you should have set up an Azure Blob Storage container to store the DS data backup. Then configure the forgeops artifacts with the location and credentials for the container:

Set Location and Credentials for Azure Blob Storage Container
  1. Get the access link to the Azure Blob Storage container that you plan to use for DS backup.

  2. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

  3. Edit the kustomization.yaml file and set the DSBACKUP_DIRECTORY parameter to the location of the Azure Blob Storage container that holds the DS data backup.

  4. Change to the /path/to/forgeops/kustomize/base/7.0/ds/base directory.

  5. Run the following command to create the cloud-storage-credentials.yaml file:

    $ kubectl create secret generic cloud-storage-credentials \
     --from-literal=AZURE_ACCOUNT_KEY_ID=my-account-key \
     --from-literal=AZURE_ACCOUNT_NAME=my-account-name \
     --dry-run=client -o yaml > cloud-storage-credentials.yaml

docker push Setup

In the deployment environment you’re setting up, Skaffold builds Docker images using the Docker software you’ve installed on your local computer. After it builds the images, Skaffold pushes them to a Docker registry available to your AKS cluster. With the images on the remote Docker registry, Skaffold can orchestrate the ForgeRock Identity Platform, creating containers from the Docker images.

For Skaffold to push the Docker images:

  • Docker must be running on your local computer.

  • Your local computer needs credentials that let Skaffold push the images to the Docker repository available to your cluster.

  • Skaffold needs to know the location of the Docker repository.

Perform the following procedure to let Skaffold push Docker images to a registry accessible to your cluster:

Set up Your Local Computer to Push Docker Images
  1. If it’s not already running, start Docker on your local computer. For more information, see the Docker documentation.

  2. If you don’t already have the name of the container registry that will hold ForgeRock Docker images, obtain it from your Azure administrator.

  3. Log in to your container registry:

    $ az acr login --name registry-name

    Azure repository logins expire after 4 hours. Because of this, you’ll need to log in to ACR whenever your login session expires [30].

  4. Run the kubectx command to obtain the Kubernetes context.

  5. Configure Skaffold with your Docker repository location and Kubernetes context:

    $ skaffold config \
     set default-repo registry-name.azurecr.io/cdm -k my-kubernetes-context

    For example:

    $ skaffold config set default-repo my-container-registry.azurecr.io/cdm -k aks
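
    To verify the configuration, you can list Skaffold’s settings for your Kubernetes context. This is an optional check; the context name aks is just the example used above:

    $ skaffold config list -k aks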

You’re now ready to deploy the CDM.

CDM Deployment

Now that you’ve set up your deployment environment following the instructions in the Environment Setup section for your cloud platform, you’re ready to deploy the CDM. This page shows you how to deploy the CDM in your Kubernetes cluster using artifacts from the forgeops repository.

Perform the following procedure:

Deploy the CDM
  1. Initialize the staging area for configuration profiles with the canonical CDK configuration profile [31] for the ForgeRock Identity Platform:

    $ cd /path/to/forgeops/bin
    $ ./config.sh init --profile cdk --version 7.0

    The config.sh init command copies the canonical CDK configuration profile from the master directory for configuration profiles to the staging area:

    The staging area is initialized from the canonical CDK profile.

    For more information about the management of ForgeRock Identity Platform configuration profiles in the forgeops repository, see Configuration Profiles.

  2. Change to the /path/to/forgeops directory and execute the skaffold run command:

    $ cd /path/to/forgeops
    $ skaffold run -p medium
  3. Make the prod namespace your current namespace:

    $ kubens prod
  4. Check the status of the pods in the prod namespace until all the pods are ready:

    1. Run the kubectl get pods command:

      $ kubectl get pods
      NAME                         READY   STATUS     RESTARTS   AGE
      admin-ui-6989d76f87-qwfxz    1/1     Running    0          1m1s
      am-9758bc5fd-hndsg           1/1     Running    0          2m37s
      am-9758bc5fd-qr124           1/1     Running    0          3m51s
      am-9758bc5fd-a6ccs           1/1     Running    0          3m51s
      amster-f7dpg                 0/1     Completed  0          3m
      ds-cts-0                     1/1     Running    0          2m36s
      ds-cts-1                     1/1     Running    0          114s
      ds-cts-2                     1/1     Running    0          70s
      ds-idrepo-0                  1/1     Running    0          2m36s
      ds-idrepo-1                  1/1     Running    0          112s
      ds-idrepo-2                  1/1     Running    0          74s
      end-user-ui-579d784b4-phk2v  1/1     Running    0          1m1s
      forgeops-secrets-k82w7       0/1     Completed  0          2m35s
      idm-0                        1/1     Running    0          2m28s
      idm-1                        1/1     Running    0          4m02s
      ldif-importer-l2d04          0/1     Completed  0          2m10s
      login-ui-78f44b644f-6srjg    1/1     Running    0          1m
    2. Review the output. Deployment is complete when:

      • All entries in the STATUS column indicate Running or Completed.

      • The READY column indicates that all of a pod’s containers are available. The entry in the READY column represents [number of ready containers/total number of containers].

      • Three AM and two IDM pods are present.

      • The initial loading jobs (amster, forgeops-secrets, and ldif-importer) have reached Completed status.

    3. If necessary, continue to query your deployment’s status until all the pods are ready.
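
    If you prefer to watch pod status continuously instead of re-running the command, kubectl supports a --watch option (an optional convenience):

    $ kubectl get pods --watch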

UI and API Access

This page shows you how to access and monitor the ForgeRock Identity Platform components that make up the CDM.

AM and IDM are configured for access through the CDM cluster’s Kubernetes ingress controller. You can access these components using their normal interfaces:

  • For AM, the console and REST APIs.

  • For IDM, the Admin UI and REST APIs.

DS cannot be accessed through the ingress controller, but you can use Kubernetes methods to access the DS pods.

For more information about how AM and IDM are configured in the CDM, see Configuration in the forgeops repository’s top-level README file.

AM Services

Access the AM console and REST APIs as follows:

Access the AM Console
  1. Obtain the amadmin user’s password:

    $ cd /path/to/forgeops/bin
    $ ./print-secrets.sh amadmin
  2. Open a new window or tab in a web browser.

  3. Go to https://prod.iam.example.com/platform.

    The Kubernetes ingress controller handles the request, routing it to the login-ui pod.

    The login UI prompts you to log in.

  4. Log in as the amadmin user.

    The ForgeRock Identity Platform UI appears in the browser.

  5. Select Native Consoles > Access Management.

    The AM console appears in the browser.

Access the AM REST APIs
  1. Start a terminal window session.

  2. Run a curl command to verify that you can access the REST APIs through the ingress controller. For example:

    $ curl \
     --insecure \
     --request POST \
     --header "Content-Type: application/json" \
     --header "X-OpenAM-Username: amadmin" \
     --header "X-OpenAM-Password: 179rd8en9rffa82rcf1qap1z0gv1hcej" \
     --header "Accept-API-Version: resource=2.0" \
     --data "{}" \
     'https://prod.iam.example.com/am/json/realms/root/authenticate'
    
    {
        "tokenId":"AQIC5wM2…​",
        "successUrl":"/am/console",
        "realm":"/"
    }

IDM Services

Access the IDM Admin UI and REST APIs as follows:

Access the IDM Admin UI
  1. Obtain the amadmin user’s password:

    $ cd /path/to/forgeops/bin
    $ ./print-secrets.sh amadmin
  2. Open a new window or tab in a web browser.

  3. Go to https://prod.iam.example.com/platform.

    The Kubernetes ingress controller handles the request, routing it to the login-ui pod.

    The login UI prompts you to log in.

  4. Log in as the amadmin user.

    The ForgeRock Identity Platform UI appears in the browser.

  5. Select Native Consoles > Identity Management.

    The IDM Admin UI appears in the browser.

Access the IDM REST APIs
  1. Start a terminal window session.

  2. If you haven’t already done so, get the amadmin user’s password using the print-secrets.sh command.

  3. AM authorizes IDM REST API access using the OAuth 2.0 authorization code flow. The CDM comes with the idm-admin-ui client, which is configured to let you get a bearer token using this OAuth 2.0 flow. You’ll use the bearer token in the next step to access the IDM REST API:

    1. Get a session token for the amadmin user:

      $ curl \
       --request POST \
       --insecure \
       --header "Content-Type: application/json" \
       --header "X-OpenAM-Username: amadmin" \
       --header "X-OpenAM-Password: vr58qt11ihoa31zfbjsdxxrqryfw0s31" \
       --header "Accept-API-Version: resource=2.0, protocol=1.0" \
       'https://prod.iam.example.com/am/json/realms/root/authenticate'
      {
       "tokenId":"AQIC5wM…​TU3OQ*",
       "successUrl":"/am/console",
       "realm":"/"}
    2. Get an authorization code. Specify the ID of the session token that you obtained in the previous step in the --Cookie parameter:

      $ curl \
       --dump-header - \
       --insecure \
       --request GET \
       --Cookie "iPlanetDirectoryPro=AQIC5wM…​TU3OQ*" \
       "https://prod.iam.example.com/am/oauth2/realms/root/authorize?redirect_uri=https://prod.iam.example.com/platform/appAuthHelperRedirect.html&client_id=idm-admin-ui&scope=openid&response_type=code&state=abc123"
      HTTP/2 302
      server: nginx/1.17.10
      date: Tue, 21 Jul 2020 16:54:20 GMT
      content-length: 0
      location: https://prod.iam.example.com/platform/appAuthHelperRedirect.html
       ?code=3cItL9G52DIiBdfXRngv2_dAaYM&iss=http://prod.iam.example.com:80/am/oauth2&state=abc123
       &client_id=idm-admin-ui
      set-cookie: route=1595350461.029.542.7328; Path=/am; Secure; HttpOnly
      x-frame-options: SAMEORIGIN
      x-content-type-options: nosniff
      cache-control: no-store
      pragma: no-cache
      set-cookie: OAUTH_REQUEST_ATTRIBUTES=DELETED; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Path=/; HttpOnly
    3. Exchange the authorization code for an access token. Specify the access code that you obtained in the previous step in the code URL parameter:

      $ curl --request POST \
       --insecure \
       --data "grant_type=authorization_code" \
       --data "code=3cItL9G52DIiBdfXRngv2_dAaYM" \
       --data "client_id=idm-admin-ui" \
       --data "redirect_uri=https://prod.iam.example.com/platform/appAuthHelperRedirect.html" \
       "https://prod.iam.example.com/am/oauth2/realms/root/access_token" 
      {
       "access_token":"oPzGzGFY1SeP2RkI-ZqaRQC1cDg",
       "scope":"openid",
       "id_token":"eyJ0eXAiOiJKV..sO4HYqlQ",
       "token_type":"Bearer",
       "expires_in":239
      }
  4. Run a curl command to verify that you can access the openidm/config REST endpoint through the ingress controller. Use the access token returned in the previous step as the bearer token in the authorization header.

    The following example command provides information about the IDM configuration:

    $ curl \
     --insecure \
     --request GET \
     --header "Authorization: Bearer oPzGzGFY1SeP2RkI-ZqaRQC1cDg" \
     --data "{}" \
     https://prod.iam.example.com/openidm/config
    {
     "_id":"",
     "configurations":
      [
       {
        "_id":"ui.context/admin",
        "pid":"ui.context.4f0cb656-0b92-44e9-a48b-76baddda03ea",
        "factoryPid":"ui.context"
        },
        . . .
       ]
    }

Directory Services

The DS pods in the CDM are not exposed outside of the cluster. If you need to access one of the DS pods, use a standard Kubernetes method:

  • Execute shell commands in DS pods using the kubectl exec command.

  • Forward a DS pod’s LDAPS port (1636) to your local computer. Then you can run LDAP CLI commands, for example ldapsearch (see the example at the end of this section). You can also use an LDAP editor such as Apache Directory Studio to access the directory.

For all CDM directory pods, the directory superuser DN is uid=admin. Obtain this user’s password by running the print-secrets.sh dsadmin command.
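
For example, a minimal sketch of the port-forwarding approach, assuming the prod namespace and the ds-idrepo-0 pod:

$ kubectl port-forward ds-idrepo-0 1636:1636 --namespace prod

With the port forwarded, you can point ldapsearch or an LDAP editor at ldaps://localhost:1636 and bind as uid=admin with the password obtained from the print-secrets.sh dsadmin command.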

CDM Monitoring

Here are procedures to access Grafana dashboards and the Prometheus web UI:

Access Grafana Dashboards

For information about the Grafana UI, see the Grafana documentation.

  1. Forward port 3000 on your local computer to port 3000 on the Grafana web server:

    $ kubectl \
     port-forward \
     $(kubectl get pods --selector=app.kubernetes.io/name=grafana \
     --output=jsonpath="{.items..metadata.name}" --namespace=monitoring) \
     3000 --namespace=monitoring
    
    Forwarding from 127.0.0.1:3000 → 3000
    Forwarding from [::1]:3000 → 3000
  2. In a web browser, go to http://localhost:3000 to start the Grafana user interface.

  3. Log in to Grafana as the admin user. The password is password.

  4. When you’re done using the Grafana UI, press Ctrl+C in the terminal window to stop port forwarding.

Access the Prometheus Web UI

For information about the Prometheus web UI, see the Prometheus documentation.

  1. Forward port 9090 on your local computer to port 9090 on the Prometheus web server:

    $ kubectl \
     port-forward \
     $(kubectl get pods --selector=app=prometheus \
     --output=jsonpath="{.items..metadata.name}" --namespace=monitoring) \
     9090 --namespace=monitoring
    
    Forwarding from 127.0.0.1:9090 → 9090
    Forwarding from [::1]:9090 → 9090
  2. In a web browser, go to http://localhost:9090.

    The Prometheus web UI appears in the browser.

  3. When you’re done using the Prometheus web UI, press Ctrl+C in the terminal window to stop port forwarding.

For a description of the CDM monitoring architecture and information about how to customize CDM monitoring, see Monitoring Customizations.

Performance Benchmarks

CDM benchmarks have not yet been run for ForgeRock Identity Platform 7.

For CDM benchmarks of platform version 6.5, see the Benchmarking CDM Performance section in the 6.5 Cloud Deployment Model Cookbook for your cloud platform.

CDM Removal: GKE

When you’re done working with the CDM, you can remove it from Google Cloud Platform by following this procedure:

Remove the CDM
  1. Run the skaffold delete command to shut down your deployment and remove it from your namespace.

  2. Log in to Pulumi using the local option or the Pulumi service. Be sure to log in the same way that you logged in when you created your cluster in Step 3 of Create a Kubernetes Cluster for the CDM.

  3. Remove your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/gcp/gke
    2. Select the Pulumi stack that you used when you created your cluster:

      $ pulumi stack select gke-medium
    3. Delete your cluster:

      $ pulumi destroy

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this destroy?

      Review the operation, and then select yes if you want to proceed.

    4. To verify that Pulumi removed the cluster, log in to the Google Cloud console and select the Kubernetes Engine option.

      You should not see the CDM cluster in the list of Kubernetes clusters in your Google Cloud project.

  4. Remove networking infrastructure components:

    1. Change to the directory that contains the Google Cloud infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/gcp/infra
    2. Select the gcp-infra Pulumi stack:

      $ pulumi stack select gcp-infra
    3. Delete the infrastructure components:

      $ pulumi destroy

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this destroy?

      Review the operation, and then select yes if you want to proceed.

    4. To verify that Pulumi removed the infrastructure components, log in to the Google Cloud console. Select the VPC Networks option.

      The network formerly used for CDM infrastructure should no longer be listed.

  5. Remove the CDM cluster from your local computer’s Kubernetes settings:

    1. Unset the KUBECONFIG environment variable:

      $ unset KUBECONFIG
    2. Run the kubectx command.

      The Kubernetes context for the CDM cluster should not appear in the kubectx command output.

CDM Removal: EKS

When you’re done working with the CDM, you can remove it from Amazon Web Services by following this procedure:

Remove the CDM
  1. Run the skaffold delete command to shut down your deployment and remove it from your namespace.

  2. Log in to Pulumi using the local option or the Pulumi service. Be sure to log in the same way that you logged in when you created your cluster in Step 3 of Create a Kubernetes Cluster for the CDM.

  3. Remove your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/aws/eks
    2. Select the Pulumi stack that you used when you created your cluster:

      $ pulumi stack select eks-medium
    3. Delete your cluster:

      $ pulumi destroy

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this destroy?

      Review the operation, and then select yes if you want to proceed.

    4. To verify that Pulumi removed the cluster, log in to the AWS console and select Services > EKS.

      You should not see the CDM cluster in the list of Kubernetes clusters.

  4. Remove networking infrastructure components:

    1. Change to the directory that contains the AWS infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/aws/infra
    2. Select the aws-infra Pulumi stack:

      $ pulumi stack select aws-infra
    3. Delete the infrastructure components:

      $ pulumi destroy

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this destroy?

      Review the operation, and then select yes if you want to proceed.

    4. To verify that Pulumi removed the infrastructure components, log in to the AWS console and select Services > VPC > VPCs.

      You should not see the eks-cdm VPC in the list of VPCs.

  5. Remove the CDM cluster from your local computer’s Kubernetes settings:

    1. Unset the KUBECONFIG environment variable:

      $ unset KUBECONFIG
    2. Run the kubectx command.

      The Kubernetes context for the CDM cluster should not appear in the kubectx command output.

CDM Removal: AKS

When you’re done working with the CDM, you can remove it from Azure by following this procedure:

Remove the CDM
  1. Run the skaffold delete command to shut down your deployment and remove it from your namespace.

  2. Log in to Pulumi using the local option or the Pulumi service. Be sure to log in the same way that you logged in when you created your cluster in Step 3 of Create a Kubernetes Cluster for the CDM.

  3. Remove your cluster:

    1. Change to the directory that contains the cluster configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/azure/aks
    2. Select the Pulumi stack that you used when you created your cluster:

      $ pulumi stack select aks-medium
    3. Delete your cluster:

      $ pulumi destroy

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this destroy?

      Review the operation, and then select yes if you want to proceed.

    4. To verify that Pulumi removed the cluster, log in to the Azure console and select Kubernetes services.

      You should not see the CDM cluster in the list of Kubernetes clusters.

  4. Remove infrastructure components:

    1. Change to the directory that contains the Azure infrastructure stack configuration files:

      $ cd /path/to/forgeops/cluster/pulumi/azure/infra
    2. Select the azure-infra Pulumi stack:

      $ pulumi stack select azure-infra
    3. Delete the infrastructure components:

      $ pulumi destroy

      Pulumi provides a preview of the operation and issues the following prompt:

      Do you want to perform this destroy?

      Review the operation, and then select yes if you want to proceed.

    4. To verify that Pulumi removed the infrastructure components, log in to the Azure console. Display the resource groups.

      The resource group named azure-infra-ip-resource-group should not be present.

  5. Remove the CDM cluster from your local computer’s Kubernetes settings:

    1. Unset the KUBECONFIG environment variable:

      $ unset KUBECONFIG
    2. Run the kubectx command.

      The Kubernetes context for the CDM cluster should not appear in the kubectx command output.

Next Steps

If you’ve followed the instructions for deploying the CDM without modifying configurations, then the following indicates that you’ve been successful:

  • The Kubernetes cluster and pods are up and running.

  • DS, AM, and IDM are installed and running. You can access each ForgeRock component.

  • DS is provisioned with sample users. Replication and failover work as expected.

  • Monitoring tools are installed and running. You can access a monitoring console for DS, AM, and IDM.

  • You can run the benchmarking tests without errors.

  • Your benchmarking test results are similar to our results.

When you’re satisfied that all of these conditions are met, then you’ve successfully taken the first steps towards deploying the ForgeRock Identity Platform in the cloud. Congratulations!

Now that you’re familiar with the CDM—ForgeRock’s reference implementation—you’re ready to work with a project team to plan and configure your production deployment. You’ll need a team with expertise in the ForgeRock Identity Platform, in your cloud provider, and in Kubernetes on your cloud provider. We strongly recommend that you engage a ForgeRock technical consultant or partner to assist you with deploying the platform in production.

You’ll perform these major activities:

Platform configuration. ForgeRock Identity Platform experts configure AM and IDM using the CDK, and build custom Docker images for the ForgeRock Identity Platform. The Cloud Developer’s Kit Documentation provides information about platform configuration tasks.

Cluster configuration. Cloud technology experts configure the Kubernetes cluster that will host the ForgeRock Identity Platform for optimal performance and reliability. Tasks include: configuring your Kubernetes cluster to suit your business needs; setting up monitoring and alerts to track site health and performance; backing up configuration and user data for disaster preparedness; and securing your deployment. The Deployment Topics Documentation and READMEs in the forgeops repository provide information about cluster configuration.

Site reliability engineering. Site reliability engineers monitor the ForgeRock Identity Platform deployment, and keep the deployment up and running based on your business requirements. These might include use cases, service-level agreements, thresholds, and load test profiles. The Deployment Topics Documentation, and READMEs in the forgeops repository, provide information about site reliability.

Deployment Topics Documentation

After you get the CDM up and running, you can use it to test deployment customizations — options that are not part of the CDM, but which you might want to use when you deploy in production. This part of the documentation covers building base Docker images, scheduling backups, restoring directory data, customizing monitoring, and customizing security.

Start Here

Important information you should know before deploying on Kubernetes.

Base Docker Images

Build base Docker images for the platform.

Backup and Restore

Schedule backups for directory data, and restore data if a failure occurs.

Monitoring

Customize the CDM’s monitoring configuration and set up alerts.

Security

Customize CDM security.

Support from ForgeRock

Support options for the ForgeRock Identity Platform and ForgeRock’s DevOps offering.

The ForgeRock Identity Platform serves as the basis for our simple and comprehensive identity and access management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, see https://www.forgerock.com.

Base Docker Images

ForgeRock provides a set of unsupported, evaluation-only base images for the ForgeRock Identity Platform. These images are available in ForgeRock’s public Docker registry.

Developers working with the CDK use the base images from ForgeRock to build customized Docker images for a fully-configured ForgeRock Identity Platform deployment:

Brief overview of containers for developers.

Users working with the CDM also use the base images from ForgeRock to perform proof-of-concept deployments, and to benchmark the ForgeRock Identity Platform.

The base images from ForgeRock are evaluation-only. They are not supported for production use. Because of this, you must build your own base images before you deploy in production:

Brief overview of containers in production.

This topic tells you how to build your own base images, which you can deploy in production.

Your Own Base Docker Images

Perform the following procedure to build base Docker images that you can use in production deployments of the ForgeRock Identity Platform. After you’ve built the base images, push them to your Docker registry:

Build and Deploy Your Own Base Docker Images
  1. Download the latest versions of the AM, Amster, IDM, and DS .zip files from the ForgeRock Download Center. Optionally, you can also download the latest version of the IG .zip file.

  2. Build an Amster Docker image. This image must be available in order to build the AM image in the next step:

    1. Unzip the Amster .zip file:

      $ unzip Amster-7.0.0.zip -d amster
    2. Change to the amster/samples/docker directory in the expanded .zip file output.

    3. Run the setup.sh script:

      $ ./setup.sh
      
      + mkdir -p build
      + find ../.. '!' -name .. '!' -name samples '!' -name docker -maxdepth 1 -exec cp -R '{}' build/ ';'
      + cp ../../docker/amster-install.sh ../../docker/docker-entrypoint.sh ../../docker/export.sh ../../docker/tar.sh build
    4. Build the amster Docker image:

      $ docker build --tag amster:7.0 .
      
      Sending build context to Docker daemon   51.7MB
      Step 1/12 : FROM gcr.io/forgerock-io/java-11:latest
       --→ 4d0811d78b02
      Step 2/12 : ENV FORGEROCK_HOME /home/forgerock
       --→ Running in 472b8b0e1200
      Removing intermediate container 472b8b0e1200
      . . .
  3. Build the AM base image:

    1. Unzip the AM .zip file.

    2. Change to the openam/samples/docker directory in the expanded .zip file output.

    3. Update and run the setup.sh script:

      $ chmod u+x setup.sh
      $ sed -i'.tmp' -e 's/[Oo]pen[Aa][Mm]/AM/g' setup.sh
      $ ./setup.sh
    4. Change to the images/am-empty directory.

    5. Set privileges on scripts in the images/am-empty directory:

      $ chmod u+x docker-entrypoint.sh
      $ chmod u+x scripts/*
    6. Update the WAR file name in the Dockerfile:

      $ sed -i'.tmp' -e 's/openam.war/AM.war/g' Dockerfile
    7. Build the am-empty Docker image:

      $ docker build --tag am-empty:7.0 .
      
      Sending build context to Docker daemon  198.7MB
      Step 1/27 : FROM tomcat:9-jdk11-adoptopenjdk-hotspot AS base
      9-jdk11-adoptopenjdk-hotspot: Pulling from library/tomcat
      a1125296b23d: Pull complete
      3c742a4a0f38: Pull complete
      4c5ea3b32996: Pull complete
      1b4be91ead68: Pull complete
      0cbfb179272d: Pull complete
      9648e8b5b6e1: Pull complete
      d1aef586a7d1: Pull complete
      ddd6eed10da2: Pull complete
      3f5d89f2e2b4: Pull complete
      Digest: sha256:ee307e4f1a1f5596b3636eb9107aa7989c768716bf0157651b28a4e34ff0846f
      Status: Downloaded newer image for tomcat:9-jdk11-adoptopenjdk-hotspot
       --→ 4f1036cadd4b
      Step 2/27 : RUN apt-get update -y &&     apt-get install -y binutils wget unzip
       --→ Running in ef6f6c08ba9b
      . . .
    8. Change to the ../am-base directory.

    9. Set privileges on scripts in the images/am-base directory:

      $ chmod u+x docker-entrypoint.sh
      $ chmod u+x scripts/*
    10. Build the am-base Docker image:

      $ docker build --build-arg docker_tag=7.0 --tag my-registry/am-base:7.0 .
      
      Sending build context to Docker daemon  27.27MB
      Step 1/27 : ARG docker_tag=latest
      Step 2/27 : FROM amster:${docker_tag} as amster
       --→ 50d60dbf29f5
      Step 3/27 : FROM am-empty:${docker_tag} AS generator
       --→ 0b258dc0c896
      Step 4/27 : USER 0
       --→ Running in 0512b3042833
      Removing intermediate container 0512b3042833
       --→ 59dfa4e1043e
      . . .
  4. Now that the AM image is built, tag the Amster image in advance of pushing it to your private repository:

    $ docker tag amster:7.0 my-registry/amster:7.0
  5. Build the DS base image:

    1. Unzip the DS .zip file.

    2. Change to the opendj directory in the expanded .zip file output.

    3. Run the samples/docker/setup.sh script to create a server:

      $ ./samples/docker/setup.sh
      
      + rm -f template/config/tools.properties
      + cp -r samples/docker/Dockerfile samples/docker/README.md . . .
      + rm -rf — README README.md bat '*.zip' opendj_logo.png setup.bat upgrade.bat setup.sh
      + ./setup --serverId docker --hostname localhost . . .
      
      Validating parameters…​.. Done
      Configuring certificates…​…​. Done
      . . .
    4. Build the DS base Docker image:

      $ docker build --tag my-registry/ds:7.0 .
      
      Sending build context to Docker daemon  54.19MB
      Sending build context to Docker daemon  55.31MB
      Step 1/14 : FROM gcr.io/forgerock-io/java-11:latest
       --→ 4d0811d78b02
      Step 2/14 : COPY --chown=forgerock:root . /opt/opendj/
       --→ 75c3db504d4c
      Step 3/14 : USER 11111
       --→ Running in 2346c3e1d73f
      Removing intermediate container 2346c3e1d73f
       --→ d66c728f8d2e
      Step 4/14 : WORKDIR /opt/opendj
       --→ Running in 2aa62e2d415f
      Removing intermediate container 2aa62e2d415f
       --→ 9e2cdf65ae56
      . . .
  6. Build the ldif-importer base image:

    1. Change to the /path/to/forgeops/docker/7.0/ldif-importer directory.

    2. Open the file, Dockerfile.

    3. Change the FROM statement—the first line in the file—to:

      FROM my-registry/ds:7.0
    4. Save and close the updated file.

    5. Create the base ldif-importer image:

      $ docker build . --tag my-registry/ldif-importer:7.0
  7. Build the idm Docker image:

    1. Unzip the IDM .zip file.

    2. Change to the openidm directory in the expanded .zip file output.

    3. Build the IDM base Docker image:

      $ docker build . --file bin/Custom.Dockerfile --tag my-registry/idm:7.0
      
      Sending build context to Docker daemon  220.2MB
      Step 1/7 : FROM gcr.io/forgerock-io/java-11:latest
       --→ 4d0811d78b02
      Step 2/7 : RUN apt-get update &&     apt-get install -y ttf-dejavu
       --→ Running in e0943ff14f4b
      Get:1 http://deb.debian.org/debian stable InRelease [121 kB]
      Get:2 http://deb.debian.org/debian stable-updates InRelease [51.9 kB]
      Get:3 http://deb.debian.org/debian stable/main amd64 Packages [7905 kB]
      Get:4 http://security.debian.org/debian-security stable/updates InRelease [65.4 kB]
      Get:5 http://security.debian.org/debian-security stable/updates/main amd64 Packages [213 kB]
      Get:6 http://deb.debian.org/debian stable-updates/main amd64 Packages [7868 B]
      Fetched 8364 kB in 2s (3401 kB/s)
      Reading package lists…​
      . . .
  8. (Optional) Build the IG base image:

    1. Unzip the IG .zip file.

    2. Change to the identity-gateway directory in the expanded .zip file output.

    3. Build the IG base Docker image:

      $ docker build ig --tag my-registry/ig:7.0
      
      Sending build context to Docker daemon  54.19MB
      Step 1/7 : FROM gcr.io/forgerock-io/java-11:latest
      latest: Pulling from forgerock-io/java-11
      d50302ca539a: Already exists
      79c4c086a545: Pull complete
      dc6dba627cfa: Pull complete
      Digest: sha256:5c5fdae70dbabb58c6fa0609b4d5a51b049f562e337d8bd9ed8653f7078b3d88
      Status: Downloaded newer image for gcr.io/forgerock-io/java-11:latest
       --→ 4d0811d78b02
      Step 2/7 : ENV INSTALL_DIR /opt/ig
       --→ Running in 5fff26381d8f
      Removing intermediate container 5fff26381d8f
       --→ e5bb2b75f4fb
      Step 3/7 : COPY --chown=forgerock:root . "${INSTALL_DIR}"
       --→ 1f35484fefb8
      Step 4/7 : ENV IG_INSTANCE_DIR /var/ig
       --→ Running in 3526eaf403d5
      Removing intermediate container 3526eaf403d5
       --→ 194c0495a29d
      Step 5/7 : RUN mkdir -p "${IG_INSTANCE_DIR}"  && chown -R forgerock:root "${IG_INSTANCE_DIR}" "${INSTALL_DIR}"     && chmod -R g+rwx "${IG_INSTANCE_DIR}" "${INSTALL_DIR}"
       --→ Running in a9c4bbcb7df0
      Removing intermediate container a9c4bbcb7df0
       --→ b6ca5a1022a7
      Step 6/7 : USER 11111
       --→ Running in fd53e422afad
      Removing intermediate container fd53e422afad
       --→ 954148a95b46
      Step 7/7 : ENTRYPOINT ${INSTALL_DIR}/bin/start.sh ${IG_INSTANCE_DIR}
       --→ Running in 59353752d80a
      Removing intermediate container 59353752d80a
       --→ 610d9934bfd0
      Successfully built 610d9934bfd0
      Successfully tagged my-registry/ig:7.0
  9. Run the docker images command to verify that you built the base images:

    $ docker images
    
    REPOSITORY                      TAG      IMAGE ID        CREATED        SIZE
    my-registry/am-base             7.0      d115125b1c3f    1 hour ago     795MB
    my-registry/amster              7.0      d9e1c735f415    1 hour ago     577MB
    my-registry/ds                  7.0      ac8e8ab0fda6    1 hour ago     196MB
    my-registry/idm                 7.0      0cc1b7f70ce6    1 hour ago     387MB
    my-registry/ig                  7.0      9728c30c1829    1 hour ago     249MB
    my-registry/ldif-importer       7.0      1ef5333c4230    1 hour ago     223MB
    . . .
  10. Push the new base Docker images to your Docker registry.

    See your registry provider documentation for detailed instructions. For most Docker registries, you run the docker login command to log in to the registry. Then, you run the docker push command to push a Docker image to the registry.

    However, some Docker registries have different requirements. For example, to push Docker images to Google Container Registry, you use Google Cloud SDK commands instead of using the docker push command.

    Push the following images:

    • my-registry/am-base:7.0

    • my-registry/amster:7.0

    • my-registry/ds:7.0

    • my-registry/ldif-importer:7.0

    • my-registry/idm:7.0

    If you’re deploying your own IG base image, also push the my-registry/ig:7.0 image.
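
    For most registries, the push sequence is a docker login followed by one docker push per image. A minimal sketch, assuming a registry that accepts the standard Docker commands (adapt as needed for registries such as Google Container Registry):

    $ docker login my-registry
    $ docker push my-registry/am-base:7.0
    $ docker push my-registry/amster:7.0
    $ docker push my-registry/ds:7.0
    $ docker push my-registry/ldif-importer:7.0
    $ docker push my-registry/idm:7.0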

Developer Dockerfile Changes

After you’ve pushed your own base images to your Docker registry, update the Dockerfiles that your developers use when creating customized Docker images for the ForgeRock Identity Platform. The Dockerfiles can now reference your own base images instead of the evaluation-only images from ForgeRock.

Change Developer Dockerfiles to Use Your Base Images
  1. Update the AM Dockerfile:

    1. Change to the /path/to/forgeops/docker/7.0/am directory.

    2. Open the file, Dockerfile, in that directory.

    3. Change the line:

      FROM gcr.io/forgerock-io/am-base...

      to:

      FROM my-registry/am-base:7.0
  2. Make a similar change to the file, /path/to/forgeops/docker/7.0/amster/Dockerfile.

  3. Make a similar change to the file, /path/to/forgeops/docker/7.0/ds/cts/Dockerfile.

  4. Make a similar change to the file, /path/to/forgeops/docker/7.0/ds/idrepo/Dockerfile.

  5. Make a similar change to the file, /path/to/forgeops/docker/7.0/idm/Dockerfile.

  6. (Optional) Make a similar change to the file, /path/to/forgeops/docker/7.0/ig/Dockerfile.
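
If you prefer to script the edit instead of making it by hand, a sketch using sed is shown here for the AM Dockerfile. The pattern for the other Dockerfiles is analogous, although the upstream image names they reference are not shown here:

$ cd /path/to/forgeops/docker/7.0
$ sed -i'.tmp' -e 's|^FROM gcr.io/forgerock-io/am-base.*|FROM my-registry/am-base:7.0|' am/Dockerfile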

You can now build customized Docker images for the ForgeRock Identity Platform based on your own Docker images and use them in production deployments.

The next time you run Skaffold, you must set the --no-prune and --cache-artifacts options to false to ensure that Skaffold loads the new images that you just built instead of loading previous images from cache. For example:

$ skaffold run --no-prune=false --cache-artifacts=false

Backup and Restore

The backup topology of DS in the CDM:

Diagram about backup topology.

Some important DS backup and restore considerations in the CDM:

  • Six DS instances are deployed using Kubernetes stateful sets (three each for idrepo and cts backends).

  • DS data backups are stored in Google Cloud Storage, Amazon S3, or Azure Blob Storage. Before deploying the CDM, set up a cloud storage container and the necessary credentials to access the container.

  • Backups must be scheduled using the schedule-backups.sh script in the /path/to/forgeops/bin directory on your local computer.

  • By default, an incremental backup of the CTS and identity repository directory instances is performed every hour. The first incremental backup created is a full backup.

  • The backup schedule can be customized separately for CTS and identity repository DS instances based on your recovery objectives.

  • By default, backups are scheduled from the last pod of each DS instance (the pod whose name ends in -2). You can customize and schedule backups from any pod of a DS instance. You can also schedule backups from multiple pods.

  • DS server instances use cryptographic keys to sign and verify the integrity of backup files, and to encrypt data. Server instances protect these keys in a deployment by encrypting them with the shared master key. For portability, servers store the encrypted keys in the backup files.

  • An idrepo instance can be restored using the backup of any other idrepo instance, as long as it holds a copy of the shared master key used to encrypt the keys.

  • Similarly, a cts instance can also be restored using the backup of another cts instance, as long as the shared master key is available to the cts being restored.

  • You can set up new DS instances with data from a previous backup when you deploy CDM.

DS Backup Schedules

After you deploy the CDM, schedule backups of the directory data.

Default Backup Schedule

The default backup schedule creates incremental backups of the idrepo instance at the beginning of every hour, and the cts instance at 10 minutes past every hour.

Run the schedule-backups.sh script to start backing up cts and idrepo instances using the default schedule:

$  /path/to/forgeops/bin/schedule-backups.sh my-namespace

In the CDM deployment, DS is deployed to the prod namespace, so specify prod as the namespace in the above command. If you have deployed DS in another namespace, specify that namespace instead.

Customized Backup Schedule

You can customize the backup schedule for cts and idrepo instances separately. You can also schedule backups from any DS pod.

To customize the schedule for the idrepo instance, and to schedule backups from the ds-idrepo-1 pod in addition to the ds-idrepo-2 pod:

Customize Backup Schedule
  1. Edit the schedule-backups.sh file in the /path/to/forgeops/bin folder and change the line BACKUP_SCHEDULE_IDREPO="0 * * * *" to suit your backup schedule for idrepo.

    For example, the following line schedules an incremental backup for the idrepo instance every 30 minutes:

    BACKUP_SCHEDULE_IDREPO="*/30 * * * *"

  2. Edit the kustomization.yaml file in the /path/to/forgeops/kustomize/base/kustomizeConfig folder on your local computer and change the line that begins with - DSBACKUP_HOSTS= to:

    - DSBACKUP_HOSTS="ds-cts-2,ds-idrepo-2,ds-idrepo-1"

  3. Run the schedule-backups.sh script.

    $ /path/to/forgeops/bin/schedule-backups.sh prod

Similarly, edit the schedule-backups.sh script and change the line BACKUP_SCHEDULE_CTS="10 * * * *" to suit your backup schedule for the cts instance. If you want to add or change the cts pod that performs backup, edit the kustomization.yaml file in the /path/to/forgeops/kustomize/base/kustomizeConfig folder. Then run the schedule-backups.sh script.

The schedule-backups.sh script stops any previously scheduled backup jobs before initiating the new schedule.

CDM Automatic Restore

When the CDM is deployed, new, empty DS instances are created. You can enable the automatic restore capability in CDM to:

  • Deploy new DS instances using the data from a previous backup.

  • Set up the cts and idrepo DS pods to recover from operational failures.

The automatic restore capability to deploy new instances with backup data is useful when a system disaster occurs, or when directory services are lost. It is also useful when you want to port test environment data to a production deployment. Note that when you restore a DS instance, all its replicas are restored.

You can enable automatic restore before you deploy CDM. To enable automatic restore, you must set up the DS data backup to cloud storage, copy the backup data to the cloud storage location, and then follow these steps:

Enable Automatic Restore
  1. Change to the /path/to/forgeops/kustomize/base/kustomizeConfig directory.

  2. Edit the kustomization.yaml file and set the AUTORESTORE_FROM_DSBACKUP parameter to "true".

  3. Deploy the platform as described in CDM Deployment.

In a CDM deployment that has automatic restore enabled, you can recover a failed pod:

Recover a Failed DS Pod
  1. Delete the PVC attached to the failed DS pod using the kubectl delete pvc command. The PVC might not get deleted immediately if the attached pod is running.

  2. In another terminal window, stop the failed DS pod using the kubectl delete pod command. This will delete the PVC.
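
For example, a minimal sketch for recovering the ds-idrepo-2 pod in the prod namespace. The PVC name shown assumes the data-pod-name naming convention used by Kubernetes stateful sets; verify the actual name with kubectl get pvc before deleting. Run the first command in one terminal window (it might not complete until the pod is deleted), and the second command in another terminal window:

$ kubectl delete pvc data-ds-idrepo-2 --namespace prod
$ kubectl delete pod ds-idrepo-2 --namespace prod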

The automatic restore feature of CDM recreates the PVC, copies data from the backup, restores the DS pod with the latest backup, and replicates the current data from other DS pods.

You can also perform manual restore of DS. For more information about how to manually restore DS, see the Restore section in the ForgeRock Directory Services Maintenance Guide.

Consider following these best practices:

  • Use a backup that is newer than the last replication purge.

  • When you restore a DS replica using backups that are older than the purge delay, that replica will no longer be able to participate in replication. Reinitialize the replica to restore the replication topology.

  • If the available backups are older than the purge delay, then initialize the DS replica from an up-to-date master instance. For more information on how to initialize a replica, see Manual Initialization in the ForgeRock Directory Services Configuration Guide.

Monitoring Customizations

This topic describes the CDM’s monitoring architecture. It also covers common customizations you might perform to change the way monitoring, reporting, and sending alerts works in your environment.

The CDM uses Prometheus to monitor ForgeRock Identity Platform components and Kubernetes objects, Prometheus Alertmanager to send alert notifications, and Grafana to analyze metrics using dashboards.

Prometheus and Grafana are deployed when you run the prometheus-deploy.sh script. This script installs Helm charts from the prometheus-operator project into the monitoring namespace of a CDM cluster. These Helm charts deploy Kubernetes pods that run the Prometheus and Grafana services.

The following Prometheus and Grafana pods from the prometheus-operator project run in the monitoring namespace:

Pod Description

alertmanager-prometheus-operator-alertmanager-0

Handles Prometheus alerts by grouping them together, filtering them, and then routing them to a receiver, such as a Slack channel.

prometheus-operator-kube-state-metrics-...

Generates Prometheus metrics for Kubernetes API objects, such as deployments and nodes.

prometheus-operator-prometheus-node-exporter-...

Generates Prometheus metrics for cluster node resources, such as CPU, memory, and disk usage. One pod is deployed for each CDM node.

prometheus-operator-grafana-...

Provides the Grafana service.

prometheus-prometheus-operator-prometheus-0

Provides the Prometheus service.

See the prometheus-operator Helm chart README file for more information about the pods in the preceding table.

In addition to the pods from the prometheus-operator project, the import-dashboards-... pod from the forgeops project runs after Grafana starts up. This pod imports Grafana dashboards from the ForgeRock Identity Platform and terminates after importing has completed.

To access CDM monitoring dashboards, see CDM Monitoring.

The CDM uses Prometheus and Grafana for monitoring, reporting, and sending alerts. If you prefer to use different tools, deploy infrastructure in Kubernetes to support those tools.

Prometheus and Grafana are evolving technologies. Descriptions of these technologies were accurate at the time of this writing, but might differ when you deploy them.

Custom Grafana Dashboard Imports

The CDM includes a set of Grafana dashboards. You can customize, export and import Grafana dashboards using the Grafana UI or HTTP API.

For information about importing custom Grafana dashboards, see the Import Custom Grafana Dashboards section of the Prometheus and Grafana Deployment README file in the forgeops repository.
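For illustration only, the following sketch shows one way to import a dashboard using Grafana’s HTTP API; the README file remains the authoritative reference. The Grafana URL, API key, and file name are placeholders, and custom-dashboard.json is assumed to contain a JSON body with a dashboard object, as produced by wrapping an exported dashboard:

$ curl -X POST https://grafana.example.com/api/dashboards/db \
 -H "Authorization: Bearer <grafana-api-key>" \
 -H "Content-Type: application/json" \
 -d @custom-dashboard.json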

Prometheus Operator

The CDM’s monitoring framework is based on the Prometheus Operator for Kubernetes project. The Prometheus Operator project provides monitoring definitions for Kubernetes services and deployments, and manages Prometheus instances.

When deployed, the Prometheus Operator watches for ServiceMonitor resources. ServiceMonitor is a Kubernetes custom resource definition (CRD): a custom object type that you can manage with the kubectl command. ServiceMonitor resources define the targets that Prometheus scrapes.

In the CDM, the Prometheus Operator configuration is defined in the prometheus-operator.yaml file in the forgeops repository. To customize the CDM monitoring framework, change values in this file, following the examples documented in the README files in the Prometheus Operator project on GitHub.
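For illustration, a ServiceMonitor for a hypothetical service labeled app: my-service might look like the following sketch. The CDM’s actual ServiceMonitor definitions are part of the monitoring deployment in the forgeops repository:

  apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    name: my-service-monitor
    namespace: monitoring
  spec:
    selector:
      matchLabels:
        app: my-service
    endpoints:
      # Name of the service port that exposes Prometheus metrics
      - port: metrics
        interval: 30s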

Additional Alerts

CDM alerts are defined in the fr-alerts.yaml file in the forgeops repository.

To configure additional alerts, see the Configure Alerting Rules section of the Prometheus and Grafana Deployment README file in the forgeops repository.
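As a sketch only, an additional alerting rule in the Prometheus rule format might look like the following. The group name, rule name, expression, and threshold are illustrative, not part of the CDM defaults:

  groups:
    - name: custom-alerts
      rules:
        - alert: HighPodRestartRate
          # Fires when a container restarts more than three times in ten minutes
          expr: increase(kube_pod_container_status_restarts_total[10m]) > 3
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: A container is restarting frequently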

Security Customizations

This topic describes several options for securing the ForgeRock Identity Platform as deployed on the CDM.

ForgeRock Secrets Generator

The ForgeRock secrets generator randomly generates all secrets for AM, IDM, and DS services running in the CDK and the CDM. The secrets generator runs as a Kubernetes job before AM, IDM, and DS are deployed.

See the forgeops-secrets README file for more information about the secrets generator, including a list of which secrets it generates, and how to override default secrets.
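To inspect the secrets that the generator created, list the secrets in your deployment namespace. The namespace name below is a placeholder:

$ kubectl get secrets --namespace my-namespace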

Cluster Access for Multiple AWS Users

It’s common for team members to share the use of a cluster. For team members to share a cluster, the cluster owner must grant access to each user.

Grant Users Access to an EKS Cluster
  1. Get the ARNs and names of users who need access to your cluster.

  2. Set the Kubernetes context to your Amazon EKS cluster.

  3. Edit the authorization configuration map for the cluster using the kubectl edit command:

    $ kubectl edit -n kube-system configmap/aws-auth
  4. Under the mapRoles section, insert a mapUsers section. The following example uses these parameters:

    • The user ARN is arn:aws:iam::012345678901:user/new.user.

    • The user name registered in AWS is new.user.

      …
      mapUsers: |
          - userarn: arn:aws:iam::012345678901:user/new.user
            username: new.user
            groups:
              - system:masters
      …
  5. For each additional user, add another - userarn: entry to the mapUsers: section:

    …
    mapUsers: |
        - userarn: arn:aws:iam::012345678901:user/new.user
          username: new.user
          groups:
            - system:masters
        - userarn: arn:aws:iam::901234567890:user/second.user
          username: second.user
          groups:
            - system:masters
    …
  6. Save the configuration map.
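After you save the configuration map, each new user can configure kubectl access to the cluster using their own AWS credentials. The cluster name and region below are placeholders:

$ aws eks update-kubeconfig --name my-cluster --region us-east-1
$ kubectl get pods --all-namespaces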

Access Restriction by IP Address

When you install the ingress controller in a production environment, consider configuring a CIDR block in the ingress controller’s Helm chart so that access to worker nodes is restricted to a specific IP address or range of IP addresses.

To specify a range of IP addresses allowed to access resources controlled by the ingress controller, specify the --set controller.service.loadBalancerSourceRanges=your IP range option when you install your ingress controller.

For example:

$ helm install --namespace nginx --name nginx \
 --set rbac.create=true \
 --set controller.publishService.enabled=true \
 --set controller.stats.enabled=true \
 --set controller.service.externalTrafficPolicy=Local \
 --set controller.service.type=LoadBalancer \
 --set controller.image.tag="0.21.0" \
 --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
 --set controller.service.loadBalancerSourceRanges="{81.0.0.0/8,3.56.113.4/32}" \
 stable/nginx-ingress

Secure HTTP

The CDK and CDM enable secure communication with ForgeRock Identity Platform services using an SSL-enabled ingress controller. Incoming requests and outgoing responses are encrypted. SSL is terminated at the ingress controller.

You can configure communication with ForgeRock Identity Platform services using one of the following options:

  • Over HTTPS using a self-signed certificate. Communication is encrypted, but users will receive warnings about insecure communication from some browsers.

  • Over HTTPS using a certificate with a trust chain that starts at a trusted root certificate. Communication is encrypted, and users will not receive warnings from their browsers.

  • Over HTTPS using a dynamically obtained certificate from Let’s Encrypt. Communication is encrypted and users will not receive warnings from their browsers. A cert-manager pod installed in your Kubernetes cluster calls Let’s Encrypt to obtain a certificate, and then automatically installs a Kubernetes secret.

You install a Helm chart from the cert-manager project to provision certificates. By default, cert-manager issues a self-signed certificate. You can also configure it to issue a certificate with a trust chain that begins at a trusted root certificate, or to obtain a certificate dynamically from Let’s Encrypt.

Certificate Management Automation

In the CDM, certificate management is provided by the cert-manager add-on. The certificate manager deployed in CDM generates a self-signed certificate to secure CDM communication.

In your own deployment, you can specify a different certificate issuer or DNS challenge provider by changing values in the ingress.yaml file.
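For example, to have cert-manager obtain certificates from Let’s Encrypt rather than issuing self-signed certificates, you could define a cluster issuer similar to the following sketch and reference it from the ingress configuration. The issuer name and email address are placeholders, and the exact API version and fields depend on the cert-manager release you deploy:

  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt-prod
  spec:
    acme:
      # Let's Encrypt production ACME endpoint
      server: https://acme-v02.api.letsencrypt.org/directory
      email: admin@example.com
      privateKeySecretRef:
        name: letsencrypt-prod-account-key
      solvers:
        - http01:
            ingress:
              class: nginx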

For more information about configuring certificate management, see the cert-manager documentation.

DevOps Release Notes

See the recommended Kubernetes versions for deploying ForgeRock Identity Platform 7.

See the limitations that apply when deploying ForgeRock Identity Platform 7 on Kubernetes.

July 16, 2020

Release tag: 2020.07.15-alleVongole

This release provides performance improvements and small bug fixes.

See the detailed change log.

June 25, 2020

Release tag: 2020.06.24-laPaniscia

This release provides performance improvements and small bug fixes.

See the detailed change log.

May 18, 2020

Release tag: 2020.05.13-AlPomodoro.1

This release changes the recommended method for obtaining the forgeops repository. We now recommend that users clone the repository, and then create a branch based on the current release tag. For more information, see About the forgeops Repository.

The release also provides performance improvements and small bug fixes.

See the detailed change log.

February 21, 2020

This release is a major new revision of the forgeops repository, and provides a completely new approach to ForgeRock DevOps. This release changes the way that AM, IDM, and IG configuration is stored and managed. Instead of relying on the external forgeops-init repository, configuration is now stored inside the Docker images for AM, IDM, and IG. Instead of using Helm charts, the forgeops repository now includes Kustomize and Skaffold example artifacts for deploying the CDK and CDM on Kubernetes.

The model for CDM cluster creation has also changed. Instead of bash scripts, the forgeops repository now provides Pulumi scripts to create GKE, EKS, and AKS clusters in which the CDM is installed.

Deprecated

Glossary

affinity (AM)

AM affinity deployment lets AM spread the LDAP request load over multiple directory server instances. Once a CTS token is created and assigned to a session, AM sends all subsequent operations on that token from any AM node to the same token origin directory server. This ensures that the load of CTS token management is spread across directory servers.

Amazon EKS

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on Amazon Web Services without needing to set up or maintain your own Kubernetes control plane.

ARN (AWS)

An Amazon Resource Name (ARN) uniquely identifies an Amazon Web Service (AWS) resource. AWS requires an ARN when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls.

AWS IAM Authenticator for Kubernetes

The AWS IAM Authenticator for Kubernetes is an authentication tool that lets you use Amazon Web Services (AWS) credentials for authenticating to a Kubernetes cluster.

Azure Kubernetes Service (AKS)

AKS is a managed container orchestration service based on Kubernetes. AKS is available on the Microsoft Azure public cloud. AKS manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications.

cloud-controller-manager

The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. The cloud-controller-manager daemon runs provider-specific controller loops only.

Cloud Developer’s Kit (CDK)

The developer artifacts in the forgeops Git repository, together with the ForgeRock Identity Platform documentation, form the Cloud Developer’s Kit (CDK). Use the CDK to set up the platform in your developer environment.

Cloud Deployment Model (CDM)

The Cloud Deployment Model (CDM) is a common use ForgeRock Identity Platform architecture, designed to be easy to deploy and easy to replicate. The ForgeRock Cloud Deployment Team has developed Kustomize bases and overlays, Skaffold configuration files, Docker images, and other artifacts expressly to build the CDM.

CloudFormation (AWS)

CloudFormation is a service that helps you model and set up your AWS resources. You create a template that describes all the AWS resources that you want. AWS CloudFormation takes care of provisioning and configuring those resources for you.

CloudFormation template (AWS)

An AWS CloudFormation template describes the resources that you want to provision in your AWS stack. AWS CloudFormation templates are text files formatted in JSON or YAML.

cluster

A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.

cluster master

A cluster master schedules, runs, scales, and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.

ConfigMap

A configuration map, called ConfigMap in Kubernetes manifests, binds the configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. The configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.

container

A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be "contained" together and made available to specific processes without interference from the rest of the system. Containers decouple applications from underlying host infrastructure.

DaemonSet

A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, the daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.

deployment

A Kubernetes deployment represents a set of multiple, identical pods. A deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.

deployment controller

A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.

Docker container

A Docker container is a runtime instance of a Docker image. The container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.

Docker Engine

The Docker Engine is a client-server application with these components:

  • A server, which is a type of long-running program called a daemon process (the dockerd command)

  • A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do

  • A command-line interface (CLI) client (the docker command)

Dockerfile

A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.

Docker Hub

Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.

Docker image

A Docker image is an application you would like to run. A container is a running instance of an image.

An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.

An image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.

Docker namespace

Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.

Docker registry

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.

Docker repository

A Docker repository is a public, certified repository from vendors and contributors to Docker. It contains Docker images that you can use as the foundation to build your applications and services.

dynamic volume provisioning

The process of creating storage volumes on demand is called dynamic volume provisioning. Dynamic volume provisioning lets you create storage volumes on demand. It automatically provisions storage when it is requested by users.

egress

An egress controls access to destinations outside the network from within a Kubernetes network. For an external destination to be accessed from a Kubernetes environment, the destination should be listed as an allowed destination in the whitelist configuration.

firewall rule

A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.

garbage collection

Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute, and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.

Google Kubernetes Engine (GKE)

The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.

horizontal pod autoscaler

The horizontal pod autoscaler lets a Kubernetes cluster automatically scale the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization. Users can specify the CPU utilization target to enable the controller to adjust the number of replicas.

Source: Horizontal Pod Autoscaler in the Kubernetes documentation.

ingress

An ingress is a collection of rules that allow inbound connections to reach the cluster services.

instance group

An instance group is a collection of virtual machine instances. Instance groups let you easily monitor and control a group of virtual machines together.

instance template

An instance template is a global API resource for creating VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are helpful for replicating environments.

kubectl

The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.

kube-controller-manager

The Kubernetes controller manager is a process that embeds core controllers shipped with Kubernetes. Each controller is a separate process. To reduce complexity, the controllers are compiled into a single binary and run in a single process.

kubelet

A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.

kube-scheduler

The kube-scheduler component is on the master node. It watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.

Kubernetes

Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.

Kubernetes DNS

A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.

Kubernetes namespace

Kubernetes supports multiple virtual clusters backed by the same physical cluster. A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:

  • default: The default namespace for user-created objects that don’t specify another namespace

  • kube-system: The namespace for objects created by the Kubernetes system

  • kube-public: The automatically created namespace that is readable by all users

Let’s Encrypt

Let’s Encrypt is a free, automated, and open certificate authority.

Microsoft Azure

Microsoft Azure is the Microsoft cloud platform, including infrastructure as a service (IaaS) and platform as a service (PaaS) offerings.

network policy

A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.

node (Kubernetes)

A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.

node controller (Kubernetes)

A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes, such as: lifecycle operations, operational status, and maintaining an internal list of nodes.

node pool (Kubernetes)

A Kubernetes node pool is a collection of nodes with the same configuration. When you create a cluster, all of its nodes are created in the default node pool. You can create custom node pools to configure nodes that have different resource requirements, such as memory, CPU, and disk type.

persistent volume

A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.

persistent volume claim

A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies the size and access modes for the requested storage.

pod anti-affinity (Kubernetes)

Kubernetes pod anti-affinity constrains which nodes can run your pod, based on labels on the pods that are already running on the node, rather than based on labels on nodes. Pod anti-affinity lets you control the spread of workload across nodes and also isolate failures to nodes.

pod (Kubernetes)

A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.

region (Azure)

An Azure region, also known as a location, is an area within a geography, containing one or more data centers.

replication controller (Kubernetes)

A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.

resource group (Azure)

A resource group is a container that holds related resources for an application. The resource group can include all of the resources for an application, or only those resources that are logically grouped together.

secret (Kubernetes)

A Kubernetes secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.

security group (AWS)

A security group acts as a virtual firewall that controls the traffic for one or more compute instances.

service (Kubernetes)

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.

service principal (Azure)

An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources. Service principals let applications access resources with the restrictions imposed by the assigned roles instead of accessing resources as a fully privileged user.

shard

Sharding is a way of partitioning directory data so that the load can be shared by multiple directory servers. Each data partition, also known as a shard, exposes the same set of naming contexts, but only a subset of the data. For example, a distribution might have two shards. The first shard contains all users whose names begin with A-M, and the second contains all users whose names begin with N-Z. Both have the same naming context.

stack (AWS)

A stack is a collection of AWS resources that you can manage as a single unit. You can create, update, or delete a collection of resources by using stacks. All the resources in a stack are defined by the AWS template.

stack set (AWS)

A stack set is a container for stacks. You can provision stacks across AWS accounts and regions by using a single AWS template. All the resources included in each stack of a stack set are defined by the same template.

subscription (Azure)

An Azure subscription is used for pricing, billing, and payments for Azure cloud services. Organizations can have multiple Azure subscriptions, and subscriptions can span multiple regions.

volume (Kubernetes)

A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.

VPC (AWS)

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud.

worker node (AWS)

An Amazon Elastic Container Service for Kubernetes (Amazon EKS) worker node is a standard compute instance provisioned in Amazon EKS.

workload (Kubernetes)

A Kubernetes workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.

Appendix A: Support From ForgeRock

This appendix contains information about support options for the ForgeRock Cloud Developer’s Kit, the ForgeRock Cloud Deployment Model, and the ForgeRock Identity Platform.

ForgeRock DevOps Support

ForgeRock has developed artifacts in the forgeops Git repository for the purpose of deploying the ForgeRock Identity Platform in the cloud. The companion DevOps documentation provides examples, including the ForgeRock Cloud Developer’s Kit (CDK) and the ForgeRock Cloud Deployment Model (CDM), to help you get started.

These artifacts and documentation are provided on an "as is" basis. ForgeRock does not guarantee the individual success developers may have in implementing the code on their development platforms or in production configurations.

Commercial Support

ForgeRock provides commercial support for the following DevOps resources:

  • Artifacts in the forgeops Git repository:

    • Files used to build Docker images for the ForgeRock Identity Platform:

      • Dockerfiles

      • Scripts and configuration files incorporated into ForgeRock’s Docker images

      • Canonical configuration profiles for the platform

    • Kustomize bases and overlays

    • Skaffold configuration files

  • ForgeRock DevOps Documentation

ForgeRock provides commercial support for the ForgeRock Identity Platform. For supported components, containers, and Java versions, see the following:

Support Limitations

ForgeRock provides no commercial support for the following:

  • Artifacts other than Dockerfiles, Kustomize bases, Kustomize overlays, and Skaffold YAML configuration files in the forgeops Git repository. Examples include scripts, example configurations, and so forth.

  • Non-ForgeRock infrastructure. Examples include Docker, Kubernetes, Google Cloud Platform, Amazon Web Services, Microsoft Azure, and so forth.

  • Non-ForgeRock software. Examples include Java, Apache Tomcat, NGINX, Apache HTTP Server, Certificate Manager, Prometheus, and so forth.

  • Production deployments that use ForgeRock’s evaluation-only Docker images. When deploying the ForgeRock Identity Platform using Docker images, you must build and use your own images for production deployments. For information about how to build your own Docker images for the ForgeRock Identity Platform, see Base Docker Images.

Third-Party Kubernetes Services

ForgeRock supports deployments on Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (Amazon EKS), and Microsoft Azure Kubernetes Service (AKS), on the Kubernetes versions listed here.

You should be able to deploy the ForgeRock Identity Platform on any Cloud Native Computing Foundation-compliant Kubernetes implementation, but some modifications to the forgeops repository artifacts might be required to accommodate specific details in your implementation. For example, the way ingress is handled is often unique to Kubernetes implementations.

If you report an issue on an implementation other than GKE, Amazon EKS, or AKS, ForgeRock will make commercially reasonable efforts to provide first-line support. If we are unable to reproduce a reported issue internally, we might ask you to:

  • Reproduce the problem on one of the supported platforms.

  • Engage your Kubernetes vendor’s support to collaborate on problem identification and remediation.

Customers deploying on platforms other than GKE, Amazon EKS, or AKS are expected to have a support contract in place with their vendor that ensures support resources can be engaged if this situation occurs.

Documentation Access

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock developer documentation, such as this site, aims to be technically accurate with respect to the sample that is documented. It is visible to everyone.

Problem Reports and Feedback

If you are a named customer Support Contact, contact ForgeRock using the Customer Support Portal to request information, or report a problem with Dockerfiles, Kustomize bases, Kustomize overlays, or Skaffold YAML configuration files in the CDK or the CDM.

If you have questions regarding the CDK or the CDM that are not answered in the documentation, file an issue at https://github.com/ForgeRock/forgeops/issues.

When requesting help with a problem, include the following information:

  • Description of the problem, including when the problem occurs and its impact on your operation.

  • Steps to reproduce the problem.

    If the problem occurs on a Kubernetes system other than Minikube, GKE, EKS, OpenShift, or AKS, we might ask you to reproduce the problem on one of those.

  • HTML output from the debug-logs.sh script. For more information, see Pod Descriptions and Container Logs.

  • Description of the environment, including the following information:

    • Environment type: Minikube, GKE, EKS, AKS, or OpenShift.

    • Software versions of supporting components:

      • Oracle VirtualBox (Minikube environments only).

      • Docker client (all environments).

      • Minikube (all environments).

      • kubectl command (all environments).

      • Kustomize (all environments).

      • Skaffold (all environments).

      • Google Cloud SDK (GKE environments only).

      • Amazon AWS Command Line Interface (EKS environments only).

      • Azure Command Line Interface (AKS environments only).

    • forgeops repository branch.

    • Any patches or other software that might be affecting the problem.

Contact Information

ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.

ForgeRock has staff members around the globe who support our international customers and partners. For details on ForgeRock’s support offering, including support plans and service-level agreements (SLAs), visit https://www.forgerock.com/support.


1. When you build the am Docker image, the AM configuration files are copied from the /path/to/forgeops/docker/7.0/am/config directory to the /home/forgerock/openam/config directory.
2. When you build the idm Docker image, the IDM configuration files are copied from the /path/to/forgeops/docker/7.0/idm/conf directory to the /opt/openidm/conf directory.
3. When you build the am Docker image, the AM configuration files are copied from the /path/to/forgeops/docker/7.0/am/config directory to the /home/forgerock/openam/config directory.
4. When you build the idm Docker image, the IDM configuration files are copied from the /path/to/forgeops/docker/7.0/idm/conf directory to the /opt/openidm/conf directory.
5. For the short term, follow the steps in the procedure to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.
6. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
7. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
8. If your cluster’s context is not minikube, replace minikube with the actual context name in the skaffold config set command.
9. For the short term, follow the steps in the procedure to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.
10. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
11. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
12. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
13. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
14. You can automate logging into ECR every 12 hours by using the cron utility.
15. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
16. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
17. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
18. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
19. Since Pulumi does not maintain previous brew packages, and CDM installation depends on Pulumi version 2.7.1, install Pulumi from Available Versions of Pulumi.
20. For the short term, follow the steps in the procedure to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.
21. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
22. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
23. Since Pulumi does not maintain previous brew packages, and CDM installation depends on Pulumi version 2.7.1, install Pulumi from Available Versions of Pulumi.
24. For the short term, follow the steps in the procedure to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.
25. You can automate logging into ECR every 12 hours by using the cron utility.
26. Install Docker Desktop on macOS. On Linux computers, install Docker CE instead. For more information, see the Docker documentation.
27. The Linux version of Homebrew does not support installing software it maintains as casks. Because of this, if you’re setting up an environment on Linux, you won’t be able to use Homebrew for this package. Instead, refer to the package’s documentation for installation instructions.
28. Since Pulumi does not maintain previous brew packages, and CDM installation depends on Pulumi version 2.7.1, install Pulumi from Available Versions of Pulumi.
29. For the short term, follow the steps in the procedure to clone the forgeops repository and check out the 2020.08.07-ZucchiniRicotta.1 tag. For the long term, you’ll need to implement a strategy for managing updates, especially if a team of people in your organization works with the repository. For example, you might want to adopt a workflow that uses a fork as your organization’s common upstream repository. For more information, see About the forgeops Repository.
30. You can automate logging in to ACR by using the cron utility.
31. The CDM and the CDK both use the CDK canonical configuration profile.