Guide to ForgeRock Identity Platform™ deployment in the cloud using DevOps techniques.


The DevOps Guide covers installation, configuration, and deployment of the ForgeRock Identity Platform using DevOps techniques.

This guide provides a general introduction to DevOps deployment of ForgeRock® software and an overview of DevOps deployment strategies. It also includes several deployment examples that illustrate best practices to help you get started with your own DevOps deployments.

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the ForgeRock DevOps Examples is not available from ForgeRock.

For information about obtaining support for ForgeRock products, see Appendix A, "Getting Support".


Do not deploy ForgeRock software in a containerized environment in production until you have successfully deployed and tested the software in a non-production environment.

Deploying ForgeRock software in a containerized environment requires advanced proficiency in the technologies you use in your deployment. The technologies include, but are not limited to, Docker, Kubernetes, load balancers, Google Cloud Platform, Amazon Web Services, and Microsoft Azure.

If your organization lacks experience with complex DevOps deployments, then either engage with a certified ForgeRock consulting partner or deploy the platform on traditional architecture.

About ForgeRock Identity Platform Software

ForgeRock Identity Platform™ is the only offering for access management, identity management, user-managed access, directory services, and an identity gateway, designed and built as a single, unified platform.

The platform includes the following components that extend what is available in open source projects to provide fully featured, enterprise-ready software:

  • ForgeRock Access Management (AM)

  • ForgeRock Identity Management (IDM)

  • ForgeRock Directory Services (DS)

  • ForgeRock Identity Gateway (IG)

  • ForgeRock Identity Message Broker (IMB)

Third-Party Software and DevOps Deployments

The ForgeRock Identity Platform DevOps Examples require you to install software products that are not part of the ForgeRock Identity Platform. We strongly recommend that you become familiar with basic concepts of the following software before attempting even your initial experiments with DevOps deployments:

Table 1. DevOps Environments Prerequisite Software

  • Oracle VirtualBox
    Recommended level of familiarity: Install, start, and stop VirtualBox software; understand virtual machine settings; create snapshots.
    Introductory material: First Steps chapter in the VirtualBox documentation.

  • Docker Client
    Recommended level of familiarity: Build, list, and remove images; understand the Docker client-server architecture; understand Docker registry concepts.
    Introductory material: Get Started With Docker tutorial.

  • Kubernetes
    Recommended level of familiarity: Identify Kubernetes entities such as pods and clusters; understand the Kubernetes client-server architecture.
    Introductory material: Kubernetes tutorials; Scalable Microservices with Kubernetes on Udacity; The Illustrated Children's Guide to Kubernetes.

  • Minikube
    Recommended level of familiarity: Understand what Minikube is; create and start a Minikube virtual machine; run docker and kubectl commands that access the Docker Engine and Kubernetes cluster running in the Minikube virtual machine.
    Introductory material: Running Kubernetes Locally via Minikube; Hello Minikube tutorial.

  • kubectl (Kubernetes client)
    Recommended level of familiarity: Run kubectl commands on a Kubernetes cluster.
    Introductory material: kubectl command overview.

  • Kubernetes Helm
    Recommended level of familiarity: Understand what a Helm chart is; understand the Helm client-server architecture; run the helm command to install, list, and delete Helm charts in a Kubernetes cluster.
    Introductory material: Helm Quickstart; blog entry describing Helm charts.

  • Google Kubernetes Engine (GKE)
    Recommended level of familiarity: Create a Google Cloud Platform account and project, and make GKE available in the project.
    Introductory material: Quickstart for Kubernetes Engine.

  • Google Cloud SDK
    Recommended level of familiarity: Run the gcloud command to access GKE components in a Google Cloud Platform project.
    Introductory material: Google Cloud SDK documentation.

Chapter 1. Gathering Deployment Information

A deployment worksheet helps you gather required information and communicate it to the deployment team. It can also serve as a check of the deployment team's readiness.

The following might become section headers:

  1. Identify and document resources to be used in the deployment, such as accounts and access credentials; clusters; namespaces; secrets; service accounts; URLs; domain names, hostnames, and port numbers; and license numbers or keys.

  2. Identify sequence dependencies for creating and using each of these resources.

  3. Create a timeline for obtaining and using resources in the deployment.

  4. Define a strategy for managing Docker images.

  5. Create a high availability plan.
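As a sketch of step 1, the worksheet could be captured in a YAML file that the deployment team maintains alongside its other deployment artifacts. Every name and value below is a hypothetical placeholder, not part of the DevOps Examples:

```yaml
# Hypothetical deployment worksheet -- all values are placeholders.
project:
  cloud_account: devops-admin@example.com   # account used to create the cluster
  cluster_name: forgerock-demo
  namespace: forgerock
domains:
  platform_fqdn: openam.example.com
  ports: [443]
secrets:
  docker_registry_secret: frregistrykey     # pull secret for private images
licenses:
  ds_license_key: "TBD"                     # sequence dependency: obtain before deployment
```

Keeping the worksheet in version control makes sequence dependencies (step 2) and the timeline (step 3) reviewable by the whole team.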

Chapter 2. GKE Deployment Runbook


" I think we need to include a "cookbook" or "runbook" or "example deployment" chapter in which we take either the "small" or "medium" deployment and walk through the whole thing from beginning to end using GKE. And whatever artifacts that we have used for this cookbook should go into our new "public" GitHub repository. The artifacts will be cluster creation and configuration scripts, custom values.yaml and any other utility to get the job done. And we have most of this already from our benchmarking and Pavel has done an excellent job with some scripts which we can reuse. I am willing to write this chapter btw provided that the doc team is willing to own it and clean it up. "

Chapter 3. Building CI/CD Infrastructure

This is ambitious and we ought to think about doing something very general:


" I think this is overly ambitious given our current state. CI/CD tools are very opinionated - even more than Kubernetes itself. Some possible choices are Jenkins, Spinnaker, Brigade, Codefresh, Pivotal Concourse, Google Cloudbuilder, and AWS/Azure (I am sure they have something). Whatever we pick is probably going to be used by at most 5% of our customer/partner base. I think we should use CI/CD to serve our own needs for deployment testing, and we might want to lightly document it (hey this is what we did, you might find it interesting...) "


" I mostly agree with Warren :) We could do a general model and talk about the integration points that we provide. "


" Mostly agree with Gary mostly agreeing with Warren. I would ask that Eng team demonstrates the tasks to Doc team so we can capture the details in some form for internal purposes. Then the Doc team can make the customer-facing doc as generic as possible. Does this work for you both? "

Chapter 4. Monitoring Your Deployment

Placeholder for the monitoring chapter, including running benchmarks to verify performance criteria are met.

Chapter 5. Making the ForgeRock Identity Platform Highly Available

Placeholder for the HA chapter.

Chapter 6. Backing up and Restoring Directory Data

Placeholder for the backup/restore chapter.

Chapter 7. Managing Passwords, Keys, and Keystores

Placeholder for the security management chapter.

Chapter 8. Upgrading the ForgeRock Identity Platform

Placeholder for the upgrade chapter.


" We have some major product issues to resolve first. I'd like to see us delay discussion of upgrades until AM config as an artifact has landed. "

Appendix A. Getting Support

This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.

A.1. Statement of Support

The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the DevOps Examples is not available from ForgeRock.

ForgeRock does not support production deployments that use the evaluation-only Docker images described in Section 3.2, "Using the Evaluation-Only Docker Images" in the DevOps Guide. When deploying the ForgeRock Identity Platform by using Docker images, you must build and use your own images for production deployments. For information about how to build Docker images for the ForgeRock Identity Platform, see the DevOps Guide.

ForgeRock provides commercial support for the ForgeRock Identity Platform only. For supported components, containers, and Java versions, see the following:

ForgeRock does not provide support for software that is not part of the ForgeRock Identity Platform, such as Docker, Kubernetes, Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.

A.2. Accessing Documentation Online

ForgeRock publishes comprehensive documentation online:

  • The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.

    While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.

  • ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.

A.3. How to Report Problems or Provide Feedback

If you have questions regarding the DevOps Examples that are not answered by the documentation, you can ask questions on the DevOps forum at

When requesting help with a problem, include the following information:

  • Description of the problem, including when the problem occurs and its impact on your operation

  • Description of the environment, including the following information:

    • Environment type (Minikube or Google Kubernetes Engine (GKE))

    • Software versions of supporting components:

      • Oracle VirtualBox (Minikube environments only)

      • Docker client (all environments)

      • Minikube (all environments)

      • kubectl command (all environments)

      • Kubernetes Helm (all environments)

      • Google Cloud SDK (GKE environments only)

    • DevOps Examples release version

    • Any patches or other software that might be affecting the problem

  • Steps to reproduce the problem

  • Any relevant access and error logs, stack traces, or core dumps

A.4. Getting Support and Contacting ForgeRock

ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see

ForgeRock has staff members around the globe who support our international customers and partners. For details, visit, or send an email to ForgeRock at



cloud-controller-manager

The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. Introduced as an alpha feature in Kubernetes release 1.6, cloud-controller-manager runs cloud-provider-specific controller loops only.

Source: cloud-controller-manager section in the Kubernetes Concepts documentation.


container cluster

A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.

cluster master

A cluster master schedules, runs, scales and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.

Source: Container Cluster Architecture in the Kubernetes Concepts documentation.


configuration map

A configuration map, called ConfigMap in Kubernetes manifests, binds configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. Configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.

Source: ConfigMap in the Kubernetes Concepts documentation.
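As an illustration only, a minimal ConfigMap manifest might look like the following; the object name and keys are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-config        # hypothetical name
data:
  FQDN: openam.example.com     # non-sensitive configuration only
  AM_PORT: "8080"              # data values are always strings
```

Containers can consume these keys as environment variables or as files mounted on a volume.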


container

A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be “contained” together and made available to specific processes without interference from the rest of the system.

Source: Container Cluster Architecture in the Google Cloud Platform documentation.


daemon set

A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, a daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.

Source: DaemonSet in the Google Cloud Platform documentation.


Kubernetes deployment

A Kubernetes deployment represents a set of multiple, identical pods. A Kubernetes deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.

Source: Deployment in Google Cloud Platform documentation.
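A deployment manifest might be sketched as follows; the names and image are placeholders, not artifacts from the DevOps Examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ig                         # hypothetical IG deployment
spec:
  replicas: 2                      # two identical pods; failed pods are replaced
  selector:
    matchLabels:
      app: ig
  template:
    metadata:
      labels:
        app: ig
    spec:
      containers:
      - name: ig
        image: example/ig:latest   # placeholder image
        ports:
        - containerPort: 8080
```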

deployment controller

A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.

Source: Deployments in the Google Cloud Platform documentation.

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

Source: About Docker Cloud in the Docker Cloud documentation.

Docker container

A Docker container is a runtime instance of a Docker image. A Docker container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

Source: Containers section in the Docker architecture documentation.

Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.

Source: Docker daemon section in the Docker Overview documentation.

Docker Engine

The Docker Engine is a client-server application with these components:

  • A server, which is a type of long-running program called a daemon process (the dockerd command)

  • A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do

  • A command-line interface (CLI) client (the docker command)

Source: Docker Engine section in the Docker Overview documentation.


Dockerfile

A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.

Source: Dockerfile section in the Docker Overview documentation.

Docker Hub

Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.


Source: Overview of Docker Hub section in the Docker Overview documentation.

Docker image

A Docker image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.

A Docker image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.

An image is an application you would like to run. A container is a running instance of an image.

Source: Docker objects section in the Docker Overview documentation; Hello Whales: Images vs. Containers in Docker.

Docker namespace

Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.

The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.

Source: Namespaces section in the Docker Overview documentation.

Docker registry

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.

Source: Docker registries section in the Docker Overview documentation.

Docker repository

A Docker repository is a collection of related Docker images, usually different versions of the same application or service distinguished by tags. Repositories on Docker Hub contain Docker images from vendors and contributors that you can use as the foundation to build your applications and services.

Source: Repositories on Docker Hub section in the Docker Overview documentation.

Docker service

In a distributed application, different pieces of the application are called “services.” Docker services are really just “containers in production.” A Docker service runs only one image, but it codifies the way that image runs, including which ports to use, the number of replicas the container should run, and so on. By default, services are load-balanced across all worker nodes.

Source: About services in the Docker Get Started documentation.

dynamic volume provisioning

Dynamic volume provisioning is the process of creating storage volumes on demand. Instead of requiring cluster administrators to pre-provision storage, dynamic volume provisioning automatically provisions storage when users request it.

Source: Dynamic Volume Provisioning in the Kubernetes Concepts documentation.

firewall rule

A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.

Source: Firewall Rules Overview in the Google Cloud Platform documentation.

garbage collection

Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.

Source: Garbage Collection in the Kubernetes Concepts documentation.

Google Kubernetes Engine (GKE)

The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.

Source: Kubernetes Engine Overview in the Google Cloud Platform documentation.


ingress

An ingress is a collection of rules that allow inbound connections to reach the cluster services.

Source: Ingress in the Kubernetes Concepts documentation.
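As an illustrative sketch, an ingress manifest routes requests for a host and path to a backing service. The object name, host, and service name are hypothetical, and the API group varies by Kubernetes version:

```yaml
apiVersion: extensions/v1beta1   # API group varies by Kubernetes version
kind: Ingress
metadata:
  name: openam-ingress           # hypothetical name
spec:
  rules:
  - host: openam.example.com
    http:
      paths:
      - path: /openam
        backend:
          serviceName: openam    # assumed existing Kubernetes service
          servicePort: 80
```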

instance group

An instance group is a collection of virtual machine instances. Instance groups enable you to monitor and control a group of virtual machines together.

Source: Instance Groups in the Google Cloud Platform documentation.

instance template

An instance template is a global API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are helpful for replicating environments.

Source: Instance Templates in the Google Cloud Platform documentation.


kubectl

The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.

Source: Kubernetes Object Management in the Kubernetes Concepts documentation.


Kubernetes controller manager

The Kubernetes controller manager is a process that embeds the core controllers shipped with Kubernetes. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

Source: kube-controller-manager in Kubernetes Reference documentation.


kubelet

A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.

Source: kubelets in the Kubernetes Concepts documentation.


kube-scheduler

The kube-scheduler component runs on the master node. It watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.

Source: Kubernetes components in the Kubernetes Concepts documentation.


Kubernetes

Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.

Source: Kubernetes Concepts

Kubernetes DNS

A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.

Source: DNS for services and pods in the Kubernetes Concepts documentation.

Kubernetes namespace

A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:

  • default: The default namespace for user-created objects that don't have another namespace

  • kube-system: The namespace for objects created by the Kubernetes system

  • kube-public: The automatically created namespace that is readable by all users

Kubernetes supports multiple virtual clusters backed by the same physical cluster.

Source: Namespaces in the Kubernetes Concepts documentation.
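A namespace manifest is minimal; the name below is a hypothetical example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: forgerock    # hypothetical namespace to isolate platform objects
```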

network policy

A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.

Source: Network policies in the Kubernetes Concepts documentation.
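For illustration, a network policy restricting which pods may connect to a directory server might be sketched as follows; all labels, names, and the port are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openam       # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: opendj               # policy applies to pods with this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: openam           # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 1389                # illustrative LDAP port
```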

Kubernetes node

A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.

Source: Nodes in the Kubernetes Concepts documentation.

Kubernetes node controller

A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes such as: lifecycle operations on the nodes, operational status of the nodes, and maintaining an internal list of nodes.

Source: Node Controller in the Kubernetes Concepts documentation.

persistent volume

A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.

Source: Persistent Volumes in the Kubernetes Concepts documentation.

persistent volume claim

A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies size, and access modes such as:

  • Mounted once for read and write access

  • Mounted many times for read-only access

Source: Persistent Volumes in the Kubernetes Concepts documentation.
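A PVC manifest specifies the size and access mode of the requested storage. This sketch uses hypothetical names and sizes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-opendj-0           # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce               # mounted once for read-write access
  resources:
    requests:
      storage: 10Gi
```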

Kubernetes pod

A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.

Source: Understanding Pods in the Kubernetes Concepts documentation.

replication controller

A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.

Source: ReplicationController in the Kubernetes Concepts documentation.


secret

A secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys, in your clusters.

Source: Secrets in the Kubernetes Concepts documentation.
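A secret manifest differs from a configuration map in that data values are base64-encoded and intended for sensitive information. The name and key here are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openam-secrets          # hypothetical name
type: Opaque
data:
  amadmin.pwd: cGFzc3dvcmQ=    # base64-encoded value ("password")
```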

Kubernetes service

A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.

Source: Services in the Kubernetes Concepts documentation.
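The logical set of pods is typically defined by a label selector, as in this hypothetical sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openam                  # hypothetical service name
spec:
  selector:
    app: openam                 # the logical set of pods, selected by label
  ports:
  - port: 80                    # port exposed by the service
    targetPort: 8080            # port the pods listen on
```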

stateful set

A stateful set is the workload API object used to manage stateful applications. It represents a set of pods with unique, persistent identities and stable hostnames that Kubernetes Engine maintains regardless of where they are scheduled. The state information and other resilient data for any given stateful set pod is maintained in a persistent storage object associated with the stateful set.

Source: StatefulSets in the Kubernetes Concepts documentation.
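A stateful set sketch for a directory server illustrates the stable pod identities and per-pod storage; all names, the image, and the storage size are hypothetical:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opendj                        # hypothetical name
spec:
  serviceName: opendj                 # headless service providing stable hostnames
  replicas: 2                         # pods get stable names opendj-0, opendj-1
  selector:
    matchLabels:
      app: opendj
  template:
    metadata:
      labels:
        app: opendj
    spec:
      containers:
      - name: opendj
        image: example/opendj:latest  # placeholder image
        volumeMounts:
        - name: data
          mountPath: /opt/opendj/data
  volumeClaimTemplates:               # per-pod persistent storage
  - metadata:
      name: data
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 10Gi
```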

Kubernetes volume

A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.

Source: Volumes in the Kubernetes Concepts documentation.


workload

A workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.

Source: Understanding Pods in the Kubernetes Concepts documentation.
