Guide to ForgeRock Identity Platform™ deployment in the cloud using DevOps techniques.
The DevOps Guide covers installation, configuration, and deployment of the ForgeRock Identity Platform using DevOps techniques.
This guide provides a general introduction to DevOps deployment of ForgeRock® software and an overview of DevOps deployment strategies. It also includes several deployment examples that illustrate best practices to help you get started with your own DevOps deployments.
The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the ForgeRock DevOps Examples is not available from ForgeRock.
For information about obtaining support for ForgeRock products, see Appendix A, "Getting Support".
Do not deploy ForgeRock software in a containerized environment in production until you have successfully deployed and tested the software in a non-production environment.
Deploying ForgeRock software in a containerized environment requires advanced proficiency in the technologies you use in your deployment. The technologies include, but are not limited to, Docker, Kubernetes, load balancers, Google Cloud Platform, Amazon Web Services, and Microsoft Azure.
If your organization lacks experience with complex DevOps deployments, then either engage with a certified ForgeRock consulting partner or deploy the platform on traditional architecture.
About ForgeRock Identity Platform Software
ForgeRock Identity Platform™ is the only offering for access management, identity management, user-managed access, directory services, and an identity gateway, designed and built as a single, unified platform.
The platform includes the following components that extend what is available in open source projects to provide fully featured, enterprise-ready software:
ForgeRock Access Management (AM)
ForgeRock Identity Management (IDM)
ForgeRock Directory Services (DS)
ForgeRock Identity Gateway (IG)
ForgeRock Identity Message Broker (IMB)
Third-Party Software and DevOps Deployments
The ForgeRock Identity Platform DevOps Examples require you to install software products that are not part of the ForgeRock Identity Platform. We strongly recommend that you become familiar with basic concepts for the following software before attempting to use it, even in your initial experiments with DevOps deployments:
| Software | Recommended Level of Familiarity | Links to Introductory Material |
|----------|----------------------------------|--------------------------------|
| Oracle VirtualBox | Install, start, and stop VirtualBox software; understand virtual machine settings; create snapshots | First Steps chapter in the VirtualBox documentation |
| Docker Client | Build, list, and remove images; understand the Docker client-server architecture; understand Docker registry concepts | Get Started With Docker tutorial |
| Kubernetes | Identify Kubernetes entities such as pods and clusters; understand the Kubernetes client-server architecture | |
| Minikube | Understand what Minikube is; create and start a Minikube virtual machine; run docker and kubectl commands that access the Docker Engine and Kubernetes cluster running in the Minikube virtual machine | |
| kubectl (Kubernetes client) | Run kubectl commands on a Kubernetes cluster | kubectl command overview |
| Kubernetes Helm | Understand what a Helm chart is; understand the Helm client-server architecture; run the helm command to install, list, and delete Helm charts in a Kubernetes cluster | |
| Google Kubernetes Engine (GKE) | Create a Google Cloud Platform account and project, and make GKE available in the project | Quickstart for Kubernetes Engine |
| Google Cloud SDK | Run the gcloud command to access GKE components in a Google Cloud Platform project | Google Cloud SDK documentation |
Chapter 1. Gathering Deployment Information
A deployment worksheet helps you gather required information and communicate it to the deployment team. Completing the worksheet also serves as a readiness check of the skills your deployment requires.
The worksheet might include the following sections:
Identify and document resources to be used in the deployment such as accounts and access credentials, clusters, namespaces, secrets, service accounts, URLs, domain names, hostnames and port numbers, license numbers or keys.
Identify sequence dependencies for creating and using each of these resources.
Create a timeline for obtaining and using resources in the deployment.
Define a strategy for managing Docker images.
Create a high availability plan.
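The planning steps above can be captured in a simple, version-controlled worksheet. The following YAML sketch is illustrative only; every name and value in it is a hypothetical placeholder, not a ForgeRock artifact:

```yaml
# Hypothetical deployment worksheet (all values are placeholders)
deployment:
  environment: dev              # dev | qa | prod
  cloud:
    provider: gke
    project: example-project
    cluster: example-cluster
    namespace: forgerock
  dns:
    domain: example.com
    hostnames:
      - openam.example.com
      - openidm.example.com
  secrets:
    tls-certificate: managed-by-ops   # reference only; never record secret values here
  docker:
    registry: gcr.io/example-project
    image-strategy: build-own         # evaluation-only images are unsupported in production
  availability:
    zones: [us-east1-b, us-east1-c]
    replicas: 2
```

Keeping the worksheet in the same repository as your cluster configuration scripts makes sequence dependencies and timelines easier to track.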
Chapter 2. GKE Deployment Runbook
" I think we need to include a "cookbook" or "runbook" or "example deployment" chapter in which we take either the "small" or "medium" deployment and walk though the whole thing from beginning to end using GKE. And whatever artifacts that we have used for this cookbook should go into our new "public" GitHub repository. The artifacts will be cluster creation and configuration scripts, custom values.yaml and any other utility to get the job done. And we have most of this already from our benchmarking and Pavel has done an excellent job with some scripts which we can reuse. I am willing to write this chapter btw provided that the doc team is willing to own it and clean it up. "
Chapter 3. Building CI/CD Infrastructure
This is ambitious and we ought to think about doing something very general:
" I think this is overly ambitious given our current state. CI/CD tools are very opinionated - even more than Kubernetes itself. Some possible choices are Jenkins, Spinnaker, Brigade, Codefresh, Pivotal Concourse, Google Cloudbuilder, and AWS/Azure (I am sure they have something). Whatever we pick is probably going to be used by at most 5% of our customer/partner base. I think we should use CI/CD to serve our own needs for deployment testing, and we might want to lightly document it (hey this is what we did, you might find it interesting...) "
" I mostly agree with Warren :) We could do a general model and talk about the integration points that we provide. "
" Mostly agree with Gary mostly agreeing with Warren. I would ask that Eng team demonstrates the tasks to Doc team so we can capture the details in some form for internal purposes. Then the Doc team can make the customer-facing doc as generic as possible. Does this work for you both? "
Chapter 4. Monitoring Your Deployment
Placeholder for the monitoring chapter, including running benchmarks to verify performance criteria are met.
Chapter 5. Making the ForgeRock Identity Platform Highly Available
Placeholder for the HA chapter.
Chapter 6. Backing up and Restoring Directory Data
Placeholder for the backup/restore chapter.
Chapter 7. Managing Passwords, Keys, and Keystores
Placeholder for the security management chapter.
Chapter 8. Upgrading the ForgeRock Identity Platform
Placeholder for the upgrade chapter.
" We have some major product issues to resolve first. I'd like to see us delay discussion of upgrades until AM config as an artifact has landed. "
Appendix A. Getting Support
This appendix contains information about support options for the ForgeRock DevOps Examples and the ForgeRock Identity Platform.
A.1. Statement of Support
The ForgeRock DevOps Examples and the accompanying Git repository demonstrate deployment in a containerized environment using DevOps techniques. You are responsible for adapting the examples to suit your production requirements. These resources are provided for demonstration purposes only. Commercial support for the DevOps Examples is not available from ForgeRock.
ForgeRock does not support production deployments that use the evaluation-only Docker images described in Section 3.2, "Using the Evaluation-Only Docker Images" in the DevOps Guide. When deploying the ForgeRock Identity Platform by using Docker images, you must build and use your own images for production deployments. For information about how to build Docker images for the ForgeRock Identity Platform, see Section 3.2, "Using the Evaluation-Only Docker Images" in the DevOps Guide.
ForgeRock provides commercial support for the ForgeRock Identity Platform only. For supported components, containers, and Java versions, see the following:
Before You Install in the ForgeRock Access Management Release Notes.
Before You Install in the ForgeRock Identity Management Release Notes.
Before You Install in the ForgeRock Directory Services Release Notes.
Before You Install in the ForgeRock Identity Message Broker Release Notes.
Before You Install in the ForgeRock Identity Gateway Release Notes.
ForgeRock does not provide support for software that is not part of the ForgeRock Identity Platform, such as Docker, Kubernetes, Java, Apache Tomcat, NGINX, Apache HTTP Server, and so forth.
A.2. Accessing Documentation Online
ForgeRock publishes comprehensive documentation online:
The ForgeRock Knowledge Base offers a large and increasing number of up-to-date, practical articles that help you deploy and manage ForgeRock software.
While many articles are visible to community members, ForgeRock customers have access to much more, including advanced information for customers using ForgeRock software in a mission-critical capacity.
ForgeRock product documentation, such as this document, aims to be technically accurate and complete with respect to the software documented. It is visible to everyone and covers all product features and examples of how to use them.
A.3. How to Report Problems or Provide Feedback
If you have questions regarding the DevOps Examples that are not answered by the documentation, you can ask questions on the DevOps forum at https://forum.forgerock.com/forum/devops.
When requesting help with a problem, include the following information:
Description of the problem, including when the problem occurs and its impact on your operation
Description of the environment, including the following information:
Environment type (Minikube or Google Kubernetes Engine (GKE))
Software versions of supporting components:
Oracle VirtualBox (Minikube environments only)
Docker client (all environments)
Minikube (Minikube environments only)
kubectl command (all environments)
Kubernetes Helm (all environments)
Google Cloud SDK (GKE environments only)
DevOps Examples release version
Any patches or other software that might be affecting the problem
Steps to reproduce the problem
Any relevant access and error logs, stack traces, or core dumps
A.4. Getting Support and Contacting ForgeRock
ForgeRock provides support services, professional services, training through ForgeRock University, and partner services to assist you in setting up and maintaining your deployments. For a general overview of these services, see https://www.forgerock.com.
- cloud-controller-manager
The cloud-controller-manager daemon runs controllers that interact with the underlying cloud providers. cloud-controller-manager is an alpha feature introduced in Kubernetes release 1.6. The cloud-controller-manager daemon runs cloud-provider-specific controller loops only.
- cluster
A container cluster is the foundation of Kubernetes Engine. A cluster consists of at least one cluster master and multiple worker machines called nodes. The Kubernetes objects that represent your containerized applications all run on top of a cluster.
- cluster master
A cluster master schedules, runs, scales, and upgrades the workloads on all nodes of the cluster. The cluster master also manages network and storage resources for workloads.
- configuration map
A configuration map, called ConfigMap in Kubernetes manifests, binds configuration files, command-line arguments, environment variables, port numbers, and other configuration artifacts to the assigned containers and system components at runtime. Configuration maps are useful for storing and sharing non-sensitive, unencrypted configuration information.
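As a minimal sketch, a ConfigMap manifest might look like the following; the names and values are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config          # hypothetical name
data:
  LOG_LEVEL: "INFO"             # environment-variable style entry
  app.properties: |             # file-style entry that can be mounted into a container
    port=8080
    cache.enabled=true
```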
- container
A container is an allocation of resources such as CPU, network I/O, bandwidth, block I/O, and memory that can be “contained” together and made available to specific processes without interference from the rest of the system.
- daemon set
A set of daemons, called DaemonSet in Kubernetes manifests, manages a group of replicated pods. Usually, a daemon set follows a one-pod-per-node model. As you add nodes to a node pool, the daemon set automatically distributes the pod workload to the new nodes as needed.
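A minimal DaemonSet manifest, with hypothetical names, might look like this sketch (the apps/v1 API group applies to current Kubernetes versions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent           # hypothetical name
spec:
  selector:
    matchLabels:
      app: example-agent
  template:                     # pod template: one pod of this shape runs per node
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
      - name: agent
        image: example/agent:1.0   # placeholder image
```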
- deployment
A Kubernetes deployment represents a set of multiple, identical pods. A Kubernetes deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
- deployment controller
A deployment controller provides declarative updates for pods and replica sets. You describe a desired state in a deployment object, and the deployment controller changes the actual state to the desired state at a controlled rate. You can define deployments to create new replica sets, or to remove existing deployments and adopt all their resources with new deployments.
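For example, the desired state might be declared in a deployment object like the following sketch; all names here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web             # hypothetical name
spec:
  replicas: 3                   # desired state: three identical pods
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: example/web:1.0  # placeholder image
```

The deployment controller continuously reconciles the cluster toward this declared state, recreating pods that fail and rolling out changes at a controlled rate.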
- Docker Cloud
Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.
- Docker container
A Docker container is a runtime instance of a Docker image. A Docker container is isolated from other containers and its host machine. You can control how isolated your container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
- Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A Docker daemon can also communicate with other Docker daemons to manage Docker services.
- Docker Engine
The Docker Engine is a client-server application with these components:
A server, which is a type of long-running program called a daemon process (the dockerd command)
A REST API, which specifies interfaces that programs can use to talk to the daemon and tell it what to do
A command-line interface (CLI) client (the docker command)
- Dockerfile
A Dockerfile is a text file that contains the instructions for building a Docker image. Docker uses the Dockerfile to automate the process of building a Docker image.
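As an illustrative sketch, a Dockerfile for a web application might look like the following; the base image, file names, and settings are hypothetical placeholders, not ForgeRock build artifacts:

```dockerfile
# Hypothetical Dockerfile; each instruction adds a layer to the image
# Start from a base image
FROM tomcat:8.5
# Add the application code (placeholder file name)
COPY example.war /usr/local/tomcat/webapps/
# Set a runtime environment variable
ENV JAVA_OPTS="-Xmx1g"
# Document the port the application listens on
EXPOSE 8080
# Default command run when a container starts from this image
CMD ["catalina.sh", "run"]
```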
- Docker Hub
Docker Hub provides a place for you and your team to build and ship Docker images. You can create public repositories that can be accessed by any other Docker Hub user, or you can create private repositories you can control access to.
- Docker image
A Docker image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
A Docker image includes the application code, a runtime engine, libraries, environment variables, and configuration files that are required to run the application.
An image is an application you would like to run. A container is a running instance of an image.
- Docker namespace
Docker namespaces provide a layer of isolation. When you run a container, Docker creates a set of namespaces for that container. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
The PID namespace is the mechanism for remapping process IDs inside the container. Other namespaces such as net, mnt, ipc, and uts provide the isolated environments we know as containers. The user namespace is the mechanism for remapping user IDs inside a container.
- Docker registry
A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry.
- Docker repository
A Docker repository is a public, certified repository from vendors and contributors to Docker. It contains Docker images that you can use as the foundation to build your applications and services.
- Docker service
In a distributed application, different pieces of the application are called “services.” Docker services are really just “containers in production.” A Docker service runs only one image, but it codifies the way that image runs, including which ports to use, the number of replicas the container should run, and so on. By default, the services are load-balanced across all worker nodes.
- dynamic volume provisioning
Dynamic volume provisioning is the process of creating storage volumes on demand: storage is automatically provisioned when users request it, rather than being pre-created by an administrator.
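Dynamic provisioning is driven by StorageClass objects; a persistent volume claim that names a class triggers on-demand creation of a matching volume. A minimal sketch for GKE, with a hypothetical class name, might look like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                        # hypothetical class name
provisioner: kubernetes.io/gce-pd   # Google Compute Engine persistent disk provisioner
parameters:
  type: pd-ssd                      # volumes of this class are provisioned as SSDs
```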
- firewall rule
A firewall rule lets you allow or deny traffic to and from your virtual machine instances based on a configuration you specify. Each Kubernetes network has a set of firewall rules controlling access to and from instances in its subnets. Each firewall rule is defined to apply to either incoming (ingress) or outgoing (egress) traffic, not both.
- garbage collection
Garbage collection is the process of deleting unused objects. Kubelets perform garbage collection for containers every minute and garbage collection for images every five minutes. You can adjust the high and low threshold flags and garbage collection policy to tune image garbage collection.
- Google Kubernetes Engine (GKE)
The Google Kubernetes Engine (GKE) is an environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machine instances grouped together to form a container cluster.
- ingress
An ingress is a collection of rules that allow inbound connections to reach the cluster services.
- instance group
An instance group is a collection of virtual machine instances. Instance groups enable you to monitor and control a group of virtual machines together.
- instance template
An instance template is a global API resource that you can use to create VM instances and managed instance groups. Instance templates define the machine type, image, zone, labels, and other instance properties. They are helpful for replicating environments.
- kubectl
The kubectl command-line tool supports several different ways to create and manage Kubernetes objects.
- Kubernetes controller manager
The Kubernetes controller manager is a process that embeds core controllers that are shipped with Kubernetes. Logically each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
- kubelet
A kubelet is an agent that runs on each node in the cluster. It ensures that containers are running in a pod.
- kube-scheduler
The kube-scheduler component runs on the master node, watches for newly created pods that do not have a node assigned to them, and selects a node for them to run on.
- Kubernetes
Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers.
Source: Kubernetes Concepts
- Kubernetes DNS
A Kubernetes DNS pod is a pod used by the kubelets and the individual containers to resolve DNS names in the cluster.
- Kubernetes namespace
A Kubernetes namespace is a virtual cluster that provides a way to divide cluster resources between multiple users. Kubernetes starts with three initial namespaces:
default: The default namespace for user-created objects that have no other namespace
kube-system: The namespace for objects created by the Kubernetes system
kube-public: The automatically created namespace that is readable by all users
Kubernetes supports multiple virtual clusters backed by the same physical cluster.
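Beyond the three initial namespaces, you can create namespaces of your own to partition the cluster. A minimal manifest, with a hypothetical name, looks like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-team            # hypothetical namespace name
```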
- network policy
A Kubernetes network policy specifies how groups of pods are allowed to communicate with each other and with other network endpoints.
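As an illustrative sketch, the following network policy allows only pods labeled as a hypothetical frontend to connect to pods labeled as a hypothetical backend; all names and ports are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend              # the policy applies to pods with this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080                # placeholder port
```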
- Kubernetes node
A Kubernetes node is a virtual or physical machine in the cluster. Each node is managed by the master components and includes the services needed to run the pods.
- Kubernetes node controller
A Kubernetes node controller is a Kubernetes master component that manages various aspects of the nodes such as: lifecycle operations on the nodes, operational status of the nodes, and maintaining an internal list of nodes.
- persistent volume
A persistent volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins that have a lifecycle independent of any individual pod that uses the PV.
- persistent volume claim
A persistent volume claim (PVC) is a request for storage by a user. A PVC specifies a size and access modes, such as:
Mounted once for read and write access
Mounted many times for read-only access
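A minimal PVC manifest requesting a once-mounted, read-write volume might look like the following sketch; the name and size are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data            # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce               # mounted once for read and write access
  resources:
    requests:
      storage: 10Gi             # requested size (placeholder)
```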
- Kubernetes pod
A Kubernetes pod is the smallest, most basic deployable object in Kubernetes. A pod represents a single instance of a running process in a cluster. Containers within a pod share an IP address and port space.
- replication controller
A replication controller ensures that a specified number of Kubernetes pod replicas are running at any one time. The replication controller ensures that a pod or a homogeneous set of pods is always up and available.
- Kubernetes secret
A secret is a secure object that stores sensitive data, such as passwords, OAuth 2.0 tokens, and SSH keys in your clusters.
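A minimal secret manifest might look like the following sketch. The name is hypothetical, and the value shown is the base64 encoding of the literal string "placeholder"; base64 is an encoding, not encryption, so protect secret manifests accordingly:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials     # hypothetical name
type: Opaque
data:
  password: cGxhY2Vob2xkZXI=    # base64-encoded value of "placeholder"
```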
- Kubernetes service
A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them. This is sometimes called a microservice.
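For example, a service that selects pods by label and forwards traffic to them might be sketched as follows; all names and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-web             # hypothetical name
spec:
  selector:
    app: example-web            # pods with this label back the service
  ports:
  - port: 80                    # port exposed by the service
    targetPort: 8080            # port the selected pods listen on
```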
- stateful set
A stateful set is the workload API object used to manage stateful applications. It represents a set of pods with unique, persistent identities and stable hostnames that Kubernetes Engine maintains regardless of where they are scheduled. The state information and other resilient data for any given stateful set pod is maintained in a persistent storage object associated with the stateful set.
- Kubernetes volume
A Kubernetes volume is a storage volume that has the same lifetime as the pod that encloses it. Consequently, a volume outlives any containers that run within the pod, and data is preserved across container restarts. When a pod ceases to exist, the Kubernetes volume also ceases to exist.
- workload
A workload is the collection of applications and batch jobs packaged into a container. Before you deploy a workload on a cluster, you must first package the workload into a container.